
In spite of significant gains in server energy efficiency, power consumption in data centers is still trending up.  At the very least, we can make sure that the energy expended yields maximum benefit to the business.  A first step in managing power in the servers in a data center is having a fairly accurate monitoring capability for power consumption.  The second step is to have a number of levers that allow using the monitoring data to carry out an effective power management policy.


While we may not be able to stem the overall growth of power consumption in the data center, there are a number of measures we can take immediately:

  • Implement a peak shaving capability.  The data center power infrastructure needs to be sized to meet peak power demand.  Reducing peaks effectively increases the utilization of the existing power infrastructure.


  • Be smart about shifting power consumption peaks.  Not all watts are created equal.  The incremental cost of generating an extra watt of power during peak consumption hours is much higher than that of the same watt generated in the wee hours of the morning.  For most consumer and smaller commercial accounts, flat-rate pricing still prevails.  Real-time pricing (RTP) and negotiated SLAs will become more common as ways to put the appropriate economic incentives in place.  The incentive of real-time pricing is a lower energy bill overall, although the outcome is not guaranteed: in pilot programs, residential consumers have complained that RTP resulted in higher electricity costs.  With negotiated SLAs the customer can designate a workload as subject to lower reliability; for instance, instead of three nines, or outages amounting to about 10 hours per year, a low-reliability workload can be designated as only 90 percent reliable, and can be out an average of more than two hours per day.


  • Match the electric power infrastructure in the data center to server workloads to minimize over-provisioning.  This approach assumes the existence of an accurate power consumption monitoring capability.


  • Upgrading the electrical power infrastructure to accommodate additional servers is not an option in most data centers today.  Landing additional servers at a facility that's already working at the limit of its thermal capacity leads to the formation of hot spots, assuming electrical capacity limits are not reached first, with no headroom left in certain branch circuits.  Hence measures that work under the existing power infrastructure are preferable to alternatives that require additional infrastructure.



For the purposes of data center strategic planning, it may make economic sense to grow large data centers in a modular fashion.  If the organization manages a number of data centers, consider making effective use of the existing data centers, and when new construction is justified, redistribute workloads to the new data center to maximize the use of the new electrical supply infrastructure.


Intel has built into its server processor lineup a number of technology ingredients that allow data center operators to optimize the utilization of the available power system infrastructure in the data center.



Newer servers of the Nehalem generation are much more energy efficient, if only as a side effect of increased performance per watt.  These servers also have a more aggressive implementation of power proportional computing: typical idle consumption figures are on the order of 50 percent of peak power consumption.



Beyond passive mechanisms that do not require explicit operator intervention, Intel® Intelligent Power Node Manager (Node Manager) technology allows adjusting the power draw of a server, trading off power consumption against performance.  This capability is also known as power capping.  The control range is a function of server loading.  For the Intel SR5520UR baseboard in the 2U chassis, the server draws about 300 watts at full load, and its power consumption can be rolled down to about 200 watts.  The control range tapers off gradually until it reaches zero at idle.



For power monitoring, selected models of the current Nehalem generation come with PMBus-compliant power supplies that allow real-time power consumption readouts.



The Node Manager power monitoring and capping capabilities apply to a single server.  To make these capabilities really useful, it is necessary to exercise them collectively across groups of servers, to add the notion of events, and to build a historical record of power consumption for the servers in a group.  These additional capabilities have been implemented in software through the Data Center Manager (DCM) Software Development Kit developed by the Intel Solutions and Software Group.  An additional Software Development Kit, Cache River, allows programmatic access to components in servers and server building blocks produced by the Intel Enterprise Products Server Division (EPSD), including the baseboard management controller (BMC) and the management engine (ME), the subsystems that host or interact with the Node Manager firmware.  EPSD products are incorporated in many OEM and system integrator offerings.


Data Center Manager implements abstractions that apply to collections of servers:

  •   A hierarchical notion of logical server groups
  •   Power management policies bound to specific server groups
  •   Event management and a publish/subscribe facility for acting upon and managing power and thermal events
  •   A database for logging a historical record of power consumption across the collection of managed nodes



The abstractions implemented by DCM on top of Node Manager allow the implementation of power management use cases that involve up to thousands of servers.


If this topic is of interest to you, please join us at the Intel Developer Forum in San Francisco at the Moscone Center on September 22-24.  I will be facilitating course PDCS003, "Cloud Power Management with the Intel® Xeon® 5500 Series Platform."  You will have the opportunity to talk with some of our fellow travelers in the process of developing power management solutions using Intel technology ingredients and get a feel for their early experience.  Also, please make a note to visit booths #515, #710 and #712 to see demonstrations of the early end-to-end solutions these folks have put together.

I am consistently amazed by the stories I hear from customers and in industry publications about the power issues that data centers are facing these days.  Given the increased compute demand, decreasing budgets and power & cooling resource constraints, data centers simply cannot continue to operate as they have in the past.  These challenges are especially true for Cloud deployments, where the sheer scale of the installations magnifies any resource utilization inefficiencies – especially power – and reduces the TCO benefits promised.   Data Center Managers need new levels of understanding and control of their power resources in order to allocate capacity to seamlessly meet the needs of their customers, and instrumentation is evolving to provide those new capabilities that are required.



At its core, instrumentation is all about sources of data and points of control, which can be at the individual component level, the coordinated server level, the aggregated group level, or even integrated into the facility and building management system level.  At IDF in San Francisco, you will see a wealth of demos and sessions highlighting how OEMs and ISVs can use the many available instrumentation points - starting with Intel Xeon Processor 5500 features - to develop and deliver innovative management and power management capabilities that can be used to run a Cloud environment in a more efficient manner.  If you are at IDF, stop by one of the following sessions to learn more about instrumentation.



  • ECTS0004 - Improving Data Center Efficiency With Intel® Xeon® Processor Based Instrumentation
  • PDCS002 - Cloud Power Management with Intel® Microarchitecture (Nehalem) Processor-based Platforms
  • Meet The Experts – informal session in the Server Zone during the Tuesday evening Technology Showcase hours
  • Server Zone in the Technology Showcase to see the power monitoring and capping demos, including Intel Intelligent Power Node Manager.



I will be staffing the Meet The Experts event – stop by with your questions and thoughts on instrumentation!  See you at IDF, September 22-24.




Do you read the comic strip “Dilbert”?


If so, you know what a work environment based on cubicles looks like. Many of us involved with the Server System Infrastructure (SSI) Forum just finished our first “compliance and interoperability” (C&I) workshop and, interestingly, cubicles played a key role.



Cubicles are a useful compromise between noise, openness, ease of access and other factors. However, one thing a cubicle is not, is private. Why is that relevant to a C&I event? Let me explain.



“Compliance” refers to the conformance of a physical device, say a computer or plug-in card, to a written specification. “Interoperability” refers to the ability of the physical device to connect with other devices and perform according to predetermined tests.



A C&I workshop has elements of testing devices against specifications and of testing devices connected together.  Depending on the devices under test, testing can be an extremely complex process, often involving entirely new-to-the-world components.  In fact, multiple entirely new components can be connected together, based on untested specs and using the latest generation of test equipment.



Participating companies’ most talented engineers work to get their components proven compliant and interoperable. That’s where secrecy comes in: engineers have to be able to work without being concerned about prying eyes.



Privacy is also essential for the tests themselves. Early results may not be positive, and those results could be damaging to a company’s reputation, so they are correctly kept confidential.



How is this privacy achieved? The first C&I workshop was held at an Intel facility. At the lab there are cubicles, per the Intel norm. However, these larger-than-usual cubicles featured translucent fiberglass panels bolted to the cubicle walls, and a sliding lockable door was added to each cubicle.



During the three-day workshop, much was accomplished. Engineers from across the US, Israel and China, representing several blade components, were able to connect their devices together. There were two basic blade systems, one developed by Intel and one by a system OEM. They were developed independently and in parallel, but both were based on specifications provided by SSI.



SSI develops and promotes open specifications for blades and for chassis and power supplies for servers. It currently has almost 40 member companies around the world. SSI has produced 6 blade specs, currently in draft form, to be finalized by the time of the Intel Developer Forum (IDF), September 22-24. SSI has also made 3 switch specs from IBM BladeCenter available to SSI members.



There are two focus areas for specification in the “traditional” server area of SSI, one for electronics bays (chassis) and one for power supplies – with over 40 specs released since the inception of SSI. Current specs are always available on the SSI web site, and specs now in development for the next CPU generation will be available for prerelease access.



The C&I Workshop is an important first step on a long journey. Future workshops will be held at independent test organizations purpose-built for such activities. Workshops will expand in scope and participation as we deliver on the promise of interoperability, really the central tenet of SSI.



See you at IDF! Please come to my session, EMTS006, “SSI Interoperability Delivered: How Server System Infrastructure (SSI) Specifications Provides Interoperable Components”, September 24, at 2:40. I suggest you attend my colleague, Steve Krig’s, lab ECTL001, “Lab: SSI Server System Infrastructure – Industry Open Blades Standards Compliance and Interoperability”, September 23, at 2:05 and 4:15, for a more technical description of C&I tools and methodologies. I also suggest you visit our booth to see our interoperability demo at booth number 520.



Jim Ryan, Chairman, SSI

Intelligent Power Node Manager is a new technology that is available with the Xeon 5500 Series Platforms released earlier this year.  Many of you have asked me questions via Twitter (@Toadster) about "How can I use Node Manager?" - so I wanted to present some simple use cases to simplify the explanation of Node Manager and how you can best use the technology in your own enterprise.


First of all, let's explain the growth problem at hand.  As servers shrink in size, the power density of each server 'footprint' is growing.  A few years ago, a single 42U rack could hold about 21 servers (estimating 2U servers), usually hosting one or two apps per physical server, depending on whether you had single- or dual-socket servers.  In modern datacenters, that same 42U rack can hold 42 servers (1U each) with 2 sockets per server - an immediate density increase of 2X the number of servers and 2-4X the number of sockets, which can equate to 16X the number of processor threads per rack.  One good thing is that Intel has been developing newer technologies to keep the TDP of each CPU roughly the same between processor updates - where you used to have 2 or 4 cores, you now have 8 to 16 cores in the same thermal envelope!
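The density arithmetic above can be sketched in a few lines; the per-socket core and thread counts below are illustrative assumptions for the two generations, not exact specs:

```python
# Back-of-the-envelope rack density arithmetic.
# Core/thread counts per socket are illustrative assumptions.
old_rack = dict(servers=21, sockets=2, cores=1, threads_per_core=1)  # 2U era
new_rack = dict(servers=42, sockets=2, cores=4, threads_per_core=2)  # 1U, with HT

def rack_threads(r):
    # total hardware threads in one 42U rack
    return r["servers"] * r["sockets"] * r["cores"] * r["threads_per_core"]

print(rack_threads(new_rack) / rack_threads(old_rack))  # 16.0
```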


Knowing how much power your platform uses is a key factor in populating racks and rows in your datacenter.  Prior to Node Manager technology, most datacenter managers would base rack population on 'nameplate' power - the wattage rating on your power supply.  That's the 'max' power utilized by the platform, and what the PSU is rated for (worst case).  See the image below...


[Figure: NM Use Case - Using Actual Power Data to Increase Rack Density]

As you can see, using Intel Intelligent Power Node Manager technology, you can view your system's power utilization in real time through Intel Datacenter Manager, and the administrator can implement power caps to ensure the server rack stays within the required power limits.  By using 'actual' power limits instead of nameplate power, you can increase your rack density, thereby increasing your ROI and decreasing your TCO!  Let's face it - everyone loves saving money!
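A quick sketch of the rack-population arithmetic; the wattages here are illustrative assumptions, not measurements from any particular platform:

```python
# How many servers fit under a fixed rack power budget when provisioning
# on nameplate rating vs. measured (or capped) draw. Numbers are illustrative.
rack_budget_w = 8000.0
nameplate_w = 650.0        # assumed PSU rating printed on the label
measured_peak_w = 320.0    # assumed actual peak draw seen via Node Manager

by_nameplate = int(rack_budget_w // nameplate_w)     # 12 servers per rack
by_measured = int(rack_budget_w // measured_peak_w)  # 25 servers per rack
```

Under these assumptions, provisioning on measured draw roughly doubles the rack density for the same power budget.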


Many of us are familiar with this next scenario... it's summertime, and the power company is announcing that the power grid is under strain.  Homes start having their A/C cut off to save the power grid from brown-outs... now your enterprise can help reduce those risks as well!


[Figure: NM Use Case - On-Demand Power Reduction]


Over the next few weeks, I hope to post more blogs/videos:


1. Single Node Power Monitoring & Management
2. Group/Rack Power Monitoring & Management
3. Thermal Monitoring & Management


Please provide some feedback, and post your questions and ideas for upcoming blogs!


Intel has just launched the Intel® Ethernet Server Adapter X520 family.  These NICs are Intel’s first 10 Gigabit adapter products that support “pluggable” optics.  This additional configuration option gives IT users a great deal more flexibility in how they deploy 10 Gigabit in their servers and datacenters.


The X520 family of adapters supports bailed optics modules that can be removed or exchanged, or the adapter can be used with no optics at all.  For previous 10 Gigabit products, if you wanted 10 Gigabit SR fiber connectivity, you had to purchase a 10 Gigabit SR adapter.  But with the pluggable X520 adapter family, you can support SR, LR, or simply an SFP+ direct attach cable on the same card by simply removing or exchanging the optics.






With the X520 you can still buy an SR or LR fiber configured adapter, but you can also switch back and forth after purchase by ordering only the new optics that you want to support (not a whole new adapter).  The Direct Attach adapter supports an SFP+ cage but comes without optics inserted; you can use Twinax copper cables for in-rack 10 Gigabit runs of less than 7 m, and you can upgrade the adapter later with SR or LR optics as the needs of that particular adapter change.  You can also mix and match optics modules in a dual-port adapter, meaning you could have an LR module in one port and an SR module in the other.  You could also throw a Twinax cable into the mix.


The Intel® Ethernet optics modules for the X520 family of adapters also support both 1 Gigabit and 10 Gigabit speeds to help with backward compatibility – an industry first.


Finally, while this new pluggable capability of Intel 10 Gigabit adapters adds a bit more usage flexibility from an IT perspective, the performance capabilities and advanced datacenter features I’ve discussed over the past 18 months are also supported.  The X520 is based on the Intel® 82599 10 Gigabit Ethernet Controller, so the end result is a flexible product that can help unleash server I/O performance, whether the workload is FCoE, iSCSI, virtualization, security, or just raw I/O.  Regardless of your 10 Gigabit needs, the X520 probably has what your server environment needs.



Ben Hacker


I have written in the past about key IT considerations while implementing virtualization.


One of the key elements that changes in going from a non-virtualized environment to a virtual environment is the security model, which requires some additional considerations.


A few of my colleagues and I, who regularly meet with IT end customers deploying virtualization, have realized that there are some frequently asked questions, concerns, and misconceptions about protection in virtualized environments.  We also did a bit of research on the documents available to help IT better understand the security model in virtualized environments, but found most articles to be either outright dismissive of security concerns or taking the opposite, very theoretical and conservative view of the lack of security.


So, with the help of our architects, we developed the white paper below with the intent of helping IT managers, strategists and implementers better understand resource protection in virtualized environments.  We also address some of the frequently asked questions and typical misconceptions about security in the virtual datacenter.


The white paper takes a balanced view and provides an overview of the security model changes, challenges and considerations that organizations must address when implementing virtualization.  It introduces hardware, software, and policy measures available to help address those challenges, including their strengths and limitations, and then closes with a brief discussion of some key issues associated with security in emerging cloud computing usage models.


Let us know what you think.

I took a look at my calendar this morning and was surprised to see that Fall Intel Developer Forum 2009 is nearly upon us - just a scant 30 days until the doors of the Moscone Center open up to innovators, customers, press and the public to learn about how technology innovation is changing how everyone works, lives and plays.


IDF is going to be big for the Intel Server Group - along with our industry fellow travelers, we will put on over 30 classes and panels and more than 20 demos in the technology showcase that will explore the future direction of server products and technologies.  To build awareness and attendance, we will be trying something new: using the Server Room to give you a preview of server-related IDF content, with a goal of having 30 tech experts deliver 30 blogs between today and the start of IDF on September 22.


I hope that you find this series of blogs both informative and helpful in planning your use of time at IDF.  If things go as planned, you should see 7 or 8 blogs a week for four weeks.  Look for blogs tagged with "idf_2009" and "idf_30in30" over the course of the next month, and send along any comments you might have if you get the chance.


Looking forward to seeing you in SFO!



  Part 4 of 6: "Virtualization Technology and Ecosystem Support"




Earlier this summer we announced to the world Intel's soon-to-be-released Intel Xeon server product code-named "Nehalem-EX". This is a breathtaking architecture that our engineers have been developing for several years. The performance improvements of this new platform are truly incredible. We really are looking forward to providing the industry with the specifics....


However, the performance, the instrumentation and many of the features of this platform would not be available if not for a broad ecosystem (read: village) of community support. I was able to get a sneak peek today at some of the reliability features in Nehalem-EX and cannot wait for the public to see these capabilities. The power management capabilities can deliver huge ROI savings for consolidation of virtualized and non-virtual workloads. The memory capacity of up to 512GB is staggering. Does anyone remember when 512GB of storage was considered impressive?


One of the most rewarding aspects of these hardware capabilities is the ability of the virtualization industry to innovate software solutions for the data center around new capabilities such as SR-IOV, machine-check architectures and dynamic resource pooling. Without the help of VMware, Microsoft, Citrix, Red Hat and the Xen community, users would struggle to take advantage of these features in a real-time automated deployment model.

Beyond servers, this virtualization ecosystem is innovating client virtualization, application virtualization and rapid application deployment models that are allowing virtualization technologies to permeate many different device form factors, from the data center to the desktop. This is compelling, innovative and delivers a high degree of ROI.


At Intel, we have thousands of software developers, and it is sometimes easy to forget their contributions and those of their key ISV colleagues. In virtualization, we have not only come to appreciate them, we couldn't exist without them. In difficult financial and social times, it takes a village to build a viable, vibrant and innovative community. I personally look forward to catching up with all of my colleagues in virtualization at VMworld and IDF (Intel Developer Forum) over the next 45 days and thanking them for their wonderful commitment to our joint efforts. It has been quite a ride so far, and the best is yet to be conceived.


I wrote a few weeks ago about the end of the mini generation.  This time I thought I would dig out some data to support my case.  My personal anecdotal evidence is what I am hearing from my customers: they are looking at replacing hundreds of legacy Unix servers with new high-performance Xeon boxes.  I am not talking about a one-for-one replacement, but using virtualization to replace 5 to 25 of these older Unix boxes with each Xeon 5500 server.  The economic incentives here are pretty staggering.


Why now?


I see multiple reasons:

1)  Ecosystem maturity.  Enterprise-class tools for virtualization, Linux, and high availability from VMware, KVM, Xen, Red Hat, SUSE and others.

2)  Performance.  The performance of 2000-2005 vintage SPARC and UltraSPARC boxes is easily replaced by Xeon – saving power, space, and potentially licensing.

3)  Application readiness.  Applications like Oracle are now “made for Linux” and do great on x86 platforms.

4)  Staff.  You have the expertise in Linux on Xeon; this is a growing area, so capitalize on it.

5)  Economics.  There are real savings to be had in licensing, power, space, staff, and sanity (sanity savings are subjective).



I hopped out to look at some benchmarks.  Benchmarks are notoriously awful as measures of actual performance, but they do work – mostly – as a comparison of relative performance.


There isn’t a lot of SPARC data, and much of it is old, but if you are looking at replacing some aging 4+ year old Unix hardware, that may be just what you need (with a nod to Bryce’s cash-for-clunkers blog).


For TPC-C, the most recent SPARC result I found was from 2003: running Oracle Database 10g EE on Sun Solaris 8 on 64 single-core Fujitsu SPARC64 1.3 GHz processors, it delivered 595,702 tpmC at $12.43/tpmC.


So if “this old machine" is sitting in your landscape, gulping power and support costs, you could replace it today with a system running Oracle Database 11g SE1 on Oracle Linux on 2 quad-core Intel Xeon X5570 2.93 GHz processors, delivering 631,766 tpmC at $1.08/tpmC.
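Putting the two quoted results side by side (the tpmC and price figures are the published numbers cited above):

```python
# Comparing the two published TPC-C results quoted above.
old = dict(tpmC=595_702, price_per_tpmC=12.43)  # 64-way SPARC64, 2003
new = dict(tpmC=631_766, price_per_tpmC=1.08)   # 2-socket Xeon X5570

perf_ratio = new["tpmC"] / old["tpmC"]                      # ~1.06x the throughput
cost_ratio = old["price_per_tpmC"] / new["price_per_tpmC"]  # ~11.5x cheaper per tpmC
```

In other words, the 2-socket box slightly beats the 64-processor machine on raw throughput while costing more than an order of magnitude less per transaction.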


The ROI on this must be about 10 minutes!  OK, maybe that is a bit quick, but this is a database!  Export, import, ta-da!  What are you going to do with all that extra rack space and power?


Replace a 64-socket platform with a 2-socket platform.  Amazing.  This could be 1U, or even a blade.  You could put it under your desk.  There have got to be some examples of older SPARC and Power boxes sitting in the landscape.  Let me know what you have.






I would like to elaborate on the topic of energy vs. power management from my previous entry.




Upgrading the electrical power infrastructure to accommodate additional servers is not an option in most data centers today.  Landing additional servers at a facility that's already working at the limit of its thermal capacity leads to the formation of hot spots, assuming electrical capacity limits are not reached first, with no headroom left in certain branch circuits.




There are two types of potentially useful figures of merit, one for power management and one for energy management.  A metric for power management allows us to track operational "goodness", making sure that power draw never exceeds limits imposed by the infrastructure.  The second metric tracks power saved over time, which is energy saved.  Energy not consumed goes directly to the bottom line of the data center operator.



To understand the dynamic between power and energy management, let's look at the graph below and imagine a server without any power management mechanisms whatsoever.  The power consumed by that server would be P(unmanaged) regardless of operating conditions.  Most servers today have a number of mechanisms operating concurrently, and hence the actual power consumed at any given time t is P(actual)(t).  The difference P(unmanaged) - P(actual) is the power saved.  The power saved, integrated over the interval t(1) through t(2), yields the energy saved.
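The distinction can be checked numerically: power saved is an instantaneous quantity, while energy saved is that quantity integrated over time.  A small sketch with an illustrative hourly power trace:

```python
# Numeric check of the power-vs-energy distinction described above.
# Sampled power readings (watts) at 1-hour intervals; values are illustrative.
p_unmanaged = 300.0                        # draw with no power management
p_actual = [300, 260, 210, 210, 240, 300]  # managed draw, hourly samples

# Power saved at each sample, integrated over time (trapezoidal rule)
saved = [p_unmanaged - p for p in p_actual]
hours = 1.0
energy_saved_wh = sum((a + b) / 2 * hours for a, b in zip(saved, saved[1:]))
# 280.0 Wh saved over the five-hour window
```

Note that if the managed trace never dips below P(unmanaged), as with a cap that never kicks in, the integral is zero: significant power-capping capability, zero energy saved.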





Please note that a mechanism that yields significant power savings may not necessarily yield high energy savings.  For instance, the application of Intel® Dynamic Power Node Manager (DPNM) can potentially bring power consumption down by over 100 watts, from 300 watts at full load to 200 watts, in a dual-socket 2U Nehalem server that we tested in our lab.  However, if DPNM is used as a guard-rail mechanism, to limit power consumption when a certain threshold is violated, DPNM may never kick in, and hence energy savings will be zero for practical purposes.  The reason for using it this way is that DPNM works best only under certain operating conditions, namely high loading factors, and because it works through frequency and voltage scaling, it brings a performance tradeoff.




Another useful figure of merit for power management is the dynamic range for power proportional computing.  Power consumption in servers today is a function of workload as depicted below:



The relationship is not always linear, but the figure illustrates the concept.  On the x-axis we have the workload, which can range from 0 to 1, that is, 0 to 100 percent.  P(baseline) is the power consumption at idle, and P(spread) is the power proportional computing dynamic range between P(baseline) and the power consumption at 100 percent workload.  A low P(baseline) is better because it means low power consumption at idle.  For a Nehalem-based server, P(baseline) is roughly 50 percent of the power consumption at full utilization, which is remarkable considering that it represents a 20 percent improvement over the number we observed for the prior generation, Bensley-based servers.  The 50 percent figure is a number we have observed in our lab for a whole server, not just the CPU alone.
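As a sketch, the figure's linear approximation can be written as P(u) = P(baseline) + P(spread)·u for utilization u in [0, 1]; the wattages below are illustrative, in line with the 2U server figures quoted earlier:

```python
# Simple linear model of power-proportional computing from the figure.
p_full = 300.0              # watts at 100% utilization (illustrative)
p_baseline = 0.5 * p_full   # ~50% of full power at idle (Nehalem-class)
p_spread = p_full - p_baseline

def power(u: float) -> float:
    """Modeled server draw (watts) at utilization u in [0, 1]."""
    return p_baseline + p_spread * u

# power(0.0) -> 150.0 W at idle; power(1.0) -> 300.0 W at full load
```

A lower P(baseline) flattens the left end of this line, which is exactly what improved power proportionality means.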




If a 50 percent P(baseline) looks outstanding, we can do even better in certain application environments, such as load-balanced front-end Web server pools and the implementation of cloud services through clustered, virtualized servers.  We can achieve this effect through the application of platooning.  For instance, consider a pool of 16 servers.  If the pool is idle, all the servers except one can be put to sleep.  The single idle server consumes only half the power of a fully loaded server, that is, one half of one sixteenth of the cluster's full power.  The dormant servers still draw about 2 percent of full power each.  Hence, after doing the math, the total power consumption for the cluster at idle will be about 5 percent of the full cluster power consumption.  Thus for a clustered deployment, the power dynamic range has been increased from 2:1 for a single server to about 20:1 for the cluster as a whole.




In the figure below, note that each platoon is defined by the application of a specific technology or a state within that technology.  This way it is possible to optimize the system behavior around the particular operational limitations of each technology.  The graph below is a generalization of the platooning graph in the prior article.  For instance, a power-capped server will impose certain performance limitations on workloads, and hence we assign non-time-critical workloads to that platoon.  By definition, an idling server cannot have any workloads; the moment a workload lands on it, it's no longer idle, and its power consumption will rise.




The CPU is not running in any S-state other than S0.  The selection of a specific state depends on how fast that particular server is needed back online.  It takes longer to bring a server online from the lower energy states.  Servers in G3 may actually be unracked and put in storage for seasonal equipment allocation.




A virtualized environment makes it easier to rebalance workloads across active (unconstrained and power-capped) servers.  If servers are being used as CPU cycle engines, it may be sufficient to idle or put to sleep the subset of servers not needed.




The extra dynamic power range comes at the expense of instituting additional processes and operational complexity.  However, please note that there are immediate power and energy management benefits accrued through a simple equipment refresh.  IBM reports an 11X performance gain for Nehalem-based HS22 blade servers versus the HS20 model, which is only three years old.  Network World reports a similar figure: a ten-fold increase in performance, not just ten percent.




I will be elaborating on some of these ideas in the PDCS003 "Cloud Power Management with the Intel® Nehalem Platform" class at the upcoming Intel Developer Forum in San Francisco during the week of September 20th.  Please consider yourself invited to join me if you are planning to attend the conference.

We created a server refresh ROI estimator tool to help IT managers make sense of the significant OpEx savings they can achieve by making targeted investments in new server hardware.  In my previous blog, when we introduced the ROI tool back in April 2009, I talked about the capabilities of the estimator and the benefits of server refresh.  In the first three months, we have had nearly 4,000 users of the ROI estimator, and of those, almost 800 have printed reports to share with others in their organizations.  The feedback we have received from users has been very encouraging.



CIO for major US hospital: “This would help my IT staff justify the financial value of the technology investment they are proposing. This has been a barrier to freeing up capital internally.”

IT Manager for major US bank: “I used to have regular funding for technology refresh projects. It was a given for my budget.  However, with the increased constraints on capital, I now have to justify this type of spending.”

Technology Sales Consultant: “This tool helped me work better with my customer to gain a deeper understanding of their server environment and allowed us to jointly identify high-ROI investments to improve their infrastructure.”



I have also heard many constructive suggestions for improvement.  As a result, we have continued to evolve the tool based on feedback from users.


Tool Training – How to Use: We heard that the benefits of using the Savings Refresh Estimator spanned many functional roles, making us realize that the use models for this type of tool, and what users are looking for, vary dramatically from person to person.  This has challenged us to look at ways to streamline the user interface (something we continue to work on) for different users and analyses.  In the interim, we are developing a video training guide to help users get maximum benefit from the tool.  We have a PDF training guide today that can help you get started now.


PowerPoint Output: What would we do without PowerPoint?  We received feedback on the desire to make the output of this tool more sharable inside IT organizations and with business partners in PowerPoint format, as a way to communicate the opportunity and benefits of server refresh investment.  So we now have a PowerPoint output option in the reports section that breaks down the benefits of server refresh for a variety of audiences, from executive staff to facilities to finance.  Everyone inside your business can benefit from server refresh, and now you can show them how.


Secure Analysis: We received feedback that many users wanted off-line access, either for use in meetings where connectivity was limited or to protect internal data from exposure online.  You can now run the tool on your laptop to support these use models.


More … More … More Functionality: We heard lots of requests and ideas to expand the level of functionality and analysis capabilities.  We have to balance scope and complexity, but keep these requests coming.  The following changes are incorporated into today’s estimator.


Virtualization-to-Virtualization Refresh Scenario – now included

Virtualization Loading – you can now edit and change VMs per server, both new and old

Custom Performance Data – enter your own performance data to better model what you expect to see in your business

Depreciation Cycle – no longer fixed at four years; now adjustable

Memory Sizing – information added to allow user analysis

Processor Description – allows users to cross-reference data to other, more familiar terminology



Accuracy / Approach: We have also heard feedback challenging us on different ways to look at refresh scenarios, especially as we learn more about how people are using virtualization and sizing their environments after refresh.  Sizing is a very customer-centric, application-specific task that is difficult to capture in a one-size-fits-all model.  We won’t be able to model every sizing situation, but we are planning some future enhancements intended to help you self-evaluate.


I want to thank everyone in the community for their input on this tool and helping us to deliver a better product over time.  Keep the ideas coming.  Feel free to respond with comments here.





twitter: @chris_p_intel

Japan announced today that it has emerged from recession, following Germany and France’s announcements last week that their economies also grew in the second quarter. Moody’s Business Confidence survey shows that confidence has been steadily increasing since March ‘09. For the first time since September ’08, “Economic Recovery” nudges above “Economic Crisis” in Google Search Volume in early August.

In addition to this, economic forecasts (WW GDP, US GDP, and EU GDP) point to a recovery over the next 6 months.  A couple of quotes:

  • The direction of real GDP is even expected to turn from negative to positive in the current quarter. The academic arbiters of the business cycle at the National Bureau of Economic Research will eventually proclaim that the Great Recession ended sometime this summer.

      Moody’s Economy.Com – July 7, 2009

  • The global economy is beginning to pull out of a recession unprecedented in the post–World War II era.

       International Monetary Fund – July 8, 2009

So, why am I bombarding this blog with various optimistic economic data? Because if we really are pulling out of the abyss, I’m worried that many companies out there are sitting on servers that will not be ready for the increased demand right around the corner.   

John Gantz, IDC Vice President, in his keynote speech at the start of this year’s CIO Summit in Auckland, was quoted as saying there will be an unprecedented amount of IT-driven change in the next four years.  He projected a three-fold rise in mobile users and a five-fold growth in information, resulting in heightened security and privacy concerns and questions about which data to store or throw away. He also mentioned that the number of interactions between people on networks will grow eight-fold.

So this got me thinking… Is your company looking to differentiate and go after more market share while your competitors are hunkered down and not investing in the downturn? My guess is that there are a lot of IT managers being asked to support more social media, offer more SaaS, deploy more virtual machines, and support more real time analytics to get a leg up on the competition.  My gut tells me that it will be hard to do all of this with older servers that were put into another year of extended warranty because that felt like the right move when the proverbial economic s**t hit the fan last year.

It’s critical to be prepared for when the recovery comes, and data points to an economic turnaround happening now – are you positioning your department to own it when it arrives?


In the management practices of most data centers, I see IT driving more efficiency from their infrastructure by switching to virtualization. As they do so and develop confidence over the years, the types of applications being deployed with virtualization are also changing. Even scalable enterprise apps are now considered good candidates for virtualization.


IT managers now have the building blocks to deploy scalable enterprise apps using virtualization: increasing SMP support in VMMs, such as the 8-way vCPU capability in VMware vSphere; Intel Xeon virtualization hardware assists; NICs with virtualization hardware assists; and scalable Intel Xeon-based system architectures from IBM such as the 3850, 3950, and BladeCenter servers. VMware HA and DR solutions, along with chipset reliability and availability features, further provide confidence for the business-critical needs of enterprise apps. Capabilities such as VMware VMotion and FlexMigration help increase the efficiency of the infrastructure in such an environment.


Economic conditions are at the same time putting more pressure on IT to deliver more value within a constrained budget.  Refreshing old hardware and adopting virtualization are simple strategies to achieve these goals.

I’ll be talking about virtualization with the new VMware vSphere on IBM Intel processors on August 12, 1 PM EST.  I’ll be joined by Bob Zuber, Program Director, High Performance, IBM.   The webinar is designed to help IT managers better understand scalable virtualization infrastructure, enterprise application virtualization, reduced TCO, and efficiency benefits along with a special financing opportunity.


Register for the session on “Virtualize with the new VMware vSphere on IBM Intel processors and take advantage of special financing”.


In the meantime, I’ll be answering related questions leading up to the webcast, and during the webcast.  I’d also like to hear how you’re delivering more value in the data center within a constrained budget.

You’ve seen it on the front pages of the papers lately.  The program that offers consumers incentives to trade in older used cars for more fuel-efficient new cars is pushing auto sales into overdrive.  The $1B in government funding for it was burned through in less than a week. The U.S. House of Representatives rushed through an additional $2B in emergency funds just to keep the program going, but it will need Senate approval to extend beyond Tuesday, August 4th. My guess is that to make a continuation of the program palatable to the U.S. taxpayer, the incentive will need to be cut (from $4,500 for a new fuel-efficient car to somewhere in the $1-2k range), but it’s great to see people buying cars and stimulating part of the economy – while getting older, fuel-inefficient cars off the roads.


I saw an interesting article asking whether a similar program for servers would work… and though I think it’s a creative idea, I’ll argue that Intel and our OEM partners have been offering “Cash for Clunkers” for quite some time now – without any U.S. taxpayer help.  How? By promoting the benefits of server refresh, a strategy that is proving to be one of the most beneficial investments in IT and the business. Using the Xeon ROI Estimator, I spent 2-3 minutes modeling potential savings by comparing 4-year-old 2P Intel Xeon-based servers to new 2P Intel Xeon 5500-based servers – and this is what I found:


An investment in one Intel Xeon 5500-based server (~$8.5k including purchase price, migration cost, and software validation) delivers up to 10x the performance per server – a 10:1 consolidation opportunity versus the 10 older servers purchased four years ago that, as an IT manager, I can now retire.  So where’s the cash for the clunkers? I would save over $4k a year in energy costs and over $11k a year in server and software maintenance costs by cutting out the old and bringing in the new.  The 4-year total savings is about $38k, with a break-even period of about 9 months. Not bad… and that doesn’t even take into consideration the software licensing costs I could probably save by cutting down the server count. Try modeling this yourself and check out the new PowerPoint report you can generate from it – it really explains the benefits in a way that the finance and facilities folks will find useful.
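The back-of-the-envelope arithmetic behind such an estimate looks roughly like this. Note that this simplified sketch counts only the two savings categories mentioned above; the actual estimator models additional cost categories, which is why its break-even and total-savings figures differ:

```python
# Simplified refresh economics, using round figures similar to those
# above (~$8.5k investment, ~$4k/yr energy + ~$11k/yr maintenance
# savings). The real estimator includes more cost categories, so its
# results will differ from this sketch.
def refresh_summary(investment, annual_savings, years=4):
    breakeven_months = 12 * investment / annual_savings
    net_savings = annual_savings * years - investment
    return breakeven_months, net_savings

months, net = refresh_summary(8_500, 4_000 + 11_000)
print(f"break-even ~{months:.1f} months, 4-yr net savings ${net:,.0f}")
# -> break-even ~6.8 months, 4-yr net savings $51,500
```

On these two categories alone the refresh pays for itself within the first year; the tool's more conservative figures still land at roughly nine months.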


I also found this, which explains why Intel IT decided to move ahead with server refresh in 2009 after current economic conditions forced Intel to re-evaluate the strategy. Analysis found that delaying server refresh for a year would increase costs by USD 19 million. A refresh strategy also applies to the bigger 4-socket and above servers, as documented in this server refresh brief. Server refresh is a strategic investment for IT – the cash-for-clunkers program that keeps on giving.


