
There are two technologies available to regulate power consumption in the recently introduced Nehalem servers using the Intel® Xeon® processor 5500 series.  The first is power proportional computing, where power consumption varies in proportion to processor utilization.  The second is Intel® Dynamic Power Node Manager (DPNM) technology, which allows setting a target power consumption while a CPU is under load.  The power capping range increases with processor workload.


An immediate benefit of the Intel® Dynamic Power Node Manager (DPNM) technology is the capability to balance and trade off power consumption against performance in deployed Intel Nehalem generation servers.  Nehalem servers have a more aggressive implementation of power proportional computing, where idle power consumption can be as small as 50 percent of the power under full load, down from about 70 percent in the prior (Bensley) generation.  Furthermore, the observed power capping range under full load when DPNM is applied can be as large as 100 watts for a two-socket Nehalem server with the Urbanna baseboard, which was observed in the lab to draw about 300 watts under full load.  The actual numbers you will obtain depend on the server configuration: memory, number of installed hard drives, and the number and type of processors.
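As a rough illustration, a linear power-proportional model (a simplification; real curves are not perfectly linear) captures the generational difference using the figures above: roughly 300 watts at full load, with idle at about 50 percent of that for Nehalem versus about 70 percent for the prior generation:

```python
def server_power(utilization, full_load_w=300.0, idle_fraction=0.5):
    """Linear power-proportional model: an idle floor plus a component
    proportional to utilization (0.0 to 1.0).  A sketch, not a
    measurement; real power curves are workload-dependent."""
    idle_w = full_load_w * idle_fraction
    return idle_w + (full_load_w - idle_w) * utilization

# Nehalem-class server: idle is ~50% of full-load power
print(server_power(0.0))   # ~150 W at idle
print(server_power(1.0))   # ~300 W at full load

# Prior (Bensley) generation: idle was ~70% of full-load power
print(server_power(0.0, idle_fraction=0.7))   # ~210 W at idle
```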


Does this mean that it will be possible to cut electricity bills by one third to one half using DPNM?  This is a bit optimistic.  A typical use case for DPNM is as a "guard rail": it is possible to set a not-to-exceed target for a server's power consumption, as shown in the figure below.  The red line in the figure represents the guard rail.  The white line represents the actual power demand as a function of time; the dotted line represents the power consumption that would have existed without power management.
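The guard-rail behavior is easy to sketch: actual draw follows demand until demand hits the cap, then flattens at the cap.  The demand trace and 250-watt cap below are hypothetical numbers for illustration only:

```python
def capped_power(demand_w, cap_w):
    """Apply a 'guard rail': the server's actual draw tracks demand
    but never exceeds the configured cap."""
    return min(demand_w, cap_w)

demand = [180, 240, 310, 290, 200]   # hypothetical demand trace, watts
cap = 250.0                          # hypothetical not-to-exceed target
actual = [capped_power(d, cap) for d in demand]
print(actual)   # capping engages only on the two samples above 250 W
```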




Enforcing this power cap brings operational flexibility: it is possible to deploy more servers to fit a limited power budget to prevent breakers from tripping or to use less electricity during peak demand periods.
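The "more servers per power budget" arithmetic is straightforward.  Assuming a hypothetical 6-kilowatt branch-circuit budget, the 300-watt uncapped peak quoted above, and a hypothetical 250-watt cap:

```python
def servers_per_budget(budget_w, per_server_w):
    """How many servers fit under a fixed power budget when each is
    provisioned for the given worst-case draw."""
    return int(budget_w // per_server_w)

budget = 6000.0   # hypothetical branch-circuit budget, watts
print(servers_per_budget(budget, 300.0))   # provisioned at uncapped peak
print(servers_per_budget(budget, 250.0))   # provisioned at the cap
```

Capping each server at 250 watts lets four more servers share the same circuit without risking a tripped breaker.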



There is a semantic distinction between energy management and power management.  Power management, in the context of servers deployed at a data center, refers to the capability to regulate power consumption at a given instant.  Energy management refers to the cumulative energy, that is, power integrated over time, saved over a period of operation.


The energy saved through the application of DPNM is represented by the area between the dotted line and the white graph line in the figure; the power consumed by the server is represented by the area under the solid white line.  Since power capping is in effect during relatively short periods, and when in effect the area between the dotted line and the guard rail is relatively small, it follows that the energy saved through the application of DPNM alone is small.
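That "area between the curves" can be computed directly from a series of power readings.  The hourly averages and the 250-watt cap below are hypothetical, but the calculation shows why the saved energy is modest when the cap engages only briefly:

```python
def energy_kwh(power_w, interval_h):
    """Accumulate energy (kWh) from a series of average power readings,
    each covering the same interval in hours."""
    return sum(power_w) * interval_h / 1000.0

# Hypothetical hourly average demand (watts) vs. the same trace
# clipped by a 250 W guard rail
unmanaged = [200, 280, 310, 320, 260]
capped = [min(p, 250) for p in unmanaged]

saved = energy_kwh(unmanaged, 1.0) - energy_kwh(capped, 1.0)
print(saved)   # area between the two curves, in kWh
```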


One mechanism for achieving significant energy savings calls for dividing a group of servers running an application into pools or "platoons".  If servers are placed in a sleeping state (ACPI S5, soft-off) during periods of low utilization, it is possible to bring their power consumption to less than 5 percent of their peak power consumption, basically just the power needed to keep the network interface controller (NIC) listening for a wakeup signal.


As the workload diminishes, additional servers are moved into a sleeping state.  The process is reversible whereby servers are taken from the sleeping pool to an active state as workloads increase.  The number of pools can be adjusted depending on the application being run.  For instance, it is possible to define a third, intermediate pool of power capped servers to run lower priority workloads.  Capped servers will run slightly slower, depending on the type of workload.


Implementing this scheme can be logistically complex.  Running the application in a virtualized environment can make it considerably easier because workloads in low use machines can be migrated and consolidated in the remaining machines.

We are conducting experiments to assess the potential for energy savings.  Initial results indicate that these savings can be significant.  If you, dear reader, have been working in this space, I'd be more than interested in learning about your experience.


If this topic is of interest to you, please join us at the Intel Developer Forum in San Francisco at the Moscone Center on September 22-24.  I will be facilitating course PDCS003, "Cloud Power Management with the Intel® Xeon® 5500 Series Platform."  You will have the opportunity to talk with some of our fellow travelers in the process of developing power management solutions using Intel technology ingredients and get a feel for their early experience.  Also please make a note to visit booths #515, #710 and #712 to see demonstrations of early end-to-end solutions these folks have put together.

