
PUE has been a hugely successful metric for quantifying data center infrastructure efficiency. Of course, infrastructure is not the only thing in a data center, so we have proposed “SUE” as “Part B” of the data center efficiency equation to address the equally important aspect of compute efficiency. SUE is a similarly derived IT performance metric that is gaining traction in practice.


Though neither metric is “perfect,” both have a low barrier to adoption and are meaningful from a big-picture perspective (so long as you don't get too hung up on gaming the metric at the expense of other important parameters). Another powerful aspect driving acceptance of PUE and SUE is that they fit easily into plain sentences. If your PUE is 2.0, you’re using twice the energy you need to support your current IT infrastructure. If your SUE is 2.0, you’re operating twice the number of servers you need to support your current IT workload. Both convey obvious business impact.


So what about the “holy grail,” data center work-efficiency?


There’s broad industry recognition of its importance (as Ian Bitterlin says, “it is the 1.0 that is consuming 70% of the power”), and a lot of work is going on to understand it. For instance, The Green Grid published DCeE, a data center efficiency metric, back in 2010, with a view toward quantifying the “useful” work output of the data center.


However, this sophisticated approach really has to do with application-level details and has not yet gained wide industry traction. This is partly, I believe, because of its complexity; the barrier to entry is an investment in highly granular data analysis which is more than many operators need or will support.


So I asked myself, “What are the alternatives?” Can we lower the barrier to entry the way PUE and SUE have for infrastructure and IT efficiency, and define a Data Center Capital Usage Effectiveness (DCUE) taken as the ratio of two quantities with units of “work per energy”?


Well, the short answer is, we can. The starting point is the very simple idea that:


Work/Energy = Integrated (Server Performance * Utilization)/(Total Data Center Energy)


The big assumptions are: 1) statistical independence of server performance and utilization, 2) that CPU performance and utilization drive work output (a simplification that can be removed at the cost of more complexity), and 3) that network and storage efficiency can be neglected (they are minority energy consumers in most data centers). Not perfect, but tractable.
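Under those assumptions the metric reduces to simple arithmetic. Here is a minimal Python sketch of that idea (the function and variable names are mine, not part of any proposed standard): PUE and SUE act as multiplicative overhead factors, and utilization is the fraction of server capacity doing useful work.

```python
def work_per_energy(pue, sue, utilization):
    """Relative useful work per unit of total data center energy.

    PUE (facility energy / IT energy) and SUE (servers operated /
    servers needed) are treated as multiplicative overheads, and
    utilization as the fraction of capacity doing useful work, per
    the simplifying assumptions above.  A value of 1.0 would mean
    an ideal facility: PUE = SUE = 1 at 100% utilization.
    """
    return utilization / (pue * sue)
```

For a data center with a PUE of 2.0, an SUE of 2.4, and 20% utilization, this evaluates to about 0.04, i.e. roughly 4% of the ideal work per unit of energy.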


The DCUE formula has the advantage of providing an easy entrée into the analysis of the work efficiency of the data center; it focuses on what many consider the big three: infrastructure efficiency, IT equipment efficiency, and how effectively the capital asset is being utilized (thanks to Jon Koomey for pointing that out to me).


Roughly, here is how the numbers work (these are made-up data, but representative in my experience): imagine a typical data center with a PUE of 2.0. If it is on a six-year refresh cycle, its SUE will be about 2.4, and server utilization might be about 20% in an enterprise with a low level of virtualization.


An efficient data center might have a PUE closer to 1.3, a more aggressive three-year server refresh cycle with an SUE of about 1.6, and might increase utilization to 50% through both higher rates of virtualization and perhaps technology like “cloud bursting” to handle demand peaks.


The math reveals a Data Center Capital Usage Effectiveness (DCUE) improvement opportunity of about six times between the two scenarios.
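That six-times figure can be checked directly. A sketch in Python, assuming the composition DCUE = PUE × SUE / utilization (my inference from the numbers in this post; it reproduces the “DCUE of 24” quoted below for the typical case):

```python
# Scenario figures from the post (illustrative, not measured data).
typical   = {"pue": 2.0, "sue": 2.4, "utilization": 0.20}
efficient = {"pue": 1.3, "sue": 1.6, "utilization": 0.50}

def dcue(pue, sue, utilization):
    # Excess factor: how many times the energy (and capital) of an
    # ideal facility (PUE = SUE = 1, 100% utilization) is being used.
    return (pue * sue) / utilization

for name, dc in (("typical", typical), ("efficient", efficient)):
    print(name, round(dcue(**dc), 1))
print("improvement:", round(dcue(**typical) / dcue(**efficient), 1))
```

This prints a DCUE of 24.0 for the typical scenario, 4.2 for the efficient one, and an improvement factor of about 5.8, matching the “about six times” above.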


Data Center | PUE | SUE | Utilization | DCUE
Typical (six-year refresh) | 2.0 | 2.4 | 20% | 24
Efficient (three-year refresh) | 1.3 | 1.6 | 50% | ~4.2
In fact, the gap relative to a “Cloud” data center could be even larger, with more aggressive server refresh, lower PUE, and higher utilization levels pushing its DCUE down further, whereas typical enterprise utilizations might be lower still.


My friend Mike Patterson here at Intel is always challenging me with, “So… what does it mean?” Well, just as PUE and SUE represent “excess” quantities, a DCUE of 24 means you are using about 24 times the energy, and hence about 24 times the data center capital, that you'd need at optimum efficiency. A pretty powerful argument for improvement.


So there you have it, “The Big Three” for data center capital efficiency: 1. How efficient is your infrastructure? 2. How effective is your server compute capability? 3. What is the utilization of your capital assets?


In subsequent blogs, I’ll talk more about these ideas and some of the issues we still need to think about. But until then, I'm curious what you think. Right track? Wrong track? Why?

