The November 2011 Top500 supercomputer list, released last week, marked a milestone: despite some notable new entrants, it was the first time in the history of the list that the top ten performers didn't change. Does this mean innovation in the "nosebleed" seats of the HPC arena has stopped? Hardly; it just means the focus might be shifting.

 

The Green500, published concurrently with the Top500, was more dynamic. In June 2011 the two most efficient supercomputers were BlueGene/Q systems; now the top five are BlueGene/Q. Surprisingly, the top efficiency decreased slightly, from 2097 Mflops/Watt to 2026 Mflops/Watt. And while BlueGene continues to top the list, systems combining CPUs from Intel and AMD with GPU accelerators (from a range of manufacturers, from nVidia to Intel) made a strong showing.

 

What is surprising is that there is almost no overlap between the top ten of the Green500 and the top ten of the Top500 (just one system). So, with the race to the top of supercomputing increasingly about both efficiency and performance, what does leadership mean? Well, that judgment, of course, depends on the goal.

 

I recently proposed an approach that combines the Top500 and Green500 performance and efficiency scales into a single metric (which I will call “Exascalar” henceforth). The thinking behind it is straightforward: since both efficiency and performance are required as the industry pushes toward the next big goal, a good metric will balance the two.

 

So what happened this time? Well, before getting started, please note that this is an informal ranking done by me without formal peer review. Any errors are my responsibility. Comments, inputs, and corrections are appreciated.

 

As a refresher, the graphical representation of the performance and efficiency data shows how they are combined to form the Exascalar. Exascalar is the negative logarithm of “how far away” a system is from meeting the Exascale goal of 1.0 Exaflops in a 20 MWatt envelope. Note the iso-power lines and iso-Exascalar curves in the graph. (One reason I like this approach is that, for a given efficiency, I can directly read the expected performance limit in a given power envelope.)
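For readers who want the picture as a formula, here is a minimal Python sketch of the calculation as I've described it. The exact normalization is my assumption for illustration: the "distance" is taken as the Euclidean distance, in log10 space, between a system's (performance, efficiency) point and the Exascale goal, so zero means the goal is met and smaller is better. The function names and the Tflops / Mflops-per-Watt units (the Top500 and Green500 conventions) are mine.

import math

# Sketch only -- the log10-distance form of Exascalar is an assumption here.
EXA_PERF_TFLOPS = 1.0e6      # 1.0 Exaflops expressed in Tflops
EXA_EFF_MF_PER_W = 50000.0   # 1.0 Exaflops / 20 MW = 50,000 Mflops/Watt

def exascalar(perf_tflops, eff_mf_per_w):
    # Each factor-of-ten shortfall in performance or efficiency adds one
    # unit along its axis; the score is the length of the combined gap.
    perf_gap = math.log10(EXA_PERF_TFLOPS / perf_tflops)
    eff_gap = math.log10(EXA_EFF_MF_PER_W / eff_mf_per_w)
    return math.hypot(perf_gap, eff_gap)

def perf_limit_tflops(eff_mf_per_w, power_mw):
    # The iso-power reading of the graph: performance ceiling for a given
    # efficiency and power envelope (Mflops/Watt x MW works out to Tflops).
    return eff_mf_per_w * power_mw

For example, perf_limit_tflops(2000, 20) returns 40,000 Tflops: at 2000 Mflops/Watt, a 20 MW envelope tops out at roughly 40 Petaflops.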

 

Exascalar Graph November 2011.jpg


 

 

The details of the top ten based on Exascalar are shown in the list below. The top three computers are unchanged from the last look: the RIKEN K Computer, with its muscular performance, followed by two systems based on the Xeon 5670 with nVidia GPUs. The third-place system, at the GSIC Center, Tokyo (also Xeon 5670 with nVidia GPUs), is notable since it is the only system on the list with both top-ten performance and top-ten efficiency.

 

Next on the list is the DOE/NNSA/LLNL BlueGene/Q system, which ranks fourth in Exascalar on the strength of its very high efficiency (it ranks fourth in efficiency as well). It’s a great example showing that efficiency and performance, not just scale, count. Judging from the position of the BlueGene/Q systems on the graph above, there certainly appears to be more headroom in the future: its current power is about one twentieth the power of the number one system.
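To make the headroom point concrete, here is a purely illustrative use of the perf_limit_tflops sketch above. The numbers are hypothetical placeholders, not figures from the list; they simply show that, at a fixed efficiency, a system drawing one twentieth the power of the leader has, in the ideal case, a twentyfold higher performance ceiling available in a leader-sized power envelope.

# Hypothetical numbers for illustration only (not taken from the Nov 2011 lists)
eff = 2000.0                          # Mflops/Watt, a BlueGene/Q-class efficiency
small = perf_limit_tflops(eff, 0.5)   # ceiling in a 0.5 MW envelope: 1,000 Tflops
large = perf_limit_tflops(eff, 10.0)  # ceiling in a 10 MW envelope: 20,000 Tflops
print(large / small)                  # 20.0 -- the ideal-case headroom factor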

 

Below number seven is where I think the race gets most interesting. The Sunway system at the Chinese National Supercomputing Center in Jinan makes a very strong showing in the combined ranking. It is the first system on the list that is in neither the top ten in performance nor the top ten in efficiency; its strength is its balance of the two.

 

Rounding out the list at number ten is a very strong showing by a Xeon E5 (Sandy Bridge-EP) system, again with a strong balance between high efficiency and performance. It's a remarkable achievement for a processor this new to make it to a top-ten spot, and I think it begins to show us what the future looks like.

 

 

Exascalar Ranking November 2011.jpg


 

Overall, six of the former Exascalar top ten remained on the list compared to last spring. Although the top of the list didn’t change, the tenth system improved from 3.75 to 3.65, a significant improvement in performance and efficiency (recall that Exascalar is logarithmic).
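To put that 0.10 change in linear terms, under the log10-distance reading of Exascalar sketched above it corresponds to closing the remaining gap to the Exascale goal by a factor of roughly 10^0.10, or about 1.26x, along the direction the system moved.

# Rough linear-terms reading of a 0.10 Exascalar improvement (assumed log10-distance form)
print(10 ** (3.75 - 3.65))   # ~1.26, i.e. the gap factor shrank by about a quarter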

 

The most significant moves were by systems with very strong efficiency and by those achieving that delicate balance between high efficiency and performance. Systems that pushed performance over efficiency lost relative ground in the ranking this time. This could be a trend that continues to define the future of supercomputing, though only time will tell.

 

Supercomputing is first and foremost about performance, but it is also increasingly constrained by power. Looking at performance and efficiency combined may give us better insight into how the race to Exascale is shaping up, and ultimately who will win.

 

When I reviewed this with my friend and colleague Mike Patterson, he asked me a very interesting question: “What information is contained in the slope of the line to Exascale?” I have an idea, but I am interested in your thoughts. What, if anything, does the slope of the line to Exascale tell us?

 

And of course, any additional thoughts, comments or insights are welcome. Is the focus shifting? Does this provide insight? What do you predict will happen in the future?