There’s been an ongoing debate in the research community for some time: do emerging scale-out applications need brawny cores or wimpy cores in their server processors? That debate has moved beyond academia and found its way to the CTOs of companies with large datacenters. To clarify, “wimpy” isn’t used pejoratively; it’s simply a way to characterize processor cores that trade off performance for lower power consumption.


Naturally, the simple answer to “brawny vs. wimpy” is that it depends on the application. Where you need more CPU performance, brawny Xeon processors seem like a good bet; where you may be I/O constrained, wimpy Atom processors might be a good choice. But that raises the question: which applications fall into which category? As an example, Hadoop is often cited as a framework that lends itself well to wimpy cores. Yet because Hadoop is used for so many types of problems, it isn’t easy to generalize the right infrastructure: small sort operations may be less CPU intensive, while large sorts or word-count workloads demand more CPU performance. Even then it’s not that simple; 1Gb networking may artificially constrain CPU scaling, or the size of the problem may require larger nodes simply for scalability. So how is one to decide?
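
To make that workload-classification point concrete, below is a minimal sketch of the canonical Hadoop word-count job, written against the standard org.apache.hadoop.mapreduce API (this is the textbook example, not code from any deployment discussed here). The per-record tokenization in the mapper is the CPU-bound part that favors brawny cores on large inputs, while the shuffle between map and reduce is where a 1Gb network can become the bottleneck instead.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: tokenizing each record and emitting a (word, 1) pair per
  // token is pure CPU work; this is the part that scales with core speed.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: summing counts per word. Between map and reduce sits
  // the shuffle, where network bandwidth (e.g., 1Gb links) can saturate
  // long before the cores do.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    // Local pre-aggregation on each node cuts shuffle traffic, shifting
    // the balance back toward CPU and away from the network.
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Even in this simple job, whether the cluster ends up compute-bound or I/O-bound depends on input size, record shape, and the network between nodes, which is exactly why no single core type wins across the board.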


I don’t think you should have to. I believe the industry needs infrastructure that lets datacenter operators deploy servers efficiently, with different versions that meet different requirements. Not only do the servers have to adapt to a range of requirements, they should also offer the full set of features that customers require: 64-bit pointers and arithmetic, ECC memory, virtualization support, and software compatibility, across both brawny and wimpy cores.


I’m excited that I had the opportunity today to talk at GigaOm’s Structure Conference in San Francisco. I highlighted our roadmap of brawny-core Xeon processors, ranging from 45W parts down to dual-core 17W parts, for applications that benefit from more thread-level performance. I also announced that Centerton, our first Atom-based SoC for servers, is on track for production in the second half of this year, and that a follow-on Atom-based server SoC, codenamed Avoton and built on our revolutionary 22nm 3D Tri-Gate transistor technology, is coming in 2013. We demonstrated that, with little optimization or tuning, we were able to see sub-9W power consumption for a node while it served web pages. In addition, HP unveiled Gemini, their first production Moonshot server, and chose Centerton to lead the platform. By the end of this year, users will be able to put Atom-based servers into production environments and try out datacenter-class wimpy servers while leveraging the leadership features and the range of performance and density configurations that Gemini will provide.


Datacenter infrastructure architects and IT decision makers have enough decisions to make. Given the uncertainty in how code will evolve, it’s only going to get harder to predict the ideal infrastructure. As hardware developers, we need to give customers infrastructure flexibility: a choice of options that maintains consistent features and compatibility.
