The Data Stack

Yesterday – Intel officially launched the Intel® Xeon® 5500 processor (formerly codenamed “Nehalem”) for servers and workstations. One of the most exciting uses of this new platform will be as a key building block in cloud computing infrastructure. Whether you’ve bought into the hype of cloud computing or are a jaded IT realist – you can’t afford to pass up this list of 10 reasons the Intel Xeon 5500 processor is perfect for the cloud.

  1. Efficiency. To wring out the greatest efficiency, the largest Internet providers place their datacenters next to hydroelectric plants or other low-cost energy sources – each watt saved flows straight to the bottom line. Similarly, cloud computing companies intensely scrutinize their server purchases, weighing some variation of this question: how much performance (and, by extension, revenue) can I squeeze out of the equipment versus the cost of procurement and operations? This is the essence of “efficiency”. And now – with Intel’s new Xeon 5500 processor – there’s great news for anyone building efficient cloud infrastructure. The Xeon 5500 can deliver up to 2.25X the computing performance of Intel’s previous-generation Xeon 5400 series within a similar system power envelope [1]. (The Xeon 5400 is no efficiency slouch, by the way – it has led the industry-standard SPECpower results for two-socket systems since the benchmark was created [2].) Need more evidence? Look no further than the result announced by IBM: a score of 1,860, a 64% leap over the previous high for a two-socket system [3].
  2. Virtualization performance. If a cloud service provider has built a virtualization layer into its architecture, the performance of virtual machines and the ratio of VMs to servers are key concerns. Enter the Xeon 5500, which boasts a stellar jump in virtualization performance – up to 2X that of the previous-generation Xeon 5400 series [4] – allowing virtualized clouds to squeeze even more capability out of their infrastructure.
  3. Adaptability. Cloud computing environments tend to be highly dynamic: usage ebbs and flows during the day, some applications scale rapidly while others shut down, and so on. To meet such shifting demand, it’s critical to have adaptable cloud building blocks. And here Intel’s Xeon 5500 shines: the processor has new intelligence to increase performance when needed (Intel Turbo Boost) and to reduce power consumption when demand falls (Intel Intelligent Power Management Technology).
  4. Designed for higher operating temperatures. Across the datacenter industry, there’s growing interest in running datacenters at warmer temperatures to conserve energy. For cloud computing mega-datacenters, this has been standard practice for several years. But it’s not just the datacenter staff that needs to handle the warmer climate – the equipment must tolerate it as well. Intel’s Xeon 5500 has been designed to run at higher temperatures, providing one more piece of the puzzle for more efficient cloud infrastructure [5].
  5. 50% lower idle power. Cloud computing providers – like airlines and phone companies – need to run at the highest utilization possible to maintain a healthy P&L. Yet there are times when usage, and thus server utilization, drops – and at those times, providers want processors with low power consumption. The Xeon 5500 boasts idle power that’s up to 50% lower than prior-generation systems, reducing energy costs [6].
  6. Advanced power management. Intel has incorporated special platform-level power technologies into the Xeon 5500 platform, opening new avenues for managing server energy consumption beyond what’s already built into the processor. Intel Intelligent Power Node Manager is a power-control policy engine that dynamically adjusts platform power to achieve the optimum performance-power ratio for each server. By setting user-defined platform energy policies, Node Manager can let datacenter operators increase server rack density while staying within a given power threshold. While results vary by application and server, Intel demonstrated up to a 20% improvement in rack density using Node Manager in a recent proof-of-concept with Baidu, a leading search engine [7].
  7. High-performance memory architecture. Cloud computing and other highly scalable Internet services often rely on workloads where it makes more sense to keep large volumes of data in DRAM, close to the CPU, rather than on slower, more distant hard drives. “Memcached” – a distributed caching system used by many leading Internet companies – is but one example (see the sketch after this list). The Intel Xeon 5500 offers several memory-architecture advances over the previous generation: (1) up to 3.5X the memory bandwidth [8], by leveraging an integrated memory controller and the Intel QuickPath Interconnect (QPI); (2) support for a larger memory footprint (144GB versus 128GB); and (3) DIMMs and QPI links that automatically drop to lower power states when not active. For these caching and distributed workloads, where large memory architectures are crucial, the Intel Xeon 5500 offers real advantages.
  8. Perfect when paired with SSDs. Few technologies get datacenter gurus more excited than solid state drives, which can offer impressive performance gains over their rotating hard drive cousins at far lower energy consumption. But with SSDs that can feed 1,000 times more data into the CPU than an HDD – you want a ravenous processing beast handling the traffic. And – you’re catching on to the theme by now – the Xeon 5500 can provide up to 72% better performance using SSDs than even the previous-generation Xeon systems [9]. The Intel Xeon 5500 is truly a perfect engine to complement SSDs.
  9. Ideal for optimized server boards. For cloud infrastructure – where every watt is a pernicious tax – you need more than an extremely efficient processor like the Xeon 5500. You also need an optimized server platform: one stripped of every unneeded feature, configured with world-class energy-efficient components, and designed for reduced airflow to minimize fan use. One such product is an Intel server motherboard codenamed “Willowbrook” – which idles at an impressively low power of under 70W, considering it’s a dual-Xeon 5500 performance rocket [10].
  10. A competitive lever for cloud operators. Lastly, for a service provider scaling out its infrastructure, systems based on the Intel Xeon 5500 could offer a competitive advantage over providers whose servers are two to three years old. Thanks to the performance leaps in Intel server processors over the past few generations, Xeon 5500-based servers can handle the same load as up to three times as many three-year-old dual-core servers [11]. The benefit is clear: delivering the same performance with far fewer servers means a leg up on providers running more antiquated, less efficient infrastructure.
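
Since item 7 calls out memcached, here’s a minimal sketch of the cache-aside pattern that keeps hot data in DRAM. It assumes a memcached server on localhost:11211 and the third-party pymemcache Python client; load_user_from_db is a hypothetical stand-in for a slow, disk-backed lookup – not any real API.

```python
# Cache-aside sketch in the spirit of memcached-backed services.
# Assumes a memcached server on localhost:11211 and the third-party
# pymemcache client (pip install pymemcache).
import json

from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def load_user_from_db(user_id):
    # Hypothetical stand-in for a slow, disk-backed lookup.
    return {"id": user_id, "name": "example"}

def get_user(user_id):
    key = f"user:{user_id}"
    cached = cache.get(key)            # hit: served straight from DRAM
    if cached is not None:
        return json.loads(cached)
    user = load_user_from_db(user_id)  # miss: pay for the slow path once
    cache.set(key, json.dumps(user), expire=300)  # keep hot for 5 minutes
    return user
```

The point of the pattern: repeated reads are served from memory at DRAM speed, which is exactly the kind of workload where the Xeon 5500’s added memory bandwidth pays off.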

If you have made it through this lengthy top 10 list – you should have a better sense of the advantages of Intel’s latest processor for cloud computing environments. Of course, the best way to really see the benefits is to get an Intel Xeon 5500-based system from your preferred vendor and test with your own code.

[1]–[11]: For footnotes, performance background, and legal information, please refer to the attached document.

If you follow the IT industry – you can’t escape the “cloud”. Whether in online articles, industry seminars, or blogs – the hype over cloud computing is everywhere. And don’t expect it to die down in 2009.

 

Yet amidst all the hype – there are still a lot of questions and confusion about the “cloud”. At Intel – we get asked a lot about cloud computing, and one of the top questions is: “Is cloud computing really new?”

 

The answer is not as clear-cut as it may seem.

 

First – what is “cloud computing” anyway? There are many industry definitions – some quite useful, others less so. Some pundits want to label everything “the cloud”, while others craft definitions so intricate and nuanced that very little would qualify as cloud computing.

 

Intel has its own view of the cloud – centered, not surprisingly, on the architecture providing the cloud’s processing, storage, and networking. This “cloud architecture” is characterized by services and data residing in shared, dynamically scalable resource pools. Since so much of the cloud’s capabilities – and its operational success – depends on that architecture, it makes sense to begin the definition there.

 

A cloud architecture can be used in essentially two different ways. A “cloud service” is a commercial offering that delivers applications (e.g., Salesforce CRM) or virtual infrastructure for a fee (e.g., Amazon’s EC2). The second usage model is the “enterprise private cloud” – a cloud architecture for internal use behind the corporate firewall, designed to deliver “IT as a service”.

 

Cloud computing – both internal and external – offers the potential for highly flexible computing and storage resources, provisioned on demand, at a theoretically lower cost than buying, provisioning, and maintaining equivalent fixed capacity.

 

So now that we’re grounded in our terminology… we return to the question of whether the cloud is new or just repackaged concepts from an earlier era of computing.

 

Turns out it’s both: cloud architectures do represent something new, but they build on so many critical foundations of technology and service models that you can’t call the cloud an earth-shattering revolution. It’s an exciting but evolutionary shift in information technology.

 

The rich heritage of cloud computing starts with centralized, shared resource pooling – a concept that dates back to mainframes and the beginning of modern computing.  A key benefit of the mainframe is that significant processing power becomes available to many users of less powerful client systems. In some ways, datacenters in the cloud could offer similar benefits, by providing computing or applications on demand to many thousands of devices.  The difference is that today’s connected cloud clients are more likely to be versatile, powerful devices based on platforms such as Intel’s Centrino, which give users a choice: run software from the cloud when it makes sense, but have the horsepower to run a range of applications (such as video or games) that might not perform well when delivered by the “mainframe in the cloud”.

 

Another contributing technology is virtualization. The ability to abstract hardware and run applications in virtual machines isn’t particularly new – but abstracting entire sets of servers, hard drives, routers, and switches into shared pools is a relatively recent, emerging concept. The vision of cloud computing takes this abstraction a few steps further, adding autonomic, policy-driven resource provisioning and dynamic scalability of applications. A cloud need not leverage a traditional hypervisor / virtual machine architecture to create its abstracted resource pool; a cloud environment may also be deployed with technologies such as Hadoop, enabling applications to run across thousands of compute nodes (see the sketch below). (Side note: if you’re interested in open source cloud environments, check out the OpenCirrus project at www.opencirrus.org – formed through a collaboration between Intel, HP, and Yahoo.)
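
For a feel of the non-hypervisor model, here’s a minimal word-count sketch in the style of Hadoop Streaming, where mappers and reducers are plain programs reading stdin and writing stdout. This is illustrative only – on a real cluster, Hadoop runs many copies of each stage and handles the shuffle-and-sort between them; locally, a shell pipeline stands in for that machinery.

```python
# Word count in the Hadoop Streaming style. Approximate a run locally with:
#   cat input.txt | python wordcount.py map | sort | python wordcount.py reduce
import sys

def mapper():
    # Emit one "word<TAB>1" record per word; the framework's shuffle/sort
    # phase routes all records for the same word to the same reducer.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # Input arrives sorted by key, so counts for each word are contiguous.
    current, count = None, 0
    for line in sys.stdin:
        word, n = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(n)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```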

 

The key point here is that just because something is an abstracted, shared resource doesn’t mean it’s necessarily a cloud. Otherwise a single server running VMware and a handful of IT applications might be considered a cloud. What makes the difference? Primarily, the ability to dynamically and automatically provision resources based on real-time demand (sketched below).
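
As a thought experiment, here’s a deliberately simplified sketch of such a demand-driven provisioning loop. Every name in it (get_cpu_utilization, add_vm, remove_vm) is hypothetical; a real cloud would wire these hooks to live monitoring telemetry and hypervisor or VM-manager APIs.

```python
# Toy demand-driven provisioning loop: grow the pool under load,
# shrink it when demand falls. All hooks below are hypothetical stubs.
import random
import time

vm_count = 2  # current size of the (pretend) resource pool

def get_cpu_utilization() -> float:
    return random.random()  # stand-in for pool-wide average utilization

def add_vm():
    global vm_count
    vm_count += 1
    print(f"scaling up -> {vm_count} VMs")

def remove_vm():
    global vm_count
    vm_count = max(1, vm_count - 1)
    print(f"scaling down -> {vm_count} VMs")

def provisioning_loop(scale_up_at=0.80, scale_down_at=0.30, period_s=60):
    while True:
        load = get_cpu_utilization()
        if load > scale_up_at:
            add_vm()        # demand is high: provision more capacity
        elif load < scale_down_at:
            remove_vm()     # demand is low: release capacity to the pool
        time.sleep(period_s)
```

It’s this policy loop – not the virtualization underneath it – that turns a pile of VMs into something cloud-like.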

 

What about grid computing? Indeed – if you squint – a grid environment looks a lot like what we’ve defined as a cloud. It’s not worth getting into a religious argument over grid versus cloud, as that’s already been done elsewhere in the blogosphere. Grids enable distributed computing across large numbers of systems, so the line between grid and cloud is blurry. In general, cloud architectures tend to have a greater degree of multi-tenancy, usage-based billing, and support for a wider variety of application models.

 

Finally – one of the key foundations of cloud computing isn’t really a technology at all, but rather the “on demand” service model. During the dot-com boom, “application service providers” sprang up as a novel way to host and deliver applications – and they are the direct forefathers of today’s Software as a Service (SaaS) offerings. One way “on demand” continues to evolve is in the granularity of the service and its pricing. You can now buy virtual machines – essentially fractions of servers – by the hour. As metering, provisioning, and billing capabilities get smarter, we’ll be able to buy cloud computing in even smaller bites… paying only for precisely what we need at any given moment.
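
To see what finer-grained metering means in dollars, here’s a toy comparison. The $0.10/hour rate is purely illustrative – a made-up figure, not any provider’s actual price.

```python
# Coarse hourly billing rounds partial hours up; finer-grained metering
# charges only for what actually ran. Rate below is illustrative only.
import math

RATE_PER_HOUR = 0.10

def billed_hourly(minutes_used: int) -> float:
    """Hour-granularity billing: partial hours are rounded up."""
    return math.ceil(minutes_used / 60) * RATE_PER_HOUR

def billed_per_minute(minutes_used: int) -> float:
    """Minute-granularity billing: pay only for minutes consumed."""
    return minutes_used * RATE_PER_HOUR / 60

# A job that runs 20 VMs for 65 minutes each:
print(20 * billed_hourly(65))                # 4.00 -- each VM pays for 2 full hours
print(round(20 * billed_per_minute(65), 2))  # 2.17 -- each VM pays for 65 minutes
```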

 

So to wrap up – the cloud is truly a new way of delivering business and IT services via the Internet, offering the ability to scale dynamically across shared resources in new and easier ways. At the same time, cloud computing builds on many well-known foundations of modern information technology, only a few of which were mentioned here. Perhaps the most interesting part of the cloud’s evolution is how early we are in its development.
