I am writing this while sitting in the departure lounge at Nice Airport, awaiting my flight home after spending the week in Cannes at VMworld EMEA '09, which has just finished. The focus of this year's event was somewhat different from last year's: last year everything was 'Green', this year we were all living in the Cloud, or at least that's the way the keynotes painted the picture. Most of the exhibitors and attendees, however, still seemed to have their feet firmly planted on the ground in the reality of using today's technology.

 

Sticking with the keynotes, last year they were very product and feature centric, whilst this year one got the impression that VMware were providing more of a vision of where they see their software taking the industry - and this was most definitely into the cloud!

 

Another shift was in the client space. Last year VDI was the buzzword and thin clients would solve all our business needs; this year there seemed to be an acknowledgement that the rich client has a place (even if it might be virtualised). So for those PowerPoint junkies amongst us who want all the CPU power they can get at their fingertips, whenever and wherever they are, this is good news. In terms of virtualising clients, one bit of news from the conference was that VMware plan to develop a bare-metal (type 1) hypervisor for clients that takes advantage of Intel's vPro technology to provide, amongst other things, out-of-band management and authentication of the hypervisor.

 

Back into the clouds, 'IT as a Service' was one of the keynote themes, with VDC-OS and vCloud enabling this. For those that didn't attend the event, the keynote videos are here. To support the 'IT as a Service' story SAP presented on their IT infrastructure. Of particular interest was the proposition that in the future cloud computing would become much like the airline industry, with low-cost providers competing with full-service providers, pricing varying by time and demand, aggregators (a.k.a. bucket shops) selling off excess capacity, and resource over-commit becoming a feature of using the cloud.

 

Walking the show floor it was clear that if VMware were in the clouds, most of the rest of the folks in Cannes were facing up to today's reality of deploying technology to enable their businesses. Much of the focus this year seemed to be on storage and backup solutions, with management and networking also being key topics. It also felt like there was a more technical bias amongst the attendees than last year - maybe due to the current economic climate.

 

One parting thought is that whatever else is happening in the IT industry, the momentum behind the virtualisation train continues to grow, and it's something we all need to be taking into consideration when planning our IT strategy.


Team “Virtualization”

Posted by mawrigh1 Feb 24, 2009

Last Sunday concluded the Amgen Tour of California bike race which, for those who don't follow cycling, was a nine-day road race through California covering 780 miles.  The eventual winner, Levi Leipheimer, won by only 36 seconds in overall time over the number two finisher!

 

Now, you may ask, “What does cycling have to do with virtualization?”  Well, many customers believe that the VMM or core virtualization software, in itself, is what “virtualization” means.  It is true that the VMM is the core and most obvious part of virtualization, but all the supporting components around virtualization - management, security, automation, provisioning, reliability, performance, etc. - are what actually allow its users to achieve the ROI they’re expecting and the reduction in TCO from implementing this new paradigm.  If you’re looking for a starting point in determining the ROI of a virtualization implementation with Intel, take a look at Intel’s ROI Estimator (http://www.intel.com/technology/virtualization/technology.htm?iid=tech_vt+tech).

 

When I watched my first bike race, I didn’t get it.  Cycling seemed an individual sport, each rider trying to ride the course, on his own, with the fastest time, in a large group of riders.  Now I realize that what cycling really is, is a team sport. Each team is comprised of a complex network of riders, each with different roles.  Levi’s team, “Team Astana”, like all teams, has a large support staff that you don’t see, comprised of coaches, strategists, mechanics, etc.  Everyone has a role to play in trying to get just one team rider over the finish line the fastest.

 

The supporting components of virtualization that play key roles in providing virtualization’s true value include a network of software and hardware components.  On the hardware side, Intel’s latest 6-core Xeon 7400 CPU improves performance by as much as 50% over previous-generation processors (http://www.intel.com/performance/server/xeon_mp/summary.htm?iid=products_xeon7000+body_benchmarks). It pays to have a fast machine - much like Levi’s high-tech road bike (http://www.intel.com/technology/virtualization/).

 

In future blogs I'd like to try and help answer the following questions:

 

  • How is Intel taking advantage of the advances in virtualization management, and how is this impacting operational efficiencies?  What are your experiences with virtualization management?
  • What are the best strategies/Best Known Methods (BKM’s) for implementing and using management with server virtualization?
  • How can an integrated lifecycle management approach help in our virtualization implementations?
  • What have you seen with the role of automation in reducing costs?

 

I look forward to passing on the BKM’s I am discovering in the areas of virtualization management as I consult with Intel customers around the world... and I may throw in a few additional cycling tidbits, because as you all now know: cycling and virtualization are surprisingly parallel!

 

Mark


"Live From" VMWorld Europe 2009

Posted by whlea Feb 24, 2009

For those of you on tight travel budgets, you're in luck. I'll be blogging here about the VMworld Europe 2009 event. We are on location this week in Cannes, France, where movie stars are everywhere - well, not really, but check out this shot on the side of my hotel:

 

Cannes Riviera01.JPG

 

Day 1:

Ok, now that I have your undivided attention, let's talk about what's happening at VMworld Europe 2009. Paul Maritz, President & CEO of VMware, kicked off the event this morning in the Louis Lumiere Grand Auditorium. Seems this is the same spot where the stars gather for the annual Cannes Film Festival. Anyway, nice place, and Mr. Maritz started off talking about where virtualization technology has been and where it's going in 2009. First, the problem: rising complexity and tight IT budgets....

 

Keynote01.JPG

 

Next, Mr. Maritz described for us a new product from VMware, vSphere, which addresses the overall Datacenter cloud computing foundation....

 

Keynote02.JPG

 

Looking ahead to 2009, VMware sees vSphere as the Virtual Datacenter OS, a foundation for your internal & external clouds...

 

Keynote03.JPG

 

If you want to check out the full keynote, click here: http://www.vmworld.com/community/conferences/europe2009/agenda/keynotes/1. Check back for more updates on VMworld Europe 2009......

 

Day 2:

Ok, so Day 2 is wrapping up, and with all the new capabilities of ESX 4.0 and vSphere, many will want to talk about virtualizing apps that previously were in the "It's too complex to virtualize" category. With Intel continuing to innovate on new hardware virtualization features and next-generation CPU architecture around the corner, the question is "Can all applications be virtualized?". Check out Jim Blakely's blog here: http://www.vmworld.com/thread/2490 inside the Intel Virtual Booth on VMworld.com. I'm sure Jim would take the challenge if you think you have an application that is too complex or traditionally un-virtualizable.

 

What's really amazing about VMworld is how open the discussions are and how much VMware wants to share with the industry and the eco-system, as we like to call it. Today's schedule was kicked off by Dr. Steve Herrod, CTO and Sr. VP of R&D at VMware; you can view his keynote here: http://www.vmworld.com/community/conferences/europe2009/agenda/keynotes/2.

 

The Intel booth was busy today again with chalk-talk sessions from Microsoft, IBM, Sun, & VMware. I'll be posting these sessions as soon as I get a better upload connection. Here's one photo for those who are curious how the Intel booth looks...

 

SANY0043.JPG

 

I'll be back tomorrow with more from VMworld Europe 2009, stay tuned...

 

Let’s face it; it’s getting harder to measure server density in rack units, and measuring by compute threads in a rack isn’t getting any easier with core/thread counts increasing year over year.  I still remember from 12 years ago when Intel was acquiring companies who were really good at piecing together single-core multi-processor systems, and those systems were literally hanging from engine hoists (for demo purposes) because they were so large… I believe they had eight Intel Pentium Pro processors and 128MB of RAM. In comparison, today’s netbooks have more than 4 times that amount of memory in a base configuration.

Modern server micro-architectures have seen such a large increase in transistor counts alone that it’s hard to convey the exponential growth in the complexity of these systems. Power must still be consumed, but the same amount of power can now be distributed across several cores and platforms, which is more power efficient; it also adds more complexity as the number of nodes increases. But just because you have more nodes doesn’t mean you can’t manage their efficiency.

David Ott (from the Intel Software Services Group) presents many of the provisioning/power/manageability problems at hand in the video below (5m16s), and explains how Intel is providing the 'touch points' to manage server platforms:

http://software.intel.com/media/videos/2/1/8/a/0/a/e/218a0aefd1d1a4be65601cc6ddc1520e_player.jpg

 

With the upcoming Intel Xeon 5500 Series Processors, not only do you get a high-performing platform; in typical Intel fashion, it’s also more power-efficient, with the capability to self-throttle power usage via managed P-states per node, or to be managed via policies by group, time, etc.  Power management for servers isn’t new, but the way Intel is doing it is a huge leap ahead in manageability at the node level.
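Node Manager itself is an out-of-band capability exposed through the platform's management controller, but the OS-visible side of P-state control is easy to inspect. Here is a minimal sketch, assuming a Linux host with the standard cpufreq sysfs interface enabled; it only reads OS-level frequency scaling settings and is not an Intel Node Manager API:

# Minimal sketch: inspect per-core frequency scaling (P-state) settings on a
# Linux host via the standard cpufreq sysfs interface. This shows only the
# OS-visible side of power management; Intel Node Manager operates
# out-of-band through the BMC and is not touched here.
import glob
import os

def read_value(path):
    """Return the stripped contents of a sysfs file, or None if unreadable."""
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return None

def cpufreq_summary():
    summary = {}
    for cpu_dir in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*")):
        cpu = os.path.basename(cpu_dir)
        freq_dir = os.path.join(cpu_dir, "cpufreq")
        summary[cpu] = {
            "governor": read_value(os.path.join(freq_dir, "scaling_governor")),
            "cur_khz": read_value(os.path.join(freq_dir, "scaling_cur_freq")),
            "min_khz": read_value(os.path.join(freq_dir, "scaling_min_freq")),
            "max_khz": read_value(os.path.join(freq_dir, "scaling_max_freq")),
        }
    return summary

if __name__ == "__main__":
    for cpu, info in cpufreq_summary().items():
        print(cpu, info)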

So I ask:

  • What manageability tools are you using for your enterprise servers today?
  • Is Intel Node Manager on your (or your OEM's) roadmap to gather information on a ‘per server’ basis?
  • Would more discrete information enable you to run your datacenter more efficiently?
  • What manageability items do you struggle with in your own datacenter, and what would you like to see in future platforms?

If power manageability is new to you, I highly suggest you check out Intel Dynamic Power Datacenter Manager, and if you're running a Linux-based server, please check out http://www.lesswatts.org to ensure you have the latest ACPI-compliant kernel.

And as a fun exit, here’s a video that we shot in one of our labs – further strengthening the need for virtualization

(and more importantly – the need for virtualized networks!)

 

Only a few years ago, customers seldom considered energy efficiency when buying servers. Today, server energy efficiency is often one of the key purchase criteria, and for some customers, energy-efficient performance is the #1 criterion. Going forward, it is expected that the majority of people will use energy-efficient performance (sometimes referred to as performance/watt) when evaluating servers.

From a customer point of view, the request is simple: "I want both high performance and reduced power consumption…at the same time." From a product design viewpoint, the "opportunity" to reduce power while still improving performance comes with some unique tradeoffs that are often complex. How much performance is needed? How much can/should power consumption be reduced? If power consumption is reduced, what impact will that have on performance? Etc, etc.

Processor design cycles are quite long and are started many years before a product actually comes to market. Because of the long design cycle, there is a comprehensive process at the beginning to determine product features based on expected market needs. At the time the Nehalem architecture was being developed, customers were just starting to evaluate servers based on energy-efficient performance, but the Nehalem processor design team decided to make energy efficiency a fundamental "feature" of the processor. The good news is the team correctly predicted the market requirements with the upcoming Intel® Xeon® 5500 Processors (a.k.a. Nehalem). Servers based on Nehalem processors are expected to provide customers with exactly what they have been requesting…"knock your socks off" performance along with reduced power consumption.

As Wayne Gretzky once famously said: “A good hockey player plays where the puck is. A great hockey player plays where the puck is going to be.” With Nehalem, Intel is definitely skating to where the puck will be.

 

So are you among the approximately 40% of data center managers that are projected to run out of power or cooling capacity in the next 12-24 months[1] and need new options to deal with ever increasing demand for compute capacity? In my discussions with IT professionals, it’s clear that a “business as usual” approach to the design and operation of the data center is no longer sufficient.

 

In the coming weeks, you will see a number of bloggers write about using Intel Xeon Processor 5500 (Nehalem) servers to refresh the data center – a concept first discussed on this site back in late 2007 - to more efficiently use limited power, cooling and floor space resources in the data center. Today, I want to touch on another means of addressing these issues at hand - using instrumentation as a source of data and controls to better monitor and manage the data center.

 

Individual pieces of the data & control picture have steadily come into the mainstream via instrumentation of individual server components. Think processors that allow power & frequency to be modulated, power supplies that report system-level power consumption, memory that reports its temperature, and fans that can scale RPMs and power to the actual air flow requirements. Really cool capabilities, but these somewhat fragmented sources of data and control don’t provide the capability to manage at the rack or data center level. The challenge at hand is to take all of these individual points of component instrumentation and develop system and data center level capabilities - what I call extended instrumentation - to provide the unique and innovative tools that data center managers need.

 

One of the more exciting extended instrumentation capabilities that has evolved is power capping. Power limits, or caps, defined and communicated by console management software are enforced by system-level functionality, making it possible to limit system power in a dynamic fashion. Applications of power capping range from increasing performance density, to temporarily shedding compute load to ride through power or thermal events in the datacenter, to enabling power-based dynamic resource balancing. Power capping gives IT managers a tool to squeeze additional compute performance out of their existing data center - making more efficient use of their limited and valuable power, cooling and floor space resources to lower costs, improve availability and extend the life of the current data center.
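To make the idea concrete, here is a toy sketch of the control loop a management console might run, comparing measured node power against a rack budget and scaling per-node limits down when the budget is exceeded. The read_node_power and set_power_limit functions are hypothetical placeholders for whatever instrumentation interface (IPMI, a vendor API, etc.) a real deployment would use:

# Toy sketch of a power-capping control loop. read_node_power() and
# set_power_limit() are hypothetical placeholders for a real instrumentation
# interface (e.g. IPMI or a vendor management API); the loop just shows the
# shape of the policy, not a production controller.
import random
import time

RACK_CAP_WATTS = 4000          # total budget for the rack
NODE_FLOOR_WATTS = 150         # never throttle a node below this
NODES = ["node%02d" % i for i in range(1, 11)]

def read_node_power(node):
    # Placeholder: pretend to read measured power from the node's BMC.
    return random.uniform(250, 450)

def set_power_limit(node, watts):
    # Placeholder: push a power limit down to the node.
    print(f"{node}: cap set to {watts:.0f} W")

def enforce_rack_cap():
    readings = {node: read_node_power(node) for node in NODES}
    total = sum(readings.values())
    if total <= RACK_CAP_WATTS:
        return  # under budget, nothing to do
    # Over budget: scale every node's limit down proportionally,
    # but never below the floor.
    scale = RACK_CAP_WATTS / total
    for node, watts in readings.items():
        set_power_limit(node, max(NODE_FLOOR_WATTS, watts * scale))

if __name__ == "__main__":
    for _ in range(3):
        enforce_rack_cap()
        time.sleep(1)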

 

Are you evaluating this capability? Are you using it already? I’m interested in discussing your thoughts on instrumentation and power capping.

 

[1] http://www.infoworld.com/article/08/03/26/Datacenters-heading-for-cash-crunch_1.html

Virtualization 1.0 is yesterday’s news; the days of virtualization being used only as a tactical tool to drive consolidation and higher system utilization are quickly ending. For the most part, companies have figured out how to get improved utilization, and are using server virtualization in a wide range of usage models across development, testing and some rather interesting production/mission-critical scenarios.  Its use is gradually maturing from simple partitioning and encapsulation to leveraging the mobility of virtual machines to improve the management and operations of IT environments. This is driving a change in deployment models for virtualization, from the typical scale-up approach (SMP servers with large memory) to a scale-out model.

Virtualization 2.0 includes a host of new use cases (shouldn’t be surprising to anyone) that include:

  • Load balancing for SLA management
  • Power optimization
  • High availability (no downtime)
  • Disaster recovery and business continuity
  • Hosted clients
  • SOA & utility computing

I see three key foundational tenets as the underpinnings for these usages.  First are the “abstraction” and “convergence” of compute servers, storage and networks. This has been happening for a while, but Virtualization 2.0++ is driving (and will continue to drive) a seismic rethink in how data centers are architected: the data center becomes a fungible pool of infrastructure resources for the wide variety of services that IT provides to run the business.  I will get deeper into the implications of this for IT operations in a follow-on blog, but will leave you with this thought: the new control point in the data center, both architecturally and operationally, will be the integration of compute, storage and network virtualization architectures.  Key industry players like IBM, HP, Cisco, EMC, VMware and Microsoft are introducing integrated solution architectures targeted at positioning themselves as the first vendor of choice for this emerging direction.  This foundational tenet, coupled with the merits of Service-Oriented Architectures (SOA), is providing an infrastructure for ‘Cloud Computing’.

The second core tenet is the mobility of virtual machines - the ability to migrate ‘encapsulated’ virtual machines across this abstracted infrastructure for the best performance, operational cost and SLA management.  They are no longer tied to a server or a set of servers. In some cases they are not even tied to a datacenter; hybrid models are emerging where these VMs would execute in the ‘enterprise’ data center or on external clouds - whichever is the optimal place for the best TCO and SLA management.  (Yes, yes, there are security, compliance, accounting and performance concerns… I agree.)

The third core aspect is manageability.  The abstraction and the mobility, coupled with IT’s job of ensuring security, reliability and compliance, bring a totally new set of requirements for manageability.

If done right, the benefits of Virtualization 2.0 (and 2.0++) to IT shops would come in the form of reduced administrative costs, improved productivity even as demand grows, reduced energy and cooling costs, and so on.  However, there are quite a few challenges with the adoption of Virtualization 2.0.  Let us briefly look at these.

Challenges with Virtualization 2.0

1.      There is a significant challenge in the management of large-scale virtual infrastructures. There are no clear boundaries and responsibilities among the network, storage and datacenter management teams.  The emphasis on monitoring and management in Virtualization 2.0 is shifting from virtual machine (VM) management to service management; i.e., knowing how a business service is performing and which components of the data center (network, server, VM, applications) are working properly and which are not. Hence, it's no longer sufficient to just monitor the uptime or resource usage levels of virtual machines and physical servers and conclude that the entire IT infrastructure is working right.  More granular monitoring and management of resources will be needed to provide precise QoS and SLA management.

2.      VM Mobility – The mobility of virtual machines puts requirements on the underlying server CPU architectures, and has challenges with networks and storage.  Such mobility occurs via either a cold migration, which simply copies the virtual machine and restarts a copy somewhere else, or a live migration, which moves a live, running virtual machine while maintaining its state.  There are clear cases where cold migration is sufficient, but the flexibility and agility that are inherent in the Virtualization 2.0 use models require the ‘live migration’ of VMs.

·         VM Mobility and the ‘Compatible CPU Architecture’ requirements: Successful migration relies on compatibility between the processors of the host servers within a cluster. For live migration to take place, the source and destination servers must be in the same cluster and must have processors that expose the same instruction set. In the past, it has not been possible to mix servers based on different processor generations, each of which supports a different instruction set, within the same cluster without sacrificing the ability to live migrate VMs across those hosts. As a result, IT organizations have needed to create separate clusters for different server generations. This has limited our ability to provide an agile data center environment because it creates islands of compute capacity, resulting in data center fragmentation.  Intel’s VT FlexMigration assist, together with VMware’s Enhanced VMotion, provides a solution. These technologies are designed to allow IT to maximize flexibility by creating a single pool of compute and memory resources using multiple generations of Intel processor-based servers within the same cluster, reducing the number of pools and increasing the efficiency and utilization of servers (a minimal sketch of the ‘common baseline’ idea follows this list).

·         VM Mobility & networks: Today, when virtual machines move on the virtual infrastructure, their network properties and policies are not retained.  Connection state, ACLs, port security properties, ACL redirects, QoS markings, etc. are lost as these VMs move across hosts.  Technologies like the VMware distributed switch and Cisco’s Nexus 1000v are specifically targeted at addressing the ‘network and security’ aspects of VM mobility.

3.      Licensing in virtual environments: Licensing rules for applications, development tools, data management tools and operating systems often make a completely virtual environment more costly than the organization expects.  Almost all ISVs are looking at ‘virtualization-friendly’ licensing models, but they are far from being there.  Example: with Oracle database servers, if you have a 16-core server as your host, it doesn’t matter that your database VM uses only 4 vCPUs; you would still need the license for 16 cores.  If you were to ‘live migrate’ the VM, you would need the license on each of the hosts… This gets prohibitively expensive and impractical.

4.      10G networks and converged fabrics: The compute power of servers has increased dramatically, and with the advent of 8-core processors the bottleneck clearly moves out of the server and onto network and storage bandwidth and throughput.  Virtualization 2.0 will require the consolidation of network traffic and will also increase the need for more bandwidth to the server, both of which will be possible as enterprises make the move to converge and consolidate data, storage, and inter-process traffic on 10GbE networks.  10GbE and converged networks need new switches and access cards, and also a rethink of how applications view network I/O.

5.      Security and isolation guarantees: The hosting of multiple ‘services’ on an abstracted, virtualized infrastructure has very specific needs around security and isolation, multi-tenancy, compliance and audit requirements.  In addition to providing these on a single server (for a given service), the infrastructure has to guarantee them across the infrastructure - no matter on which server (and where) the service and data reside and execute, they need to be secure and isolated.
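As a rough illustration of the ‘common baseline’ idea behind VT FlexMigration and Enhanced VMotion (which actually mask CPUID feature bits in the processor and hypervisor, not in a script), the sketch below simply intersects the feature flags reported by a set of hosts; the flag sets are invented examples, though on a Linux host they could be parsed from /proc/cpuinfo:

# Illustration only: compute a common CPU feature baseline across hosts, the
# general idea behind Enhanced VMotion / VT FlexMigration (which mask CPUID
# bits in hardware/hypervisor rather than in the OS). The flag sets below are
# invented examples.
def common_baseline(host_flags):
    """Intersect per-host CPU feature sets into one migration baseline."""
    baseline = None
    for flags in host_flags.values():
        baseline = set(flags) if baseline is None else baseline & set(flags)
    return baseline or set()

def features_lost(host_flags, baseline):
    """Report which features each host gives up to join the baseline."""
    return {host: sorted(set(flags) - baseline)
            for host, flags in host_flags.items()}

if __name__ == "__main__":
    cluster = {
        "esx-old": {"sse2", "sse3", "vmx"},
        "esx-mid": {"sse2", "sse3", "ssse3", "sse4_1", "vmx"},
        "esx-new": {"sse2", "sse3", "ssse3", "sse4_1", "sse4_2", "vmx"},
    }
    base = common_baseline(cluster)
    print("baseline:", sorted(base))
    print("masked per host:", features_lost(cluster, base))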

In conclusion, Virtualization 2.0 will have a dramatic impact on the architecture of the data center, and also on IT architectures and operations.  IT shops will use virtualization for administrative cost reduction, better resource allocation, and more flexibility in a mobile world.  Coupled with Service-Oriented Architectures, the promise of true service-oriented/utility computing might be closer than it has ever been.

Would love to hear your thoughts and views on this.

Last week I read Shannon’s blog about an “unmarketable server” - and I got a real and personal taste of the power of this new product myself. I had the opportunity to interview two customers for a video that will be available when we introduce this product in the coming weeks. These customers had access to early hardware and shared their testing results and perspectives on this new product. The information was eye-opening for me.

As I flew back home on Saturday, I was reminded of how I felt as a kid getting ready for Christmas. When I was young, I couldn’t wait for Christmas morning so I could open up my presents and play with my new toys all day long. That is the way I feel with the new Intel Xeon processor 5500 series (codename Nehalem) about to launch later this quarter – I can’t wait.

In short (and I have to save the details for the video because I’m required to by non-disclosure), these customers are moving forward with plans to invest in new server technology because of the dramatic performance and energy efficiency gains that a technology refresh can provide them. Both of these customers are seeking a competitive advantage in their respective businesses and despite the economy, they see prioritized investment in new server technology as a means to enhance their services, reduce costs, streamline efficiency and better support their customers.

When I asked the question about economic conditions and the relative importance of buying new technology today for their business – the customers did not blink – investing in new server technology and refreshing aging servers is of critical importance to their business.

It was clear to me that these customers are looking forward to an early Christmas this year with the introduction of Xeon 5500 servers.

Stay tuned to Intel’s online server community www.intel.com/server for more information.

Sometimes when I come across a story like this, it reminds me how fortunate I am to be in an industry where such amazing technologies continue to emerge. The video below was captured last week at the Oracle Tech Day event at Oracle Headquarters, where I met up with Hamid Djam, Product Manager for the Exadata product, and Bob Moore, Group Manager at HP for Strategic Marketing & Industry Standard Servers. In this video they discuss how Oracle & HP teamed up with Intel Quad-Core processors to deliver "The World's Fastest Database" machine. Check out the video, this is cool stuff.

 

Thanks for viewing!

Having bounced from Engineering to Sales to Marketing in my career, I have found some unique interactions between those organizations along the way. But I have recently come across something for the first time that seems particularly noteworthy: many of the internal discussions I am having about our upcoming products are largely devoid of the usual marketing fluff. You could argue that this blog and my previous statement are themselves marketing, but oh well.  I am also not saying that I don’t still visit an end user who is having trouble picking out a server topology or an infrastructure to virtualize on, or who has datacenter challenges or power constraints, and provide them with advanced product info.  All of that still happens regularly and I expect it will continue for a long time. Rather, I am referring to the solutions we are starting to propose for those problems.

I am sure everyone in marketing can remember some product that they were responsible for that kept them up nights. The feature set wasn’t quite right, the price was out of whack, competition was breathing down their necks or competition was the incumbent in a certain area. Those are tough days, and you can only hope that the future products in the hopper are leadership products and that there is balance to your present-day effort. For a while I have seen segments where products are “unmarketable”. You can pretty much leave the marketing guys at the door when you walk into a High Performance Computing account, Financial Services account or Internet portal datacenter. They want hardware, and you can take your PowerPoint slides and “shove them $#@^%.” That may be a direct quote. :)

Still, those were certain segments. They did their own benchmarking and made their decisions based on the exact workloads and configurations they run. Many enterprises, datacenters and small/medium businesses rely on third-party data, benchmarks or word of mouth to make their purchase decisions. We have been talking to them under non-disclosure lately about our next-generation Nehalem-based products, and the responses have been rather unique. In short, Nehalem appears to be “unmarketable”. I find myself pretty much trying not to mess things up when talking about the product. There have been some early public discussions about the performance, and the message boards seem to be taking a keen interest in how the platform looks. The launch will happen later in Q1, and I for one am looking forward to seeing what exciting new things companies are going to be doing with them.

I recently read articles about Disney*’s Q1’09 earnings.  Like everyone else, they are being affected by the current economic conditions.  No surprise there. What I found interesting is that they are actually seeing positive financial benefits in virtual worlds (VWs).  Club Penguin* was noted as already or nearly profitable, and Disney is planning to expand and develop their virtual world business over the next few years.  A note to Disney: you’ve got a winner - my 4-year-old son nearly jumped out of his chair after watching the World of Cars* trailer and can’t wait to play it.

If you have any young children, then you are probably going through or have experienced the Webkinz* craze: those plush pets you can buy with secret codes that allow kids to create the same pet in the virtual Webkinz world.  They can care for their pet, buy stuff for their pet and play games with other virtual pet owners.  My daughter, through various sources, has accumulated about 10, my son has 2, and their friends many more.

These are just two examples that are often regarded as brilliant business models, and of how what we inside Intel term Immersive Connected Experiences (ICE) are changing the virtual world as well as the actual world.  Initial examples of ICE fall into two main categories: simulated environments such as virtual worlds, online multiplayer games and 3-D movies, and augmented reality, where images from the real world are combined with digital information to provide an enhanced view of the world around us.

With this kind of success, there will surely be a flood of interest from entrepreneurs and venture capitalists in the space.  Does that next big hit already exist?  I don’t think so.  Is there room for more?  I believe so.  At Intel, we are investing in research, technologies, hardware and software products, and initiatives to spur innovation in this space so that it will move creators beyond today’s nascent virtual worlds and online games.  Our goal is to remove key technical barriers to adoption through hardware and software innovations that improve end-user experiences, as well as the development of standards that improve interoperability.  A few examples of Intel’s efforts are in:

-Visual Computing technologies for client and server platforms including tools and developments services

-Immersive Connected Experiences and Tera-scale research

-Collaboration with the SuperComputing ’09 conference to create a new virtual world, ScienceSim, for immersive science

-OpenSim open source and COLLADA  standards group participation

-Sponsoring the Virtual Worlds Roadmap SIG which seeks to increase the success rate of virtual world-based ventures

So, what would VWs like Webkinz or Disney’s look like a few years from now?  Can somebody come up with something better than the seemingly and deceptively simple Webkinz or Club Penguin?  What opportunities would venture capitalists see in virtual worlds - for kids or adults, other users or uses?  Some things are certain: computing technology will evolve, broadband connectivity will become ubiquitous, and users will become more sophisticated. This is good news for the VW space and those hoping to invest and profit from it.

Jimmy Leon

*Other names and brands may be claimed as the property of others.

Every morning we hear about the staggering job losses mounting up in businesses around the world. Hundreds of thousands of jobs have been lost so far. Unfortunately, no one seems immune from the impacts of this recession. In fact, the recession is now impacting the data center and a new segment of the work force is at risk – your servers!

Would you keep an employee who worked less than 4 hours per day, over-spent valuable resources and was someone you had to manage constantly? Obviously, the answer is NO! That is the situation today with installed single-core servers.  Aging servers are a perfect target for downsizing in this tough economy. Industry analyst IDC estimates that there are approximately 30 million servers installed in businesses around the world, and about 40% of those use single-core processors (4 years old or older).

Let’s look at the 2008 performance review of these single core servers.

  • Excessive Spending Habits: For the performance they deliver, these servers take up too much space and over-consume power and cooling resources.

  • Lazy Work Habits: A typical non-virtualized server runs at only 10-15% utilization - meaning it sits idle for the majority of your work day.

  • Needs Excessive Management: Aging servers require more maintenance. Extended warranties are expensive (an estimated $600-1200 per server depending on the type of server), and if you don’t extend the warranty, the risk of downtime is on IT and the business. While the costs to maintain a server vary widely, during a recent discussion with Forrester Research they indicated that an aging server can cost up to 3x the cost of an in-warranty server (under standard 3-yr manufacturer support).

Continuing to use these old servers is not a wise business strategy. But if you fire your existing infrastructure, who can you hire to do the work? Simple: you hire a small number of new multi-core servers running virtualization to replace a large number of installed servers.

But is replacing them worth the effort… I mean, why fix what ain’t broke? About 2/3 of IT’s budget is consumed maintaining existing infrastructure (source: Gartner), leaving a measly 1/3 for innovation and value-add business capability. So in this recession, unless you are focused on reducing OpEx, the IT budget that you are cutting is likely restricting your business competitiveness and new service delivery - the value of innovation.

Replacing old servers with new ones offers both cost and productivity advantages for IT, in addition to improved services and competitiveness for the business. Read some of the success stories from businesses in 2008, where proactive IT investment commonly resulted in 30-40% reductions in total costs, enhanced business services, improved competitiveness and rapid financial ROI. In fact, the business ROI of replacing an old server with new is staggering, and in many cases the new server can pay for itself in less than 12 months by reducing power/cooling costs, avoiding new construction, simplifying and reducing maintenance costs, reducing application and OS licensing costs and more.
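As a back-of-the-envelope illustration of that payback math, here is a small sketch; every number in it is an invented example rather than an Intel figure, so plug in your own consolidation ratio, energy price and warranty costs:

# Back-of-the-envelope payback sketch for replacing old single-core servers
# with fewer virtualized multi-core servers. Every number here is an invented
# example for illustration; substitute your own costs.
OLD_SERVERS          = 9          # servers being retired
NEW_SERVERS          = 1          # consolidation target (9:1 ratio)
NEW_SERVER_PRICE     = 6000.0     # purchase cost per new server ($)
OLD_POWER_W          = 400.0      # avg wall power per old server (W)
NEW_POWER_W          = 350.0      # avg wall power per new server (W)
ENERGY_COST_KWH      = 0.10       # $/kWh, doubled below for cooling overhead
OLD_WARRANTY_PER_YR  = 900.0      # extended-warranty cost per old server ($)

HOURS_PER_YEAR = 24 * 365

def annual_energy_cost(watts, servers):
    # Factor of 2 roughly accounts for cooling on top of the IT load.
    return watts / 1000.0 * HOURS_PER_YEAR * ENERGY_COST_KWH * 2 * servers

def payback_months():
    capex = NEW_SERVERS * NEW_SERVER_PRICE
    yearly_savings = (
        annual_energy_cost(OLD_POWER_W, OLD_SERVERS)
        - annual_energy_cost(NEW_POWER_W, NEW_SERVERS)
        + OLD_WARRANTY_PER_YR * OLD_SERVERS
    )
    return 12.0 * capex / yearly_savings

if __name__ == "__main__":
    print(f"estimated payback: {payback_months():.1f} months")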

What characteristics should you look for in a new server hire? (to maximize this savings and accelerate ROI)

  • Versatile Performance. Consider a wide range of benchmarks and application usages when evaluating the capability of the server you intend to hire.  Servers hired today for a specific task may well get re-purposed over their lifetime.  Also, if your workload is specialized and data-demanding (like database / enterprise resource planning / business intelligence), consider a specialized server with unique skills, like larger compute, I/O and memory scalability, to handle these larger workloads with increased reliability and headroom for peak loads.

  • Energy Efficiency. Newer multi-core servers deliver nearly 10x the performance per watt of single-core servers. Use the SPECpower benchmark to assess which servers are the most energy efficient.

  • Virtualization. When virtualizing servers, hire servers that can support robust consolidation ratios and are built for flexibility and versatility. Many new hardware-assist technologies help boost the ability to migrate virtual machines (application/OS combinations) from one server to another.

  • Standardization. Unlike hiring employees, where diversity is valued and encouraged, using a smaller number of reference designs in your IT environment can lower operating and support costs.

A final consideration for hiring new servers is total cost of ownership. Just like hiring people, you must consider the incidental or hidden costs behind the salary and sign-on bonus (do these still exist today?). The average life of a server is 4 years. Buying an inexpensive server for your needs today may optimize today’s budget but may end up costing you over the long run in software licensing and power/cooling. Intel IT recently did an ROI analysis on buying higher-end processors and found that doing so reduced TCO significantly - by doing more with less.

Last year, Intel IT fired about 20,000 servers, and more are expected to receive pink slips in 2009 - read more about this in the 2008 Intel IT Annual Performance Report.

If your goals are to lower costs, improve services and boost revenue while increasing business competitiveness, then replacing aging server infrastructure is an Intelligent Investment. Learn more at www.intel.com/go/xeon

Are your single core servers at risk of losing their jobs?  If not, they should be!

So the Question is ... Will You Cut IT Costs and Boost Business Competitiveness by downsizing your Server Infrastructure in 2009?

Chris

The current economic environment is unprecedented in our lifetime and is having multiple impacts on enterprise decision making. IT spending is under severe scrutiny, with IT budget reductions forecast throughout most enterprises in ’09. Even with reduced budgets, IT needs to continue to improve business productivity and competitiveness. So what can you do to manage all these conflicting conditions?

Maybe this type of environment represents an opportunity to make some changes with respect to your IT policy. Could this be a good time to simplify and standardize your IT environment by looking at the broader range of choices that are now available? These choices may not have existed in the past because some of your decision criteria for hardware or software were not being met. Hardware and software evolve at a rapid pace, and the capabilities available to meet your needs are significantly different today than they were 5-7 years ago when you made previous decisions.

Equipment nearing the end of depreciation cycles or lease contracts offers another opportunity to look at the cost and performance of your existing architectures vs. other architectures that are available today. In my previous blog I shared some thoughts on the performance and pricing of RISC systems vs. x86-based platforms. There are significant savings that can be made by choosing x86 hardware without trading off on your performance needs. Selecting x86 hardware could enable you to execute your IT refresh and replacement strategy within a reduced CapEx budget environment. Sometimes it seems that deferring a purchase may be the prudent thing to do, but at some point you will have to replace these systems to meet business productivity requirements. In the meantime you will have to spend incremental budget paying extra dollars for maintenance and support of systems that you had planned to replace, and you may also fail to meet the demands placed on you to support your business needs. I also read recently that under the proposed US stimulus package there may be some provisions for accelerating depreciation on new equipment purchases. This could be another factor to consider in terms of which option will cost you most in the long run.

One other thought I had was the ability to re-allocate dollars within your overall TCO to spend on other aspects of your solution needs. If you could save money on the hardware cost, would it free up dollars for you to spend on the overall solution? For example, could you then afford the software license costs to support more users in your ERP environment?

Consolidating older-generation RISC-based platforms onto current x86-based platforms could be another way to offset some of the costs associated with maintaining and supporting your RISC environment.  I read a paper recently published by Dell in which they talked about the performance difference between V440 SPARC servers and today’s R900 systems, with the R900 being 14 times as fast as the V440. This led me to conclude that I could consolidate a distributed workload from a number of older V440s and run that workload on one system. This sounds like a pretty good deal to me, as I can save some space in my datacenter, save some energy costs, and probably get some savings on software license and support costs.

Another factor to consider is the whole issue of payback. In the current environment everyone is being asked to justify the payback on their investment to be 12 months or less. What if I said that you could get a 9-month payback on your investment in a new hardware platform purely on the basis of power & cooling savings and lower OS maintenance costs? Would these types of savings be enough to justify your investment in consolidating multiple legacy RISC servers onto a current x86 platform? Well, that type of payback is attainable, and there are other savings, like software license costs and administrator and operator costs, that are not even included in the calculations.

Ok, so the counter to my argument is that it is hard to move a workload from RISC to x86: the savings I get from moving will not offset the money I spend to move. It is a fair argument, but there are customers who have done the transition and saved some significant money by doing so. Avis in Europe is one example that comes to mind, where they talk about reducing their TCO by 50% by moving from a RISC to an x86 platform.

One of the other comments I often hear relates to it being technically hard to move a solution that is running on UNIX/RISC to an x86 offering. I agree that you are moving from one architecture to another and there are some challenges in doing so, but there are resources out there to help you. Principled Technologies wrote two reports recently that discussed how you could move your Oracle database to Solaris or Linux running on Xeon. Don’t worry, these were not marketing papers; they actually did this migration in a real lab environment and documented the technical ‘how to’.

Ok, so these are some of my thoughts - let me know what you think.

Here’s the 8th follow-up post in my 10 Habits of Great Server Performance Tuners series. This one focuses on the eighth habit: Use the Right Tool for the Job.

IMG_2361_noExif.JPG

There are many different reasons why people undertake performance analysis projects. You could be looking to fine-tune your compiler-generated assembly code for a particular CPU, trying to find I/O bottlenecks in a distributed server application, or trying to optimize power performance on a virtual server, just to name a few. As I discussed in habit 2, there are also different levels where you can focus your investigation - mainly the system, application, and macro- or micro-architecture levels.

It can be overwhelming to think of all the different ways to collect and analyze data and to try to figure out which methods apply to your particular situation. Luckily there are tools out there to fill most needs. Here are some of the things you should consider when trying to find the tool(s) that are right for you.

  1. Environment – Many tools work only in specific environments. Think about your needs – are you going to be performing analysis in a Windows or Linux environment, or both? If you are analyzing a particular application, is it compiled code, Java*, or .NET* based? Is the application parallel? Are you running in a virtual environment?
  2. Layer – Will you be analyzing at the system, application, or micro-architecture level, or all three? At the system level, you are focusing primarily on things external to the processor – disk drives, networks, memory, etc. At the application level you are normally focused on optimizing a particular application. At the micro-architecture level you are interested in tuning how code is executed on a particular processor’s pipeline. Each of these necessitates a different approach.
  3. Software/Hardware Focus – Finally consider whether you will mainly be tuning the software or the hardware (platform and peripherals) or both. If you plan to do code optimization, you will need a tool with a development focus.
  4. Sampling/Instrumentation - For software optimization tools in particular, there are two main methods used to collect data. Sampling tools periodically gather information from the O/S or the processor on particular events. Sampling tools generally have low overhead, meaning they don’t significantly increase the runtime of the application(s) being analyzed. Instrumentation tools add code to a binary in order to monitor things like function calls, time spent in particular routines, synchronization primitives used, objects accessed, etc. Instrumentation has a higher overhead, but can generally tell you more about the internals of your application (see the small sketch after this list for the flavor of the difference).
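As a tiny illustration of the instrumentation side, Python's built-in cProfile module deterministically records every function call in a toy workload; a sampling profiler such as the VTune analyzer would instead interrupt the running program periodically and attribute time statistically. The workload below is just a made-up example:

# Minimal illustration of instrumentation-style (deterministic) profiling
# using Python's built-in cProfile module. A sampling profiler would collect
# periodic snapshots instead of recording every call.
import cProfile
import pstats

def busy_work(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

def main():
    for _ in range(50):
        busy_work(100_000)

if __name__ == "__main__":
    profiler = cProfile.Profile()
    profiler.enable()
    main()
    profiler.disable()
    # Print the five most expensive functions by cumulative time.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)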

After determining your specific needs, take a look at the tools out there (you might start with the lists available on Wikipedia or in HP’s Multicore Toolkit). Of course, I recommend you also check out the Intel® Software Development Products. There are several specifically for performance analysis:

  • Intel® VTune™ Performance Analyzer works in both Windows* and Linux* environments and provides both sampling and an instrumented call graph. The sampling functionality can be used to perform analysis at all levels – system, application, and micro-architecture. It is multi-core aware, supports Java* and .NET* and also allows developers to identify hot functions and pin-point lines of code causing issues.
  • Intel® Thread Profiler is supported on Windows*. It is a developer-focused tool that uses instrumentation to profile a threaded application. It supports C, C++, and Fortran applications using native threading, OpenMP*, or Intel® Threading Building Blocks. Intel® Thread Profiler can show you concurrency information for your application and help you pinpoint the causes of thread-related overhead.
  • Intel® Parallel Amplifier Beta plugs into Microsoft* Visual Studio and allows C++ developers to analyze the performance of applications using native Windows* threads. It uses sampling and a low-overhead form of instrumentation to show you your application's hot functions, concurrency level, and synchronization issues.

Finding the right tool for your situation can greatly reduce frustration and the time needed to complete your project.  Good luck, and keep watching The Server Room for information on the last two habits in the coming months.

If you follow the IT industry, you can’t escape the “cloud”. Whether in online articles, industry seminars, or blogs, the hype over cloud computing is everywhere. And don’t expect it to die down in 2009.

Yet amidst all the hype – there are still a lot of questions and confusion about the “cloud”. At Intel – we get asked a lot about cloud computing, and one of the top questions is: “Is cloud computing really new?”

The answer is not as clear-cut as it may seem.

First – what is “cloud computing” anyway? There are many industry definitions, many very useful and some not as good. Some pundits want to label everything the cloud, while others have intricate and nuanced definitions where very little could be considered cloud computing.

Intel has its own view of the cloud – centered, not surprisingly, on the architecture providing the cloud processing, storage, and networking. This “cloud architecture” is characterized by services and data residing in shared, dynamically scalable resource pools. Since so much of the cloud’s capabilities – and its operational success – depend on the cloud’s architecture, it makes sense to begin the definition there.

A cloud architecture can be used in essentially two different ways. A “cloud service” is a commercial offering that delivers applications (e.g., Salesforce CRM) or virtual infrastructure for a fee (e.g., Amazon’s EC2). The second usage model is an “enterprise private cloud” -- a cloud architecture for internal use behind the corporate firewall, designed to deliver “IT as a service”.

Cloud computing – both internal and external – offers the potential for highly flexible computing and storage resources, provisioned on demand, at theoretically lower cost than buying, provisioning, and maintaining more fixed equivalent capacity. 

So now that we’re grounded on our terminology… we return to this question of the cloud being new or just repackaged concepts from an earlier era of computing.

Turns out that it’s both: cloud architectures do represent something new – but they build on so many critical foundations of technology and service models that you can’t argue the cloud is an earth-shattering revolution. It’s an exciting, new but evolutionary shift in information technology.

The rich heritage of cloud computing starts with centralized, shared resource pooling – a concept that dates back to mainframes and the beginning of modern computing.  A key benefit of the mainframe is that significant processing power becomes available to many users of less powerful client systems. In some ways, datacenters in the cloud could offer similar benefits, by providing computing or applications on demand to many thousands of devices.  The difference is that today’s connected cloud clients are more likely to be versatile, powerful devices based on platforms such as Intel’s Centrino, which give users a choice: run software from the cloud when it makes sense, but have the horsepower to run a range of applications (such as video or games) that might not perform well when delivered by the “mainframe in the cloud”.

Another contributing technology for the cloud is virtualization. The ability to abstract hardware and run applications in virtual machines isn’t particularly new – but abstracting entire sets of servers, hard drives, routers and switches into shared pools is a relatively recent, emerging concept. And the vision of cloud computing takes this abstraction a few steps further – adding concepts of autonomic, policy-driven resource provisioning and dynamic scalability of applications. A cloud need not leverage a traditional hypervisor / virtual machine architecture to create its abstracted resource pool; a cloud environment may also be deployed with technologies such as Hadoop – enabling applications to run across thousands of compute nodes. (Side note: if you’re interested in open source cloud environments, you might check out the OpenCirrus project at www.opencirrus.org – formed by a collaboration between Intel, HP, and Yahoo.)

The key point here is that just because it’s an abstracted, shared resource doesn’t mean it’s necessarily a cloud. Otherwise a single server, running VMware and a handful of IT applications, might be considered a cloud. What makes the difference? It’s primarily the ability to dynamically and automatically provision resources based on real-time demand.
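A toy sketch of what "provision on real-time demand" means in practice is below; the sizing policy and thresholds are invented for illustration, and a real cloud would drive calls into its monitoring and VM-management APIs rather than just compute a number:

# Toy sketch of the demand-driven provisioning that distinguishes a cloud
# from a merely virtualized server. The target and limits are invented
# example values; a real system would feed the result into its VM
# provisioning API.
import math

TARGET_UTILIZATION = 0.6   # keep the pool around 60% busy
MIN_INSTANCES = 2
MAX_INSTANCES = 50

def desired_instances(current_instances, measured_utilization):
    """Scale the pool so average utilization returns to the target."""
    needed = current_instances * measured_utilization / TARGET_UTILIZATION
    return max(MIN_INSTANCES, min(MAX_INSTANCES, math.ceil(needed)))

if __name__ == "__main__":
    # 8 instances running at 90% average utilization -> scale out to 12.
    print(desired_instances(8, 0.90))
    # 8 instances running at 20% average utilization -> scale in to 3.
    print(desired_instances(8, 0.20))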

What about grid computing? Indeed – if you squint – a grid environment looks considerably like what we’ve defined as a cloud. It’s not worth getting into a religious argument over grid versus cloud, as that’s already been done elsewhere in the blogosphere. Grids enable distributed computing across large numbers of systems, so the line between what constitutes grid and cloud is blurry. In general, cloud architectures may have an increased level of multi-tenancy, usage-based billing, and support for a greater variety of application models.

Finally – one of the key foundations of cloud computing isn’t really a technology at all, but rather the “on demand” service model. During the dot-com boom, the “application service provider” sprung up as a novel way to host and deliver applications – and they are the direct forefathers of today’s Software as a Service (SaaS) offerings. One of the ways “on demand” continues to evolve is in the granularity of the service and related pricing. You can now buy virtual machines – essentially fractions of servers – by the hour. As metering, provisioning, and billing capabilities continue to get smarter, we’ll be able to access cloud computing in even smaller bites… buying only precisely what we need at any given moment.

So to wrap up – the cloud is truly a new way of delivering business and IT services via the Internet, as it offers the ability to scale dynamically across shared resources in new and easier ways. At the same time - cloud computing builds on many well-known foundations of modern information technology, only a few of which were mentioned here. Perhaps the most interesting part of the cloud’s evolution is how early we are in its development.  

We all know that IT is using virtualization on x86 servers to solve tough data center challenges (server sprawl, accelerating power and cooling costs, the need to extend life of current facilities, achieving high-availability and disaster recovery through live migration, etc.) 

But which x86 servers are they using? According to IDC’s q3’08 Server Virtualization tracker, 85% of all the x86 servers deployed in 2008 for virtualization were based on Intel® Xeon® processors. 

Ok, next question: Is there a benefit to going with scalable 4 Socket servers (Multi-processor) vs. 2 socket servers (Dual-Processor)?  It’s a religious argument really, but as IT budgets continue to tighten, scalable 4 Socket servers offer more ‘capabilities’ (i.e. processors, memory, I/O ports and reliability features) that enable higher consolidation ratios. 

So I thought I would write about 5 specific scenarios where you should see a benefit from scalable 4 Socket (MP) servers over the newest 2 Socket (DP) servers.  Tell us if you agree or disagree.

1. Higher Consolidation Ratios for Memory-Constrained Apps

Do you have a bunch of apps that you need to keep running while at the same time facing tremendous pressure to address the challenges listed above?  A key advantage of scalable servers is that they can be configured with more memory than smaller 2S servers, typically 2x-4x more.  Oftentimes, especially with multi-core processors, virtual machines will run into memory constraints before they run into processor constraints.  A 2x-4x memory capacity advantage can translate into 2x-4x the VMs.  Scalable servers also tend to use available memory more efficiently, since code and data can be stored once and shared among multiple virtual machines.  Solvay Pharmaceuticals, for example, intends to run with consolidation ratios as high as 25:1 on 4 Socket Xeon servers.
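A rough sketch of that sizing logic is below; the host and VM sizes are illustrative assumptions, not benchmark data, but they show how the memory limit, rather than the core count, often determines the consolidation ratio:

# Rough sketch of why memory, not cores, often caps the consolidation ratio.
# All sizes and overcommit factors below are illustrative assumptions.
def max_vms(host_cores, host_mem_gb, vm_vcpus, vm_mem_gb,
            vcpu_overcommit=4.0, mem_overcommit=1.0):
    """VMs per host = the tighter of the CPU and memory limits."""
    cpu_limit = int(host_cores * vcpu_overcommit // vm_vcpus)
    mem_limit = int(host_mem_gb * mem_overcommit // vm_mem_gb)
    return min(cpu_limit, mem_limit), cpu_limit, mem_limit

if __name__ == "__main__":
    # Example: a 2-socket box (8 cores, 64 GB) vs. a 4-socket box
    # (24 cores, 256 GB), both hosting 2-vCPU / 8 GB virtual machines.
    for name, cores, mem in [("2S", 8, 64), ("4S", 24, 256)]:
        vms, by_cpu, by_mem = max_vms(cores, mem, vm_vcpus=2, vm_mem_gb=8)
        print(f"{name}: {vms} VMs (cpu limit {by_cpu}, memory limit {by_mem})")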

2. Performance and Reliability for Business-Critical Workloads

Intel’s launch last September of the Xeon 7400 processor (6 cores, 16MB shared L3 cache) brings 24 processing cores and up to 256GB of memory (32 DIMM slots x 8GB DIMMs) to a 4 Socket server environment.   This provides a lot of resources for demanding applications and unexpected workload spikes.  Tests within Intel’s IT department have shown that 4-socket servers show much less variation in throughput than comparable 2-socket servers as virtualized workloads are increased.

3. Faster and More Cost-Effective Test and Development

Development teams can be demanding.  The faster IT can provision testing environments for the developers, the better.  Scalable servers offer more headroom to deploy additional dev environments when needed, without waiting for new physical servers to be provisioned. Scalable servers can also support a broader range of applications, including enterprise applications that may require the processor, memory and I/O resources of a large, multi-processor system.  In the same Solvay Pharmaceuticals example mentioned above, they were able to deploy new apps in 10 minutes, versus 1 week prior to deploying virtualization on Xeon-based servers.

4. Larger and More Robust Flexible Resource Pools

With VMware Virtual Infrastructure, applications can be migrated without downtime among all the servers in a resource pool, which can include up to 32 physical hosts (in a VMware HA* or VMware DRS* cluster).  Using larger, scalable servers would simply expand the capacity of those resource pools due to the additional memory, processors, I/O, etc. 
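
Extending the per-host sketch above to the cluster level shows why bigger hosts grow the pool; the per-host VM counts below simply reuse the earlier illustrative assumptions rather than measured figures.

```python
# Rough pool-level capacity comparison for a 32-host DRS/HA cluster.
# Per-host VM counts are the illustrative, memory-limited figures from above.

hosts_per_pool = 32
vms_per_2s_host = 7
vms_per_4s_host = 31

print("Pool of 2S hosts:", hosts_per_pool * vms_per_2s_host, "VMs")
print("Pool of 4S hosts:", hosts_per_pool * vms_per_4s_host, "VMs")
```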

5. Better Utilization of Limited Data Center Resources

Many data centers are operating at or near the limit of their power, cooling and networking capacity. By using larger, scalable servers to increase consolidation ratios, IT can reduce power and cooling requirements and share local area network (LAN) and storage area network (SAN) ports more efficiently – all of which can help defer the high cost of new data center construction. 

Let us know what you think…

It is inevitable that as processing power increases, and cores multiply on servers, new bottlenecks will be found. If you make a processor that can do more, you will increase your system performance until you find the next area of focus. I’ve talked on several occasions about the system bottlenecks that become apparent when using the latest Xeon processors or when many virtual machines are loaded onto a single physical box, and what can be done to improve them. As you may have noticed, a key laggard frequently found in modern multi-processor servers is the network connection. Moving to 10 Gigabit clearly alleviates this issue in many high-horsepower systems, and the addition of advanced features like VMDq and SR-IOV will continue to push the envelope on network I/O in virtualized environments.

 

However, it is clear that for some applications, especially in the HPC market and certain applications in the financial services industry, an even higher-performing I/O solution than standard 10 Gigabit will be needed when latency is of the utmost importance. A solution that has existed in this space for a few years, but is now gaining momentum, is Remote Direct Memory Access (RDMA), which lets one server place information directly into the memory of another server, essentially bypassing the kernel and networking software stack.

 

At first blush, this solution may sound odd. Why bypass the entire stack; doesn’t this complicate things? Well, it certainly does add some complications by requiring a modified OS stack and support on both sides of the network link, but there are some telling details about where real world latencies come from that make this methodology attractive in certain circumstances.

 

 

If you look at the typical breakdown of CPU utilization in the context of processing networking traffic, you see the workload consumed by buffer copies, application context switching and some TCP/IP processing. If you look at this (albeit, in a simplified way) visually, there is a vertical stack of tasks that need to take place in the server:

 

 

[Figure: the server networking stack before iWARP]

There are application buffer copies from the app to the kernel, which are then handled by the NIC. Additionally, the TCP/IP processing that takes place within the OS is a large consumer of CPU cycles, and there are also I/O commands that add further latency to the communication process.

So the question is what to do to help reduce these latencies? RDMA has been adapted for standard Ethernet via the IETF Internet Wide Area RDMA Protocol (iWARP). Until the iWARP specification, RDMA had been a capability seen only on InfiniBand networks. By porting the goodness of RDMA to Ethernet, iWARP offers the promise of ultra-low latency, but with all the benefits of standard Ethernet.


With Intel’s acquisition of NetEffect in October, we now have one of the leading iWARP product lines for 10 Gigabit Ethernet. Within these products, the iWARP processing engine can help eliminate some of the key bottlenecks described above and provide very low latency and high performance networking for even the most demanding HPC applications.


The first item an iWARP engine addresses is offloading some of the TCP processing, which can bog down the processor as bandwidth loads increase. The Intel NetEffect iWARP solution can offload this TCP processing by handling the sequencing, payload reassembly, and buffer management in dedicated hardware.


The next item that iWARP addresses is the extra copies the system needs to make when transferring data. The iWARP extensions for RDMA and Direct Data Placement (DDP) allow the iWARP engine to tag the data with the necessary application buffer information and place the payload directly in the target server’s memory. This eliminates the delays associated with memory copies by moving to a so-called ‘zero-copy’ model.
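
iWARP hardware obviously can't be demonstrated in a few lines of script, but the cost of the buffer copies it eliminates is easy to see even at the application level. The toy timing below is not RDMA; it simply contrasts copying a large payload with handing out a zero-copy view of the same memory. Absolute numbers will vary by machine and are illustrative only.

```python
# Toy illustration of copy overhead vs. a zero-copy reference (not RDMA itself).
import time

payload = bytearray(64 * 1024 * 1024)   # a 64 MB "message"

t0 = time.perf_counter()
copied = bytes(payload)                 # full copy, as on a conventional send path
t1 = time.perf_counter()
view = memoryview(payload)              # zero-copy view of the same buffer
t2 = time.perf_counter()

print(f"full copy:      {(t1 - t0) * 1e3:8.2f} ms")
print(f"zero-copy view: {(t2 - t1) * 1e6:8.2f} us")
```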


Finally, the iWARP extensions also implement user-level direct access, which allows a user-space application to post commands directly to the network adapter without making latency-consuming calls to the OS. This, along with the other pieces of iWARP, provides dramatically reduced latency and increased performance.


The diagram below summarizes what the new system stack looks like after the implementation of iWARP. Much simpler, and with much lower latency.


[Figure: the server networking stack after iWARP]



 

 

One obvious issue raised by the diagram above is what modifications need to be made at the OS or application level. Clearly applications need to be modified to be iWARP compatible, and this can be a time-consuming process. This is one of the reasons the solution has been slow to gain adoption. However, the Open Fabrics Alliance (OFA) is working on a unified RDMA stack for the open source community. The OFA’s Open Fabrics Enterprise Distribution (OFED) release is a set of libraries and code that unifies different solutions that use RDMA. There have been several OFED releases so far, and further plans are coming to align and expand various RDMA capabilities. In this way, applications that run using the OFED stack over InfiniBand can be run without any changes over iWARP and Ethernet.


As more applications are modified to support the feature set of iWARP RDMA, there will be wider understanding and acceptance in the HPC community of the incremental performance, cost, and standards advantages of using Ethernet with RDMA for the most performance-sensitive applications. Moving from standard non-iWARP Ethernet to iWARP-enabled Ethernet provides a more bounded latency, cutting it from ~14 us to well under 10 us… now that is fast.


We live in exciting times!


Ben Hacker

--

For those looking for some more detail, there is a nice whitepaper on iWARP located here.

For those interested in learning more about the Open Fabrics Alliance (OFA) please see here.

Do you really need a workstation?  It depends… While I am biased, the actual investment difference between an entry-level Intel-based workstation and an Intel-based business desktop with similar graphics features has compressed so much that today the answer is: yes, you really do need a workstation. I was recently at an ISV's user meeting, and workstation vendors were demonstrating visually compelling reasons to make that leap from a desktop to a workstation.

So what is an Intel-based workstation when compared to a business desktop?  The difference is similar to that between a professional athlete and a recreational athlete: workstations are typically faster and smarter at what they do.  Workstations are purpose-built to do a job.  They provide the necessary processing capacity, access to professional-grade graphics adapters and enough expansion to accommodate the memory capacity you need to work with a:

• Bigger canvas if you are a digital content creator,
• Larger assemblies if you are a product designer, or
• More complete oil reservoirs if you are a geophysicist.

But if you want to really increase the pace at which you can create, you may want to step up to a virtual workbench that delivers near-supercomputing performance to your workstation.  That is another blog. To learn which Intel-based workstation is best for your needs, please use our “Mobile, Professional, Expert—Which processor is right for your workstation needs” tool at http://www.intel.com/products/workstation/processors

Last year I did a series of entries on the opportunity to avoid brick and mortar (or steel and drywall, in most cases) data center expansion.  The math was pretty compelling: if you have depreciated enterprise servers in production running at around 10% utilization today, the current "state of the art" Xeon servers will easily give you a 5x or greater performance gain.  In this scenario, you can get 50x the capacity in the same space and power.  My formula was pretty conservative - 5x performance, 5x utilization with virtualization, and 2x the rack density by measuring actual power and using high-efficiency designs.

 

Intel's Core i7-derived Xeon variant (code-named Nehalem) only enhances this opportunity.  The Xeon 5500 series processor launches in March 2009, and although performance benchmarks are still pretty scarce, the buzz in the industry is that this is the biggest leap in Intel Xeon performance in many years. 

 

Consider the impact - if this processor doubles current Xeon 5400 performance, that could yield 100x the compute capacity in the same footprint and power.  Before you get out the sledgehammer, you really should evaluate server refresh.  An aggressive server refresh with state-of-the-art Intel Xeon based systems can deliver the performance and capacity the business needs, AND avoid capital data center expansion.  In this economy, avoiding real estate expansion should make you a hero at work.
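
Spelling that math out, using the post's own planning multipliers (5x performance, 5x utilization through virtualization, 2x rack density) plus the possible 2x from the new processor:

```python
# The refresh math from the paragraphs above; multipliers are planning
# assumptions from the post, not measurements.

perf_gain         = 5   # current Xeon vs. a depreciated server
utilization_gain  = 5   # roughly 10% utilization raised via virtualization
rack_density_gain = 2   # measured power draw + high-efficiency design

today = perf_gain * utilization_gain * rack_density_gain
with_xeon_5500 = 2 * today   # if the new processor doubles performance

print("Capacity multiplier with current Xeon:", today)                # 50x
print("Capacity multiplier if performance doubles:", with_xeon_5500)  # 100x
```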

 

If this is not enough, take a look at optimizing your cooling ratio - see whitepaper_energy efficiency in the data center.pdf or Reducing Data Center Cost with an Air Economizer.  This could give you another few thousand watts per rack to increase density and push your multipliers even higher.
