


Open Cloud – Where it Makes Most Sense


Openness and standardization have been perennial topics in the computer industry. Since the PC revolution of the early 1980s, led by Intel processors, open hardware standards have transformed the industry with standardized components and building blocks. Hardware standards such as USB, PCI-E, SATA, and SAS are common to servers and PCs alike. At the same time, many software standards emerged (DLLs, CORBA, Web Services, etc.) to ensure software interoperability. Open standards have become the gene pool of today’s computing infrastructure.


How will open standards and open source solutions play out in the cloud computing era? Looking at the most popular cloud service providers today (Google, Microsoft, Amazon, etc.), none of them is built on open standards; at most they offer open interfaces for others to interact with, while the cloud solution stack itself remains mostly proprietary. If history is a mirror of the future, we can foresee that as cloud services become more popular, open standards will play increasingly important roles. A natural question is how large a role open standards can play in the context of cloud computing. That question interests many of us, so let me try to share my opinion.


As indicated in the chart below, the level of open standards decreases as we go higher up the cloud services stack. At the very bottom, the hardware building blocks, we need strong interoperability and interchangeable (disposable?) components. They should be generic and independent of cloud middleware and application services. At the Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) layers, cloud operators are more likely to use open standards and generic building blocks to build their infrastructure services, even though these must be optimized to work well with the cloud environment (cloud middleware or cloud OS) of their choice. In the upper layers of the cloud solution stack, where application services (SaaS) are defined, there is a greater need for cloud operators to offer differentiated services. That is where they will put their “secret sauce” for competitiveness, and where it will be much more difficult to drive open standard building blocks or components, beyond interoperable interfaces such as web services standards.


Based on the analysis above, it is safe to assume that open standard and open source opportunities are most promising at the hardware building block, IaaS, and PaaS layers. That is where the industry is most likely to build consensus. For the upper layers, especially SaaS, we should focus on interface standards rather than on standard building blocks and open source solutions.


Intel has been a leader in hardware standard building blocks for the last 30 years and has changed the industry. It is natural to assume that Intel should focus on IaaS and PaaS building blocks, as well as how these open standards could be applied in open data centers (ODC), as “adjacent” growth opportunities to embrace the booming cloud computing market. Some conventional wisdom says that Intel is not relevant to the cloud, since cloud computing by definition abstracts hardware. I would say just the opposite: Intel will continue to play a critical role in defining and promoting open standards and open source solutions for IaaS and PaaS, so that the cloud can actually mushroom. There is a strong correlation between how fast cloud computing can proliferate and how well Intel plays its role in leading open cloud solutions at the IaaS and PaaS layers. What do you think?







Definition of taxonomy:

      ODC – (Open Data Center): currently stands for a set of interoperable technologies optimized for IaaS, PaaS, and SaaS datacenters.  At the most basic levels, these optimizations also apply to the traditional enterprise, in areas such as power management, but higher-level management will be tailored for high-density IaaS and SaaS datacenters.

      SaaS – Software as a Service:  a model of software deployment whereby a provider licenses an application to customers for use as a service on demand.  Examples include Google Apps.

      PaaS – Platform as a Service:  facilitates deployment of applications without the cost and complexity of buying and managing the underlying hardware and software layers, providing all of the facilities required to support the complete life cycle of building and delivering web applications and services entirely from the Internet, with no software downloads or installation for developers, IT managers, or end users.

      IaaS – Infrastructure as a Service:  rather than purchasing servers, software, data center space, or network equipment, clients buy those resources as a fully outsourced service. The service is typically billed on a utility computing basis, and the amount of resources consumed (and therefore the cost) will typically reflect the level of activity.


Economic downturn and Linux

Posted by mitchk May 27, 2009

Join us for a video webcast where Novell, HP, and Intel, hosted by IDC, bring forward the power of Linux in today’s datacenter.  While we have seen customers embrace system refresh as a financial efficiency tool, an IDC survey reveals that Linux is a key solution for survival in this economic climate, from the front end to the mission-critical back end.  The timing is right to consider Linux on Intel server platforms.

Three speakers from industry-leading companies will introduce what innovation in a truly open, industry-standard environment delivers to datacenter operations: saving money while meeting daunting performance and productivity requirements.

Intel speaker:  Dylan Larson, Director, SPG Marketing

HP speaker:  Stephen Bacon, Linux Marketing, BCS

Novell speaker:  Justin Steinman, VP Marketing

IDC host:  Al Gillen, Program VP, System Software


The webcast will be held on June 9th, 11am to noon, EDT. 

Whether it is public clouds, private clouds, or internal clouds, one thing is very clear: simply migrating current applications to the cloud doesn’t work effectively. So the question is, what would be considered a good ‘application architecture’ for the cloud?  There may not be just one, but there are some key design principles.   Before we look at those, let us look at the characteristics of the cloud, since these drive the application architecture for clouds, for the most part.






Any cloud operating environment (COE) would have the following minimal set of attributes.
  1. Multi-tenancy and shared infrastructure – more applications, users, and transactions per compute host
  2. Elasticity and horizontal scalability – resources scale up or down depending on demand and usage, which helps capacity and demand planning
  3. Pay as you go – no need to procure the entire capacity or pay for worst-case demand planning; pay by subscription or based on usage
  4. Automation and flexible management – self-service, with flexible and dynamic assignment of workloads for optimal resource utilization
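As a concrete illustration of the elasticity attribute, here is a minimal sketch of the kind of scale-out/scale-in decision a COE might make from observed utilization. The class name, thresholds, and bounds are purely hypothetical, not any particular cloud’s API:

```python
# Hypothetical autoscaler: grow or shrink a pool of instances based on
# average utilization, staying within configured bounds ("pay as you go").

class Autoscaler:
    def __init__(self, min_instances=1, max_instances=10,
                 low_watermark=0.30, high_watermark=0.75):
        self.min_instances = min_instances
        self.max_instances = max_instances
        self.low = low_watermark
        self.high = high_watermark
        self.instances = min_instances

    def observe(self, avg_utilization):
        """Scale out when hot, scale in when idle; stay within bounds."""
        if avg_utilization > self.high and self.instances < self.max_instances:
            self.instances += 1   # scale out to absorb demand
        elif avg_utilization < self.low and self.instances > self.min_instances:
            self.instances -= 1   # scale in: only pay for what you use
        return self.instances

scaler = Autoscaler()
for load in (0.9, 0.9, 0.9, 0.2, 0.2):
    scaler.observe(load)
print(scaler.instances)   # grew to 4 under load, shrank back to 2
```

Real COEs use far richer signals (queue depth, latency, cost), but the feedback loop is the same shape.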










There may be multiple architectural approaches to leverage and “play well” in these COEs. Irrespective of the approach, the key design principles would be:
  1. Be a good tenant on a shared infrastructure – applications have to be cognizant that they live in a shared environment, e.g. finer granularity (locks, etc.), optimized use of resources, and proper authentication and isolation.
  2. Built for scalability – this is probably the hardest for application developers, and it cannot be done in isolation. Applications have to talk with the infrastructure (COE), and vice versa, to be elastic. For the infrastructure to provide elasticity, applications have to provide hooks for the infrastructure to monitor utilization, and for the management and administration of these applications.  This has far-reaching implications: when you decide to use Google Apps or Microsoft Azure, you are locked into a set of patterns for accessing data, code for scaling, and so on.
  3. Parallelism – this might be obvious, but it is also one of the hard ones for application developers.  Most applications have constraints such as serial execution or single points of contention (session/application state, memory, file and dataset locks), all of which hamper parallelism.
  4. Configurability vs. coding – good cloud apps are highly configurable: much of their behavior (including function and workflow) is driven by metadata. Optimization based on locality and semantics are two other key concepts that should be configurable rather than hard-wired into applications.
  5. Hardware independence/abstraction – so apps can run on the ‘best’ hardware from a performance, scale, and TCO perspective. Virtualization is a great model here, and could be the basis for simpler federation between different cloud environments.
  6. Distributed and composite architectures – capabilities exposed as services. An app is a composition of services/apps (not objects and libraries, as we are used to) that in turn adhere to the same set of design principles.
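To make the scalability principle concrete, here is a minimal sketch of what “providing hooks for monitoring” might look like from the application side. The class and method names are illustrative, not any particular COE’s interface:

```python
# A cloud-aware app exposes a metrics hook the infrastructure can poll
# to decide whether to scale it out or in. The interface is hypothetical.

import time

class CloudAwareApp:
    def __init__(self):
        self.requests_handled = 0
        self.started = time.time()

    def handle_request(self):
        self.requests_handled += 1   # real work would happen here

    def report_metrics(self):
        """The hook the COE polls when making elasticity decisions."""
        elapsed = max(time.time() - self.started, 1e-6)
        return {
            "requests_total": self.requests_handled,
            "requests_per_sec": self.requests_handled / elapsed,
        }

app = CloudAwareApp()
for _ in range(100):
    app.handle_request()
print(app.report_metrics()["requests_total"])   # 100
```

The point is the contract, not the code: the app agrees to be observable, and the infrastructure agrees to act on what it observes.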









So, how do enterprises leverage the power of the cloud?  Enterprises don’t have the luxury of rewriting all their applications to play well in the cloud, and not all existing applications are architected with the design principles mentioned above.   Does this mean only new “green field” developments are well suited for the cloud?  If enterprises have deployed SOA and Web 2.0 architectures, do they have a head start on cloud migration?  Are there other design principles that you see?







What do you think?


When you’re planning a backpacking trip, whether it’s for several hours or several days, space is at a premium.  Not only do you need to think about tents, sleeping bags, clothing, first aid, and navigational gear (among other things), but also how to keep yourself properly hydrated and fueled up.  Oh yeah, and you have to figure out how to cram all of this gear into your pack…carrying an additional pack is not an option!


Odds are you’ll be heading into the wilderness and won’t be able to re-supply for a while, so one of the limiting factors will be the amount of food you can carry.  Running out of fuel in the middle of nowhere makes for a potentially disastrous situation.


So let’s look at the nutritional numbers and how best to fuel the trip:

  • Fats:  ~9 calories per gram, and typically found in nuts and oils
  • Carbohydrates and proteins:  ~4 calories per gram, and typically found in sugars, grains, and meats


If you’re trying to maximize the number of calories you can carry in order to sustain you during your trip, you probably want to pack more foods with a higher fat content (such as peanut butter) than carbs or protein.  More calories per gram → more energy in your pack to get you where you want to go.
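The arithmetic behind that packing decision is simple; here is a quick sketch using the ~9 and ~4 kcal/g figures above (the 3,600 kcal trip budget is just an assumed example):

```python
# Grams of each macronutrient needed to carry a given energy budget,
# using the approximate calorie densities cited above.

CAL_PER_GRAM = {"fat": 9, "carbohydrate": 4, "protein": 4}

def grams_needed(calories, macronutrient):
    return calories / CAL_PER_GRAM[macronutrient]

# Carrying 3600 kcal (roughly a hard day of hiking):
print(grams_needed(3600, "fat"))            # 400.0 g of fat
print(grams_needed(3600, "carbohydrate"))   # 900.0 g of carbs
```

Same energy, less than half the weight: that is the whole analogy in two lines.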


You can probably figure out where I’m going with this analogy – low power CPUs are all about helping maximize your performance per rack, just like packing foods with more calories per gram helps deliver more energy in a limited amount of backpack space.


Depending on your specific rack power or overall datacenter power / cooling environment, low power SKUs might be a good fit to help maximize your performance per rack.  For the Intel® Xeon® 5500 series, there are two low power CPU options available, both spec’d at a 60W Thermal Design Power (TDP):  the Xeon® L5506 (2.13 GHz) and the Xeon® L5520 (2.26 GHz).  These two SKUs have the same features as the corresponding Xeon® E5506 and E5520 SKUs, just lower in power.


If you’re buying LV Xeon® 5400 CPUs today, such as the L5420, expect a big jump in performance per rack with the Xeon® L55xx SKUs due to lower overall system power and higher performance.  It’s a similar story if you’re evaluating the Xeon® E5506 or E5520 SKUs – the same performance with L55xx SKUs at lower system power, so higher performance per rack.
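A rough way to see the performance-per-rack effect: for a fixed rack power budget, lower-power servers let you fit more of them. The power figures below are illustrative assumptions for a whole server, not measured values for any SKU:

```python
# How many servers fit under a fixed rack power envelope.
# All wattages are hypothetical round numbers for illustration.

def servers_per_rack(rack_budget_w, server_power_w):
    return rack_budget_w // server_power_w   # whole servers only

RACK_BUDGET_W = 8000        # assumed rack power envelope
STANDARD_SERVER_W = 320     # e.g. a dual-socket system with 80W-TDP CPUs
LOW_POWER_SERVER_W = 280    # same system built with 60W-TDP CPUs

print(servers_per_rack(RACK_BUDGET_W, STANDARD_SERVER_W))   # 25 servers
print(servers_per_rack(RACK_BUDGET_W, LOW_POWER_SERVER_W))  # 28 servers
```

If the low-power SKUs deliver the same per-server performance, those three extra servers are pure performance-per-rack gain.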


Have questions?  Ask me on this blog, or Ask An Expert in the Server Room.

Evolution happens.  In matters of nature, Charles Darwin is clearly the dominant expert and the Theory of Evolution has stood the test of time and much scrutiny by scientific experts for decades.  However, the pace of innovation and change that happens in nature is insufficient as a means to evolve technology.


Darwin's Theory of Evolution is a slow gradual process. Darwin wrote, "…Natural selection acts only by taking advantage of slight successive variations; she can never take a great and sudden leap, but must advance by short and sure, though slow steps”


Personally, I’m very happy that the Theory of Evolution does not govern the pace of innovation in computing.  In matters of technology evolution, Gordon Moore is the expert.  In 1965, Dr. Moore observed a trend in silicon manufacturing that has subsequently driven the pace of innovation and revolutionized an industry, and quite possibly society.  Moore’s Law states that the number of transistors on a chip will double about every two years.  Read more about Moore’s Law.
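Moore’s observation is easy to express as arithmetic; the starting transistor count and time span below are just illustrative numbers:

```python
# Transistor count under a doubling every two years (Moore's Law).

def transistors(start_count, years, doubling_period_years=2):
    return start_count * 2 ** (years / doubling_period_years)

# Ten years of doubling every two years gives a 32x increase:
print(transistors(1_000_000, 10) / 1_000_000)   # 32.0
```

That exponential is the reason technology “evolves” in leaps rather than in Darwin’s short and slow steps.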


In March 2009, Intel introduced the Nehalem microarchitecture, where the innovative use of these transistors is truly phenomenal.  Besides offering nearly double the performance for many 2-socket server applications compared to last year, these processors offer a five-fold improvement in energy efficiency versus the first generation of Intel Xeon quad-core processors introduced only two years ago:


  • 5x lower idle processor power (10W vs. 50W)

  • 5x the number of power states between full-power and idle operation

  • 5x faster transitions (lower latency) between these power states


Today, Intel’s Boyd Davis discussed the innovations coming soon for expandable 4-socket and larger servers based on the processor codenamed Nehalem-EX.  For this segment of the server market the bandwidth gains are expected to be staggering, offering up to 9x the memory bandwidth of the highly scalable six-core Xeon 7400 based servers available today.  Learn more about Nehalem-EX.


The question for IT managers and business leaders is how fast they are evolving their compute infrastructure capability.  Older single-core processor infrastructure consumes valuable resources (space, power/cooling, maintenance) while often running underutilized yet drawing full power.  Replacing that infrastructure can deliver dramatic benefits in performance and operational efficiency.  The savings from server replacement can provide a rapid payback on investment – stimulating re-investment or improved business results.


The faster you move … the more competitive you can become.  Just like in nature, business is survival of the fittest.  How fit is your IT infrastructure? 


Take advantage of Moore’s Law – Evolve Faster.

Alan Priestley, Enterprise Marketing Manager with Intel gives a chalk talk on Intel's hardware assist technology VT. His talk covers hardware assists for virtualized environments and specifically for the processor, chipset and network hardware. Check it out.


Today Intel provided a server product update for the upcoming Nehalem-EX processor and the expandable platforms based on it.  Here’s a recap of some of the interesting messages communicated to the press:



  • The Nehalem architecture and QuickPath Interconnect are coming to the EX (MP) segment: 4-socket servers and above. 
  • EX servers are ideal for server consolidation / virtualized applications, data-demanding enterprise applications, and technical computing environments.  Both Itanium and Xeon processor-based systems represent an attractive alternative to more expensive, proprietary RISC-processor based systems.
  • EX servers are designed for the high end.  They offer more capabilities (i.e. memory, RAS, cores/threads, sockets) than 2-socket servers, which IT managers require for business drivers such as large-scale server consolidation, high data demands, virtualization, and scalability.
  • Up to eight cores / 16 threads and a whopping 24MB of cache.
  • Up to 9x the memory bandwidth vs. today’s 4-socket Xeon 7400.  The performance gain will be dramatic – the highest-ever jump from a previous generation processor. 
  • 2x the memory capacity with up to 16 memory slots per socket (that’s 64 DIMMs on a 4-socket server), and four high-bandwidth QuickPath Interconnect links.
  • New levels of scalability: from large-memory 2-socket systems through 8-socket systems, and even beyond with OEM node controllers.  In fact, there are currently over 15 designs of 8 sockets or more from 8 OEMs. 
  • IBM showed its 8-socket Nehalem-EX server design running 128 threads (8 sockets x 8 cores x 2 threads with Hyper-Threading) – an industry first. 
  • New RAS features traditionally found on Itanium, such as Machine Check Architecture (MCA) Recovery, which detects CPU, memory, and I/O errors, works with the OS to correct them, and helps recover from otherwise fatal system errors. 
  • Nehalem-EX is scheduled for production in the second half of 2009, with OEM systems in early 2010.
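Several of the figures in that list fall straight out of multiplication; here is a quick sanity check of the thread and DIMM counts:

```python
# Verifying the headline counts: threads on the 8-socket demo system,
# and DIMM slots on a 4-socket server.

def total_threads(sockets, cores_per_socket, threads_per_core):
    return sockets * cores_per_socket * threads_per_core

def total_dimms(sockets, slots_per_socket):
    return sockets * slots_per_socket

print(total_threads(8, 8, 2))   # 128 threads on IBM's 8-socket design
print(total_dimms(4, 16))       # 64 DIMMs on a 4-socket server
```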


Stay tuned over the next few days – we’ll post a video from the event.  Also look for some informative blogs over the next 1-2 weeks that will offer more of an in depth view of Nehalem-EX’s 4 Socket capabilities, performance, scalability, RAS, and Virtualization.



Posted by CHRISTOPHER PETERS May 26, 2009

Fast Food. Fast Servers. Fast Savings.


A recent customer from the US Department of Defense expressed interest in using the Intel Xeon Server Refresh Savings Estimator offline (no internet connection), due to security concerns about sending internal business data over the internet.


For those of you who may have similar concerns, here is a procedure that will give you access to the ROI estimator on the safety of your own laptop or desktop computer.

     Remember that song from Meat Loaf?  I always wondered, “What is that thing he won’t do?”  I thought of it the other day when I was in Europe visiting several Intel® resellers.  What really struck me was that there is not a single thing these professionals won’t do to make sure they offer the best hardware and software solutions to meet their customers’ needs.  No matter the size of your company, they make it a point to deliver.  They just love providing the right solution for every situation.


We spent some time talking about the specific needs of small and medium businesses. The Intel resellers were really enthusiastic about the recent launch of the Intel® Xeon® processor 5500 series. Now they have even more options to offer their customers. For growing companies that are looking for a competitive advantage, the intelligent and adaptive performance of these new processors is just what they need.




     Whether you’re looking to transition to your first server or update your existing servers, it’s important to have the right resource guiding you. “An average small or medium company is totally dependent on their information technology these days. If their server is not working as it should or isn’t appropriate for their needs, they are in BIG trouble. Finding a reseller that can act as a trusted advisor in identifying the right equipment, installing it, and maintaining the device through things like Service Level Agreements (SLAs) is critical,” explained Olaf Pas, an Intel reseller in the Netherlands.


“We’re really excited about what the latest generation of Intel server processors can offer our customers. The virtualization capabilities allow us to aggregate our customers’ small business server, their SQL server and the terminal server in one machine. This can save them a ton on their electricity bill,” Olaf continues. And who doesn’t want to save?  Finding the right server solution to help customers save money and get more performance is what local resellers love.




     So, if your employees and customers are hungry for more data responsiveness and your business is hungry for more productivity and cost savings, perhaps it’s time for a little Meat Loaf…and the expertise and attention of an Intel reseller – your local technology expert.

















Learn more about the new Intel Xeon server processors:


There is quite a buzz around “cloud computing” architectures these days.  This general term for what Gartner defines as “a style of computing in which massively scalable IT-related capabilities are provided ‘as a service’ using Internet technologies to multiple external customers” has become a little convoluted. It started out simply enough, but then the term got subdivided into “private” vs. “public” clouds: architectures hosted internally versus those hosted externally.  Internally we can build virtualized resource pools to run our applications, or externally we can push them outside our data center to be hosted by the likes of Amazon, Google, Microsoft, or AT&T.  Then Cisco thought they should add some additional “clarity” to the mix, introducing “virtual private” clouds and “open” clouds (as opposed to “closed”?). And then there’s the term “inter-cloud”, and last but not least the “federating trusted private cloud”. Clearly, these are brilliant minds at work, but if you are getting a little confused, not to worry – so is everyone else.  Maybe another good blog exchange between David Smith from Gartner and James Urquhart from Cisco can help sort it all out for us.


Whether you are looking at cloud computing as a new compute architecture or simply trying to improve your areas of strategic advantage internally, Intel technologies continue to be the building blocks that form the foundation of any high-performance compute architecture.


I’m not just talking about the Intel Xeon 5500 series processor, which in its own right is the most efficient, powerful processing architecture that Intel has developed yet, by far.  I’m talking about all the other technologies that Intel has developed around the Xeon 5500, such as Virtualization Technology (VT), the power management components built in with Node Manager, and the processor’s different P-states, which not only reduce frequency and power use but can even boost the clock for very high utilization requirements.


Intel’s newest 10-gigabit NIC supports Virtual Machine Device Queues (VMDq) and Fibre Channel over Ethernet (FCoE) technology.  FCoE allows you to consolidate your SAN fabric and network infrastructure, improving efficiency and reducing complexity and costs.  Intel's IT organization, which keeps our company running, also doubles as a test lab and has tested FCoE in-house.  Diane Bryant, Intel’s CIO, talks about how they are delivering strategic value by providing these types of solutions that enable Intel's growth and transformation.   Check out the video link where she talks about how two 10-gigabit NICs are replacing seven 1-gigabit NICs and two HBAs per server, without reducing quality of service or performance.  Intel is seeing costs reduced by as much as 25% through the subsequent reductions in cabling, ports, switches, HBAs, etc.


So, things might be a little “nebulous” (pun intended) about which “cloud” we’re in, but we shouldn’t be about which technology to use to support it.



“How do I migrate my solution from RISC to x86 architecture?” is one of the questions I get asked a lot these days. It is a very fair question, as it is only human nature to want some level of comfort when planning a transition.


There are multiple paths available to migrate solutions, and numerous variables to consider. There is no one-size-fits-all approach to migration. Factors such as the operating system environment, the type of workload, whether you run a packaged application, and the level of custom code in your solution all come into play when planning your migration.


So, without writing ‘War and Peace’ (an extremely long novel), I just want to share some perspectives and point to some resources, in that jungle of resources, that might help you navigate your way through a solution migration.


Firstly, if your solution is an off-the-shelf application, then moving it from one architecture to another is a straightforward porting and recompile process. Contrary to some popular beliefs, there is not a whole separate set of application vendors and titles that run only on the Unix/RISC combination. Most application vendors have made their products available on multiple operating systems running on multiple architectures. Unfortunately, there is no master index or website that I have come across that would simplify the process of seeing which vendor supports which application on which operating system and which architecture (let me know if something like this exists). It is a hard grind that requires a visit to each application vendor’s website to ensure that each application in your solution is supported on your operating system and server platform of choice.


Luckily, it is not all doom and gloom and hard work. One very useful site for Solaris is the tool on the Sun Microsystems website that lets you check which applications run on Solaris SPARC or Solaris x86.

Last I checked, over 80% of the applications that run on SPARC also run on x86.

My suggestion, if you draw a blank here, is to approach your application vendor and make a business case for why they should support a Solaris x86 version. The likelihood is that the vendor already has a version running on Linux/x86, so getting a version to run on Solaris/x86 is not a huge engineering effort. Mainly, the vendor will want to see some real demand so they can justify the support model.


There are also some useful guides out there, developed by HP, Dell, Intel, Sun, IBM, Red Hat, Microsoft, and others, that are technically focused on how to migrate your solution.

Here is just a sample of some of the resources.


Lastly, migrating custom code is a more challenging project. There are many organizations with significant experience and expertise that offer services to assist in migration projects; leverage them for help. I know that at first blush there may be concerns about the cost of paying for migration services, but look at the bigger picture: in many cases the TCO benefits and improved performance will deliver business benefits that outweigh the cost of migration in the long run.


Hopefully this is helpful. I would really like to hear what your experiences have been with migration, or what challenges you face as you look toward migration.


Go faster, save gas...

Posted by omarsultan May 22, 2009

I had the opportunity to chat with Intel's Chris Peters about the energy efficiency and performance features of the new Xeon processor.  These cool technologies allow the CPU to kick into overdrive when the OS requests it, and then drop into a more energy-efficient state when the load is not there.  It's part of the continuing trend of using virtualization and automation to better tune and align IT to actual business needs.  From a Cisco perspective, this technology allows us to deliver a more energy-efficient and energy-smart platform while improving the overall ROI of the Cisco Unified Computing solution.


Developing a server refresh strategy requires coordination among IT, business units, facilities, finance, and possibly others.


For many organizations, the people who buy the servers, maintain them, and see the power bill sit in different siloed groups.  The issue in developing a strategy is that if these independent groups don't get together, refresh may never happen.  Why?  Because each group sees only a portion of the overall costs and savings, and what is right for one group may show up as a negative impact or cost for another.  However, because the benefits of server refresh (doing more with less) touch so many pieces of the collective organization, the end result is usually positive.  It is like how athletes need to rely on each other to achieve a common goal: winning the game.


So how do you get everyone on the same page?  In sports, this is the role of the coach, or in some cases the on-field leader (quarterback, captain, etc.).  Last week I sat in on a data center summit hosted by Intel IT.  Inside Intel, the quarterback is corporate finance, which can see all the pluses and minuses that impact the corporate P&L and help optimize a decision that is best for the company and shareholders.


Last year Intel IT achieved $45M in operational savings and cost avoidance while supporting growing compute demands.  Read the 2008 Annual Performance Report.

Intel IT, in combination with Alinean and myself, helped develop a savings estimator to help you assess your opportunity for savings.
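To give a feel for the math such an estimator performs, here is a minimal payback sketch; every figure in it is an illustrative assumption, not Intel IT data:

```python
# Months until cumulative monthly savings cover the refresh investment.
# All dollar figures below are hypothetical round numbers.

def payback_months(refresh_cost, monthly_savings):
    """Simple payback period, ignoring discounting."""
    return refresh_cost / monthly_savings

# Hypothetical: consolidate 9 old servers onto 1 new one.
new_server_cost = 6000
monthly_savings = (
    9 * 50      # power/cooling no longer drawn by the retired servers
    + 9 * 40    # maintenance contracts retired
    - 1 * 60    # power/maintenance for the new server
)
print(payback_months(new_server_cost, monthly_savings))   # 8.0 months
```

Because each silo sees only one of those line items, only a cross-organization view (the “quarterback”) can add them up and see the eight-month payback.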


  • Who is your Quarterback for Server Refresh?
  • Is your organization even in the game?


As they would say in Disney's High School Musical - Get your Head in the Game




Change is always hard.

Posted by W_Shimanek May 21, 2009

What do Jack Welch, Henry Ford and Albert Einstein have in common?

In their day they were innovators, and they all accepted the need to adapt and change.

Einstein once said “The definition of insanity is doing the same thing over and over again and expecting different results.” He obviously had no interest in repeating history over and over again.

Ford was heard to say, “I am looking for a lot of men who have an infinite capacity to NOT know what can't be done.”  He obviously was looking for people who asked “why not” rather than “why.”  He sought thinkers and tinkerers.

Jack Welch had the shortest, but the most interesting quote.  “Change before you have to.”  As we all know he embraced change and created it too.

Technology gives us an opportunity to look at what we are doing now and find a better way to do it.  Used correctly, it can help you do more in less time with higher-quality results.  Take the digital workbench (aka a dual-processor Intel® Xeon® processor 5500 series based workstation) as an example.  While it can do CAD, it is capable of much more.  It can help users model more “what ifs” than ever, and create and test ideas digitally long before they are made into physical prototypes.  Yet while these new workstations can do all this, many users continue to do just CAD.

What new workflows can you think of that will radically change the rate of your innovation?

This term is often used around the HPC industry to refer to the use of HPC to help companies and R&D accelerate the process of innovation.   One close-to-home example that comes to mind, because I own one of his products, is James Dyson, inventor of Dyson* vacuum cleaners.   By all means he is a success story, but the road to success, by his own account, was paved with many failures along the way.  According to Dyson, it took him 15 years, nearly his entire savings, and 5,127 prototypes to develop his creation.   Could HPC technology have streamlined his success?



A May 2008 study on the Industrial use of HPC for innovation, sponsored by DARPA, DOE, and Council on Competitiveness, concluded that “HPC-based virtual prototyping and large-scale data modeling provide breakthrough insights that dramatically accelerate and streamline not only ‘upstream’ R&D and engineering, but also ‘downstream’ business processes such as data mining, logistics and custom manufacturing.”    And while, “the United States is the largest consumer of HPC…. some U.S. firms are not applying HPC as aggressively as they could.”





Going back to Dyson’s example: granted, a decade ago the use of HPC was limited to a few industries, but the HPC industry has grown dramatically, and so have the availability, access, and affordability of computing resources that can significantly shorten the time to discovery and accelerate innovation.  The combination of powerful, affordable workstations, servers, and software can allow a designer to innovate ideas for form, fit, and function more quickly and efficiently.  You might want to read the blog “Why You Need a Digital Workbench” by Thor Sewell.   And while cost might be mentioned as one of the barriers to HPC adoption, can you afford to persevere for 15 years and risk your savings, like Dyson did?  As a matter of fact, companies are innovating around providing HPC services to the “masses” by lowering the adoption barriers on a pay-as-you-go basis.  So while Dyson learned from each of his 5,127 so-called “failures” over 15 years and nearly his entire savings, HPC technology allows us to accelerate the journey from idea to reality and streamline the innovation cycle: from creating, to simulating, to analyzing, to visualizing.






*Other names and brands may be claimed as the property of others



We all have an attachment to legacy…



I have one myself: a 25-year-old sports car I adore driving and taking care of.  It is the legacy the car’s history carries that makes it meaningful for me to own.  It may require more attention and cost to maintain than the cars sold today with the latest technologies.  But the additional cost of ownership makes sense to me, as it is a hobby I love. 



In my work, I also see other examples of attachment to legacy.  I talk to people attached to mainframes and RISC/UNIX platforms.  IT managers pay extra to keep legacy up and running, and it is not cheap to run mainframes and RISC/UNIX platforms.  How do they justify the extra cost?  I’m sure they use a different justification than I do for keeping a beaten-up car running.  It’s not a hobby for them. 



I’ve come to this question since it is no longer true that mainframes and RISC/UNIX platforms are the only choice for enterprise workloads, even for the ones most important to businesses.  Increasingly, Linux and Intel architecture alternatives are accepted for such business-critical workloads.  Why?  Because they are both capable and cost effective.  Enterprises are always seeking to operate with the most efficient cost structure possible, and today’s economy makes this even more important.  Hence the choice.  Cost justification only gets easier. 



I joined a small team of Red Hat and Intel employees working on highlighting all of the ways that Red Hat and Intel technologies do wonders for enterprise workloads.  We continually ask whether it is still acceptable for an enterprise to pay extra for legacy.  We ask what the Red Hat/Intel technology combination lacks that inhibits replacement of legacy platforms.  We ask how we can change the perception so that we accelerate adoption of Linux and Intel technologies in enterprise datacenters. 



Our studies uncovered a major opportunity for consideration of RHEL running on Intel platforms as the choice for workloads conventionally run on RISC/UNIX.  We recently delivered two webinars, which you can download here: 

April 28 webinar:

May 14 webinar:



Anyone interested in the topic, go explore our web site, which is still in an early stage.  We will be adding content as we move forward.  Download the webinar content we just delivered and learn how Red Hat and Intel are teaming up. 

I am encouraging the Red Hat team to participate in this community as well. 


More to come… 


However, if you use virtualization, make sure it’s VMware and Intel. 



Virtualization is a proven way to reduce capital expenditures and operating costs, and in this turbulent economy, the right virtualization solution can help your business remain viable and competitive while keeping an eye on future growth.  Join VMware® and Intel to learn about advanced solutions you can employ today to save money while providing your business with the dynamic infrastructure it will need to succeed in the future.


Learn how to:

  • Reduce costs without sacrificing capacity or performance.

  • Improve flexibility and responsiveness to changing business needs.

  • Increase the efficiency and resiliency of your IT infrastructure.


Register today for the June 3, 2009 webcast at 9 a.m. PST

Today's video blog features Massimo Re Ferre', IBM Senior IT Architect, giving a chalk talk on some of the scenarios where you might look to "scale up" or "scale out" your virtual infrastructure. Check out this short video from VMworld Europe 2009:


In this installment on uses of server power management we continue the discussion on using this capability for other uses beyond server rack density.


Intel(r) Data Center Manager (Intel DCM) is a software development kit that provides real-time information to optimize data center operations.  It offers a comprehensive set of publish/subscribe event mechanisms that can form the basis of a sophisticated data center management infrastructure integrating multiple applications, where applications get notified of relevant thermal and power events and can apply appropriate policies.


These policies can span a wide range of potential actions:  dialing back power consumption to bring it down below a reference threshold or to reduce thermal stress on the cooling system.  Some actions can be complex, such as migrating workloads across hosts in a virtualized environment, powering down equipment or even performing coordinated actions with building management systems.
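The publish/subscribe pattern behind such policies can be sketched in miniature. This is a hypothetical illustration, not the actual Intel DCM API; every class, event, and node name below is made up.

```python
# Hypothetical sketch of a publish/subscribe power policy.
# This is NOT the Intel DCM API; all names are illustrative.

class EventBus:
    """Minimal in-process event bus modeling DCM-style thermal/power events."""
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, event_type, handler):
        self._subscribers.setdefault(event_type, []).append(handler)

    def publish(self, event_type, payload):
        for handler in self._subscribers.get(event_type, []):
            handler(payload)


class PowerCapPolicy:
    """Record a power-cap decision when a node crosses a reference threshold."""
    def __init__(self, threshold_watts):
        self.threshold = threshold_watts
        self.capped_nodes = set()

    def on_power_event(self, event):
        # A real policy would invoke the platform's management interface;
        # here we only record which nodes would be dialed back.
        if event["watts"] > self.threshold:
            self.capped_nodes.add(event["node"])


bus = EventBus()
policy = PowerCapPolicy(threshold_watts=350)
bus.subscribe("power", policy.on_power_event)

bus.publish("power", {"node": "rack1-srv07", "watts": 410})
bus.publish("power", {"node": "rack1-srv08", "watts": 220})
print(sorted(policy.capped_nodes))  # only the node above the threshold
```

More elaborate policies (workload migration, coordinated building-management actions) would hang additional handlers off the same event types.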


Intel DCM also provides inlet (front panel) temperature readings along with a historical record that can be used to identify trouble spots in the data center.  This information provides insights to optimize the thermal design of the data center.  The actions needed to fix trouble spots need not be expensive at all; they may involve no more than relocating a few perforated tiles or installing blanking panels and grommets to minimize air leaks in the raised floor.  Traditionally, the hardest part has been identifying the trouble spots, involving time-consuming temperature and airflow measurements. Intel DCM provides much of this data ready-made from operations. Typically this type of analysis is done by a consulting team at high cost, anywhere between $50,000 and $150,000 for a 25,000 square foot data center.  That analysis yields a single snapshot in time, which becomes gradually less accurate as the equipment in the data center is refreshed and reconfigured.


Deployment scaling can range from a small business managing a few co-located servers in a shared rack in a multi-tenant environment to organizations managing thousands of servers.


The event handling capability is a software abstraction implemented by the Intel DCM SDK running in a management console.  From an architectural perspective, given that the number of managed nodes can range into the hundreds, it makes more sense to implement this capability as software rather than firmware; Node Manager is implemented as firmware, and it typically controls one server. The choice of an SDK over a self-standing management application was also deliberate.  Although Intel DCM comes with a reference GUI to manage a small number of nodes as a self-standing application, it shines when used as a building block for higher-level management applications.  The integration is done through a Web services interface. Documentation for Intel DCM can be found in

So, one of the cool things about partnering with Intel on Cisco UCS is that Intel gave us a great platform to innovate with the Intel Xeon 5500 Series.  One area where we did this was to take advantage of Intel QuickPath technology to create our extended memory technology.  This Cisco exclusive technology allows our two socket server blades to access up to 384GB of memory, which provides a number of practical benefits that can radically shift the economics of your server infrastructure.


The first scenario is fairly simple and straightforward: the additional memory allows you to be more aggressive with your server consolidation/virtualization efforts by driving higher VM density on each physical server blade.  Fewer physical servers translates to both lower capex and opex.


To address this point, some customers have been increasing the number of sockets per server to gain more on-board memory.  This approach carries a couple of significant caveats.  The first downside is the higher cost of the server blade itself, but the second downside might be even worse: higher software licensing costs associated with the higher socket count.


Finally, the UCS gives you an interesting option where you have more modest workloads.  Because the UCS B-Series blade server supports more physical DIMM slots, customers can use less expensive 2GB or 4GB RDIMMs if they don't need the full 384GB per server.  For example, you can deliver the same 192GB of memory per server (the previous maximum) using 4GB RDIMMs, which are substantially less expensive than 8GB RDIMMs, while maintaining full flexibility to upgrade memory if future needs dictate.
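To make the economics concrete, here is a back-of-envelope sketch. The per-DIMM prices are invented placeholders, not quoted market prices; the point is only the shape of the comparison, assuming lower-density RDIMMs cost less per gigabyte.

```python
# Illustrative cost comparison for reaching 192GB in a blade with many DIMM slots.
# Per-DIMM prices are hypothetical placeholders, not real quotes.

PRICE_4GB_RDIMM = 120   # assumed price per 4GB RDIMM
PRICE_8GB_RDIMM = 400   # assumed price per 8GB RDIMM

cost_4gb_route = 48 * PRICE_4GB_RDIMM   # 48 x 4GB = 192GB
cost_8gb_route = 24 * PRICE_8GB_RDIMM   # 24 x 8GB = 192GB

print(cost_4gb_route, cost_8gb_route)   # the 4GB route is substantially cheaper
```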


To get more details on the Cisco UCS Extended Memory Technology, check out this white paper.

Manageability, security, and performance are always hot topics in the computing world. At times the focus shifts between them as needs and technologies change, but these areas have remained key vectors of enterprise computing for a long time. However, in many cases these usability vectors conflict with each other. IT managers’ desire for security and manageability may lead to extra applications and process hoops for end users, which can decrease performance. Increasing the ability to remotely and seamlessly manage a PC almost always adds security headaches that must be dealt with. Enterprise IT design is always about finding the right tradeoffs and improving the process over time.



One technology that has been around for quite a while to help improve security is IPsec (aka, IP Security). IPsec is a set of protocols for securing and authenticating IP packets by encrypting their contents in an end-to-end manner. Most people are familiar with IPsec as the underlying technology for facilitating Virtual Private Network (VPN) connections from the outside of an organization’s LAN to inside the network. IPsec secures the Internet to Intranet tunnel in this case.



Using IPsec to set up a VPN can be a bit of a pain because you have to key in an access code or password, and it’s far from seamless. On the IT manager’s side, this setup does not eliminate security problems because the VPN tunnel only secures the network pipe once it is established. There is nothing stopping the end user from browsing the web on their work computer or somehow exposing it to a virus before connecting to the corporate network in a secured way. This has a couple of downsides from a manageability perspective. First, security is compromised by potential infections transferred from an insecure network to the corporate network due to the lack of continuously active protection. Second, the manageability of this solution is lacking because enterprise systems outside of the corporate network are not manageable until the user manually connects to the VPN gateway.



So while using IPsec to help create a VPN connection provides functionality that is secure and provides outside-in access to the corporate network, it requires additional configuration by the end user, is not seamless for either user or administrator, and is generally provided by an additional application running on the system. This is all non-optimal.



Enter Microsoft* DirectAccess*. In Windows* Server 2008 R2 for servers and Windows* 7 for clients, Microsoft* will be supporting a seamless IPsec support layer called DirectAccess*. This provides the ability to integrate the encryption/authentication of IPsec directly into the operating system, so the end user connects securely, outside and inside the corporate network, to the systems and applications they need via IPsec. Because this is integrated into the OS, the setup of the security and connection details is more seamless from both the IT and end-user perspectives. Initial configuration is obviously required, and each IT organization must set up security policies to its own specifications, but once that is done the system is up and running.



Microsoft* implements this functionality at the OS level, so each application can have its own secure IPsec tunnel. This can provide secure access both outside and inside the corporate network. Until recently, using IPsec internally has not received much focus, but recent estimates suggest 80% of successful attacks come from internal threats, so encrypting and authenticating internal data is now in focus for IT administrators. Microsoft* DirectAccess* allows for this new seamless security model.



Now this all sounds well and good… but what’s the catch? Well, a key thing to note here is that IPsec is a highly CPU-intensive technology. Encrypting and decrypting IP packets in real time can easily swamp a CPU core when attempting to push much more than a few hundred megabits of network data. For a typical end-user system, a few megabits of data across a few IPsec connections will likely not cause much heartache, but for network servers hosting potentially thousands of simultaneous IPsec connections while trying to drive multiple gigabits of I/O, the performance results will be much more… uhh, what’s a nice way to say ‘unimpressive’?



In order to solve this issue, Intel networking products offload the computationally expensive encryption engine (AES-128) onto the LAN controller, while the IPsec configuration, management, policy creation, etc. all remain in the OS to keep usability simple. Intel offers both dual-port 1 and 10 Gigabit networking solutions that support not only solid performance on standard networking workloads and advanced virtualization features, but also the ability to offload IPsec in hardware to improve system performance under large IPsec I/O workloads.



For companies looking to enable IPsec in their network environment using DirectAccess*, there is the potential to improve security, reduce complexity, and enhance manageability of their end clients. They just need to remember that in order to make this all work seamlessly on the server side without choking off processing performance, offloading the IPsec workloads to I/O hardware will be a requirement.



Intel® Ethernet® can deliver this support in adapter or LAN-on-motherboard form factors while supporting a wide range of enterprise-class performance and virtualization features. So is this a way to improve security and manageability without impacting performance? It seems that way to me.





Ben Hacker


For more information on DirectAccess* --


I haven’t seen as hyped a term in the data center arena since…um…virtualization.  Everyone is talking cloud, promising cloud, and believing cloud.  But what exactly is this thing called cloud? Is it outsourcing services to a provider, the next generation of virtualization, or something completely different?  There are a lot of definitions, and everyone has opinions…strongly held opinions that led me to my Rolling Stones inspired title for this post (yep, you can sing along now).  Chip Chat decided to get to the bottom of the cloud story, so we were excited to spend some time recently with Intel’s queen of the cloud, Raejeanne Skillern.  Check out my conversation with her here.

Penguins are Fun … And Their New Servers Are Even Cooler!


Next week I will have the privilege of being a guest speaker at an online seminar hosted by Penguin Computing on how a Nehalem refresh can impact your HPC environment.  I will be joined by Matt Jacobs, VP of Worldwide Sales at Penguin Computing, and we will cover the performance and economic benefits of upgrading to the latest Penguin systems powered by the Intel Xeon 5500 family of processors (Nehalem). With dramatic improvements in performance, efficiency and density, you may be able to own the latest systems for less than the cost of operating your current systems. Learn how and get more details on this exciting new platform from Penguin and Intel. 


Join Matt and me live on May 20, 2009, 11:00 am - 12:00 pm (Pacific Standard Time).  Register Today




  • Learn more about Penguin Computing here

  • Learn more about the Intel Xeon Processor 5500 series in the Server Learning Center here

I was over in Cannes, France a few weeks back listening to some of the chalk talks at the Intel booth. Here's a short video with Guillaume Field, Enterprise Technologist with Dell. His chalk talk covers the basics of VMmark tiles and how to compare different platform configurations, such as 2-socket vs. 4-socket, and also between different server vendors. If you are benchmarking a virtualized environment, this is a must-see.



Thanks for watching

A moment of silence for the mini


I suppose I need to clarify which mini I speak of.  It isn't the car, and it is not the skirt.


The mini I refer to is that mid-sized server that runs the applications your company depends on.  There are quite a few flavors in this class, from the classic VAX with Cutler's first child, VMS (moment of silence), to the UNIX family: all those AIX boxes running on POWER processors, the Solaris bunch running on various flavors of SPARC, and HP-UX on PA-RISC and Itanium.

It is the twilight for the mini server.  Of course, like an Alaska summer, twilight may last a really long time.


My rationale for this position has to do with the size of the enterprise IT problem, and the capacity of the server.

Background: in the past there have been "tiers" of servers.  At the "low" end we have all those x86 boxes running variants of Windows and Linux.  In the middle we have the class above, and at the high-end, mission-critical level there have been mainframes, Superdomes, NonStop, and other run-the-world systems.  Application demand has also grown, but individual application growth has not matched the growth in server capacity.  The middle class is being squeezed.  Just check those TPC and SPEC numbers vs. SPARC.


What has changed:

  • The performance of the Xeon - Xeon-based x86 servers have eclipsed the performance of the "mini" architectures
  • The x86 OS is ready for prime time - Companies can run their largest applications on Xeon platforms with Windows or Linux
  • Xeon virtualization - Virtualization allows IT managers to fully utilize powerful hardware and optimize their data centers
  • Grid solutions - Grids and clouds provide near limitless scale with Xeon platforms, without the need for monster SMP solutions
  • Lead platform - The primary development, and first release, platform for many ERP and database providers has shifted to Xeon


Example: in 2002, a large company payroll system that I worked on required a 16-way mini platform to meet service levels (all data processed in less than 7 hours).  Today that same application fits easily into a four-processor Xeon platform.  By this time next year it should fit easily into a two-socket Xeon box.  The motivation for a "mini" server in this environment has vanished.


Almost every enterprise application today runs best on a Xeon processor-based server.   Customers building out new capacity are optimizing on a Xeon-based, virtualized architecture.   For web servers, database applications, and ERP systems, Xeon-based servers provide great price/performance and phenomenal performance.


If there is a soft spot in your heart for the mini, take a few minutes, visit the data center, and spend some quality time while you still can.


Instrumentation – sources of data and points of control – on processors, chipsets and subsystems are the foundation for innovative performance, manageability and energy efficiency advancements at the platform, rack and even data center level. The instrumentation delivered by Intel Xeon processor 5500 is the basis for enabling industry leading power capping solutions that help to squeeze the most out of every dollar spent and kilowatt consumed in the data center.



This short animation provides an overview of power capping and a number of use cases. Are you using or considering power capping?  Share your thoughts and experiences with this emerging instrumentation-based technology.








In a previous article we explored the implementation mechanisms for monitoring and controlling the power consumed by data center servers.  In this article we'll see that the ability to trim the power consumed by servers at convenient times represents a valuable tool to reduce stranded power and take maximum advantage of the power available under the existing infrastructure.  Let's start with a small example and figure out how to optimize power utilization in a single rack.



Forecasting the power requirements for a server over the product’s lifetime is not an easy exercise.  Server power consumption is a function of server hardware specifications and the associated software and workloads running on them. Also the server’s configuration may change over time: the machine may be retrofitted with additional memory, new processors and hard drives. This challenge is compounded by more aggressive implementations of power proportional computing: servers of a few years ago exhibited little variability between power consumption at idle and power consumption at full power.




While power proportional computing has brought down the average power consumption, it also has increased its variance significantly, that is, data center administrators can expect wide swings in power consumption during normal operation.


Under-sizing the power infrastructure can lead to operational problems during the equipment’s lifetime: it may become impossible to fully load racks due to supply power limitations or because hot spots start developing.  Extra data center power capacity needs to be allocated for the rare occasions when it might be needed, but in practice it cannot be used because it is held in reserve, leading to the term "stranded power."




One possible strategy is to forecast power consumption using an upper bound.  The most obvious upper bound is the plate power, that is, the power in the electrical specifications of the server.  This is a number guaranteed never to be exceeded.  Throwing power at the problem is not unlike throwing bandwidth at the problem in network design to compensate for a lack of bandwidth allocation capability and QoS mechanisms.  This approach is overly conservative because the power infrastructure is sized by adding up the assumed peak power of every server, as if all servers drew their peak simultaneously over the equipment’s lifetime, an exceedingly unlikely event.




The picture is even worse when we realize that IT equipment represents only 30 to 40 percent of the power consumption in the data center as depicted in the figure below.  This means that the power forecasting in the data center must not only include the power consumed by the servers proper, but also the power consumed by the ancillary equipment, including cooling, heating and lighting, which can be over twice the power allocated to servers.
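A quick sketch of what that share implies for sizing, with illustrative numbers: if servers draw 500 kW and IT is only 30 to 40 percent of the total, the facility must supply 2.5x to 3.3x the server load.

```python
# Facility power implied by the IT share quoted above (illustrative numbers).

def facility_power(it_load_kw, it_share):
    """Total facility draw when IT equipment is `it_share` of the total."""
    return it_load_kw / it_share

print(round(facility_power(500, 0.40), 1))  # 1250.0 kW when IT is 40% of total
print(round(facility_power(500, 0.30), 1))  # 1666.7 kW when IT is 30% of total
```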


Establishing a power forecast and sizing a data center based on nameplate will lead to gross overestimation of the actual power needed and unnecessary capital expenses[1]. The over-sizing of the power infrastructure is treated as insurance for the future because of the large uncertainty in the power consumption forecast; it does not reflect actual need.




Power allocation in the data center.


A more realistic approach is to de-rate the plate power to a percentage determined by the practices at a particular site.  Typical numbers range between 40 percent and 70 percent.  Unfortunately, these numbers represent a guess averaged over a server’s lifetime and are still overly conservative.
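The difference between the sizing approaches is easy to see with illustrative numbers for a 20-server rack: nameplate, de-rated nameplate at a mid-range 55 percent, and a bound built from a measured per-server peak plus headroom. All wattages below are assumptions for illustration, not measured figures.

```python
# Comparing rack power budgets under the three approaches discussed
# (all wattages are illustrative, not measured figures).

SERVERS = 20
PLATE_W = 650          # nameplate watts per server
DERATE = 0.55          # a mid-range site de-rating factor
MEASURED_PEAK_W = 310  # observed per-server peak from a power history
HEADROOM = 1.2         # 20% safety margin on the measured peak

nameplate = PLATE_W * SERVERS
derated = PLATE_W * SERVERS * DERATE
measured = MEASURED_PEAK_W * SERVERS * HEADROOM

print(nameplate, round(derated), round(measured))  # 13000 7150 7440
```

Note how the history-based bound lands near the de-rated figure but is grounded in observation rather than a site-wide guess.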


Intel(r) Data Center Manager provides a one year history of power consumption that allows a much tighter bound for power consumption forecasting.  At the same time, it is possible to limit power consumption to ensure that group power consumption does not exceed thresholds imposed by the utility power and the power supply infrastructure.




Initial testing performed with Baidu and China Telecom indicates that it is possible to increase rack density by 40 to 60 percent using a pre-existing data center infrastructure.




We will explore other uses in subsequent articles such as managing servers that are overheating and dynamically allocating power to server sub-groups depending on the priority of the applications they run.

[1] Determining Total Cost of Ownership for Data Center and Network Room Infrastructure, APC White Paper #6; and Avoiding Costs from Oversizing Data Center and Network Room Infrastructure, APC White Paper #37.

I was thinking about what to write in my next blog and what I could share beyond what I have written previously about Intel Vs RISC in terms of TCO, performance and the customers that are choosing to move.


Luckily I didn't have to think too long on a Friday morning, as a topic came to mind instantly. There are numerous articles flying around this morning that picked up on the Oracle comments yesterday about how SPARC-based systems compare to Intel. Thanks for providing me with an appropriate topic.


So in case you missed it, there was a question and answer session with Larry Ellison. When asked about SPARC, this was the reply "SPARC is much more energy efficient than Intel while delivering the same performance on a per socket basis. This is not a green issue, its an economic issue. Today, database centers are paying as much for electricity to run their computers as they pay to buy computers. SPARC machines are much less expensive to run than Intel machines"


1) SPARC more energy efficient than Intel?  Seriously, in what parallel universe does that exist?

SUN continues to use watts per thread as a measure of energy efficiency. The recognized industry standard benchmark for measuring energy efficiency is SPECpower, and I don't see any SPARC-based results among the 91 results published. The absence of a result says something very clear to me: no story.


These UltraSPARC T2+ systems get loaded with a lot of memory to deliver their results, so when you look at overall system power (what people care about), they are not as energy efficient as Intel-based systems.


SPECpower is effectively based on SPECjbb2005, so another way of looking at this is to compare the SPECjbb2005 results for a 4-socket UltraSPARC T2+ system and a Xeon 7400 system. The 4S UltraSPARC T2+ delivers 693k BOPs while the Xeon 7400 delivers 532k BOPs. So you conclude that SPARC is better than Xeon? That would be the wrong conclusion.

The UltraSPARC T2+ system would consume 1525 watts vs. the Xeon 7400 at 816 watts. If you look at BOPs per watt (another way of combining energy efficiency and performance), you see that the Xeon 7400 is 43% more energy efficient. A similar comparison with the Xeon 5400 (I haven't even talked about our latest Xeon 5500, Nehalem) shows it to be up to 77% more efficient than the UltraSPARC T2+.
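The arithmetic behind that 43% figure, using the numbers quoted above:

```python
# BOPs-per-watt comparison using the figures quoted above.
bops_sparc, watts_sparc = 693_000, 1525   # 4S UltraSPARC T2+
bops_xeon, watts_xeon = 532_000, 816      # 4S Xeon 7400

eff_sparc = bops_sparc / watts_sparc      # ~454 BOPs per watt
eff_xeon = bops_xeon / watts_xeon         # ~652 BOPs per watt

advantage = eff_xeon / eff_sparc - 1
print(f"Xeon 7400 delivers {advantage:.0%} more BOPs per watt")
```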


And lastly, before I forget to mention it: the 4S UltraSPARC T2+ had 128GB of memory and costs over $150,000 for the system, while the Xeon 7400 based system had 64GB of memory and costs around $32,000.


2) SPARC delivers the same performance on a per-socket basis?

The 2S Xeon 5500 has performance leadership over the 2S UltraSPARC T2+ across a wide range of benchmarks: up to 70% more performance and up to 60% lower system cost. The 4S Xeon 7400 has price/performance leadership over the 4S UltraSPARC T2+; the UltraSPARC T2+ results were achieved with a system loaded with lots of memory, which drives the cost up to 3-4X that of the Xeon 7400 system.


3) SPARC machines are less expensive to run? I can't for the life of me work this one out!

Hardware systems based on Intel have leading price/performance (read: cheaper) and lower energy needs (so the electricity bill is lower), and any software product with a per-core license structure is less expensive on a Xeon system than on an 8-core UltraSPARC T2+ (which also has a higher multiplier per core).


That's all for now, folks. I just wanted to share some data on why I know that SPARC machines are much MORE expensive to run than Intel machines.


Virtually Everything...

Posted by K_Lloyd May 6, 2009

"We are virtualizing."  I hear that at every customer, every day.  I am not sure where virtualization is on the hype curve, but I don't think it is anywhere near slowing down.  I am very glad to be past the "Dilbert" and "in-flight magazine" era.  Customers seem to have a really solid command of what they want to virtualize and why (not to imply that all the questions have been answered).

The latest Intel servers - the Xeon 7400 processor series in the 4-socket family, and the incredible Xeon 5500 (Nehalem) processor series in the 2-socket family - deliver more than sufficient capacity for sweeping data center virtualization; i.e., very few enterprise applications are too big for a VM on one of these platforms.


I hear three reasons from customers for virtualization (in order of emphasis):


1) To improve efficiency.  Most enterprise servers are only about 10% utilized (and many of these are old, slow, inefficient servers).  Applications are partitioned onto individual servers for historical reasons.  Combining them on powerful modern servers can dramatically reduce footprint, power, server costs, and licensing costs.


2) To improve flexibility.  Virtualization allows "servers" (think VMs) to be easily moved from one platform to another - for sizing, for maintenance, for almost any reason.  With Intel Flex Migration technology and recent versions of VMware ESX, customers can pool Intel servers across multiple generations and families: live migration from your Xeon 5100 processor-based server to a cozy VM on a 4-socket, 24-core Xeon 7400 based server.


3) To improve reliability.  Virtualization provides a vehicle for managing hardware failures, allowing near instantaneous fail-over in the event of a server loss.


Virtualization has moved out of the lab and become a "best known method" for doing IT right.


Intel points to three focus areas for servers: efficiency, performance, and virtualization.  I think virtualization's place in this triad is fleeting.  It only remains because changes are still being made to the platform to support virtualization.  Soon virtualization will become just another part of the stack, like the operating system.  Nobody claims their processor is optimized for running an operating system...  Even today, choosing the best processors for virtualization is more about efficiency and performance than about virtualization features.  Fortunately - as I do work there - Intel has a solid lead on both efficiency and performance.

If your company needs new servers, this is a great time to be in the market.  Intel based Xeon® 5500 (Nehalem) servers that were introduced only a month ago have been arriving at customer sites all over the world and they provide some very compelling performance and energy efficiency benefits.  Here are 3 key items to consider before buying your next server.  The actual order of importance of these items may vary depending upon your business needs.


1.  Performance.  This is still a primary reason why new servers are purchased.  The best way to measure performance is to actually run your applications on the server you are considering.  If that is not possible or feasible, the next best choice is to compare server performance using a suite of benchmarks.  Some of the more common benchmarks that IT departments use to compare server performance are:

a.       Virtualization performance using Vmware VMmark:

b.      Energy efficiency using SPECpower_ssj2008:

c.       Integer performance using SPECint_rate_base2006:

d.      Floating point performance using SPECfp_rate_base2006:

e.       Web server performance using SPECweb2005:

f.        Java performance using SPECjbb2005:

After looking at these benchmark results, one thing you’ll notice is that the Xeon® 5500 processors provide phenomenal performance…often up to 2x the previous generation!


2.   Server Hardware Choices 

a.       Processor.  The processor is one of the most important choices in the server.  Performance, features, power envelope, and price all need to be considered.  From a power perspective, there are three power envelopes available for Xeon® 5500 server processors (95W, 80W and 60W).  In addition, there are 130W Xeon® 5500 processors, but these are primarily used for workstations.  If you are in a power-constrained environment, it may be worthwhile to buy a lower-power processor to reduce energy consumption.  Depending upon the processor SKU you are interested in, it is possible to get the exact same performance/frequency with a processor that simply consumes less power (i.e., the Xeon L5520 2.26GHz 60W instead of the Xeon E5520 2.26GHz 80W).  The L in front of the processor number denotes low-voltage processors that consume less power.   

b.  Power supply.  Choosing a power supply with a high efficiency rating is one of the easiest choices you can make to reduce power consumption.  Choose a power supply that is at least 80% efficient; some of the newer power supplies exceed 90%.  The higher the percentage, the better.
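The effect of efficiency on the power bill is easy to sketch: the power drawn from the wall is the DC load divided by the supply's efficiency. The load and efficiency figures below are illustrative assumptions, not measurements of any particular supply:

```python
def wall_power(dc_load_watts, efficiency):
    """AC power drawn from the wall for a given DC load and PSU efficiency."""
    return dc_load_watts / efficiency

# Illustrative: a 300 W DC load on an 80%-efficient vs. a 92%-efficient supply
print(wall_power(300, 0.80))  # 375.0 W at the wall
print(round(wall_power(300, 0.92), 1))  # 326.1 W at the wall
```

On this hypothetical load, the higher-efficiency supply saves roughly 49 W continuously, before any cooling savings are counted.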

c.  Memory.  Every DIMM installed in the server consumes power.  In general, the fewer the DIMMs used, the less power the server will consume.  For a given memory capacity, such as 24GB, choose six 4GB DIMMs instead of twelve 2GB DIMMs.  2GB and 4GB DIMMs are almost at price-per-bit parity, but memory power consumption will be much lower with fewer DIMMs installed.
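A quick sketch of the fewer-larger-DIMMs point. The per-DIMM wattage figures below are assumptions for illustration only (real figures vary by DIMM type and vendor; check the datasheet):

```python
# Illustrative per-DIMM power draw in watts (assumptions, not vendor specs)
WATTS_PER_DIMM = {"2GB": 5.0, "4GB": 6.0}

def memory_power(dimm_size, count):
    """Total memory power for `count` DIMMs of the given size."""
    return WATTS_PER_DIMM[dimm_size] * count

# 24 GB built two ways: twelve 2 GB DIMMs vs. six 4 GB DIMMs
print(memory_power("2GB", 12))  # 60.0 W
print(memory_power("4GB", 6))   # 36.0 W
```

Even though a larger DIMM draws somewhat more power individually, halving the DIMM count still cuts total memory power substantially under these assumptions.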

d.  Add-in boards.  Compare the power consumption of add-in boards such as 10GbE adapters, Fibre Channel adapters and other I/O cards.  Also, do you really need a Fibre Channel card these days?  FCoE (Fibre Channel over Ethernet) over a 10GbE adapter is definitely a cost-effective and power-efficient way to get access to your storage array.


3.  To virtualize or not to virtualize?  Virtualization is no longer just a buzzword; it is being used by many companies across diverse industries today.  Fundamentally, it is an excellent way to consolidate many applications onto a single server, thereby increasing the utilization, value and energy efficiency of every server purchased.  Definitely a top item to consider.


What about your business?  What items do you consider before purchasing servers to maximize energy efficient performance?

Ever find yourself in a new location staring hopelessly at a map, wondering where you are?  Then to make matters worse, you call someone on your cell phone and can’t describe where you are so they can help? I think we’ve all been there more than once…


Since the Intel Xeon® 5500 processors launched in March, I’ve been getting a bunch of questions (including from the Ask An Expert community in the Server Room) about DDR3 memory and how best to configure your server platforms to optimize performance.  Many times, folks are having a hard time just getting the conversation started, so here are a few tips to get you going.  The good thing is that DDR3 memory picks up where DDR2 memory leaves off in terms of speed, so you know you’ll be moving forward!


  1. Figure out how much memory you need.  With multi-core CPUs now mainstream in servers, you need enough memory to keep these compute engines fed.  One metric you might look at is “GB per CPU core” or “GB per socket” for your existing servers, and then project your memory requirements from there.
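The "GB per core" projection is simple arithmetic; the ratio and core counts below are hypothetical examples, not recommendations:

```python
def projected_memory_gb(gb_per_core, cores_per_socket, sockets):
    """Project total memory need from an existing GB-per-core ratio."""
    return gb_per_core * cores_per_socket * sockets

# Hypothetical: your current servers run at 2 GB/core, and the new
# platform is a two-socket quad-core system
print(projected_memory_gb(2, 4, 2))  # 16 GB
```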


  2. Start with DDR3 1066 memory, as that will deliver a good balance of memory performance and capacity.


- If you need more bandwidth (and are willing to give up some capacity), use DDR3 1333

- If you need maximum capacity (and are willing to give up some bandwidth), use DDR3 800


  3. Match your CPU to your memory speed, because faster memory does require a faster processor.  Check out page 11 of the product brief for the quick reference table.


  4. Wherever possible, fill as many memory channels as possible, and populate all channels evenly (same type, size and number of DIMMs).


- Most two-socket Xeon® 5500 platforms have a total of 6 memory channels, so aligning your memory requirements to a multiple of 6 GB will optimize memory performance for most application environments.

- However, you can mix and match memory types if your requirements call for something that is not a multiple of 6.
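A small helper for the multiple-of-6 guidance above, rounding a raw requirement up to the nearest capacity that spreads evenly over all channels (the 6-channel default matches the two-socket platforms discussed; other platforms may differ):

```python
import math

def balanced_capacity(required_gb, channels=6):
    """Smallest capacity >= required_gb that divides evenly across all channels."""
    return math.ceil(required_gb / channels) * channels

print(balanced_capacity(16))  # 18 GB -> 3 GB per channel
print(balanced_capacity(24))  # 24 GB already fits evenly
```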


  5. For server application environments, always go with ECC-supported memory.  Decide between Registered DIMMs (RDIMMs) and Unbuffered DIMMs with ECC (ECC UDIMMs).


- RDIMMs provide the greatest flexibility across DIMM sizes and availability

- ECC UDIMMs provide a lower-cost alternative if you are using 1 GB or 2 GB DIMMs



You will still want to check with your system vendor on the specifics, such as the memory configurations, DIMM types and options supported for a given server, but hopefully this helps point you in the right direction.


If you are still lost, ask me a question on this blog or Ask An Expert in the Server Room.


My name is Steve Thorne, and this is my first blog post in The Server Room. I’m the product line manager for the Intel Xeon processor 5000 family, and I’m based out of our Hillsboro, Oregon facility. I’ve been looking forward to this blog post for quite some time, since I’ve been meeting with a wide variety of customers over the past few weeks.



It’s been just over a month since we introduced the Intel Xeon processor 5500 series (the processor formerly known as “Nehalem-EP”). We are certainly pleased with the response from the industry at this point. Below you will see some of my observations about what has transpired over the first 30 days of release. At the same time, I invite you to share some of your stories about recent installations of the Xeon 5500. Where is it being used? What kind of environments are you using it in? What kind of improvements have you observed in your deployments?


The industry response has been extremely encouraging to me. Our marketing teams spent more than three years diligently preparing for the successful introduction. Some of my observations from the first month include:


·         The list of vendors that support the Xeon 5500 continues to grow. We started with over 70 system manufacturers on March 30, 2009. And on April 14, 2009, Sun Microsystems introduced a new line of x64 blade servers, rack servers and workstations powered by the Intel Xeon processor 5500 series. Of particular interest is the Sun Blade X6275 server module. You can find more info at:


·         I attended our launch event in Santa Clara on March 30, 2009. While at the event, I was pleasantly surprised by the adulation from the customers who were in attendance. In particular, our friends in the Digital Content Creation (DCC) industry are eager to apply the capabilities of the Xeon 5500 for movie special effects and animated features. Being a father of three school age children, I’ve always been fond of our products’ role in the moviemaking process. It’s fun to take your kids to the theater and show them a concrete example of how these incredibly complex processors are used to generate chuckles and special effects in movies ranging from “Cars” to “Monsters vs. Aliens.”


·         Positive recognition has been accorded to the Xeon 5500 from a wide variety of independent press reviewers and articles. A recent internet search revealed almost 875 news references. Recently, George Ou of DailyTech published an interesting article titled “Server roundup: Intel “Nehalem” Xeon versus AMD “Shanghai” Opteron”. You can read the entire article at:


·         On May 4, 2009 two independent financial analysts upgraded Intel Corp. stock. Both analysts attributed part of their positive outlook to the introduction and ramp of Xeon 5500 servers.


·         On April 8, 2009 the new Xeon 5500 was a centerpiece of our IDF event in Beijing. In his enterprise key note, Pat Gelsinger said the “Nehalem” microarchitecture has received worldwide acclaim.


·         Customer deployments are underway at leading data centers around the globe – particularly in High Performance Computing (HPC) applications. The HPC accounts encompass university research labs, commercial research and development and large scale clusters. These HPC customers are pushing the outer limits of scientific discovery and innovation, and the best examples are yet to come!


Personally, I was proud to be a part of the introduction of the Xeon 5500. There is a strong sense of satisfaction when the silicon is deployed in real-world environments. And in case you hadn’t heard, we are busy getting ready for the next addition to the Xeon family, codenamed “Westmere-EP.” We expect this new 32nm processor to be socket and pin-compatible with the Xeon 5500, extending the family to six CPU cores per socket. Stay tuned for this release in 2010!



Ah, the good old days.... It was normal to have a discussion with a friend or coworker along the lines of, "We just bought a 1.2 GHz Pentium III server; it runs circles around that 500 MHz system we bought a few years back."  Everyone nods in approval, all rightly assuming that of course bigger is better and frequency directly relates to performance.  Of course, things are more complex now with multiple cores, multiple threads and differing architectures (Power, SPARC, Xeon, Opteron).  Is a dual-core Power6 at 4.7 GHz faster than a Xeon at 3 GHz? Is a 1.4 GHz processor with 8 threads/core better than a 2.8 GHz quad-core with 2 threads per core?  Tough to know off the top of your head these days.  One thing is clear: the Intel Xeon processor 5500 series leads in performance per processor, regardless of the frequencies of processors available today.


In comparing the Intel Xeon processor 5500 series (Nehalem) architecture with what's available from IBM, Sun and AMD today, you see a wide variety of CPU offerings with dramatically differing specs.  However, when you look at these systems with a common number of cores, you can see the differences in per-core performance on the industry standard benchmark SPECint_rate_base2006.


Processor             Frequency    Total Cores    SPECint_rate_base2006
Intel Xeon X5570      2.93 GHz     8              240
AMD Opteron 2393SE    3.1 GHz      8              122
IBM Power6            4.7 GHz      8              206
Sun UltraSPARC T2     1.4 GHz      8              73.0

(Each result is for an eight-core configuration; full platform details are listed at the end of this post.)



What a contrast!  Chip designers today make multiple trade-offs to eke out the most performance in today's server systems.  What we see is that the Intel Xeon processor 5500 series balances all of these quite well.  Some competitors have much higher frequencies, which don't necessarily translate into more performance; others have gone with a larger number of threads, but have low performance per thread.  Even processors with similar specs can perform quite differently.  Of course this is only one benchmark, but if you look at others you will find similar differences.
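The per-core contrast is easy to compute from the published scores and core counts cited in the reference section below:

```python
# SPECint_rate_base2006 scores and core counts from the cited configurations
systems = {
    "Intel Xeon X5570":   (240, 8),
    "AMD Opteron 2393SE": (122, 8),
    "IBM Power6":         (206, 8),
    "Sun UltraSPARC T2":  (73.0, 8),
}

for name, (score, cores) in systems.items():
    print(f"{name}: {score / cores:.2f} per core")
```

Dividing out the cores makes the point directly: the 2.93 GHz Xeon delivers roughly twice the per-core throughput of the 3.1 GHz Opteron on this benchmark, despite the lower clock.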


What this means for most IT buyers is that it's more difficult to understand how all the whiz-bang features the marketers throw at you translate into value.  My advice: really understand what kinds of workloads are important to you, and focus on performance from the industry standard workloads that best represent them.  Remember that bigger numbers on the spec sheet aren't always better when it comes to server performance.  Check your figures!


SPECint_rate_base2006 performance data reference:

Intel® Xeon® processor X5570 based platform details

Fujitsu PRIMERGY* TX300 S5 server platform with two Intel Xeon processors X5570 2.93GHz, 8MB L3 cache, 6.4GT/s QPI, 48 GB memory (6x8 GB PC3-10600R, 2 rank, CL9-9-9, ECC), SUSE Linux Enterprise Server 10 SP2 x86_64 Kernel, Intel C++ Compiler for Linux32 and Linux64 version 11.0 build 20010131. SPECint_rate_base2006 score 240,

AMD Opteron 2393SE based platform details

Supermicro A+ Server 1021M-UR+B, AMD Opteron 2393 SE 3.1 GHz, 6MB L3 cache, 32 GB memory (8x4 GB DDR2-800, CL5, Reg, Dual-rank), SuSE Enterprise Server 10 (x86_64) SP1, Kernel, PGI Server Complete Version 7.2, PathScale Compiler Suite Version 3.2, SPECint_rate_base2006 score 122,

IBM Power6 based platform details

IBM system p570 (4.7 GHz, 8 core), 32MB L3 cache, 64 GB memory (32x2 GB)DDR2 667 MHz, IBM AIX5L V5.3, XL C/C++ Enterprise Edition Version 9.0 for AIX, SPECint_rate_base2006 score 206,

Sun UltraSPARC T2 plus based platform details

Sun SPARC Enterprise T5120, Sun UltraSPARC T2 1.417 GHz, 4MB L2 cache, 64 GB memory (16x4 GB), Solaris 10 8/07 (build s10s_u4wos_12b), Sun Studio 12 (patch build 2007/08/30), SPECint_rate_base2006 score 73.0,

Powerful Technology. Compelling Savings. Competitive Business.


Sharing others' success is always really cool.  Here are my Top 10 success stories, captured from the Intel customer references page.  These leading IT shops from around the world have successfully taken advantage of new technology to transform their business: saving money, boosting performance, driving productivity and increasing competitiveness.


Did I miss your favorite? Want to nominate one for my Q2 list? Or just tell me your story – I'd love to hear it.





1.        Station Casinos virtualizes IT: To help control IT costs and ensure a robust customer experience, the IT group virtualized its infrastructure by running VMware* virtualization software on Dell PowerEdge* servers equipped with the Intel® Xeon® processor 5100 and 5400 series. So far, the company has eliminated almost 100 physical servers, avoided $190,000 in hardware acquisition costs, and accelerated the deployment of new services from weeks to hours. Virtualization enables Station Casinos to continue to deliver fun and relaxation while keeping the company successful even in tough economic times.


2.        Thomson Reuters:  With virtualization software running on rack servers based on the Intel Xeon processor 7300 series, the Thomson Reuters IT team is achieving a consolidation ratio of 18:1, reducing power requirements and freeing up space to absorb future growth. “We expect to increase that ratio to 25:1 when we move to the six-core Intel Xeon processor 7400 series in the near future,” says Crowhurst. The company’s power requirements are growing by nearly 20 percent every year as business expands. By optimizing the power and cooling strategy in its new data center and concentrating more blades in less space, the IT team was able to increase power density from 100 watts per square foot to 150 watts per square foot. As a result, there is less need for new data center construction, greatly reducing future environmental impacts.


3.        TRW: Achieved server consolidation ratio of 20:1, Increased CPU utilization from 40% to 85%, Reduced server deployment times from two weeks to one to two days, Cut total annual power and cooling costs by 70% to 80%, Saved 50% in annual datacenter cooling costs in Malaysia alone, Enabled one IT staff member to manage up to 50 servers, Saved 80% on the potential cost of additional UPS systems across the datacenters.


4.        Business & Decision Group: Power consumption was reduced by approximately 30 per cent compared to the previous generation of processors. The pure performance gains and lower energy consumption helps us deliver new solutions for our customers and will lead to a return-on-investment in less than one year. Could gain virtualization rates of 20:1 and with a processor load slightly below 55 percent.


5.        BMW: The Intel® Xeon® processor 7300 series performed 2.75-3X faster than the installed RISC-based servers. Based on the Intel® Core™ microarchitecture, it is manufactured using new materials such as hafnium-based high-k gate dielectrics and metal gates, which significantly reduce electrical leakage.


6.        Carnegie: Reduced yearly energy costs by approximately SEK 1 million (USD 167,000) thanks to an estimated 1.1 million kWh annual saving from the server consolidation. Carnegie replaced 100 of its legacy servers with 16 HP ProLiant 380 G5* servers powered by the Quad-Core Intel Xeon processor 5300 series, creating approximately 140 virtual servers in the process. Thanks to the reduction in physical servers, it also shelved its plans for extra cooling equipment and saved SEK 10 million (USD 1.7 million) by avoiding the need to physically rebuild its data centre.


7.        Kelley Blue Book: Refreshing aging servers with new Dell* servers based on multi-core Intel® Xeon® processors enables Kelley Blue Book to accelerate the performance of key applications by up to 50 percent, increasing business agility. Loading and processing business warehouse data was taking 16 to 20 hours each time; now it takes half that. Server consolidation ratios of up to 15:1 and reduced energy costs with the new hardware are saving KBB approximately $10,000 each month in power and cooling.


8.        PLAY: Processing all of PLAY’s historical roaming mobile transactions for 2008 to make the data available for the Optiprizer Windows user application takes just 44 minutes running on the Intel Xeon processor 5500 series, compared to 102 minutes on the Intel Xeon processor 5400 series – roughly 2.3 times faster. Similarly, complex business intelligence tasks can now be performed twice as quickly.


9.        Yahoo:  Upgraded their Mission Critical Oracle database with Intel Xeon 7300 based servers. Yahoo is able to support a 1.4-petabyte database with 16 servers without any additional training or operating costs, while cutting the time to run the most demanding queries by 93% (20 hours to 73 minutes).


10.     Turtle Entertainment: Europe’s largest online gaming community supports its 875,000 members with Intel Xeon processor 7400 based servers.  Consolidation onto larger servers reduces network, power/cooling and space costs and enabled a 35% reduction in TCO while improving their customers' gaming experience (the time to load a web page went from 177 milliseconds to 72 milliseconds).

Of course changing a light bulb is easier. But did you know that the power savings from replacing a single server are about equal to changing three light bulbs, and the economics of replacing either are similar?







                      Old       New       Savings
Light bulb            60 W      13 W      47 W
Server (peak power)   394 W     244 W     150 W
Server (idle power)   226 W     82 W      144 W

("Old" = incandescent bulb / older server; "New" = compact fluorescent / new server.)






- Energy Star estimates that replacing a light bulb with a single compact fluorescent can save $30 over its lifetime and pay for itself in 6 months.

- Intel estimates that replacing 9 racks of older servers with just one rack of new servers can save up to $765,000 over four years and pay for itself in as few as 8 months.


While many of us no longer question replacing older incandescent light bulbs with more energy-efficient compact fluorescents for economic and eco-friendly reasons, many businesses retain older servers simply because they still work. About three months ago, IDC shared with us an estimate that there will be about 32 million servers supporting businesses around the world in 2009, and that about 40% of them are more than four years old (making them single-core processor technology). That is a lot of old single-core technology.


These single-core servers take up a lot of space, resources and power/cooling infrastructure.  Newer servers consume less power (about 150W on average, based on test results with the industry standard SPECpower benchmark), deliver more performance (up to 9x), come with a new warranty, and support technologies that enable consolidation, which can reduce OS, application and other per-server costs.  The combination of these savings, balanced against the cost and effort to replace the servers, migrate applications and validate the new environment, can deliver a rapid payback and dramatic savings.  You can estimate the savings yourself using this server refresh savings estimator.
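The payback arithmetic behind an estimator like that reduces to upfront cost divided by monthly savings. The dollar figures below are hypothetical placeholders, not Intel estimates:

```python
def payback_months(upfront_cost, monthly_savings):
    """Months until cumulative savings cover the refresh cost."""
    return upfront_cost / monthly_savings

# Hypothetical: $120,000 for a rack of new servers that trims
# $16,000/month in power, cooling, space and per-server licensing
print(payback_months(120_000, 16_000))  # 7.5 months
```

A real estimator would fold in migration labor, warranty terms and utilization changes; this sketch only shows the break-even shape of the calculation.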


And since power per server is lower, you don’t need to replace the rack infrastructure (unless you want to) … similar to how you don’t need a new light fixture for compact fluorescent light bulbs.


While it will take more work to change your server than a light bulb, the additional work is well worth the effort, as many customers have learned. Learn more about server technology in the new Server Learning Center located here.




How many engineers does it take to change a light bulb? … a server?






Here’s the final follow-up post in my 10 Habits of Great Server Performance Tuners series. This one focuses on the tenth habit: Compare apples to apples.  




Much of performance analysis involves comparisons: to baselines, to competitive systems, or to expectations. It is surprisingly easy to make an inappropriate comparison. I have seen it done many times and certainly been guilty of it myself. So the final habit to be aware of is to always compare apples to apples.


Make sure that the two systems or applications you are comparing are being run the same way, with the same configuration, under the same conditions. If there is a difference, understand (or at least hypothesize about) its impact on performance. Dig into the details of the experiments – for some ideas on what to look for, see habit 7.
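One habit-forming trick is to record each system's configuration as key/value pairs and diff them mechanically before trusting any comparison. The field names below are illustrative, not a standard schema:

```python
def config_diff(a, b):
    """Return the settings that differ between two system configurations."""
    keys = set(a) | set(b)
    return {k: (a.get(k), b.get(k)) for k in sorted(keys) if a.get(k) != b.get(k)}

# Hypothetical configs: identical except for memory speed
baseline  = {"cpu": "X5570", "memory_gb": 48, "dimm_speed": "DDR3-1333"}
candidate = {"cpu": "X5570", "memory_gb": 48, "dimm_speed": "DDR3-1066"}
print(config_diff(baseline, candidate))
```

An empty diff doesn't guarantee an apples-to-apples run (firmware, OS patches and ambient conditions matter too), but a non-empty one flags exactly where to hypothesize about performance impact.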


You should always make this a habit – but it is especially important when you are making decisions based on the comparison. Double-check your work in this case!


This series has given you 10 of the habits I have learned in my years tuning server performance. Of course there are other tricks of the trade and best-known methods (BKMs), which I will try to cover in future blogs. But making these habits part of your routine will help make you a better, more consistent performance tuner. Good luck with your optimization projects!

About a year ago, we started the “Ask An Expert” Forum and the response from the community was overwhelming.  We get questions ranging from general product awareness to product selection guidance to technical support questions. 

The Server Learning Center complements “Ask An Expert” and is designed to help the community find answers to questions more quickly and help us serve you better.  We hope you find both the information and the layout informative and easy to use.

We have started this forum with a focus on Nehalem (Intel Xeon 5500). However, if we are missing something (on Nehalem or anything else), let us know and we will work to fill the gaps.  If this forum proves popular, we can surface the most-requested items first, so that the content your peers find most useful rises to the top.

Click around. Check out the content. Provide your point of view. Share your experiences. Invite others to join the conversation. And most importantly, let us know how we can make this a more valuable forum for you.

Give us feedback. And by the way … welcome to class!

Professor Chris Peters (and the rest of the Intel Experts)


Find the Server Learning Center here

Talk to the Professor

Follow the Professor's tweets

