
Lucky 13!

Posted by VISHAL SANGHVI Mar 30, 2010

As of today, the Intel Xeon processor 5600 series and the two-socket Intel Xeon processor 7500 series extend our performance leadership with 13 new world records for two-socket x86-based server and workstation products.


These results and benchmarks represent a lot of hard work and collaboration between Intel and the industry to bring these great products to market.


With the introduction of these new products, we have leadership in the areas that matter most to end users and IT decision makers: energy efficiency, virtualization, and enterprise performance.


Specifically, the IBM* x3650 M3 (a single-node server system) delivered 2,927 overall ssj_ops/watt, up to a 42% gain over the previous-generation Intel Xeon processor 5500 series, and the IBM* dx360 M3 (a multi-node server system) delivered 3,038 overall ssj_ops/watt, up to a 31% gain over the previous generation.


Fujitsu's* PRIMERGY* RX300 S6 two-socket server with two Intel Xeon L5640 processors meets the needs of customers who want the performance of the Intel Xeon processor X5570 at up to 30% lower platform power.


Fujitsu's PRIMERGY* RX300 S6 system also established a world record for ERP performance (a Sales and Distribution SAP* SD 2-Tier ERP 6.0 Unicode score of 4,860 benchmark users, up to a 27% boost over the previous-generation result) and a world record for web serving performance (a SPECweb2005* score of 104,422, up to a 25% boost over the previous-generation result).


Cisco's UCS B250 M2 servers powered by two Intel Xeon X5680 processors set a world record for virtualization performance with a VMmark* score of 35.83 at 26 tiles, up to a 42% performance gain over the previous-generation product. Cisco's UCS B200 M2 platform delivered record scores on the high-performance computing benchmarks SPEComp*Mbase2001 and SPEComp*Lbase2001, and the UCS B250 M2 platform also delivered a world record on the SPECjAppServer2004* benchmark.


Dell PowerEdge* R810 servers powered by two Intel Xeon processor 7500 series processors set a world record for server-side Java performance (a SPECjbb*2005 score of 1,011,147 JOPS, up to a 60% gain over the previous-generation Intel Xeon processor 5500 series).


For detailed performance results and more information about world record claims see

Today Intel formally launched this new processor at a press event in San Francisco. The event featured a presentation to the press by Kirk Skaugen, Intel VP and GM of the Digital Enterprise Group, including customer testimonials from NYSE Technologies, CEA (a French government-funded technology research organization), and Ericsson. There was also a massive sampling of systems on display from multiple OEMs that will bring mission-critical computing to the mainstream, plus support announcements from major enterprise ISVs including VMware, Microsoft, SAP, Oracle, and Citrix.



Some of the takeaways specific to the Xeon 7500 from Kirk's speech were as follows:



  • The Intel Xeon Processor 7500 series delivers the biggest performance leap in the history of the Xeon product line: up to 3 times the performance of the previous generation on a variety of enterprise workloads, and up to 20 times the performance of the single-core servers purchased 4-5 years ago that still make up a large chunk of the installed base. This revolutionary performance comes from bringing the Nehalem architecture to big servers, along with Intel QuickPath Interconnect (QPI), 8 cores / 16 threads, and a whopping 24MB of shared cache.



  • It’s not just the performance that is revolutionary. Probably the most important breakthrough from a capability perspective is the innovation in scalability. OEMs are introducing a ton of unique and innovative platforms for the high end that enable new levels of scalability in a modular fashion: from 2 sockets with large memory, to 8 sockets directly connected via QPI links, up to 256 sockets with OEM node controllers. Memory capacity has also exploded, with 4-socket systems serving up to 64 DIMMs for as much as 1 terabyte of memory. With that much capacity, IT could run an entire enterprise database in memory.



  • And that means Xeon 7500 is going to be a catalyst for mission-critical transformation. Customers running mission-critical workloads demand the highest scalability, but they also demand high availability. The Xeon 7500 has over 20 new advanced reliability features that will allow customers to migrate their mission-critical applications with confidence from proprietary RISC-based offerings to industry-standard servers powered by the Intel® Xeon® processor 7500 series. IT can now get what was previously possible only in proprietary systems for a fraction of the cost.



  • Over 50 system manufacturers around the world were expected to announce systems based on the Intel Xeon 7500/6500 series processors, many starting today. These announcements include the first-ever two-socket expandable rack servers, multiple four-socket blade servers, and a whole new category of 8-socket-and-greater servers from 12 different OEMs.



  • The Intel Xeon Processor 7500 has already set over 20 new performance world records on industry benchmarks, from 2-socket to 64-socket systems, for key workloads like virtualization, database, business intelligence, enterprise resource planning, and e-commerce, to name just a few.



Here's a copy of the press release, which includes new highlights, videos and photos, quotes, performance data, and more.



Previous Blog Links:

Nehalem-EX – The New Standard in Scalable Performance

Growing Excitement over Nehalem-EX

Nehalem-EX, a game changer, unleashed!!!

Choice and Options

Nehalem-EX…Monster Chips and Big Boxes

“People Get Ready”…There’s a train a-coming

At the RSA Conference 2010 in San Francisco, the cryptographers' panel, featuring legends such as Ron Rivest of MIT, Adi Shamir, and former NSA technical director Brian Snow, cited as one of the highlights of 2009 the fact that both AES-128 and AES-256 had been broken. It took a lot of people by surprise that these two modes could be broken, and the next logical question is: are they no longer useful for data protection?


In reality, these attacks are the work of academics: researching solutions to a problem in the hope they will never be needed in practice. Some background on the three modes of AES: the key lengths of 128, 192, and 256 bits correspond to 10, 12, and 14 rounds, respectively. A round produces an intermediate state during the process of converting plaintext to ciphertext and vice versa.

By 2006, the best known attacks were on 7 rounds for 128-bit keys, 8 rounds for 192-bit keys, and 9 rounds for 256-bit keys. For cryptographers, a cryptographic "break" is something faster than an “exhaustive search” - selecting an appropriate key length depends on the feasibility of performing a “brute force attack”. A brute force attack is a strategy to break encrypted data that involves exhaustively traversing the search space of possible keys until the correct key is found.
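To put "exhaustive search" in perspective, here's a quick back-of-the-envelope sketch in Python (the keys-per-second rate is an assumed figure for illustration, not a measurement):

```python
# Back-of-the-envelope: time to exhaust an AES key space.
# The 10^12 keys/second rate is an assumed, generous figure for illustration.

ROUNDS = {128: 10, 192: 12, 256: 14}  # AES rounds per key length (FIPS-197)

def brute_force_years(key_bits, keys_per_second=1e12):
    """Worst-case years to try every key of the given length."""
    seconds = 2 ** key_bits / keys_per_second
    return seconds / (365.25 * 24 * 3600)

for bits in sorted(ROUNDS):
    print(f"AES-{bits}: {ROUNDS[bits]} rounds, ~{brute_force_years(bits):.2e} years")
```

Even at a trillion keys per second, exhausting the 128-bit key space would take on the order of 10^19 years; the attacks described here are interesting because they beat this baseline, not because they are practical.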


Another attack was blogged about by Bruce Schneier[1] on July 30, 2009 and released as a preprint[2] on August 3, 2009. This new attack against AES-256 uses only two related keys, and it requires the cryptanalyst to have access to plaintexts encrypted with multiple keys that are related in a specific way.

The computation required is 2^39 time to recover the complete 256-bit key of a 9-round version, 2^45 time for a 10-round version, and 2^70 time for an 11-round version. Full AES-256 uses 14 rounds, so these attacks aren't effective against the full cipher.


We talked about the AES-256 attacks; what about AES-128? In November 2009, the first known-key distinguishing attack against a reduced, 8-round version of AES-128 was released as a preprint.[3] It works on the 8-round version of AES-128 with a computational complexity of 2^48. Attacks like these often target the key schedule: the algorithm that, given the key, derives the subkey used in each round.
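To make the key schedule idea concrete, here's a minimal sketch of AES-128 key expansion per FIPS-197, with the S-box generated from its GF(2^8) definition rather than hard-coded. This is purely illustrative, not production crypto code:

```python
# Minimal AES-128 key expansion (FIPS-197): a 16-byte key becomes 44
# 32-bit words = 11 round keys. Illustrative only -- not production crypto.

def gmul(a, b):                     # multiply in GF(2^8) mod x^8+x^4+x^3+x+1
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return p

def make_sbox():                    # S-box = GF(2^8) inverse + affine transform
    rotl = lambda x, n: ((x << n) | (x >> (8 - n))) & 0xFF
    sbox = []
    for b in range(256):
        inv = 0 if b == 0 else next(y for y in range(256) if gmul(b, y) == 1)
        sbox.append(inv ^ rotl(inv, 1) ^ rotl(inv, 2)
                        ^ rotl(inv, 3) ^ rotl(inv, 4) ^ 0x63)
    return sbox

SBOX = make_sbox()

def expand_key_128(key):
    """Return the 44 expanded 32-bit words for a 16-byte AES-128 key."""
    w = [int.from_bytes(key[i:i + 4], "big") for i in range(0, 16, 4)]
    rcon = 1
    for i in range(4, 44):
        t = w[i - 1]
        if i % 4 == 0:              # every 4th word: rotate, substitute, add Rcon
            t = ((t << 8) | (t >> 24)) & 0xFFFFFFFF
            t = int.from_bytes(bytes(SBOX[b] for b in t.to_bytes(4, "big")), "big")
            t ^= rcon << 24
            rcon = gmul(rcon, 2)
        w.append(w[i - 4] ^ t)
    return w
```

Running this on the FIPS-197 Appendix A test key `2b7e1516 28aed2a6 abf71588 09cf4f3c` reproduces the published expansion, e.g. `w[4] == 0xa0fafe17`.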

In summary, academics have found ways to “attack” AES-128 and AES-256 faster than an exhaustive search, but not against the full AES. Though the computation times are much faster than exhaustive search, these attacks require related keys, are impractical, and do not appear to pose any real threat to the security of AES-based systems. In the real world, applying obfuscation or “disguising” techniques to the code and data being encoded makes attacks even less effective, since it is more difficult to determine when one has succeeded in breaking the encryption [4].

[1]Bruce Schneier (2009-07-30). "Another New AES Attack". Schneier on Security, A blog covering security and security technology. Retrieved 2010-03-11. 


[2]Alex Biryukov; Orr Dunkelman; Nathan Keller; Dmitry Khovratovich; Adi Shamir (2009-08-19). "Key Recovery Attacks of Practical Complexity on AES Variants With Up To 10 Rounds". Retrieved 2010-03-11. 


[3]Henri Gilbert; Thomas Peyrin (2009-11-09). "Super-Sbox Cryptanalysis: Improved Attacks for AES-like permutations". Retrieved 2010-03-11.



If you’re like me, the impact of any data point in a news article or business report is much greater when it’s put in relative terms, using items we come across in everyday life.


Take IT spending trends: many server refresh plans were delayed due to economic conditions in 2009, leaving operating cost reduction opportunities on the table.


From a Gartner Research press release last October: “...approximately 1 million servers have had their replacement delayed by a year. That is 3 percent of the global installed base. In 2010, it will be at least 2 million.”


So let’s look at how much power could have been saved in 2009, using the Xeon® ROI Estimator:


On March 30th, 2009, we launched the Xeon® 5500 processor, which enables up to a 9:1 refresh consolidation ratio over single-core servers purchased in 2005. If those 1 million servers had been refreshed on the day of launch and consolidated onto Xeon® 5500 servers (again, at a 9:1 ratio), the industry could have collectively saved a total of 6.2 billion kWh of electricity annually, or the average power output of five Boeing 747s!


2009 is behind us, but the business value and power reduction opportunities get even more interesting in 2010. With the recently launched Xeon® 5600 processors, the operational cost reduction opportunities are even more compelling, as the performance and energy-efficiency improvements enable up to a 15:1 consolidation ratio over single-core servers. There are a lot more Boeing 747s out there…
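The arithmetic behind this kind of claim is easy to sketch. The per-server wattages below are assumptions for illustration only; real savings estimates, like the ROI Estimator's, also account for cooling, workload, and more:

```python
# Rough annual-energy model for a server refresh consolidation.
# Per-server wattages are illustrative assumptions, not measured figures.
import math

def annual_kwh_saved(old_servers, ratio, old_watts=400, new_watts=350):
    """kWh/year saved by consolidating old_servers at ratio:1 onto new boxes."""
    new_servers = math.ceil(old_servers / ratio)
    saved_watts = old_servers * old_watts - new_servers * new_watts
    return saved_watts * 24 * 365 / 1000        # watts -> kWh per year

print(annual_kwh_saved(1_000_000, 9))    # ~1M delayed refreshes at 9:1
print(annual_kwh_saved(1_000_000, 15))   # the 5600-series 15:1 scenario
```

Even with these made-up wattages, the model lands in the billions of kWh per year, which is why the consolidation ratio is the number to watch.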


What’s your 2010 opportunity?  Check it out for yourself using the updated Xeon® ROI Estimator tool.

With the launch of the Intel® Xeon® 5600 series come the built-in Intel® AES New Instructions (Intel® AES-NI) for protecting data in flight and data at rest.


Intel® AES-NI implements the core AES operations in hardware: instructions that perform a round of encryption or decryption, assist round-key generation, compute the inverse MixColumns transform, and perform carry-less multiplication. The benefit is not only a reduction in side-channel attacks but also a reduction in performance overhead, which allows encryption to take place where it wasn't practical before.


At Fall 2009 IDF, we published instruction-level, crypto-algorithm-level, and SSL-session-level performance data. Many in the industry have been waiting for application-level performance data for the three usage models: a web banking workload, database, and full disk encryption (FDE). Intel measurements have shown that with a web banking workload, the Intel Xeon 5600 series can support 23% more concurrent users with SSL enabled than the Intel Xeon 5500 series running without encryption. Oracle 11g database decryption time, measured using a series of focused operations like insert, delete, and retrieve, has been reduced by 89%. Using the already launched McAfee endpoint protection software, first-time drive provisioning time has been reduced by 42%. Check Point and Microsoft BitLocker are some of the other Intel AES-NI-enabled FDE products that have launched.


For the server FDE usage case, some argue that server disks are less susceptible to theft than laptops, so the need is smaller. However, the data-loss case involving a RAID setup at NARA (the National Archives and Records Administration) shows it's equally important to have this "last line of defense" with enterprise FDE. Preliminary studies to date also show a performance benefit for multi-threaded enterprise FDE applications on RAID and storage boxes.


How can an ISV utilize these instructions? One can leverage OS crypto libraries such as Microsoft's Cryptography API: Next Generation (CNG) or the libraries in upcoming Linux distributions. Or one can use third-party libraries such as the Intel® Integrated Performance Primitives (IPP) crypto library, OpenSSL, NSS, and the upcoming RSA BSAFE. Lastly, ISVs can choose to hand-optimize with Intel AES-NI and recompile using the Intel, Microsoft, or GCC compilers.
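Before choosing an optimized code path, an application may first want to detect whether the CPU advertises AES-NI at all. Here's a Linux-only sketch that reads /proc/cpuinfo (other operating systems would use the CPUID instruction or a platform API instead):

```python
# Check for the AES-NI CPU feature flag on Linux via /proc/cpuinfo.
# Sketch only: other OSes need CPUID or a platform-specific API.

def has_aes_ni(cpuinfo_path="/proc/cpuinfo"):
    """True if the first CPU's flags line advertises the 'aes' feature."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return "aes" in line.split(":", 1)[1].split()
    except OSError:
        pass                        # no /proc/cpuinfo (e.g. non-Linux)
    return False

if __name__ == "__main__":
    print("AES-NI available:", has_aes_ni())
```

A library like OpenSSL performs an equivalent check internally and dispatches to its AES-NI code path automatically, which is why simply linking against an enabled library is often all an ISV needs to do.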


For more information on the growing ecosystem, please see the "Securing the Enterprise with Intel® AES-NI" whitepaper.

Can you save money by investing in new hardware?

Can you really do more with less?


MythBusters will be on stage with Intel, IBM, and Emulex on May 24th in San Francisco, putting the new eX5 servers to the test.


Register today at


    It seems like every week there is news of some security breach. And then there is the attention around cloud computing, another popular news story. Reading beyond the hype, it seems the cloud isn’t being deployed as aggressively as some expected, and the most common reason cited is security.


    • So what about cloud security?

    • What’s the big problem?

    • Are there really some new security concerns or is it just discomfort with not having physical control?


    Well, it’s likely a little of both, and more. As the headlines remind us, even a traditional data center is challenged to protect itself from all the attackers out there, especially as the types of attackers shift from notoriety seekers to organized crime and nation-states. Big companies have invested a lot of time and money building the best expertise in the area they can, so turning that expertise over to a third party, without the same level of detailed knowledge and control of the security procedures, is difficult. Cloud computing has the additional challenge of multi-tenancy (that is, different departments or different companies sharing the same resources). Building details about security methods, and about what happens when a breach occurs, into contracts is essential; so is using the latest technology.


    Whether a traditional enterprise or a cloud, most businesses could benefit from protecting more of their data with encryption. Surprisingly little Internet traffic, hard drive data, and database information is encrypted today. Cost and key management are inhibitors, but so is the performance overhead. Which DBA wants to take up to a 28% performance hit to turn on encryption? Who wants to use SSL for anything more than completing e-business transactions if it’s going to bog down the web servers? The new Intel® Xeon® 5600 series processors with Intel® AES-NI deliver improved performance over previous generations even with encryption enabled. That should take care of most of the performance concerns and enable enterprises and clouds to use encryption where it wasn’t feasible before.




Source: Internal Intel measurements with a web banking workload, comparing an Intel® Xeon® X5680 (3.33 GHz) with SSL ON against an Intel Xeon® X5570 (2.93 GHz) with SSL OFF.


    And how about Intel® TXT, another new technology for servers in the Intel Xeon 5600 series processors? It uses the concept of trust to detect whether low-level software has been altered by the emerging attacks of today’s more sophisticated and better-financed attackers. Servers can use virtualization to build strong software security barriers, with the knowledge that the low-level software these security applications are built upon hasn’t been tampered with by hackers. All of this promises to make the virtualization used in cloud computing a little less scary.


    What do you think?

    What’s your critical security barrier for using the cloud today?

    Is there Intel technology that can help?


Trust, but Verify

Posted by JGreene Mar 18, 2010

The concept of trust is a strange one, perhaps even more so in the world of computer systems, where we’re all used to binary, yes/no answers. Trust is not so easy; it is purely a judgment call. Think about the people you know and trust, and why. Whether we do it consciously or not, we evaluate their words and actions to determine whether we can trust them. Once we do, we interact with them in a manner that reflects our trust. When the stakes are very high (say, letting someone watch your children or manage your money), we typically have very high standards for the level of trust we need before giving someone that responsibility.

But as recent financial scams involving previously well-respected individuals have shown us, it is hard to find evidence that lets us appropriately gauge trust; we need more data. This puts us in the position popularly attributed to former President Ronald Reagan: “Trust, but verify.” As such, we look for people we trust, but increasingly also look to further evaluate their credentials.

It is also very important to establish trust in our computing platforms, so that we can have higher confidence they will act in the manner we expect: processing and protecting our data safely and securely. Given the well-chronicled growth and increasing sophistication of attacks on IT resources, this approach makes more sense than ever. From a security point of view, a basic objective is to establish the smallest possible amount of assumed trust and subject more elements to verification. In short: assume few items are good, prove that more are good, and use that proof to assess trustworthiness based on the role you want the system to play. The challenge is that this is very hard to do today, just when it is most needed.

This sets the stage for one of the neat new features available with Intel® Xeon® 5600 family processor systems: Intel® Trusted Execution Technology (Intel® TXT). Using capabilities in the processor, chipset, BIOS, and a Trusted Platform Module (TPM), Intel TXT provides a mechanism for assuming a very small, atomic level of trust while allowing a robust basis for verification of platform components such as the BIOS and option ROMs, up to a hypervisor or operating system. With Intel TXT, the assumed trust (the root of trust) is pushed down into the processor itself, perhaps the best-protected component of any platform. From this privileged and protected location, subsequent components in the boot and launch process can be measured and compared to “known good” values, so that desired code executes and unknown code can be blocked. The result of this progressive measurement is often referred to as a chain of trust.

Figure 1:  Intel TXT provides a hardware-based security foundation to build a chain of trust


Source: Adapted from materials by Cong Nguyen and Monty Wiseman
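The progressive-measurement idea behind the chain of trust can be illustrated as a hash chain, similar in spirit to a TPM PCR extend operation. This is a conceptual sketch only, not the actual TPM or Intel TXT protocol:

```python
# Sketch of a measurement chain: each boot component is hashed into a
# running register, so the final value depends on every component and
# the order in which they launched. Conceptual only -- not the TPM spec.
import hashlib

def extend(register, component):
    """Fold a component's measurement into the running register."""
    measurement = hashlib.sha256(component).digest()
    return hashlib.sha256(register + measurement).digest()

register = bytes(32)                      # well-known starting state
for component in (b"BIOS", b"option ROM", b"hypervisor"):
    register = extend(register, component)

# Comparing `register` against a stored "known good" value verifies that
# exactly these components launched, in exactly this order.
```

Because each step hashes over the previous result, tampering with any one component, or launching components in a different order, yields a different final value, which is what makes the chain useful for verification.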

Note that Intel TXT does not “provide the trust.” It provides the foundation: it assures that the information used to make trust decisions about the software that fundamentally controls the platform (the BIOS, hypervisor, or operating system) is authentic. As a result, one can have greater assurance of the trustworthiness of the platform. In short, Intel TXT provides the basis of “trust but verify” that is essential to help ward off the growing number of threats to today’s IT infrastructure.

Note also that Intel TXT does not intrude on the entire software stack of the platform. Intel TXT provides measurement services from platform reset through the launch of an enabled hypervisor. It does not measure guest VMs, hosted operating systems, or applications above the hypervisor. While there could be some value in doing so, it would probably add latency and complexity, both enemies of security (i.e., people would “turn it off” or avoid using it). That said, it is entirely possible for the chain of trust started by Intel TXT to provide an enabling foundation that is continued as a software-only process, with the hypervisor performing subsequent measurements of its guests as an integrity verification method. Such usage models are indeed likely to evolve in time.

A number of system vendors will be delivering Intel TXT-enabled platforms over the course of 2010.  As system vendors complete the testing of the servers with final production components from Intel, many will be delivering support via BIOS updates that will allow customers to activate this powerful new capability in the field, and may begin shipping subsequent products from their factories with Intel TXT ready to go. Software vendors such as VMware, Parallels, HyTrust and RSA are also interested in having the ability to help verify the platform environment as it helps create a predictable, controllable platform that provides a more robust basis for security solutions in cloud and virtualized environments.

VMware has been active in past events such as IDF and the recent RSA show to demonstrate software solutions that enhance cloud security. In fact, Intel, VMware and RSA technologists have just teamed up to release a solution brief that outlines the key issues for cloud security and identified some key roles that Intel TXT can help provide.  Similarly, vendors such as Parallels and HyTrust are anticipating testing and certification of their software solutions when system vendors make their enabled platforms available.  There will be a number of other leading hypervisor and operating system solutions with Intel TXT support released through 2010 and into 2011.

With an enabled ecosystem of hardware and software providers, trust will be a lot easier to find.  With new Intel® Xeon® 5600 series processor-based systems and Intel TXT in place, an administrator can now know that his/her trust in the platform has been earned.

How important is trust to you?  Does wanting verification of the platform make one seem overly paranoid?  Or do growing security concerns have you thinking that more protection is better? What defines “too much” security for you?


Do IT staff limitations, sluggish server performance, and fears of possible data loss sound all too familiar? Brookwood School experienced similar challenges until it decided to turn its technology support over to an Intel® solution provider. Brookwood School got more desktop and server performance, reduced expenditures, and lower ongoing energy costs, for a rapid return on investment (ROI).


Remember my trip to Georgia last month and my visit to the school? Well, now I can share more about the impressive results they obtained. Read the attached case study and watch this short video to learn more.




Their story of increased performance and real savings may just inspire you to rethink your technology environment with a server refresh or even transition to managed services. And with the recent launch of the next generation of intelligent server processors — the Intel® Xeon® processor 5600 series — there’s even more to be excited about. This new processor launch makes now a great time to make a change.

These processors combine industry-leading energy efficiency with intelligent performance that adapts to your workload to deliver some serious improvements over previous processor generations:


  • up to 15 times the performance of a two-socket single-core server
  • up to 95% lower energy costs

So, if you’re ready to put more intelligence in your server room (or closet), talk to your Intel® IT solutions provider and get ready for great things!


1 Claim: “Up to 15x performance per server.” Disclaimer: Intel performance comparison using SPECjbb2005* business operations per second between four-year-old single-core Intel® Xeon® processor 3.8 GHz with 2M cache based servers and one new Intel Xeon processor X5600 series based server. Baseline platform: Intel server platform with two 64-bit Intel Xeon processors 3.80 GHz with 2M L2 cache, 800 MHz FSB, 8x1GB DDR2-400 memory, 1 hard drive, 1 power supply, Microsoft* Windows* Server 2003 Ent. SP1, Oracle* JRockit* build P27.4.0-windows-x86_64 run with 2 JVM instances. New platform: Intel server platform with two six-core Intel® Xeon® processors X5670, 2.93 GHz, 12MB L3 cache, 6.4 GT/s QPI, 12GB memory (6x2GB DDR3-1333), 1 hard drive, 1 power supply, Microsoft Windows Server 2008 64-bit SP2, Oracle* JRockit* build P28.0.0-29 run with 2 JVM instances.

2 Claim: “Up to 95% lower energy costs” Disclaimer: Intel comparison replacing 15 four-year-old single-core Intel® Xeon® processor 3.8 GHz with 2M cache based servers with one new Intel Xeon processor X5670 based server.

It's been pretty exciting around the halls of Intel the past few days. I'm not talking about the fact that ABBA was finally recognized for their brilliance with an induction into the Rock & Roll Hall of Fame (that will need to wait for a future blog). I'm not even talking about the fervor over filling out brackets for March Madness, with discussions of the relative "game" of Oklahoma State or Georgia Tech (who would YOU take in a fight between a Cowboy and a Yellow Jacket?). I'm talking about the much-anticipated release of the Xeon 5600 series processor, formerly known as Westmere. This latest evolutionary step in Intel's tick-tock model was released to the market this week, and if you're a visitor to the Server Room you've seen the great response the product is getting.


I recently sat down with Intel's Shannon Poulin to talk about the 5600 series.  Shannon carries a lot of passion for Xeon, and when we started to talk about the latest generation processor you'd have thought he was talking about the brilliance of his March Madness bracket.  Shannon provided some insight into why the 5600 series offers a wealth of capabilities to data centers around the world and reflected on where customers may see the most value through platform deployments.  Check out the interview here.

I'm excited to highlight new IBM servers supporting today’s announcement of the next generation of Intel Xeon processors: the 5600 series (codename Westmere-EP… EP for Efficient Processors). IBM refreshed its portfolio of 2-socket racks, towers, and blades. The Xeon 5600 based rack and tower servers are easy to find, with an “M3” in the product name (example: x3650 M3), and two Xeon 5600 based blades are available: the BladeCenter HS22 and HS22V.


Some highlights I found on


  • 50% more memory capacity on rack-optimized servers (x3650 M3, x3550 M3)


  • 60% more internal storage capacity on rack-optimized servers (x3650 M3, x3550 M3)


  • 30%-50% more VMs per server with the virtualization-optimized HS22V


  • the new iDataPlex (dx360 M3) is the first two-socket server to achieve 3,000 operations per watt


I encourage you to check out the System x website, where you will find more information about the 7 (seven!) servers IBM announced, including an animated demo for each product, plus the press release.



I also listened to the Intel-IBM webcast today. Great presentations by Intel's Shannon Poulin, and Bob Galush, IBM's VP of High Volume Servers. For those of you who missed it, you can check out the replay!


Cheers, Raechel

The new processor series arrived early this morning with the posting of this product page and this press release.

Here are some of the earliest articles:

TechReport has a discussion about it here

Hardware.Info has a review of it here (in Dutch)

PCMag has an article here

ZDNet has an article here with slides

PCWorld has an article here comparing it to predecessors


Plenty more to come as half of the world consumes the content, and the other half wakes up to all the buzz.

Luiz Barroso, in his classic 2007 paper, posited that, given that servers in data centers run at between 10 and 50 percent of peak load, it would be beneficial from an energy perspective to have servers with a large power dynamic ratio: the ratio of power consumed at full workload to power consumed at idle. The figure below represents the state of the art today, with a dynamic ratio of about 2:1 and efficiency that can drop below 20 percent. The operating band depicted is more conservative than what Barroso indicated, with a CPU utilization that rarely surpasses 40 percent.
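Barroso's argument is easy to see with a simple linear power model (the 400-watt peak figure below is assumed purely for illustration):

```python
# Linear server power model: P(u) = P_idle + (P_peak - P_idle) * u,
# where u is CPU utilization and the dynamic ratio is P_peak / P_idle.
# The 400 W peak figure is an assumed value for illustration.

def power(u, p_peak=400.0, dynamic_ratio=2.0):
    p_idle = p_peak / dynamic_ratio
    return p_idle + (p_peak - p_idle) * u

def efficiency(u, p_peak=400.0, dynamic_ratio=2.0):
    """Work delivered per watt, normalized so efficiency(1.0) == 1.0."""
    return u * p_peak / power(u, p_peak, dynamic_ratio)

# At 10% load, a 2:1 machine runs below 20% efficiency; a 5:1 machine
# at the same load is roughly twice as efficient.
print(efficiency(0.10, dynamic_ratio=2.0))   # ~0.18
print(efficiency(0.10, dynamic_ratio=5.0))   # ~0.36
```

The model reproduces the figure's point: with a 2:1 dynamic ratio, efficiency in the real-world operating band falls well below half of what the server delivers at full load.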





The next figure illustrates what happens if we improve the dynamic ratio to 5:1.  This is not possible today for single servers, but it is attainable for cloud data centers and as a matter of fact, for any environment where servers can be managed as pools of fungible resources and where server parking is in effect.




The improved dynamic ratio also dramatically improves the operating efficiency within the data center's operating band, but it gets even better: the servers in the active pool are kept in the utilization sweet spot of 60 to 80 percent. If CPU utilization in the active pool drops below 60 percent, the management application starts moving servers from the active pool to the parked pool until utilization starts inching up. If CPU utilization gets close to the upper bound, the management application starts bringing servers back from the parked pool into the active pool to provide relief and bring the utilization numbers down.
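The policy described above boils down to a small control loop. Here's a sketch using the 60-80 percent band from the text; the server counts and demand figures are made up for illustration:

```python
# Sketch of the active/parked pool policy: keep active-pool CPU
# utilization inside a 60-80% band by parking or unparking servers.
# Demand is expressed in "server-equivalents" of work for simplicity.

LOW, HIGH = 0.60, 0.80

def rebalance(active, parked, demand):
    """Move one server between pools if utilization leaves the band."""
    utilization = demand / active
    if utilization < LOW and active > 1:
        active, parked = active - 1, parked + 1      # park a server
    elif utilization > HIGH and parked > 0:
        active, parked = active + 1, parked - 1      # bring one back
    return active, parked

active, parked = 10, 0
for _ in range(20):                                  # settle on steady demand
    active, parked = rebalance(active, parked, demand=4.0)
print(active, parked)    # steady demand of 4.0 settles at 6 active, 4 parked
```

Moving one server per control interval keeps the loop stable; a real implementation would also account for migration time and the wake-up latency of parked servers.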


The first output from the Intel Cloud Builder Program:


For cloud service providers, hosters, and enterprise IT organizations looking to build their own cloud infrastructure, the decision to use a cloud for the delivery of IT services is best made by starting with the knowledge and experience gained from previous work. This white paper gathers into one place a complete example of running Canonical's Ubuntu Enterprise Cloud on Intel®-based servers, complete with detailed scripts and screen shots. Using the contents of this paper should significantly reduce the learning curve for building and operating your first cloud computing instance.

Since the creation and operation of a cloud requires integration and customization with existing IT infrastructure and business requirements, this paper is not expected to be used as-is. For example, adapting to existing network and identity management requirements is out of scope. The user of this paper is therefore expected to make significant adjustments to the design to meet specific customer requirements; consider this paper a starting point for that journey.

As I was growing up, my father always taught me the importance of measuring twice before you cut a piece of wood, because once you cut it wrong you can't go back and re-measure to fix it. The same principle applies to data center efficiency: before you can take steps to reduce energy consumption, improve facility PUE, or boost density in your facility, you need to understand power consumption.


Recently, Intel IT defined methods for analyzing computing energy efficiency within our design computing environment, using measurements of actual server power consumption and utilization. We used these methods to identify trends and opportunities for improving data center efficiency, and to implement a pilot project that increased data center computing capacity.


Read more in the Intel IT whitepaper titled "Increasing Data Center Efficiency with Server Power Measurements"


For more on Intel IT data center practices, visit us at



So, what is it really like to move a workload from RISC to an Intel-based platform? Here is a very simplified example: SAP ERP 6.0 and IBM DB2 running on a legacy Sun SPARC platform, migrated to RHEL on an IBM System x3650 M2 with the Intel Xeon processor 5500 series.


We believe the exercise described in the paper applies to various SAP instances, not just the version this experiment used.

x86 Systems are leading the way as the worldwide server market rebounds, according to a February 24th press release from IDC.  This press release summarizes the findings in their Worldwide Quarterly Server Tracker, which is a quantitative tool for analyzing the global server market on a quarterly basis.


I wanted to share one of the quotes that aligns with what I’ve also been hearing when I talk to customers, who are simply looking for lower TCO and better performance per dollar for their enterprise solutions.


"In 2009, x86 servers captured more than 55% of all server revenue and more than 96% of all server units shipped worldwide," said Dan Harrington, research analyst, IDC's Enterprise Server Group. "This represents a continuation of the aggressive share gains that x86 technology has enjoyed over the last five years. Interestingly, x86 captured more than 57% revenue share in the fourth quarter of 2009. Because the fourth quarter is typically the strongest quarter for high-end non-x86 systems, this represents a significant shift in trends for the market, as non-x86 servers have never held less than 50% of revenue in the fourth quarter. IDC expects this trend to continue as users became more cost conscious than ever in 2010 and look to x86 servers for relief from capital and operational expenditures."


Per IDC, the trend away from RISC to Intel Xeon based platforms has been under way for quite some time.  I’m really excited to see the innovative things IT managers will do as they continue to move away from RISC and transition to the upcoming Nehalem-EX platforms.  I’ve heard many end users tell me that Intel’s upcoming Nehalem-EX processor is poised to change the face of mission-critical computing as we know it today.  Time to revolutionize your back end.


Choice and options

Posted by Matt_K Mar 4, 2010

Intuitively we know they’re nice to have. I can choose this, or I can choose that. I can take more, or I can take less. And if I can change my mind later, even better. We know we’re better off when we have more choice and options.

A couple of simple concepts, but they underpin so much. When you get right down to it, the success or failure of any organization or enterprise depends on making the right choices and selecting the right options within resource and time constraints. Economists call this economic efficiency.

I think we are about to see a quantum leap forward in choice and options for IT managers.  Intel’s upcoming Nehalem-EX processor is aimed at the most demanding compute needs of the market. Nehalem-EX brings the biggest generation-to-generation performance leap in Xeon history, an 8X jump in memory capacity, socket scaling from 2 to 256 sockets, and a 3X jump in advanced reliability features. Nehalem-EX also brings a big leap in design modularity, which allows server designers to build a much broader range of innovative server platforms, for better customer choice and options.

IBM’s recent announcement of its new generation of eX5 platforms based on Intel’s next-generation Nehalem-EX processor is an excellent case in point. These platforms offer greatly enhanced choice and options compared to what’s generally in the market today:


  • 2-socket blade (HX5), expandable up to 4 sockets
  • 2-socket rack (x3690 X5), a new entry-priced eX5-class server
  • 4-socket rack (x3850 X5), expandable to 8 sockets
  • 4-socket workload-optimized systems tuned for specific workloads
  • MAX5, an IBM innovation that allows IBM’s customers to add even more memory if they need it
  • FlexNode, an IBM innovation that enables easy partitioning of these eX5 systems, further enhancing configuration flexibility


I think the quantum leap in capabilities of Intel’s Nehalem-EX combined with the innovations of server vendors like IBM will transform the high-end server market by providing more choice and options for the most demanding workloads.  I can’t wait. I think IT buyers will be delighted.
