
The Intel® Xeon® 7400 Processor was officially announced just a few weeks ago, and there has been phenomenal interest in this product because of its world-record-breaking performance leadership as well as its great energy efficiency.


Let's first discuss one of the primary advantages of the Intel® Xeon® 7400 Processor: up to 50% better performance/watt and up to 10% less system power vs. the 7300.  As stated, this is pretty straightforward: Intel has real-world results that show significant performance increases while consuming less power as compared to servers based on the previous-generation Intel® Xeon® 7300 Processors.  The performance increase can largely be attributed to designing the Xeon® 7400 processor with 6 cores based on the Intel® Core™ Microarchitecture.  In addition, the primary reason for the power decrease is that the Xeon® 7400 uses the latest 45nm high-k process technology instead of the 65nm process used in the previous generation.  In general, processors based on the 45nm process consume less power than the processor's rated TDP (thermal design power) value.  It must be noted that power consumption can vary by processor: some processors may consume even less power, and others may consume up to the processor's rated TDP value.  For more details on both the performance and power, I recommend taking a look at this 3rd-party review by AnandTech*:


Next, let's discuss the positive impact these servers can have on your data center.  Whether you have an existing data center or plan to build a new one, there is always a fixed amount of power provided to that data center.  Energy-efficient performance, in its simplest definition, is the ratio of performance to the amount of power consumed.  The higher the ratio, the more energy efficient your data center is.  To accomplish this, two vectors need to be considered: the first is performance output, and the second is power consumption (both when servers are operating at peak performance and when they are running at lower utilization levels or at idle).  Servers based on the Intel® Xeon® 7400 processor can provide both higher performance and lower power, which offers some very compelling energy efficiency benefits.  For example, when using virtualization, multiple applications that currently run on independent servers can be consolidated onto fewer, higher-performing servers, while still providing performance headroom for future growth.  By doing this, both acquisition and ongoing electricity/operational costs can be dramatically reduced.  To see how much money you can potentially save by upgrading to servers based on the Intel® Xeon® 7400 processor, take a look at the ROI using the Intel® Xeon® Server Estimator at
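As a back-of-envelope illustration of that performance-to-power ratio and the consolidation effect, here is a small sketch. All the numbers are hypothetical, not measured Xeon 7400 results:

```python
def perf_per_watt(performance, watts):
    """Higher ratio = more work done per watt of data center power."""
    return performance / watts

# Old fleet: 10 servers, 100 units of work each, 500 W each (made up).
old_perf, old_watts = 10 * 100, 10 * 500
# Consolidated: 4 higher-performing servers, 300 units each, 550 W each.
new_perf, new_watts = 4 * 300, 4 * 550

old_ratio = perf_per_watt(old_perf, old_watts)   # 1000 / 5000 = 0.2
new_ratio = perf_per_watt(new_perf, new_watts)   # 1200 / 2200 ≈ 0.55
print(f"Efficiency gain: {new_ratio / old_ratio:.2f}x")
```

With these toy numbers, the consolidated fleet does more total work on less than half the power, which is exactly the ratio improvement the paragraph above describes.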


In summary, the best energy-efficient performance can be achieved using servers with Intel® Xeon® 7400 Processors.  These servers provide exceptional performance across a wide range of applications, with headroom to grow, while at the same time consuming less power than previous-generation Intel 7300-based servers.



  • Other names and brands may be claimed as the property of others.


Saying more good things about Dunnington (Intel Xeon 7400) feels a bit like piling on.  There are myriad posts out there about how great Dunnington is.  If you are looking for some data to support enterprise selection of the 7400, the AnandTech article Intel Xeon 7460: Six Cores to Bulldoze Opteron is very compelling. One of the exciting parts of this article is the section on ESX performance, especially with VMs configured with multiple "virtual CPUs".  This is a configuration some of my large enterprise customers seem married to - even when it is not needed...  The 7400's use of highly efficient 45nm Penryn cores delivers dominant performance for this usage model.  There is a lot more to this processor than "2 more cores".


To quote from the article "This 45nm Intel core features slightly improved integer performance but also significantly improved "VM to Hypervisor" switching time. On top of that, synchronization between CPUs is a lot faster in the X74xx series thanks to the large inclusive L3 cache that acts as filter. Memory latency is probably great too, as the VMs are probably running entirely in the L2 and L3 caches. That is the most likely reason why we see the X7460 outperform all other CPUs."

The ESX section concludes with "Xeon X7460 is again the winner here: it can consolidate more servers at a given performance point than the rest of the pack."


Xeon 7400 is the processor for virtualization.

The following are some considerations prior to tuning your MP Xeon 7400 series server. I can speak to this subject as I was asked to tune this system using the TPC-C and TPC-E benchmarks for internal measurements at Intel. While you may not be setting up thousands of hard disk spindles for your performance work, this blog post attempts to capture some of the key tuning considerations of this Xeon-based server.


Understand your system

The key to tuning any system, whether it is a Formula One race car (I promise to stay away from silly car performance analogies) or a server, is to understand it. Identify which components have an effect on performance and which don't. This will narrow down your tuning efforts.



Like all of Intel's platforms, an MP Xeon 7400 series server is made up of several ingredients. Of course, since I work for Intel, I need to start the ingredient list with the central processor. Our website has a good description of this processor here. The MP Xeon 7400 processor is made up of three Core 2 Duo T5000/T7000 series processors. This provides six (yes, six) cores for processing goodness. Each of the Core 2 Duo T5000/T7000 series processors provides two 32KB level 1 caches (one for data and one for code) and a 3MB unified level 2 cache. In addition to these two levels of cache, the MP Xeon 7400 processor provides a 16MB unified level 3 cache. The other major ingredient of this platform is the Intel® 7300 Chipset. This chipset provides four independent front side bus links to the four CPU sockets. In addition, this chipset provides a snoop filter and four channels of FBD memory.
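A quick, illustrative tally of the cache sizes just described, for one socket and for a fully populated four-socket system:

```python
# Cache hierarchy of one MP Xeon 7400 socket, per the description above.
CORES_PER_SOCKET = 6          # three dual-core pairs
L1_PER_CORE_KB = 32 + 32      # separate 32KB data and 32KB code caches
L2_PER_PAIR_KB = 3 * 1024     # one 3MB unified L2 per dual-core pair
L3_PER_SOCKET_KB = 16 * 1024  # 16MB shared last level cache

per_socket_kb = (CORES_PER_SOCKET * L1_PER_CORE_KB
                 + 3 * L2_PER_PAIR_KB
                 + L3_PER_SOCKET_KB)
system_kb = 4 * per_socket_kb  # four sockets on the 7300 Chipset

print(f"Per socket: {per_socket_kb} KB; "
      f"4-socket system: {system_kb / 1024:.1f} MB of cache")
```

That works out to roughly 25MB of cache per socket, and over 100MB across a four-socket system - a big part of why the large-footprint enterprise workloads discussed below respond so well to this design.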


If some is good, then more is better:

The key thing to take away here is that an MP Xeon 7400 system fully populated with top bin processors will provide a whopping 24 cores of processing power in a four socket system. This is great for the enterprise benchmarks I use for performance testing as those applications are multithreaded and designed for multi-core processors. The same may not be true for your application, so please keep that in mind.


Another thing to remember is that the MP Xeon 7400 processor's design follows a growing pattern in the Xeon processor family. Specifically, I am referring to the addition of the level 3 cache (L3), also known as the last level cache (LLC). This follows the design of the Potomac (Xeon MP 64-bit) and Tulsa (7100-series) processors. The value of the large LLC is that it reduces the number of cache misses that would require the machine to go to FBD memory for the latest copy of a cache line. This additional level of on-chip cache comes at a price, though: higher latency. While that latency penalty is relatively low compared to the latency to memory, it is important to mention here. Again, the LLC greatly benefits the enterprise benchmarks I use for performance testing, as they have a large memory footprint. The same may not be true for your application.



BIOS / Firmware / Drivers

It is very important to remember to update your system's BIOS, firmware, and OS drivers before you do any deep performance tuning. I cannot overstate the importance of this step. Your system's manufacturer should be able to provide the latest BIOS and firmware associated with your server. OS drivers are available through many sources these days; typically they can be downloaded from OS vendors, hardware vendors, the Linux open source community, or the platform's manufacturer.



Intel processors have traditionally provided four prefetchers. These are accessible via the model specific register IA32_MISC_ENABLE and sometimes via your OEM's BIOS. These features are meant to help the processor load data in a predictive manner to keep the cache hierarchy filled with the most pertinent cache lines. This is great if the application uses data in a somewhat predictable way. If your application uses cache lines in a random fashion, the prefetchers may negatively impact performance. My best advice is to test your application with the prefetchers enabled and disabled. Table B-3 (MSR 0x1A0) in this link covers the prefetchers I am referring to.
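To make the enable/disable testing concrete, here is a hypothetical sketch of decoding those controls from a raw MSR value (for example, one read on Linux with `rdmsr 0x1A0` from msr-tools). The bit positions below are my reading of the SDM table for the Core microarchitecture - verify them against your processor's documentation before flipping anything. Note these are *disable* bits, so 0 means the prefetcher is on:

```python
# Prefetcher disable bits in IA32_MISC_ENABLE (MSR 0x1A0),
# assumed positions for the Core microarchitecture - verify per SDM.
PREFETCH_DISABLE_BITS = {
    9:  "hardware (L2 streamer) prefetcher",
    19: "adjacent cache line prefetch",
    37: "DCU (L1 data) prefetcher",
    39: "IP (instruction pointer) prefetcher",
}

def prefetcher_status(msr_value):
    """Map each prefetcher name to True if it is currently enabled."""
    return {name: not (msr_value >> bit) & 1
            for bit, name in PREFETCH_DISABLE_BITS.items()}

# Example: a value with only bit 19 set disables just the
# adjacent cache line prefetch, leaving the other three enabled.
status = prefetcher_status(1 << 19)
```

A helper like this is handy for recording exactly which prefetcher combination was active for each benchmark run while you test your application with them enabled and disabled.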


Memory Population

As mentioned before, an MP Xeon 7400 series server will provide four channels of FBD memory. There are a couple of considerations here. First, latency to memory increases for every DIMM added to the system. This is important to note because you can keep the memory latency to a minimum by adding fewer high capacity DIMMs. Second, be sure to evenly distribute the DIMMs across all the channels. In other words, don't fill up all the slots on one channel and then lightly populate the rest.
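Both rules can be captured in a tiny planning helper. This is a hypothetical sketch, not a real configuration tool: reach the target capacity with the fewest (largest) DIMMs, then spread them evenly across the four FBD channels:

```python
import math

def plan_dimms(target_gb, dimm_sizes_gb=(8, 4, 2, 1), channels=4):
    """Pick the fewest (largest) DIMMs, then balance them across channels."""
    size = max(dimm_sizes_gb)            # fewer, higher-capacity DIMMs
    count = math.ceil(target_gb / size)  # DIMMs needed to reach capacity
    layout = [[] for _ in range(channels)]
    for i in range(count):               # round-robin keeps channels even
        layout[i % channels].append(size)
    return layout

# 64GB lands as two 8GB DIMMs per channel - never eight on one channel.
print(plan_dimms(64))
```

The round-robin assignment is the point: the same eight DIMMs stacked onto one channel would add latency on that channel while leaving the other three underused.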


An External Factor that may affect performance

Like many Intel designs, an MP Xeon 7400 series server will choose dishonor over death. I am referring to how it deals with high temperatures. The FBD memory inside an MP Xeon 7400 series server makes use of a thermal monitor on each DIMM. If the memory becomes too hot, the chipset will begin to throttle memory bandwidth in an effort to reduce the temperature of the system. This will have a drastic negative impact on performance. So, keep your server room nice and cool.






To wrap things up, we have looked at the architecture; the importance of BIOS, firmware, and OS drivers; the prefetchers; memory population; and the effects of high temperatures. Your application's performance will vary, but I hope I have given you some things to narrow down your testing. So, by now you might be asking, "Where do I start?" Well, not to be too self-serving, but I would check out more of our blog posts here. A great place to start for performance methodologies would be Shannon Cepeda's blog. This series is a great resource for anyone interested in computer performance methodologies.


Have you got the "VIBE"?

Posted by whlea Sep 29, 2008

Check out this video with Robert Harley of Virtual Blocks and Chris Parisi of HP Canada talking about a new virtualization solution for SMBs.





This video shows how radiologists and doctors are using the latest Intel technology with Vital Images software to make faster and better clinical decisions. A stroke patient is one example where "Time is Brain", meaning that as each minute passes, more of the affected area of the brain is lost. For these patients, accurate and timely information can mean a world of difference in how they live their lives...





If there is one thing that has stayed consistent in the computing industry over time, it's that performance doesn't stand still.  As our computing platforms' processing, I/O, and memory speeds continue to accelerate, it is important to remember a little thing called latency.


Often in the Ethernet world, throughput is the first and last performance metric of choice.  1 Gigabit and 10 Gigabit are the numbers that inspire thoughts of increased performance and improved computing power.  However, it's important to note that, in many applications, the transaction latency over the wire is really the key to unlocking high performance at the system level.  One of the primary reasons that some organizations have turned to InfiniBand and other I/O technologies for HPC and clustering in the past has to do with their desire to achieve very low latencies, not necessarily increased throughput.  If you look at a historical standard Gigabit Ethernet connection, you may see latencies around 125μs.  This may have been OK in the past, but as improvements at the application level as well as in the system hardware and CPU take hold, legacy Ethernet won't be good enough for HPC and clustering environments.



The interesting, and often overlooked, fact about Ethernet is that its latency characteristics are improving as the industry moves from 1 Gigabit to 10 Gigabit.  The faster throughput on the wire brings lower latency to some extent, but in addition, there have been several improvements in interrupt handling that drastically improve overall latencies when comparing legacy 1 Gigabit to 10 Gigabit. With a basic first-generation Intel® 10 Gigabit CX4 card, you can now see latencies approach 25μs without any special tuning.



What's even better is that Intel's 10 Gigabit networking silicon also has further enhancements for improving latency by introducing some new specialized Low Latency Interrupt (LLI) filters in the silicon.  These filters provide the hardware with a quicker reaction time to network packets that meet certain customizable criteria.  The filters can be tuned to have a rapid response to certain packet and traffic types.  With these kinds of LLI filters in place, latencies can be reduced further by another ~50% to ~14μs.



Going forward with 10 Gigabit there are new technologies and designs that can help push latency even lower to the sub-10μs threshold to keep Ethernet very competitive as a fabric not only from a cost and throughput perspective, but also from the perspective of latency.



And while lower latency is certainly important, the last piece that was really missing from the Ethernet performance puzzle was not just low latency, but deterministically low latency.  The key point is that for many applications, the worst-case packet latencies are relevant and very important.  Through application thread affinitization, an individual data thread can be piped directly between a network queue and a CPU core.  By more evenly distributing the networking workload between CPU cores in a predictable fashion, you get a deterministic kind of latency that does not stray far from the average, assuming CPU cores do not get oversubscribed.  An average latency of ~14μs is good, but the fact that you can get this with reasonable determinism is key for many applications and usages.
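A tiny sketch of why determinism matters as much as the average: the two made-up latency samples below (in microseconds) have the same mean, but very different worst cases.

```python
import statistics

def summarize(samples):
    """Return (mean, p99, worst-case) for a list of latency samples."""
    ordered = sorted(samples)
    p99 = ordered[min(len(ordered) - 1, int(len(ordered) * 0.99))]
    return statistics.mean(samples), p99, max(samples)

steady = [14, 13, 15, 14, 14, 13, 15, 14, 14, 14]     # deterministic
jittery = [9, 9, 10, 9, 40, 9, 10, 9, 25, 10]         # same mean, ugly tail

for name, data in (("steady", steady), ("jittery", jittery)):
    mean, p99, worst = summarize(data)
    print(f"{name}: mean={mean:.1f}us p99={p99}us worst={worst}us")
```

Both samples average 14μs, but a worst-case-sensitive application (market data, clustering) would much rather have the steady 15μs ceiling than the jittery 40μs outlier.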



Now, lower, deterministic latency is not just a theoretical benefit for certain niche applications.  Decreasing latency and improving overall latency characteristics while increasing throughput directly benefits the transaction rates that can be achieved with real-world applications.  An example of the improved performance is the latest Reuters Market Data System (RMDS) benchmark done by STAC Research on the 4-way Intel® Xeon® E7450 (Dunnington) using the Intel® 82598EB 10 Gigabit AT Dual Port networking adapter.  The testing showed the highest point-to-point server throughput to date on a single server in testing done by STAC, and total updates per second reached over 15 million.  Financial services industry administrators: I can see you drooling...



Latency and throughput numbers are great to talk about, but at the end of the day, real-world application performance on real systems is the key.  While there will always be a small subset of the high-end server market that needs the absolute lowest latencies provided by InfiniBand, 10 Gigabit Ethernet is gaining ground while maintaining its place as the default fabric of choice for multiple applications and traffic types.  I believe the best is yet to come as newer, faster, and more responsive technologies continue to roll out.



Ben Hacker

Anisha Ladha, Intel's e-waste Program Manager, talks about the Climate Savers Computing Initiative and how everyone can make a difference. Watch this video to see how individuals and companies can take steps to reduce computing's carbon footprint...





More updates coming in from the Oracle OpenWorld conference this week in San Francisco... I had the opportunity to catch Intel's CEO, Paul Otellini, during his keynote on Tuesday. There are a few segments from the keynote that really caught my eye, but this piece was the coolest for me... Check it out:





What a fascinating couple of weeks for Intel.  The week of Sept 8, my colleagues at Intel and I spent the week in Las Vegas at the SAP TechEd conference.  This show has over 6,000 attendees, including IT decision makers, developers, and partners.  I found this audience to be very technical and eager to understand the value of Intel architecture in relation to their SAP deployments.  The Intel team stepped up and delivered in many ways to educate this audience that Intel architecture is not only the best solution for mission-critical datacenter infrastructure, but also provides clear TCO benefits to the customer.


We were fortunate to be able to feature the new Intel® Xeon® 7400 processor series via our partners IBM and VMware.  IBM announced a world-record 2-tier SAP SD benchmark on the IBM System x3950 M2.  The result of 9,200 SAP SD benchmark users was achieved on the IBM System x™ 3950 M2, configured with eight Intel® Xeon® X7460 processors.  Absolutely amazing.


One of the best learning experiences from the conference was speaking directly to IT decision makers at Fortune 500 companies regarding the value of the Intel® Xeon® 7400 series processor in SAP deployments.  We were able to alleviate their concerns about the hardware costs associated with migration to ERP 6.0, demonstrate the business value of upgrading hardware, and show overall clear TCO benefits of the Core microarchitecture from Intel.  We backed it up with proven examples of TCO savings from multiple companies and even showed how Intel IT itself successfully migrated to ERP 6.0 while significantly minimizing business disruption.


I've also had the opportunity to chat with James G. White of HP. Check out the video below to see what HP has to say about Modernizing the SAP Landscape....







Great stuff, great show.  Loved it.

Good news for the enterprise - the latest "tick" of Intel's "Tick-Tock" model has made its way to the high-end 4-socket segment, and the energy efficiency improvements are sure to make an IT manager smile.


With the launch of the Intel® Xeon® Processor 7400 series, the entire Intel® Xeon® processor product line is now using 45nm process technology, hafnium-based high-k dielectrics, metal gates, and the enhanced Intel® Core™ Microarchitecture. The results are just what you have come to expect: improved energy-efficient performance, thanks to higher performance delivered by more cores and processor architecture improvements using faster, lower-leakage transistors.


What does this really mean to you?




  • Do you need better performance? How does up to a 50% improvement over previous generation Intel® Xeon® 7300 processors sound?

  • Is power a concern? A server configured with Intel® Xeon® 7400 Processors consumes approximately equal or less power than the previous processor generation.

  • Combined, the performance and power improvements deliver up to a 54% improvement in energy efficiency.

  • Given the breadth of Intel® Xeon® 7400 Processor choices - from 6-core 2.66GHz (130W TDP) processors, down to a 2.13GHz (50W) 4-core, to the 2.13GHz (65W) 6-core that is the lowest power-per-core processor on the market - you can choose the right processor to deliver the balance of performance and power that meets your compute needs.
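As a quick sanity check of how the first three bullets combine (assuming the "up to" figures hold simultaneously), ~50% more performance at roughly 97% of the power works out to the quoted ~54% efficiency gain:

```python
# Illustrative arithmetic only - the "up to" figures, combined.
perf_gain = 1.50      # up to 50% better performance vs. Xeon 7300
power_ratio = 0.974   # slightly less power than the previous generation

efficiency_gain = perf_gain / power_ratio - 1
print(f"Energy efficiency improvement: {efficiency_gain:.0%}")
```

The 0.974 power ratio is my assumed value chosen to reproduce the 54% figure; the general point is simply that efficiency gain = performance ratio / power ratio.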


In summary, with Intel® Xeon® 7400 Processors, you can deploy the same number of servers in your data center while increasing your performance capacity, or deploy fewer servers to complete the same amount of work while reducing power consumption. Using the most energy-efficient servers is a great first step toward increasing the efficiency and performance of your data center - look for a follow-on blog later this week from Dave Hill about other actions you can take to reduce your power consumption and carbon footprint, too.

I ran into Barry Kittner (Intel) and Marcos Peixoto (Sun) at the Oracle OpenWorld event in San Francisco today. Sun is showing the Sun Fire X4450, a 4-socket, 2U rack server. Sun is also talking about a unique way to evaluate the Sun Fire server; check out this video to find out how...







What do you think - not a bad deal, is it? Check out this link for more details: TryAndBuy

"Live From" Oracle OpenWorld and the Intel Innovation Zone... first impression... this is a big event. The Moscone Center here in San Francisco is rocking, and Intel has some really interesting and cool demos inside the Innovation Zone. Check out this one, where Intel announces a new solid-state drive and demos it at the show:





Check back for more demos and show updates...


I was pointed to a review on AnandTech's website of the new Xeon 74xx (Dunnington) 6-core processor.  The article does a pretty comprehensive performance review of the new server CPU, with benchmark results compared to other platforms.



A good read - Check it out!



Here's the 6th follow-up post in my 10 Habits of Great Server Performance Tuners series. This one focuses on the sixth habit: Try 1 Thing at a Time.



Like habit 2, Start at the Top, this habit looks easy to understand and to keep. But, due to the constant desire for productivity, I and most others I know in the performance community have broken it many times. Sometimes I even get away with it. But trying to keep this habit is important, because when I don't get away with it, breaking this rule results in even more work than I was trying to save.



The concept behind this habit is simple - when you are optimizing your platform or your code, make only one change at a time. This allows you to measure the effect of each change, and only accumulate the positive changes (however small) into your workload. I have seen instances, for example, where two small changes applied at the same time to a workload cancelled each other out: one caused a small decrease in performance and the other a small increase. If these changes hadn't been tested individually, we would have missed out on that performance gain.
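In code form, the habit amounts to a simple greedy loop: measure a baseline, try each candidate change on its own, and accumulate only the winners. The `measure()` function and the change effects below are stand-ins for a real benchmark, just to show the bookkeeping:

```python
def tune(baseline_score, changes, measure):
    """Evaluate candidate tunings one at a time, keeping only winners."""
    kept = []
    current = baseline_score
    for change in changes:
        score = measure(kept + [change])  # exactly one new change per run
        if score > current:               # accumulate only the winners
            kept.append(change)
            current = score
    return kept, current

# Toy model: effect of each change in "score points"; B actually hurts.
effects = {"A": +5, "B": -4, "C": +3}
measure = lambda applied: 100 + sum(effects[c] for c in applied)

kept, score = tune(100, ["A", "B", "C"], measure)
print(kept, score)  # ['A', 'C'] 108
```

Had A, B, and C been applied together, the combined score of 104 would have hidden both that B hurts and that dropping it buys four more points - the exact cancellation scenario described above.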



Another thing that can happen in a complex workload is that two changes that seem independent can interact with each other. As many developers know from fixing bugs, changing one thing may affect something else. Keeping all your changes separate can help you identify these interactions more easily.



You may be wondering when it is acceptable to break this habit. I think of performance methodology, and this rule in particular, as similar to the scientific method we learned in school. It's always good to follow it - doing so will help you quantify your successes and failures, stay organized, and defend your conclusions - but, you can still make a big breakthrough without it. In some cases, like when you are making small local changes to source code in completely different modules, or when you are changing two things you are certain won't interact, the habit can be broken. But the advice I give, especially to those involved in long-term optimization projects, is to follow it.



What has your experience been? Please share your "changing multiple things at one time" stories.



Keep watching The Server Room for information on the other 4 habits in the coming weeks.

The link below showcases a demo done by Parallels on an Intel 5400 chipset-based workstation at the recent Intel Developer Forum. It highlights innovation in virtualization using the I/O virtualization hardware-assist technology in Intel chipsets.




Parallels Demo on Intel 5400 Chipset



Don't be astonished - it's a real demo running beta code from Parallels for workstations. The workstation has dual graphics slots, which means two graphics devices can be plugged into the workstation. Using Intel VT for Directed I/O technology (Intel VT-d), the VMM can assign each graphics card directly to a VM independently. When this is done, the guest OS running in the VM is in full control of the graphics device. The guest OS driver and any associated accelerators (OpenGL or DirectX) can be used with the directly assigned graphics device. This lets the end user experience full graphics capability, including full 3D capability and near-native performance, even in a virtualized environment on a workstation. Intel VT-d hardware assist for virtualization in the chipset plays a vital role in making this innovation possible.



Without Intel VT-d, the graphics card is emulated by the VMM in software, and acceleration (like OpenGL and DirectX) is not possible. Direct assignment helps overcome the VMM overhead and lets the guest OS handle the graphics card directly.



This is a tremendous advantage for workstation users who today run applications in multiple OSes on different systems and do not want to sacrifice graphics performance with virtualization. On a single dual-socket workstation running virtualization in the future, the end user could very well run two different OSes side by side without compromising the quality of graphics and, by running each OS on a different processor (or socket), soak up the full processing capability of multi-core workstations.





Intel's launch of Xeon 7400 processors this week marked yet another great product from Intel that simply delivers on the basic virtualization infrastructure needs of a datacenter. In my view, what sets Intel apart is the consistency with which Intel has been providing the hardware capabilities essential for virtualization adoption and acceleration. These hardware capabilities have delivered incremental power-efficient performance for virtualization and platform-wide solutions that make virtualization adoption efficient.



Looking back a year, Xeon 7300 processor-based platforms, when launched, set industry-leading virtualization performance results for 4-socket mainstream servers. Now, the Xeon 7400 series processor, with six cores and built on energy-efficient 45nm technology, provides the industry's best virtualization performance for 4-socket mainstream servers. On VMware's VMmark, Xeon 7400 scaled up performance (over the best published Xeon 7300 score) by approximately 35%. On Hyper-V with the vConsolidate virtualization benchmark, Xeon 7400 delivered 40% better performance and 52% better performance per watt over Xeon 7300, as published. This performance trend is fairly similar to how the 45nm Quad-Core Xeon 5400 (launched Q4 '07) delivered up to 20% better performance than the Quad-Core Xeon 5300 (launched Q4 '06) in the 2-socket space. From my viewpoint, the key for IT managers is not just these performance statistics but also the ability to get these performance increments on a predictable cadence within the same power envelope. Socket-based virtualization software licensing means better TCO as well.



In the same vein as performance, I mentioned platform solutions for efficient deployment as the key element of these hardware capabilities. Efficient deployments of virtualization and emerging usage models of virtualization require performance and something more... what I refer to as capabilities. But why? It requires some simple understanding.



New emerging usage models of virtualization beyond consolidation, referred to as virtualization 2.0 - like load balancing, high availability, and disaster recovery (HA/DR) - require resource pooling. Once these resource pools are architected within the datacenter, IT managers typically do not want to change them just because they want to add a new generation of servers to the resource pool (and retire a few older ones). To support this requirement, Intel delivered a new capability called Intel VT FlexMigration. With appropriate software support, like Enhanced VMotion in VMware ESX 3.5 Update 2, IT managers can simply roll a Xeon 7400 processor-based server into a resource pool alongside Core Microarchitecture-based previous-generation servers (like Xeon 5300, 5100, and 7300 series processors).



Another requirement for efficiency in highly utilized servers, as in the case of large consolidation or load balancing, is a robust and efficient networking solution that supports the increased processing capability. The load balancing and HA/DR usage models in particular rely on VMs moving over the network. An efficient networking solution means efficient virtualization 2.0 usage model deployment. Intel networking adapters that can be used on Xeon 7400 based servers have a feature known as VMDq, which can accelerate networking performance. On a 10GbE NIC using ESX 3.5 Update 1 software, VMDq delivered >2x improvement in throughput, which means higher performance; and because VMDq is a hardware assist, it also reduces the VMM overhead, freeing CPU cycles to run applications rather than the VMM. New Ethernet adapters also add QoS capabilities, like bandwidth allocation, that could provide even better control in terms of latency and traffic.



Finally, the virtualization 2.0 usage models rely heavily on centralized storage. When a VM is moved from one physical server to another in the resource pool, if the entire resource pool has a ubiquitous view of the data the VM was using, then the transition and resumption of the VM on any server in the pool is fast and seamless. Hence, cost-effective centralized storage connectivity is very desirable for these virtualization 2.0 usage models. Intel has therefore been a leading force in working with industry standards to make Ethernet robust and in developing Fibre Channel over Ethernet standards and products that can carry both SAN and LAN traffic on the same fabric.



Collectively, all of this highlights how Intel is showing leadership in products that matter both to consolidation and to the virtualization 2.0 usage models beyond consolidation.

So, after four days of VMworld, there were two announcements that really resonated with me as an end user proxy within Intel. For those who don't know me, my team's role is to look at the new technologies that are coming (or might come) from Intel through the eyes of the end user. We try to understand and quantify whether end users really find any value in these technology innovations and, through hands-on work in our own labs and directly in end user IT environments, identify any technical and ecosystem barriers to adoption. When we find barriers, we work across the industry to address them. My team is specifically focused on the data center, and we have a big focus on data center virtualization. So, yes, the vision that Paul Maritz outlined in his keynote makes absolute sense to me. Plenty has been written about the keynotes (and maybe I'll add my own thoughts in a bit). I wanted to talk about a couple of specific things that Paul mentioned that, to me, were very encouraging and significant.


Technology innovations that directly and specifically address an expressed customer need don't always come to market quickly, especially if they require coordinated effort across different companies. I also don't believe the new conventional wisdom that, with virtualization, "the hardware doesn't matter." Two announcements at VMworld demonstrate great examples of the former and give the lie to the latter.



The first announcement was Cisco's unveiling of the Nexus 1000V virtual switch. One of the big issues for IT shops deploying virtualization has been that it's next to impossible to integrate virtual networking into existing network management processes, roles, and responsibilities. It's been the CCNEs who have enabled physical networks to be managed for reliability, security, and compliance, and, until now, virtual switches have not allowed the separation of duties and transfer of skills that are embodied in the CCNEs. The Nexus 1000V, a virtual softswitch that will launch next year (according to the demonstrator in their booth), will run side by side with the VMware vSwitch inside ESX Server and give CCNEs full Nexus OS access for configuring and monitoring the vSwitch using the same interfaces they're used to on the "hard switches". It can also enforce a separation of duties between the network administrator and the server administrator. This issue is something we've heard about repeatedly from end users as a barrier to adoption for virtualization 2.0 in the enterprise, and Cisco and VMware deserve a lot of credit for collaborating closely to make this a reality. (BTW, it also looks to me like the first tangible evidence that higher-level networking functionality is beginning to migrate back to where it started: to software on general-purpose computers. Perhaps more on that later.)



The second was the announcement by VMWare of Enhanced VMotion and by Intel of VT FlexMigration. (Sorry if this part seems a little self serving from an Intel guy.) These two capabilities, working together, address another key need of end users. Until now, each new generation of CPU needed to be maintained in a separate resource pool in the data center. If you didn't, and you VMotioned backward from a new generation to an old one, it was possible that the guest application would make use of an instruction that didn't exist in the older generation. So, that kind of migration was not permitted. This restriction meant that end users had to either grow resource pools by purchasing older-generation hardware (foregoing the energy efficiency and performance gains of the new hardware) or live with increasing fragmentation into resource "puddles". With Enhanced VMotion and FlexMigration, the hypervisor can now assure that the backward-migrated VM doesn't use any of those new instructions. Voila, the backward migration can be allowed! Pools can be grown by adding new-generation servers to a pool of older servers, a much smoother and more efficient approach to evolution in the data center.
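The core idea is easy to sketch. The following is an illustrative toy model, not VMware's or Intel's actual implementation: the hypervisor exposes to guests only the intersection of instruction-set features across all hosts in the pool, so a VM can never depend on an instruction an older host lacks. The feature names and host labels below are hypothetical.

```python
# Toy sketch of the feature-masking idea behind VT FlexMigration /
# Enhanced VMotion. Hypothetical per-generation feature flags:
HOST_FEATURES = {
    "xeon7300": {"sse", "sse2", "sse3", "ssse3"},
    "xeon7400": {"sse", "sse2", "sse3", "ssse3", "sse4.1"},
}

def pool_baseline(hosts):
    """Features a guest may be shown: the common subset of every host."""
    return set.intersection(*(HOST_FEATURES[h] for h in hosts))

def can_migrate(vm_features, target_host):
    """Migration is safe if every feature the VM sees exists on the target."""
    return vm_features <= HOST_FEATURES[target_host]

pool = ["xeon7300", "xeon7400"]
vm = pool_baseline(pool)            # guest sees the masked (baseline) set
print(can_migrate(vm, "xeon7300"))  # True: backward migration is now allowed
```

Without the mask, a guest started on the newer host would see `sse4.1`, and `can_migrate` to the older host would return False, which is exactly the restriction described above.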



Now, in retrospect, both of these innovations seem "obvious", but actually getting them to market is challenging, and significant challenges still remain to implement them in real-world environments. Perhaps more significant is that they both required the two companies to recognize the need, align their business interests to address it, design a joint solution and coordinate the launch of their respective product offerings. Hard enough to do this across teams in the same company, let alone across two companies.



So, do you see other technology challenges like this with your virtualization projects? Simple problems that seem obvious but no one seems to be addressing?

Each year for the last 10 years, the innovators of VMWare have hosted a user and partner conference to discuss virtualization technologies, ideas and services for the IT industry. This year's event, in Las Vegas, brought together over 14,000 of the world's foremost thought leaders, developers and users. As the "Virtualization World" converged on Las Vegas, there was a prevailing forecast that has begun to permeate our virtualization landscape: Cloud Computing. Paul Maritz, in his initial keynote address as CEO of VMWare, outlined the importance of Cloud computing and the role that VMWare and their customers will play in defining the Enterprise Computing "forecast" over the next several years. It was a thoughtful direction for the world's leading innovator in virtualization software technology. I personally found it rather gratifying to see Mr. Maritz's thoughtful demeanor and his acknowledgement of the role VMWare co-founders Diane Greene and Mendel Rosenblum played in shaping this new direction. His understated prose failed to acknowledge the role he himself has played over the years in establishing it, which only made clearer in my mind why he may be the ideal leader to help us realize the forecast for cloud-based compute models.


So what does it all mean? Cloudy forecasts are always difficult to predict, and predictions can become self-fulfilling prophecies or embarrassing missteps. What is clear, in my opinion, is that Cloud computing will drive meaningful change across a wide range of industries in rapid succession.


Let me explain the logic: Organizing and managing compute, network and application usage models has been a very elusive endeavor for many years. IT departments cannot always predict application load, network requirements and storage availability. If you provision for the worst (or highest use) case scenario, you often overbuild. In other cases, application popularity or changing business conditions create under-capacity and infrastructure failure. Those of us who have launched Application Service Provisioning infrastructures bear the scars of failures, excitement of success and hope for the future. VMWare, Microsoft, EMC, Google, Amazon and many others have made a concerted effort to "get it right" this time. Cloud infrastructures using virtualization technologies are providing opportunistic ways for developers and end users to test scalability theories of traditional client/server compute models. These same "Clouds" are providing cost-reduced internal resource infrastructures that make vast computing, network and application resources available for everyday usage at relatively low entry points (a la Amazon's EC2). However, determining which part of the "Cloud" to make available for public vs. internal consumption will be defined by innovative new technologies that have yet to be announced. Interoperability, compatibility, performance and scalability are all design points which the industry must consider.


Visionaries in this space abound: Vint Cerf (deserves more credit than he is given), Ray Ozzie, Reuven Cohen (you may not have heard of him yet), Alan Gin, Marc Benioff, Ed Bugnion, K.B. Chandrasekhar, Pete Manca and many others have been working diligently for years behind the scenes to make the promise of Cloud computing real. Industries such as Big Pharma, Telecom, Financial Services and Oil & Gas will reap tremendous benefit from well-defined industry "clouds". The role of Ethernet will be a critical design point for these next-generation infrastructures as 10GbE+ reduces latency and response times and delivers application QoS. At Intel, we are very proud of our engineering and process manufacturing prowess in the development of multi-core compute technologies, rightfully so in my opinion, but the future of the "Cloud" will challenge us to re-examine our design methodology, increase our price/performance-per-watt cadence and deliver exciting new innovations throughout our server and client platforms.



Virtualization innovation has provided a "silver lining" for today's Cloud infrastructures. Where there are transitions or inflection points in the technology industry, there is opportunity. At VMWorld 2008, the virtualization industry has begun the process of delivering technologies in a world beyond the hypervisor. Virtualization 2.0, as outlined by Doug Fisher, Intel VP of Software and Solutions Group, and Steve Herrod, CTO of VMWare, is a step towards providing the innovation required to make Cloud infrastructures real. As for the next steps, the new pioneers (a la Simon Crosby of Citrix) are building tools which provide increased ROI and decreased cycle times for IT managers. The future of the IT cloud is in their capable hands, and in the hands of the IT innovators within each company focused on providing compute infrastructures designed to scale (and shrink) with the businesses we serve. VMWorld has yet to disappoint; in 2008, it reminds us that even on a "Cloudy" day there is a chance for change.


Here's a short video of a conversation with Dave Martin of VMware about VT FlexMigration.





More news from VMWorld 2008, Las Vegas. Doug Fisher, Intel V.P. gave a keynote during the VMWorld conference. One of the more interesting elements brought Steve Herrod, Sr. V.P. and CTO of VMware on stage to talk about how Intel and VMware are collaborating to deliver leading Virtualization Deployments. Click on the video to see what they have to say.....






Day 1: I'm live from VMWorld this week experiencing the virtualization event of the year. I'll be updating this blog with happenings from the Intel booth and around the show floor, including some really cool video interviews with Intel partners who are making a big impact in the virtualization world and giving IT managers real advantages over previous-generation solutions. Here's a video showing the Xeon 7400's near-perfect scalability from 8 to 24 to 48 cores. Wow, 48 cores, that's cool!





If you liked the first video, check out this one where Jon Markee talks about Intel Virtualization Technology (VT) and how FlexPriority improves performance and reduces boot time in your virtualized environment.






Day 2: Here's another video from the Intel Booth showing more examples of Intel Virtualization technology.



About 3 months ago I delivered a 2-part video series on the benefits of 45nm process technology (part 1, part 2). As time has progressed, the Intel roadmap has continued to evolve and deliver increased benefits. On Sept 8th, 2008, we introduced four new 2-socket processors in our Xeon 5400 product line, and this past Monday (Sept 15th) we introduced a whole new series of products for our 4-socket product line, the Xeon 7400 series (codename: Dunnington). All of these new products feature 45nm process technology and the enhanced Intel Core Microarchitecture.


Here are some highlights of the benefits available for IT solutions


Better Performance: Xeon 7400 features up to six cores and 16MB of cache per processor. It is staggering to think about what an individual server is now capable of doing.


  • Over 1 million transactions per minute (8-socket TPC-C* result)
  • Over 600,000 transactions per minute (4-socket TPC-C* result)
  • Over 500,000 business operations per second (4-socket Java SPECjbb*2005 result)
  • Learn more about performance results of the Xeon 7400 products here


Energy Efficient: The performance of the 45nm processors (including the six-core) is being delivered in the same power/thermal envelopes as the previous quad-core processors, making the performance-per-watt ratio particularly appealing. That helps manage data center space and minimize cooling challenges while growing performance capability. Many customers are refreshing older servers and seeing dramatic reductions in total cost of operations and space requirements. Evaluate your potential benefits with the Xeon estimator
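The performance-per-watt arithmetic is simple enough to show directly. The figures below are made up for illustration only (they are not measured results); the point is that more performance in the same power envelope raises the ratio by the same proportion.

```python
# Hypothetical illustration of the performance-per-watt comparison above;
# the benchmark scores and wattages are assumed, not measured results.

def perf_per_watt(score: float, watts: float) -> float:
    """Energy efficiency: benchmark score divided by average power draw."""
    return score / watts

# Two generations in the same power/thermal envelope (illustrative only).
old_gen = perf_per_watt(score=100_000, watts=500)  # quad-core, 65nm
new_gen = perf_per_watt(score=165_000, watts=500)  # six-core, 45nm

improvement = (new_gen / old_gen - 1) * 100
print(f"perf/watt gain: {improvement:.0f}%")  # 65% with these assumed numbers
```

Since the denominator is unchanged, any real reduction in power draw (as 45nm parts often deliver below their rated TDP) only improves the ratio further.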


Investment Protection: All 45nm Intel Xeon processors (Xeon 7400 and Xeon 5400) are platform compatible with their 65nm quad-core predecessors (Xeon 7300 and Xeon 5300, respectively), so adoption, certification and integration into existing IT environments require less effort.


Flexible Virtualization: All 45nm Intel Xeon processors include a technology called Intel VT FlexMigration that allows the newer 45nm processors to be live-migration compatible with previous 65nm Intel Xeon processors. So, with current virtualization software support, IT customers can migrate virtual machines across multiple generations of Intel processors, all in one big pool of computing.


Better Business and Science: Many of the world's top companies are using Intel's 45nm products coupled with their software solutions to enhance their IT infrastructure. Last week CERN opened the Large Hadron Collider, focused on recreating the big bang. Read more about how 45nm Intel technology is playing an integral role in gaining insights into the formation of the universe, or check out how your peers are benefiting from new technology at


Eco-Friendly: If your company or boss has a green thumb, you may be interested to know that the new Xeon 5400 products are now built with materials that are both lead-free and halogen-free (halogens are materials known to contribute to global warming).


Finally, I came across this video where Nathan Brookwood (analyst from Insight 64) discusses the new Xeon 7400 product (Dunnington) and his outlook on technology roadmaps moving forward.


In the next few weeks, I will be compiling and answering the top 6 questions around 45nm … so ask away.



Virtualization is the big thing, everybody is doing it - just read the in-flight magazine to see why you should be virtualizing your data center... While it is true that virtually everyone in the Fortune 500 has begun to virtualize their data center, it is also true that most servers are still not virtualized.

In other words, the data center landscape is still mostly an opportunity. The software is mature and there are multiple viable solutions, but there are still many questions about how "best" to proceed.


As an enterprise engineer working with enterprise customers, I am inevitably asked where the sweet spot is. The reality is, there isn't one. Or rather: "It depends". In general, larger (4-socket) servers provide an edge in efficiency, as there are more shared components: board, memory, power supplies, etc. Large servers can also provide more headroom if most of your VMs are low utilization but any of them can spike way up. The launch of Intel's six-core Xeon 7400 series based servers (and their record-breaking virtualization performance) has added to the interest: is it time to go big?

What does it depend on?


  • How big are your VMs? Machines today are quite powerful. We have seen 10X growth in compute capacity in just the last 6 years. The application that filled 37% of your 2003-vintage server won't even make a dent in a modern Xeon based server. In other words, most VMs are much smaller than your server, whether 2-socket or 4-socket. There are still tasks, like decision support, that scale as big as your machine will go, but with average enterprise utilization down around 12% (on old hardware), most physical machines fit tidily inside a VM.

  • How spiky are your VMs (in resource demand: compute, memory, network)? By doing some resource profiling, you can understand where your servers fit best.

  • How many VMs do you want on each PM (physical machine)? You can put more on 4-socket hardware (efficiency), but you get greater redundancy from a bunch of 2-socket hardware (depth).



Fortunately, you do not have to solve this linear programming problem before you start. In reality, the tools are making it easier to solve. Using your favorite VMM manager (choosing one is another discussion) with Intel's VT FlexMigration technology, you can pool together 1-, 2- and 4-socket current and future generation Xeon platforms and move the workloads (automatically or manually) to optimize your resource utilization.
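To make the "linear programming problem" concrete, here is a toy first-fit-decreasing placement sketch for the sizing questions above. The capacities and VM demands are hypothetical single numbers; real placement tools weigh CPU, memory, network and spikes across multiple dimensions, so treat this only as an intuition pump.

```python
# Toy consolidation model: pack VM demand figures (e.g. % of one host's
# capacity) onto the fewest hosts using first-fit decreasing.

def first_fit_decreasing(vm_demands, host_capacity):
    """Return how many hosts of the given capacity the VMs need."""
    hosts = []  # each entry is the remaining free capacity on one host
    for demand in sorted(vm_demands, reverse=True):
        for i, free in enumerate(hosts):
            if demand <= free:
                hosts[i] = free - demand  # place on first host that fits
                break
        else:
            hosts.append(host_capacity - demand)  # start a new host
    return len(hosts)

vms = [12, 8, 30, 5, 22, 9, 14, 6]     # assumed demands, % of a 4-socket box
print(first_fit_decreasing(vms, 100))  # prints 2
```

Doubling `host_capacity` (the "go big" option) halves the host count here, but as the bullets above note, it also concentrates your redundancy risk onto fewer machines.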

There have been a number of blogs written recently about the upcoming Xeon 7400 (Dunnington) processor (I've listed a bunch of them at the bottom of this thread if you're interested). I'm happy to report that it's not upcoming anymore - today Intel formally launched this new processor at a press event in San Francisco. The event consisted of a presentation to the press by Tom Kilroy, Intel VP and GM of the Digital Enterprise Group, followed by a lively end-user panel discussion with execs from Yahoo, Oracle, MySpace and Verisign, moderated by Intel VP and CIO Diane Bryant. It was really interesting to hear about the challenges these companies face today in their data centers and the benefits that Xeon platforms bring to them.


Some of the takeaways from Tom's speech were:


  • This is Intel's newest high-end Xeon® server processor. It's socket compatible with the previous generation Xeon® 7300 based platforms so that means it should allow IT to easily qualify and introduce Xeon 7400 servers into their environment.

  • The processor is based on Intel's 45nm high-k process technology, 6 cores per chip and 16MB shared cache memory, and has advanced virtualization capabilities like VT FlexMigration.

  • It's built for virtualized environments and data demanding workloads (i.e. databases, BI, ERP and server consolidation.)

  • Servers based on the processor are expected to be announced from over 50 system manufacturers around the world, including four-socket rack servers from Dell, Fujitsu, Fujitsu-Siemens, Hitachi, HP, IBM, NEC, Sun, Supermicro and Unisys; four-socket blade servers from Egenera, HP, Sun and NEC; and servers that scale up to 16-sockets from IBM, NEC and Unisys.

  • It's already set new four-socket and eight-socket world records on key industry benchmarks for virtualization, database, enterprise resource planning and e-commerce. I found a link that summarizes these here.


Here's also a copy of the press release I found, and an article I just found on EE Times.


Previous Blog Links:



HP Announces World Record 4-Socket TPC-C Result



IBM Announces World Record 8-Socket TPC-C Result




Previous Video Links:



Announcing Demos on Demand



Turtle Entertainment-Virtualization of Gameservers



HP Announces World Record 4-Socket TPC-C Result



IBM Announces World Record 8-socket TPC-C Result



XEON 7400-series Benchmark Results



Boyd Davis Talks Dunnington Performance



IBM Announces World Record SAP-SD Result



I had the privilege of being invited to the Microsoft Virtualization Launch event in Bellevue, Washington on September 8, 2008, on the occasion of the release of Microsoft Windows Server* 2008.  I attended the keynote presentations and a number of technical sessions.  I was especially interested in calibrating my experience from the few months prior, working with Hyper-V as an architect and integrator putting together a demo that was delivered for the Intel Developer Forum, which took place in San Francisco on August 19-21.  Please refer to "[p-11499]" for a detailed account.



The solidity of the product during the months I worked with it was impressive.  I've seen behaviors in previous products that are technically correct.  However, oftentimes with new products (and also with presumably mature products) the system just checks out for a time, or the results of certain operations are ambiguous, yielding a subjective feeling of "mushiness" that does not inspire confidence.  None of this happens with the Hyper-V manager user interface.  The response was always crisp and the system was good at informing the user about what was happening.  At the outset, if the BIOS settings are not correct for running virtualization, it will remind you in no uncertain terms to turn on hardware support for virtualization and the execute-disable bit, even to the point of asking you to power cycle the system before resuming.  The claims by Microsoft engineers during the conference about OS and VMM stability are consistent with my own experience.



Hyper-V comes with an extensive set of tools designed to facilitate large-scale deployments, yet they are also useful to small enterprises.  Examples mentioned include Microsoft Assessment and Planning (MAP) and integration with System Center.  The available tools support the complete deployment life cycle.



Microsoft invited Thomas Bittman, a VP and distinguished analyst from Gartner. His vision for virtualization is similar to the one painted extensively in a book I co-authored.  The alignment of his ideas with those in the book caught my attention.  See "[p-11383]". His thesis is that the impact of virtualization comes not from the technology itself, the capability to consolidate workloads and save energy, but from the changes in business models it brings.



"It is now less about the technology and more about process change and cultural change within organizations," said Mr. Bittman. "Virtualization enables alternative delivery models for services. Each virtualized layer can be managed relatively independently or even owned by someone else, for example, streamed applications or employee-owned PCs. This can require major cultural changes for organizations."

For the event next week @ VMWorld we have created a virtual booth, check it out here:  Intel Virtual Booth


Also follow Hank Lea as he explores VMWorld and blogs about the event.

I am currently sitting in PDX (Portland, Oregon) waiting for my flight to Dallas, then on to Sao Paulo, Brazil, then on to Porto Seguro, Brazil.


This is where our PR team is running an event called Intel Editor's Day (IED).  This is the 3rd such event this year, and I have had the pleasure of presenting Server Benchmarking at each one: the first was in Mexico, the second in Costa Rica a few weeks ago.  IED offers regional journalists a chance to get product information and demonstrations from Intel so they can be ready to report what they see, review, and evaluate when looking at these products in the market.


It's a team effort with these events.  I am 'the server guy', yet I am carrying a few MIDs and even an Atom demo card for my brethren (and sistren?).  I even have a wafer with me... something of the 45nm type.   ;o)


The reason I am going is to talk with the 30+ journalists about distinguishing between client and server benchmarks.  If you (the reader) don't know that there is a difference, there definitely is and you should let me know so we can offer some information for you to read about it. 


My goal at IED is to show a few benchmarks, talk about a few more (SPEC, TPC, etc.), and ultimately learn about how they might run these benchmarks.  Education is the goal, but it goes both ways.


I'll add more when I get a chance... I have to grab a bite before getting on the plane.

I'm getting ready once again to hit the road and this time I'm heading out to SAP TechEd conference next week (Las Vegas, Sept 9th-12th). I want to give a shout out to my colleague at SAP, Craig Cmehil who posted his blog HERE. As Craig mentions, a great place to network at the event is the Community Clubhouse which is in its 5th year. I also plan to spend some time in the Networking Lounge trying to catch the vibe of the event.


I'll be making daily updates to this blog as the event happens including cool Dunnington videos, interviews, and demos from the Community Clubhouse. If you can't make it out to the event, check back for updates here and you may want to browse the blogs on the SAP Community Network.


Wm. Hank Lea

Server Room Community Manager


LIVE UPDATE: 12:25pm


Here's a shot from the Intel Booth:



Check back later today for a cool video showing a Formula 1 racer simulation....




Here's that F1 video I was promising earlier, and no, its not me driving. I was much slower than this guy....



More cool stuff coming your way tomorrow, check back to find out.

This is the 2nd in a 3 part series of video blogs that looks at Virtualization, Grids and Cloud computing. Follow this link for the first part: Part 1


The videos explore these concepts first individually and then try to show that, taken together, the combination is greater than the sum of individual deployments of the technologies. In reality, all three are required to begin to realize the vision of the dynamic, efficient datacenter, but I would caution that they are necessary but not sufficient. What it will take to fully realize this vision ... well ... that is a topic (set of topics?) for another day.


As you view the video, please bear in mind that there are a couple of underlying assumptions in the statements I make; unfortunately these got eliminated in editing to meet video duration constraints. A quick recap of the assumptions:


a) the target environment is the enterprise (both enterprise IT and enterprise data centers) - some of the thoughts apply to SMBs but may not always.

b) the discussion on clouds is really focusing, primarily, on "Internet" clouds and not "Private" clouds (there is a reference to and a motivation for "Private clouds" when the technologies are brought together, but in this discussion the focus is on Internet clouds unless mentioned otherwise).

c) The perspective on Grids in the video is broad. Most folks are used to associating Grids with HPC; it would be very helpful to suspend this association (at least while watching the video and, if I can get you to, maybe into the future as well). The HPC association is very limiting: it represents only one use of Grids and does not illuminate what Grids really are or, more importantly, their potential.


This and the previous video introduce the concepts as I see them. Some tangible examples of how they may come together are presented in the next video ... promise ...


So here is the video:






Now that you have heard and seen the video .... a few more observations not discussed in the video ...


  • Grids represent an infrastructure management paradigm. Actually, once you step beyond base machine virtualization (where the opportunities for real differentiation are fast diminishing), you will find that the solutions most vendors have or are developing to manage these VMs borrow heavily and, in some cases, almost entirely from Grid technologies - but they won't tell you that. (Once you take the "broader view" it becomes apparent that many Intel platform technologies become very relevant to Grids, and so Intel platforms can be deployed as more than just the "simple and commodity" hardware they are currently viewed and deployed as.)


  • Another point to note is that Clouds and Grids are closer than one may think. In many cases a cloud is realized by a simplification of a Grid, made possible by application to a defined context determined by the cloud service offered. Furthermore, some of the complexities of Grid computing (under the covers) have been masked by the introduction of a portal or some other simplifying assumptions and implementations. Many of the well-known clouds are implemented using Grids. Nonetheless, it is very important to keep Clouds and Grids distinct so that one can understand these paradigms and extract maximum value. The moniker "cloud" represents a use paradigm (against a highly elastic service), whereas Grids represent an infrastructure paradigm.


  • On a larger note: One way to bring these topics together conceptually is to see virtualization as the paradigm to substantiate the entities (resources or otherwise) that can be/are virtualized in a context, Grids as the paradigm to manage these virtualizations, and Clouds as the paradigm for use of these managed virtualizations.


I will build on these assertions in my next video ... In the meanwhile am looking for discussion on these topics -


  • What do Grids and Clouds mean to you?

  • Do the views represented here make sense or are there other ways in which one may approach these topics?

  • What are some interesting ways you have used these technologies in your line of work? What are some problems that were solved or new usages created?

  • Are Grids as a topic of discussion dead/passé, or are they as relevant today as they were a few years ago - why?

