Billy Cox

Intel Cloud Builder

Posted by Billy Cox Oct 30, 2009



For those of you implementing the infrastructure of a cloud, often called IaaS or Infrastructure as a Service, one of the challenges can be “where to start?” With the myriad of hardware options and the variety of software solutions, finding a starting point can be daunting.


For example:


  • What server configurations are optimal?
  • How should the network be structured?
  • What is the optimal storage configuration?
  • I really don't want to write this software myself, so which cloud management stack best suits my needs?


Assuming that cost reduction and/or agility are the reasons you are building a cloud (true for the vast majority of customers), there is a huge benefit in using a largely homogeneous architecture: identical server, network, storage, and management configurations across the cloud implementation. This architecture simplifies both maintenance (remove a server from service when it fails; replace hardware only when enough machines are out of service to justify a visit to the data center) and operations (no special cases). Getting to the point where workloads can be hosted in this environment requires effort, but the payback comes fairly quickly once you complete the transition.
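The maintenance policy above can be sketched as a simple threshold rule. The pool size, failure counts, and 5% threshold below are illustrative assumptions, not figures from the program:

```python
# Sketch of the homogeneous-pool maintenance policy described above:
# failed nodes are simply removed from service, and a data-center visit
# is scheduled only once failures justify the trip. All numbers are illustrative.

def needs_service_visit(pool_size: int, failed: int, threshold: float = 0.05) -> bool:
    """Return True once the failed fraction of the pool justifies a repair visit."""
    return failed / pool_size >= threshold

print(needs_service_visit(1000, 12))  # False: route around the dead nodes for now
print(needs_service_visit(1000, 60))  # True: enough failures to justify a trip
```

Because every node is interchangeable, the decision needs no per-machine special cases; the only input is how much of the pool is dark.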


Even with this in mind, you still have to design the hardware infrastructure and then select a set of management tools.


Intel recognizes this need and has formed the Intel Cloud Builder program to help in this ‘getting started’ phase. If you are already well down the road to building a cloud, you will likely find the output from this program useful to understand the options available in the market.


Intel® Cloud Builder is a fairly simple program with a powerful output:


  • use a defined hardware blueprint,
  • use a cloud management software stack,
  • run the combination on an Intel-hosted cloud test bed, and
  • document the results.


For more information, please go to Intel Cloud Builder Program.



Billy Cox

Director, Cloud Strategy

Intel Software and Services

Just completed an 8-city roadshow with Dell and Red Hat, talking to customers face to face over lunch about how to plan and execute a migration from UNIX/RISC to RHEL/Intel/Dell.


I had the opportunity to attend two of the seminars, in different cities, and meet a number of different customers.  Customers came to our seminar looking for a template to quickly develop a plan that could also be approved quickly, getting the project going to counter the enormous management pressure they face to do more with less.


Most of the customers I had conversations with were from medium-sized businesses.

One customer clearly said it was a no-brainer: there is no reason to advocate purchasing new AIX/Power or Solaris/SPARC machines any more.

Another customer said that much of the knowledge accumulated managing UNIX servers can be applied immediately to migration activities, though he saw the need to get his UNIX administrators trained on RHEL through the training programs offered by Red Hat.

Consistently, there was a sense that customers needed to act now, for a short-term result.


For those who need to get going on planning, Red Hat has put together a very good migration guide. The document helps you start thinking about and resourcing your migration.  Register to get started.  A lighter version is also available to give you an idea.


Please contact your Red Hat, Dell, or Intel representative for the next steps, and if you don't have such a resource, let me know.  I'll help dispatch one to you.


Why migrate? Why now?


There has never been a better time to migrate your proprietary RISC servers running UNIX® to Intel® Xeon® processor-powered Dell™ PowerEdge™ servers running Red Hat® Enterprise Linux®. Why? Four compelling reasons. First, cost, cost, and cost again. This industry-standard platform can reduce your capital expenditures as well as your operational costs for a lower total cost of ownership (TCO). Second, choice and flexibility. Because you're not locked into proprietary technologies, you have substantial choices that keep you nimble and agile no matter how your business needs evolve. Third, simplicity. The Red Hat-Dell-Intel platform just works. And acquiring all the products and services you need from one source, Dell, reduces the complexities of both technology procurement and support. Finally, performance. In these challenging economic times, migrating from RISC and UNIX to a Red Hat-Dell-Intel solution is an easy and fast way to accomplish more with less, bringing true value to your business.



Power Your Enterprise


Because Red Hat Enterprise Linux is optimized for the Intel Xeon processors on which Dell PowerEdge servers are based, you can support your business's most demanding challenges. For starters, Red Hat Enterprise Linux 5.3 takes advantage of the Intel Xeon processor 5500 series to deliver more than twice the performance of previous-generation Intel processors.1 Because Red Hat Enterprise Linux incorporates Intel's energy-efficiency enhancements, such as integrated power gates and automated low-power states that support low-latency changes among power states, you can lower power consumption during off-peak times. This has the additional benefit of reducing datacenter cooling requirements. You achieve previously unattainable scalability with support for up to 255 central processing units (CPUs) and one terabyte of memory. And Red Hat Enterprise Linux supports Intel Hyper-Threading Technology to enable advanced parallel computing.



Learn More


To learn more about migrating from a proprietary RISC/UNIX platform to a Red Hat, Dell, Intel solution, navigate to, then click on White Papers.

Do you ever wonder where Spam comes from?  I have no idea where the meat-like version of Spam comes from (nor do I wish to ponder that mystery). But it is pretty well established that a huge component of the e-mail and IM Spam that we all know and hate is generated by automated programs (bots) installed on thousands or even millions of unsuspecting systems.  These bots are remotely controlled via command-and-control or even peer-to-peer networks (botnets) to do the bidding of the bot developer—such as propagate Spam or other malicious software or generate denial of service attacks against designated targets.  And all of this could happen without most people even knowing their system is doing anything.

Botnets are the end result of many malware exploits, as viruses, worms, Trojans, and drive-by or click-through attacks may deliver and propagate the bot payload. They are also a crystal-clear example of how the objective of attacks has changed from hit-and-run, high-profile grabs for fame to a focus on stealth and on establishing and retaining control of assets. Botnets are an ideal tool for the nefarious: they can command huge numbers of widely distributed systems at trivial cost.  While it is hard to estimate how many systems are part of a botnet, the potential is staggering.  For example, the much-publicized Conficker worm is estimated* to have placed more than 4 million unique IP addresses under the control of “bot-masters”.  And this huge resource base allows the bot-masters to rent control of these resources to spammers or other agents looking for ways to generate attacks or other nuisances with low risk of being detected.  In essence, they are allowing criminals and spammers to outsource the generation of their malicious activities. It is a frightening business model indeed.

It is also a difficult challenge for IT. Thanks to botnets, it is possible for an IT manager or CIO to get a call out of the blue asking why their systems are attacking some other company's or government entity's systems.  Or to discover a botnet of hundreds of computers within their own company.  These types of events can happen to the best IT departments (even Intel or the US Government). Clearly, IT needs tools to help prevent such scenarios, and the antivirus and intrusion detection/prevention industry is working hard to keep up with the rapid growth in the delivery vehicles for bot code.  The other weapon for IT managers is traffic analysis: looking for strange patterns of activity (such as bursts of e-mail traffic from selected systems or floods of network traffic directed at specific targets) that fall outside of business norms, to determine whether another business is being conducted with their assets.  While being part of a networked world has wonderful, powerful benefits, it is not without enhanced risk. A botnet is not a network you ever want to be a member of.
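As a toy illustration of the traffic-analysis idea, the sketch below flags hosts whose outbound e-mail connection counts dwarf the fleet's norm. The host names, counts, and 10x-median threshold are all hypothetical; real monitoring tools use far richer baselines than this:

```python
from statistics import median

def flag_mail_outliers(smtp_counts: dict, multiplier: float = 10.0) -> list:
    """Flag hosts whose hourly outbound SMTP connection count dwarfs the fleet median."""
    baseline = max(median(smtp_counts.values()), 1)
    return [host for host, count in smtp_counts.items()
            if count > multiplier * baseline]

# Hypothetical hourly outbound SMTP connection counts per host
observed = {"host-a": 12, "host-b": 9, "host-c": 11, "host-d": 15, "host-e": 4800}
print(flag_mail_outliers(observed))  # ['host-e'] -- a likely spam bot
```

The median is used rather than the mean because a single compromised host can drag the mean (and the standard deviation) so far that the outlier hides its own anomaly.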

Intel technologies like Trusted Execution Technology (TXT) and instruction set optimizations such as STTNI can be part of these solutions.  Intel® TXT can be used in solutions that help protect systems from the software attacks that deliver the malware payloads used to compromise systems.  In fact, Intel TXT (to be available with Westmere server systems) provides an entirely new protection capability for most systems, evaluating the launch environment and enforcing “known good” code execution. This is important because most malware tools execute only once the system is booted, so Intel TXT provides a valuable complementary protection. And to help with the growing burden of run-time malware and attack analysis, new (with Nehalem) instructions that accelerate string manipulation can boost content inspection software's ability to detect anomalies.  Continued research and development will ensure Intel keeps developing and deploying building blocks to help IT address today's challenges and tomorrow's.

We can do that most effectively only if we're trying to solve the right problems.  Are your systems under attack? (Yes, they are.) What types of solutions are most effective for you?  Where is the greatest exposure? Is the pain in stopping attacks or in cleaning up after them? This is certainly worth thinking about, before some government agency comes calling asking why your systems are sending them so much spam!



Why upgrade your hardware when migrating to SAP ERP 6.0?  Because it makes simple, practical business sense, that's all.  SAP has identified several key reasons why customers are concerned about migration, among them the following:

  •    Cost, cost, cost: hardware infrastructure cost is highlighted as one of the key barriers to migration
  •    Business justification: is there a compelling business reason to upgrade the hardware?
  •    Additional risk of business disruption: migrating an ERP environment is complex enough; how much more risk is added by upgrading your hardware at the same time?


From a cost perspective, the perception that hardware is a barrier to migration can be easily overcome.  Based on research, hardware is only about 7% of the overall migration cost.  That means 93% of the cost is in licensing, consulting, and so on.  Hardware costs are only the “tip of the iceberg”; the real dollar investment lies elsewhere in the equation.


Is there a compelling business reason to upgrade your hardware?  Frankly, it does not make sense not to.  One, as shown above, the hardware investment is minimal compared to software licensing, consulting, services, and the rest.  Two, the hardware requirements of ERP 6.0 are significantly higher than those of previous versions: ERP 6.0 requires up to 2.5x more CPU performance, 2.5x more memory, and 1.5x more I/O.  You will need the increased performance and scalability that Intel provides in its microprocessors.


While ERP performance requirements have increased 2.5x, Intel performance with SAP has increased 10x.  And by the way, energy efficiency does matter: in your new ERP environment you will be able to consolidate servers and save on power and cooling costs.  TCO will be significantly reduced, and from a hardware-investment standpoint you are likely to recover the cost of the servers in a very reasonable timeframe.
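As a back-of-the-envelope illustration of that payback argument, the sketch below compares the energy bill of an old server pool against a consolidated one. Every number here (server counts, wattage, PUE, electricity price, hardware cost) is a made-up assumption for illustration, not SAP or Intel data:

```python
# Hypothetical consolidation payback: replace 10 older servers with 2 new ones.
OLD_SERVERS, NEW_SERVERS = 10, 2
WATTS_PER_SERVER = 450        # illustrative average draw per server
PUE = 2.0                     # assumed power/cooling overhead multiplier
USD_PER_KWH = 0.10            # illustrative electricity price
HOURS_PER_YEAR = 24 * 365

def annual_energy_cost(servers: int) -> float:
    """Yearly power + cooling cost for a pool of identical servers."""
    return servers * WATTS_PER_SERVER / 1000 * PUE * HOURS_PER_YEAR * USD_PER_KWH

savings = annual_energy_cost(OLD_SERVERS) - annual_energy_cost(NEW_SERVERS)
new_hw_cost = NEW_SERVERS * 6000   # illustrative server price

print(f"annual power+cooling savings: ${savings:,.0f}")
print(f"hardware payback: {new_hw_cost / savings:.1f} years")
```

Under these assumptions the energy savings alone repay the new hardware in about two years, before counting any software-licensing or space savings.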


From my discussions with the IT community, their major concern and number-one focus area is preventing business disruption and downtime, which costs companies real and significant money.  The fact is that an ERP migration is complex enough when managing the strategic, functional, and technical portions; adding a server infrastructure change increases fundamental risk.  But the key here is that it is done often, and done successfully.  Intel IT has published several whitepapers on the subject and communicated “Best Known Methods” to minimize that risk.  A quick summary is inserted here:



•   Convert Intel’s Worldwide Warehouse Management Software

•   Upgrade from SAP* ERP version 4.7 to 6.0, change the DBMS, and perform a Unicode* conversion as well as a hardware upgrade

•   Minimize downtime


Benefit to Intel IT:

•  SAP ERP 6.0 improves Intel supportability

•  Increases ease of integration to SAP NetWeaver* 7.1 Suite

•  Provides access to Enhancement Packs and Enterprise Services

•  Intel® Itanium®-based servers provide access to 128 GB of memory for database and SAP operations and significantly increased performance from true 64-bit processing


Key Results:

•  Reduced downtime of upgrade by 50% by using Intel Architecture


In summary, upgrading your server infrastructure while migrating your ERP environment is a complex task, but from a business perspective it should be fairly easy to see the true benefits of combining the ERP migration and the hardware upgrade.

Every day, Intel® technology and platforms help companies solve business problems and challenges. Here are a few of the growing number of stories and reasons for choosing Intel processors and technology.


Winning: Humana – Healthcare product and services company

Humana continues to refresh its infrastructure with more powerful, energy-efficient technologies. For Humana, technology is vital for providing information and a full array of health benefit services to members. To replace an outdated facility, the company worked with Intel to design a state-of-the-art data center with a compact, energy-efficient infrastructure that could deliver flexibility and scalability.

Read about it here


The results:

  • The Intel processor–based virtualized environment helps IT deploy new services quickly and ensure high availability.

  • Humana added 25 percent more servers in 56 percent of the previous space while decreasing data center power consumption by 16 percent.


Winning: Emerson Electronics

Emerson reshapes its IT infrastructure for future growth, consolidating approximately 135 data centers down to four using Intel® technology-based servers.

Read about it here


The results:

  • 3,600 physical servers are eliminated by virtualizing on Intel processor–based blade servers, for 18:1 consolidation worldwide

  • Power-saving processors help make Emerson’s new global production data center in St. Louis 31 percent more energy efficient than traditional data centers

Winning: Türkiye Finans Katılım Bank

Leading Turkish financial institution drives better growth and services with Intel® Technology. Türkiye Finans Katılım Bank makes use of the online Intel Xeon processor-based Server Refresh Savings Estimator.

Read about it here.


The results:


  • The Intel® Xeon® processor-based Server Refresh Savings Estimator¹ sets expectations clearly, predicting an 80 per cent reduction in power/cooling requirements; a 30 per cent increase in system performance has already been realized. With only 20 per cent of capacity currently utilised, the bank has significant headroom for business expansion.


Winning: Oracle IT

Oracle uses Intel® Xeon® processor 5500 series-based systems with Intel® Intelligent Power Node Manager to increase rack density and propel business growth. Oracle had been refreshing its existing dual-socket, quad-core servers on a three- to five-year schedule to increase processing capability and energy efficiency, but had no significant power management in use in its data centers.

Read about it here


The results:


More processing capability can fit within the data center power envelope because Oracle can actively manage power consumption for individual servers and applications.


  • Energy savings of 35 percent are projected with Intel Intelligent Power Node Manager, for reduced operating expenses

  • 50 percent more servers per rack saves data center space and enables more growth while keeping costs low


Winning: DataPipe®

DataPipe® retains a competitive edge by designing a new facility and refreshing existing data centers with cutting-edge technology that can deliver outstanding processing performance for a broad range of customer applications. Low-voltage Intel® Xeon® processors help DataPipe create a dense, energy-efficient infrastructure for managed IT services.

Read about it here


The results:


New Intel Xeon Processors Provide a Foundation for Cloud Computing. With the Intel Xeon processor 5500 series, DataPipe is creating a robust virtualized server environment, Stratosphere™, for hosting customer applications.

As I'm new to The Server Room, I offer this brief introduction: I am a marketing manager in Intel's Software and Services Group, looking after Intel's collaborative marketing efforts with virtualization solution providers.


A couple of weeks ago, Ken Lloyd blogged about the incredible changes in compute capability and performance brought by the Nehalem microarchitecture, and gave credit to the advances in software, too.  I'd like to take the conversation a step further: did you know that the launch of VMware™ vSphere 4.0 in April 2009 represented a milestone of collaborative development?  The combination of VMware vSphere and Intel Xeon processor 5500 series-based systems delivers astonishing performance in part because it is the result of a full cycle of collaboration.



Intel has a well-established rhythm of technology innovation, and a lot of really smart architects who know a thing or two about CPU design, but we get innovative ideas from the outside, too.  Over the years of the VMware alliance, Intel has received (and acted on) many requests for small changes in CPU circuitry: changes that would make virtualizing the CPU easier or more efficient, or add capability.  A whole raft of hardware optimizations for virtualization was included in the Nehalem architecture.  As Intel started to deliver early silicon for Xeon 5500 based platforms, Intel software engineers worked closely with VMware engineers, optimizing vSphere code to take advantage of the new hardware features to improve performance, increase efficiency, and add new functionality.  The results?  Check out this video from the launch of VMware™ vSphere 4.0 to see for yourself what “better together” really means.  And the cycle continues: what can you imagine in the next round of collaborative innovation?





The digital workbench is like the workbench at home where you keep the pliers, nails, and hammers you use to build or fix things: the workbench holds all the best, most useful tools to complete a project and makes them available at your fingertips.

The digital workbench replaces analog tools with digital tools and software suites from ISVs (e.g. Altair, ANSYS, Autodesk, Dassault CATIA, Dassault SIMULIA, ESI, MSC, PTC, Siemens PLM and others).  These ISVs are all laser-focused on enabling designers to move analysis further up the design chain.  Couple this with the recent performance gains available in workstations based on the Intel® Xeon® processor 5500 series from suppliers like Boxx, Dell, HP, and Lenovo, and you have the opportunity to view your workstation as a digital workbench.  The result is a new environment that enables users to rapidly test and refine their ideas, potentially at the speed of thought.

The digital workbench, powered by two intelligent Intel® Xeon® 5500 processors based on the Nehalem microarchitecture, can help you transform complex and visually intensive data into actionable information at near-supercomputer speeds.

“Imagination is everything. It is the preview of life's coming attractions.” Albert Einstein

Today’s workstation can provide you with a magnificent digital canvas to create tomorrow today. With workstations powered by two Intel® Xeon® 5500 series processors, engineers have the opportunity to create, shape, test and modify products before they become real. Engineers can now design, visualize and simulate products from the conceptual design phase through the entire manufacturing process. This is done virtually before any investments are made in a prototype.

“Experiment fearlessly.” “Innovation is bloody random.” Tom Peters

Peters, a world renowned author and management consultant, recognized that innovation is more art than science. Consider this example: Taking innovation to an entirely new level, Boeing, in the late 1990s, employed a process known as algorithmic design to see what designs might be viable to meet a specified hypersonic aircraft design criteria. The algorithmic design process enabled computers to create and test new ideas against the specified design criteria without human intervention. As a result, more models were evaluated in less time, and a vehicle that was counterintuitive to what many engineers may have thought possible was evaluated. Innovation just accelerated.
Intel technology has seen dramatic changes since Boeing first tested the idea of algorithmic design in the last decade. Workstation performance has gone up dramatically: single-processor workstations have yielded to workstations with two processors, eight cores, and 16 computational threads. Science or simulation that was never tractable on a workstation before is now standard, and it is getting faster.

“I confess that in 1901 I said to my brother Orville that man would not fly for fifty years.” Wilbur Wright

You think all you need is an entry-level workstation with a single Intel® Xeon® processor. After all, you only do CAD, right?
However, as you begin to adopt modern workflows and realize the dramatic impact that simulation-based engineering or digital prototyping can have on your product development costs and schedules, you discover that the cost of the second processor and the additional memory needed to support digital prototyping is far less than the cost of multiple physical prototypes and the time it takes to produce them. Instead of investigating hundreds of digital prototypes, you only have time to look at a single physical prototype and ask: What if I …?

Those “what ifs” could have been played out on a dual-processor Intel Xeon processor 5500 series-based digital workbench faster, and your time and cost of physical prototypes could have been significantly reduced.

The digital workbench, powered by two Intel® Xeon® 5500 series processors, can have an enormous impact on your organization's ability to design, visualize and simulate products, from the conceptual design phase through the entire manufacturing process, and it is all done virtually before anything is invested in a prototype. These digital workbenches exceed the computational power of the Cray C90 series, which in the 1990s was revered as among the fastest supercomputers available.

Without question we all recognize that simulation and modeling have become indispensable tools in design. But visualization remains the principal conduit to transforming data into knowledge and actionable information. The digital workbench can provide you with both the compute capacity and the visualization capability you need to innovate faster.

If all you are doing is CAD on your workstation, then an entry workstation may be your best solution. However, as others around you adopt modern workflows that incorporate simulation-based engineering and digital prototyping, you may want to step up to a more comprehensive digital workbench solution that provides an entire suite of tools to help you play out more “what ifs” locally, faster than ever before.

One more point on this: If you are stuck on the entry workstation, then you may want to consider a mobile workstation. While the immediate cost will be higher than an entry-level workstation, the real cost may be lower. With mobile workstations you can design with your customers and not just for your customers. You may be able to reduce the number of design reviews by innovating with your customer right there as spontaneous ideas happen. The real cost of a tethered entry-level workstation may indeed be much higher than you think.
Join the revolution and innovate faster with the digital workbench powered by two Intel® Xeon® 5500 series processors.

One of the first questions in my mind when I was first exposed to Intel® Intelligent Power Node Manager was: what is the performance impact of applying Node Manager technology?  I will share some thoughts.  The underlying dynamics are complex and not always observable, so it is difficult to provide a definitive answer.  Robert A. Heinlein popularized the term TANSTAAFL (“There ain't no such thing as a free lunch”) in his 1966 novel “The Moon Is a Harsh Mistress”.  So, does TANSTAAFL apply here? Node Manager brings benefits: the ability for an application to designate a target power consumption, a capability otherwise known as power capping. On the cost side, Node Manager takes some work to deploy and has a performance impact that varies from very little to moderate.  On the other hand, Node Manager can be turned off, in which case there is no overhead.


Node Manager is useful even when it is not actively power capping but is instead used as a guardrail, ensuring that power consumption will not exceed a threshold.  Predictable power consumption has value because it gives data center operators a ceiling on power consumption.  Having this predictable ceiling helps optimize the data center infrastructure and reduce stranded power: a power allocation that must be provisioned even if it is only occasionally used.


The performance impact can vary from zero when Node Manager is used as a guardrail to a percentage equal to the number of CPU cycles lost due to power capping when Node Manager is applied at 100% utilization.  When applied during normal operating conditions, the loss of performance is smaller than the number of cycles lost to power capping implies because the OS usually compensates for the slowdown.  If the end user is willing to re-prioritize application processes, under some circumstances it is possible to bring performance back to the uncapped level or even beyond.


Power capping is attained through voltage and frequency scaling.  The power consumed by a CPU is proportional to its frequency and to the square of the voltage applied to it.  Scaling is done in discrete steps (“P-states”, as defined by the ACPI standard).
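The relationship can be sketched numerically. The frequency and voltage pairs below are illustrative P-state values, not actual Xeon specifications; the point is how quickly power falls when both frequency and voltage are stepped down (P ∝ f · V²):

```python
# Relative CPU power across hypothetical P-states, using P proportional to f * V^2.
# Frequency (GHz) and voltage (V) pairs are illustrative, not real Xeon values.
p_states = {
    "P0": (2.93, 1.20),
    "P1": (2.67, 1.10),
    "P2": (2.40, 1.05),
    "P3": (2.00, 0.95),
}

base = p_states["P0"][0] * p_states["P0"][1] ** 2
for name, (freq, volts) in p_states.items():
    relative = freq * volts ** 2 / base
    print(f"{name}: {relative:5.1%} of P0 power at {freq / p_states['P0'][0]:.0%} frequency")
```

With these assumed values, the lowest P-state cuts frequency by about a third but power by more than half, which is exactly why capping through P-states is so effective.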


The highest-performing P-states are also the most power-hungry.  Starting from a fully loaded CPU at the highest P-state, demand-based switching (DBS) assigns lower-energy P-states as workload is reduced, using Intel® SpeedStep® technology.  An additional dip takes place as idle is reached, when unused logical units in the CPU are switched off automatically.


Node Manager allows manipulating the P-states under program control instead of autonomously as under SpeedStep.  Since the CPU runs slower, this potentially removes some of the cycles that could otherwise be used by applications, but the reality is more nuanced.


At high workloads, most CPU cycles are dedicated to running the application.  Hence, if power capping is applied, a reduction in CPU speed will yield an almost one-to-one reduction in application performance.


At the other end of the curve, if the CPU is idling, power consumption is already at the floor level, and applying Node Manager will not yield any additional reduction in power consumption.


The more interesting cases occur in the mid-range band of utilization, when the utilization rate is between 10 and 60 percent, depending on the application (40 to 80 percent in the BMW case study below).  Taking utilization beyond the upper limit is not desirable because the system would have difficulty absorbing load spikes, and response times may deteriorate to unacceptable levels.


We have run a number of applications in the lab and observed their performance behavior under Node Manager.  Surprisingly, the performance loss is less than frequency scaling alone would indicate.  One possible explanation is that when utilization is in the mid-range, there are idle cycles available.  The OS compensates to some extent for the slower cycles by increasing the time slices given to applications, using up otherwise-idle cycles, to the point that the apparent performance of the application is little changed.  The application may need to be throttled up to regain its pre-capping throughput.
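That compensation effect can be captured in a toy model: if capping scales frequency by a factor s, the OS can keep throughput flat as long as the scaled utilization u/s stays under 100%; past that point, throughput falls in proportion to the shortfall. This is a simplifying sketch that ignores memory-bound behavior and scheduler details:

```python
def capped_behavior(utilization: float, freq_scale: float):
    """Model utilization and relative throughput after capping CPU frequency.

    utilization: pre-capping CPU utilization (0..1)
    freq_scale:  capped frequency as a fraction of nominal (0..1)
    Returns (new_utilization, relative_throughput).
    """
    new_util = utilization / freq_scale
    if new_util <= 1.0:
        return new_util, 1.0                # throughput intact, headroom shrinks
    return 1.0, freq_scale / utilization    # saturated: throughput is lost

# A 40%-utilized system capped to 60% of nominal frequency keeps full throughput:
print(capped_behavior(0.40, 0.60))
# A 90%-utilized system under the same cap saturates and loses throughput:
print(capped_behavior(0.90, 0.60))
```

The model makes the trade-off explicit: in the mid-range case, the cost of capping shows up not as lost throughput but as lost headroom, which matches the lab observations described above.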


One way to verify this behavior is to observe that CPU utilization does indeed go up in a power-capped regime.  BMW conducted a proof of concept with Intel precisely to explore the extent to which an application could be re-prioritized under power capping to restore the original, uncapped throughput.  TANSTAAFL still applies here: the application still yields the same performance under power capping, but since there are fewer cycles available due to frequency scaling, there is less headroom should the workload pick up suddenly.  In that case the remedy is simply to remove the cap; the management software needs to be aware of these circumstances and initiate the appropriate action.


The experiments in this proof of concept involved an application mix used at a BMW site.  In the first series of experiments we plotted power consumption against CPU utilization by throttling the workload up and down, shown in red.





In the second series, shown in green, we applied an initial power cap at each point on the original curve, which yields a performance reduction.  The workload was then throttled up until the uncapped performance was restored.  This process was repeated with increasingly aggressive power caps until the original performance could no longer be reached.  The new system power consumption, achieved without impacting system performance, is plotted in green.  The difference between the red and green curves represents the range of capping applicable while maintaining the original throughput level.  Running at the green level yields the same system performance as uncapped operation; however, since idle cycles have been removed, there is no margin left to pick up extra workload.  Should extra load arrive, performance indicators will deteriorate very quickly.


Under the circumstances described above, the system was able to deliver the same throughput at a lower power level.  There was no compromise in performance.  The tradeoff is diminished headroom in case the workload picks up.  The system operator or management software has the option to remove the cap immediately should this headroom be needed.

Congratulations to Ron, the winner of the Intel Xeon Workstation Sweepstakes.  He has been a member of The Server Room for over a year and completed the quiz on his first attempt.


Good job!




"I was excited to hear that I won the Intel Xeon workstation sweepstakes. With its incredible performance, the system offers me the flexibility to use it in so many ways that I'm not sure how to best utilize it at the moment. It's a welcome problem to have and I look forward exploring the possibilities. Thanks to Intel and the Server Room team for providing a great resource to everyone!"




Thank you all for entering, and look for more sweepstakes offerings in the near future.


- Your 'The Server Room' Admins


I want my HPC….under my desk

Posted by jleon01 Oct 19, 2009

Are you “stuck at the desktop”?   A May 2008 study from the Council on Competitiveness and IDC identifies the barriers large and small firms face in moving from desktop computers to High Performance Computing (HPC) servers.

Among the study findings1, most firms:


  • Reveal that they have important problems they cannot solve on their desktop systems


  • Face three major barriers to adoption:  lack of application software, lack of sufficient talent, and cost constraints


As a scientist, an engineer, or an analyst: have you outgrown your desktop?  What kind of new innovation or capabilities would the use of an HPC cluster give you?  Imagine the capability to analyze data and gain insight faster, the ability to virtually prototype your product ideas more efficiently and cost effectively, or the capacity to analyze, model, or simulate larger problems.

A few OEM products aimed at the personal or “desk-side” segment are making it easier for end users to adopt HPC and helping to overcome some of the barriers to adoption:  the Cray* CX1*, SGI* Octane* III, and HP’s CP Workgroup System.  These products address the needs of the entry-level HPC market and workstation users who have outgrown their desktops.  Both the Cray CX1 and SGI Octane III are Intel® Cluster Ready (ICR) certified, which means Intel has worked with the hardware, system, and application vendors to ensure your configuration has been pre-tested for interoperability, so you can deploy with confidence.  ICR helps reduce TCO by making sure the components keep working together over the cluster’s lifetime, increasing availability and saving time for IT departments.

So if your IT department cannot buy you a full-blown supercomputer, ask them for a personal supercomputer.


1 Source: Reveal.  Council on Competitiveness and USC-ISI Broad Study of Desktop Technical Computing End Users and HPC, May 2008

*Other names and brands may be claimed as the property of others

There’s a video going around from one of Intel’s top external customers.  Before you see this (video linked below) I wanted to position this correctly.  I caught up with Mr. X at an undisclosed coffee shop and got his approval to share publicly the messages that we would have rather had him go out with. Those messages are as follows:


  • Mr. X’s 4-year-old servers were a burden on his organization; he spent all of his budget on maintenance, with nothing left for innovation.


  • He looked at his old infrastructure and determined that replacing it with more powerful, energy-efficient servers from Intel was a strategic investment.


  • The new Intel Xeon 5500 based servers gave him the opportunity to innovate again.  He claimed that the new Intel Xeon Processor 5500 (Nehalem-EP) is the best enabler of IT business value that he's seen in years.


  • They boosted energy efficiency, saved him big $ and extended his facility lifespan – now he doesn’t have to go build a new data center.


  • He replaced his old servers at a 9:1 ratio (retiring 9 old servers for every 1 new one), which enabled him to cut operational expenditures by 90% … and that savings alone is paying for the investment in the new servers in just 8 months.


  • By strategically investing in IT when his competitors hunkered down and cut spending – he is now positioned to grow faster and gain share as the economic upturn arrives.
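The payback claim in the list above can be sanity-checked with back-of-the-envelope math. The 9:1 ratio, 90% opex cut, and 8-month payback come from the text; the per-server run cost is an invented illustration:

```python
# Rough payback arithmetic for a 9:1 server consolidation.
# old_opex_per_server_yr is an assumed figure, not Mr. X's actual cost.
old_servers, new_servers = 9, 1
old_opex_per_server_yr = 4000.0      # assumed annual run cost per old server
opex_cut = 0.90                      # the 90% reduction cited above

annual_savings = old_servers * old_opex_per_server_yr * opex_cut
# If those savings repay the new server in 8 months, its cost was:
implied_server_cost = annual_savings * 8 / 12

print(f"annual savings ${annual_savings:,.0f}, "
      f"implied new-server cost ${implied_server_cost:,.0f}")
```

Whatever the real dollar figures, the structure of the argument is the same: savings scale with the number of servers retired, so aggressive consolidation ratios are what make short payback periods possible.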



Ok, now that I’ve had a chance to convey his real messages, you can check out this video.





I’d like to introduce myself as a product line manager at Intel who has spent almost a decade ensuring we are creating the best servers to solve small business challenges. Part of my role is to influence future generation products and I’d like to learn more about your challenges, needs and desires so I can ensure we address them in our next generation products.



Here is a story I have heard in the past: “Ah geez, What Now? A customer just called to tell me they tried to enter an online order for my product and my web site is nowhere to be found.  I am lucky they called, but so much for spending a Saturday at my kid’s baseball tournament! Now I need to drive an hour to my downtown office to restart and possibly fumble with my server.  You would think that the desktop system that I am using as a server would just work so I can spend my free time with my family and my work time growing my business.” 



I can’t count the number of times I have heard a similar story from customers and colleagues who are trying to grow a small business, manage their own computers, and have a personal life.  The answer to their problem is simple: buy a real server based on Intel® Xeon® processors that is designed to keep your business running 24/7.   Our latest Xeon processors and chipsets are not only validated to run 24/7, but include features such as support for error-correcting code memory and RAID for server operating systems, which ensure dependability and differentiate a real server from a desktop system used as a server.  However, a small business should not have to care about all this technical jargon.   I believe they only care that their server runs 24/7 without failure, enabling them to focus on business growth and life.



What are your small business challenges?  I’m all ears.







These are dog years for servers.   Pretty much every year Intel introduces a new Xeon processor.  Those who have heard the story recognize this as the Tick-Tock model: on Tick years the manufacturing process is updated; on Tock years the chip architecture is updated.  Every year customers get a boost in performance, and often a cut in power.  Typically this boost is in the 50% neighborhood, enough to make the upgrade worthwhile, and still achievable by engineering teams on a two-year cycle.  Except, we are in dog years.



The Nehalem – Xeon 5500 – processor broke all prior boundaries on single-generation performance gain, delivering two to three times the compute capacity of the Xeon 5400 (Harpertown) generation.  This is a big change, probably a once-in-a-lifetime change – unless that quantum thing happens in my lifetime.  Roughly a 10X performance boost in less than 5 years.


During this same five years we have seen virtualization technology go from a lab project – something for test and dev – to a mainstream data center practice.  In 2005 it would have been heresy to suggest virtualizing the corporate ERP.  At that point virtualization overhead on the server could be as high as 25%, and the entire server was needed to do “real work”.  Fast forward to today.  Virtualization technology in both the hypervisor and the processor has reduced overhead to only a few percent, AND servers are 10X faster.  Not only can you virtualize the ERP, you are irresponsibly wasting resources if you do not.  Unless your ERP demands have grown 10X in 5 years, your ERP alone won’t even make a new Xeon 5500 system sweat.
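The usable-capacity math behind that argument is worth spelling out. The 25% overhead in 2005 and the roughly 10x hardware speedup come from the text; the "few percent" modern overhead is taken as 3% purely for illustration:

```python
# Usable work per server = raw capacity x (1 - virtualization overhead).
capacity_2005 = 1.0 * (1 - 0.25)    # one 2005 server, 25% overhead
capacity_today = 10.0 * (1 - 0.03)  # ~10x faster hardware, ~3% overhead (assumed)

print(f"usable work per server: {capacity_today / capacity_2005:.1f}x more")
```

The compounding is the point: the hardware got 10x faster while the overhead tax shrank, so the usable capacity per server grew by more than the raw speedup alone.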


If this advancement wasn’t enough, last month’s announcements from Intel about the coming Xeon 7500 (4+ socket) processor were amazing.  All the benefits of the Xeon 5500, but on steroids – the new biggest leap ever.  With up to eight cores and four memory channels per socket, this is a monster.  Your ERP system will barely be a blip in perfmon.  It isn’t unreasonable that the entire data center of an SMB could be virtualized onto one of these beasts.  And how big is a Xeon 7500 server?  My guess is about the size of a breadbox.

Three short years ago, this would have taken 32 Xeon 5100 (Woodcrest) servers, accounting for 64U of rack space... this picture is from the upcoming Xeon MP (Beckton) platform with Nehalem-EX processors that many of you saw at IDF 2009.  This server takes only 3U of rack space... less than 5% of the space of what it could replace.


Sometimes you see a screenshot and it just makes your jaw drop...



Just to give a comparison of CPU density... here's a diagram comparing 3-year-old technology to the upcoming Nehalem-EX.  If each of those 32 old servers burns 400W of power, that's 12.8 kilowatts – compared to one server burning less than 1kW.
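The power and space arithmetic behind that comparison, spelled out so the ratios are easy to check (the 32 servers, 400W each, 64U vs. 3U figures are from the text; the new box is taken as a flat 1 kW):

```python
# Consolidation ratios: 32 old 2U servers vs. one 3U Nehalem-EX box.
old_servers = 32
old_watts_each = 400
old_total_kw = old_servers * old_watts_each / 1000   # 12.8 kW
new_total_kw = 1.0                                   # just under 1 kW

old_space_u, new_space_u = 64, 3                     # rack units

print(f"power: {old_total_kw} kW -> {new_total_kw} kW "
      f"({new_total_kw / old_total_kw:.1%} of original)")
print(f"space: {new_space_u / old_space_u:.1%} of original")
```

Both ratios land in the same neighborhood: under 8% of the power and under 5% of the rack space of what it could replace.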



What's even more amazing is that some design wins are based on a 1U server with the same CPU footprint - that's AWESOME!

What are your thoughts on these upcoming multi-core technology improvements?


Just wrapped up Oracle Open World…sitting at SFO, waiting for a flight back home.

The event from a Nehalem-EX perspective was a success. Hit important points and accomplished what we had to deliver.



Hit #1:  Michael Dell, in his keynote, delivered the Nehalem-EX message beautifully: 2.5x performance improvements coming from 9x memory bandwidth, compared to currently shipping technology.  Thank you, Michael.


Hit #2:  Dell placed a Nehalem-EX demo at its exhibit at Moscone West.  I missed seeing it in person, but friends at Dell reported that the demo attracted a lot of attention from the audience.  Thank you again, Dell.


Hit #3:  My Nehalem-EX demo at the Intel booth was also a success.  The pre-production system ran throughout the event with 64 logical processors fully active and 1TB of Samsung DDR3 memory, running SPECjbb and stressing all the CPUs, cores, and threads.  Occasionally, I injected a double-bit error to show off the MCA-Recovery function.  Windows 2008 R2 reported nicely that the system had encountered a critical error, but the system kept running at full speed.  Without MCA-Recovery, I would have had a blue screen each time I ran that error-injection script and would have had to wait a few minutes for the server to come back online.


Also, I really liked the demographics of the audience this time.  Compared to the other events I attended this year, I had more conversations with the folks who actually purchase equipment, those who test new equipment at IT shops, and those from Oracle starting to realize that hardware choice does matter when selling Oracle software.  Many people specifically asked when Intel would start shipping Nehalem-EX and which specific OEM models would use it.  I hope my responses to those folks were legitimate.  ;-)  I also hope Oracle sales folks now have true confidence that the Oracle software stack runs best on Intel, specifically on Nehalem-EX.


Oracle Open World is said to be the largest IT event, and I believe it.  You don’t get to have lunch in the middle of Mission St very often.  You don’t get to see four-digit hotel bills for just a couple nights’ stay very often either.  Despite the fall storm hitting the peninsula on Tuesday, dumping loads of water and knocking trees down with its gusts, the Intel booth had a continuous, heavy flow of traffic.  I admire the Intel team that put together our presence, and I admire the whole industry for supporting the event.  I also personally learned a lot from the event, meeting people and exchanging knowledge.  Three days of booth duty is tough, but worth it.


Oh, and to wrap the whole trip up…

Hit #4:  cleared the wait list and am getting home on an earlier flight…

AND…I wish today was Friday…



Just because you’re a small or medium-sized business doesn’t mean you don’t deserve benchmark data that’s relevant to your environment. In fact, the right kinds of comparisons are critical for your decision-making. Why? Because those performance differences can mean the difference between good and great service to your customers, cost savings that boost your bottom line, or better use of your scarce resources.


That’s why Intel brings you independent and reliable benchmarks that mean something for companies like yours. For example, for our latest entry-level servers, the new Intel® Xeon® processor 3400 series, Principled Technologies* Inc. conducted a benchmark based on applications that most small and medium businesses use to run their data, web, and email exchange servers. Now you have meaningful results that you can actually use to make an informed decision about transitioning from a desktop-based server to a real server or even upgrading from an older Intel Xeon processor-based server to this new generation.


Curious what Principled Technologies found?  Well, the Intel Xeon processor X3450-based server delivered 119% more performance than a desktop-based server. So, that means you can do things more than twice as fast. Plus, the energy efficiency was significant too – with an 87% increase in performance-per-watt compared to the desktop-based server and 136% more than a previous generation Intel Xeon processor.
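Converting those percentages into plain multipliers makes the "more than twice as fast" claim concrete: "N% more" means a factor of (1 + N/100). The percentages are the ones quoted above:

```python
# Benchmark percentage improvements expressed as multipliers.
perf_vs_desktop = 1 + 119 / 100      # 2.19x the desktop-based server
ppw_vs_desktop = 1 + 87 / 100        # 1.87x performance per watt vs. desktop
ppw_vs_prior_xeon = 1 + 136 / 100    # 2.36x perf/watt vs. prior-gen Xeon

print(perf_vs_desktop, ppw_vs_desktop, ppw_vs_prior_xeon)
```

So 119% more performance really is a 2.19x speedup, comfortably "more than twice as fast."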



So, whether you’re looking to transition to your first real server or it’s time to refresh your hardware, you can see what the business benefits will be – more productivity and increased energy efficiency (which can equate to utility savings and simply being a better environmental citizen).  And one more thing: the benchmark also showed that the Intel Xeon processor X3450 could do all of that using only 60 percent of its capacity. That means plenty of room for future growth. Now that’s big!


Check out the benchmark results for yourself here in the PDF document.  And, if you want to see more, you can visit


Talk to your Intel IT solution provider reseller about these results and what they can mean for your business.

With Intel Xeon 5500 series (Nehalem) processors, the X5500 chipset, and instrumented power supplies, you can start with the most basic use case for Intel Node Manager: monitoring the power usage of your servers.


As you can see in the Intel Datacenter Manager (DCM) screen below, there are multiple servers configured into logical units:  HF2-EIL is the lab in which these servers are located, Rack 1 and Rack 2 are their physical locations, and each rack contains 2 servers.



When you highlight one server (as above in DCM), you can see its power characteristics over a certain time period.  The period shown gives you the idle power, max power, and thermal measurement.  The 'hump' in the graph is a SQL workload that creates 'work' for the server; the process runs for about 5 1/2 minutes with no power capping.


Here's a graph of the 2nd server in that rack performing a similar workload.  As you can see, the 2nd server's power usage is different from the first's.



The Intel Datacenter Manager SDK console can monitor multiple systems as well.  The next graph shows both of the servers in the rack, accounting for both servers' power usage during the same timeframe.


Finally, here is the last graph, showing the aggregate of all 4 servers in Rack #1 and Rack #2.  It shows the maximum power used during the workload, the minimum (idle) power, and the inlet temperature in the lab – something that couldn't be done before without expensive metering equipment in the datacenter.
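The roll-up those DCM screens display is conceptually simple: per-server power samples summed into rack totals, then into a lab-wide total. Here is a minimal sketch; the server names and wattage readings are invented for illustration, and a real tool would pull them from the management interface rather than a literal dict:

```python
# Per-server power samples (watts), grouped by rack, four samples each.
samples = {
    "Rack 1": {"server-1": [212, 298, 305, 220],
               "server-2": [205, 310, 290, 215]},
    "Rack 2": {"server-3": [198, 285, 301, 210],
               "server-4": [201, 295, 288, 208]},
}

def rack_totals(rack):
    """Sum a rack's servers sample by sample."""
    return [sum(point) for point in zip(*rack.values())]

# Lab total = sum of the rack totals at each sample point.
lab = [sum(point) for point in zip(*(rack_totals(r) for r in samples.values()))]

print("lab peak:", max(lab), "W   lab idle:", min(lab), "W")
```

The peak of the aggregate curve is what matters for provisioning: individual servers rarely peak at the same instant, so the lab peak is usually well below the sum of the per-server peaks.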




My next power-based blog will show how power capping can give you more efficient use of power for your workloads on Xeon 5500 series platforms.

At Intel, we not only pack a lot of performance in a small form factor, we also pack a lot of great demos and theater presentations into our booth at Oracle OpenWorld in San Francisco (South Moscone, booth #1621).  We have 5 demos from 5 of our customers—Cisco, Dell, HP, IBM, and Sun—and 3 other demos showcasing Wind River, Intel’s SOA Expressway product, and last, but certainly not least, Intel’s amazing and upcoming Nehalem-EX processor, which you heard Michael Dell praise in his keynote this morning.


Over the course of the three days of our booth at OOW (Monday through Wednesday this week), we will have over 35 brief presentations that will help you plan your requirements for your next-generation data center.  They are short and sweet, and you can ask all the questions you want.  If you simply attend a presentation and get a few more stamps from our demo stations, you can enter to win one of two netbooks that will be given away at the end of each day.


Outside of our booth, you may find us presenting in various partners’ booths, and we hope to see you in a session we are holding later today (see info below).  We had an amazing session yesterday from resident Intel genius Steve Shaw; the huge room was filled to capacity.  At today's session we will be giving away a netbook.


Here are the logistics for today’s session:


ID#: S309892

Title: Ten Ways to Improve J2EE Application Performance on Multicore Systems

Track: Oracle Develop: Enterprise Java and Oracle WebLogic

Date: 13-OCT-09

Time: 17:30 - 18:30

Venue: Hilton Hotel

Room: Yosemite B


We hope to see you around somewhere at Oracle OpenWorld, but if for some reason we miss you entirely, please visit for more info on Intel’s fantastic products.  Also, please visit Channel Intel on YouTube for some videos from the event.

Nehalem-EX has been in the news quite a bit over the past several months.


First, in May, Intel described how Nehalem-EX will be at the heart of the next generation of intelligent and expandable high-end Intel server platforms, delivering a number of new technical advancements (Intel Nehalem architecture, Intel QuickPath Interconnect, 16 threads, 24MB cache, new RAS features like MCA-Recovery, 16 DIMM slots per socket, 128 threads on 8-socket systems) and boosting enterprise computing performance (the greatest generational performance gain ever seen at Intel).


Next, at IDF in September, Intel described how Nehalem-EX would deliver a bigger generational performance improvement than that delivered by the Intel Xeon 5500 processor (including a 3X gain in database performance); a large shift in Xeon scalability, with over 15 systems of 8 sockets or more anticipated, and expandability for the most data-demanding enterprise applications; and the addition of about 20 RAS capabilities traditionally found in the Intel® Itanium® processor family – along with a demonstration of MCA-Recovery. IBM announced upcoming BladeCenter products that will support 4S Nehalem-EX blades, and Super Micro announced a 1U box specifically targeted at HPC.  Staying on the HPC theme, Mark Seager of Lawrence Livermore National Laboratory was quoted as stating that “Nehalem-EX allows us to invest in science, not the computer science of porting and adapting software to new architectures, but real science.  Nehalem-EX is an innovative SMP-on-a-chip solution that provides us access to a ‘super node’ … The result is an astonishing new level of performance.”


And at Oracle Open World on October 13th, the drumbeat for Nehalem-EX continued.  Michael Dell, in his Oracle Open World keynote today, discussed how Nehalem-EX will provide a true leap in performance, with up to 9x the memory bandwidth and 3x the database performance vs. the prior generation.  He mentioned that Dell’s unique implementation of the memory architecture will allow the most cost-effective scaling, with 4S systems holding up to 1TB of DRAM (64 DIMMs x 16GB memory sticks), enabling customers to run their entire database in system memory.  He also mentioned that standards-based systems are driving new efficiencies with applications like Oracle, where Dell’s data shows Oracle apps run better on x86 than on proprietary architectures – up to 200% better.  Check out this short video from the keynote and watch what Michael Dell had to say.



Keep your eyes on the Server Room for more Nehalem-EX news as it comes between now and launch.  And visit the Intel booth at South Moscone Booth #1621 to learn more.




If you hadn’t heard, Microsoft* and Intel spent a lot of effort optimizing Windows* Server 2008 R2 (and Windows 7) to improve energy efficiency by reducing system power consumption at idle and under load.  For more details, check out the presentation from the Intel Developer Forum a few weeks ago titled Microsoft and Intel: Innovations in Hardware and Software to Help Deliver New Technology Experiences.  This presentation (and other IDF presentations) can be found at (search for SPCS003 using the session ID number).


There is good information on the operating system optimizations that were done to reduce power consumption.  Slide 22 has an excellent comparison of the power consumption of Windows Server 2003 vs. Windows Server 2008 R2 running on the same Xeon® 5500 series processors. It shows that using WinSrv2008 R2 reduced system idle and peak power consumption by ~60W!!  In addition, Hyper-V* 1.1 now uses the power management features of Intel processors to reduce power consumption during periods of low utilization.


This is a great time to show your customers the energy efficiency benefits that come with upgrading to WinSrv2008 R2 at the same time they refresh their server infrastructure with Xeon® 5500 based servers.


*Other names and brands may be claimed as the property of others

If you are attending Oracle OpenWorld 2009 and are interested in learning more about Oracle Database performance on Linux on Intel platforms, I will be talking on this topic in my session on Monday, 12th Oct, as follows:



Presenter:  Steve Shaw

ID#: S312645
Title: Oracle Database Performance on Linux: Tips, Tools, and Tuning for Intel Platforms
Track: Database
Date: 12-OCT-09
Time: 11:30 - 12:30
Venue: Moscone South
Room: Room 303


I will be talking about the latest platform features, how you can get the best out of them on Linux, and also showing worked and customer examples of how these can benefit you in 'real world' Oracle installations.


Software scalability has been a big issue recently.  While modern servers are incredibly fast, many software solutions simply are not able to take advantage of that speed.  There are many reasons for this; some are easy to address and some require changes to the software.  An Intel performance engineer and an Oracle WebLogic performance engineer will jointly give a talk on this topic at Oracle OpenWorld.


Here is the session information:


ID#: S309892

Title: Ten Ways to Improve J2EE Application Performance on Multi-Core Systems

Track: Oracle Develop: Enterprise Java and Oracle WebLogic

Date: 13-OCT-09

Time: 17:30 - 18:30

Venue: Hilton Hotel, Room: Yosemite B


Here is the abstract:


The current economic environment and the new focus on being green demand greater efficiency from every IT shop, big and small alike. In this session, you will learn how to improve Java application scalability by using Oracle WebLogic Server on the latest multi-core systems. It examines various software and hardware features for getting the best performance out of your applications. In particular, it explores the pros and cons of 32-bit versus 64-bit environments and how having multiple Java virtual machine instances can reduce heap pressure and improve cache locality. It also discusses operating system and hardware features such as large pages and solid-state drives and their impacts on J2EE application performance.


As a bonus, we will be giving away a Netbook at the end of the talk.


We also wrote a technical paper on the topics that will be covered in this talk.  You can find this technical paper at .

Sun has recently published a whitepaper that discusses how the Solaris OS will take advantage of the next-generation Intel Xeon processor (codenamed Nehalem-EX) for expandable servers (4 sockets and greater).  Sun, with over 20 years of experience in large socket, core, and threading capabilities, is working to have the Solaris OS ready to take advantage of the features and new capabilities of “Nehalem-EX”.  The three areas of collaboration between Solaris and Nehalem-EX are scalable performance, advanced reliability, and energy efficiency, matching specific features in Solaris to the next-generation Intel Xeon processor.  Read this recently published whitepaper



Posted by EMcConnell Oct 9, 2009

It has been a little while since I shared some thoughts about moving from RISC, due to a 3-month assignment managing the Nehalem-EX product line. One word describes that product – ‘wow’ – and the change it will bring to the IT marketplace as we know it. But I’m not here to talk about that….


David Bowie was certainly introspective when he wrote “Changes,” a song about the need to constantly examine oneself and one's previous decisions, and about frequent reinvention and change. The sentiments reflected in “Changes” can be applied to all aspects of life, both personal and business. Reflecting on previous business decisions and looking for newer and better ways to do things should not be seen as an admission that the previous decision was wrong; rather, it should be rewarded as a willingness to change and do things better based on today's environment.


Previous business decisions to deploy your IT solutions on RISC-based architectures were most likely the right decisions at the time, based on the business need, the solutions available, and the architectures available to run them.  Some of these solutions are now likely due for an upgrade, because business needs have changed, a better version of the application is now available, or they have become too expensive to maintain and support on older server systems. It is time to make a change, and that change is likely to include upgrading to the next generation of a software solution and choosing a new server system that will perform well with it.


With the rapid pace of technology innovation and evolution over the last number of years, the decision is not necessarily as clear cut as it may have been in the past.

What I wanted to share with you is some information about how Intel’s Xeon microprocessor has evolved and can now compete with the POWER architecture offered by IBM.  Some of you may say that this is not possible, but Xeon 5500 is getting some attention, as shown by information posted on IBM's website


Price/performance is a key consideration for database workloads, and $/tpmC is a widely accepted rule of thumb. It's good to see that the IBM System x3950 M2 (based on the Xeon 7400 processor) comes in at $1.99/tpmC, compared to $3.54/tpmC for the IBM Power 570 ;-)
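The gap as arithmetic, using the published $/tpmC figures quoted above:

```python
# Price/performance comparison: dollars per tpmC, lower is better.
x3950m2 = 1.99   # $/tpmC, IBM System x3950 M2 (Xeon 7400)
power570 = 3.54  # $/tpmC, IBM Power 570

print(f"POWER costs {power570 / x3950m2:.2f}x more per tpmC")
```

In other words, for the same transaction throughput the POWER system costs roughly 78% more per tpmC.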


Xeon 5500 has performance-per-socket leadership against a similar-class 2S POWER6 system. This can be seen by comparing results at www.spec.org for benchmarks such as SPECjbb2005, SPECint_rate2006, etc. IBM makes reference to performance-per-core leadership over the Xeon 5500 – a fair statement, but most customers look at overall system-level performance for the required task. I guess my key takeaway is that if you are looking for a solution to run infrastructure-type workloads and get the best bang for your buck, then the Xeon 5500 delivers the best price/performance


There is also some interesting discussion around the scalability of Xeon vs. POWER6. Xeon 5500 is used in 2-socket configurations, not in scalable systems, so it seems a little like comparing apples to oranges! Scalable Xeon platforms are available in the market today from both IBM and Unisys, and there are 15+ scalable platform designs from 8 OEMs coming with Intel’s next-generation scalable Xeon product, Nehalem-EX.  Some good material was shared recently at Intel’s Developer Forum in San Francisco. Look for the Mission Critical Server Deployment class; the material provides a good overview of how Nehalem-EX provides support for high-end computing with a scalable microarchitecture and advanced RAS capabilities, and how Red Hat will support Nehalem-EX scalability. The presentation also shows an example of the innovation of NEC, who are developing mission-critical Linux solutions based on NEC’s scalable architecture using Intel Xeon processors.




So is it time for you to change? Are existing options like the Xeon 5500 or the Xeon 7400 the right choice for you? Nehalem-EX is coming, and I believe it will bring a huge change to the marketplace as we know it today.


What do you think?


“Imagination is everything. It is the preview of life's coming attractions.” Albert Einstein

Today’s workstation can provide you with a magnificent digital canvas to create tomorrow... today!

With workstations powered by two Intel® Xeon® 5500 series processors, engineers have the opportunity to create, shape, test and modify products before they become real. Engineers can now design, visualize and simulate products from the conceptual design phase through the entire manufacturing process. This is done virtually before any investments are made in a prototype.


“Any color—so long as it's black.” Henry Ford

Like the automobile, the workstation has morphed into something much more than what it once was. It now has more capabilities and features than its predecessor and, if you allow it to, it can help you accelerate the pace of your innovation.

Today’s workstation gives engineers a new tool that can be likened to a digital workbench. This tool is powered by two Intel Xeon 5500 series processors with Intel® Turbo Boost Technology and Intel® Hyper-Threading Technology, which take advantage of the processor’s power and thermal headroom to increase the performance of both multi-threaded and single-threaded workloads.


Today's workstation can host a suite of software applications from ISVs like Autodesk, SolidWorks, PTC, Bentley, and others to create and test ideas. The pliers, hammer, and nails found on a workbench in a garage or basement have been replaced with digital tools that promise to accelerate innovation via a process known as digital prototyping. Its enablers include application tools like detailed CAD, CAE, and PIM. Together they represent the new digital workbench – a powerful innovation tool you can use to bring your ideas forward faster than ever before.

“I confess that in 1901 I said to my brother Orville that man would not fly for fifty years.” Wilbur Wright

You think all you need is an entry-level workstation with a single Intel® Xeon® processor. After all, you only do CAD—right?  You may be thinking like Wilbur Wright.


Innovation in the workplace is paced by how well you can use technology to test and improve your ideas. As you adopt modern workflows you may realize the dramatic impact that simulation-based engineering or digital prototyping can have on your product development costs and schedules.  You will soon realize that the cost of the second processor and the additional memory needed to support digital prototyping is far less than the cost of multiple physical prototypes and the time required to produce them. Instead of investigating hundreds of digital prototypes, you only have time to look at a single physical prototype and ask: what if I…?


Those “what ifs” could have been played out on a dual-processor Intel Xeon processor 5500 series-based digital workbench faster, and your time and cost of physical prototypes could have been significantly reduced.

Are you ready to adopt modern workflows and accelerate your innovation?




When it comes to your car or fixing something around the house, you know there’s a tool for every job, right? Well, it’s no different when you’re considering transitioning from a desktop-based server to a real server. That’s why we created the Server Transition tool for small businesses.  It’s easy to use (a few clicks) and gives you what you need to make a sound business decision.


Now you have a comprehensive tool to understand your server options and how your current system measures up. All you do is select the year you purchased your current system and the tool makes an initial guess at the configuration you have right now. You can make adjustments so it better matches your actual set-up and then, PRESTO! You’ll see how your system stacks up to a real server based on the processor, memory, storage, business applications and form factor. This way, you’re making an apples to apples comparison with quantifiable data.  Well, I guess since it’s a desktop versus a server it’s more of an apples to apple pie comparison and maybe even apple pie a la mode….


You can even take the comparison one step further by answering a few usage questions to find a more customized configuration for your business needs. It’s really that simple – a few clicks and you have a recommended server configuration that will deliver greater dependability, productivity and performance to meet your needs today and tomorrow. Now, if only there were tools like that to help my sister find the right guy to date.  Maybe that would be more of a tool avoidance tool, if you know what I mean.


Don’t wait, check it out now.

In March '09, former Intel executive Pat Gelsinger predicted that Nehalem-based Xeon 5500 servers would become "cash machines" for the IT industry, due to unprecedented power-efficient performance gains that can deliver a very short ROI for IT.  Pat's description of the Xeon 5500 was validated during a briefing with Intel CIO Diane Bryant in San Francisco on October 6th, as reported in TG Daily.


She discussed the ROI achieved and the impact that a proactive server refresh strategy has had on Intel’s bottom line, as reported in PC World.  Some of her key points:


  • Intel is expecting up to $250M savings over 8 years, saved $45M in 2008 alone

  • Despite these results, the economy forced Intel to re-evaluate capital spending in 2009

  • Intel found that delaying the server refresh would cost $19M more than continuing, so it continued


  • Achieving an average 10:1 server consolidation ratio with Xeon 5500 in the design computing environment, and 20:1 virtualization refresh ratios in the Office/Enterprise environment

  • Server refresh is also the #1 driver of Intel’s carbon footprint reduction, with an initiative to cut the footprint by 5% per year. Intel is projected to reduce emissions by approximately 4K metric tons in 2009, and the server refresh strategy is forecast to be the #1 project helping IT reduce carbon.

  • Staying on the green IT theme, the newest allies for IT in driving carbon reduction and energy cost savings are the energy utilities. A prime example is the Energy Trust of Oregon, which offers cash incentives to motivate Oregon businesses to make energy-saving investments. Intel gained access to a $250K incentive from them as a result of the energy savings from replacing older servers with newer, more energy-efficient servers in our data centers. If you are replacing older servers with modern, energy-efficient Xeon 5500 based servers and you haven’t had this conversation with your utility yet, please do so. You may be eligible for utility incentives for energy savings that can lower your operating costs and reduce the impact of your business on the environment. To estimate the energy savings associated with server refresh, go to
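To put the refresh math in perspective, here is a minimal payback sketch in Python. All of the dollar figures and the consolidation ratio below are hypothetical placeholders, not Intel's actual numbers; the point is only to show how quickly lower running costs can recover the refresh capital expense.

```python
# Hypothetical server-refresh payback sketch. Every figure below is an
# illustrative assumption, not a real Intel number.

def refresh_payback_years(old_servers, consolidation_ratio,
                          annual_cost_per_old, annual_cost_per_new,
                          capex_per_new):
    """Years until the refresh capex is recovered by lower running costs."""
    new_servers = -(-old_servers // consolidation_ratio)  # ceiling division
    capex = new_servers * capex_per_new
    annual_savings = (old_servers * annual_cost_per_old
                      - new_servers * annual_cost_per_new)
    return capex / annual_savings

# Example: 100 aging servers consolidated 10:1 onto newer hosts.
years = refresh_payback_years(
    old_servers=100, consolidation_ratio=10,
    annual_cost_per_old=3_000,   # assumed power + maintenance per old box
    annual_cost_per_new=5_000,   # assumed running cost per new box
    capex_per_new=8_000)         # assumed purchase price per new box
print(f"Payback in about {years:.1f} years")
```

With these placeholder inputs the capex is recovered in well under a year, which is the same shape of result behind the "cash machines" claim.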


You’re going to hear more about these “cash machines” in the very near future…stay tuned!





I’m a bit late in relaying my thoughts from Intel’s Developer Forum (IDF), but there was definitely some excitement around virtualization and high performance networking that I wanted to get the word out about!


In the past I’ve shared some details about SR-IOV and the advantages you can gain by presenting virtual LAN hardware directly to each Virtual Machine (VM), effectively bypassing the hypervisor when presenting virtual devices. The advantage is clear: the less interaction there is from the hypervisor in the networking stack, the less processing overhead is required for the system to process the data.


That’s all good: with a dual 10 Gigabit adapter, you can segregate those two physical pipes into perhaps 16 virtual pipes exposed to 16 VMs.  By segregating these LAN pipes at the hardware level with SR-IOV instead of using hypervisor switching, the gains in both CPU utilization and maximum total throughput can be very large.  There were several demos at IDF with various configurations, and reductions in CPU utilization of 40%, coupled with dramatic improvements in throughput, were possible!
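As a back-of-the-envelope illustration of that partitioning, here is a toy Python model that splits a dual-port 10 GbE adapter into 16 virtual functions, one per VM. The function name and the even bandwidth split are assumptions made for illustration only; real VF scheduling happens in the NIC hardware and need not be an even split.

```python
# Toy model of SR-IOV partitioning: split a dual-port 10 GbE adapter's
# bandwidth across virtual functions (VFs), one per VM.
# Purely illustrative; actual VF scheduling is done by the NIC.

def partition_adapter(ports, gbps_per_port, vfs_per_port):
    """Return a list of (vf_name, gbps_share) pairs, one VF per VM."""
    vfs = []
    for port in range(ports):
        share = gbps_per_port / vfs_per_port  # assume an even split per port
        for vf in range(vfs_per_port):
            vfs.append((f"port{port}-vf{vf}", share))
    return vfs

vfs = partition_adapter(ports=2, gbps_per_port=10, vfs_per_port=8)
print(len(vfs))   # 16 virtual pipes for 16 VMs
print(vfs[0])     # ('port0-vf0', 1.25)
```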


But there is unfortunately one minor complication that I didn’t mention in my last post on SR-IOV: when VMs move between physical boxes (a usage that is highly desired and commonplace these days), you run into some problems with this SR-IOV capability.  When the hypervisor owned the network hardware abstraction, the performance was worse, but the functionality was better, because you could seamlessly migrate from one box to another and the virtualization application would handle the details.  With SR-IOV, a new layer needs to be added so that the direct hardware connection between the VM and the LAN hardware can be moved to a new box.


The really exciting part of IDF demos that I saw was the demonstration not just of the SR-IOV functionality on multiple hardware and virtualization configurations, but that these demonstrations also showed updated software from two virtualization vendors allowing mobility of the VMs while supporting SR-IOV!


There was a demo on Dell systems showing a fully functional SR-IOV implementation with Citrix’s virtualization suite, and two separate demonstrations, also on Dell systems, with VMware showing their new Network Plug-In Architecture (NPIA) solution that allows SR-IOV-connected VMs to migrate seamlessly between servers.

For those hungry for more detail, I’ve included the three SR-IOV demonstration videos here:


The first is the Citrix demonstration on Dell and Intel hardware of SR-IOV with VM mobility:





The next two videos are demos on Dell and Intel hardware with VMware and their NPIA software implementation.





Each virtualization demo shows the massive performance benefits under various workloads when moving from Hypervisor based LAN segregation to SR-IOV implementation.  But most importantly, each demonstration proves out the capability to migrate VMs between physical hardware.  The only system hardware requirement is that the server itself supports VT-d.  If the networking hardware in the newly migrated-to box supports SR-IOV you get better performance, and if not, the solution falls back on the legacy Hypervisor virtualization.  Backwards compatibility is maintained!
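The fallback behavior described above can be sketched as a simple decision: if the destination host's NIC supports SR-IOV, direct-assign a virtual function; otherwise fall back to the emulated path. This is hypothetical pseudologic to make the flow concrete, not any vendor's actual API.

```python
# Sketch of the migration fallback logic described above. The function
# and mode names are made up for illustration; they do not correspond
# to a real Citrix or VMware interface.

def choose_nic_mode(dest_supports_vtd, dest_nic_supports_sriov):
    """Pick the networking mode for a VM after migration."""
    if not dest_supports_vtd:
        # The only hard system requirement: the server must support VT-d.
        raise ValueError("destination host must support VT-d")
    if dest_nic_supports_sriov:
        return "sriov-direct"   # VF directly assigned to the VM
    return "emulated"           # legacy hypervisor-switched path

print(choose_nic_mode(dest_supports_vtd=True, dest_nic_supports_sriov=False))
# falls back to "emulated": backwards compatibility is maintained
```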


I didn’t get firm details on when this full support for SR-IOV and migration will be available in Citrix’s and VMware’s releases, but the demos looked pretty clean, and hopefully these suites will be available soon with this new functionality.  The LAN and server hardware ecosystems are ready today, and it looks like the software vendors are just around the corner.  Virtualization momentum continues!

While virtualization was the big takeaway for me from IDF, there were also several other interesting demos for us networking hounds.  I’ve linked a couple videos of them below for anyone still thirsting for more of the latest networking technology and performance details!


The first video is a demonstration of Intel’s 82599 10 Gigabit Ethernet-based adapter card with Fibre Channel over Ethernet (FCoE) support.  Storage and Ethernet together at last!


The second video is a demonstration of Intel’s NetEffect 10 Gigabit Ethernet card publishing 1 million messages per second in a simulated NYSE floor trading scenario.  Oh yeah, with only 35 µs of latency.  That is fast.

So although I am two weeks out from IDF, I hope some of you got a little taste of the networking excitement that took place.  Industry-wide, hardware and software vendors alike are delivering ultra-high-performance, low-latency applications for the financial services industry, as well as mainstream performance increases for virtualization.  The performance and technology beat moves forward.  Exciting times!


Ben Hacker

Looking for an excuse to sneak out of the office for a RISC migration seminar?  Here's one you can't resist: a seminar paired with a spectacular lunch at a great Greek restaurant in San Francisco.

Dell, Red Hat, and Intel are hosting a RISC migration seminar over lunch during Oracle OpenWorld next week.

Register here for the seminar, held at the Kokkari restaurant at 12pm on Wednesday, October 14.  It is not too late; a small number of seats are still available.

Don't miss this opportunity to learn when and how the migration should be done, and to ask questions of the Dell, Red Hat, and Intel team members on site.  I am hoping the hosts do their job and aren't just busy enjoying lunch.


I had never heard of a cloud forest before I went on vacation this past June to Costa Rica, where I spent time at the Villa Blanca Resort.  Even when we arrived at Villa Blanca, I have to admit I was a little confused.  I had expected to see a forest in the clouds; instead, I saw a beautiful hillside scattered with a few trees.

villa blanca grounds.jpg

However, when we went on our walking tour the next morning, our tour guide walked up to one of the larger trees and said, “Welcome to the Cloud Forest.  This tree is a perfect example of a cloud forest.”  As I looked more closely at the tree, I was amazed at what I saw - this single tree was host to thousands of species of both plants and animals.

  cloud forest tree.jpgcloud forest foliage.jpg


Nature is extremely efficient in its use of a cloud forest.  Likewise, cloud computing is an extremely efficient use of computing resources.  It is for this reason that Intel IT has developed an enterprise cloud computing strategy focused on building an internal cloud to boost efficiency and flexibility inside our IT infrastructure.  This internal cloud strategy is closely linked to our current use of, and accelerated plans for, virtualization.  In addition, Intel IT selectively uses external cloud services for certain applications.



Additionally, we are exploring the use of rich mobile clients with cloud computing models going forward, to better meet the needs of an ever-changing user base, consumerization trends, and the need to maintain highly efficient, secure information and application delivery to employees.


To find more discussion, blogs and content relating to cloud computing – in the enterprise or corporate client solution areas – take advantage of these resources.



And if you ever have the chance to visit Costa Rica, visit the cloud forest.  It was worth the trip.



Chris Peters, Intel IT


I wanted to follow up on the pre-IDF blog I wrote and on what Sean and I conveyed regarding comprehensive I/O optimization for the enterprise cloud (based on virtualization infrastructure).  I owed this blog to those who could not attend the IDF session.


In the last blog we identified four important vectors that drive I/O evolution:

1) A balanced system that maps to the increases in CPU performance

2) Scalability

3) Unified fabric

4) Security


I call it an evolution because I feel that is the natural direction things will head in the near future.

In a cloud environment you would anticipate that automation and policies determine the degree of consolidation possible on a system. If SSDs gain broader adoption and virtualization performance increases thanks to hardware assists, the I/O fabric could become the bottleneck for consolidation and efficiency, because it cannot keep up with the increased data rates from storage and the CPU.


One way to address this is to reduce or eliminate the overheads in the I/O stack caused by software emulation of devices in the VMM. VMDq is an example of a technology that can reduce the overhead by offloading some of the VMM's tasks through hardware assists in the NIC. Direct assignment with PCI-SIG SR-IOV support is a way to eliminate the overheads by bypassing the VMM entirely. With SR-IOV, a single device can be divided into many logical devices known as Virtual Functions (each like a pair of independent transmit/receive queues). Each Virtual Function can be directly assigned to a VM using Intel VT-d, thereby bypassing the VMM. This can work with live VM migration too.

At IDF we showcased four demos of prototype SR-IOV software solutions running on Intel Xeon 5500 based hardware with prominent VMM vendors (VMware, Citrix and Red Hat) that have different hypervisor technologies. The networking demos showcased working live migration with SR-IOV and VT-d based direct assignment: directly assigned VMs could even be moved to an emulated mode and brought back to direct assignment. Intel has been working not only with software providers but also with other hardware vendors, like LSI, to demonstrate this capability. These technologies are as important to storage as they are to networking, particularly as SSDs gather steam. You can learn more from the blogs below on the demos.
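For readers who want to see what creating Virtual Functions looks like in practice, here is a small Python sketch using the `sriov_numvfs` sysfs interface that later Linux kernels expose (at the time of these demos, VFs were typically enabled via driver module parameters instead, such as ixgbe's `max_vfs`). The device address and VF count are example values; running this for real requires root and SR-IOV-capable hardware.

```python
from pathlib import Path

def enable_vfs(pf_bdf, num_vfs, sysfs_root="/sys/bus/pci/devices"):
    """Create num_vfs Virtual Functions on the physical function pf_bdf.

    Uses the sriov_numvfs attribute added in later Linux kernels. The
    sysfs_root parameter is here only so the logic can be exercised
    without real hardware.
    """
    attr = Path(sysfs_root) / pf_bdf / "sriov_numvfs"
    attr.write_text("0")           # the kernel requires resetting to 0 first
    attr.write_text(str(num_vfs))  # then request the desired VF count

# Example (requires root and an SR-IOV-capable NIC):
# enable_vfs("0000:01:00.0", 8)
```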


Demo with Intel Xeon 5500 based Dell servers

An analyst view on the LSI solution demonstrated



If those multiple 1GbE cables (that make your fabric look like pasta) are replaced by 10GbE, and SR-IOV and VT-d are used for performance, then we have answered both the I/O performance and scalability requirements for a flexible datacenter.


Beyond that, VT-d provides greater protection by allowing I/O devices to access only the memory regions allocated to them, and SR-IOV allows a VM to access only its own portion of the device, restricting access to Virtual Functions owned by other VMs and to the device as a whole. Better security through better isolation.


Last but not least of the requirements is the unified fabric. When IT can use a single I/O device for either storage or LAN traffic, the rigidity associated with provisioning servers with some number of HBAs and some number of NICs is reduced; I/O capacity becomes fungible and flexible. FCoE and iSCSI are key technologies in this direction. Adding the capability to monitor QoS and shape traffic makes it a good match for a flexible datacenter.


Many of the technologies I discussed above (VT-d, SR-IOV, FCoE, iSCSI) are here today, and software such as Red Hat Enterprise Linux is already delivering solutions. In my view it is just a matter of time before the ecosystem builds out further and the hardware is well tuned.


With these in perspective how do you see your datacenter shaping up?
