Start your week right with the Data Center Download, here in the Server Room! This post wraps up everything going on with the Server Room, the Cloud Builder Forum, and our Data Center Experts around the web. This is your chance to catch up on all of the blogs, podcasts, webcasts, and interesting items shared via Twitter from the previous week.


Here’s our wrap-up of the week of May 23rd:



In the Blogs:

Billy Cox asked Hybrid Cloud: Are you ready? And didn’t quite recommend that we just leave the barn door open…

Chelsea  Janes shared her thoughts on Intel’s  Keynote Vision at Interop Las Vegas 2011

Wally Pereira explored testing by asking A  Pilot or a Proof of Concept? The First Step in the RISC to IA Migration


This week on Intel Conversations in the Cloud we had Ben Grubin from Novell on to discuss data center infrastructure tools and automating cloud services, as well as some best practices for your data center and cloud computing needs.


On Intel Chip Chat, Bridget Karlin discussed the Intel AppUp Small Business Service, a new hybrid cloud computing offering that delivers a complete solution for small businesses, from software to hardware to cloud-provisioned monitoring and management.


This week the Intel Chip Chat Challenge has returned!   Allyson Klein asks a simple question:

By 2015 what is the largest impact cloud computing will have  on our lives, why, and how?

This is your chance to share your thoughts and you could win  in the contest! Go answer!


Across the web we saw:

A demo of the Intel Expressway Cloud Access  360 Partner Program Testimonials

From the Intel Cloud Builders program we saw a demo of Trusted Compute Pools  for Cloud Computing with Hytrust and Intel TXT

We saw an explanation of Intel Xeon E7 processors powering the SAP In-Memory Appliance (SAP HANA™).


In the social Stream:

Winston Saunders shared the Top 10 Things Data Centers Forget in Their PUE Claims, on LinkedIn.
Raejeanne Skillern shared a look inside Google’s DC:
Chris Peters asked: Is compromise inevitable? The Intel CISO says yes.

Mashable reported  on datacenter growth: YouTube has 2  Days of video posted every 1 minute

Raejeanne Skillern shared info from Data Center Knowledge that eBay's data centers are filled with racks packed with up to 96 servers each, using 28 kilowatts of power.
Winston  Saunders shared a 451 Group Video on Renewable  Energy and Data Centers on Data Center Knowledge.
Cisco Systems offered  a peek at the cool, green features from Cisco's [new] Allen, TX data center!

Allyson Klein wants to hear from you cloud experts out there: what are your thoughts on hybrid cloud services?


AMT in Workstations

Posted by K_Lloyd May 31, 2011

Finally! Out of Band Management!

'Real' (read: Xeon) workstations have been the unsupported crossbreeds between servers and clients.

For years servers have been managed using a "baseboard management controller" (BMC).  In the OEM lexicon this includes technologies like Director, iLO, iDRAC, ...

In clients (desktops and laptops), this need was filled by Intel-based systems with vPro, which include AMT (Active Management Technology).


In either of the above, out-of-band management allows remote device management at the hardware level.  This is fundamentally different than what you can do with a software agent.  OOB management allows you to remotely perform low-level functions - like power on, power off, reboot, format, partition, BIOS configuration, etc. - none of which are exposed through an agent that runs on top of the operating system.


OOB management is a critical tool for server management, and with vPro is becoming a critical tool for client management as well.


Up until now, OOB management has been missing on workstations.


With the E3 family and C206 chipset, Intel introduces AMT into the workstation family for the first time.  This will continue into the two-socket space when the 'Sandy Bridge' products launch this fall!  This is seriously exciting. Customers can use the same tools to manage fleets of Xeon-based workstations as they use to manage their vPro laptops and desktops!


OOB management can dramatically reduce support costs and travel time, keep support staff efficient, and get your workers back to work faster.

Billy Cox

Hybrid Cloud: Are you ready?

Posted by Billy Cox May 25, 2011

Maybe you’ve finally figured out why you need to incorporate clouds into your thinking. Let’s see: agility, flexibility, cost savings, new business models, etc. Any one of those is likely enough to warrant a hard look at how a cloud can help.


At the risk of making life more complicated than it already is, you may want to start the process with a reality check: you will be using multiple cloud providers. Possibly one or more SaaS providers and one or more for IaaS.


While you may only use one SaaS provider for, say, CRM, you may end up using a different provider for, say, travel reimbursement management. Then, when the time comes to build your own private IaaS cloud, you may find that you will also need so-called burst capability. This, in turn, will cause you to create relationships with one or more external partners to host those IaaS workloads. Some of those partners may implement an extension of your private cloud in their facilities, something I refer to as an “extended private cloud”. Some of the IaaS partners may be public cloud providers such as Amazon, Rackspace, Joyent, etc.


Some things to consider in these situations:

  • For the IaaS workloads, you will need to build them in such a way that they can in fact be hosted at multiple providers. This means you may end up maintaining multiple versions of the VM image which are then validated in each of the environments. This sounds bad but there are format conversion tools emerging and you have to do the environment specific validation anyway.
  • When your internal users get ready to deploy a workload into one of the IaaS clouds, which one should they use and what credentials will they use to access that cloud? The cloud selection needs to be implemented based on an IT policy, in turn, implemented in an automation tool. Univa Unicloud is one example of such a tool (there are others). The developer credentials need to be derived from the organization, not from the service provider. Therefore, when the developer gets ready to push a workload to an IaaS or access the application environment, they need to use their corporate credentials but the credentials needed at the cloud vendor will be specific to the cloud vendor. Products such as the Intel Cloud Access 360 can perform this ‘translation’ function (authenticate a developer with corporate credentials but access the cloud using service provider credentials).
  • Not all of your applications will be in your cloud managed environments. Some of the applications will need to remain in their ‘traditional’ enterprise environments. Therefore, we need to put in place the means for the ‘cloud applications’ to securely and transparently access ‘traditional enterprise applications’. One example of solutions in this space is the Citrix Cloud Bridge.
  • Make sure your internal processes (such as help desk) are ready for the complexity of managing these relationships, being able to easily locate workloads, and handle support escalations into multiple providers.
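The "cloud selection implemented based on an IT policy" idea in the second bullet could be sketched as a simple rules function. This is purely hypothetical — the workload attributes and provider names below are made up for illustration, and real automation tools such as Univa UniCloud encode far richer policies:

```python
# Hypothetical sketch of cloud selection driven by IT policy.
# Attribute names and provider labels are illustrative only.
def select_provider(workload):
    """Pick an IaaS target for a workload based on simple IT policy rules."""
    if workload["data_classification"] == "restricted":
        return "private-cloud"            # sensitive data stays internal
    if workload["burst"]:
        return "public-iaas-partner"      # burst capacity goes to a partner
    return "extended-private-cloud"       # default: partner-hosted extension

wl = {"data_classification": "internal", "burst": True}
print(select_provider(wl))  # public-iaas-partner
```

In practice the policy engine would also hand back the service-provider-specific credentials mentioned above, derived from the user's corporate identity.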


If you are a small business, all of this may be overkill. And, there may be another option: AppUp Small Business Service. This is a way for a small business to get access to software solutions with support from a channel partner. The trade-off can be the software solutions you need to keep 'on-site' but managed 'off-site' vs. SaaS solutions (that are almost always 'off-site').


These are some of the things to consider. I’m sure there are many more.


In any case: start getting ready. Hybrid clouds are coming to an IT shop near you.

Back in the old days, when the horses escaped from the barn it was a bad thing since they tended to just wander off.  Often: no horses meant no food.


But, what if the horses ‘wandered off’ to join with other horses and instead of leaving, they came back with friends: some bigger, some smarter, some really good at coding, some really good at documentation, etc.


The analogy is aimed at the OpenStack project, where NASA and Rackspace have released into open source the basis for cloud management (compute, storage, networking, and more). From my perspective, this was a significant event: not least because the code they released was code they were using every day in production, but also because we can now create parallelism in the innovation for cloud infrastructure management.


Now that the release into the community has happened and an active community has formed, it seems pretty clear that there is no going back now. With the ‘horses wandering off’, what kind of discoveries might we find? Here are a few that my prognostication sees coming.


  1. Here’s an easy one. For a commercial customer to use OpenStack, we need supported distributions. Citrix has announced their offering at their conference this week.
  2. Workload schedulers that optimize infrastructure in ever more complex and interesting ways.  For a service provider, more VMs per server means more revenue. My back-of-the-envelope math (below) says that for 4 rows of an OpenCompute infrastructure, a 10% improvement in workload density (while maintaining SLAs) could be $8k/day in increased revenue. For corporate IT, it means better utilization of the assets. But the trade-off usually is to pack more workloads on a server with decreased ability to ensure an SLA is being met. As we move forward with the underlying technologies, we will see increasing ability to measure fine-grained performance at the server, network, and storage levels, allowing us to place and adjust workloads based on the ability to see potential compromises to SLA guarantees.
  3. In the past we could design the network to support the carefully placed workloads. But with a cloud, we have to design a network where those darn workloads may not only change (radically) but also move around. Doing this at low cost with high availability is a challenge. We are seeing evolution to network designs that appear to offer “cheap, fast, and available” – and where I can pick all 3. OpenFlow is one example of the innovation.
  4. Hybrid clouds will be the norm for IT. What needs to be the norm is not the norm in terms of tools and IT processes to manage a hybrid environment. To be sure, there are some early players that have made it work in spite of the state of the art. OpenStack enters the market at a time when there is urgent need for portability. I’m betting that the community will embrace this requirement and not ‘defend the barn door’.
  5. We have to be able to trust our cloud infrastructures. To date that is accomplished by the “trust me” method. We already have many of the tools in place to validate VM’s, validate and recognize users, and establish platform and hypervisor trust. But, these are completely independent tools that are poorly integrated into a whole. I would expect to see the OpenStack community build these connections in interesting and innovative ways.
  6. Academic research is strongest when there is an open source basis for the experimentation and innovation. The fact that OpenStack is less about the API and more about delivering functionality tells me that we will be surprised at what the research institutions can come up with.


Well, if the horses are already out of the barn, I for one would vote to let them find their friends and continue to build a community.


(Back-of-the-envelope math: the compute farm is 4 rows of 24 racks/row and 30 servers/rack, yielding 2,880 servers. With 12 VMs per server at $0.10 per VM-hour, that drives $82,944/day in revenue.)
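That back-of-the-envelope math, including the 10% density figure from item 2, is easy to reproduce. A quick sketch using only the numbers given above:

```python
# Reproducing the back-of-the-envelope math: 4 rows x 24 racks x 30
# servers, 12 VMs per server, $0.10 per VM-hour.
rows, racks_per_row, servers_per_rack = 4, 24, 30
servers = rows * racks_per_row * servers_per_rack          # 2880 servers
vms_per_server, rate_per_vm_hour = 12, 0.10

daily_revenue = servers * vms_per_server * rate_per_vm_hour * 24
print(f"Baseline revenue: ${daily_revenue:,.0f}/day")      # $82,944/day

# A 10% improvement in workload density adds roughly $8k/day:
print(f"10% density gain: ${daily_revenue * 0.10:,.0f}/day")
```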

Last week was Interop Las Vegas 2011, a popular event featuring some of the best-known technology companies, including McAfee, Dell, Citrix, Cisco, Panduit, HP, Force 10 Networks, Barracuda Networks, and many more.  But it was the Intel Cloud 2015 Vision that really spoke volumes to Intel’s partners and customers.  From the Intel booth, to the more than one thousand attendees at Intel’s keynote with Kirk Skaugen, to the hundreds that packed the panel with Iddo Kadim, to the conversations happening on social networks and blogs discussing the Intel vision, it quickly became clear what the next hot topic from Interop was.


In October 2010, Intel launched its Cloud 2015 Vision, and since then Intel has continued to tell the story of connecting more people and devices to the cloud, with more devices becoming client aware, focusing on the three pillars of the vision: a federated, automated, and client-aware cloud.


At Interop, these three pillars became better understood, not only at the booth, with six demos representing each pillar, but also through Kirk Skaugen describing what drives Intel toward the 2015 timeline. He says: “We will connect and extend computing technology to enrich the lives of every person on Earth.”


Also in his keynote, Kirk explained that more people will have more access to technology and be more connected than ever. He gave a compelling and “startling” statistic: 245 exabytes of data crossed the internet in 2010 alone.


So how does that fit with the Intel Cloud 2015 Vision? Intel hopes to play a role in connecting one billion more users to the internet with the Intel® Atom processor in smartphones, nettops, netbooks, and more, by 2015.  Intel also hopes to play a role in connecting 15 billion devices to the cloud by 2015 through the next generation of data centers.  This level of connectivity and awareness could change how the world stays connected to each other, the news, and other new technologies, and change the way we live in what is now called the digital age.


We saw great responses from the attendees and online. It wasn't just that people understood what Kirk presented in his keynote; people were genuinely interested! We had people coming by the booth just to see what Intel was showing, and people who wanted to become part of the Intel Cloud Builders program.


We also received positive remarks online. Here are just a few:

“Intel ROCKS the cloud!...” & “Thanks for interview @ the #intel booth!  I had a lot of fun.”


The Interop organizers even shared some photos of the Intel booth on the Interop Facebook Page. We saw articles popping up everywhere. Cloud Tweaks said: “Kirk Skaugen… came up with some interesting numbers on cloud computing… While these figures are quite optimistic, they are not impossible…”


Force 10 Networks shared: “Intel Corporation will help deliver secure "cloud-in-a-rack" by integrating Intel® Expressway Access 360 for Single Sign On and Account Provisioning into Force10's Open Automation Framework.”


And another article had this to say: “Today, the McAfee Cloud Security Platform offers the following modules, the Intel® Expressway Cloud Access 360…the Intel® Expressway Service Gateway…”



There continues to be a “buzz” not only about cloud computing, but about Intel’s Cloud 2015 Vision!


Stay tuned here on the Cloud Builders Forum, on Twitter, and on Facebook for more news!

The migration from expensive legacy RISC platforms to lower-cost commodity Intel Xeon based servers is not a trivial effort, but it need not be so daunting that staying on the expensive RISC server looks like a reasonable alternative.  What makes this migration challenging is as much choosing among the alternative approaches as the actual process itself.


Regardless of the level of effort the migration process needs to start out with some analysis.  Possibly the analysis need not go much beyond the existing documentation of the system.  Other levels of analysis can be some study of alternatives from the system vendor; for instance do they have a native port for IA and can that be installed on an IA server?  Other times there is no documentation and the source code is all custom with possibly vendor libraries edited for the application.  Other applications have a native version for IA but your data is stored in the database in binary format.  These are just a few samples of the sorts of RISC to IA migrations that are occurring.


Most companies I’ve worked with have a pretty good idea of the parameters of the application they want to migrate.  They believe they have studied the problem pretty well in the process of getting approval to execute the migration.  Maybe the staff wants to migrate not just the application but a large number of external components as well, such as LDAP capability.  They bring me in because they want to get on with the actual migration.  They don’t want credenzaware - you know, those three-ring binders of data sitting on the IT manager’s credenza.


But to me, simplicity is the key to success.  You know that when making changes to a system, only one thing (the operating system upgrade, the application upgrade, application changes, the platform upgrade, etc.) should be changed at a time.  Otherwise it may be difficult, if not impossible, to isolate the cause of problems and bugs that are encountered.  The migration process introduces changes in all sorts of variables: platform, operating system, application version, and more.  By adding variables to the plan that are peripheral to the main system, the likelihood of the migration failing skyrockets.  A little additional analysis can help in keeping it simple.


So after our brief analysis has limited the parameters of the migration effort, what are we going to do first, a pilot or a proof of concept?  At the risk of being obvious a pilot is a test of a system that will be cloned after the pilot works the kinks out.   A proof of concept is executed to ‘prove’ out the idea that this concept will work.


For example, suppose that the RISC servers are managing backup systems for the corporation.  The same backup application is available in a native version for IA servers.  A pilot should be run to test installing the backup application on the IA server, hooking it into the corporate network and ensuring it can handle the load and functions as well as or better than the more expensive RISC server.  This process creates a script that can be replicated for all of the other RISC servers supporting the backup process. 


Once the agreed-upon testing period has ended and the script for replicating the process is written, the next step is to begin the conversion process for all of the other RISC servers running the backup software.  And then you’re done!  You’ve converted your application running on RISC servers over to IA, likely saving the company tons of money.


If only it were all so easy.  Migrating application systems, whether based on a vendor's application platform or built in-house with custom source code by a long-departed author who used Russian cities as variable names (but I digress), requires a careful process.  That process begins with a Proof of Concept.


The proof of concept starts with some simple premises.  We’re not migrating the production application, just a copy or backup of the production application.  We’re going to make sure, at the end of the process, that the application on the IA platform has all the functionality, and better performance, of the production system running on the RISC server.  In other words, we’ve accounted for everything that has to be moved.  And we’re going to be sure we have documented all the steps to build a recipe for the actual migration of the production system.


The Proof of Concept may not even follow the process that will be used in the migration of the production system.  Here we are proving to ourselves that it will be worth it: we’ve got the right-sized server, and the performance meets or exceeds expectations.


The Proof of Concept will differ because many of the conversion steps can be taken in this stage.  For instance, shell scripts can be set up that will eventually serve the production system.  For custom-written applications, some vendors offer migration services, such as programs that read the source code and report where libraries and code syntax need to be changed for Linux; this work will not need to be replicated in the production migration.  (Of course, you have locked down code changes once the Proof of Concept has started.)



My next blog will cover the methodology of the Proof of Concept (POC).



Here’s our wrap-up of the week of May 16th – May 20th:



In the Blogs:

Bruno  Domingues  discussed Trusted  Compute Pools: What’s under the hood?


Winston Saunders introduced us to sUE and The  Elephant in your Data Center: Inefficient Servers


And, Emily Hutson explained Data  Protection for Small Business and Intel Xeon Processor-based Servers


Broadcasted across the web:

This week on Intel  Conversations in the Cloud we had Joyent on to discuss best practices for  building your data center for the cloud and their Smart Data Center products.


On Intel Chip Chat we had Suzanne Fallender, Intel’s Director of Strategy and Communications for Corporate Social Responsibility (CSR), on to discuss this year’s CSR report from Intel and what Intel is doing, inside and out, as a technology company to be responsible to Mother Earth. You can find out more about Suzanne from a recent article.


Joyent and Dell took part in this week’s Intel Cloud Builders webcast on Addressing Industry  Challenges with Cloud Computing.


On Channel  Intel we saw:

Kirk  Skaugen in the Intel  Keynote at Interop on the Cloud Vision 2015


Vikas Jain explain Single Sign-on to  the Cloud with Intel Cloud Access 360 & Trusted Client to the Cloud  with Intel Cloud Access 360


Jesper Tohmo explained Provisioning Identity across  the Cloud with Intel Cloud Access 360


And, Ben Onken showed us how you can Accelerate Data Encryption  with Intel Xeon Processors


Across the web we  saw:

Raejeanne Skillern shared that  GigaOm listed Intel on their list for Structure – 50 Companies  Influencing How the Cloud and infrastructure evolves.


Pauline Nist shared a white paper from Springboard Research that shows Intel Itanium and Xeon at the top of their list for mission-critical processors


And, Intel was at Microsoft TechEd North America sharing the events and news #MSTechEd

Download now


Telecommunications provider Telefónica wanted to launch new cloud services to all its customer segments, from enterprises to consumer, and expand its portfolio to SME/SMB and multi-national companies throughout Europe and Latin America. But first, it needed a new IT infrastructure based on a common platform that could scale as services grew.

The company implemented Cisco Unified Computing System* (UCS*) cross-architecture, powered by the Intel® Xeon® processor 5600 series.

“This processor is scalable, energy-efficient, and reliable and ensures service continuity,” explained Juan Antonio Sánchez Cañibano, computing services manager for Telefónica. “We also needed a common platform from which to launch new services and the Intel Xeon processor 5600 series provides this.”

The company has now launched two new services from the Intel Xeon processor 5600 platform: Aplicateca*, software-as-a-service, and Terabox*, ubiquitous and flexible storage. It’s also set to launch desktops and virtual data centers, in both public and hybrid flavors, over the next few months. Telefónica’s future strategy is based on developing more cloud services as it moves to become a fully integrated multimedia provider. Intel Xeon processors are at the center of this strategy.

To learn more, read our new Telefónica business success story. As always, you can find this one, and many more, in the Reference Room and IT Center.



*Other names and brands may be claimed as the property of others.

As I explained in my last blog post, “Mitigating threats in the cloud using Intel® TXT and Trusted Compute Pools”, Intel TXT has the capability to perform a measured launch of the hypervisor and/or operating system, and consists of a series of hardware enhancements:


  • Trusted Platform Module (aka TPM), which allows for secure key generation and storage, and authenticated access to data encrypted by this key. By analogy, it is like a smart card embedded in the chip: the private key stored in the TPM is not available to the owner of the machine, and it never leaves the chip under normal operation;
  • Memory and I/O virtualization performed by the Intel® 5520 chipset that, among other things, protects certain areas related to TXT from DMA access;
  • Intel® Xeon® 5600 series family or Xeon® E7 family that support the TXT instructions;
  • Enabled BIOS and Hypervisor.


We maintain a list of hardware that is TXT capable, where you can find out which manufacturers and models deliver fully enabled solutions.


How do these pieces work together?


Before we explain TXT, there is some groundwork to be done. First let’s understand how a key component in this technology works. The Trusted Platform Module is the root component of a secure platform. It’s a passive I/O device that is usually located on the LPC bus, and nowadays can be found as part of the North Bridge chipset. TPM has special registers, called PCR registers (i.e. PCR[0…23]) and can do some interesting things: Seal/Unseal secrets, allow Quoting (Remote Attestation) and do some crypto services, e.g. RSA, PRNG, etc.


The principle of TPM is that it is based on PCR extend operations, where it uses the previous PCR value to define the next one:



A single PCR can be extended multiple times, and it is computationally infeasible to force a PCR to a chosen value, so the order in which things happen matters [(ext(A),ext(B)) ≠ (ext(B),ext(A))], and a secret sealed in the TPM can only be unsealed if the correct PCR values match, as presented in figure 1.



Figure 1 – Sealing/unsealing TPM operation based on PCR register matching.
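The extend operation can be modeled in a few lines of Python. This is a toy sketch, not the real TPM interface (TPM 1.2 PCRs are 20-byte SHA-1 registers), but it shows why ordering matters:

```python
import hashlib

# Toy model of the TPM 1.2 PCR extend operation: the new PCR value is
# the SHA-1 hash of the old value concatenated with the new measurement.
def extend(pcr: bytes, measurement: bytes) -> bytes:
    return hashlib.sha1(pcr + measurement).digest()

pcr0 = b"\x00" * 20  # PCRs start zeroed at platform reset

ab = extend(extend(pcr0, b"module A"), b"module B")
ba = extend(extend(pcr0, b"module B"), b"module A")

# The same measurements in a different order yield a different final
# PCR value, so a secret sealed against one ordering will not unseal.
assert ab != ba
```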


Intel® TXT brings a magic new instruction called SENTER that can attest the integrity of the hypervisor loader or OS kernel code in a process known as Measured Launch. As presented in figure 2, the hypervisor loader issues the GETSEC[SENTER] instruction, which essentially performs a soft processor reset and loads a signed authenticated code module (ACM), which can only be executed if it has a valid digital signature. This module verifies system configuration and BIOS elements by comparing them against “known good” values, and protects sensitive memory areas using Intel Virtualization Technology for Directed I/O (Intel VT-d) and related technologies such as Intel Extended Page Tables (Intel EPT). It then verifies and launches the hypervisor, which configures low-level systems and protects itself using hardware-assisted paging (HAP).



Figure 2 – Dynamic Root of Trust for Measurement


TXT is the right technology for a measured launch, and in conjunction with Intel Virtualization Technology (VT-x, VT-d, and EPT), it is also possible to implement run-time protection against malicious code.


If you want to learn more, I recommend you read this excellent paper by James Greene on Intel TXT technology. And check out the Cloud Builders Library.




Best Regards!

Here’s a dilemma many data center operators face: we’d like to quantify all aspects of data center efficiency, but many measurements are too difficult or too costly to make. So we either resign ourselves to inaccessible data, or we compromise: accept what we can measure and move ahead.


PUE is a great example of what compromising can accomplish. PUE, despite known imperfections, has become the de facto industry standard for data center infrastructure efficiency. It focuses on a high impact area of data center cost efficiency in a simple but clear way and provides a basis for defining improvements. It’s not perfect, but it’s relevant. It’s not overly complex but instead relies on easily available data.  And PUE has done more to improve data center efficiency than anything short of Moore’s Law.


So, where does this leave us? Well, the answer is, with an elephant in the data center.


The most wasteful energy consumers in data centers (especially in low-PUE data centers) are inefficient servers. Here is some data collected recently from an actual data center by an architect on our HDC team, William (Bill) Carter:


[Chart: energy consumption vs. compute contribution by server generation, from Bill Carter's data center walkthrough]


The data is from Bill's walkthrough of just one data center in a typical large company. The details may or may not be relevant to your specific data center, but the general lesson might. In this particular data center servers older than 2007 consume 60% of the energy, but contribute only an estimated 4% of the compute capability. Energy consumption by IT equipment is dominated by inefficient servers. That's a pretty big elephant!


This observation got me thinking about how one could build a framework to assess this problem.


There are lots of ideas already. The Green Grid is working toward a Productivity Proxy as a non-proprietary computational model for understanding the workload portion of the energy efficiency equation, but this is still a few years away at best.  Emerson Electric proposed the CUPS model in 2008 to reflect the workload-dependent aspects of energy efficiency; its simplicity is appealing, and it’s one of the proxies The Green Grid is exploring for broader use.


But that elephant, inefficient servers, is still lurking in your data center. We need a way to take action today.


So let me stir up the pot and propose an idea for a kind of Server Usage Effectiveness or SUE type metric. In the spirit of PUE, the first order of business is to keep it relevant and simple. Let's not try to boil the ocean with over analysis. Let's get started today!


So what’s the one thing we know about computers? That their efficiency has closely followed a “Moore’s Law” type progression of doubling in efficiency every two years. That means, all else held constant, a server that is two years old is about half as efficient as a server you’d purchase today.


We can take this idea and simplify a server efficiency assessment as follows:


SUE = (total number of servers) ÷ Σ (servers of age A × 2^(-A/2))


To simplify the math, we take the age in whole years: a server less than one year old has an age of zero, older than one and less than two has an age of one, etc. The term 2^(-Age/2) = 0.707^Age gives an approximate age-based performance gap compared to the most current generation of server. The idea is similar to Emerson’s CUPS idea, except that instead of setting 2002 = 1.0, SUE sets Today = 1.0. So, yes, you need to update SUE annually.
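A hypothetical sketch of the SUE calculation described above: total servers divided by their "current-generation equivalent" count, where a server of age A counts as 0.707^A of a brand-new server. The age assignments in the example call (ages 1 through 4 for the previous four years' purchases) are my assumption, chosen because they reproduce the 2.2 figure discussed below:

```python
# Sketch of SUE: total servers divided by their "current-generation
# equivalent" count, where a server of age A (in whole years) counts
# as 2**(-A/2) = 0.707**A of a brand-new server.
def sue(counts_by_age):
    """counts_by_age maps age in whole years -> number of servers."""
    total = sum(counts_by_age.values())
    equivalent = sum(n * 2 ** (-age / 2) for age, n in counts_by_age.items())
    return total / equivalent

# 500 servers bought per year for the previous four years (assumed
# ages 1 through 4):
print(round(sue({1: 500, 2: 500, 3: 500, 4: 500}), 1))  # 2.2
```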


Here’s how SUE might work: let’s say I’ve bought 500 servers per year for the previous four years. My baseline server population would be as listed in the table below. In the current year, I refresh my oldest servers with newer ones.


[Table 1: baseline server population by age]


Following the above rules, you end up with the result


[Table 2: resulting server population and SUE]


What does SUE mean in words? An SUE = 2.2 simply implies that you have 2.2 times the number of servers you actually need (based on current data center productivity, workload, etc.). Pretty straightforward.


In the above data from Bill, for instance, I could very easily (and approximately) assess an SUE.  Before the server refresh of 2010, it might have been 3.5 or higher. The current SUE is around 1.5, a good number but still with plenty of room for improvement (the company is still paying the operational costs of about fifty percent more servers than it needs). Of course, this is an approximate analysis to illustrate the point. Greater granularity of the data is needed to be more specific.


So what are some pluses and minuses of this approach? Well, before we launch into this, let’s be honest and admit nothing is perfect. Observations like, “you can game the system,” “it doesn’t account for x or y or z,” and “you need to think” are acknowledged (since they’re basically true for everything).



Here’s what I like about this approach:


  1. It accounts for system performance. This is the biggest factor driving energy efficiency in our industry. It’s the elephant.
  2. It’s quick. I can make an assessment of SUE in a couple of hours for almost any data center, without impacting operations.
  3. I can talk about the results in plain English. This is a big benefit when talking with management.
  4. It’s a “contemporary” metric, meaning it captures the time evolution of server performance relative to today’s productivity.
  5. It follows the scheme of PUE where 1.0 is ideal.


Here’s what you might not like:


  1. You need to think before you use it. It is not the answer to every question about data center performance.
  2. It’s approximate. That’s the compromise needed to avoid analysis paralysis. Tools such as the productivity proxy are on the roadmap and we can use them when they’re available. But this lets us get started today.
  3. It doesn’t account for the differences in system architecture or absolute performance. This can be accounted for, of course, but at a cost of increased complexity.
  4. It doesn’t account for data center operational efficiency or actual workloads. That route again adds complexity. One day all servers will be fully instrumented and that information will be readily available.
  5. It's a relative metric and doesn't allow comparison of different data centers. That's true, and it is actually true of PUE as well. I actually think this is a hidden strength.


I was chatting with Michael K. Patterson, a data center architect here at Intel, on this latter point, and he flashed on an idea that brings us even closer to what he calls the “holy grail” of a total data center efficiency metric. He proposed defining a “TUE” as the product of the two overheads: TUE = PUE × SUE.


Let’s take TUE for a test drive. Say my data center has a PUE of 1.8. Its TUE in the 500 server/year example before refresh would be 4.0. With a server refresh, TUE would improve to 2.7, reflecting not a gain in infrastructure efficiency, but a substantial efficiency gain at the data center level from the IT hardware itself.
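The test drive can be checked with a couple of lines, assuming TUE is simply the product of the facility overhead (PUE) and the server overhead (SUE); that reading reproduces the 4.0 and 2.7 figures quoted above.

```python
# TUE as the product of facility overhead (PUE) and server overhead (SUE);
# this reading reproduces the figures quoted in the example above.
def tue(pue, sue):
    return pue * sue

print(round(tue(1.8, 2.2), 1))  # 4.0 before the refresh
print(round(tue(1.8, 1.5), 1))  # 2.7 after the refresh
```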


This approach may help us understand data center-level improvements and make PUE an even more valuable metric. For instance, one common concern with PUE is that replacing older, inefficient servers with fewer efficient ones can degrade PUE, even if the data center’s energy use and work output have improved. This approach partially addresses that concern, focused purely on the servers’ influence on the data center.


This is not a complete picture of the data center, however, and a hidden assumption is that servers make up the bulk of the IT energy use. These and other factors could be added to the model, but at the expense of complexity and the risk of analysis paralysis. Meanwhile, this approach does let us start measuring the elephant today. Mike, Bill, and I intend to explore these ideas in coming weeks and, if they pan out, perhaps test the waters with the Green Grid.


To summarize, inefficient servers are potentially the biggest efficiency drain in your data center. To date, it has proven difficult to systematically attack the overall efficiency of the servers in your data center in the way PUE has so successfully put a microscope on infrastructure. The Server Usage Effectiveness idea above is a “zeroth order” approximation that overcomes the complexity and cost of detailed server data and lets you begin to size up the elephant in your data center: inefficient servers.



So tell me, what do you think? Is this a way to start the conversation about the elephant? Could this be useful in your data center today?

Do you have suggestions to improve this kind of assessment (while still keeping it easily measurable)? Mike, Bill, and I are definitely interested!

Mother Nature has caused plenty of destruction this Spring across the US, from tornadoes in Alabama to flooding in Tennessee. But it doesn’t take a natural disaster to cause a small business to lose access to its valuable data. According to a recent HP study, 32% of business data loss is caused by human error, 44% of data loss is caused by a mechanical failure, and 1 in 5 computers suffer a fatal hard drive crash during their lifetime.


Data loss can have dramatic impacts on a small business – from lost revenue to damaged reputation to reduced productivity from downtime. This makes it vital to choose a stable, secure IT platform for your small business – like a server based on the new Intel® Xeon® Processor E3-1200 product family. “Real servers” have several advantages over desktop systems used as servers. They are designed and built to operate 24x7, with robust cooling systems and power supplies. They support ECC memory, which automatically detects and corrects memory errors to prevent the dreaded “blue screen of death.” And they are validated and optimized to run server operating systems like Microsoft SBS 2011.


The Intel Xeon processor E3 family is an entry-level server platform that enables additional data security and backup features, like Intel® AES-NI, which speeds encryption and decryption, and Intel® Rapid Storage Technology, which simplifies data backup – and also sends an email notification to the server administrator in case of a missed backup. Best of all, servers based on this new processor family don’t cost much more than a desktop system.


Check out this quick video for a humorous depiction of the differences between an Intel Xeon Processor E3-based server and an older desktop system used as a server (I play the role of desktop). Cori and I hope you take the message seriously, though – even a small business needs a real server for data security!


Before you jump all over me for my cheesy title, I’d like to defend the comparison of MIC to that epic basketball star. MIC, or Many Integrated Core, processors are shaking up performance expectations for a unique set of highly parallel apps, in a manner that inspired Intel VP Kirk Skaugen to recently state: “MIC is the biggest innovation for data centers since the creation of Xeon.” Given what Xeon did for data center infrastructure…the proliferation of industry-standard servers, the change in data center economics…this statement provides an interesting look at the compelling nature of the opportunity MIC represents for a unique slice of data center computing.



But how much of an opportunity is it? To understand how MIC applies, one needs to understand that “not all workloads are created equal.” In a recent blog I discussed Intel’s announcement of micro servers, an emerging category of server in which data center managers scale capacity through highly redundant nodes, each processing unique work. Reliability is gained through redundancy of nodes, and efficiency optimization is a primary determinant of value. This is great for what we used to call “front end” apps like web serving, and it fits nicely into some solution providers’ requirements.


MIC, by contrast, is in some ways the exact opposite implementation from the micro server. Instead of scaling performance capacity through nodes, data center managers can utilize its high core count to drive the performance of highly parallel workloads: applications optimized for breaking up work into known elements across cores. Would you want to run a web server on MIC? Not optimal, as you’d drastically underutilize the cores. What does run beautifully are some specific apps traditionally focused on the technical computing arena.


I recently caught up with technical computing expert John  Hengeveld about the latest developments for this processor technology.   It turns out that folks outside the walls of Intel have taken interest in this  processor.  While there are other alternatives in this space, MIC has been  getting unique attention as it relies on the same Intel Architecture framework  for software support meaning that code porting from Xeon to MIC is a fairly  straightforward value proposition – and this simple fact is a big deal when  considering architecture choice for an organization’s next HPC deployment as it  speeds time to deployment and saves on engineering costs.  The University of  Texas recently announced that they’re dedicating significant resources to  drive optimization of multiple workloads for MIC as it continues on its  development path joining over a hundred optimization engagements.  While  MIC isn’t quite ready for primetime, the promise of the technology is lining up  early ecosystem partners to ensure that when it does hit the streets, solutions  are ready to take advantage of its unique capabilities.

Greetings from Interop 2011, here in Las Vegas.  For those of you not in the know, Interop is billed as “the meeting place for the global business technology community,” and it’s one of the IT industry’s major tradeshows. Technology companies from all over the world are showcasing their latest products here this week, and networking companies are no exception. Bandwidth, Gigabit, 10 Gigabit, iSCSI, Fibre Channel over Ethernet, I/O virtualization – all of these networking terms (and many more) can be heard as one walks through the exhibitor expo. Why? Because networking is an essential element of many of the technology areas being highlighted here this week, and people want to understand how new networking technologies will benefit and affect them.


So with that in mind, I thought I’d share a handful of the questions we on the Intel Ethernet team are hearing at this show. And I’ll answer them for you, of course.


What Ethernet solutions are available from Intel?

The Intel® Ethernet product line offers pretty much any adapter configuration you could want – Gigabit, 10 Gigabit, copper, fiber, one port, two ports, four ports, custom blade form factors, support for storage over Ethernet, enhancements for virtualization . . . the list goes on and on. We’ve been in the Ethernet business for 30 years, and we’re the volume leader for Gigabit Ethernet (GbE) and 10 Gigabit Ethernet (10GbE) adapters. We’ve shipped over 600 million Ethernet ports to date. I have to think even Dr. Evil would be happy with that number.



[Photo]

The latest and greatest: a display of our 10GbE adapters for rack and blade servers



Why do I need 10GbE?

Quite simply, deploying 10GbE helps simplify your network while reducing equipment needs and costs. A typical virtualized server contains up to 10 or 12 GbE ports and two storage network ports, often Fibre Channel. 10GbE allows you to consolidate the traffic of those dozen or more ports onto just two 10GbE ports. This consolidation means fewer network and storage adapters, less cabling, and fewer switch ports to connect that server to the network, and those reductions translate into lower equipment and power costs.
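The consolidation arithmetic above is quick to sketch; the port counts are the paragraph’s own example, and the bandwidth comparison is illustrative.

```python
# Port-count arithmetic from the example above: a virtualized server with
# a dozen GbE ports plus two Fibre Channel storage ports, consolidated
# onto two 10GbE ports carrying both LAN and storage traffic.
gbe_ports, fc_ports = 12, 2
ports_before = gbe_ports + fc_ports  # 14 adapters, cables, and switch ports
ports_after = 2                      # two 10GbE ports
print(ports_before, "->", ports_after)

# Aggregate bandwidth actually goes up while the port count drops:
print(gbe_ports * 1, "Gb/s of LAN ->", ports_after * 10, "Gb/s converged")
```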


What role will Ethernet play in the cloud?

Ethernet is the backbone of any data center today, and that won’t change as IT departments deploy cloud-optimized infrastructures. In fact, Ethernet is actually extending its reach in the data center. Fibre Channel over Ethernet for storage network traffic and iWARP for low latency clustering traffic are two examples of Ethernet expanding to accommodate protocols that used to require specialized data center fabrics. The quality of service (QoS) enhancements delivered by the Data Center Bridging standards are largely responsible for these capabilities.


As I mentioned above, converging multiple traffic types onto 10GbE greatly simplifies network infrastructures. Those simpler infrastructures make it easier to connect servers and storage devices to the network, and the bandwidth of 10GbE will help ensure the performance needed to support new cloud usage models that require fast, flexible connectivity.


How quickly is 10GbE growing?

10GbE is growing at a healthy rate as more IT departments look to simplify server connectivity and increase bandwidth. According to the Dell’Oro Group’s Controller & Adapter Report for 4Q10, 10GbE port shipments rose to over 3,000,000 in 2010, a 250 percent increase over 2009.


All the major network adapter and switch vendors are showing 10GbE products here this week. One of those companies, Extreme Networks, announced two new switches on Tuesday as a part of their Open Fabric Data Center Architecture for Cloud-Scale Networks. The BlackDiamond* X8 switch platform supports up to 768 10GbE ports per chassis, and the Summit X670 switches are available in 64- and 48-port configurations. Sounds like a big vote of confidence in 10GbE, doesn’t it?


Isn’t 10GbE still pretty expensive?

It might seem that way, but 10GbE prices have fallen steadily over the past few years. You’ll have to check with other companies for their pricing info, but I can tell you that at less than $400 per port, Intel Ethernet 10 Gigabit server adapters are less expensive than our GbE adapters in terms of cost per Gigabit.
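As a quick sanity check on the cost-per-gigabit claim: at the quoted price, a 10GbE port works out to $40 per gigabit. The GbE adapter price below is a hypothetical placeholder for comparison, since the post doesn’t quote one.

```python
# Cost per gigabit at the quoted 10GbE price. The GbE adapter price is a
# hypothetical placeholder (not from the post) used only for comparison.
tengbe_price = 400   # dollars per port: "less than $400 per port"
gbe_price = 100      # hypothetical GbE adapter price per port

print(tengbe_price / 10)  # 40.0 dollars per gigabit for 10GbE
print(gbe_price / 1)      # 100.0 dollars per gigabit for GbE (assumed)
```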


When will I see 10GbE shipping as a standard feature on my servers?

10GbE connections are available in many blade systems today, and integrated 10GbE LAN on motherboard (LOM) connections will be widespread in rack servers with the launch of Sandy Bridge servers in the second half of 2011. When 10GbE becomes the default connection on those volume servers, all of the benefits of 10GbE – simpler networks, higher performance, lower costs – will be free, included with the cost of the server. There’s a ton I could say in this answer, and I’ll go into more detail in a future post.


There you go - that’s a quick sample of some of the questions we’ve heard here at the show. Many of the questions we’ve heard this week, including those above, will make for some interesting blog posts. I’ll get to as many as I can in the coming weeks.

I just returned from my European trip and wanted to  summarize some of the key customer feedback that I received from a breakout  session on BI and Analytics.


I need to preserve the identity of the customers so I’ll  leave out names, and I’m sure that the messages are ones you have heard before,  but perhaps with some amplification.


Most of the attendees agreed that BI was an area of  investment for their businesses, agreeing with analyst trends like:



There is a lot of interest in in-memory databases, primarily either for real-time BI with response times in seconds to drive production, or for critical decision workloads where time = money. There’s also interest in using in-memory for predictive analysis, because the speed-up would let you consider more scenarios. All of those interested were experimenting now! They want it ASAP, even if they have to roll up their sleeves and work with the vendors.


There was a lot of discussion about the cloud as a BI resource. Obviously, if you are analyzing data that comes from the web (social media, online sales, etc.), it has some appeal, assuming you can deal with the data security issues. If your data doesn’t originate in the cloud, there is concern about the cost of moving it back and forth. We actually had one customer say that they had shipped hard drives, as that was a lot cheaper than shipping the data! (Maybe there is a long-term market for HDDs as interchangeable media?!?)


Also, there was a lot of excitement about cloud resources as a way to quickly set up business in emerging countries without conventional infrastructure. These are places where cloud works for traditional BI and DB until the datasets get too large, and it lets you ramp up a new business or expand into a new geography.


Lastly, there is some concern about consolidation in the BI  sector because the customers are enjoying the pace of innovation and some are  working with smaller companies.  Because  no one stack is perfect, they like to pick and choose.


It was a great session and always good to hear directly from  end users!

How important is uptime to you when it comes to the cloud? To achieve higher nines of availability, you need a structured plan to deal with the root causes of unavailability: operational errors, component failures, power outages, security threats, etc. In this post, I’ll discuss one aspect of security concerns: integrity.


How do you guarantee that the virtual machines (VMs) running on top of the hypervisor, and the hypervisor itself, are in a trusted and well-known condition? The short answer is simple: you don’t, at least not without a tamper-resistant root of trust that can attest the integrity of the chain of trust. Intel® Trusted Execution Technology (TXT) provides exactly that ability.


Threat Analysis


To understand the role of Intel® TXT and why a root of trust is so important, I’ll walk you through a brief description of the x86 architecture and how popular operating systems are designed.


Since the Intel 286, every instruction executed on the x86 architecture runs at one of four privilege levels, encoded in two bits. When an instruction executes with those bits at 00b (0 decimal), it runs at the highest privilege level, called ring 0 or kernel mode; when they are 11b (3 decimal), it runs at the lowest privilege level, called ring 3 or user mode.
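The two-bit encoding can be illustrated with a toy decoder. On x86, the low two bits of a segment selector carry the privilege level; this snippet only models that bit layout, it doesn’t inspect real CPU state.

```python
# Toy model of the two-bit x86 privilege encoding described above: the
# low two bits of a segment selector hold the privilege level (0-3).
def privilege_level(selector):
    return selector & 0b11

print(privilege_level(0b00))  # 0: ring 0, kernel mode
print(privilege_level(0b11))  # 3: ring 3, user mode
```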


In the early 1990s, on Usenet, Prof. Andrew S. Tanenbaum (author of Computer Networks) and Linus Torvalds (creator of Linux) debated the model on which an operating system should be built for the Intel x86 architecture. Tanenbaum defended the microkernel model, an elegant proposal from an implementation point of view, since the system would be distributed across the four privilege levels, as we can see in the diagram below.



Perhaps because of his programming background, Linus Torvalds defended the monolithic kernel, a more pragmatic model in which the kernel and device drivers share the same privilege context. From an operating system construction point of view this simplifies things, because it avoids the context switches caused by inter-process calls.


Time showed that Linus Torvalds was right: Linux became popular, while MINIX, developed by Tanenbaum, did not. David Cutler (Microsoft architect and author of Windows Internals) also believed in the monolithic kernel, which came to predominate. That doesn’t mean Tanenbaum’s concerns weren’t relevant; quite the opposite. From a security standpoint, creating security borders that isolate the kernel, drivers, services, and applications is an advance over Linus’s model. However, the monolithic kernel offered a faster development cycle due to its simplicity, and claimed better performance. I know it’s a controversial point, since there are examples of operating systems developed under the microkernel model with excellent performance, such as Cisco IOS, which arrives embedded in Cisco routers.


From a security view, in a monolithic kernel any vulnerability or malware loaded through device drivers, or any code running in the ring 0 context, may compromise the entire system. A microkernel model, with a minimal, non-extensible kernel footprint isolated in its own TCB (Trusted Computing Base), is more resistant to such attacks.


What TXT essentially does is bring the security advantages of the microkernel model to today’s platforms, with enhancements. In a cloud environment, Intel® TXT can perform a Measured Launch (ML) of the BIOS and hypervisor, and attest the integrity of each VM individually, as described in the following picture:


[Diagram: how Intel TXT works]


Trusted Compute Pools

Extrapolating this capability to cloud infrastructure allows us to develop the concept of Trusted Compute Pools, in which TXT-capable and TXT-enabled machines are grouped into a cluster of trust.


[Diagram: Trusted Compute Pools]

This capability is present in various hardware models, and you can use it with VMware ESX 4.1 U1, with Linux/Xen using the tboot code as described in this post, and also with HyTrust or Parallels, with more coming.

When it comes to scientific research, processing power is everything. And two European research organizations have found the processing power their demanding applications need in the Intel® Xeon® processor 7500 series.


Switzerland’s CERN openlab is a framework for evaluating and integrating cutting-edge IT technologies and services in partnership with industry, focusing on future versions of the Worldwide LHC Computing Grid* (WLCG*). Through close collaboration with leading industrial partners, CERN acquires early access to technology before it’s available to the general computing market segment. Recently, CERN openlab tested servers based on the Intel® Xeon® processor 7500 series for use with its Large Hadron Collider* (LHC*) and infrastructure services.

The tests showed that Intel Xeon processor 7500 series offered a stunning 3x performance improvement over the Intel® Xeon® processor 7400 series.

“We make our decisions based on price, power, and performance against our benchmarks and per server,” explained Olof Barring, head of facility planning and procurement for the CERN IT Department. “The Intel Xeon processor 5600 series met the criteria we look for across these areas.”

In France, the Institut de Mécanique Céleste et de Calcul des Ephémérides (IMCCE) researches and charts celestial mechanics and the dynamics and astrometry of solar system objects. It uses an application called TRIP*, an interactive computer algebra system specially adapted to celestial mechanics. To increase the performance of TRIP, IMCCE implemented HP ProLiant* DL980 G7 servers powered by the Intel Xeon processor 7500 series. IMCCE benchmarked the Intel Xeon processor 7500 series against its existing servers and used the Intel® C++ Compiler and Intel® VTune™ Performance Analyzer to optimize the TRIP code and trace any potential performance bottlenecks.

“The Intel Xeon processor 7500 series clearly made a significant difference to the speed and number of calculations we could carry out,” said Mickaël Gastineau, research engineer at IMCCE. “Furthermore, we gained more computing variables and also moved from batch processing to interactive processing.”

To learn more, read our new CERN and IMCCE business success stories. As always, you can find these, and many others, in the Reference Room and IT Center.


*Other names and brands may be claimed as the property of others.

Next week is the largest business technology show in North America – Interop Las Vegas 2011.  Intel will be raising the bar at Interop to show how the company plays a role in the cloud computing space.
Intel will feature:
  • Keynote speaker, Intel Executive VP Kirk B. Skaugen, Vice President and General Manager of Data Center Group, Wednesday, May 11, 9:00-9:25 am with the topic, “Driving the Data Center Infrastructure for the Next Decade.”  Kirk will discuss Intel’s Cloud 2015 Vision and how we are helping make this vision a reality with technologies that help IT and cloud providers build out secure, efficient clouds.
  • Intel’s Iddo Kadim will be teaching a class, “Client to Data Center Security in the Cloud: Realizing Benefits of End-to-End Platform Security,” on Wednesday, May 11, 1-1:40 pm
  • Additionally, Iddo will be on a panel on Monday, May 9, 1:00-1:40 pm, discussing “Building Private Clouds.”
Intel will showcase demos and technologies in their booth with the theme of Intel Cloud 2015 Vision, in the Cloud and Virtualization Zone booth #509 at Mandalay Bay.
Many have learned that the cloud has to be powered by data centers, and the majority of these data centers are powered by Intel® Xeon® processors. In the booth, Intel, HyTrust, and Dell will be demonstrating a new security innovation in Intel Xeon processors called Intel® Trusted Execution Technology. This technology helps to ensure that a server has not been tampered with at the hypervisor level or below, and this hardware integrity checking is a critical component of securely on-boarding workloads between clouds. Additionally, Intel will be demonstrating client-aware computing with Lenovo. Client-aware computing refers to technology that is aware of the resources at the client level (for instance, battery life, screen size, and processing/graphics capability) and is intelligent enough to take advantage of those resources at the client. At the booth, you will see how cloud providers can use this technology to deliver a richer end-user cloud service. Last, Intel Ethernet representatives will be at the booth talking about the latest Ethernet innovations that help solve I/O efficiency problems.
Intel will also have representatives at the booth talking about Intel® Cloud Builders—a cross-industry initiative aimed at making it easier to build, enhance, and operate cloud infrastructure. At the booth, we will be showcasing the Intel Cloud Builders Guide (eBook) demo that provides a plethora of case studies in the forms of videos, webcasts, podcasts, and whitepapers from many of Intel’s partners and customers who are taking advantage of the Intel Cloud Builders program.
The Intel booth will also show software that Intel has developed to help address security and identity issues for the cloud. This software includes:
  • The Intel® Expressway Cloud Access 360, the first solution suite designed to control the entire lifecycle of cloud access by providing single sign-on, provisioning, strong authorization, and audit.
  • The Intel® Expressway Service Gateway demo, used to abstract, secure, and simplify services, delivering a unique set of features tailor-made to integrate, mediate, and scale services in a dynamically changing enterprise application perimeter.
While at the booth, besides learning about Intel’s latest technologies for the cloud, be sure to enter to win a Kinect (powered by the Intel® Core i7 processor).  Additionally, we will be videotaping live interviews for Intel’s YouTube Channel for any Intel booth attendee – so learn about our technologies and give us your opinion about Intel’s latest technologies.
The week of May 9-12, there is only one place to learn more about the next Intel technology innovations that will power the Cloud – Interop Las Vegas 2011!  To follow Intel at Interop and more, follow us on Twitter: @IntelXeon or follow-up at

Small businesses can significantly benefit from a networked  infrastructure. Networks enable small businesses to enhance their collaboration  and share resources such as file and print services.


There are two types of networks: peer-to-peer and client-server. A  peer-to-peer network consists of interconnected client computers, such as  laptops or desktops, able to access each other’s resources such as applications  or files. A client-server network is a centralized network where one or more  computers (aka servers) act as dedicated resource providers to a pool of client  computers. These servers “serve up” 24x7 services to the client computers such  as file, print, email, and backup.




Peer to Peer Network

Client-Server    Network



There are advantages and disadvantages to both peer-to-peer and  client-server networks; however, in general, small businesses benefit more from  a client-server network designed to maximize your employees’ productivity  through enhanced  security, reliability, and  accessibility features. And client-server networks built with an Intel®  Xeon®-based server are the ideal choice for small businesses.


If you’re unsure about whether a client-server network is the way  to go, here are the


Top 10 Reasons  to Setup a Client-Server Network with an Intel Xeon-based Server:



24x7 Accessibility: With a peer-to-peer network, if a user needs to access a file residing on another computer, that computer needs to be powered on. This is not practical with client devices that are generally powered off when not in use. With a client-server network, the server is always on and always available, so files and applications can be accessed at any time.


Improved Collaboration: The server in a client-server network can act as a centralized hub for storing and sharing files. This configuration allows multiple users to access files and make changes to a single centralized copy. It also helps minimize version control issues that often arise from managing multiple versions of the same file.


Centralized, Client Backups: Servers can be configured to automatically  backup client computers and also restore data based on those backup images, in  the case of a client hard drive failure.


Remote Access: Servers support remote access, which enables employees, partners, and customers to access data on the server without being physically in front of the system.


Server  Backups: Intel Xeon-based servers support Intel® Rapid  Storage Technology, which enables the server to seamlessly store  multiple copies of its data on additional internal hard drives, so if one of  its hard drives fails, it can quickly recover the data with minimal system  downtime.


Enhanced Security: Servers can be configured to control access to the server’s data and other resources on a per-user basis. This ensures that only individuals with proper permissions can access specific data and applications residing on the server. And with Intel® Advanced Encryption Standard New Instructions (Intel® AES-NI), data passing between the server and clients can be encrypted to prevent it from being compromised in transit.


Better Client Performance: In a peer-to-peer network, clients also have to act as servers, “serving up” services to other clients on the network. This can negatively impact the performance of those clients. That computational burden is lifted by having a high-performance Intel Xeon-based server dedicated to supporting the clients.


Shared,  System-Wide Services: Servers provide shared, centralized services  for clients to access such as file, print, email, database, and web hosting.


Enhanced  Reliability: Intel Xeon-based servers support Error Correcting Code (ECC)  memory which helps protect your business-critical data and prevent system  errors by automatically detecting and correcting memory errors.


Business Growth: Peer-to-peer networks are limited in the number of users they support. A client-server network built with an Intel Xeon-based server is scalable to your needs, allowing room for growth as your business grows.



For your small business’ client-server network, the right choice  is a server based on the Intel® Xeon® processor E3-1200 product family. For  more details on the Intel Xeon processor E3-1200 product family, listen to my Intel Chip  Chat.
