
Purchasers of computer equipment, whether buying big data center servers or consumer PCs, face similar trade-offs. When the focus turns to the processor, it usually starts with performance-related criteria: perhaps a key application runs too slowly on older equipment and needs to be sped up. Then there are cost considerations, so performance at a given price point becomes a requirement. Recently, energy efficiency has become an important factor as well, making performance per watt and performance within a given power envelope key metrics.

What are the second-level factors when these considerations are relatively even?

Having worked with servers for the past 12 years, I can say that RAS (Reliability, Availability and Serviceability) has always been a foundation of data center purchasing decisions. For consumer purchases, RAS is not an important requirement: losing a bit while playing a game or a movie at home is not a big deal. For a data center, however, losing a bit in a bank account or on a manufacturing line has much bigger implications. So companies pay for error checking on memory, and vendors create ever more sophisticated RAS capabilities; today a mainstream data center computer has many of the features that were reserved for mainframes in past decades. Not all platforms are created equal from this standpoint, though. Incorporating these features requires an engineering investment to design, test and enable them properly. So RAS can provide a useful criterion for breaking a close purchasing decision.

I work in computer security, so it's no surprise that my vote is for security as a key purchasing criterion. As the population becomes ever more connected and more information is available electronically, there is more risk of encountering a malicious program and having information misused. Each year, hackers develop new attacks at an alarming rate, and organized crime and even everyday criminals are mugging our computers. Since college, reading the police blotter has been a guilty pleasure of mine, and over the past several years the trend is clear: identity theft is on the rise. In some neighborhoods, more identity thefts are reported than physical thefts. Sure, the crime distribution varies widely across neighborhoods, but without much research it's a safe bet that it is growing by leaps and bounds. And talk to any major company's security group and they will tell stories about the computer attacks they are subjected to.

New technologies that protect our data and our software are becoming important features, and these features often vary across product lines, providing differentiation to tilt a close buying decision. In many ways, security and RAS are related: if a server is down due to a malware attack or breach, then availability suffers. So RAS and security both provide buying criteria for when traditional purchasing considerations are relatively equal.

What's been your experience buying data center servers?  What are your major criteria and tie-breakers?

As I discussed in my blog two weeks ago, one of the most powerful technical strategies in turning the tide on data center energy consumption has been the drive toward more proportional scaling of platform power with workload.



Sustained Efficiency Improvement



Performance and power consumption results are based on certain tests measured on specific computer systems.  Any difference in system hardware, software or configuration will affect actual performance.  Configurations: Two-socket Systems, Test Results for SPECpower_ssj2008, Testing by Hewlett-Packard.




Per the trend shown in this figure, while successive generations of platforms have delivered greater and greater performance, they have also delivered lower active power consumption. According to this data, the energy efficiency at 30% platform load has doubled every 16 months since 2004.


That is an astounding result when you put it in contrast to other industries. For example, the automobile industry increased fuel efficiency about 7% per year from 1978 to 1984 and then ceased to improve or innovate for over two decades.


The trend to proportional computing has produced real energy savings. Generally the idea behind proportional computing is that a platform should consume energy proportional to the work it is doing. When it is doing maximum work it will consume maximum power, and when it is doing no work it should consume (ideally) no power.




Improving platform scaling has taken us down an impressive road, but we face some serious challenges. These challenges are most easily recognized by looking inside the platform (or, to continue the car analogy, under the hood!) at the power scalability of its components.


To highlight this, let’s define Component Power Scalability = 1 – (Component Idle Power)/(Component Max Power). A perfectly proportional component would have a Power Scalability of one.
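As a quick illustration, the metric is simple to compute. Here is a minimal Python sketch; the component wattages are hypothetical examples, not measurements from this post.

```python
def power_scalability(idle_watts: float, max_watts: float) -> float:
    """Component Power Scalability = 1 - idle/max.

    1.0 means perfectly proportional (zero idle power); 0.0 means the
    component draws full power even when doing no work."""
    if max_watts <= 0:
        raise ValueError("max_watts must be positive")
    return 1.0 - idle_watts / max_watts

# Hypothetical components: memory-like parts with tiny idle draw scale
# almost perfectly, while a PSU-like part idling at 60 W of a 100 W peak
# scales poorly.
print(round(power_scalability(5.0, 80.0), 2))    # 0.94
print(round(power_scalability(60.0, 100.0), 2))  # 0.4
```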


Here is a graph of a model system’s components. Dividing the chart into quadrants, the upper left is best and the lower right is where the most work is needed.




Note that memory power scaling is nearly ideal: it has very low idle power and a Power Scalability of over 90%. In this particular system the CPUs have the highest idle power of the ingredients on the platform, but also have scaling above 80%.


The components with the lowest scalability and biggest impact on proportionality are readily apparent as those scaling much worse than 50% with high idle power. For instance, the power supply in this example has quite good efficiency at peak load (over 80%), but a scalability below 50% and a significant contribution to idle power.


Hard disk drives used in this platform are kept spinning at high speed even when the platform is at idle to reduce transaction latency and speed up performance. The cluster of points near the lower left each contributes a small amount to platform idle power, but together form a challenging barrier to improving proportionality.


Some technology solutions are becoming available to solve the above problems. For instance, improved fan speed control algorithms can improve fan power consumption scaling. Solid State Drives can reduce the amount of power consumed by storage while offering improved performance.


Within Intel, we are continuously focused on improving the scalability of our processors and silicon. We need industry innovation to continue improving the scalability of the rest of the platform. PSU scalability stands out as the highest priority. Fan power scalability is another key opportunity. Beyond that, the “non-trivial many” components on the system will ultimately become limiters.


Does this proposal for looking "under the hood" make sense to you?


What and how else should we look under the hood?


What would you call the Component Power Scalability?


Input and comments welcome!


Together, we can “Turn the Tide.”

Yesterday, we had two of the Entry systems for Intel Hybrid Cloud hit our dock in Folsom.  We quickly opened one up and Jason walked us through the process.



While this is good, Jason and I have set up boxes a number of times, so we asked our newest team member, Monica, to do the same, following the Duck N Bunny guide to see if it would work.  Let me also introduce Monica to the world: she has just moved onto my team and will be driving engagements with MSPs around the world.  She is focused on making the process easier and faster, and on making doing business with Intel simple.  If you see her out at events or online, please say hi!


Here she is with the IHC box, and when I asked her afterward for her opinion, here is what she said:


"Intel Hybrid Cloud team obviously thinks of everything.   Following the set-up guide that was included in the box was just the way I like things to be:  Duck & Bunny simple! "   





If you are interested for your MSP, please go to and register.


An IT Guy On A Mission

Posted by cklatik Sep 28, 2010

He needs no introduction; he is “IT Guy,” and funny man Ian Thomas.  Why doesn’t he need an introduction?  Because he already made his own!  Right here on the Server Room, and on YouTube.


Just before IDF 2010, Ian set out on a mission to find the best of IDF and deliver it back to his viewers.  Before leaving on his mission, he acknowledged some questions on the total cost of ownership and return on investment of the CPU cores in data center servers, and their relation to efficiency, number of cores, and price.  In this video Ian hopes to earn a new nickname, possibly Corey… well, sorry Ian, I’m already Cory.  How about “hard-core,” because your knowledge of CPU cores is extreme!






Although IDF is over, the IT guy Ian Thomas has more videos coming!  Follow him on Twitter for the latest and greatest posts, including his latest video post with a Smart TV walkthrough and interview with Geordi err, I mean LeVar Burton.

The Intel Capture Your Experience IDF 2010 contest is now over!  We tasked participants of the contest at IDF to capture the data center experience at its best.  We made our decision on the judged winner; it was then the audience’s turn to make the decision.  And, thanks to all those that voted we have a winner. YouTube has kindly (automatically) tallied the liked videos on our playlist, and we have the winner of the Boxee box, TV, and Intel Atom based Home Server.  So who is it already?





Allen Light, you Captured Your Experience at IDF 2010, and according to the YouTube masses you get to take home that Ultimate Home System!


The contest turned out great!  For that, our gratitude and huge kudos go to everyone who participated in the contest and voted on the IDF 2010 Capture Your Experience videos.  To make sure you know what’s coming up, follow us in real time on Twitter @IntelXeon.

If you were unable to attend the IDF class on Intel Hybrid Cloud and want to hear what we said, I recommend checking out the audio-enabled presentation that the IDF team created for us.


In this session we cover:
• Intel® technology solutions for providing a subscription-based IT managed service to small businesses
• The current ecosystem and hardware providers
• Integrating software onto Intel® platforms
• A demonstration of Windows* Small Business Server (codename Aurora) integration with a future Intel® server platform (codename Bromolow)


Listen to Matt Clarin talk about what CPR has been able to do utilizing the new platform.  Jason discusses how ISVs can join us, while For goes over the future roadmap and Aurora integration.


Here's one of my favorite foils from the presentation.




Josh Hilliker

If you have heard of the Intel Hybrid Cloud but have not seen any details on what it is, how it works and who it's for, then this video will help.  The team got together to publish this quick video on the what, how, who, and why of the Intel Hybrid Cloud.





If you are interested in hearing more after the video, check out


The team is working on shooting more how-to videos and a recap of our IDF (Intel Developer Forum), so stay tuned for more.  Also, please let us know if there are any areas of the Intel Hybrid Cloud you would like to hear more about.


Thank you,


Josh Hilliker



At IDF 2010 in San Francisco last week there was one place to find everything Xeon, High Performance Computing (HPC), most anything cloud, and everything you needed to know about mission critical solutions for data centers.  Yes, that is right, the Data Center Zone.



Whether you were looking for cloud, power efficiency, security, Xeon, or just to learn something new, you found it at the Data Center Zone. Staff were on site to answer all your questions, resident data center dudes if you will.  Two in particular, Jon Markee and Rob Kypriotakis, were on site; check out their interview with Cisco to see what they had to say.


While IDF raged on, the Data Center Zone was a technology showcase, host to ongoing demos and many classes.  Some of the demos to highlight are:


HPC: High Performance Computing

  • Flexible Datacenter in the Cloud

  • Intel® Many Integrated Core Architecture for Highly Parallel Computing


  • End to End Integrated Cloud

  • Intel Intelligent Power Node Manager

  • Security in client aware usages

  • Open Scalable Cloud Storage
    …Abridged, read this post for full list and this recap on the cloud computing events at IDF for more info …

Mission Critical

  • Better data protection with broader encryption

  • Power savings, even on the big servers


Of the classes ready for the taking, the main tracks and some available courses were:


Architecture: Next Generation Intel® Microarchitecture


Cloud Computing: Evolution of the Data Center

The largest track, some courses are:

  • Turning the Tide on Data Center Energy Consumption

  • Using Intel Technologies to Provide Managed Services Solutions for Small Business Customers

  • Securing today’s Data Centers against Tomorrow’s Attacks



Eco-Technology: Environment and Productivity at its Best with Energy-Efficient Products and Technologies


High Performance Computing


Next Generation of PCI Express Technology:  Usages and Technology Features


Energy, Thermal and Mechanical Technology


SuperSpeed USB: Advancements in a Ubiquitous Technology


Solutions: Connecting the technology to the economics and requirements of business


For more info on these classes and more check out the IDF 2010 catalog for available recorded courses.

At IDF 2010 in SF, we had the opportunity to demonstrate a nice use case for HPC in the Cloud: when you have tapped out your local cluster resources, provision your excess work to the Cloud. 10GbE iWARP was used for the local cluster as the performant low-latency fabric. Mainstream 10GbE was used in the Cloud, as it provides the dynamic virtualization and unified networking features required for virtual data centers. See the class I presented on HPC in the Cloud networking; it covers some additional usage models and some key features required for cloud networking.



At IDF 2010, many people were on the lookout for him. Why?  Because if you found a person wearing the “Data Center Dude Approved” t-shirt, you could have been one of six to win a 160 GB Intel Solid State Drive (SSD).  Not everyone received an SSD, but great conversations were had no matter what, and it was great to meet everyone!  Thanks for finding us, and for the great chats. A few of the winners are:


  • Kevin Vuong
  • Charles Wang
  • Chaim Gartenberg


As for the Data Center Dude himself, that is Greg Wagnon.  Greg hosts a video series focused on connecting viewers with the technologies, and the experts behind those technologies, at Intel.  Through demonstration and discussion with those experts, Greg introduces everyone out there to the people who make it happen.  You can find out more on the Chip Chat interview with Greg and by checking out the Data Center Dude series at the Server Room.


For more news, and updates remember to follow us real time on Twitter @IntelXeon.

What is 100 to you? One hundred centimeters make a meter, and one hundred years make a century.  How about one hundred podcasts, what does that make?  To Allyson Klein, Director of Technology Leadership Marketing at Intel, blogger, podcaster, and host, it makes Intel® Chip Chat.


Chip Chat is a podcast series covering a variety of subjects, highlighting the rock stars of technology and generally the brightest minds in the industry.  Since it started in 2007, the series has focused on bringing listeners one-on-one with the men and women behind the innovations and inspirations that make the future of computing.


As Allyson has already said in her post, she got to talk with executive vice president David “Dadi” Perlmutter.  She also touched on the fact that she met with other technologists and technology rock stars in attendance, a preview of that line-up is:

Intel Chip Chat in Session at IDF 2010:
Bob Beauchamp, Carl Hansen, Justin Rattner, Randy Chan, Genevieve Bell, Tom Stachura


Congratulations to Allyson and Intel Chip Chat!  To catch this great line-up of interviews and more, check out Chip Chat at its home on, and on iTunes.  In episode 101, Allyson sits down with Sunil Ahluwalia from the LAN Access Division to discuss Fibre Channel over Ethernet.  Keep up on all the Chip Chat news on Twitter, Facebook, and of course at the Server Room.

ETA 10/26/10: New Patch Changes.


Today, Oracle announces the addition of Intel® AES-NI acceleration into Oracle 11g Release 2 ( Advanced Security Transparent Data Encryption (TDE) on Intel® Xeon® 5600 series processors! TDE capability has been around for some time; it makes encrypting sensitive data in application table columns or application tablespaces seamless, as the cryptographic operations are performed by the database kernel. This, and the built-in key management, dramatically lowers the cost and complexity of encryption. What is new is the Intel® AES-NI-based encryption acceleration using the Intel® Performance Primitives (IPP) crypto library!


How do you take advantage of TDE and AES-NI acceleration?


Oracle Advanced Security is an option which can be purchased with the Oracle Database Enterprise Edition 11g Release 2. With Patchset 1 (Oracle Database release, Intel® AES-NI is automatically detected and used by default for *decryption*. In order to enable hardware acceleration for *encryption* in TDE tablespace encryption, patch  needs to be applied. TDE column encryption does not currently support hardware-based cryptographic acceleration.


Once an external security module (Oracle Wallet or Hardware Security Module (HSM)) and master encryption key is created, application tablespaces can be defined as “encrypted”. No changes to the application are required.


TDE does not provide access control while data is inside the secure perimeter of the database (as long as the wallet is open, and hence the master encryption key is available to the database). The database itself provides complementary access controls through object and system privileges granted directly or through roles, and fine-grained access controls implemented with Oracle Label Security, Oracle Database Vault, or Oracle Database Firewall.

As soon as the data leaves the database (on a disk that is exchanged for maintenance, in Data Pump export files, or in RMAN backups), those access controls are no longer enforced, but the encrypted data is separated from the TDE master encryption key, which resides on the server in the Oracle Wallet or an HSM.

The password that encrypts the Oracle Wallet can be split between multiple custodians, so that the full password is not known to any one of them. If the TDE master encryption key is stored in an HSM, it never leaves it unencrypted, and HSMs themselves usually have extremely powerful, smart-card-based access controls and separation of duty in place to ensure extreme key security. So the wallet or HSM password can be kept secret from the DBAs that manage your databases, but it does not have to be; it's up to your individual security policy.


An outline of simple steps to use TDE is as follows:

First, you will need to create an Oracle Wallet using the SQL*Plus command line or Oracle Enterprise Manager. Once the Oracle Wallet is created (the TDE master encryption key is automatically added), you can open it for use against your database. When you choose to encrypt application table columns with sensitive content, or entire application tablespaces, a table or tablespace encryption key is automatically created that encrypts and decrypts the data. These keys are encrypted (wrapped) with the TDE master encryption key.
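The two-tier key hierarchy in those steps (the master key wraps the table/tablespace keys, which in turn encrypt the data) can be sketched in a few lines of Python. This is purely conceptual: a toy SHA-256 keystream stands in for the AES that TDE actually uses, and none of it reflects Oracle's real on-disk formats.

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher (SHA-256 in counter mode) standing in for AES.
    # XORing twice with the same key round-trips the data.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Two tiers: the master key never encrypts table data directly.
master_key = os.urandom(32)      # lives in the wallet / HSM
tablespace_key = os.urandom(32)  # created per encrypted tablespace

wrapped_key = keystream_xor(master_key, tablespace_key)      # stored with the database
ciphertext = keystream_xor(tablespace_key, b"salary=95000")  # stored in the tablespace

# Reading data: unwrap the tablespace key, then decrypt the row.
assert keystream_xor(keystream_xor(master_key, wrapped_key), ciphertext) == b"salary=95000"
```

Note how re-keying the master key only requires re-wrapping the tablespace key, not re-encrypting the data itself, which is part of what makes the built-in key management inexpensive.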


Figure 1 below shows the speedup with AES-NI when encryption and decryption workloads were run against the Oracle 11g database. Using the IPP crypto library with AES-NI, the speedup can be as much as 10x for encryption and 8x for decryption in AES-256 TDE CBC mode. Figure 2 shows the speedup in AES-128 TDE CBC mode. Decryption usages are more common, as data is read over and over versus being added to the database, and CBC decryption can potentially be parallelized more than encryption. Full details (pdf) can be found in our latest white paper.




Figure 1. Oracle Advanced Security with Intel® AES-NI in AES-256 TDE CBC mode.




Figure 2. Oracle Advanced Security with Intel® AES-NI in AES-128 TDE CBC mode.


So, TDE is for data at rest. But wait, Oracle 11g database also provides encryption for data in transit. The database can be configured to reject connections from clients that do not encrypt data, or allow unencrypted connections. Network security can be configured using Oracle Network Configuration administration tool.



Peter Wahl, Senior Product Manager, Oracle Database Security

Matt Lanken, Member of Technical Staff, Oracle
Beth Marsh-Prime, Database Performance Engineer, Intel

Rod Skinner, Database Performance Engineer, Intel


1. Oracle with TDE, time taken to insert 1 million rows 30 times with AES-256 CBC mode into an empty table on Intel® Xeon® X5680 processor (WSM, 3.33 GHz, 36MB) optimized with Intel® Performance Primitives crypto library (IPP) vs. Intel® Xeon® X5560 processor (NHM, 2.93 GHz, 36MB) without IPP. Time measured is per 8KB of data and shown as encryption processing rate in MB/CPU second.

2. Oracle with TDE, time taken to decrypt a 5.1 million row table with AES-256 CBC mode on Intel® Xeon® X5680 processor (WSM, 3.33 GHz, 36MB) optimized with Intel® Performance Primitives crypto library (IPP) vs. Intel® Xeon® X5560 processor (NHM, 2.93 GHz, 36MB) without IPP. Time measured is per 8KB of data and shown as decryption processing rate in MB/CPU second.

3. Same configuration as in footnote 1. TDE mode is AES-128 CBC.


4. Same configuration as in footnote 2. TDE mode is AES-128 CBC.

Intel Developer Forum (IDF) 2010 came to a close in San Francisco last week, and what an experience!  Thanks to Cisco for providing their Flip MinoHD camcorders, and to the participants in the Capture Your Experience contest, there are fantastic videos on YouTube for everyone to see and share the experience!


Let The Voting Begin, Round 2!

We’ve reviewed the submissions and picked our favorites, and boy, is it hard to pick a favorite among all these great videos!  Now it is your turn to decide. Which video captures the IDF 2010 Data Center Experience best? Go to YouTube and vote thumbs up or down for your favorites by September 24 (9/24) to decide who is truly “data center dude approved!”



Our Judge Award Winner

As for our favorite, you may ask?  Who is the judged winner?  Well, as hard as it was, we had to do our part and pick one, and that person is Chaim Gartenberg, with his straight-to-the-source submission.



He is the lucky judged winner walking away with the Ultimate Home system consisting of a 55 Inch TV and Boxee box, and a home server with an Intel Atom processor inside.


We’ve made our decision, now it’s your turn: remember, go to YouTube and vote thumbs up/down by September 24 (9/24) for the best video that captures the data center theme from IDF 2010!

The Power Usage Effectiveness (PUE) metric is an excellent metric for overall data center efficiency. As a result of standardization and clarification of the metric by The Green Grid, it has become the de facto metric for data center efficiency.




In a recent whitepaper, TrendPoint Systems advocates breaking the Total Facility Power consumption into smaller sub-components and proposes a “Micro PUE” metric.


While the intent (improving overall data center efficiency by focusing on subsystem efficiency) is a good one, creating the term “Micro PUE,” in my opinion, just muddies the waters by giving a new name to something that has already been standardized into the industry lexicon.


As Michael K. Patterson pointed out to me, the original whitepaper by The Green Grid on Power Usage Effectiveness (PUE) has already defined sub-system indicators.


The Cooling Loading Factor (CLF) and Power Loading Factor (PLF) are defined as:

  • Cooling Load Factor (CLF) is the total power consumed by cooling equipment (including chillers, cooling towers, computer room air conditioners (CRACs), pumps, etc.) divided by the IT Load.


  • Power Load Factor (PLF) is the total power dissipated by the Power Distribution System, including switch gear, uninterruptible power supplies (UPSs), etc. divided by the IT Load.






PUE = Total Facility Power / IT Load = (IT Load + Cooling Power + Power Distribution Power) / IT Load

So, with the CLF and PLF definitions above, you get the relationship

PUE = 1 + CLF + PLF


Obviously, understanding CLF and PLF is a prerequisite for any improvement plan. And as TrendPoint indicates, tools are available to do this.
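For a concrete feel for what those tools report, note that the Green Grid relationship PUE = 1 + CLF + PLF is trivial to compute once the two factors are measured. A minimal sketch with hypothetical numbers:

```python
def pue(clf: float, plf: float) -> float:
    """PUE = Total Facility Power / IT Load.

    Since total facility power = IT load + cooling + power-distribution
    losses, and CLF and PLF are each normalized to the IT load, this
    reduces to 1 + CLF + PLF."""
    return 1.0 + clf + plf

# Hypothetical facility: cooling draws 50% of the IT load and the
# power-distribution chain dissipates another 15%.
print(round(pue(clf=0.50, plf=0.15), 2))  # 1.65
```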



While sub-system efficiency is the right engineering approach to improving data center PUE, inventing the new term “micro PUE” just muddies the waters in terms of getting to an agreed industry lexicon for discussing the key factors affecting data center efficiency.


As a disclaimer, I am the Intel alternate member of the Board of Directors of The Green Grid. These comments reflect my own opinion, and may not reflect those of The Green Grid or Intel.


Intel has officially launched its newest technology site for Intel Intelligent Power Node Manager!




The site reviews the major use case models that are provided by Intel Intelligent Power Node Manager Technology:

  1. Real-time monitoring of power and inlet temperature per node

  2. Increased compute density

  3. Dynamic balancing of resources/workloads based on power decisions (also called power optimization)

  4. Improved business continuity by mitigating power risks and thermal events


We have posted some familiar white papers, case studies and other pertinent information as well. Please take a look at the new site and provide feedback here.


Tell us your power concerns and how we can help your company with Intel Node Manager technology.

I wrote last week about prepping for my 100th episode at the Intel Developer Forum.  I'm winding down my IDF, and what a busy IDF it was.  Chip Chat was on the scene interviewing technologists for episodes coming out over the next few weeks (hints on content: unified connectivity for the data center, user experience, and mission critical computing, oh my!).  But the featured conversation was my chance to sit down with Dadi Perlmutter and discuss his history in the computing industry as well as how he sees the future.  Dadi runs Intel's IAG group (IA stands for Intel Architecture); basically, Dadi is responsible for the delivery of all of our products. And to put into context how much influence Dadi has had on computing, he's the guy who was behind Centrino.  You've probably heard of that.  He also, it could be argued, was the guy behind Atom... kind of a big deal as well.  Dadi is a guy who thinks bigger than most and has the chops to back up these big thoughts with society-transforming innovation.  I was very curious to hear about what inspired these designs as well as what he thought was coming next.


As I mentioned, we had a lot of great conversations at IDF.  If you'd like to ensure you don't miss an episode sign up for an RSS feed of the program, follow us on Twitter @intelchipchat and on Facebook at  And feel free to comment on the episodes - I would love your feedback.

A critical element of the Data Center Energy Efficiency landscape is the role of government regulations and standards in establishing efficiency goals for servers and data centers. Intel has been an active participant in working with the Environmental Protection Agency on its ENERGY STAR program for Servers and Data Centers.



On our last day here at IDF, in session ECOS004, Henry L. Wong, a Senior Power Technologist at Intel, reviewed the worldwide landscape of energy regulatory requirements and provided insight on how to comply and what is next. For servers, an ENERGY STAR program has been in place since 2009.  The ENERGY STAR requirement sets goals for system idle power and power supply efficiency based on the Climate Savers Computing Initiative standards.



Henry makes an effective argument that idle power is a less-than-ideal metric for server energy efficiency. For instance, a server with twice the performance and the same idle and active power should be considered twice as efficient (it will get twice the computation done for the same energy). However, a metric based purely on idle power would not recognize this efficiency improvement.
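Henry's argument is easy to see with numbers. A minimal sketch, using two hypothetical servers (illustrative figures, not benchmark data):

```python
def efficiency(ops_per_sec: float, avg_power_watts: float) -> float:
    # Work per unit energy: operations per joule (= ops/sec per watt).
    return ops_per_sec / avg_power_watts

# Server B delivers twice Server A's throughput at identical power draw,
# so it completes the same work for half the energy.
server_a = efficiency(100_000, 250.0)
server_b = efficiency(200_000, 250.0)
print(server_b / server_a)  # 2.0, yet an idle-power-only metric rates them the same
```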



Henry reviewed efforts underway with SPEC and the EPA to develop a Server Energy Efficiency Rating Tool (SERT), which measures both platform power consumption and computational efficiency. The tool is based on learning from SPECPower_ssj2008 and it has industry support from The Green Grid, CSCI, and other groups.






When available, the SERT tool will be a huge advance for measuring server energy efficiency. The performance gains achievable in the computing industry are in many ways unique in their ability to improve productivity and efficiency in consumer and industrial activities. Increasing energy efficiency, which provides more output (compute performance) for the energy consumed, is generally recognized as a key solution to the environmental and economic issues regulatory bodies face today.


Since servers are industrial machines purchased to do computational work, it's refreshing to see that “work output” may finally be reflected in official metrics that value efficiency.

It used to be that you could put a bunch of servers into a room, hook up some air conditioning units and some perforated floor tiles, and you were good. Uniformity was bad. Analysis was weak. But it worked. Well, as servers have become more powerful and, more importantly, as the complexity and quantity of the tasks we ask them to do have grown, that approach no longer works.


Now people commonly worry about total cooling capacity, stranded power, hot zones, cold zones, etc. All “luxuries” we can’t afford in our data-driven world.

Many believe the evolution point of the Energy Efficient Data Center is “smart”: a data center that can sense and manage, with high precision, its power demand, cooling capacity, temperature, and other environmental and operational variables.



I am sitting in the excellent IDF session by David Jenkins and Todd Christ on Intel Intelligent Node Manager. In the power-constrained data center, eking out every margin becomes an extremely cost-effective endeavor. (Why? Because your alternative is to invest in a new data center!) Node Manager builds intelligent nodes, which are required to build the ultimate smart data center.




The current Node Manager reports system power and temperature and caps system power.  CPU and memory subsystem reporting/capping, as well as boot-power and OS-failure capping, come with the Sandy Bridge generation of processors announced at IDF 2010. To accelerate time to market, a Data Center Manager SDK is available.



The principle of operation is straightforward (in concept). The external management system sets policy and monitors compliance. The heavy lifting is all done within the Node Manager, which monitors, decides, and acts to control the platform power. In the newest generation (2.0) of Node Manager, it will report not only platform power level and inlet air temperature but also CPU and memory power.
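That monitor-decide-act loop can be sketched in a few lines. To be clear, this is a hypothetical illustration of the control concept, not Node Manager's actual firmware logic or any real API; `p_state` here stands for a CPU performance state, where higher numbers mean slower and lower power.

```python
def capping_step(measured_watts: float, cap_watts: float,
                 p_state: int, max_p_state: int) -> int:
    """One iteration of a monitor-decide-act power-capping loop.

    Over the cap: step to a deeper (slower, lower-power) P-state.
    Comfortably under the cap: step back toward full performance."""
    if measured_watts > cap_watts and p_state < max_p_state:
        return p_state + 1  # throttle down
    if measured_watts < 0.9 * cap_watts and p_state > 0:
        return p_state - 1  # restore performance
    return p_state

# A platform 20 W over its 300 W cap gets throttled one step.
print(capping_step(measured_watts=320.0, cap_watts=300.0, p_state=0, max_p_state=5))  # 1
```

The 10% dead band below the cap keeps the loop from oscillating between throttling and restoring on every sample.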



As discussed in David and Todd’s slides, many end users, OEM, OS, VMM, and console vendors are adopting this capability in the newest generation of platforms.



It sounds like a great way to get some smarts into your data center!


Cloud at IDF - Day Two

Posted by megan_mcqueen Sep 14, 2010

Day two of IDF 2010 has drawn to a close and cloud technology was once again well represented.


Intel IT gave a well-attended session that looked at how the group has integrated and implemented cloud technology. Intel and partner Microsoft provided an informative session on increasing cloud reliability with platform RAS. There was also a hands-on lab providing detailed instructions on setting up a basic cloud computing environment.



The day concluded with Intel experts talking about the vision of the cloud in the next five to ten years. Raejeanne Skillern, Billy Cox, Dylan Larson, Andy Tryba, and RK Hiremane provided insight into how to get smarter about the way we build infrastructure in the future.




Don't forget that session content is posted online at!

It's hard work to keep a small business going in an economic downturn. How do you find new customers and keep current ones happy while juggling cash flow challenges and employee schedules?  The last thing on your mind is probably an IT upgrade. But there are good reasons to think about making a technology investment, especially in light of proposed tax deductions for new equipment purchases.


As marketers, we're here to tell you that an efficient IT environment can help you in several ways, from increased productivity to better data security to easier maintenance of your hardware investment. But what do improvements in productivity, security, and maintenance look like in the real world? We'll go into more detail in future months, but first we want to introduce ourselves to you – and give you some insight into what we see as key IT challenges for small business owners.


So where are we getting this information? Market research, yes – but more importantly, stories from family and friends who run small businesses, some of our smaller customers, and even our own work experiences outside of Intel. (In case you're wondering who we are, we'll go ahead and establish our "geek cred" with this video of us talking about why ECC memory is important. More on that later.)


We hear a similar tune from everyone we talk to: running a successful small business means doing more with less, and that includes having an IT environment that can handle anything you throw at it. Want to run applications like Microsoft Exchange Server? Want to share files and work collaboratively? Want to back up your data? Want to access your email remotely? Want to check the status of your computer systems even if you're out of the office? That dusty old desktop system you've got stashed in a back closet won't cut it. You need a "real server" to run your business.


Wait, what's that? You can't afford a server?  Well, can you afford to lose sensitive customer data? Can you afford to be offline for hours (or days) at a time, with no way to place new orders or close out existing ones?  Over the next few months, we'll be discussing the benefits of a “real server.” And we want a discussion. Give us feedback on the technology needs – and challenges – of your small business. Our theory: older equipment is slower, costs more to maintain, and can result in significant downtime. In 2010, if you're not online, you're missing customers – and missing opportunities to increase your revenue.  What do you think?


Encrypt the World

Posted by Jeff_C Sep 14, 2010

In preparing for a visit to some customers, I was reviewing the instructions needed to make sure AES is the preferred cipher used in SSL/TLS, and playing around with a test server to confirm things were working.  These secure layers allow the encryption of information we now take for granted when completing online purchases and viewing our bank accounts.  It was important to know how to ensure AES is the preferred cipher, since new software like Microsoft's Windows Server 2008 R2 and most new Linux distributions will see a huge performance gain in encryption on Intel's new processors only if they choose this cipher.  The same technology is also available for encrypting data at rest in databases.
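As a concrete example, here is one way to pin a TLS endpoint to AES cipher suites using Python's standard ssl module. This is a minimal sketch (the post doesn't name a specific tool); "AES" is a standard OpenSSL cipher-string keyword, and the exact suites it selects depend on the OpenSSL build.

```python
# Restrict a TLS context to AES-based cipher suites so that AES-NI-capable
# processors get the accelerated path.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_ciphers("AES:!aNULL:!eNULL")   # prefer AES, drop unauthenticated suites

aes_suites = [c["name"] for c in ctx.get_ciphers() if "AES" in c["name"]]
print(len(aes_suites), "AES suites enabled")
```

The same cipher-string syntax appears in Apache's `SSLCipherSuite` directive and OpenSSL's own configuration, which is closer to what a server administrator of that era would actually edit.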


I was also reviewing some news clippings I had saved about some of 2009's biggest computer security stories.  Reading the responses to the NARA incident, which potentially exposed part of a database containing private information on 76 million servicemen, reasonable people asked how an organization could hold that concentration of information and not have it encrypted.  After the fact of sending a malfunctioning hard disk out to a non-government organization, it seems obvious that encrypting the data would have been a good idea.  But why wasn't it encrypted?  Sadly, the thinking behind that decision is not unusual.  Many, dare I say most, organizations consider the hard disks in their data centers, and the information they contain, physically secure.  And why not?  They have strict admission policies and typically require destruction of media.  Yet with all these policies in place, the headlines keep coming.


In between playing with SSL encryption and reading headlines on data breaches, I had to ask myself: why not just encrypt everything?  Really, what if we just encrypted everything?  Why not?


Well, the reasons usually cited include performance, key management, and cost.  Anyone who has taken a new hard drive and done a complete encryption of the drive knows it can take some time, often hours.  At the data center application level, most benchmarks give no indication of the impact of encryption.  Some papers suggest the overhead of encrypting a database could be 25-30 percent.  What DBA wants to sign up for that kind of performance hit?  However, with the latest processors, performance is really starting to become a non-issue.  New processors in servers (and clients) have instructions that accelerate one of the most popular encryption algorithms today, AES.  These new instructions speed up the encryption itself 3x-10x, which translates into applications like databases and full disk encryption having little impact on compute performance.  Actually, the limitation now is more in the hard drive technology: the CPU time to encrypt is not the limiter as much as the time taken to read and write all the data on the hard drive.


So why not encrypt all the time?  Well, certainly there is the potential for additional cost.  Although there are free or bundled software encryption products out there, many software vendors charge extra for security packages.  But given that the average organizational cost of a data breach is over six million dollars, even the few applications that charge a premium would be a cost-effective insurance policy.   So that leaves one remaining barrier: key management.  Some have suggested that a small IT shop could lose more data from losing keys than from a data breach.  As some recent spy-novel-like news stories suggest, poor storage of keys is often the easiest way to "break" encryption.  But key management is really not that hard.  In the modern world, what person doesn't have dozens of passwords and PINs to manage?  And what large organization doesn't already have sophisticated key management infrastructure and policies in place?  So back to the original question: why not encrypt everywhere?  Now that performance isn't an issue, it seems the last real excuse is gone.


Cloud at IDF – Day One

Posted by megan_mcqueen Sep 13, 2010

Day one of IDF 2010 is in the history books and the cloud was well-represented throughout the event.



The first cloud session of the day, led by Intel’s Rekha Raghu and Jake Smith along with Jason Downing of Citrix, was so popular that an overflow room was required. The group provided an informative overview of the Intel Cloud Builder program to a rapt audience.





In the afternoon, Josh Hilliker, Jason Davidson, and Heung-for Cheng from Intel along with Corporate Technologies' Matt Clarin, provided more detail on the Hybrid Cloud technology and pilot program, and Intel’s David Jenkins and Todd Christ provided insight into configuring Intel Intelligent Power Node Manager for cloud power management.



The day concluded with dueling cloud sessions – a panel of end-users talking about their challenges and successes with cloud technology; and another winning session from Rekha Raghu about the practical steps to accelerate the transformation to the cloud.



Missed any of today’s sessions? Downloadable versions of the slides will be available at starting tonight.



Stay tuned for day two highlights…

Modern servers can now support terabytes of main memory, and a failure of even a single memory cell can lead to a crash. Besides soft errors that can be corrected in hardware, hard uncorrectable errors can occur; in such cases, the only option for a server used to be to stop operation. As a recent report shows, memory errors can cause downtime and recovery, which is unacceptable for mission-critical enterprise systems. To address this issue, Intel has introduced a wide range of reliability and high-availability features in the Intel® Xeon® 7500 processor series (code-named Nehalem-EX).


These features are supported in Linux, as Andi Kleen explains in his presentation: it is now possible for hard memory failures to be caught by the operating system and exposed to applications. This way a server application can handle memory errors and continue to operate when running on the Intel® Xeon® 7500 processor series.
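From the application side, this looks roughly as follows: on Linux, an uncorrectable memory error is surfaced to the affected process as a SIGBUS signal, and a server that installs a handler can recover instead of crashing. The handler body below is invented for illustration, and the signal is simulated rather than coming from real hardware (a real C handler would also inspect the siginfo structure to find the poisoned page, which Python's signal module does not expose):

```python
# Sketch of SIGBUS-based memory-error recovery on Linux.
import os
import signal

recovered = []

def on_sigbus(signum, frame):
    # A real server would invalidate the cache buffer covering the failed
    # page here and re-read its contents from durable storage.
    recovered.append(signum)

signal.signal(signal.SIGBUS, on_sigbus)

# Simulate delivery; with hwpoison support, the kernel sends this signal
# when it detects an uncorrectable error in a page the process maps.
os.kill(os.getpid(), signal.SIGBUS)
print("recovered from", len(recovered), "memory error event(s)")
```

The essential change Nehalem-EX enables is that the error reaches the application as a catchable event at all, rather than as an immediate machine check that halts the system.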

SAP announced at Sapphire that it is working to revolutionize its enterprise software by taking advantage of in-memory technology, which will allow fast queries and real-time processing. Instead of waiting hours to compile reports or days to replicate data in business warehouses, business users will get immediate responses on real-time data. Naturally, for in-memory processing it is very important to be resilient to memory errors. Please check out our SSG booth at Intel Developer Forum 2010 in person, and my colleague Otto Bruggeman will be happy to show you how SAP’s in-memory database handles memory errors on the Intel® Xeon® 7500 processor series.


Best regards,


The time has come and I hope you're ready for a great IDF! If you've been keeping track of our chatter on Twitter or read our "We Would Like To Meet You" post, you've seen we're quite excited about this year's Capture Your Experience contest.


So, what's Capture Your Experience? It's an opportunity for you to share with us your Data Center experience at IDF. Show us your experience attending sessions, watching demos, chatting with your colleagues and catching a glimpse of Intel's data center experts. Need ideas of what you could do? Check out this video from this year's CES, where an attendee filmed their experience with Intel's Infoscape. That could be you, showing us your experience at the Data Center Zone, or walking the halls at IDF talking about the latest and greatest in data center technology.



Obviously we're not relying on you having a video camera at IDF, so we've partnered with Cisco to provide you with a Flip Mino HD. Just don't delay signing up; we've only got 200 cameras. And to make things even tastier, we'll be awarding two lucky winners the Ultimate Home System, which consists of a Boxee Box, 55" HDTV, Atom™-based home server, and more!


So, what are the criteria? How are you going to be judged? Well, there are two chances for you to win: the Judge Award or the Audience Award. For the Judge Award, the judges will evaluate entries based on overall content creativity, relevance to the Data Center theme, quality of production, portrayal of accurate information, and adherence to the 3-minute time limit. The Judge Award will be announced at IDF, and the winning video will be shown before the 3rd day's keynote.


For the Audience Award, we will rely on Intel's YouTube channel and have the YouTube audience judge the submissions. That's right, we'll turn this over to all of you to judge the videos, so make sure to spread the word amongst your friends and colleagues. We'll stop counting the "likes" and "dislikes" on September 24, 2010.


That's it! I hope all of you will join us, stop by the booth to say hello and give your video filming skills a try!


Schedule of Events


Contest Registration & Camera Pick Up

Monday, 11am - 5pm: Capture Your Experience Welcome Desk, 3rd floor


Tuesday, 8:30am - 7pm: Capture Your Experience Welcome Desk, 3rd floor


Submissions Due

By Tuesday, 7pm:  Data Center Zone, 1st floor or Welcome Desk, 3rd floor


Editing Stations

Monday, 11am - 5pm: Welcome Desk, 3rd floor

Monday, 5pm - 7pm: Data Center Zone, 1st floor


Tuesday, 8:30am - 7pm: Welcome Desk, 3rd floor

Tuesday, 11am - 1pm: Data Center Zone, 1st floor

Tuesday, 4pm - 7pm: Data Center Zone, 1st floor

Intel Developer Forum  (IDF) is coming!! Sept 13-15 at Moscone Convention Center in San Francisco.


Now, for everyone who thinks this show is only for clients: IT'S NOT TRUE… there are lots of cool server and data center sessions.  Check out the IDF planner for a session list.


Also be sure to check out the keynotes for insight into future server chips!  Plus it's an opportunity to catch up with your favorite Intel contact.


Last but not least, make sure you check out the dedicated server booth area, Data Center Zone.


My favs are:


The New Economics of Mission Critical Deployments: Intel® Xeon® Processor Based Platforms

Michael Demshki

Rm 2011 @ 1:05 pm on Monday


Business Intelligence and Run Time Architectures

Tony Hamilton, Matt Hogstrom

Rm 2011 @ 3:20 pm on Monday


Enhancing the Intel® QuickPath Interconnect for Next Generation Intel® Microarchitecture Codename Sandy Bridge

Bob Maddox

Rm 2002 @ 4:25 pm on Monday


A View of Cloud Requirements from Intel IT

Sudip Chahal, Christopher Peters

Rm 2001 @ 1:05 pm on Tuesday


See you there!

My name is Pauline Nist  (yes, the National  Institute of Standards and Technology stole my name).  I've been involved in the design and delivery of Mission Critical server systems for most of my life (there was a brief stint in IT early in my career--good training).


I worked on a lot of VAX and Alpha systems (SMP and clusters) for DEC, then moved to Tandem, where I was responsible for the NonStop hardware and software, including the SQL MX database (shared-nothing clusters). Basically, Tandem really delivered systems with 100% uptime--the gold standard for Mission Critical.


Then I moved into the "merger" phase of my career, where Tandem was acquired by Compaq, who then also bought DEC. Finally it was all swallowed by HP. There was a lot of indigestion to go around during those years.


Looking for a  significant change of pace, I moved to Penguin Computing, a clustered Linux server startup.  Penguin sells to high performance computing and web customers and gave me a great introduction to X86 computing.  We rode the wave of 64-bit capability coming to X86, and could offer technical computing customers 300% more performance at 30% of the price of RISC/Unix. I also found out that startups teach you a lot about cash accounting.


Now I'm at Intel. Quite the change, but in many ways a logical progression to what is now the emerging way to deliver Mission Critical computing. I made the change because with the introduction of the Xeon 7500, standards based computing has come of  age.

At Intel I'm the GM for the Mission Critical Server Segment in the Data Center Group. I get to work with customers and partners to help deliver Xeon and Itanium server systems, focusing on database and other business-critical applications where business continuity is key.


This past year has been really exciting, as we've seen an unprecedented number of high-end Xeon 7500 (Nehalem processor) series systems (>4S) from our partners. With a huge leap in system performance and high availability, coupled with virtualization, cloud computing (particularly private clouds), SSDs, and various database and BI appliances from a variety of vendors, there is a huge amount of innovation going on in the data center.


Stay tuned for future data center discussions, and great sessions at IDF in San Francisco next  week.

The key value proposition of Intel Cloud Builder is a set of reference architectures that clearly document the methodology for building a cloud, developed jointly with Intel's ecosystem partners. To promote these reference architectures and provide more information, every two weeks the Intel Cloud Builder Webcast Series brings you cloud computing best practices and lessons learned from Intel and cloud ecosystem partners (both hardware and software). The Webcasts provide practical, hands-on information for building a private, public, or hybrid cloud, and we believe this information will be helpful to IT managers who are investigating options for building clouds.


Next Webcast:

"Stand It Up and Go As Fast As You Can – Parallels Virtuozzo Containers* for High Performance Cloud Computing"

Thursday, September 16, 2010, 11:00 a.m. ET


Please visit to register for future Webcasts and also download presentations and FAQ from past webcasts.


Here's a great overview of the Intel hybrid cloud.



If you're interested in signing up, here are the 4 easy steps it takes to join the pilot program.



Or go to    


Cloud Computing at IDF

Posted by megan_mcqueen Sep 8, 2010

Cloud computing is an important transition and a paradigm shift in IT services delivery – one that promises large gains in efficiency and flexibility at a time when demands on data centers are growing exponentially.  Cloud computing will be a key focus at next week’s Intel Developer Forum in San Francisco, with sessions, labs, and demos.


The Intel Data Center Zone


The Intel Data Center Zone highlights the technologies and solutions that are defining cloud, mission critical, and high performance computing innovation.



Cloud-focused demos in the Data Center Zone include:


  • End-to-end integrated cloud – See how Intel Architecture is driving the cloud from servers to handhelds

  • Intel Intelligent Power Node Manager – Showcases Intel Intelligent Power Node Manager running on servers from four OEMs, reporting power and dynamically applying power caps.

  • Intel Ethernet with SR-IOV over NPA and FCoE – Highlights the performance advantage of SR-IOV and Network Plug-in Architecture, and demonstrates vMotion capability.

  • HPC in the Cloud – Models a local HPC cluster in the datacenter and a “remote” cloud cluster for over-capacity provisioning.

  • Intel VT-c delivers SR-IOV for Citrix NetScaler* VPX – The demo will highlight the performance advantage of SR-IOV.

  • Remote Attestation for the Cloud – Demonstrates a method of building platform trustworthiness by integrating open-source remote attestation software with Intel TXT technology into cloud management.

  • Security in client-aware usages – With Intel® Xeon® 5600 series and Core i5/i7 processors equipped with AES-NI capabilities, VMware View 4.5 can deliver an encrypted virtual machine from the server to the client in much less time.  This usage model both saves operational cost and provides mobility, extending into the automated and client-aware cloud.

  • Open Scalable Cloud storage – The emergence of the cloud paradigm has driven new usage models for how storage is used and deployed.

  • Compute server caching for Scalable Cloud storage – Open scalable cloud storage solutions are built on RAIN (redundant array of independent nodes) technology.  NFS v4.1 (a.k.a. pNFS) is the emerging reference standard for consolidated enterprise private clouds.

  • Micro Server: Intel Xeon-based innovations for entry hosting – Showcases Intel Xeon-based innovations for entry hosting and web services usage in datacenter environments.

Technical Sessions


Monday, September 13


Panel: Cloud Computing End Users


Intel has the unique opportunity to work with some of the industry’s most innovative and entrepreneurial leaders. This panel will feature some of the early enterprise and service provider innovators of the cloud computing industry. Attendees will hear their thoughts on the technological and social hurdles to deploying clouds, as well as the future of cloud computing and the technologies that will drive the transformation of today’s data center over the next five years.




Jake Smith, Advanced Server Technologies, Intel Corporation

Shishir Garg, Director of Middleware & Platforms Group, Orange Labs

Jason Mendenhall, Executive Vice President, Switch Networks

Alan Gin, Zeronines Technology

Dr. Jason Choy, Vice President and Distinguished Engineer, JP Morgan Chase & Co.


Class: Cloud Architecture and Intel Cloud Builder Reference Architecture


Transforming the data center is a multi-year process that requires a commitment to a cloud architecture that is secure, efficient, and simplified. Intel has worked with the cloud computing industry for several years to design cloud solutions with the world’s leading providers of cloud technologies. Attendees will learn about Intel’s vision for cloud computing and the evolution of the Intel Cloud Builder program, as well as hear about architecture and technologies involved in deploying cloud computing.




Rekha Raghu, Senior Software Engineer, Intel Corporation

Jake Smith, Advanced Server Technologies, Intel Corporation


Class: Cloud Power Management: Configuring Intel Intelligent Power Node Manager


Attendees will learn how to configure a system to enable Intel Intelligent Power Node Manager and integrate Intel Intelligent Power Node Manager systems into a console and the data center. The session will provide an overview of available tools and resources and highlight successes using Intel Intelligent Power Node Manager to manage power.




David Jenkins, Server Technologies Marketing Manager, Intel Corporation

Todd Christ, Systems Engineer, Intel Corporation


Class: Using Intel Technologies to Provide Managed Services Solutions for Small Business Customers


This class will provide an overview of Intel technology solutions for providing a subscription-based IT managed service to small businesses, and is targeted for solution providers, consultants, and integrators who are providing solutions for small business customers. Attendees will hear about the current hardware and software ecosystem, and see a demonstration of Windows* Small Business Server (codename Aurora) integration with a future Intel server platform (codename Bromolow).




Jason Davidson, Systems Engineer, Intel Corporation

Heung-for Cheng, Product Marketing Engineer, Intel Corporation

Josh Hilliker, Intel® vPro™ Expert Center Community Manager, Intel Corporation

Matt Clarin, Director of Operations – Grand Rapids, Corporate Technologies LLC



Tuesday, September 14


Hot Topic: Cloud Vision Technology and Business Q&A with Intel Experts


This session will be an open question-and-answer session with leading influencers at Intel that shape Intel’s vision of cloud through technology, business and industry efforts.




Raejeanne Skillern, Director, Cloud Computing Marketing, Intel Corporation

Billy Cox, Enterprise Architect, Intel Corporation

Andy Tryba, Director of Marketing – Business Clients, Intel Corporation

Dylan Larson, Director of Platform Technology Marketing, Intel Corporation

Radhakrisha Hiremane, Product Marketing Engineer, Intel Corporation


Class: A View of Cloud Requirements from Intel IT


Intel IT has adopted a multi-year cloud computing strategy that seeks to utilize both public and private cloud services through a multi-year phased investment strategy and implementation roadmap. The transition to a cloud-based architecture represents an underlying architectural shift and significant business process changes for our organization. This session will share the vision of Intel IT and discuss the key requirements for hardware and software developers to support cloud computing in the data center.




Sudip Chahal, Principal Engineer and Compute and Storage Architect, Intel Corporation

Christopher Peters, Intel IT Manager, Intel Corporation


Class: Turning the Tide on Data Center Energy Consumption


The session will focus on recent developments in microarchitecture technology and the impact of these developments on energy efficiency in the data center as well as preview some eco-technology best practices. Learn about the impact of both architecture-level and platform-level efficiencies in the data center and how adjustments in physical room configuration, air flow, and water supply temperatures can enable an increased IT load while maintaining energy consumption.




Allyson Klein, Director of Leadership Marketing, Intel Corporation

Winston Saunders, Director of Power Technology Execution, Intel Corporation


Class: Increasing Cloud Reliability with Platform RAS (Reliability, Availability, and Serviceability)


Data center trends will continue to require increased compute performance and memory density. A critical factor in controlling operating cost and increasing uptime for business-critical and highly consolidated environments is platform RAS. This session will review the trends and discuss the RAS solution benefits provided by Intel Machine Check Architecture and Microsoft* Windows* Server 2008 R2 for cloud-based computing.




Radhakrisha Hiremane, Product Marketing Engineer, Intel Corporation

Scott Rosenbloom, Virtualization Product Manager, Microsoft Corporation

Steve Krig, Leader of Software and Application Compliance and Interoperability Effort, Intel Corporation


Class: Designing Cloud Storage Solutions


This session will include information on the reference architecture for storage solutions in the cloud, analysis of cost, power, performance, availability, and discussion of the tradeoff and benefits of using Intel components in cloud storage architectures.




Greg Scott, Strategic Initiatives Manager, Intel Corporation

Tony Roug, Principal Engineer, Intel Corporation


Hands-on Lab: Developing a Cloud Using Intel Platforms Lab


This lab is for those who are interested in a hands-on learning experience of cloud computing based on Intel platforms and tools. The focus will be on setting up a basic cloud computing environment on several Intel platform-based servers.




David Mulnix, Software Engineer, Intel Corporation

Aamir Yunus, Software Engineer, Intel Corporation

Rekha Raghu, Senior Software Engineer, Intel Corporation

Sanjay Sharma, Senior Performance Engineer, Intel Corporation

A few years ago my manager gave me a challenge…find something out about this thing called "social media".  Yep, that was about as clear a definition as I got.


After thinking about it for a while I realized that the best thing I could do was share my favorite thing about working at Intel: sitting in an Intel café or conference room and listening to a technologist explain how a technology really works.  There is something special in this process, a glimmer in the eyes or a sideways grin that shows the pride of invention or the confidence that this will make a difference in the world.  Without much idea of what might happen to the program, but with a notion that others would find these conversations as interesting as I do, Intel Chip Chat was born.  We quickly created a few rules: no discussing what will be discussed in the episode before the episode (kind of like not talking about Fight Club…but different), no bringing prepared questions or comments or any paper into the studio, and above all bringing a willingness to chat about oneself as well as technology.



Next week at IDF we’ll deliver the 100th episode of the Chip Chat program.  I am currently preparing the questions I’ll ask our Executive VP, Dadi Perlmutter, about the future of technology.  I would love to hear suggestions on what you would ask a senior Intel executive if you had the chance.


I’ve also been spending a lot of commute time thinking back to all of the episodes before this special one.  I’ve had the good fortune to interview many people who I would call heroes.  Favorite episodes for me include my talk with the inventors of USB - Ajay Bhatt, Bala Cadambi and Jim Pappas, guys I know very well but never get a chance to ask what it feels like to invent something as well known as the USB port; my conversation with Genevieve Bell about society’s impact on technology…or technology’s impact on society, I’m still not sure which (maybe someday she'll tell me); and the many conversations with our Eco-Technology GM Lorie Wigle on the state of Green IT.  I’ve had aha moments, like realizing while talking to Mark Bohr that the mild-mannered gentleman sitting in front of me held much of the world of Intel on his shoulders…but could still be gleeful in discovery when he shared the moment of realization that our 45nm High-K technology really worked.  Justin Rattner talked to me about the rise of the 3D internet and his visions for a world of computing that I hadn't considered yet.  That was in 2008...and most of his insights are things we discuss every day now.  We’ve had guests from across the industry and across the globe, from university labs and government agencies (thanks Andrew and Katherine for sharing Energy Star with us).  We even had one of the best interviewers in the business, NPR’s Moira Gunn, on the show.


Whatever the topic, I have learned something from each person I've chatted with. I have enjoyed the journey very much and most especially have loved the feedback on how to make the program better.   Thank you very much for listening.

Intel and its partners have been talking a lot about cloud computing lately. Did you miss a blog, podcast, or Webcast? Well, below are a few recent highlights to get you up to speed.


Cloud Computing: Confusion to Convergence?

Boyd Davis, a general manager in the Data Center Group at Intel, gives his thoughts on the transition from industry confusion on the subject of cloud computing to a convergence of opinion about the transformative power of the cloud as an architecture for delivering IT.


Intel Chip Chat: The Future of the Cloud

Intel’s Jason Waxman, a general manager of the Data Center Group, provides an update on IT cloud computing, the development of private vs. public clouds, and where the cloud will be in 2015.


Webcast: Introduction to Intel Cloud Builder Program - Your Blueprint to Success

Billy Cox, Intel Director of Cloud Strategy & Planning, and Jim Blakely, Intel Director of End-User Platform Integration, talked about how Intel is working with the software and hardware ecosystem to test and validate public, private, and hybrid cloud computing infrastructures. The Webcast provided perspectives on cloud implementation strategies and key areas of consideration when building and enhancing clouds, as well as an overview of the Intel Cloud Builder program.


Webcast: Top 5 Must-Haves for Hybrid Cloud Computing

Experts from Intel and Univa addressed the use cases, benefits, and top five most important things for hybrid clouds, where an internal computing environment is extended to an external infrastructure based on requirements and demand. 

This is the second blog in my series on server performance, the first one focusing on Database Performance.  With VMworld 2010 wrapping up last week I wanted to address a common question we continue to get on server virtualization –


What metric should I use to size my virtualization servers?


There are many metrics that can be used, for sure: from the number of virtual machines (VMs) supported, to performance per VM, to the amount of memory available per VM, to cost per VM, etc.  I will be the first one to tell you that it really depends on your environment and your objectives.  For example, someone virtualizing 50 email servers may want to maximize the number of VMs per server and minimize cost, whereas someone virtualizing 20 database servers may want to maximize the memory and performance per VM.


But one common virtualization goal for anyone should be to maximize the utilization of your servers.  After all, it’s one of the primary reasons to virtualize: eliminate sprawl and reduce the number of servers, in turn making them more manageable and improving your total cost of ownership (TCO).  And the key to utilization is server performance, not processor core count.


Let me illustrate this point with an example using one of the most popular virtualization benchmarks in the industry – VMware’s VMmark.

The leading VMmark score is currently held by Fujitsu’s PRIMERGY RX600 S5 rack server, a 4-socket server based on the new Intel® Xeon® processor 7500 series, with a score of 75.77 @ 50 tiles.  This 32-core result outperforms 48-core (59.74 @ 41 tiles) and 64-core (48.23 @ 32 tiles) results by up to 57%.


For illustration purposes, let’s assume a VMmark score corresponds to the maximum number of virtual machines (VMs) that can be supported on a server (e.g. a score of 75.77 for the 32-core server means it can support up to 75 VMs at near 100% utilization).  If you were to then use a policy of allocating VMs strictly by core count using 1 VM/core you would get the results shown in Figure 1.


[Figure 1: Allocating VMs by Core Count]



As you can see, each platform has a different VM capacity, resulting in very different server utilization rates.  The 64-core server can only support 48 VMs based on its VMmark score, so it is essentially over-utilized.  The 48-core machine would be allocated 48 VMs and has 19% capacity still available.  The 32-core Xeon® 7500 based server would be allocated only 32 VMs and would have 57% capacity still available, meaning you are wasting available processing power or unintentionally increasing your cost per VM.
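The arithmetic behind Figure 1 can be sketched in a few lines. This is a toy illustration (not VMmark itself), using the VM capacities implied by the scores in the text: roughly 75, 59, and 48 VMs at full utilization.

```python
# Illustrative VM capacities derived from the VMmark scores quoted above.
servers = [
    ("32-core Xeon 7500", 32, 75),
    ("48-core", 48, 59),
    ("64-core", 64, 48),
]

results = {}
for name, cores, capacity in servers:
    allocated = cores  # policy: 1 VM per core, ignoring actual server capacity
    results[name] = allocated / capacity  # resulting utilization
    print(f"{name}: {allocated} VMs allocated -> {results[name]:.0%} utilized")
```

The 64-core server lands above 100% utilization (over-committed), while the 32-core server sits near 43%, wasting more than half its capacity.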


The other unintentional risk of allocating VMs by core count is that the performance of each core can change significantly with each processor generation.  Your performance per VM can potentially decrease resulting in not meeting your service level agreements, or it could increase leaving more of your server under-utilized.


The alternative is to allocate your VMs based on server performance so that you can deliver a more predictable utilization rate.  You can see from Figure 2 that setting a target utilization rate and allocating VMs based on the performance of each server yields very different results.  In this example, we used a target utilization rate of 60%, which is typical in enterprise data centers to allow headroom for unpredictable demands.
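The performance-based policy is equally simple to sketch: size each server's VM count to a target utilization rather than to its core count. The capacities below are the same illustrative numbers derived from the quoted VMmark scores.

```python
# Allocate by performance: aim every server at the same target utilization.
target = 0.60  # 60% target, typical enterprise headroom
capacities = {"32-core Xeon 7500": 75, "48-core": 59, "64-core": 48}

allocation = {name: int(cap * target) for name, cap in capacities.items()}
print(allocation)  # each server now runs at ~60%, regardless of core count
```

Under this policy no server is over-committed, and utilization is predictable across generations even as per-core performance changes.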




The new Intel® Xeon® processor platforms offer virtualization leadership across the board, so you should be able to deploy the most VMs on these platforms (or the biggest VMs with memory capacity advantages) and feel comfortable with utilization headroom.  Take a look at the 2-socket VMmark scores and you’ll find the new Dell PowerEdge R810 Rack server has the lead with 37.28@26 tiles using the Intel® Xeon® processor 7500, and the new Cisco UCS B250 M2 Extended Memory Blade server comes in at 35.83@26 tiles using the Intel® Xeon® processor 5600.


What’s Next for Virtualization Benchmarks?


The landscape is changing for virtualization benchmarks, so you should consider others beyond VMmark.  Whatever happened to Intel’s vConsolidate benchmark that allowed you to run non-VMware hypervisors?  There is a new SPECvirt_sc2010 benchmark released by SPEC (Standard Performance Evaluation Corporation) in July 2010 that has similar characteristics.  This one is sure to catch on as it allows you to measure a mix of workloads using different Virtual Machine Managers and application stacks.  The first two results published are on IBM’s new rack servers running Red Hat Enterprise Linux 5.5 KVM: the System x3650 M3 using the Xeon® 5600 and the System x3690 X5 using the Xeon® 7500.


Another trend is to begin virtualizing larger, business-critical workloads such as databases and Enterprise Resource Planning (ERP) systems.  AnandTech did an extensive review of the Xeon® 7500 using a new benchmark called vApus Mark II that showcases 8GB memory tiles running SQL, Oracle, and Web Application VMs.  They showed that a Xeon® 7500 based server can offer 2.3X better performance than the best 2-socket servers based on the Xeon® 5600 processor.


I’m interested in your thoughts on benchmarking virtualization servers and what matters the most in virtualization deployment decisions.

An important element of controlling data center cost is, of course, making sure that energy is used as efficiently as possible. Power Usage Effectiveness (PUE) is a great indicator, but it's also open to misinterpretation.  Herein lies the CIO's dilemma: if improving the efficiency of your data center is an important goal, should you incentivize the organization to improve PUE?  In case you want to stop reading here, the answer is, "Yes, but only if you also pay attention to the bottom line."


In a recent white paper entitled "A Holistic Approach to Energy Efficiency in Data Centers," Dileep Bhandarkar of Microsoft points out that reducing the fan power of your servers, for instance, would DECREASE the overall energy consumption of your data center but INCREASE its PUE. So as a CIO, if you incentivize your data center operations on just PUE, you could end up on the wrong track.
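A little arithmetic makes the fan-power observation concrete. The kilowatt figures below are purely hypothetical; the point is only that when facility overhead stays fixed and IT (server) power drops, total energy falls while PUE rises.

```python
# Hypothetical illustration of the fan-power effect on PUE.
facility_overhead_kw = 500.0  # cooling, power distribution, lighting (assumed fixed)
it_power_kw = 1000.0          # power drawn by the servers themselves

def pue(overhead_kw, it_kw):
    # PUE = total facility power / IT power
    return (overhead_kw + it_kw) / it_kw

before = pue(facility_overhead_kw, it_power_kw)         # 1500 / 1000 = 1.50
after = pue(facility_overhead_kw, it_power_kw - 50.0)   # 1450 /  950 ~ 1.53

print(f"total power: {1500.0} -> {1450.0} kW; PUE: {before:.2f} -> {after:.2f}")
```

The data center now burns 50 kW less overall, yet its PUE "worsened" from 1.50 to about 1.53, which is exactly why PUE alone is the wrong incentive.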


To get the most out of PUE you need to realize two things: 1. PUE is a "system"-level indicator that provides a measure of capability only when things are optimized, and 2. Optimizing the server energy consumption of your data center might still be the right thing to do even if it raises your data center's PUE.


The big point about PUE as a "system" is reinforced in the recently published results from the datacenter2020 collaboration between Intel and T-Systems. Here incremental steps were taken to optimize elements of the data center, all with the idea of improving overall efficiency.  As the details of the whitepaper disclose, although some of the steps taken early on (like plugging leaks in the floor tiles) had little immediate impact, once the air pressure was optimized the full benefit of that housekeeping was realized.  In fact, optimizing the data center allowed a greater density of IT power consumption, further improving the PUE.


What PUE misses is the notion of the work output of the servers and the data center. This is a generalization of the problem pointed out in the Microsoft whitepaper. PUE is a ratio of two power metrics: think of it as the "energy profit margin" of the data center.  It's a good indicator, but just like margin, you can't run a successful business without also paying attention to the bottom line.


You can imagine having a data center full of old, inefficient servers and replacing them with the newest generation of Xeon 5600-based servers. Because of advances in power management technology, these servers will consume far less power, even at equivalent loading, and yet will produce far greater work output.


The chart below shows the evolution of the energy efficiency of an SP server family over the last six years. As the performance of the servers increases, their power consumption drops.







So if you refresh your servers, your power bill will go down, your facility equipment will not change, the work output of the data center will increase dramatically, and your PUE may go in the wrong direction.  Again, if data center operations are incentivized on PUE alone, you might get the wrong behavior.


So as CIO you face a dilemma. You want to increase your data center's energy efficiency, and you have heard about PUE. What do you do? The right thing, of course, is to pay attention to PUE, but also pay attention to bottom-line indicators like power consumption and power costs.


Of course, the real issue is optimizing the productivity of the data center. For highly homogeneous workloads, I imagine the metrics are well established. But I think there is still tremendous work to do in defining more general productivity indicators for data centers.


What do you think?

Security is a top IT concern, and this continues to be true as data centers adopt virtualization and cloud technologies.  As attacks shift toward criminal motives and as regulations require explicit security steps like encryption and reporting, Intel security technologies can really help protect data in flight, data at rest, and data in applications.  Intel® AES-NI, when properly provisioned on web servers, can help remove the performance overhead associated with the SSL/TLS transactions used in online trading, online banking, and ecommerce. Since email is one of the easiest places for attackers to intercept data, we urge you to encrypt emails. Also encrypt your drives, encrypt databases, encrypt virtual machines, and encrypt whenever and wherever you can to build defense in depth! Be sure to use AES-NI to make that encryption faster, easier, and stronger.


Let’s look at the faster side of things. In a secure transaction, such as one handled by OpenSSL, there is first an RSA asymmetric handshake to establish authentication, followed by an AES exchange of bulk data, then SHA integrity checking. AES-NI speeds up the AES slice by more than 10x and a single OpenSSL transaction by more than 2x. In a full application environment, Amdahl’s law tells us the overall speedup will be much smaller; we have seen that a web banking workload can support 23% more simultaneous user transactions on an Intel® Xeon® 5600 series than on an Intel® Xeon® 5500, even without encryption. AES encryption details are available for download.
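Amdahl's law explains why a 10x AES speedup yields only ~2x on a whole transaction. The AES fraction below is a hypothetical number chosen to match the ratios in the text, not a measured value.

```python
def amdahl_speedup(fraction, local_speedup):
    """Overall speedup when `fraction` of the work runs `local_speedup`x faster."""
    return 1.0 / ((1.0 - fraction) + fraction / local_speedup)

# If AES work were ~55% of an SSL transaction (assumed) and AES-NI makes it
# 10x faster, the whole transaction only speeds up by about 2x.
print(f"{amdahl_speedup(0.55, 10.0):.2f}x overall")
```

The smaller the encrypted fraction of the overall application, the smaller the end-to-end gain, which is why the application-level improvement is far more modest than the raw AES speedup.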

Intel® AES-NI makes encryption faster and easier by eliminating the need for add-in cards and additional discrete cryptographic silicon, because it’s built into the processor.  It is stronger because there are no more table lookups, as in software-based implementations, where memory/cache access patterns can shorten the effective key search space.  Hardware instruction-based execution reduces vulnerability to side-channel attacks.

Intel® AES-NI is great, but how do you make sure the web server is in fact executing AES when you type "https://"? There are a slew of other algorithms out there, like RC4/MD5, which is widely used with XP today. The selection of a cipher algorithm between the client and server is largely dictated by the OS, except in the case of Firefox.  As the Windows 7 install base grows, AES will be at the top of the default cipher list. In addition, on both the client and the server, using gpedit from the command window (under Administrative Templates, Network, SSL Configuration Settings) to select TLS 1.0 and above is a sure way to establish an AES-based secure transaction. If both endpoints, the client and the server, are Intel® Xeon® 5600 series (code named Westmere-EP), then voila, you have an AES-NI assisted transaction! Newer Linux builds ship OpenSSL with AES-NI support; use $openssl ciphers -v 'AES:ALL' to ensure AES is at the top of the cipher list. The details of how to provision a Windows or Linux web server for Intel® AES-NI encryption are now available in a whitepaper.
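A rough programmatic equivalent of the `openssl ciphers -v 'AES:ALL'` check is to ask the local TLS stack which cipher suites it enables and where the AES suites sit. This sketch uses Python's standard `ssl` module; the exact suite names will vary with your OpenSSL build.

```python
import ssl

# List the cipher suites the default TLS context enables; on a properly
# provisioned stack, AES-based suites should appear near the top.
ctx = ssl.create_default_context()
suite_names = [c["name"] for c in ctx.get_ciphers()]
aes_suites = [n for n in suite_names if "AES" in n]

print(f"{len(aes_suites)} AES suites enabled; first few: {aes_suites[:3]}")
```

To see which cipher an actual server negotiates, the same module's `SSLSocket.cipher()` (or `openssl s_client -connect host:443` from a shell) reports the suite selected for a live connection.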

In summary, Intel AES-NI helps better protect platforms and data, which in turn helps with compliance requirements that call for explicit encryption and reporting.  We urge you to encrypt everywhere, take advantage of AES ciphers, and use the Intel® Xeon® 5600 series with built-in AES-NI capabilities to make it faster and stronger. More security deployed makes for more secure data centers!
