
When one hears of the advantages of cloud computing, the same benefits come up again and again.


  • The IT consumer gets real agility. This means instant response to provisioning and deprovisioning requests – no red tape, no trouble tickets – just go. The consumer also gets a radically different economic model – no pre-planning, no reservations, no sunk costs – consumers use as much as they want, grow and shrink in whatever increments they want, and keep the resources only as long as they want. Lastly, the consumer gets true transparency in spending – each cent spent is tied to a specific resource used over a specific length of time.
  • If a proper cloud infrastructure is built, acquired, or assembled, the operations costs for the datacenter administrator are much lower than with traditional IT. Cloud infrastructure software, if done right, gives scale-out management of commodity parts by introducing (a) load balancing and rapid automated recovery of stateless components and (b) policy-based automation of workload placement and resource allocation.  Customer requests automatically trigger provisioning activity, and if anything goes wrong, the system automatically corrects.  The datacenter admin is relieved of the day-to-day burdens of end user provisioning and break/fix systems management.


The challenge in this world stems from the fact that, for all this to be delivered, clouds must span organizational units. There needs to be economy of scale to drive down costs. There need to be many workloads from multiple customers peaking at different times to achieve the “law of large numbers” needed for high utilization and predictable growth. Once you have multiple customers on the same shared infrastructure, you get the inevitable concerns: is my data secure, do I have guaranteed resources, and can another tenant, through malice or accident, compromise my work?


Clouds, both public and private, strive to provide secure multi-tenancy. Each service provider and each cloud software vendor promises that tenants are completely isolated from every other tenant. Obviously, different providers do this with varying levels of competency and sophistication, but there is no controversy regarding the need for this isolation.


Once you are comfortable with your cloud’s isolation strategy, though, you should turn around and ask, “How do I take advantage of multi-tenancy?”  We live in an ever more interconnected world, and different organizations need to collaborate on projects large and small, short-term and long-term. If two collaborators share a common cloud, or two or more clouds that can communicate with each other, shouldn’t the cloud facilitate controlled and responsible sharing of applications and data? Shouldn’t we turn multi-tenancy from the cloud’s biggest risk into its biggest long-term benefit?


To answer this challenge, we need to ask:

  1. Why would we need to do this?
  2. Are there any specific examples of this today?
  3. How would we go about achieving a more generalized solution?


First, why would we do this?   There are many examples in many sectors.

  • Within large enterprises, different business units generally need to be isolated from one another, for privacy or regulatory reasons, or simply to keep trade secrets on a need-to-know basis. But when large cross-functional teams are asked to deliver a complex project together, sharing becomes necessary.
  • Also in business, external contractors are used for some projects. How can they work as truly part of the team for one assignment, while being safely locked out of all other projects?
  • In education, universities collaborate on some projects and compete on others. How can the right teams work together openly while others are completely isolated?
  • In government and law enforcement at all levels, collaboration can save lives and property, but proper separation must be enforced to protect civil rights and personal privacy.
  • In medicine, doctors and insurers need to share certain records and results in order to streamline care, facilitate approvals, and reduce mistakes. But privacy must be protected, with only the proper and allowed sharing taking place.


Since this seems like a nirvana state, the second question is: what is practically being done along these lines today? To this, I would say that the SaaS providers have been on this path for some time. Google Calendar allows you to selectively share your schedule in a fine-grained manner – who can see your availability, who can see your details, and who can edit your meetings. LinkedIn allows you to share your profile at varying levels of depth and regulate inbound messages based on your level of connection and common interests.


This leads to the third question – how can we do this more generally? How can a single cloud or a group of clouds facilitate generic sharing of any application or data without breaking the base isolation that multi-tenancy generally requires? Obviously, in a blog we can’t answer in gory detail, but we can discuss some high-level requirements.


1. Recognize distributed authority and have a permissions scheme that models this well


In all the examples we discussed in the “why” section, there was no shared authority. From the point of view of someone who wants to access something of someone else’s, there are two completely different and independent sources of authority. First, does my manager authorize me to be working on this project with these collaborators? Second, do those collaborators want to share with me, what exactly do they want to share, and what level of control over their objects do they allow me? A cloud that facilitates collaboration must have a permissions system that allows these different authorities to independently delegate rights without the need for an arbitrating force. Imagine if two government agencies needed to go to the president to settle an access control issue.  With doctors and insurance companies, who would a central authority even be? Once you have a permissions system capable of encoding multiple authority sources, you need the ability to apply that system to compute, storage, and network resources. You need to apply it to data and applications. You need to apply it to built-in cloud services and third party services.
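As a minimal sketch (using hypothetical names and a toy data model, not Nimbula's actual API), a two-authority check might look like this: access is granted only when the requester's own organization has delegated the project role AND the resource owner has independently granted rights on the object.

```python
# Hypothetical illustration of a permissions check with two independent
# sources of authority: the requester's own organization and the resource
# owner. Neither side can grant access alone, and no central arbiter exists.

# Delegations made by the requester's management chain: (user, project) -> roles
org_delegations = {
    ("alice@agency-a", "joint-task-force"): {"member"},
}

# Grants made independently by the resource owner: (project, object) -> allowed actions
owner_grants = {
    ("joint-task-force", "case-files-db"): {"read"},
}

def is_allowed(user, project, obj, action):
    """Allow only if BOTH authorities have independently said yes."""
    authorized_by_org = "member" in org_delegations.get((user, project), set())
    granted_by_owner = action in owner_grants.get((project, obj), set())
    return authorized_by_org and granted_by_owner

print(is_allowed("alice@agency-a", "joint-task-force", "case-files-db", "read"))   # True
print(is_allowed("alice@agency-a", "joint-task-force", "case-files-db", "write"))  # False
```

The same two-sided evaluation would then have to be applied uniformly to compute, storage, network, data, and services.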


2. Provide extremely flexible networking connectivity and security


Permissions speak to who can do what on what objects shared on a cloud network. The next part is about the network traffic itself. The cloud needs to govern connectivity in a secure, but still self-service, manner. It will be impossible to build a responsive and agile collaborative environment over legacy VLANs and static firewalls. Once collaboration is set up politically, project owners need to be able to flip the switch to start the communication flow immediately. If a project ends, they need to be able to turn it off just as quickly, if not faster. Given a project that already has network connectivity, as that project expands, new workloads added to the project need to be instantly granted the same network access as all the other workloads. For all this to happen, there need to be network policies that govern communications. These policies need to instantly regulate all new workloads on the cloud. They need to be created, destroyed, and modified by the actual collaborators, not network admins. Lastly, these policies need to be governed by the collaborative permissions system described in requirement #1 so that proper governance is achieved without requiring a common authority.
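A toy model (not any vendor's actual mechanism) can illustrate the core idea: reachability is derived from project-level policies rather than per-machine firewall rules, so a workload added to a project inherits connectivity instantly, and revoking the policy cuts it off just as fast.

```python
# Toy model (hypothetical, not a real cloud API): network reachability is
# computed from project-level policies, so new workloads inherit it instantly.

policies = set()   # set of frozenset({project_a, project_b}) allowed flows
workloads = {}     # workload name -> project it belongs to

def add_workload(name, project):
    workloads[name] = project

def allow(project_a, project_b):
    policies.add(frozenset((project_a, project_b)))

def revoke(project_a, project_b):
    policies.discard(frozenset((project_a, project_b)))

def can_talk(w1, w2):
    pair = frozenset((workloads[w1], workloads[w2]))
    # same-project traffic is always allowed; cross-project needs a policy
    return len(pair) == 1 or pair in policies

allow("team-a", "team-b")          # collaborators flip the switch on
add_workload("vm1", "team-a")      # new workloads are governed immediately
add_workload("vm2", "team-b")
print(can_talk("vm1", "vm2"))      # True
revoke("team-a", "team-b")         # project ends: turn it off just as quickly
print(can_talk("vm1", "vm2"))      # False
```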


3. Have a way to extend these systems across clouds


Once you have a permissions model and a networking model that work within a cloud, you need to extend those functions to work across clouds so that multiple organizations can share their resources amongst each other, not just when they share a common public or community cloud, but even when hosted in their own separate private clouds. For this to happen, identity must be agreed upon. User permissions from one cloud must be trusted by the second cloud so that those permissions can be mapped against what has been delegated by that second cloud. The networking policy mechanisms must be transferable across the Internet and take into account various levels of routing, NAT’ing, and firewalling.
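One hedged sketch of the identity piece (hypothetical names, not a real federation protocol): cloud B accepts identities asserted by clouds it trusts, then maps them onto its own locally held delegations, so access is still decided locally with no shared authority.

```python
# Hypothetical sketch of cross-cloud identity mapping: this cloud trusts
# identity assertions from specific remote clouds and maps the fully
# qualified (cloud, user) identity onto locally granted permissions.

trusted_clouds = {"cloud-a"}   # which external clouds we accept assertions from

# Delegations THIS cloud has made to remote identities
local_grants = {
    ("cloud-a", "alice"): {"read:shared-dataset"},
}

def check_remote_access(cloud, user, permission):
    """Accept a remote identity only if its home cloud is trusted, then
    look up what this cloud has independently delegated to it."""
    if cloud not in trusted_clouds:
        return False
    return permission in local_grants.get((cloud, user), set())

print(check_remote_access("cloud-a", "alice", "read:shared-dataset"))    # True
print(check_remote_access("cloud-x", "mallory", "read:shared-dataset"))  # False
```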


Nimbula believes that we are on the path to providing general purpose collaborative clouds. Our flagship product, Nimbula Director, is architected to deliver this value in the long term and has taken substantial steps in this direction in our generally available 1.0 release. We recently completed a podcast and Webcast where you can learn more about Nimbula Director and the reference architecture we completed for the Intel Cloud Builders program.


Visit us at to use Nimbula Director on 40 cores for free and to download whitepapers and product documentation.

The California Department of Water Resources needed to modernize its IT infrastructure. With help from Intel and other technology suppliers, the organization’s IT group designed and deployed a virtualized infrastructure based on HP ProLiant* server blades equipped with the Intel® Xeon® processor 5600 series. The overhaul cut operating costs by 25 percent and provided more than four times the capacity for new business solutions. Now the IT group has the flexibility and scalability to support an organization that’s changing and growing at top speed.

“By moving from an aging infrastructure to a virtualized environment with Intel processor-based systems, we are saving 25 percent in operating expenses,” said Tim Garza, chief information officer for the California Natural Resources Agency. “We have significantly reduced our budget while enhancing our ability to provide IT services to our lines of business.”

To learn more, read our new California Department of Water Resources business success story. As always, you can find this one, and many others, in the Reference Room and IT Center.



*Other names and brands may be claimed as the property of others.

I’m in England (outside of London, in Hampshire) for Intel’s Enterprise Board of Advisors (EBOA). This is an annual event where we meet with key end user customers to get their perspectives on current technology and issues confronting IT organizations. It includes both client and server discussions. The specifics are confidential, but I can share my interpretations with you.


I’m in attendance because we are doing a breakout session focused on Business Intelligence and Database, with a goal of understanding customer reactions to everything from database and BI appliances to recent mergers and acquisitions like EMC & Greenplum and IBM & Netezza, and the evolution of new technologies such as SAP HANA, columnar stores, etc.


What seems to be fueling a lot of discussion is a lengthy paper by Philip Winslow, a Research Analyst at Credit Suisse, entitled “The Need for Speed – How In-Memory and Flash could transform IT architectures and drive the next ‘Killer Apps’”. While you would need to acquire the full paper from Credit Suisse, we do have a summary presentation on The Need For Speed paper.


In short form, the paper proposes that bigger, cheaper memory and solid-state storage are creating new opportunities for breakthroughs in both OLTP and real-time analytics. The paper also outlines which software vendors are best prepared to exploit this market evolution. I happen to think they are right; the only question is exactly when it all comes together to create the right performance at the right price point.


I think REAL, real-time BI is the Holy Grail, and there isn’t a business out there that doesn’t want to make smarter, timelier decisions, with the flexibility to do REAL ad-hoc queries in response to changing market situations. The question has always been: what does it cost, and what’s the ROI? The big guys do it today - they just spend a lot of money. The proposition is that NVRAM could deliver price points that make this a reality for the masses – then it’s just a “mere matter of software.”


I suggest you check out the Winslow slides, and stay tuned for my next blog, which will report back on the high points of what we heard from the advisors.

I’d love to hear your reactions to Winslow’s views.

Last week “cloudgate” entered the Twittersphere, and the concepts of cloud availability and security went from analytical concepts to gut-wrenching concerns (especially for my friends on Foursquare… would they still be mayor once the app came back online?). All kidding aside, cloud computing industry hubris was given a healthy dose of reality. An interesting perspective on this event was provided by George Reese.




While the future of cloud computing continues unabated as a fantastic opportunity (and looks more like the real Cloud Gate shown above than the doom and gloom of last week’s prognosticators), the need for hardened solutions based on thoughtful, enterprise-ready design has never been more evident. This starts with development at the confluence of hardware and software and excellent engineering insight to build layer upon layer of cloud infrastructure, something I spoke about in my recent blog on Day in the Cloud in Beijing.


A perfect example of this engineering insight was displayed in a recent conversation on cloud security I had with HyTrust CTO Hemma Prafullchandra. Hemma is one of those people who almost vibrates from too much intelligence… She understands IT compliance issues inside and out, as well as the layers of challenges in IT compliance and security when applying virtualization and cloud to the data center. She was nice enough to provide me with a primer on the challenges IT managers face and how HyTrust is delivering security controls that are as agile as workload virtualization policies inside a cloud environment. I found the conversation incredibly insightful and reflective of IT’s continued focus on security as a key requirement to address for widespread cloud computing adoption. If you’ve got questions for Hemma and the folks at HyTrust after listening to this podcast, please comment.

While I may not have the name recognition of say, Dr. Moira Gunn or Intel Rock Star Ajay Bhatt, I did have the good fortune to be interviewed by Allyson Klein of Intel Chip Chat fame on a new Intel Intelligent Node Manager podcast.  Despite not being a syndicated technology expert nor a co-inventor of USB, Chip Chat always helps me to reach out to technology enthusiasts to let them know what is going on in the industry.


In this podcast, we discuss the next generation of Intel Intelligent Power Node Manager technology, industry adoption updates, and how instrumentation is critical to improving data center efficiency in terms of operational costs, reliability, and availability. Node Manager is really gaining momentum in terms of end user deployments and OEM/ODM support - stay tuned for more news on this around IDF in September.



Thanks for listening - I am looking forward to your thoughts and comments on this podcast and the next generation of Node Manager technology.




"We’re moving the RISC based applications to Xeon based servers. What size server do I need?" This is usually the first question, and it is the right question - but at the wrong time. By the time it’s the right time, you’ll have the answer well in hand. I’m going to explore the answer here and in future posts.

Image attribution: Flickr User: aussiegall



My response is, ‘Why are you sizing and buying the end-state server when you haven’t even started the migration process? First you need to see how well your application(s) fit on the target Xeon based server.’ Like most shops, you’ll run a pilot or proof of concept (POC) to determine feasibility.


This and the next few blogs will discuss the process of getting to the right-sized server for the pilot or POC. Previously I discussed sizing tools.


I'll focus on sizing your future server based on current performance and anticipated growth. Tools exist for precise measurement of your performance; they have a cost and are sometimes wrapped into consulting engagements. Instead, here I’m looking at tools for do-it-yourself performance monitoring and eventual sizing predictions. And remember, we’re just looking at the size of a server for the test platform. No need for exact precision here.


These tools are good for coming up with the target server for the testing.  Testing will prove to be the most accurate method to come up with the correct sizing.  But how do you determine what to use for the testing?

You want to look at:


  • CPU
  • Memory
  • I/O
  • Network


There are tools available in the operating system, or you may need to add the Sysstat package of performance monitoring tools, which provides sar, iostat, and mpstat, among others.



The system activity reporter (sar) tool reports performance data going back a number of days or weeks at 20-minute intervals (depending on how sar was configured). The other tools are for measuring performance over shorter terms; they start monitoring as soon as you start them up. For instance, the command vmstat 1 60 will have vmstat output data each second for one minute. This is not enough time to get a profile. You can set these tools up to run for a long time, but they’ll generate a ton of data. To make sure they continue running after you log off, use nohup before the command and pipe the output to a file. You should be able to load this data into a spreadsheet for graphing.
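As a small do-it-yourself example (with made-up sample data, and assuming the common Linux vmstat column layout where the CPU columns are us/sy/id/wa/st at the end), here is how captured vmstat output might be summarized before it ever reaches a spreadsheet:

```python
# DIY sizing sketch: parse vmstat output captured with something like
#   nohup vmstat 1 > vmstat.log &
# and compute average and peak CPU utilization. The sample below is
# fabricated; check your own vmstat header before trusting column indexes.

sample = """\
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 803512 129000 412340    0    0     5    12  110  200 12  3 84  1  0
 4  0      0 801200 129020 412600    0    0     0    40  540  900 55 10 30  5  0
 2  0      0 800900 129040 412800    0    0     0     8  300  500 25  5 68  2  0
"""

rows = [line.split() for line in sample.splitlines()[2:]]  # skip the two header lines
idle = [int(r[14]) for r in rows]   # "id" column in this layout
busy = [100 - i for i in idle]      # CPU busy = 100% minus idle

print(f"average CPU busy: {sum(busy)/len(busy):.0f}%, peak: {max(busy)}%")
# prints: average CPU busy: 39%, peak: 70%
```

The same pattern works for iostat or netstat logs; only the column indexes change.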


With this data there are a series of questions you are looking to answer.


  • How many users are on the machine concurrently? This isn’t the total number of named users or potential users, but the estimated number of users on at the same time. What are the resources used by each user?
  • How much memory is being used by the applications without any users?
    • Use the sar data to get this over a longer period of time.
  • What is the current load on the CPU at peak usage?
    • Measure over a business cycle. This can be a month, a quarter, or, rarely, a year.
    • Use capacity planning tools to get the data, or use the sar data.
      • Tune sar to sample more frequently than every 20 minutes, but be prepared for a LOT of data. Also have sar save data for the month rather than delete it.
    • (Let’s assume that the observed CPU wait states or queues will be handled by the new server.)
    • For core-by-core data you can use mpstat.
  • What I/O bottlenecks are there? The sar data or iostat will also show you the I/O queues if your SAN or NAS management tool doesn’t.
  • What network bottlenecks exist? To get network data, use netstat or, again, sar.


If you’re daring, this can be pretty simple. Check out the benchmark data on your existing SPARC or POWER system. Use or to find the system. Look at the same data for the Intel Xeon system you would use for testing. For instance, look at the benchmark for the Intel Xeon Processor E7 family.


So, if the SPARC server you are replacing is rated at a SPECint of (SWAG here) 45 and the Xeon based system you are buying to replace it is rated by SPECint at 253, I don’t believe you need to worry that you will have a problem with this Xeon based system as a test platform.
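That sanity check is just a ratio; a quick sketch makes the arithmetic explicit (the ratings below are the illustrative SWAG numbers from the text, not real published results):

```python
# Rough sizing sanity check from published benchmark ratings.
# These figures are the illustrative examples from the text, not real ratings.
old_sparc_rating = 45    # SPECint rating of the existing SPARC server
new_xeon_rating = 253    # SPECint rating of the candidate Xeon test platform

headroom = new_xeon_rating / old_sparc_rating
print(f"Approximate headroom: {headroom:.1f}x")  # Approximate headroom: 5.6x
```

With more than 5x headroom, the test platform comfortably covers the old system, which is all the precision a pilot needs.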


So now you have an approximation of what hardware you need to migrate from the old RISC system to Xeon based systems.  The next step is the POC or Pilot and the planning for that process is the subject of my next blog.



I would love to hear what you think of this.  I look forward to your comments.

I’m really excited about work we are doing with the Texas Advanced Computing Center (TACC) at the University of Texas at Austin and wanted to share some of the details with you. You can also listen to my recent podcast on MIC. (A link to the TACC announcement on this topic is here.)


You might remember Intel first talked about its plans for the Intel Many Integrated Core (Intel MIC) processors last fall at IDF, and our VP Kirk Skaugen gave an update at IDF Beijing earlier this month. The Intel MIC architecture provides optimized hardware performance for data parallel workloads. We have made a lot of progress on MIC since SC10 (where we did a demonstration on the impact of making football helmets safer), and we are accelerating our efforts still further. But today, I’d like to focus on work we are doing with TACC in the development, porting, and optimization of science and engineering applications for future Intel MIC processors.


We recently delivered our “Knights Ferry” software development kit (SDK) to TACC, and they have now started porting some of the more interesting science applications they encounter from all over the HPC spectrum. They have also started creating new science applications optimized for future Intel MIC products. Applications like molecular dynamics and real-time analytics involving massive, irregular data structures like large trees and graphs should all see benefit from this development over time.


One of the keys to our approach with MIC is having a consistent programming model and optimization approach between our Intel Xeon processors and our future MIC processors. Due to that shared architecture, developing and adapting applications for MIC is intended to be far more straightforward than alternative architectures. We expect this to ultimately save cycles for developers as compared to porting code to another architecture.


Our partner, TACC Director Dr. Jay Boisseau, has this to say about our collaboration on MIC: "We are excited to be working with Intel to help researchers across the country take full advantage of both future Xeon processors and forthcoming Intel MIC processors to achieve breakthrough scientific results. These powerful technologies will enable our researchers to do larger and more accurate simulations and analyses while using well-established current programming models, enabling them to focus on the science instead of the software."


I am often asked how our future MIC architecture products fit into our overall HPC strategy. The Intel MIC architecture, along with our Intel Atom, Intel Core, and Intel Xeon processors, creates a complete portfolio of optimized solutions for a broad set of mainstream HPC workloads. Our future Intel MIC products will enable customers like TACC to draw on decades of x86 code development and optimization techniques and create new science and new discoveries.


These are the kinds of applications that will solve some of the world’s biggest research challenges, as well as hopefully lead to great scientific discoveries in the future. THAT is what I love about my job.


Image courtesy of Flickr user fred_v


I had the fortune to attend the opening of the new B&D data center in Grenoble, France last week. It was a pleasure to visit beautiful Grenoble and see this new concept in sustainable data centers first hand. The thinking here is in many respects truly unique in its scope, setting a high standard for sustainable data centers.


The B&D data center opened April 7 in Grenoble. The first point of interest about the data center is that it’s constructed in a pre-existing multi-story industrial building. To be sure, this reuse constrains the use of some innovations like free-air or convective cooling, but reusing a building and an industrial site obviously reduces environmental footprint – also a huge contribution to sustainability.




That said, the forecast electrical efficiency achieved in the data center at full build-out is an impressive PUE = 1.35 - excellent for a Tier 3 data center (for instance, comparable to Cisco’s recently announced Green DC). There isn’t a lot of room for air handling in the structure, so they employed a rack-level system developed by Schneider to isolate hot and cold aisles. Such a solution achieves a very high level of efficiency and would be an excellent retrofit model for existing inefficient data centers still using CRACs and floor tiles.


But the B&D data center is not just about achieving world class PUE; it is designed for a much broader perspective on sustainability. For instance, the engineers in the data center optimized their power factor, achieving an impressive facility-level power factor of 97%. This point is often neglected, or infrequently reported.


The data center is powered by renewable energy (electricity from hydroelectric, photovoltaic, and GEG ENeR’s wind farm sources), and hence the facility is 100% carbon-free. Thus, on the basis of the new metrics of data center sustainability from The Green Grid, the CUE = 0. Comparable results for a data center powered entirely by coal-generated electricity would be on the order of 3 kg CO2/kWh.


Another unique element is that the Business & Decision data center uses water from Grenoble’s underground aquifer for cooling. The water emerges from the ground at 14C and is returned at a temperature below 19C (in the future, some of this heat will be used to heat adjacent buildings). Via heat exchangers, an independent closed-loop cooling system within the data center is responsible for the actual cooling of the air circulation units. This ensures essentially zero water use in the data center, and hence a site WUE near zero (no water is used for humidification). The folks involved in the B&D data center pointed out to me that in case of emergency, provision is made to be able to "consume" water to cool the data center; however, since this is a back-up scenario only, those considerations are not included here. This compares to what might be a typical usage of 7.7 liters/kWh in a typical open-loop data center configuration (in other words, for a PUE 2.0 data center).


These results are summarized in the table below.









Metric          B&D data center     Typical data center*
CUE (carbon)    0                   3.1 kg CO2 / kWh**
WUE (water)     ~0                  7.7 Liter / kWh***


*Based on assumed industry average PUE

**Based on US DOE estimates of carbon content from coal.

***Typical based on a reported 360,000 gallons per day for a 15 MW data center and PUE of 2.0



It’s interesting to contrast this on a per server basis for an assumed average power of 200W per server.


Per 200W server     B&D data center     Typical data center
Power overhead      70 Watts            200 Watts
Carbon per hour     0                   0.3 kg/Hour
Water per hour      ~0                  1.5 Liter/Hour
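Under two stated assumptions (overhead power is IT power times PUE minus one, and open-loop water use scales with IT energy at 7.7 liters per kWh), the per-server figures can be reproduced with a quick back-of-envelope calculation:

```python
# Back-of-envelope check of the per-server comparison. Assumptions: overhead
# power = IT power * (PUE - 1); open-loop water use = 7.7 L per IT kWh;
# the closed-loop B&D design uses essentially zero water.

IT_POWER_KW = 0.2  # one 200 W server

def overhead_watts(pue):
    """Facility power consumed beyond the IT load itself, in watts."""
    return IT_POWER_KW * 1000 * (pue - 1)

print(f"B&D overhead (PUE 1.35):     {overhead_watts(1.35):.0f} W")   # 70 W
print(f"typical overhead (PUE 2.0):  {overhead_watts(2.0):.0f} W")    # 200 W
print(f"typical open-loop water:     {IT_POWER_KW * 7.7:.1f} L/hour") # 1.5 L/hour
```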



I encourage you to find out more about this state-of-the-art data center by visiting them in person or online. If you are so privileged, you might even get a chance to peek at their online metrics page, where you can see in real time the energy use, PUE, and other factors about the data center operation.



The dust continues to get stirred up following Oracle's announcement regarding plans for the Itanium processor and HP-UX.  Plenty of authors have been commenting on the subject of late, often concluding that the announcement has negative consequences for HP customers who've been relying on Itanium and HP-UX to run their Oracle-based mission-critical database workloads.


Far from being on the verge of falling, however, the sky is actually very bright for those customers.


That's because, for pretty much the first time in the history of the enterprise relational database market, those customers have a perfectly viable alternative readily available.


That option is IBM DB2.


"Why is DB2 a more viable option for an existing Oracle customer today than in the past?" you might well ask. To answer that question, a bit of history is in order.


E.F. Codd published his seminal paper, "A Relational Model of Data for Large Shared Data Banks", in 1970.  I was only 9 at the time, so it wasn't on my reading list.  Which proved to be OK, since it took until I had graduated college, some twelve years later, for computers to become powerful enough to handle the compute requirements of processing SQL. Codd worked for IBM at the time his paper was published.  Oracle beat IBM to market with the first SQL database by a small interval, but Oracle and DB2 have been competing with one another basically since DB2's introduction in 1983.


I first installed DB2, on an IBM MVS mainframe, in 1985.  My first Oracle experience was a few years later - ironically enough using OS/2 running on one of the first Compaq SystemPro servers to come off the line.  So I've been watching this competition play out for some time.


SQL wasn't standardized at the beginning, so different implementations had different 'dialects'.  The first ANSI SQL specification came out in 1986.  It has served as the standard definition of what constitutes the SQL language itself ever since, with multiple revisions published over the years. The idea behind a standard definition of the language was to allow for easy portability of applications and database definitions between DBMS's that implemented the standard.  As long as the applications and database definitions adhered to the standard, went the theory, portability would be preserved.


From the very beginning, however, commercial SQL DBMS vendors have provided proprietary 'extensions' to their implementations that tempted programmers and DBA's to sacrifice portability in favor of optimized performance and functionality.


The inevitable result was effective lock-in to a particular DBMS.

Once programmers and DBAs started down the slippery slope of using proprietary in-database stored procedure languages and SQL semantic extensions, migration between DBMSs required source-level changes that could be time-consuming, risky, and costly. That was the case until May 19, 2009, when IBM delivered its Oracle compatibility extensions with DB2 9.7.


DB2 9.7, for the first time in the history of database management systems, fully supported almost the entire collection of proprietary extensions to the SQL standard that Oracle's DBMS provides.  So instead of requiring a complete re-coding and re-testing of applications in order to migrate from Oracle to DB2, customers who wish to migrate from any Oracle database to DB2 merely have to unload their database contents and re-load them into DB2 9.7 - no source code changes or DDL changes required.  It isn't entirely seamless, since an unload/reload is still required, but the process is vastly less daunting than it ever was before, and much less intrusive than an entire platform change.


So if, like tens of thousands of your peers, you're running the core of your enterprise on HP-UX and Itanium and you'd like to keep doing just that for the foreseeable future, rest assured that you have a viable alternative available - one that's fully the equal of Oracle's in terms of suitability for mission-critical workloads.


The analyst community has been taking note of the arrival of this new capability.  George Weiss of Gartner Group, commenting on the subject of Oracle's announcement and its impact on end users, said:


"For Oracle applications written internally, you can move to IBM's DB2 9.7 (and future releases), which contains the Oracle database compatibility feature. This enables Oracle code to run on DB2 unchanged (with about a 97% compatibility as reported by references and Gartner clients)." 

(Source: Q&A: The User Impact of Oracle Ceasing Itanium Development; April 5, 2011; George J. Weiss, Andrew Butler, Donald Feinberg)


They say that imitation is the sincerest form of flattery.  If so, Oracle should be feeling very flattered at the moment.  And customers who are currently running Oracle databases on the Itanium processor and HP-UX should feel reassured.


The next time your Oracle sales rep shows up to talk about plans for migrating your Oracle databases to a SPARC platform, be sure to have a DB2 9.7 coffee cup prominently positioned on your desk.  Have a chat about your success in moving your Oracle databases and applications to DB2 without any need to change platforms.  Be sure to mention that your modern Itanium-based platforms already significantly outperform anything available using SPARC, and you're looking forward to the next decade's worth of continuing improvements from HP and Intel.


Should make for an interesting conversation!

New York’s Empire State College wanted to build up its distance learning program, but first it needed to expand its IT resources to accommodate a growing number of students and improve IT agility while reducing energy use. The college’s IT group decided to virtualize servers and desktops using a hardware foundation of IBM System x* servers based on the Intel® Xeon® processor 7500 series.


Through virtualization, the college scaled its IT resources substantially and increased flexibility while using 90 percent less energy. Now it can launch new online programs, accommodate many more students, and enable anytime, anywhere access to education.

“We wanted to maximize hardware density,” explained Curt King, assistant vice president of integrated technologies for Empire State College. “The Intel Xeon processors provide the raw compute performance and memory capacity we need to hold numerous virtual machines on each physical host.”

To learn more, read our new Empire State College business success story.  As always, you can find that one, and many others, in the Reference Room and IT Center.



*Other names and brands may be claimed as the property of others.

Those who read my blog on Day in the Cloud Beijing might have been worrying that I had found China’s version of Walden Pond and disappeared for months of quiet reflection on technology.  A bit of time in one of Beijing’s more interesting shopping districts snapped me out of it, and I’m back to tell you about what’s happening at IDF.


I’ve always loved China; I have visited since I was a child, and my experiences here helped shape my life’s direction in many ways.  I love the juxtaposition of the centuries-old culture of dynasties and emperors blended with a future horizon exploding with possibility.  Then there’s just the bigness of the place, and by bigness I mean everything.  Streets seem blocks wide.  Squares are football fields in dimension.  Population?  Some 20 million of China’s more than 1.3 billion citizens live in Beijing alone.  With all of this bigness, it’s hard to imagine the Chinese people ever doing anything on a small scale.  This is what makes China IDF so interesting.  With thousands of local industry players, IT managers, and regional influencers on hand to hear about Intel’s latest innovations, you get a sense that the beat of new computing innovation is leaping forth.


After two days of industry discussion, executive keynotes, and hallway chatter, my big takeaway from this year’s IDF is that innovation in the compute continuum has never seemed more alive.  And the major advancement fueling this compute continuum is the next generation of data centers.  Never before have data centers seemed more relevant at the center of a computing conversation, and never before has the rate of change in data center solutions seemed faster.  Kirk Skaugen, VP of Intel’s Data Center Group, wove this reality throughout his keynote.  He discussed Intel’s vision for the cloud (pdf) and highlighted the advancements of the Open Data Center Alliance and Intel Cloud Builders program.  He also discussed the changing economics of mission-critical computing, moving away from proprietary alternatives toward industry-standard solutions based on Intel architecture.  What do these things have in common?  Massive efficiency delivered to IT, with the result that corporations can invest IT budgets to drive innovation rather than to manage complex overhead and sky-high systems cost.  In all, a pretty heady opportunity if the industry can deliver on the vision… and we saw some proof of great progress.  Not only did Skaugen discuss the China delivery of mission-critical 8-way platforms (by Inspur), but he also highlighted the progress of cloud usage model definition coming from customers themselves… with help from Open Data Center Alliance steering committee member China Unicom.


This, coupled with the cloud innovation I’ve discussed previously from the Cloud Builders program and featured on the IDF stage through a compelling container demonstration, points to a world of opportunity for us to harness.  How do we get there?  Through some of the same approaches we’ve used for decades: industry-standard platform delivery.  Solution choice.  Technology advancements to meet emerging customer requirements.  Collaboration across companies to fuel innovation and push us farther.  And a little magic called Moore’s Law.


For those who want to see more of the highlights of Beijing  IDF check out

Following up on last week’s post about the Intel Xeon E7 launch, I got to go to Atlanta, GA for an Americas launch event held at the Institute for Health Technology Transformation Summit (iHT2).


It was a great event that detailed the information on the  new Xeon E7, with special focus on Healthcare Solution Providers like Siemens, McKesson, Allscripts, GE Healthcare,  and others, along with technical leaders from  healthcare IT customers.  See the slides  from the presentation below.




Just as Intel works with key enterprise datacenter ISVs to deliver an ecosystem that supports our new 10-core Xeon E7 series servers, it’s also important to work with ISVs in key markets like healthcare to deliver optimized solutions for those markets, along with understanding key customer concerns and needs.


There were lots of opportunities to chat about what’s going on with the government push for Electronic Medical Records (EMR), with most customer institutions hurrying to have efforts underway by 2012 to meet both initial schedules and 2015 deliverables.  I also got much more detailed info from local customer and partner meetings.  I’d love to hear more from end users deploying these systems.


While great healthcare solutions are being delivered today on both our Itanium and our new Xeon E7 high-performance systems, there is certainly longer-term interest in cloud-based solutions aimed at smaller hospitals and providers.  It’s clear, though, that HIPAA only adds to the data and security requirements necessary for even private cloud deployments.


Our event also featured a really cool presentation on high-performance teams by John Foley, a former Top Gun and Blue Angels pilot.


Also, if you’d like to see the latest E7 launch info, including an update on Itanium plans and the announcement of a new Itanium OEM, check out Kirk Skaugen’s speech at the Beijing Intel Developer Forum (IDF).

Maybe it’s because I’m in the capital city of a thousands-of-years-old culture, but I found myself getting philosophical viewing the Day in the Cloud event in Beijing yesterday.  As I walked around the room looking at partners showcasing their cloud computing solutions developed in partnership with our Cloud Builders program, I thought back on a quote from Thoreau that I’ve always liked:


If you have built castles in the air, your work need not be lost; that is where they should be. Now put the foundations under them.

Listening to the cloud zeitgeist over the past couple of years, one could easily surmise that cloud meant many things to many people.  Software services, the next wave of virtualization… or the web.  The death of the corporate data center, perhaps, or a one-size-fits-all model offered by a large solutions provider.  To say that the industry has spent a lot of time talking about cloud would be an understatement, but as you pull back the curtains on the hype, real value is revealed.  We see an ability to deliver IT resources when and where they’re needed, and to give users access to the services they need in a timeframe that is inconceivable in traditional terms.  With this change we also see the capability of IT organizations to streamline operations and place their investment in solution innovation and business value, not the steady stream of money going to oversight of current assets that bleeds IT budgets today.  And ultimately, if done well, we see a world where data centers can interconnect to flow compute capacity where required and provide both individual users and corporations the security of having the right IT compute capacity without the cost of over-provisioning.  So perhaps the industry is correct in building our castles in the air; that’s where we’ve always placed them, and that’s where the greatest visions of the future can become reality.


But just as we dream about this bold vision of a new computing reality, we must reflect on the fact that the data centers of tomorrow will continue to run on hardware, and this hardware, working together with software developed to provide the security, efficiency, and automation the cloud requires, will form the foundation of our computing future.  Intel Cloud Builders is all about building that foundation in the form of industry collaboration on focused cloud computing reference architectures that address unique challenges facing the evolution of data center computing to cloud models.  These are the sticky technical requirements that IT seeks: things like trusted compute pools, data-center-level automated efficiency, and standard approaches to cloud on-boarding that will let cloud infrastructure grow from the early deployments of today to the expected ubiquity of tomorrow’s vision.  And it’s in this ubiquity that cloud delivers its real value and the wide-scale federation of data centers becomes a reality.  Our second Day in the Cloud event featured eight of our most recent reference architectures from some of the leading cloud computing solutions providers in the China market, as well as some of our global partners.  For details on each of the reference architectures, check out my teammate Rekha Raghu’s blog.


What was fascinating about the event for me: as we started with presentations from Intel’s Jason Waxman and Billy Cox, and as they told the story of Intel’s vision for cloud and the details of the Cloud Builders program, one editor raised his hand and asked, “I have a simple question… why Intel and cloud?”  It’s a question I’ve heard before (not unique to China), and it’s an understandable one that usually comes from people who have heard all about the vision of cloud but not reflected on the technology stacks required to deliver it.  Billy described our deep engagement with leaders in the industry on our collaborative solutions delivery, but I think the editor still didn’t buy it.


Intel Cloud Builders Day in the Cloud demonstrations to the press.


But then we moved into the reference architecture room, with leading partners highlighting things like cloud on-boarding, trusted compute pools, and efficient data center delivery.  There was Lenovo’s solution that provided a client-aware experience based on embedded APIs from Intel running on the hardware.  There were senior executives from VMware and Microsoft speaking passionately about how their reference architecture solutions are solving customer requirements.  There were senior technologists talking about the value that reference architecture collaboration with Intel has represented in hardening their solutions.  As groups of editors moved from solution to solution in a frantic dance that Reuven Cohen, CTO of Enomaly, called “speed dating for the cloud”, the rare glow of deep collaboration was in the air, and I think our guests saw the foundation being poured for broad cloud deployments.  We like to view it this way: as it has many times in the past, Intel architecture forms the heart of this foundation, with the performance engine to fuel cloud workloads and technologies such as Intel Trusted Execution Technology (pdf) and Intel Intelligent Power Node Manager that help deliver the capabilities that will make cloud evolution something IT can confidently navigate in driving to that vision of cloud ubiquity.  Why Intel in the cloud?  Because we want to continue to enable the industry to build their castles in the air and provide the platform foundations to make these dreams a reality.

The Dell PowerEdge C1100 (aka Dell DCS CS24-TY) is based on the Intel 5500 (Tylersburg) chipset with support for dual-socket Intel processors.  This 1U (1.7”) system was designed by Dell for HPC, Web 2.0, gaming, and cloud builder environments.  Here are the four test servers that I have in my lab environment:



The small 1U form factor gives this dual-socket Xeon platform the capability of using 144GB of local RAM (18 DIMM slots x 8GB) for memory-intensive workloads, plus the granularity needed for power and thermal monitoring via Intel Intelligent Power Node Manager technology.  As for local storage options, you can opt for four 3.5” or ten 2.5” drives – and yes, SSDs are supported in the 2.5” form factor!


Support for the following operating systems and hypervisors is available:

  • Red Hat® Enterprise Linux® 5 (x86 or x64)
  • Red Hat® Enterprise Linux® 6 (x86 or x64)
  • Novell™ SUSE™ Linux ES 11
  • ESX 4.x
  • Microsoft Windows® 2008 R2


For the best experience using Intel Intelligent Power Node Manager technologies, it’s important to have the latest BIOS and BMC firmware loaded on your server, and across your server grid.  The BIOS and BMC firmware updates come in three different packages: Linux-based, Windows-based, or a bootable flash device.


Out of the box, the Dell PowerEdge C1100 is set up to deliver power and thermal readings.  Here is the process to set up the platform to get at the data.  If you’ve set up BMCs before, it’s very simple!


Press ‘F2’ to get into the BIOS on startup – you can see the BIOS and BMC version on the initial screen:


Next, navigate to the Server tab, where you’ll see “Set BMC LAN Configuration”.


Out of the box, your BMC should pick up a DHCP address if it’s on a DHCP-enabled subnet.  The default setup is Dedicated with DHCP Disabled, meaning you’ll have a dedicated management drop for the server and will have to assign an IP when installing it.  In our scenario, we have it set up as Shared-NIC with DHCP Enabled.


Once you’ve set up the IP address, you can get more info via the web user interface.  The web UI for the BMC is relatively simple, and this is where you can give a logical name to your management interface.


Simply open your browser and type in the IP address; this will open a login window to your management interface.  Log in with the default username/password from your documentation – out of the box, ours was root/root.



Once you’re logged into the BMC, you can go to the Configuration - Network tab where you can put in a logical name for your server’s management IP address – this simplifies things a bit for future usage.


Now save the changes, and you’re ready to start using a console to monitor and manage power usage on your Dell PowerEdge C1100 server.  Basic BMC manageability is included, so simple ipmitool command-line usage works.  To talk to the Manageability Engine, use the bridged IPMI commands found in the Intel Intelligent Power Node Manager 1.5 API.
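As a minimal sketch of that basic ipmitool usage: the snippet below pulls the standard sensor table over the LAN and parses its pipe-separated rows. It assumes ipmitool is installed on the management host; the host address is a placeholder, and the root/root credentials simply mirror the out-of-the-box defaults mentioned above.

```python
# Sketch: read and parse `ipmitool sensor` output from a remote BMC.
# The 10.0.0.42 address is illustrative; root/root matches the
# out-of-the-box defaults described in this post.
import subprocess

def parse_sensor_output(text):
    """Parse the pipe-separated rows of `ipmitool sensor` into a dict
    mapping sensor name -> (value, unit), skipping unreadable sensors."""
    readings = {}
    for line in text.strip().splitlines():
        fields = [f.strip() for f in line.split("|")]
        if len(fields) >= 3 and fields[1] not in ("na", ""):
            readings[fields[0]] = (float(fields[1]), fields[2])
    return readings

def read_sensors(host, user="root", password="root"):
    """Query the BMC over IPMI-over-LAN and return parsed readings."""
    out = subprocess.check_output(
        ["ipmitool", "-I", "lanplus", "-H", host,
         "-U", user, "-P", password, "sensor"],
        text=True)
    return parse_sensor_output(out)

# Example row format (illustrative values, not measured data):
sample = ("Inlet Temp       | 23.000  | degrees C | ok\n"
          "Pwr Consumption  | 112.000 | Watts     | ok")
```

The `-I lanplus` interface is the standard ipmitool transport for talking to a BMC over the network; the same pattern extends to the bridged Node Manager commands once you have the 1.5 API document in hand.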


For a much simpler implementation, you can also use the Intel Data Center Manager SDK to get a quick and easy visual representation of the system’s power and thermal (inlet air temperature) data.


Once you’ve set up the DCM SDK and added your system, here’s all the data you need:


Once entered, the system will be monitored and polled on a regular basis. Our scenario below shows a 30-second refresh over a 1-hour time window.


The end result is a great graphical representation of the metrics shown in the screen capture below: average power, maximum power, minimum power, average inlet temperature, and maximum inlet temperature.  While this is hardly an exhaustive list of the metrics captured, it gives you a quick graphical view of the data.
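The summary statistics in that view can be sketched in a few lines: given the stream of (power, inlet temperature) samples the console polls every 30 seconds, the displayed metrics are simple aggregates. The sample values below are illustrative, not measured data.

```python
# Sketch of the DCM-style summary metrics, computed from a list of
# (power_w, inlet_temp_c) samples.  Sample values are illustrative only.
def summarize(samples):
    """Aggregate polled (power_w, inlet_c) readings into the five
    metrics shown in the console: avg/max/min power, avg/max inlet temp."""
    watts = [w for w, _ in samples]
    temps = [t for _, t in samples]
    return {
        "avg_power_w": sum(watts) / len(watts),
        "max_power_w": max(watts),
        "min_power_w": min(watts),
        "avg_inlet_c": sum(temps) / len(temps),
        "max_inlet_c": max(temps),
    }

# e.g. three 30-second polls from one node (illustrative values)
samples = [(110.0, 22.5), (135.0, 23.0), (118.0, 22.8)]
stats = summarize(samples)
```

Over a 1-hour window at a 30-second refresh, that is 120 samples per node feeding these aggregates.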


Stay tuned for more Dell PowerEdge C server blogs!

I'm very excited about the launch of our newest small business server platform, the Intel® Xeon® Processor E3-1200 product family. Featuring 32nm process technology, this new family of single-socket Intel Xeon processors is getting rave performance reviews. By upgrading from your 4+ year old desktop-based server to a new Intel Xeon Processor E3-1200-based server, you'll get almost 6 times better performance (and even greater energy efficiency) when running business applications. You'll also get greater protection for your valuable business data, with servers that support ECC memory for 24x7 dependability, Intel® AES-NI for 58% faster encryption and decryption, and Intel® Rapid Storage Technology that sends an automatic email alert in case of a hard drive failure.


Also launched last week, the Intel® Server Board S1200BT product family is the first of a new line of server products available from Intel resellers for small businesses. The S1200BT supports the Intel® Xeon® processor E3-1200 series today and will support the next generation of single-socket processors from Intel, so you can feel good about your investment now - and in the future!


IT management can be a scary thought to a lot of small businesses, but with today's on-demand environment, it's more important than ever to have fast, reliable, anywhere and anytime access to critical business data. You can count on servers featuring the new Intel Xeon E3 processor to keep you online and keep your information secure. And check out this short video I made last month to see how easy it is to configure a small business server:


Back in the Saddle, Again

Posted by mshults99 Apr 11, 2011

(Apologies to Gene Autry for that title.)



Have you been wondering where old Shults got off to after posting his 'coming out' blog?


Well, wonder no longer.  I was traipsing about in New Zealand, enjoying a 2-month sabbatical.  I was thinking more about green-lipped mussels and Sauvignon Blanc than I was about mission-critical computing and Intel!

Sheep on a Bay of Islands hillside.

But I'm back now, so, to business...



If it's been so long since my last post that you've forgotten my background:


  • I'm the mission-critical segment strategist here at Intel.  You might think of me as 'Robin' to Pauline Nist's 'Batman'.  ("Holy MCA Recovery, Pauline!")
  • I've been at Intel since 1992, always part of the server portion of Intel's business.  I've seen my share of platform technology advances since I launched the Pentium Pro processor (Intel's first truly server-centric processor), back in 1995 (additional history here)
  • I've been involved with server database technology since 1982, when I joined Andersen Consulting fresh out of Rice University (go Owls!).  Andersen (now Accenture) taught me how to 'do mission critical' on mainframes and UNIX minicomputers.
  • At Andersen, I experienced a Eureka! moment when I discovered on a project that the rather limited 386-based PCs of the day, properly networked and running early versions of Oracle and SQL Server (then from Sybase), were capable, in the right hands, of running the same core enterprise applications that Andersen's customers had been paying so much money to build and run on mainframes!
  • Thinking that PCs just might be a better way of doing things, I suggested to the Andersen hierarchy that it might be worth pursuing.  They disagreed.  So I left to become CTO of a client/server startup - Business Systems Group.  That led to getting noticed by Intel, and here I am.  Almost 20 years later, I'm still here, much to my surprise!


I'm still here because I'm still excited about the server technology that Intel continues to create.  I'll be using this blog to tell you about some of that technology and why it might be relevant to you.


While it's true that it's Intel's name on my paycheck, I aim to avoid too much product-speak.  There's too much of that in this industry.  Having spent a decade on the implementor's side of the technology table, I think I have some idea of what you're facing out there.


Pauline is something of a legend in the mission-critical server industry segment.  She'll provide the overall perspective on all things mission-critical and Intel.  I'll focus on the intersection between business strategy and Intel mission-critical server technology.  Our colleague Wally Pereira fills out the troika with a focus on practical implementation issues from his position on the front lines of Intel-based mission-critical solution deployment.


Much of my focus will be on database technology, since the database management function is so fundamental to essentially ALL mission-critical deployments.  I don't pretend to have the industry contacts and deep technical savvy of a guru like Curt Monash of DBMS2 fame, but I think I bring a useful perspective when it comes to interpreting the events and trends of the database community as they impact the mission-critical-on-Intel community - in other words, YOU!  Please let me know through your comments if I'm hitting the mark.

In the coming weeks and months, I'll be blogging on at least the following topics.




The Itanium Processor and DB2: A secret that shouldn't be so secret

Oracle's Larry Ellison has caused a bit of a stir lately by announcing that Oracle will be halting new development for the Itanium processor and HP-UX.  You may not know that IBM's DB2 runs very well on the Itanium platform, and the latest version directly supports almost 100% of Oracle-proprietary database syntax.  You could have more options than you thought!

Big Memory and Databases: The coming confluence

DBMSs have always loved memory.  But only recently has memory become big enough, cheap enough, and reliable enough for users to contemplate putting the ENTIRE DATABASE in memory, using disk only for archival storage and recovery.  Innovative DBMS vendors are delivering technology to take advantage of big memory, and they're getting interesting results.

Big Memory and Analytics: Go real-time or go home

Big memory is good for both high-throughput transaction processing and analytics.  But the near-term opportunity is probably analytics, particularly of the real-time variety.

Big Memory and Semantic Databases: A 'Beyond SQL' breakthrough that matters?

Ever heard of 'triplestores' and the 'Semantic Web'?  Neither had I, until recently.  Then I ran across a small company whose technology is enabling some big customers to do some very big, innovative things.  I'll tell you more about it in this blog.



Please suggest other topics that you'd like to see.  And let's make this conversation active and interesting, shall we?  If you comment, I promise to respond as quickly as I can.

你好! Ni Hao! What a beautiful day here in Beijing at Day 0 of the Intel Developer Forum! Not a single cloud in the sky... Not a problem! We will make our own clouds with the Intel Cloud Builders program. After an outstanding success at the USA Day in the Cloud event, I am here at the Day in the Cloud PRC event, where a number of local press, analysts, and local cloud vendors are getting together to demonstrate the Intel Cloud Builders reference architectures - a great collaboration from our Cloud Builders partners in bringing this event together. The event highlighted how we, along with key industry advocates, are delivering on a cloud computing strategy of “Listen to customers --> Deliver technologies --> Develop the ecosystem.”  The reference architectures being demoed today include several regional vendors - Fujitsu, Inspur, Huawei, Neusoft, Lenovo, PowerLeader - and global vendors including VMware, Microsoft, Dell, and Enomaly.  The goal of each of these reference architectures is to help customers deploy their cloud by taking a set of use cases and solving a specific customer problem, like power management, secure client access, or building a cloud infrastructure. Any questions on what is in a reference architecture? Listen to my Conversations in the Cloud podcast for more information.


The event was kicked off by Jason Waxman, General Manager for High Density Computing. Jason highlighted the importance of this event, where several partners are getting together to demonstrate how to address the problems of building and deploying a cloud infrastructure. He articulated our Cloud Vision 2015: federated - sharing data securely across public and private clouds; automated - so IT can focus more on innovation and less on management; and client-aware - optimizing services based on device capability to enable a secure and consistent experience across the IA-based compute continuum. He also reinforced the value of the Open Data Center Alliance, with a membership of over 100 companies focused on creating a usage model roadmap to set requirements for interoperable data center solutions.


Here are the details of each of the reference architectures being demonstrated today:


Build Real Clouds with Enomaly ECP and Dell: The Enomaly Elastic Computing Platform Service Provider Edition (ECP SPE), running on top of Dell-based hardware built on Intel® Xeon® processors, forms an ideal platform for high-density and multi-tenant cloud infrastructure. When IT architects combine scalable Dell systems, the efficient Intel Xeon processor 5600 series, and ECP SPE, they can support very large clouds with many thousands of servers in complex designs. This reference architecture will help IT professionals quickly achieve the benefits of infrastructure as a service (IaaS) in very large organizations. It will be of most interest to organizations with unique, cloud-ready workloads that need to remain under close control. Check out the blog of Reuven Cohen, CTO of Enomaly Inc., to learn more about Enomaly cloud products.


Simplify Cloud Deployments with Fujitsu PRIMERGY CX1000 and VMware vCloud* Director: This Fujitsu Dynamic Cloud reference architecture is built on the PRIMERGY CX1000, an innovative scale-out cloud server infrastructure platform that allows companies to scale big by packaging 38 industry-standard x86 server nodes, based on Intel® Xeon® processor technology, into a dedicated datacenter rack with shared cooling architecture and a small footprint. The PRIMERGY CX1000 addresses data center density, power consumption, and heat dissipation in a one-step approach with its innovative shared cooling architecture, Cool-Central*. This architecture enables companies to see significant reductions in energy consumption and dramatic savings in data center space, thus removing strong inhibitors to cloud data center setup. VMware vCloud Director works with this solution to provide the interface, automation, and management features that allow enterprises and service providers to supply vSphere resources as a Web-based service.

Check out this awesome hardware with preconfigured vCloud stack being demonstrated today!


Accelerate to the Cloud with Huawei SingleCLOUD: Huawei SingleCLOUD* solution is designed for the cloud computing data centers of Cloud Service Providers and enterprise customers. Based on the SingleCLOUD solution, Cloud Service Providers construct network-based office environments which provide “pay as you go” server and storage services for enterprises, especially small and medium enterprises. This reference architecture discusses the Huawei SingleCLOUD solution optimized on Intel Xeon® processor-based platforms and describes how to implement a base-solution to build a more elastic and complex environment of cloud computing.


Efficient Power Management with Neusoft Aclome Cloud: Neusoft Aclome* provides a complete cloud computing solution for enterprise IT infrastructure, enabling customers to receive the benefits of the cloud without too much additional work to build and validate the solution. This reference architecture provides a step-by-step guide to building a cloud and optimizing power management using Neusoft Aclome and Intel Intelligent Power Node Manager.


Simplify your private cloud deployments with Microsoft System Center Virtual Machine Manager and PowerLeader rack servers: For cloud service providers, hosters, and enterprise IT organizations looking to build their own cloud infrastructure, the decision to use a cloud for the delivery of IT services is best made by starting with the knowledge and experience gained from previous work. This reference architecture outlines a private cloud setup using Windows Server, Hyper-V* and the Microsoft System Center Virtual Machine Manager Self-Service Portal* 2.0 (VMMSSP) on the Powerleader Power-Rack* (PR) Series Servers, powered by the Intel® Xeon® processor. VMMSSP is a free, partner-extensible portal that enables private cloud and IT as a Service with Windows Server, Hyper-V and System Center Virtual Machine Manager. With the portal, customers and partners can dynamically pool, allocate, and manage resources to offer Infrastructure as a Service (IaaS).


Policy based Power Management with Dell and VMware: VMware vSphere* and Intel® Intelligent Power Node Manager (Intel Node Manager), integrated by using Intel® Data Center Manager (Intel DCM), extend the ability of cloud and virtualization resource management engines. This solution reduces total cost of ownership by enabling users to monitor and cap power in real time at the server, rack, zone, and data center levels. This reference architecture details how the use of Intel Node Manager, Intel DCM and VMware vSphere on Dell* PowerEdge* servers yielded power savings through the deactivation of unnecessary hosts and the migration of workloads to fewer servers during periods of low resource utilization. It will be of most interest to administrators and enterprise IT professionals who seek power management solutions to achieve better power efficiency within new or existing data centers.


Client-aware Cloud Demo with Lenovo and Stoneware: Lenovo and their ISV partner Stoneware, along with Intel, have collaborated to enable platform-optimized delivery of cloud services.  Secure cloud access (SCA) is based on a balanced approach to delivery of cloud services that takes advantage of the intelligent infrastructure enabled by Intel end-to-end cloud solutions.  Together with Lenovo and Intel, Stoneware has enabled their application to detect the compute, context, and capabilities of ThinkPad and ThinkCentre platforms based on 2nd Generation Intel Core and Core vPro processors.  Equipped with this information, users can dynamically optimize service delivery based on the ability to execute all or some portion of the application either in the cloud data center or on the end-point device.


Design and deploy a cloud with Inspur Vertical Cloud: “Inspur Vertical Cloud” is focused on addressing the specific requirements defined and maintained by a particular vertical business or a group of businesses in the same vertical segment. This reference architecture is focused on helping industry partners to build cloud platforms that meet the basic needs of  the vertical customers, so that the cloud solutions will be simplified, energy efficient, secured, and intelligent. It will allow users to easily access cloud services provided by such cloud platforms and enjoy the full benefit of cloud computing.


Don't worry if you are not at the event. We will be posting more information on the demos on the Intel Day in the Cloud website.  To learn more about each of these reference architectures being demonstrated today, please visit the Intel Cloud Builders reference architecture library.


Signing off from Beijing! Have a great time at Intel Developer Forum! 再见 zài jiàn!

To improve its call center infrastructure and provide better and faster service to its customers, Time Warner Cable replaced the 11-year-old infrastructure hosting its call center system. The old infrastructure was based on Sun SPARC* servers and thin clients running the Oracle Solaris* operating system. The new infrastructure was based on HP ProLiant* DL-series servers and HP ProLiant* BL460c Server Blades with Intel® Xeon® processors 5500 and 5600 series.


The results were dramatic. The new infrastructure lowers total operational costs by 31 percent, improves customer service representative productivity by five points, and speeds call fulfillment—building customer retention and satisfaction. It also saves 20 percent in energy costs over the old hardware and even reduces personnel costs by mitigating the need for high-paid UNIX* expertise.


“HP thin clients and ProLiant servers enable us to bring our requirement to the business: great technology that is easy to use and manage,” explained Cesar Beltran, vice president of information technology, East Region, at Time Warner Cable. “It’s a great investment. And what I like most is the technology is transparent to our business—doing what it’s supposed to do.”


To learn more, download our new Time Warner Cable business success story. As always, you can find this one, and many more, in the Reference Room and IT Center.



*Other names and brands may be claimed as the property of others.

Canada’s Alpha Trading Systems needed to enhance its IT infrastructure to create a stronger competitive advantage for executing trades, supplying market data, and providing technology-related services. The company conducted a server proof of concept with HP ProLiant* DL380 G6 and G7 servers featuring Intel® Xeon® processors 5500 and 5600 series and found the business technology improvements it was looking for.

For example, sub-1-millisecond response times bring the company’s technology infrastructure to a lower level of latency and boost usage among market participants. Also, the new servers deliver up to three times the load throughput capacity of the previous environment and address unpredictable spikes in market activity. The new servers mitigate the need for a data center expansion, since they fit in nearly the same server footprint as the old configuration. Finally, Alpha Trading Systems is now assured of high availability and business continuity with a fully redundant, fault-tolerant server and application stack.


“The biggest surprise for me is the amount of performance gain that we receive from ProLiant DL380 G7 servers over our original infrastructure,” explained Karl Ottywill, chief information officer for Alpha Group. “The performance has far exceeded my expectations, and it allows us to bring faster speed and higher quality execution to the equity trading market.”

To learn more, read our new Alpha Trading Systems business success story. As always, you can find this one, and many more, in the Reference Room and IT Center.



*Other names and brands may be claimed as the property of others.

The Xeon E7 was released on April 5th. The SGI server here in our lab can now support 320 threads on 160 cores to process data, along with its 512GB of RAM (expandable to 4TB with the existing configuration!). And this beast can be expanded to 5,120 threads on 2,560 cores.


What a long strange trip it’s been.  30 years ago I stood in a semicircular crowd around a foldup table where a new IBM PC sat.  It had 64KB of RAM, a tape port (for storing data on a cassette tape), and a 5 ¼ inch floppy drive (no 8 inch floppies for this baby).



This IBM PC sported an Intel 8088 processor (an 8-bit bus version of the 16-bit 8086 processor), and it was amazing. I was working for FORTH, Inc. then, and I was impressed. The Intel architecture has moved so far from those days of the Intel 8086.




But enough of the past; the present holds great potential, and a lot of water has gone under the bridge. The Xeon E7 brings AES (Advanced Encryption Standard) encryption and decryption right into the core, where this security capability runs with almost no apparent latency for the application. The recent theft of email addresses from Epsilon demonstrates the dangers of storing data in plain character format.


Many enterprises have shied away from database encryption due to the inherent latency of the decrypt and encrypt process, which is handled either in software or by an accelerator card on the PCI bus. That built-in latency can keep an application from meeting SLA response-time requirements. Yet with encrypted data in a database, the only time the data is exposed in human-readable format is when it is delivered to the client system, and even then only the immediate data at hand is exposed instead of the entire data set. This is a significant security enhancement for mission-critical applications. Now, with the Xeon processor E7 family, the trade-off between living with significant latency for security or living with no encryption at all is gone. Corporate security officers and developers don't have to go through those cost-benefit breakdowns any longer.


Another big gain for system administrators and corporate security officers is the inclusion of Trusted Execution Technology (TXT) in the processor. TXT ensures that an application or virtual machine is spawned only in a 'known good' environment. Unlike current anti-virus software that looks for 'known bad' signatures, a process that can be charitably described as leap-frog between the virus writers and security vendors, TXT in the core looks for a 'known good' signature by comparing the configuration of the new hosting platform to a signature of the corporation's standard platform.
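As a rough analogy of that 'known good' idea (a toy sketch only, not how TXT actually measures a platform), compare a digest of the platform configuration against a stored known-good measurement:

```python
import hashlib

# Toy analogy (not the actual TXT mechanism): reduce a platform
# configuration to a digest and allow a launch only on an exact match
# with the stored "known good" measurement. The config strings below
# are made-up placeholders.
def measure(config: str) -> str:
    return hashlib.sha256(config.encode()).hexdigest()

KNOWN_GOOD = measure("bios=1.2;hypervisor=esx-4.1;option_roms=standard")

def launch_allowed(config: str) -> bool:
    # Any change to the platform configuration changes the digest,
    # so a tampered environment no longer matches the known-good one.
    return measure(config) == KNOWN_GOOD

print(launch_allowed("bios=1.2;hypervisor=esx-4.1;option_roms=standard"))  # True
print(launch_allowed("bios=1.2;hypervisor=tampered;option_roms=standard"))  # False
```

The key property is the same one TXT relies on: matching is whitelist-based, so new, unknown tampering is caught without needing a signature for it.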


And now, with the 32-nanometer process, Intel has been able to squeeze 10 cores onto one processor, along with accelerated I/O features and other enhancements. Each core is dual-threaded, so the operating system sees 20 cores per socket. Look at Task Manager in Windows or /proc/cpuinfo in Linux: with one 4-socket machine you'll see 80 cores.
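As a minimal sketch, the same count the OS shows in Task Manager or /proc/cpuinfo can be read from a script:

```python
import os

# Report the logical CPUs the operating system sees. On a 4-socket
# E7 machine this would print 80 (4 sockets x 10 cores x 2 threads);
# on whatever machine runs this, it prints that machine's count.
logical_cpus = os.cpu_count()
print(f"Operating system sees {logical_cpus} logical CPUs")
```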



All this brings me to the point of explaining the taxonomy here.

  • A 4 socket machine has 4 NUMA nodes
    • Each Socket has one chip or processor
      • Each chip or processor has 10 physical cores
        • Each Core has two threads for 20 threads per socket
          • The operating system sees each thread as a core.
  • Whew - Each of these threads blows away the Xeon cores of the past in speed, reliability, and security.


A rough heuristic

That IBM PC I brought up at the beginning of this blog is now 30 years old. At first it was viewed as a hobbyist's toy, like the TRS-80 from Radio Shack. But businesses found the IBM PC expandable, and Intel, following Moore's Law, began the march to ever smaller and more powerful processors, giving consumers and business users more bang for their buck. The old 8086 architecture is barely recognizable in these modern processors.

Now the SGI machine in the lab here will have hit the world record performance in SPECint_rate_base2006 of 27,900 (as soon as we finish swapping out the Xeon 7500s and installing the E7s; yes, they are socket compatible with a BIOS update).

Today Facebook announced the Open Compute Project, which opens their innovative platform and data center specifications to the industry and showcases the advancements in their new Prineville data center. The data center has a remarkable design PUE somewhere below 1.07, according to Jay Park, Facebook’s Director of Data Center Design and Construction. This implies that less than 7% of the energy consumed in the data center goes to needs other than the IT equipment.
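The arithmetic behind that claim is straightforward: PUE is total facility energy divided by IT equipment energy, so the non-IT share of consumption is (PUE - 1) / PUE. A small sketch:

```python
# PUE = total facility energy / IT equipment energy, so the share of
# energy going to non-IT needs (cooling, power distribution losses)
# is (PUE - 1) / PUE.
def non_it_fraction(pue: float) -> float:
    return (pue - 1.0) / pue

# A traditional facility near PUE 2.0 versus Prineville's design PUE.
print(f"PUE 2.00 -> {non_it_fraction(2.00):.1%} overhead")
print(f"PUE 1.07 -> {non_it_fraction(1.07):.1%} overhead")
```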

What is even more remarkable, however, is the extent to which Facebook has improved what we’ve called “Grid to Gate” efficiency. PUE only measures improvement of power distribution and cooling equipment in the data center. Once PUE reaches such low values, other sources of inefficiency become more significant.

And this is where Facebook has excelled. With some willing support over the last 18 months from Intel engineers, Facebook has implemented several key design features, from carefully selected high efficiency power supplies and distribution equipment, to a novel “un-shadowed” platform design that optimizes thermal efficiency by enabling lower power, higher efficiency fans to meet system cooling requirements in a data center with higher ambient temperature.

Another outcome of the work with Facebook that benefited Intel was the development and maturation of new technologies. For example, the reboot-on-LAN function (rebooting a server by sending a special packet through the network) was born from Intel’s work with Facebook.

Facebook has pushed their data center into a new regime of efficiency, from PUE alone to “Grid to Gate” efficiency. Congratulations to the engineering teams at both companies!



** Based on a 55% performance increase from Intel™ Xeon™ L5520 processor to the Intel Xeon X5650 processor using Facebook representative model benchmarks.  Combined with Facebook’s overall 38% DC power reduction = 2.5 perf/W improvement = 60% W/perf improvement for a 60% reduction in power consumption per user.   Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors.  Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions.  Any change to any of those factors may cause the results to vary.  You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.  Configurations: Intel EPSD Willowbrook: 2 x L5520 2.26 GHz “Nehalem” CPU, 6 x 2 GB DDR3 DIMM 1066 MHz (12 GB), 1 x 7.2k SATA HDD; Intel EPSD Willowbrook: 2 x X5650 2.66 GHz “Westmere” CPU, 6 x 2 GB DDR3 DIMM 1333 MHz (12 GB), 1 x 7.2k SATA HDD.  Test = Facebook “Dyno” Web Tier workload.  Tests completed by Facebook “Labs” team with Intel support.  For more information go to

Today, I’m participating in Facebook’s Open Compute Project that showcases Facebook’s latest advancements to drive data center efficiency and make their innovative specifications open to the industry. The announcement is an impressive accomplishment for Facebook, and we’re excited to be part of it.

Our collaboration represents more than 18 months of engineers from both companies working together to optimize performance per watt and develop a highly efficient board design. The results? A 60% reduction in  power consumption per user based on the leading energy-efficient performance of Intel Xeon Processor-based platforms, jointly developed system optimization, and Facebook's aggressive power optimization.


These world class results reflect our shared mission for utilization of industry standards and open infrastructure to advance data center efficiency with a goal of more data centers achieving PUE of 1.2 or lower. While Facebook will benefit directly from the efficiency of their new data center, they aren’t the only winners here. The collaborative effort pushed Intel to deliver technology for greater efficiency, which will ultimately benefit a broad base of data centers across the globe.

Facebook’s decision to openly share their data center design will also provide inspiration for other IT leaders facing similar technical challenges.

In all, it’s a big win for IT innovation.







Yesterday, Intel launched new processors in the E7 family. Along with the performance, security, and reliability features of this new family, we’ve also introduced capabilities that manage platform power for greater energy efficiency.


For the mission-critical computing segment, the key market requirements are performance, reliability, and security. The top workloads for this segment include mission-critical and high-volume database transaction processing applications. The newly launched E7 processors (previously code-named Westmere-EX) and server platform are designed to achieve exceptional performance and memory scalability using a buffered memory architecture.


However, large memory can consume a lot of power, so in this new generation we have substantially enhanced the memory power management technologies. These capabilities have been shown to reduce the idle power of the platforms by over 120 watts (based on Intel measurements on a model system configuration). This reduction can translate to hundreds of dollars over the useful life of the server, depending on the usage.
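As a back-of-envelope check of that "hundreds of dollars" figure, under an assumed electricity rate and service life (both placeholders, not Intel figures):

```python
# Rough savings estimate for the 120 W idle-power reduction above.
# The rate and service life are assumptions for illustration only.
IDLE_POWER_SAVED_W = 120     # from the platform measurement above
RATE_PER_KWH = 0.10          # assumed electricity rate, USD
SERVICE_YEARS = 4            # assumed useful life of the server
HOURS_PER_YEAR = 24 * 365

kwh_saved = IDLE_POWER_SAVED_W / 1000 * HOURS_PER_YEAR * SERVICE_YEARS
print(f"{kwh_saved:.0f} kWh saved, about ${kwh_saved * RATE_PER_KWH:.0f}")
```

With these assumptions the saving comes to roughly $400 over the server's life, consistent with the "hundreds of dollars" claim; an always-idle server is the best case, so real savings depend on utilization.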


We understand that customers' first requirements for the E7 segment are the great performance, reliability, and security our platforms deliver. However, Intel is also committed to going the extra mile to make our servers as energy efficient as possible. The new E7 family is a great example of that commitment.

When Compaq introduced its first Intel-based ProLiant server in 1989, who could have imagined how it would change the server industry? Today’s announcement of ProLiant and BladeSystem servers based on Intel® Xeon® E7-2800 and E7-4800 processors takes the vision of bringing PC economics to the enterprise to new heights.


For years, companies and their mission-critical workloads have been trapped inside expensive, lower-performing RISC-based platforms. But with IT budgets shrinking and the cost, risk, and dependence involved in maintaining and replacing RISC environments increasing, companies need an alternative.


The question of migrating mission critical workloads has always been a matter of trust – will anything be as reliable as RISC?  HP and Intel answer with a resounding YES.  New HP ProLiant and BladeSystem servers based on Intel® Xeon® E7-2800 and E7-4800 processors provide balanced scaling (up to 4 TB/10 cores/20 threads per processor), greater efficiency (Intel® Intelligent Power and Scalable Memory Buffer),  and increased resiliency (DDDC and HP Memory Quarantine).  All this for a fraction of the cost companies now pay for their RISC environments.


If we consider HP’s recent acquisition of Vertica, the integration of new HP ProLiant Intel-based servers with Vertica software significantly increases the ability to analyze massive amounts of data simply, quickly and reliably, resulting in “just-in-time” business intelligence.


While cost may be the motivating factor for migrating RISC environments, this announcement proves HP and Intel are committed to delivering on the promise of bringing PC economics to the enterprise with innovations that continually increase the scalability, efficiency and resiliency of your environment.


Visit to learn more about what HP is doing to scale-up to new heights.

Notice anything different about today’s Intel® Xeon® processor announcements? If you hadn’t noticed, the processor numbers are different. Yes, the numbers are normally different with each processor generation, but I'm talking about the new format, or construct. So why do this? The reality is that the current system needed to better reflect key processor capabilities and, after many years of use, was starting to run out of numbers. With broader product choices for different IT manager needs, from data centers to high-performance clusters to small business servers, flexibility, clarity, and multi-year stability were the key objectives for the new numbering system.


So now you're probably saying, "Oh great, now I have to relearn new numbers." As a member of the Intel team that developed the new numbers, I understand this fair sentiment. However, there is a "but wait": in short, more meaning has been built into the Intel® Xeon® processor number, and understanding this meaning will help you choose the right processor. Of course, one needs more product information than just the number when selecting a processor for a server or workstation deployment, but the number plays a valuable role.


Let’s take a look at an example.


[Figure: New Number Construct, detailed]

It breaks down into:


1.  Brand (no change here).

2.  Product line (there are three: E3, E5, E7).

3.  Product family.

4.  Version (v2, v3, etc.).


The ‘product family’ actually encodes further meaning. The first character tells how many processors are natively supported in a system; the second character, the ‘socket type’, signifies processor capability; and the third and fourth are the ‘SKU number’, which, along with the whole number, represents a collective group of capabilities at a given price. So, in the above example, it’s the ‘E7’ product line. The ‘4’ in the product family means it supports four processors natively in a system, and the ‘8’ is the socket type. A socket type of ‘8’ supports a higher general level of system capability, for example more memory and I/O, than a socket type of ‘2’. A given socket type digit does not change over time, meaning the follow-on to the Intel Xeon processor E7-4800 v2 product family would be the Intel Xeon processor E7-4800 v3 product family. The ‘8’ didn’t change; all that changed was the version number (‘v2’ to ‘v3’).
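To make the construct concrete, here is a hypothetical parser for the fields described above. The field names and the helper itself are my own illustration, not Intel's definition, and only the documented pieces are modeled:

```python
import re

# Parse a post-2011 Intel Xeon processor number such as "E7-4830 v2"
# into the components described in the text. Illustration only.
PATTERN = re.compile(
    r"^(?P<line>E[357])-"           # product line: E3, E5, or E7
    r"(?P<sockets>\d)"              # processors natively supported
    r"(?P<socket_type>\d)"          # socket type (capability level)
    r"(?P<sku>\d{2})"               # SKU digits within the family
    r"(?:\s+(?P<version>v\d+))?$"   # optional version, e.g. "v2"
)

def parse_xeon_number(number: str) -> dict:
    m = PATTERN.match(number.strip())
    if not m:
        raise ValueError(f"unrecognized processor number: {number!r}")
    parts = m.groupdict()
    # The first generation carries no version reference.
    parts["version"] = parts["version"] or "v1 (implied)"
    return parts

print(parse_xeon_number("E7-4830 v2"))
```

Note how the socket-related digits ('4' and '8') survive a generation change unchanged; only the version suffix moves.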


So, where is the ‘version’ reference in the products launching today? For the first processor generation using the new numbers, there won’t be one; the version reference begins with the second version, ‘v2’. Additionally, beginning with ‘v2’, Intel Xeon processors with a common version reference will share a common microarchitecture.


The new number system was rolled out today with the Intel® Xeon® processor E7-8800/4800/2800 and E3-1200 product families. All future Intel® Xeon® processors will adopt this new numbering.


Are the numbers so easy a caveman could understand them? Maybe not, but hopefully the processor number provides key information to help in the purchase process. Let me know your thoughts.

I have grown up around small businesses all my life, from my father’s sandwich shop, to his hamburger joint, to his local restaurant. I have seen firsthand the types of challenges that small businesses face: long hours, high employee turnover, demanding customers, and lots of bookkeeping!


We have seen leaps and bounds in the computing industry over the past several decades and its impact on small businesses has been substantial. My father has come a long way from the days when he would ring up customers on an old-fashioned cash register. But what is next? And what types of IT challenges can computers, and namely servers, solve for small businesses?


Today Intel is introducing the new Intel® Xeon® processor E3-1200 product family – the ideal entry level server for small businesses. The server is typically the core of a small business’ IT, and servers based on the Intel Xeon processor E3-1200 product family are designed to increase business responsiveness, enhance business protection, and deliver 24x7 dependability, all at costs comparable to a desktop.


With up to 30% better performance than prior-generation servers, the Intel Xeon processor E3-1200 product family runs business-critical applications faster, enabling improved employee productivity. And with Turbo Boost Technology 2.0, you get an even greater burst of speed automatically when your server needs it most.



Security continues to be a real and costly threat, with the average cost of lost business due to a data breach in the US in 2010 at around $4.5 million. The built-in security capabilities of the Intel Xeon processor E3-1200 product family enable small businesses to secure their data with encryption acceleration technologies like Intel AES-NI, and secure startup of the operating system with Intel TXT. And, Intel® Rapid Storage Technology (Intel® RST) provides continual protection from loss due to a hard drive failure or data corruption, and should it happen, sends an automatic alert so hard drives can be swapped out quickly and the business can stay up and running. These are just a few of the new features available with the Intel Xeon processor E3-1200 product family.


Sometime in the next two weeks, Dell will refresh their entry level servers with these new processors. These servers will deliver substantial performance gains for running business-critical applications, provide system management capabilities designed to ease maintenance, and expand system scalability – all this to meet the needs of your business today, with headroom to support business growth of tomorrow.


What are your security concerns; do you feel these new features will help meet your needs?

We’re really, really excited about the launch of our new Intel Xeon Processor E7 family.


We’re adding capability and performance that I’ll detail, but the bigger story is the new innovation these systems are enabling at ISVs, along with the great customer implementation examples!


Our new Xeon E7 family continues the story we started approximately a year ago with the launch of the Nehalem-based Xeon 7500 series: allowing IT solutions to deliver incredibly fast answers to the toughest business questions, all at costs below proprietary RISC systems, without compromising the reliability that IT managers expect.


The new Xeon E7 delivers up to 10 cores with 20 threads of Intel Hyper-Threading Technology, 30MB of last-level cache, and support for up to 32GB DIMMs: 25 percent more cores, threads, and cache than the Xeon 7500 series, with up to 40% more performance at the same maximum rated power (TDP) as the Xeon 7500 series! Systems can scale from two to eight sockets, with up to 256-socket systems available from OEMs using node controllers. It also supports the new Advanced Encryption Standard instructions (AES-NI), additional RAS features, Intel Virtualization Technology, and Intel Trusted Execution Technology.




Since most mission-critical solutions involve data, let me tell you about some of the exciting innovation that our software partners are doing with these new Xeon systems:


  • Microsoft SQL Server 2008 R2 is not only an enterprise-class database; it also integrates a high-performance Business Intelligence (BI) component in the software stack, offering both complete online transaction processing (OLTP) and online analytical processing (OLAP) in a single license. It then couples this capability with Microsoft PowerPivot and SharePoint Server to put it literally into the hands of individual business departments.


  • IBM has delivered enterprise-class database solutions based on IBM DB2 data management software, which is now available on affordable Xeon processor-based servers. IBM DB2 pureScale, an optional feature of DB2, delivers nearly unlimited capacity, continuous availability, and application transparency for transactional databases running on IBM eX5 servers based on Xeon processors.


  • SAP has a new, innovative High-Performance Analytic Appliance (SAP HANA) on the Xeon 7500 series. SAP HANA is an in-memory appliance that stores entire data sets in main memory instead of saving them to disk. It allows organizations to instantly analyze all available data from multiple sources, so companies can gain insight into business operations in real time.


  • The Oracle Exadata Database Machine is an integrated, optimized solution for hosting Oracle Database and delivering OLAP and OLTP. The X2-8 combines two scale-up, 8-socket (64-core) Sun servers based on Xeon processors for a high-performance, highly available database grid, along with 14 Xeon-based storage servers.


Now the real-world jazz starts when customers take Intel’s new technology and pair it with this ISV software to deliver real-world solutions.


Anixter, a global distributor of communication and security products, wire, cable, fasteners, and other parts, maintains a complex inventory of 425,000 parts. They deployed mission-critical solutions for VAT taxes, e-Invoice, and PCM product and parts information on IBM System X5 servers running IBM DB2 pureScale. According to Bernie O’Connor, “Performance is very impressive and so is the resilience of the cluster. If a server goes down DB2 pureScale recovers in seconds.”


The United States Customs and Border Protection (CBP), part of the Department of Homeland Security, is modernizing its environment and has chosen the Oracle Exadata Database Machine built on the Intel Xeon processor 7500 series. CBP is the largest Exadata user in the US, with 15 Exadata machines in operation. They found the Oracle Exadata machines cost one quarter (25%) as much as their aging SMP mainframe and ran 10X faster (do you think they painted flames on the sides of the cabinet? Probably not).



These are just two stories. For more, check out the Mission Critical page on the IT Center.

IT professionals must handle the explosive growth in data, analytics, and transactions with ever-tightening budgets. Two complementary innovations are making it possible to get high-volume, low-cost computing for mission-critical online transactions. One is the IBM D2* X5 server, based on the Intel® Xeon® processor 7500 series, which can dramatically increase server performance, efficiency, and reliability. The other is IBM pureScale*, which helps data centers increase their database transaction capacity while reducing the risk and cost of growing IT systems.


For Anixter, a leading global supplier of communications and security products, electrical and electronic wire and cable, fasteners, and other small components, this combination provided the benefits of both a rock-solid, highly scalable hardware platform and a rock-solid, highly available software platform.


“We’re very excited about this partnership between IBM and Intel to provide Intel Xeon processing for the IBM D2 X5 server,” said Bernie Connor, director of IT for Anixter. "Using pureScale in that environment will provide the kind of availability, scalability, and recoverability that we need to serve our customers.”


To learn more, watch our new Anixter video. As always, you can find this one, and many others, in the Reference Room and IT Center.




*Other names and brands may be claimed as the property of others.

'We're gonna need a bigger boat' - Jaws


How to answer: "How big a box do I need?"


The number one request I hear from customers looking to move from RISC systems to IA is sizing: what size box do I need? This usually translates to, 'How many CPUs do I need?'


Most professionals understand how much disk space they'll need. They also have a good idea of the amount of memory the application will require, because it is already running on the legacy platform. But the CPU difference is the wild card. Memory speed, disk types, disk technology, and attachment methods all affect this, but I'll address these and other considerations related to the target server in a future post; here I want to talk about the CPU.


Generational differences make a big impact. Intel currently builds Xeon processors on a 32nm process, while the RISC processor is likely at 250nm or 90nm. Intel sells Xeon processors with two processing threads per core, and the RISC processor could have up to four. There are processor speed differences, but instruction pipelines and the number of instructions per clock cycle are also factors. How does all this factor in? It can be overwhelmingly complex.


So, in desperation, users turn to benchmarks to try to make the comparisons. But which benchmarks should be considered? Today there are so many benchmarks that this too is confusing, and the benchmarks keep changing with new generations. Some benchmarks are dropped because they gave an unusual advantage to one vendor or another, and as technology advances with new application systems, the benchmark councils add new benchmarks. For instance, SPEC went from integer and floating-point testing to adding Java and power consumption benchmarks.


Sizing tools are available on various vendors' web sites, but these are often for a new installation of a known application system, such as SAP components or Oracle E-Business Suite applications.


But you want to move an existing application system or systems from the RISC server to the IA server.  This is different.  You want to know how YOUR system runs on the new systems.


Sizing tools provide an advantage here.  These tools measure how your system is running on the current platform, like a capacity planning tool would, but then provide an estimate of what the load would be on a new target system.

The process goes something like this:


  • Install agents in the existing servers making up the application system


  • Collect data for a period of time representative of a normal business cycle for the application, usually a month but can be as long as a quarter.


  • Configure the sizing tool to account for performance requirements like your SLAs and other availability requirements and targeted Xeons.


  • Let the sizing tool ‘do its thing’ and process the accumulated data into a report.


  • You now have a rough estimate of the size of the system you need to buy or provision in a virtualized environment.



How does this tool know how many cores you'll need in the new server? I have had that question too. In pulling back the covers on a few of these tools, I found that they use SPECint to compare processors. On UNIX systems, the agents are often just grabbing the sar data available to anyone.


So, if you want to do your own sizing, you can use the sar data that is already available to you. You then need to perform some minor ETL to produce the graphs you want in your favorite spreadsheet program. Then you'll need to pull down the SPECint values from the SPEC website. You should be able to get something of an approximation after some work on your part. For instance, your RISC system has 4 cores, but the only servers tested with the SPEC benchmark are 6-core systems of a different bin (or speed) than your system, or a 4-core system of the next generation of your RISC processor. Some of the sizing tools have algorithms to do this interpolation for you.
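The core of that DIY estimate can be sketched in a few lines. All the numbers below are hypothetical placeholders; real values come from your sar peaks and the published SPECint_rate results for each machine:

```python
# Minimal sketch of a SPECint-based sizing estimate. Illustration only,
# not any vendor's actual algorithm.
def estimate_target_cores(source_cores: int, peak_util: float,
                          source_rate_per_core: float,
                          target_rate_per_core: float) -> float:
    """Estimate cores needed on the target to absorb the source load.

    peak_util: peak CPU utilization (0..1) observed via sar.
    *_rate_per_core: published SPECint_rate result divided by cores.
    """
    # Express the observed load in SPECint_rate "units of work", then
    # divide by the per-core rate of the target machine.
    load = source_cores * peak_util * source_rate_per_core
    return load / target_rate_per_core

# Hypothetical: a 4-core RISC server peaking at 70% utilization, with
# per-core SPECint_rate of 9 (source) versus 35 (target Xeon).
needed = estimate_target_cores(4, 0.70, 9.0, 35.0)
print(f"Roughly {needed:.1f} target cores, before headroom")
```

A real sizing pass would add headroom for SLAs, growth, and spikes on top of this raw number, which is exactly what the commercial tools' policy settings do.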


After all of this, you can buy your test system, load the backup of the application onto it, and test it against your careful sizing effort.


We’ll talk more about the loading of the backup in the near future.
