
The Data Stack

November 2007


I just got back from Supercomputing 2007. I remember a conversation 10 or 12 years ago with someone I really respect, just after Intel's Supercomputing Division had folded and HPC was in one of its cyclical downturns. Our conversation was roughly about there being no demand for large supercomputers anymore (outside of government). We surmised that what seemed to be needed most was a cheap gigaflop. To some extent we were right, but mostly we were wrong. Right in that, since then, basic engineering analysis has been a driver in the growth of HPC (think clusters). Very wrong in thinking that demand for compute cycles would not continue to increase, and a whole host of other things. Big, big miss on that one. If you believe the current market survey numbers, the high end of HPC is mostly stagnant (dollar-wise, but not innovation-wise), but the low end, small clusters, is growing by leaps and bounds.


One of the things I wanted to get a read on at the conference is 10GbE adoption. While a high-performance interconnect can be important, especially if you are paying significant amounts for a software license, so are convenience and ease of use, particularly if the user base is increasingly not HPC geeks but mechanical/electrical/aero engineers who just need to get some work done. Plus, 20 Gb InfiniBand / Myrinet / Quadrics might be overkill for small jobs (4-16 cluster nodes). My impression is that we still aren't there on 10GbE. I was hoping to see 10GBase-T, but it was rare. A couple of vendors had it and could actually show me a switch, but that was it. CX4 really does give me the hives.



And then there is the question of the day: accelerators. What I wanted to understand is the details of how people are programming these things, to get an idea of whether the PCIe interface is going to be a bottleneck or not. Are people 'blocking' real codes at coarse enough granularity to avoid a PCIe bottleneck? I mostly struck out. I did have a good chat with the ClearSpeed folks. Their programmability looked much better than I expected, but I wonder if it will be too labor intensive for all but the highest-ROI situations.



Another item for me was small form factor boards & density in general. Supermicro, Tyan, Fujitsu, Intel EPSD all had small form factor / high density stuff - for rack n' stack configurations. SGI and Appro showed off what I considered complete systems based on small form factors. There were several more exotic options, but they tend to be outside my customer base.



The Sun / Rackable 'datacenter in a shipping container' seemed to get a good amount of attention. I'll be very curious to hear why end users like them (assuming they do). Is it reduced CapEx? Is it shorter time to datacenter implementation / expansion?



Going back to the conversation of a decade ago: we've gotten to the cheap *flop. I'll claim it's clusters, or something close to them. Now the focus seems to be on making them 'user friendly' enough for the small industrial cluster crowd. Intel Cluster Ready is one example. WinCCS (or whatever they are calling it today) is another. But there were also a lot of booths emphasizing out-of-the-box experience (SGI & Appro come to mind), or smaller players emphasizing custom configuration (per the application) and hand-holding, 'throat to choke down the street' service levels.



A common question most IT managers face while they mull over virtualization is: what kind of system should I use? Do I need a 2-socket system? A 4-socket system?


The reality, in my view, is that it is more like designer wear: there is no 'one size fits all' answer to such a question. Here are a few things (not exhaustive by any means) that IT managers planning virtualization need to understand in order to reach a conclusion:


  • How many servers are being consolidated?

  • Workload and compute horsepower: What are the workloads being consolidated? What is the average utilization of each workload? What is the maximum utilization expected (so that you plan for the peak and datacenter capability does not fall apart if workload utilization increases)? Take an overall look at the compute horsepower required to run the VMs with their workloads.

  • Memory: How much memory is required per VM to run at the acceptable or required quality-of-service level?

  • Manageability comfort and VM variation/headroom: The number of VMs the IT manager is comfortable putting on the same system, whether for ease of manageability, downtime management, resource scaling if VMs become over-utilized or oversubscribed at peak demand, or simply an intuitive comfort level with mixing different workloads or OS environments.
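To make the checklist above concrete, here's a back-of-envelope sizing sketch in Python (all workload numbers are hypothetical, just to illustrate the "size for peak, with headroom" idea):

```python
# Hypothetical sizing sketch: tally aggregate demand for a list of
# consolidation candidates, sizing for peak rather than average
# utilization, plus headroom for unpredictable spikes.

workloads = [
    # (name, peak CPU cores needed, peak memory in GB) -- assumed values
    ("web-frontend", 2.0, 4),
    ("mail", 1.5, 8),
    ("erp-db", 6.0, 32),
]

headroom = 1.25  # 25% extra so capacity doesn't fall apart at peak

peak_cores = sum(cores for _, cores, _ in workloads) * headroom
peak_mem_gb = sum(mem for _, _, mem in workloads) * headroom

print(f"Plan for ~{peak_cores:.1f} cores and ~{peak_mem_gb:.0f} GB of RAM")
```

The point is only that the target system (2-socket or 4-socket) is chosen against the aggregate peak, not the average.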


In my opinion, for a high level of server consolidation with memory- and I/O-intensive VMs, less predictable workloads, or workloads that demand more headroom for peak demand, a 4 (or higher) socket system could be more beneficial. For consolidation aimed at raising server utilization with very predictable, stable workloads, perhaps smaller applications, a 2-socket system could be beneficial.



Feel free to write your opinion, or experience.



I often get asked what type of server a customer should use when landing their virtualised infrastructure. The immediate response is an obvious one, given I work for Intel: an Intel-based server! But beyond this the answer is a little more complex, and to some extent depends on the philosophical approach the data centre manager wants to take to architecting their data centre.


There are a number of choices to be made when using standard Intel-based server hardware, ignoring the obvious decision as to hypervisor vendor: DP (2-way) vs MP (4-way) servers, rack mount vs blade.


Ultimately any server decision is the right one (so long as it's an Intel-based solution!), but some of the factors that will influence the decision are:




  • How many virtual machines (VMs) are you prepared to host on a single server? MP servers can host substantially more VMs than DP servers, over 2x more depending on the workloads within the VMs, thanks to the larger memory capacity and greater number of I/O slots that MP servers typically support. Against this, DP may be a better solution: two DP servers may cost less than one MP server and, combined, host as many VMs, while not concentrating as many VMs on a single server.


  • Density & form factor: DP servers typically come in higher-density form factors than MP servers, at the expense of less I/O and memory capability. But take into consideration that an MP server can host more VMs than a DP server, so within a given rack space a smaller number of MP servers may well host more VMs than a larger number of DP servers.


  • Blades vs rack: There is significant momentum building behind the move to bladed servers, mostly driven by the fact that the density achievable with blades is far higher than that possible with rack-mount servers. The shared resources of a blade solution (power supplies, cooling, network switches, etc.) can also lead to cost and power savings in high-density configurations. The challenge with hosting a virtualised infrastructure on blade servers, however, is that blades tend to be limited in the amount of memory and I/O they can support. The trade-off is that, with the increased density of a blade solution, fewer VMs per blade but more blades per rack can mean the overall number of VMs hosted within a given data centre is greater with blades than with rack-mount servers.
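As a rough illustration of the DP-vs-MP rack-space trade-off (the per-server VM counts and form factors below are assumptions, not measurements), the arithmetic looks like this:

```python
# Hypothetical rack-density comparison: fewer MP (4-socket) servers can
# sometimes host more VMs per rack than a larger number of DP (2-socket)
# servers, because each MP box carries more memory and I/O capacity.

RACK_UNITS = 42

# Assumed configurations: a 1U DP server memory-limited to ~6 VMs,
# and a 4U MP server hosting ~30 VMs thanks to its larger memory/I/O.
dp = {"u": 1, "vms": 6}
mp = {"u": 4, "vms": 30}

dp_vms_per_rack = (RACK_UNITS // dp["u"]) * dp["vms"]
mp_vms_per_rack = (RACK_UNITS // mp["u"]) * mp["vms"]

print(f"DP rack hosts {dp_vms_per_rack} VMs, MP rack hosts {mp_vms_per_rack} VMs")
```

With different assumed VM counts the comparison can easily flip the other way, which is exactly why the workload profile, not the form factor alone, should drive the choice.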


Another factor to take into consideration is that MP servers typically have a higher level of built-in RAS (reliability, availability & serviceability) features than DP servers. When hosting multiple VMs on a single server, the overall reliability of the server, and its ability to be serviced without shutting down all the hosted VMs, become very important to the overall efficiency of the data centre.


As I said at the beginning, there is no simple answer, and a lot depends on the approach you want to take in architecting your solution. Intel's own IT department has done lots of work in this area and has posted many of its results here for others to learn from its experiences.


The only thing that is certain is that, whatever decision is made on form factor, the performance of the processors you specify has a direct impact on the number of VMs a server can host: the higher the CPU performance, the more VMs that can be hosted, and the lower the impact of the hypervisor overhead on overall system performance. Check out the latest virtualisation performance data here and here.

Take a look at the chart below ... it's telling you something... isn't it?

It's more than performance numbers and marketing, it's data... REAL data!

But what does it mean - and ultimately - how can you relate to it?



If you're really into high-powered computing, you're probably quite familiar with common benchmark data. With every new CPU release there are tons of new statistics, models, and ways to test the increased performance of the newer technology, in this case the 45nm-based CPUs launched just this month. But what exactly does all this data amount to? Reading benchmarks is more than just looking at a bar chart; there's a science to digging into the data...


First, let's take a step back for those of you who may not fully understand what benchmarking is for. Benchmarks provide common ground for comparing the performance of various systems across different CPU/system architectures. A common set of instructions (or programs) is set up to run within regulated guidelines to ensure the testing is performed equally across the competing platforms or architectures. It's very much like sports: if you have two different runners, they run the same course, say the 100-yard dash. This creates the comparative benchmark.


So let's get back to the latest hot stuff: the Intel Xeon 5400 Series and Core 2 Extreme QX9650 quad-core processors. In the past 18 months, computing models have taken a giant leap forward by adding more CPU cores per socket, thereby increasing the thread density of your platform. In dual-socket systems where you used to have two threads, you now have four or even eight! And in quad-socket systems the count can go up to 16! You're increasing your computational capacity by a factor of 3 or 4, depending on the platform. This has made a tremendous change in how benchmarks have to be set up and run, and we have to evaluate the testing methods to ensure we're maximizing the capability of each platform.
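The thread arithmetic above is just sockets times cores per socket (times threads per core, where SMT applies); a trivial sketch:

```python
# Hardware-thread count per platform: sockets x cores per socket
# (x threads per core, if the cores support SMT).

def hw_threads(sockets: int, cores_per_socket: int, threads_per_core: int = 1) -> int:
    return sockets * cores_per_socket * threads_per_core

print(hw_threads(2, 4))   # dual-socket quad-core: 8 threads
print(hw_threads(4, 4))   # quad-socket quad-core: 16 threads
```

A benchmark that only spawns one or two worker threads will leave most of that capacity idle, which is why test methods had to change along with the hardware.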


There are a few key steps to take before you consider benchmarking your system:

  1. identify your problem area (processing power, network bandwidth, memory utilization, etc.)

  2. identify your competing products

  3. evaluate the 'leaders' in your problem area

  4. survey for available benchmarking tools

  5. evaluate 'best practices' for testing (e.g. processors with lower idle power won't really help much if you're only doing high-end computing)

  6. and then - implement your findings in your chosen architecture(s)
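Once you've worked through those steps, even a minimal timing harness goes a long way toward repeatable results; here's a sketch (the workload is just a stand-in for whatever you're actually testing):

```python
# Minimal repeatable micro-benchmark harness: time a workload several
# times and report the median, which is less noisy than a single run.

import statistics
import time

def benchmark(fn, runs: int = 5) -> float:
    """Return the median wall-clock time (seconds) over several runs."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)

# Example workload: a small CPU-bound sum (replace with your own).
workload = lambda: sum(i * i for i in range(100_000))

median_s = benchmark(workload)
print(f"median: {median_s * 1e3:.2f} ms")
```

Running the same harness on each candidate system, with identical inputs, is what makes the comparison a benchmark rather than an anecdote.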


In the high-end server space you usually see more vendor-specific data than end-user testing, primarily because of the finite set of data that server administrators are looking for. Many of these 'industry standards' are monitored for fairness, assuring the end user that the testing was properly performed and the results are repeatable:


Industry Standard Benchmarks


Intel uses many of these standards for benchmarking - as you can see here in the Xeon 5000 Series based Processors Benchmark Page


Even if you're a server admin, you most likely deal with client machines day to day as well. If you search the web for CPU benchmarks, the most commonly viewed benchmarks are performed on the client side of computing, mainly because of a few factors:


  1. clients are usually cheaper and more abundant to test with

  2. visuals in client computing are usually more fun to watch than seeing SQL data fly across the screen (hey - just being honest here!)

  3. and servers in general are built for more specific reasons, whether it's application, storage, modeling or other specialties


Many of you have probably heard of benchmark sites such as AnandTech, Tom's Hardware, FiringSquad, HardOCP and many others (respond with your favorites, please!). Each of these sites uses common tools/applications to benchmark the latest and greatest hardware against each other. What you're looking to do with your hardware really determines what and how you want to benchmark your system (or which data reviews to look for on your configuration). After all, a machine that can run the latest games at over 60 frames per second may not be the best SQL server for your datacenter, right?


If you're looking for quick 'brute force' computational tools to try your hand at CPU benchmarking, try something simple like BOINC or Super PI, or you can get more elaborate using methods such as those described by CNET, with Cinebench or SiSoftware Sandra. Once you've figured out some of the basics, and can repeat these simpler tests, you can jump into those industry standards and get into some serious work!
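In the same 'brute force' spirit as Super PI, here's a toy sketch that times a naive pi calculation; it's only meaningful for comparing the same code across machines, not as a real benchmark:

```python
# Toy CPU benchmark: time a naive Leibniz-series approximation of pi.
# Purely illustrative -- run the identical code on two machines and
# compare elapsed times.

import time

def leibniz_pi(terms: int) -> float:
    """Approximate pi via 4 * sum((-1)^k / (2k+1))."""
    total = 0.0
    for k in range(terms):
        total += (-1.0) ** k / (2 * k + 1)
    return 4.0 * total

start = time.perf_counter()
pi_est = leibniz_pi(1_000_000)
elapsed = time.perf_counter() - start
print(f"pi ~= {pi_est:.6f} in {elapsed:.3f} s")
```

Remember the caveat from earlier: a result like this only compares raw arithmetic throughput, which may say nothing about the workload you actually care about.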


So, in closing: there are many variables to account for when looking to validate the performance of a given system. Processor speeds, I/O subsystem configuration, memory latencies, network bandwidth, power utilization, etc.; the permutations are nearly endless. You have to be diligent in first addressing your key problem(s), then attack the benchmarking using the best known methods. Also, when reading benchmark information, BE SURE to read the configurations of the systems in question. Are they truly comparable? Are the components running at spec or overclocked? Are the speed differences negligible, or substantial, in real-world terms? And finally, focus on what's important to you and your computing requirements; after all, you need to be sure you've picked the correct system for your needs.


Data Center Innovation: Is Virtualization the latest hype or a key step forward in Data Center transformation?







Members of the technology development community sometimes take the press at face value. In other cases, we accept the press, new media and old, for what they are: journalists, ultimately commissioned to sell eyeballs and provoke "cocktail chatter" over their brilliant prose. The question it has always left me with, as a member of this community of technology developers: do they really understand what we do? Do they understand, or even care about, the countless hours required to think of the next great technological innovation, determine the markets for its application, build an ecosystem to sustain it, and continue to innovate in the face of dwindling profits and increasing competition? Clayton Christensen calls this the "Innovator's Dilemma"... though I am not sure he has ever felt the "sting" of the dilemma. Better to write the story than live through it, I suppose.



Virtualization has become the latest "grist" for the technology journalist "mill". VMware, a 7-year "overnight" success story, led by the engineering team of Mendel Rosenblum and Steve Herrod and their "captain" Diane Greene, has captured the industry's imagination and begun to transform data centers around the world. This team has innovated for years behind a simple premise: enable x86 servers to be logically replicated as many times as the compute cycles will allow. Many have argued they are replicating innovation that was done on mainframes years ago, and to a certain extent they are right. Does that make the technology advances in hypervisor development and data center efficiency LESS innovative? No. In my opinion, innovation is different from pioneering. The current wave of virtualization innovators (VMware, Virtual Iron, SWsoft, Novell, Oracle, Sun, Microsoft, 3Leaf Systems, Citrix, etc.) owe a strong legacy to the pioneers of the Atlas Project in 1961 and to IBM for innovating "time sharing" and resource pooling concepts over 40 years ago. However, their innovations have gone far beyond the basic concept of "logical partitioning" of compute processes, to include virtual machine motioning from one physical server to another, resource scheduling, log-file innovations for higher availability, and the ability to be operating system "lite" for rapid application deployment. These innovations are reducing data center costs by as much as 50-70% in some cases. What is compelling is that this new group of innovators is transforming the traditional client/server software development models for both IT enterprises and independent software vendors.



At Intel, we spend a great deal of our time developing silicon innovations in virtualization, and we are once again pushing the "innovation paradigm" by extending virtualization to chipset, networking and I/O technologies. Server platform virtualization (processor, chipset and I/O virtualization) has benefits for the industry, software developers and individual IT managers. For the industry, it drives a standards and best-practices discussion between Intel and our competitors to deliver virtualization capabilities with meaningful impact, such as the work we are doing with the PCI-SIG around I/O virtualization. For software developers, server platform virtualization provides opportunities for innovation and new usage models in graphics virtualization, business continuity and storage management. The IT manager realizes all of these benefits by enjoying a reduced-cost deployment infrastructure, ease of use in integrated management tools and increased efficiency in power requirements. Enough benefit, enough innovation, to keep the "hype machine" alive, and for good reason.



What does this mean? In my opinion, virtualization is BOTH the latest hype machine for the industry and the first meaningful step toward data center innovation in a decade. The combination of virtualization technology, multi-core energy-efficient processor technologies and 10Gb+ networking infrastructure will transform the way we view data centers, both physically and logically, over the next 5 years. Beyond 2012, innovators will still face "our dilemma", journalists will find the next article to write/hype, the pioneers will (hopefully) be debating the initial findings of their first personal quantum computers, and many of us will be determining how to incorporate yet another key innovation into our lives in the data center.








As you read the blogs on this portal, or visit most industry tradeshows, events or technology portals related to datacenter computing today, you will find it hard not to notice virtualization, either as a topic or as part of the solution to a challenge being discussed. Is it hype, or are the people deploying virtualization being wise? Are there real benefits to virtualization in the datacenter? In my opinion the answer is simple: it's not hype; the benefits are real.



Virtualization has been around for decades on mainframes, but the dynamics are changing now with the availability of software, and of hardware assists that enhance the software and make software implementations easy and robust, for mainstream computing. The deployment of virtualization (including in production environments) on mainstream servers is increasing, and is projected to keep increasing as many datacenters find the benefits of virtualization to be real. It is one of the foremost things on the minds of IT administrators, managers, CIOs and CTOs today, particularly in North America, Europe and Japan.



The primary motivator over the past few years (and for most new mainstream adopters) has been reduction in capital expenditure (CAPEX): consolidating workloads running on underutilized servers, and using virtualization in test and development for rapid deployment. By consolidating underutilized servers, the obvious gain is the reduction in the number of servers and hence in power. But that is only a portion of the real benefit. IT managers who adopted virtualization a while ago have realized that, in the long run, they see added benefits of consolidation in terms of reduced cooling requirements, reduced physical inventory management, and better utilization of their existing facilities for scaling their services as customer demand increases. Overall, a well-planned and well-implemented consolidation can help improve the bottom line of the datacenter operation. Many utility companies have also come to realize the environmental benefits and are encouraging the datacenters in their service areas to adopt virtualization. PG&E, SDG&E, and Austin Energy are among the utilities offering such incentives. For instance, PG&E has a program in which non-residential customers in its service area can get $158 for every server consolidated away through virtualization, and SDG&E offers 8 cents for every kWh reduced.
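As a back-of-envelope illustration of those incentives (the server count and per-server wattage below are assumptions; the rebate rates are the figures quoted above):

```python
# Rough incentive math: PG&E-style $158 per consolidated server, plus
# an SDG&E-style $0.08 per kWh of annual energy reduction.
# servers_removed and avg_server_watts are assumed, not real data.

servers_removed = 20
avg_server_watts = 400           # assumed average draw per removed server
hours_per_year = 24 * 365        # 8760

kwh_saved = servers_removed * avg_server_watts * hours_per_year / 1000

pge_rebate = servers_removed * 158
sdge_rebate = kwh_saved * 0.08

print(f"{kwh_saved:,.0f} kWh/yr saved")
print(f"PG&E-style rebate:  ${pge_rebate:,.0f}")
print(f"SDG&E-style rebate: ${sdge_rebate:,.0f}")
```

Even with modest assumptions, the utility rebates are a small bonus on top of the much larger savings in power, cooling and hardware themselves.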



Similarly, being able to test a new environment in an isolated manner on the very same system where the current workload is running can speed up the deployment of new environments and reduce the cost of unforeseen downtime.



IT managers who have already realized some of the above CAPEX benefits are moving on to new usages that offer better operational excellence (OPEX): implementing better load balancing, increasing agility by migrating workloads as required, and building in operational resiliency with disaster recovery.



Given the above benefits, IT end users do not (and cannot) think of virtualization as a single feature or technology; most view it as a solution. This is also the philosophy, and the bigger-picture approach to virtualization, that I see in Intel products. After leading the introduction of Virtualization Technology hardware assists in mainstream processors in 2005, Intel has worked with a large ecosystem of software vendors to support and enable the capability in robust solutions. With the Core microarchitecture, and now year-old Intel quad-core processing capability, IT can leverage the industry's best energy-efficient computing for virtualization. As consolidation increases the workloads on a single physical server, better performance per watt can deliver better results, in terms of both consolidation and per-VM performance, at lower power consumption. Currently the 51xx, 53xx, 54xx and 73xx processor families are all based on the Core microarchitecture, which means that for IT shops focused on VM mobility and agility, VMs can move easily across these different classes of servers. The introduction of Intel VT FlexMigration earlier this year acknowledges the emerging usage model of VM mobility, and allows any VMM vendor to develop solutions in which future generations of processors can be pooled with older generations of servers (with the Core microarchitecture). This provides better investment protection for IT.



Furthermore, the holistic, platform-centric approach to virtualization hardware assists, for greater performance and/or efficiency, can also be seen in Intel's approach. The recently announced Intel VT FlexPriority capability (in the processor) provides performance-enhancing hardware assists for interrupt virtualization. Intel VT for Directed I/O is a chipset-centric capability that enables hardware assists for I/O virtualization, enhancing reliability and security through device isolation, and I/O performance through direct assignment. And Intel VT for Connectivity, with technologies like VMDq at the networking device level, provides throughput improvements in virtualized environments.



Overall, virtualization has real end-user benefits in the form of capital expenditure reduction and improved operational excellence. When coupled with hardware assists that deliver platform- and solution-centric enhancements, IT end users can stretch those benefits further.



I read recently that 50% of data centers will exceed capacity by 2012, capacity being some variable combination of physical space, available power and available cooling. I am skeptical. I agree that if you project the current growth rate against available capacity, you could come up with the 50% number; but we are far from the status quo in our data center opportunities, and I would hesitate to break out the wrecking ball. Today I see three more-or-less distinct opportunities that every data center manager should look at very hard before writing the big check for new real estate.


The first is efficiency. There are numerous avenues available here, including consolidation (through virtualization), server refresh with more powerful (and more efficient) servers, and new approaches to cooling. If we quit thinking of the data center as a room, and start thinking of it as a mainframe in a really big box, our approach to cooling can become radically different. Why make a data center comfortable? Instead, just keep it within the boundaries of the warranties. Nobody wants to be in there anyway. Data center optimization should be your first initiative; learn more about opportunities for efficiency from Werner.


The second path to capacity containment is external hosting. Improvements in network speed and reliability have nearly negated the need for local data centers, and many businesses already rely on geo-distributed data centers. The shift to letting someone else build and run the raised-floor area just makes sense. I think of the shift from self-run data centers to commercially hosted data centers much like the shift from private to commercial suppliers for power and communications. It is also a shift that can be executed incrementally, moving just some of the application hosting to a service provider. A variation on this theme is the SaaS (software as a service) model. Virtually everyone in the application business is offering, or planning to offer soon, down-the-wire applications. Can you really run an email system for your staff better than a commercial system can? By applying data center optimization and taking advantage of targeted hosting and SaaS, a data center owner can squeeze at least a few more years out of the current raised-floor real estate.


For some businesses, or at least for some of their applications, commercial hosting or SaaS is not seen as viable. The application is too important a value differentiator, or the data is too big, or the work too special, or... whatever. This is especially prevalent in engineering and finance, where large amounts of "top secret" compute are executed. Well, there is a solution here as well. When you need to "own every line of code, and how it is run", you can still shift some of the work to machines outside your data center and defer capacity expansion. I am referring to "cloud computing". The most recognized example is the compute service offered by Amazon*, which uses spare cycles in their server infrastructure. I think we will see a growing number of large-scale internet and service companies offering up clouds. With cloud computing you push a "unit of work" out to be executed in a service provider's compute cloud. With appropriate encryption and obfuscation, the "unit of work" can remain as secret and secure as you wish. The application, database, and work results remain under local management and control.


If I were looking at a shrinking capacity window (of any type) in my data center, I would pay attention to these opportunities and their variations. I would look very hard at my next $25,000,000 data center expansion to understand whether an alternate approach and architecture could shift those funds to better use.


*Other brands may be claimed as the property of others

Continuing on the theme of measuring data centre efficiency: power consumption of the facilities and the IT load is only one element, albeit a large one, contributing to the overall efficiency of a data centre. Ultimately a DC has to deliver useful workload, and the amount of workload that can be achieved within a given physical DC is an increasing challenge. Lowering server power and increasing the cooling effectiveness of a DC are two of several ways to enable more equipment to be installed in an existing facility.


The general consensus seems to be that the servers in many data centres do not always run at maximum utilisation; many are in the 10-15% utilisation range. This results from many IT shops following a policy of hosting one workload (application) per server and sizing the server for worst-case usage of that workload, which leads to low average utilisation. There are several approaches that can be taken to increasing server utilisation:


Consolidating several applications with different utilisation profiles onto the same server. This is not perfect, as a problem with one application could impact the others on that server, causing significant business impact.


Deploying virtualisation within the DC. This enables multiple OS/application instances to run on the same server. There are multiple benefits here: server utilisation increases while the number of servers can potentially decrease, reducing the overall electrical power consumption of the DC and consequently the utility bill. Another aspect of virtualisation is that, to achieve the highest levels of consolidation, it is best to deploy the latest generation of high-performance/low-power servers; this can result in the removal of many older-generation, high-power servers from the data centre and the deployment of a smaller number of newer, more power-efficient servers.


There are circumstances where virtualisation may not be appropriate and it is necessary to retain one workload per server. In this case, an increase in the workload capacity of a DC can be achieved by replacing older, smaller servers with the latest generation of high-performance servers. This can significantly increase the workload capacity of a DC without building a new one; again, the side benefit is that latest-generation servers consume less power than the older servers they replace.


There are many different ways in which the workload capacity (and hence utilisation) of a DC can be increased; with care, most can also result in a reduction in the electrical power consumed by the DC.


Given the right tools, the utilisation of servers within a DC is 'relatively' easy to measure, so this element of DC effectiveness can be quantified. There is another major element that I believe contributes to the effectiveness of a DC: the processes in place to manage it, and hence the way the DC can respond to new challenges placed on it by a business unit. Gartner has an infrastructure maturity model that is useful for trying to quantify how effectively a DC responds to business needs, looking at responsiveness, service level agreements, IT processes, etc. Currently I do not believe many DC managers are measuring how effective their DC is in terms of process, and when asked to judge where they sit within a model like Gartner's, many IT managers will judge themselves more efficient than they really are.


Are there other areas that contribute to the efficiency of a DC? I would be interested in your feedback.


There has been, and will probably continue to be, a significant amount of worldwide press on energy concerns and ecological impacts. The concern is so widespread that government agencies have renewed or escalated their plans to curb energy consumption and mitigate that aspect of global warming. The renewed focus has escalated governmental investigations and policies on computing devices. The primary organizations setting the technical parameters and policies in the US are the US Environmental Protection Agency (EPA) and the US Department of Energy (DOE). The enhancement of tools such as Energy Star [1] and the Federal Energy Management Program (FEMP) [2] has received attention and spread to other regulatory agencies worldwide, in Europe, Japan, Canada, China/PRC, Taiwan, Australia, and other countries.



The EPA, through Energy Star, enacted a significant revision to the specifications for computing devices effective July 2007 10 and is commissioning an update to that specification for July 2009 16. The EPA, along with its research consultants, also investigated and published its assessment of energy usage in data centers, currently determined as ~60 billion kWh annually in 2006 (~1.5% of electricity consumption in the US), with the potential of hitting 120 billion kWh/yr by 2011 3. The data center report, together with the worldwide focus on energy and ecological impact, has added political pressure to accelerate energy efficiency programs and policies.
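As a back-of-the-envelope check on those EPA figures, the 2006 and 2011 numbers quoted above imply a specific growth rate; a small sketch (the calculation is mine, the consumption figures are from the report cited above):

```python
def implied_cagr(start_kwh, end_kwh, years):
    """Compound annual growth rate implied by two consumption figures."""
    return (end_kwh / start_kwh) ** (1.0 / years) - 1.0

# EPA figures: ~60 billion kWh/yr in 2006, potentially ~120 billion kWh/yr by 2011
growth = implied_cagr(60e9, 120e9, 2011 - 2006)
print("implied annual growth: %.1f%%" % (growth * 100))  # ~14.9%/yr
```

Doubling in five years is roughly 15% growth per year, which helps explain the regulatory urgency described here.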



Intel has pursued, and continues to pursue, research and development of technologies and programs that aid the energy efficiency and ecological sustainability of the IT infrastructure. Intel was one of the first computer companies to establish Energy Star criteria for computers in the 1990s. Intel continues to be a key developer of standards such as ACPI, advanced energy storage techniques, and the power management tools we see in mobile devices today. Intel has also been a key driver of industry-wide materials analysis and conversions to remove hazardous substances from our internal operations and from the products that Intel and the industry deliver (e.g. the Reduction of Hazardous Substances program, RoHS 4). Intel is also a significant contributor to current programs such as Energy Star, the European Code of Conduct 5, California's PIER program 6, Climate Savers 7, The Green Grid 8, and a host of other consortia. Intel's technical contributions go not only to worldwide government and industry organizations, but also to research groups such as Lawrence Berkeley National Labs 9.



There are, however, numerous technical perspectives on options to promote, incent, and achieve energy efficiency in IT infrastructure and computing devices. A key regulatory and industry disagreement is curbing energy consumption versus incentivizing energy efficiency. As one can observe in the Energy Star v4.0 specification 10, there exists the paradigm that simply setting limits on "inactive" states will save energy. Though this may work for single-function devices such as a light bulb or a washing machine, multi-purpose devices and ever-changing applications make computing devices and services a much more difficult task. The difficulty is to reduce the energy consumed in support of the purpose without impacting the function or purpose itself. A simple example of the resulting mixed incentive is evident in servers. A 21 kW rack of yr2006 servers can replace the function of many racks of yr2002 servers totaling 128 kW; however, incentives based on "inactive" states promote the purchase of lower-capability yr2002-based servers. Such a mixed incentive policy not only curtails true energy efficiency innovation in the industry, but places IT on a path of ever-increasing energy consumption in pace with compute demand.
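The rack example above works out to a large saving; a quick sketch (the 21 kW and 128 kW figures are from the paragraph; equal workload between the two configurations is the stated assumption):

```python
def refresh_savings(old_kw, new_kw):
    """Fractional power reduction when new equipment replaces old at equal workload."""
    return 1.0 - new_kw / old_kw

# One rack of yr2006 servers (21 kW) replacing yr2002 racks totaling 128 kW
saving = refresh_savings(old_kw=128.0, new_kw=21.0)
print("power reduction at equal workload: %.0f%%" % (saving * 100))  # ~84%
```

An "inactive state" limit would never capture this roughly 84% reduction, which is the mixed-incentive point being made here.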



Intel's (and indeed the industry's) target is to modulate and track energy consumption against the "useful work" accomplished (achieving the purpose). The key challenge is to assess the usage models per market and develop energy efficiency metrics and guidelines which promote ever-improving energy efficiencies. Examples Intel is pursuing in this area include power conversion efficiencies (Climate Savers), data center metrics and practices (The Green Grid), benchmarks (11, ECMA 12), and power management software (13, etc.). Though such activity takes time, the urgency impressed upon the industry by worldwide regulatory agencies is critical for ecological sustainability. The alternative of a mixed incentive approach (which also gives rise to more power-consuming devices rather than fewer) will detract and divert the industry from holistic innovations in energy efficiency.



1 Energy Star,

2  Federal Energy Management Program,

3 EPA Data Center Energy Report,

4 Reduction of Hazardous Substances (RoHS),

5 European Code of Conduct,

6 California Energy Commission's Public Interest Energy Research (PIER),

7 Climate Savers,

8 The Green Grid,

9 Lawrence Berkeley National Labs,

10 Energy Star v4.0,


12 European Computer Manufacturers Association, ECMA,




In this second comment, around the right time for a datacenter refresh, I'd like to look at costs. Power is covered in the comment from Chris, and I covered some comments on space already in the discussion forum. So what it really boils down to is the cost of running your existing datacenter versus the cost of throwing the servers out and replacing them. It is also clear from the other comments that it doesn't make sense to throw out servers which are utilized on average at 15% and have them replaced by new servers which are 5 times faster and utilized at 3%. Great achievement, huh? Server refresh therefore makes most sense when consolidating the environment at the same time. How do I consolidate the environment? By using virtualization. See Helmut's blog and the whole theme next week on that topic.


Therefore, let's look at the real cost factors when refreshing the servers:


  • Cost of new hardware: This is obviously a significant capital expenditure, starting at about $2,000 for a reasonable DP server. But many server companies offer financing models which turn this into an operational expenditure. Also key to understand: by consolidating your servers at the same time, the depreciation costs may actually decrease, as you have less hardware to depreciate!

  • Maintenance costs: Again, reducing the number of servers running given applications, and at the same time unifying the environment, helps significantly to reduce maintenance costs. Unifying on a given OS or hardware platform can be a significant step here.

  • Power consumption: As with utilization, it doesn't make sense to just look at the power consumption per server, but at the consumption per unit of performance. On a given workload I can save about 38% in power bills versus the previous generation of hardware, and use about a tenth of the power of hardware which is 2-3 years old. Again, obviously, only if I do this in combination with consolidating the servers. The trick is that these costs are often not taken into consideration, as they are billed not to the IT department but to the facilities group. So it becomes an executive decision to ensure they are looked at!

  • Switching costs: Obviously very hard to measure, as this depends on the customer's environment. I have talked to a customer who said: "No, I will never touch this AS/400 system, as it just runs and runs and runs." On the other hand, I had a customer who replaced exactly those AS/400 systems and saw huge synergistic effects, because he put the application on a standards-based architecture and was finally able to integrate it with the other production systems, and therefore have one reporting and analytics tool.
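The power bullet's claimed figures (about 38% savings versus the previous generation, roughly 10x versus 2-3 year-old hardware) can be turned into annual dollar figures with a small sketch. The baseline wattage, electricity rate, and cooling overhead here are illustrative assumptions, not numbers from the post:

```python
def annual_power_cost(watts, rate_per_kwh=0.10, cooling_factor=2.0):
    """Yearly power and cooling cost for a constant load.
    rate_per_kwh and cooling_factor are illustrative assumptions."""
    kwh_per_year = watts / 1000.0 * 24 * 365
    return kwh_per_year * rate_per_kwh * cooling_factor

# Hypothetical consolidated workload drawing 10 kW on current-generation servers:
current_gen = annual_power_cost(10000)
# ~38% power saving claimed vs. the previous generation at the same workload:
previous_gen = annual_power_cost(10000 / (1 - 0.38))
# ...and roughly 10x vs. hardware that is 2-3 years old:
old_gen = annual_power_cost(10000 * 10)
print("current gen:  $%8.0f / yr" % current_gen)
print("previous gen: $%8.0f / yr" % previous_gen)
print("2-3 yr old:   $%8.0f / yr" % old_gen)
```

Whatever rate you plug in, the gap between the old and new configurations is what makes refresh-plus-consolidation pay off within the first year.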


To make a long story short: this is not something you do very often, but you don't get married every year either. Most of the time it's worth going through the effort. So thinking about replacing servers which are older than 2-3 years is definitely worthwhile, and often an effort which pays off in the first year!




Agility in the Datacenter

Posted by H_Ott Nov 19, 2007

Since this is the first time I'm blogging on this web site, let me briefly introduce myself. I have been working at Intel since 1984, and started right after university developing software, mainly for the industrial automation industry (way back with good old iRMX for Multibus I/II). After a couple of years of running IT for several Intel sales offices in EMEA, I'm now running a team of technical pre-sales people working with end customers in the enterprise space.


When working with end customers in the IT space, we often hear about the requirement to reduce costs while at the same time becoming more agile. This is particularly important to achieve in the datacenter, in order to quickly adapt to changing business requirements and thus swiftly enable business opportunities through IT. On the way to real business agility through IT, Gartner has defined the Infrastructure Maturity Model**. It consists of 6 stages, with the ultimate goal of delivering business agility in almost real time. Before a company can get there, however, one important stage is getting to a virtualized infrastructure.



In the storage area we have seen quite some progress, already adopted in a lot of medium and large companies. On the server side, server virtualization is one of the hot topics which almost every company is looking into, or even currently deploying, in order to achieve this datacenter agility, at least in the infrastructure area.



In the past, people typically used virtualization on large SMP machines to better utilize them; more recently, virtualization has been used to consolidate (mostly older) servers/applications onto 4-way Intel Architecture based servers, to avoid a zoo of different machines and OS revisions that IT has to support. From the cost efficiency perspective, however, it is also appropriate to consider 2-way servers for virtualization. When we discuss this with end customers, we sometimes hear the concern that the ratio of memory to CPU core is not good enough. While we have a great deal of processor performance, particularly through the quad-core technology available in Intel's Xeon™ processors for more than a year now, the memory capacity of DP machines could not always live up to the desired ratio. Recently, however, some new DP servers have come on the market (e.g. the Sun Microsystems x4150) which implement the full specification of the memory interface, providing up to 64GB of memory for a dual processor server hosting 8 cores altogether. While I can hear you saying already that this would need the most expensive memory modules (4GB ones), I can tell you that I was pleasantly surprised by an offer I recently got from one of our suppliers: the full 64GB, for one of our lab servers, for less than 5900 Euros (8400 US$; as you see, I'm coming from Europe). 32GB of memory would have been just below 2100 Euros (2940 US$, 2GB modules). Obviously prices may vary, but I just wanted to give a ballpark figure for the cost of a DP server containing 4-8GB of memory per core. So with these types of systems you should easily be able to expand your 4-way virtualization pool at much reduced cost.
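Working through the quoted prices (the euro figures and the 8-core/64GB configuration are from the paragraph above; the per-GB and per-core arithmetic is mine):

```python
def memory_cost_breakdown(total_gb, cores, price_eur):
    """Per-GB and per-core view of a server memory quote."""
    return {
        "eur_per_gb": price_eur / total_gb,
        "gb_per_core": total_gb / cores,
    }

# 64GB (4GB modules) for a dual-socket, 8-core server at ~5900 EUR
full = memory_cost_breakdown(64, 8, 5900)
# 32GB (2GB modules) at ~2100 EUR
half = memory_cost_breakdown(32, 8, 2100)
print(full)  # ~92 EUR/GB at 8 GB/core
print(half)  # ~66 EUR/GB at 4 GB/core
```

The premium for 4GB modules turns out to be moderate, which is why a fully populated DP box starts to look attractive for the virtualization pool.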



But don't get me wrong here: I'm not suggesting that the complete server virtualization pool in a DC should consist only of 2-way systems. I just wanted to point out that with the decreasing cost of higher density memory modules and the increasing number of memory slots in dual processor servers, you have a nice option to select the server type that best fits your needs. If you have, for instance, applications that need a lot of aggregated CPU performance or a lot of I/O performance, you would surely be better off using a 4-way server. But I'm sure there will be a blog soon covering the considerations of using 2-way versus 4-way servers in the virtualization space.



If you follow my train of thought, one thing should be obvious: analyzing the computing resources used by your current applications, and capacity planning to meet the needs of your future business, are the keys to success for your virtualization strategy. And here we come back to the Gartner model. IT can only deliver business value if it understands the business needs of the company.



When speaking about agility, you obviously need the ability to easily migrate a virtual machine from a 2-way system to a 4-way system. With the recent introduction of the Intel Xeon™ 7300 processor based 4-way servers, this is possible too. The Xeon 5100/5300 processors share the same micro-architecture (Intel Core™ architecture) as the 4-way servers (Xeon 7300 processor), which means you can live migrate VMs between DP and MP systems very easily. This live migration is offered in the various management suites from virtualization software vendors: in VMware's ESX it is called VMotion; at Virtual Iron, for instance, it is called LiveMigrate.



Those of you who carefully read Intel's announcement might rightfully say that all of the above is true, but that Intel has now introduced the new Xeon 5200/5400 series, still using the same Intel Core™ micro-architecture, but with an extended instruction set, particularly for the SSE instructions. And you are right: if an application uses these new instructions, you cannot live migrate a VM from, say, a Xeon 5400 back to a Xeon 5300 based system. But here the Intel Architecture offers some hooks (technologies) to still make this possible. For VMware, for instance, we have implemented a new functionality called VT FlexMigration. Since ESX has such long experience in virtualization of the Intel Architecture, it still uses binary translation for 32-bit OSs instead of Intel's VT-x (the hardware-supported virtualization). In VT-x, Intel offers the ability to mask some CPU functionality so that the OS/application, when running in a virtualized environment, only sees a certain instruction set and can thus easily be live migrated from a Xeon 5400 to a Xeon 5300 processor based system. VMMs like Virtual Iron or Xen may use this feature because they require VT-x. In order to enable the same functionality in ESX, Intel worked closely with VMware and implemented a hardware hook allowing VMware, even in binary translation (meaning outside VT-x), to mask certain capabilities (here SSE4) from being seen by the OS, hence making sure the OS uses only those instructions also available in the Xeon 5100/5300/7300 processors.
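The masking idea can be sketched conceptually. This is an illustration only: real VMMs mask CPUID feature bits in hardware or in the hypervisor, and the feature-set names below are simplified assumptions, not Intel's actual CPUID layout:

```python
# Conceptual sketch: a VMM exposes to guests only the intersection of CPU
# features across the pool, so a VM started on a newer host never uses an
# instruction that an older host lacks.

XEON_5300 = {"SSE2", "SSE3", "SSSE3"}          # simplified, hypothetical feature sets
XEON_5400 = {"SSE2", "SSE3", "SSSE3", "SSE4"}

def pool_feature_mask(hosts):
    """Features the VMM may safely expose: the intersection across all hosts."""
    return set.intersection(*hosts)

def can_migrate(vm_features, target_host):
    """A VM can move only if every feature it was shown exists on the target."""
    return vm_features <= target_host

mask = pool_feature_mask([XEON_5300, XEON_5400])
print(sorted(mask))                       # SSE4 is masked out of the pool
print(can_migrate(mask, XEON_5300))       # masked VM migrates anywhere in the pool
print(can_migrate(XEON_5400, XEON_5300))  # unmasked SSE4 blocks migration
```

The trade-off is exactly as described above: the guest gives up the newest instructions in exchange for free movement across mixed-generation hosts.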



With this in mind, you can set up a very powerful combination of 2-way and 4-way Intel Architecture servers, shared in a virtualized server pool and allowing live migration between them, as the basis for a flexible and agile infrastructure. What you need on top of this is the management software orchestrating the use of this server pool: products like VMware's Infrastructure 3 or their management and automation tools such as VirtualCenter; at Virtual Iron, for instance, this would be their Virtualization Manager. These tools allow you to set rules and policies to automatically react to changes in the virtualization pool, such as a change in CPU load or memory requirements, and to automatically move VMs between servers to keep fulfilling SLAs.



So I hope I was able to share my view of an agile infrastructure in the datacenter. I realize this is quite a hardware-centric view, but after all I still work for Intel, and server system oriented topics are the majority of my job.



I'm looking forward to hear your opinion or questions about it.



Best regards,










*Other brands may be claimed as the property of others



**Source: Gartner, Inc. "Infrastructure Maturity Model," by Tom Bittman. Gartner Data Center Summit, 2006.



As this is the first time I'm posting here, a quick intro: I started out as a hardware designer for a UK computer company, back in the days when the PC was still a grey tin box with a 4.77MHz 8088 inside. I have been with Intel now for more years than I care to think about, with much of this time spent working with OEMs and end customers focused on the server market across EMEA.


As I trawl through the press and listen to the industry analysts, one topic everyone is discussing is 'data centre efficiency' (even elsewhere on this forum: Intel IT Data Center Efficiency Initiative - Going Green, Data Center Efficiency). But what's not really clear is what defines an efficient data centre. Is it the efficiency of the servers, the cooling subsystems, the workload that can be handled in a given time, or the operational processes that are in place to run the data centre? And once you have decided what counts as 'efficient', how do you measure or quantify this efficiency?


Currently there are several approaches being considered by the industry to measure data centre efficiency, and I thought it would be worth spending some time looking at three elements that can affect it: power, utilisation and process. Given the complexity of the topic I plan to take this in bite-sized chunks (rather than write a mass of text and lose the thread). So in this blog I will cover power, and will come back to the other elements in a subsequent posting. If you think there are elements of DC efficiency that I am missing, please feel free to chip in and provide your insights.


Power Efficiency - measuring the ratio between the facilities load (cooling, power conversion etc.) and the IT load (compute/storage/infrastructure). Typically this approach focuses on the ratio of electrical power consumption of the various elements within the data centre. With the current focus on the 'environmental & green' aspects of data centres, this seems to be the area where most of the attention on data centre efficiency is concentrated.


If you look at the average data centre today, it's not just the compute infrastructure that consumes the watts: power gets consumed by the cooling systems and air conditioners, voltage conversion and battery storage, lighting etc. All this contributes to the 'facilities load'. For many IT managers this does not hit their IT budget, and they may not even see the power bill from the utility company, so they have no idea how much power is consumed by these key elements of their data centre. Current estimates indicate that upwards of 50% of the power that comes into the average data centre gets 'lost' in the facilities load; more details here & here.


There are several groups looking to quantify energy efficiency. The Green Grid is working on a metric called PUE (Power Usage Effectiveness) to measure the ratio of the power consumed by the whole facility to the power delivered to the IT equipment in the data centre; details are in their white papers here. The Uptime Institute is doing something similar, various government institutions are getting interested as well, and there's an extensive US govt white paper (if you have a few hours spare to digest its 150 pages). In addition, the European Union is working on a Data Centre Code of Conduct.
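The Green Grid's PUE is the ratio of total facility power to IT equipment power, with DCiE as its reciprocal; a minimal sketch (the sample wattages are hypothetical):

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw, it_equipment_kw):
    """DCiE: the percentage of facility power that reaches the IT equipment."""
    return it_equipment_kw / total_facility_kw * 100

# A facility drawing 1000 kW where only 500 kW reaches the IT equipment
print(pue(1000, 500))   # 2.0 -- matching the '50% lost in facilities load' estimate
print(dcie(1000, 500))  # 50.0
```

A PUE of 2.0 means every watt of compute costs a second watt in cooling, conversion and lighting, which is exactly the facilities-load problem described above.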


The server OEMs are also working on a benchmark for measuring perf/watt. These are great for measuring how good a server is on a test workload and how many transactions it can deliver for a given power input. With the increased focus on energy efficient performance, this metric will become more and more important to the specifiers and purchasers of servers. With Intel's latest generation 45nm quad-core Xeon processors we continue to drive up the performance a processor can achieve for a given watt input; the challenge for the rest of the industry now is to lower the overall power consumption of the other elements within the server, and to increase the throughput of the storage and I/O subsystems to complement the increased processor performance. But at the end of the day, does a good perf/watt for a server indicate that a data centre is efficient?


What's missing from this approach is that often no consideration is made of the utilisation of the servers within the data centre. Consequently it is possible to achieve 'good' power efficiency numbers but have low server utilisation, and hence not extract the most workload out of the data centre. Here in EMEA we have initiated a Data Centre Efficiency Award to start to get a handle on how best to identify DCs that are running best practices and delivering both power and utilisation efficiency.


I guess the question at the end of the day is: do you consider your data centre efficient, and how are you quantifying this efficiency?


Watt do you care about more?

the Power Consumption of your servers (watts) or the Power Efficiency of your servers (performance / watt)

... or maybe you prefer the Performance per Watt per SqFt argument




I have spent a lot of my time the last several years discussing this topic with IT professionals around the world - and there are a lot of varying opinions.



I believe that Performance per Watt is a better measure of overall value for the data center and server room.

The power consumed by a server is an important measure, but power only comparisons can be misleading.



Example: If server 'A' consumes 50W less power than server 'B', it can save IT $79 per year per server in power and cooling costs (assuming $0.08/kWh power costs and cooling costs equal to power costs). Scale that $79 saving per server across a data center with thousands of servers, and it can be a pretty impressive number.
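A quick sanity check on that arithmetic (a sketch using the stated assumptions; note that at exactly $0.08/kWh the figure comes out nearer $70, so the quoted $79 presumably reflects a slightly higher utility rate):

```python
def annual_savings(delta_watts, rate_per_kwh):
    """Yearly saving from a constant power delta, doubled to account for
    cooling costs equal to power costs (the stated assumption)."""
    kwh_per_year = delta_watts / 1000.0 * 24 * 365
    return kwh_per_year * rate_per_kwh * 2

print("$%.2f" % annual_savings(50, 0.08))  # $70.08 at $0.08/kWh
print("$%.2f" % annual_savings(50, 0.09))  # $78.84 at $0.09/kWh
```

Either way, the per-server saving is small; the point of the example is that it only becomes interesting multiplied across thousands of servers.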



However, if the server with 50W lower power delivers lower application performance ... is the power saving worth it? The answer of course depends ... but in my experience the answer is generally a resounding no.



Example: What if server A (the 50W lower power server) underperforms server B by 33%? This means you need to deploy more 'A' servers to get the same performance as 'B' servers. In fact, with a 33% performance advantage, you need only 3 'B' servers for every 4 'A' servers. The higher performance per watt delivered by server B reduces acquisition costs, reduces power consumption (fewer servers), minimizes space and eases manageability. This example is shown graphically above.
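Running the numbers on that example (the 33% gap and the 3:4 server ratio are from the paragraph; server B's 300 W draw and the performance units are illustrative assumptions of mine):

```python
import math

def servers_needed(target_perf, perf_per_server):
    """Whole servers required to reach a target aggregate performance."""
    return math.ceil(target_perf / perf_per_server)

# Assumptions: server B draws 300 W and delivers 4 performance units;
# server A draws 50 W less (250 W) but delivers only 3 units.
target = 12
b_count = servers_needed(target, 4)
a_count = servers_needed(target, 3)
print("B: %d servers, %d W total" % (b_count, b_count * 300))  # 3 servers, 900 W
print("A: %d servers, %d W total" % (a_count, a_count * 250))  # 4 servers, 1000 W
```

Despite each A server drawing less, the A configuration ends up consuming more total power for the same work, which is the performance-per-watt argument in a nutshell.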



What do you think? What power and performance metrics do you look at before purchasing servers

... Lower Power or Higher Performance per Watt?





Data Center Efficiency

Posted by L_Wigle Nov 14, 2007

Over the past months, you have likely heard about the challenges that data centers in the U.S. and worldwide are facing. Energy costs, typically around 10% of an IT budget, could account for 50% of the average IT budget in just a few years.1 59% of IT organizations cite power and cooling as a growth limiter.2 While those challenges may seem daunting, Intel sees many opportunities to improve energy efficiency in nearly every aspect of data center operation that consumes power.


Intel's recently announced Harpertown processors, based on 45nm technology, go a long way toward addressing the issues data centers face. Because they deliver up to 2X the performance-per-watt of prior Intel® dual-core processors in the same power envelope and the same socket, the Intel Xeon® processor 5400 series enables a data center to double its compute capacity, or maintain its current compute capacity using half the number of servers. Either way, the energy efficient performance improvements are quite impressive.



What is often lost in the discussion of processor power and performance is the fact that processors are a small but important part of a larger data center system. This system comprises the IT equipment (servers, networking, and storage) as well as the non-IT support equipment (power delivery, cooling and air handling, and other environmental controls). By looking at the data center holistically, IT organizations can better manage increased compute demands, lower their energy costs and reduce total cost of ownership.



The IT industry, driven by the work of groups such as The Green Grid, is developing a series of metrics to assess data center efficiency as the ratio of useful work output to the total power consumed by the entire facility3. This holistic view of where the energy is being used has identified large potential efficiency gains in the operational practices of getting power to the IT equipment, where in many cases as little as 50% of the energy reaches the IT equipment.



There are a number of approaches to increasing data center efficiency based on this holistic view, and they vary widely in terms of investment required and energy savings. In addition to our energy efficient processors and systems, Intel is working collaboratively with industry partners and government organizations to accelerate the development and adoption of technologies, products and best practices that can improve data center operations. Examples of options to consider include:



  • Purchasing higher efficiency power supplies and mother board components

  • Installing higher efficiency Uninterruptible Power Supplies and other power conversion equipment

  • Monitoring energy consumption and environmental conditions to develop operational energy policies

  • Employing Virtualization to increase utilization and consolidate servers in ratios up to 30:1

  • Use of hot & cold aisle layouts and floor vent tiles to prevent hot air from mixing with cold air

  • For a more detailed list of ways to increase the efficiency of your data center, click here


How well do you understand the total energy consumption and efficiency of your IT facility? It's likely that there are a number of ways that you can improve your operations to handle the increasing rack densities and growing demand for compute capacity - and make the CFO happy because the power bill goes down as well...


1. Source: Gartner, May 2007

2. Intel DC Users Group 06

3. The Green Grid Data Center Power Efficiency Metric.



I'm excited about our server room blogs as a way for us to get feedback from you quickly. I would love to get your comments on a technology concept demo we did over the last 6 months.


I have been looking at the Internet video phenomenon over the last year. One interesting usage model is turning most of what we see on TV today into video on-demand (wiki has a good description), either over the web (e.g. Google YouTube) or provided by a service provider via IPTV (e.g. AT&T U-verse). At Intel, we of course want to understand how we can optimize the on-demand video workload on Intel server technology.



On-demand video deployments today are engineered largely around three resources:

Server: typically a 2-socket rack mount server with dual-core processors per socket and 8GB DRAM (the workload is writing new videos to disk, reading requested videos from disk, formatting the video packets, and transmitting video to clients)

WAN: using GE ports, some configurations pushing to exceed 10GE

Storage: as a JBOD, in the past SCSI, moving to SAS and SATA Hard Disk Drives (HDDs)

Understanding this, we challenged ourselves to create a next generation configuration using our leading technology.



Here's what we ended up with:

Server: Fit into a 2U form factor with an integrated JBOD

WAN: Replace GE with the Intel Dual 10GE NIC, with a target of 20Gbps throughput

Storage: Replace HDDs in the JBOD with prototypes of the Intel enterprise solid state disk drives (SSD)


We worked with Kasenna to pull the technology together into a prototype demonstration. Actually, they did most of the work, as the experts in high throughput on-demand video streaming. In the test, Kasenna achieved about 16Gb/s of streaming throughput; in IPTV terms, approximately 4000 simultaneous standard definition (3.75Mb/s MPEG2) streams. The demonstration largely focused on the HDD versus SSD engineering.
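The stream count follows directly from the throughput figures quoted above (the calculation is mine):

```python
def concurrent_streams(throughput_gbps, stream_mbps):
    """Number of fixed-bitrate streams that fit in an aggregate throughput."""
    return int(throughput_gbps * 1000.0 / stream_mbps)

# ~16 Gb/s aggregate throughput, 3.75 Mb/s per SD MPEG2 stream
print(concurrent_streams(16, 3.75))  # 4266 -- i.e. 'approximately 4000' streams
```

The same division also shows why high-definition streams at several times the bitrate would cut the subscriber count proportionally.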


If you're not familiar with SSD technology, wiki has a good overview. Intel also discussed our NAND based solid state drive technology at fall IDF: Pat Gelsinger introduces the technology about 40 minutes into his Tick-Tock - Powerful, Efficient, and Predictable presentation, and Knut Grimsrud gives a good overview of the NAND technology in his Challenges and Opportunities for Non-Volatile Memory in Platforms presentation.



I won't post the gory details of the configurations today; if you're interested, send me email. The simple net: it took approximately 60 15K RPM HDDs to achieve the same throughput as 12 Intel prototype SSDs. Two major takeaways:

1. Intel solid state drives look to be ideal for high throughput workloads like on-demand video that require random access from disk. Kasenna achieved about 5 times the throughput on each solid state drive compared to the hard disk drives.

2. In this case, the Intel SSD configuration lowered the peak power of the configuration (disks, server, NICs, memory) to about 1/3 of the HDD configuration.
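The two takeaways reduce to simple ratios (the drive counts and the 1/3 power figure are from the text above; the arithmetic is mine):

```python
def per_drive_ratio(hdd_count, ssd_count):
    """Per-drive throughput advantage when both configurations deliver
    the same aggregate throughput."""
    return hdd_count / ssd_count

# ~60 15K RPM HDDs matched the aggregate throughput of 12 prototype SSDs,
# consistent with the ~5x per-drive figure quoted above.
print("%.0fx throughput per drive" % per_drive_ratio(60, 12))
```

Fewer spindles for the same throughput is also where the power saving comes from: the disks themselves are a large share of the configuration's peak draw.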



The demo also raised a number of other questions, such as whether the higher performance of the SSDs could reduce the amount of memory required in the server. No conclusions on this yet.



This was my first step in understanding the advantages of solid state drive technology for a server application. My conclusion: Intel NAND based solid state drive technology looks to be a promising way to achieve higher throughput and lower power compared to hard disks. I'll be posting more examples in the future of where SSDs look to be a good fit for applications. I would be interested in your feedback on this concept demo, and on server applications where you see SSDs as having high value.




I was recently going through IDC market data1 and realized HPC now represents 29% of the server market. This is a significant number that has grown from ~20% to 29% in only 6 months. The first thing you are probably saying to yourself is 'why do I care?' Well, take it from me, coming from Intel: it is a big deal. Have you ever tried negotiating with someone when you have very little leverage? You don't get very far, do you? That's how HPC has been up until now. When trying to design products for HPC, the question comes back: what is the return on investment (ROI)? When we were just a sliver of the market, that was hard to justify. Now that we are 29%, guess what: we have leverage. So guess what, HPC community? You can now say it loud and say it proud. Tell us what you need to make the HPC community stronger, faster and more efficient. We're listening.



Performance continues to be at the top of almost everyone's list. Intel was the first to introduce quad core, and we are now launching our first 45nm processors on November 12th. The new manufacturing process has enabled Intel to almost double the transistor count versus the current 65nm process. We are increasing the cache and bus frequency to deliver better performance for HPC applications. How will the added cache and faster bus benefit commercial HPC applications? They should most certainly provide faster results for their customers, and that is always a good thing.

As we deliver faster processors, we continue to investigate the core count. When we first introduced the Core micro-architecture to HPC, we also introduced quad core, which has been seen as a very good progression in our silicon roadmap. What are some of the benefits multi-core brings to market? By adding cores, we are able to maintain power envelopes while increasing performance. Are the increased cores helping the HPC market? My immediate response is: of course they are. But after talking to ISVs, I begin to second-guess that immediate response. Commercial applications are licensed to customers by processor, by core, or by MPI instances. If application performance does not scale with the increased core count, does it make sense to use a quad core processor? Will a dual core processor work better than a quad core for certain applications? As we drill down on this dilemma, we are quickly realizing the answer is: sometimes. Sometimes a dual core will be better and sometimes a quad core will be better. There is no simple answer to cores and what the right number is for your environment.



Another area of growing interest in HPC is the performance between nodes. Our latest chipset, the Intel® 5400, offers PCI Express generation 2, which provides twice the bandwidth of generation 1 and is ideal for quad data rate InfiniBand™. Gen 2 will also provide great support for visualization applications. The Intel® 5400 chipset and the Intel® Xeon® processor 5400 series create our first HPC-targeted platform. As we progress to our next generation, we need to ensure the HPC voice is being heard. As InfiniBand continues to grow in the HPC market, does it eventually replace GbE as the interconnect of choice? Does an HPC optimized product support IB down? What about Gigabit Ethernet? How about 10G Ethernet? What is the market willing to pay for the increased performance?



There are lots of questions that need to be answered when creating an optimized HPC platform.  One thing is for sure: HPC is now a big-time player and can't be ignored.  If you support high performance computers in your data center, make sure your wants and needs are being heard.



¹ IDC Q2 Server Tracker and IDC Q2 QView



Intel Uncut: The engineers and architects explain how Intel got down to 45nm.

Moore's Law has pushed the physical limits of current materials. Intel has used hafnium-based materials, allowing for smaller devices without gate leakage. As Kelin Kuhn says, the technology is getting nearly "incomprehensible". With 45nm technology we are working at a scale where 400 transistors can fit in the area of a human blood cell. Modern processors contain hundreds of millions of working transistors, and devices in the fab are being produced at 1/10th the wavelength of light ... truly amazing.

Welcome to The Server Room!  We've put together some quick videos today to put a name to the face of some of the bloggers and add a more personal touch to our interaction.  Arijit Bandyopadhyay, Nikhil Sharma, and yours truly are found below... Enjoy!  and BLOG AWAY!






Eco-Technology - what does this term mean and why would Intel use it instead of "Green Computing" or something more common?


Moore's Law gives us the ability to deliver more performance and greater energy efficiency with each generation of microprocessors - and reducing the energy consumption of our products is far and away the biggest impact Intel can have on carbon footprint.


We recently completed an analysis of a high-performance computing configuration that was originally deployed in 2002 (coming in at number 17 on the Top500 Supercomputer list for that year) and is still in use today. That configuration consists of 512 servers fitted into 25 racks, uses 128 kW, and delivers 3.68 TFlops peak on the LINPACK benchmark. Today, that cluster could be replaced with a single rack of roughly 53 blade servers drawing 21 kW and still delivering that 3.7 TFlops of performance (Energy efficiency in the data center). More on whether that level of density is appropriate for everyone later.....
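Running the numbers from that comparison makes the efficiency gain explicit. This is just a recomputation of the figures quoted above, sketched in Python:

```python
# Back-of-envelope recompute of the 2002-cluster vs. blade-rack comparison.
old_tflops, old_kw = 3.68, 128.0   # 2002 cluster: 512 servers, 25 racks
new_tflops, new_kw = 3.70, 21.0    # ~53 blade servers in a single rack

old_eff = old_tflops * 1000 / old_kw   # GFlops per kW
new_eff = new_tflops * 1000 / new_kw

print(f"2002 cluster: {old_eff:.1f} GFlops/kW")
print(f"Blade rack:   {new_eff:.1f} GFlops/kW")
print(f"Efficiency gain: {new_eff / old_eff:.1f}x, power cut by {1 - new_kw / old_kw:.0%}")
```

Roughly a 6x improvement in performance per watt, with about 84% less power drawn for the same LINPACK throughput.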



Think of the incredible increase in productivity - and the new innovations - made possible by this phenomenal growth in compute capacity: the explosion of information available at our fingertips, and the shift of many aspects of our global economy to bits instead of physical materials.



And that's really the point of "Eco-Technology," which is defined as an "eco-sensitive" approach to technology that takes sustainability into consideration in both the manufacture and the end use of technology.



So we're increasing the energy efficiency of our products and eliminating potentially harmful materials such as lead and halogen from our manufacturing, but as an industry we're also continuing to contribute to productivity and transformation. Both are important.


As companies explore their IT sustainability programs and we all work to define what green computing should mean, what are your thoughts on how to balance the imperative to do more work and deliver more business value against the rising costs of energy and our collective desire to slow climate change? The US Environmental Protection Agency is contemplating an Energy Star rating for servers. If you were in charge, what criteria would you use to award the label?



Hi all!


I'm Trevor Lawless, community manager for the Server Room and manager of Performance Benchmarking within Intel's Server Platforms Group. Because we regularly visit with IT, I am excited to bring a server-specific forum to Intel's communities website. My desire here is to share the expertise of some of our key team members and make the Server Room a knowledge center for you, the IT manager.



In the first few weeks of the Server Room we will be covering a number of topics via discussions and blogs from our experts. We are starting this week with discussions around Intel's new 45nm Hi-k metal gate processor-based platforms, and why we think they are "Optimized for you". You will see Intel experts sharing their opinions on platform performance, power benefits, and our push to be eco-friendly.  Check out Shannon Poulin's blog here.  More topics are on the schedule in the coming weeks, such as "Optimized for HPC": Intel's next generation CPU and chipset combination; "Optimized for Datacenter": the future datacenter and power benefits at datacenter scale; Virtualization: "Where Silicon and Software Meet"; and "Performance Optimized for Workstations": the future workstation.



I look forward to these discussions, and your comments, in the coming weeks. Happy blogging!






Before, there were few things, and they were simple. There were few roads to take, few choices at hand, few decisions to make - so most of the time we could find a solution that fit our needs easily. Now, there are a lot of things, and they are complex. I wonder: did Lorie or Shannon ever imagine, 20 years ago, that we would need a search engine to search the Internet?


How did we get here? - I think as humans, we love new ideas, new experiences, and new perspectives. So we build new things, we innovate, we create, we add value. As technologists, we know that innovations get complex.


But the real question is where do we go from here?


It sounds to me like we need to re-learn the concepts of "fit" and "choice" all over again. Simply put, finding the right fit among the myriad of choices is a lot of work these days. Our tech background makes us great pattern matchers. We think we know what fits our needs perfectly. But do we really?



At Intel, my job is to figure out what matters to enterprise applications and how that relates to platform performance. Some applications "fit" perfectly with the architecture; some do not. I work closely with the software teams within Intel as well as with software vendors whose enterprise applications run on our platforms. I have learned that evaluating systems is not as simple as it might seem. Because computer performance depends on the workload, it is necessary to understand just what your needs are so that you can make the correct trade-offs.



There are a lot of performance numbers out there. Just because one set of numbers might not make sense does not mean you cannot find out what is right for you. Look at all the numbers; make your own calculations. Find your "fit". Understand your trade-offs and choose well.



By the way, I am piling up a stack of enterprise application "must haves": scalability, reliability, performance per watt. When the Server Room came to life, I thought: wow, here is an opportunity to share with our customers and learn their needs better.



Stay tuned for what I think matters in the world of performance analysis, benchmarking, and enterprise applications, along with some case studies.



Leading up to the launch of our 45nm processors, I was often asked "what does this technology mean to my business?" or "what does it mean to me as a consumer?" My usual responses of improved performance, better performance/watt, and better price/performance were all very true. But as I write this, I am challenged to find more depth in that response. The solutions that you, the technology industry, collectively deliver include software, hardware and, luckily for Intel, processors that are now based on 45nm technology. We are on a line sloping up and to the right with respect to delivering more performance over time. But so what? How can we look at single points on that line and reflect on their significance?


There are a number of examples where things start out revolutionary and simply evolve from there: flight, automobile travel by combustion engine, the Internet. One day you walked, wagoned, or rode a horse from place to place; the next day you drove. One day you drove; the next day you flew. One day you wrote a letter; the next day, an email. All of these had some groundwork leading up to them, for sure, but the new normal existed the day they became ubiquitous. Writing a letter, putting a stamp on it, and dropping it in a mailbox is now a lost art that we teach kids while we also explain to them what cassette tapes, rabbit ears, and wired Ethernet are.


When was there enough performance, with low enough power, at a low enough price point for me to buy a handheld global positioning system (GPS) unit that I can use to go geocaching with my kids? Clearly it wasn't ten years ago, since I suspect the device may have existed for the military but wasn't quite portable enough for me or at a low enough price point to catch my eye. I am sure everyone can remember the first cell phones, which looked like a car battery with a phone stuck on top. There are countless examples of points on a price/performance/power curve that lead to evolutionary or revolutionary products that change the way people live, work, or play.


These new 45nm components are compelling, and surely enterprise customers are going to find that they can run databases faster, develop software quicker, and process transactions faster. Financial services companies will use these new products to execute faster trades; that in turn will allow them to win share against slower competitors, and it will show on their bottom line. Oil and gas companies will use these new products to more efficiently search for, locate, and model the size of energy reserves. Search companies will use these products to rank pages, target online consumers, and drive advertising-based commerce. Those things are evolutionary and allow companies to improve what they are already doing.


What are the revolutionary things that we will look back on and say "without the price/perf/watt that 45nm processors delivered in November 2007, xxx would not be possible"? Are you working on it? The technologies we develop constantly look to improve the present while also keeping an eye on the future. They are optimized for you, the developers and consumers, because quite frankly we are fascinated with what you are doing today and very interested in what you are going to do tomorrow with all of the high-performing, low-power products we are launching this month.


One last thing: if you're working on the next Google-like revolutionary online platform, drop me a note. I might want to alter my investment strategy :-)
