
I recently spoke with a large financial customer that has several hundred SPARC boxes (mostly inherited from an acquisition). These systems are a challenge: they are aging - some running out of available maintenance - slow, old, and the expertise in the company just doesn't extend to this architecture.


They were also very proud of their virtualized Xeon architecture, where they could move VMs quickly to maximize efficiency and optimize resources. I think it is time to bring these two together.


So given 500 Solaris servers:

About half of these are running enterprise applications - like Oracle(tm) - that run just great under Windows or Linux. Move these today.

Of the other half, most are - performance-wise - tiny servers. You could put dozens of them - maybe all of them - in VMs on just a few large Xeon servers. (Don't forget about the phenomenal virtualization performance of the Xeon 7400 that Intel announced last week at IDF.)


So how do I move these custom Solaris SPARC-based physical servers into my super-efficient Xeon-based virtual machines?

Three ways:

1) Recompile the apps for Solaris 10 - which runs great in a VM on your virtualized pool.

2) Use Transitive's QuickTransit to move the binaries to Solaris 10 or Linux VMs in the pool.

3) Move to the Windows or Linux version of the software, or replace it with software that performs the same business function.


Presto - 500 physical legacy servers - collapsed into a more efficient, more manageable, more modern pool of resources. What will you do with all the free space?
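To make the consolidation math concrete, here is a rough capacity-planning sketch in Python. The utilization and capacity numbers are illustrative assumptions for a back-of-the-envelope estimate, not measurements from the customer described above:

```python
import math

def hosts_needed(vm_count, avg_vm_load, host_capacity, headroom=0.8):
    """Estimate how many virtualization hosts a consolidation needs.

    vm_count      -- number of small physical servers to convert to VMs
    avg_vm_load   -- average load of one VM, in normalized compute units
    host_capacity -- compute units one large Xeon host provides
    headroom      -- fraction of each host we allow ourselves to fill
    """
    usable = host_capacity * headroom
    return math.ceil(vm_count * avg_vm_load / usable)

# Illustrative numbers: 250 tiny SPARC servers, each needing ~2 units
# of compute, consolidated onto hosts providing ~100 units apiece.
print(hosts_needed(250, 2, 100))  # -> 7
```

Even with generous per-VM allowances and 20% headroom per host, a few hundred small servers collapse into a single-digit number of machines.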

Everyone is talking about "green energy" and "power efficiency" these days. Reducing carbon footprints, renewable energy, CFLs, solar power, biking instead of driving, etc... the list goes on forever. Many people are excited to do something to change power consumption - but as a server administrator, are the proper tools in place?


Many of you have probably experienced the power/efficiency trade-off at home. When the summer gets hot, many of us run to the thermostat and set it accordingly. When it's REALLY hot outside, we tend to twist the dial cooler - knowing all along that our electric bill will most likely be higher at the end of the billing cycle. So, what do we do?


Some of us just live with the higher bills, and some of us turn off the A/C and struggle in the heat - but I'd hope that most of us set the thermostat to a 'livable' temperature. It may not be the coolest, but it's enough to do the job and keep the electricity bill at a more moderate level - in a sense, a happy medium. Today's thermostats are programmable, taking a lot of the guesswork out of our hands and automating many of the day-to-day temperature adjustments our parents had to make... Intel server platforms are evolving in this realm as well!



As a server admin, do you have the tools and technologies to reduce power consumption? There are several avenues addressing this issue, and I suggest reading the post from Lori Wigle on the topic. The datacenter is different from the desktop: server admins aren't likely to enable sleep states to save energy, but rather to increase utilization on fewer servers, maximizing performance output relative to server footprint.


When was the last time you looked at your server's power footprint? Do you even know how much power you're using? Some of you may have power meters and can monitor a server (or a few servers) at a time... but how many of you can monitor a rack of servers, or a datacenter?


What if this capability were built into your current-generation Xeon server platform? The good news is that modern processors DO have power management capabilities. Based on the ACPI specs:


P0 Performance State

While a device or processor is in this state, it uses its maximum performance capability and may consume maximum power - in other words, the processor uses its maximum power allocation.

P1 Performance State

In this performance state, the performance capability of a device or processor is limited below its maximum, and it consumes less than maximum power.

Pn Performance State

In this performance state, the performance capability of a device or processor is at its minimum level and consumes minimal power while remaining in an active state. State n is a maximum number and is processor or device dependent. Processors and devices may define support for an arbitrary number of performance states not to exceed 16.


Each Pn state is a "notch" in the processor's performance powerband (as seen below).




As these performance notches are set, the processor lowers its power envelope and reduces the power it draws, in order to save energy. Just as a note: EIST must be enabled in the BIOS for this feature to work on your platform.
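To get a feel for why each notch saves energy, recall that dynamic CMOS power scales roughly with frequency times voltage squared (P ~ C·V²·f). The frequencies and voltages below are made-up illustrative values, not specifications for any particular Xeon part:

```python
# Toy model of relative power per P-state, using the rough CMOS
# relation P ~ C * V^2 * f. All numbers are illustrative examples,
# not real Xeon specifications.
p_states = [
    # (name, frequency_ghz, voltage_v)
    ("P0", 3.0, 1.30),
    ("P1", 2.7, 1.25),
    ("P2", 2.4, 1.20),
    ("P3", 2.0, 1.10),
]

# Power at P0 is the baseline; each lower notch drops both f and V,
# so power falls faster than performance does.
base = p_states[0][1] * p_states[0][2] ** 2
for name, freq, volt in p_states:
    rel = freq * volt ** 2 / base
    print(f"{name}: {freq} GHz @ {volt} V -> {rel:.0%} of P0 power")
```

Because voltage enters squared, even a modest frequency step down buys a disproportionate power reduction - which is exactly why stepping through P-states is such an effective knob.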


If you attended Intel's IDF (Intel Developer Forum), you may have run into a few demos on datacenter power management. My booth showcased four current-generation Intel servers based on Bensley/Starlake Xeon DP boards and Xeon 5400-series (codename Harpertown) processors.


Here’s a quick video showcasing the demo – and just a note - we’ll be redoing this in a higher-quality format soon – so stay tuned!


Hopefully if you’ve watched the video – you’ve got some questions! The good news is that we have a new website from the Intel Software Network that is focused on Intel® Dynamic Power Datacenter Manager. The site lists the features, system requirements, downloads, and FAQ to get you started!


I’m looking forward to your feedback and questions!





Demos on Demand

Posted by dstickse Aug 26, 2008

IDF SF08-Demos are an excellent tool for getting your message across. At IDF we demonstrated Demos on Demand, which lets the message get out without having to take the equipment to the location. Demos on Demand allows our customers, fellow travelers, and corporate decision makers the opportunity to view our demos at any time from their location. Please view the IDF presentation below and come visit us at Demos on Demand.



IDF SF08-Online gaming and sports leagues are growing every day, and at IDF this week we had the opportunity to see how Intel is making an impact. I was visiting the Virtualization Community in the IDF Showcase, where I met Bjoern Metzdorf, Director of Information Technology at Turtle Entertainment, who was speaking with Alan Bumgarner of Intel. Check out the video for a major success story, including an 18:1 server consolidation ratio, 85-90% power savings, and no observable latency for the gamers - this is cool stuff!






If you want to learn more about Turtle Entertainment and the Electronic Sports League (ESL), Click Me

IDF SF08-HP and Intel announced collaboration on the world's best 4-socket TPC-C benchmark result of >634K transactions/min. Check out the video with Aaron Spurlock (HP) and Noe Garcia (Intel) discussing the HP ProLiant DL580 G5 server with Intel Xeon 7400-series (Dunnington) processors. Let us know what you think.




This week at Intel, we will host several thousand of the world's foremost software and hardware developers in San Francisco at our Intel Developer Forum. It is an event that requires months of planning, years of product development, and countless debates within our company. For many of us, Intel Developer Forum is a culmination of our life's work and commitment to technology innovation. Intel has been hosting this event for 11 years, and over those 11 years we have remained committed to providing the best venue possible for our colleagues in the industry. Each year we announce new technologies, introduce new products and technology usage models, and offer our opinion on the direction of the technology industry. As an Intel employee, this event serves as a tremendous source of pride in the hard work and imagination of our engineers, manufacturing geniuses, and executive leadership. It also provides us with a chance to continue to learn from the rest of the industry. I would also like to point out that it is an opportunity for our customers, colleagues, and developers to let us know when we have missed the mark.


Too much communication and collaboration is NOT always a good thing.


What continues to strike me (and humble me as well) is the technology industry's continued drive to innovate, solve problems, and deliver the best products the world has ever seen, regardless of the profit motive. I would hope other industries would embrace a similar model of rewarding innovation and ingenuity....


As the world (and Intel) transforms our technology infrastructure from static, immobile, expensive, and exclusive to mobile, collaborative, inexpensive, and inclusive, it becomes critical for all of us in the industry to define architectural transitions that take advantage of this paradigm shift. Virtualization can certainly play a key role in enabling more mobile application deployments, more collaborative operating system environments, faster time to production for applications, and better use of energy resources. It also has the potential to be a new frontier of collaborative innovation: utilizing more compute, I/O, and storage resources across a broader range of applications in a reduced carbon footprint, and reducing our mutual dependencies on any single source of software, hardware, and carbon-emitting suppliers. With time, it may even allow us to share compute resources with our suppliers and/or colleagues seamlessly and securely. From mobile phones to mobile internet devices to desktops to servers to mainframes and a host of additional embedded applications (think ATMs, food dispensers, checkout stands, gas pumps, etc.), virtualization with secure authentication could facilitate an interesting array of application usages for all consumers in the global economy. It is not a panacea to heal all wounds of the industry, but merely a facilitating technology capable of providing a collaborative underpinning for the technological world we have become. What is possible and what is here today are obviously not always aligned in time or execution. Yet I am always excited when any technology (from anyone, and particularly Intel) has the ability to bring us more efficient use of our limited carbon footprint and limited time on the planet, and sets the table for a world of autonomic continuity in which servers, desktops, and devices don't "die"... they are simply retired.


I can only hope our former chairman, the illustrious Andy Grove, was mistaken when he coined the phrase "Only the paranoid survive". I am of the opinion that survival is not enough; it is thriving and innovation that move us forward, breeding new ideas, new usages, new applications, and new languages for us all to enjoy.



I welcome your comments, thoughts and ideas....



Back at IDF for day 2, and still wrapping up some exciting news from yesterday. I met with Robert Zuber (IBM WW Marketing Manager) and Mike Moreno (Intel), and we talked about how IBM and the DB2 team, along with Xeon 7400-series processors, achieved the milestone of the industry's first 1M+ TPC-C result. Here's a video with Robert and Mike in the Technology Showcase.




Check out the official Transaction Processing Performance Council site for details on the system configuration and full results.

Big news today at IDF SF08... Intel Executive VP Pat Gelsinger delivered his keynote address here in San Francisco at the Moscone Center. Innovation is always a big topic at IDF, and today is no exception. Intel announced new world-record performance for the Xeon 7400-series processor, code-named "Dunnington". And just what are these world records, you ask? Watch the video for stunning results from Fujitsu Siemens (SPECint), Sun (SPECjbb), Dell (TPC-E), HP (4S TPC-C, SQL Server), and IBM with an industry-first 1.2 million TPC-C result on Intel architecture. Enjoy the video!



Update: 3:55pm.


More from the event..... currently debating "container" datacenters v. traditional "brick and mortar". Here's our esteemed panel:



Panel members include Jud Cooley (SUN Micro), Conor Malone (Rackable), Sigurd Anderson (IDC Architects), Bruce Myatt (Critical Facilities Solutions), & Phil Reese (Research Computing Strategist, Stanford Univ.)


Prior to that, the debate was around high v. low density in the datacenter; here's the panel:


Panel members are David Driggers (Verari Systems), David Moss (Dell), David Segar (IDC Arch.), Christian Belady (Microsoft), James Shuder (Oracle), and Mukesh Khattar (Oracle)





Hi all,


Jason and I are "live" from the Great Debates. The ICT Metrics panel just concluded. Here's a photo from the event:




Panel members include Kathrarine Kaplan (EPA), Andy Rawson (AMD), Kathleen Fieher (Intel), Magnus Herrlin (Ancis), Ray Pfeifer (SynapSense), and Bill Tschudi (LBNL). There was good discussion around the specific performance metrics that should be taken into account when measuring datacenter performance, as well as some interesting discussion of what the EPA is doing around the Energy Star program for IT.


Check out the live webcast here: Eco-Tech Great Debates LIVE

Hank Lea and I (Jason Davidson) will be covering the Eco-Technology debates at the Marriott Hotel in San Francisco on Monday, August 18th. We will also be hosting a blog talk radio show around this event at 5:15 PM.


In my tenure at Intel, I have had the pleasure of walking into major companies, educational institutes, non-profits, and government agencies to talk technology with many great people. "How green is this solution?" is a question on many minds lately - no matter what the topic of discussion. Being an engineer by trade and a scientist by education, I will typically dive into the details of each component's power consumption, and the discussion ends with some simple math multiplying a number of units by their thermal numbers. However, there is so much more to the overall impact, and as I walk in and out of these locations, I am always amazed at the number of larger issues with much larger impacts that go unresolved or overlooked. For more information on these items, here is a blog.


The Eco-Technology Great Debates provide a unique and entertaining forum to expand your understanding of today's most pressing data center and IT issues. Come hear industry leaders take up both sides of some of the hot topics facing the industry.








Attendees will learn about the pros and cons of high-density computing versus low-density computing and ready-to-use container data centers versus traditional brick and mortar data centers. There will also be a panel discussion on energy efficiency metrics, which will take a look at everything from chips to cooling systems and how they play a role in energy efficiency.










The energy consumption of servers and data centers has doubled in the past five years and is expected to almost double again in the next five, costing about USD 7.4 billion annually.1 There is no single right answer on what to do about this critical situation. Take an active step in solving this challenge by attending *The Eco-Technology Great Debates* and IDF at a special money-saving price. Register for IDF now and enter promo code *CLOECOT* (admission to the Eco-Technology Great Debate and a 2-day pass to IDF) or promo code *CLTECOT* (admission to the Eco-Technology Great Debate and a full conference pass to IDF). The debate takes place at the San Francisco Marriott Hotel (located across the street from IDF).
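As a quick sanity check on the trend above: doubling in five years implies roughly 15% compound annual growth. A minimal sketch - the doubling figure is from the EPA report cited below; the rest is plain arithmetic:

```python
# Implied compound annual growth rate if consumption doubles in 5 years.
growth = 2 ** (1 / 5) - 1           # ~0.149, i.e. about 14.9% per year

projected = 1.0                     # normalized consumption today
for _ in range(5):
    projected *= 1 + growth         # grow one year at a time

print(f"annual growth: {growth:.1%}, after 5 years: {projected:.2f}x")
```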




1 EPA Reports Significant Energy Efficiency Opportunities for U.S. Servers and Data Centers (August 2007).








Virtualization is growing up

Posted by K_Lloyd Aug 8, 2008

At a recent event the presenter, making reference to Pee-wee's Playhouse, said "virtualization is the word of the day". Of course, all of us older-yet-not-quite-mature individuals had to cheer every time someone said the V word. For you youngsters, I am sure an internet search will tell you more than you ever wanted to know about Pee-wee and the word of the day.


Virtualization is everywhere. If you have been avoiding it, I recommend *this* well-constructed summary as a background guide to everything you should already know.


From my perspective, two major trends are driving the maturity of virtualization. First, on the software side, there are now multiple players. Yes, VMware is the market leader, but there are credible and demonstrable solutions available from XenSource, Microsoft, SWsoft, Virtual Iron, and others. Virtualization software is increasingly differentiated by its management tools and solution breadth, not by the ability to virtualize.


The second significant trend is the change in hardware platforms. Both Intel and AMD have incorporated extensive features into their processors to support and simplify virtualization. Intel has extended this integration to its chipsets and network adapters with Intel Virtualization Technology for devices and Intel Virtualization Technology for Connectivity.
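On Linux you can check whether a processor exposes these hardware virtualization extensions by looking for the `vmx` (Intel VT-x) or `svm` (AMD-V) flag in `/proc/cpuinfo`. A minimal sketch - shown here parsing a sample string so it runs anywhere, rather than reading the live file:

```python
def has_hw_virt(cpuinfo_text):
    """Return which hardware virtualization flag, if any, is present."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "Intel VT-x"
            if "svm" in flags:
                return "AMD-V"
    return None

# Abbreviated sample of a /proc/cpuinfo "flags" line; on a real system
# you would pass open("/proc/cpuinfo").read() instead.
sample = "flags\t\t: fpu vme msr pae sse2 ssse3 vmx est tm2\n"
print(has_hw_virt(sample))  # -> Intel VT-x
```

Hypervisors perform this same check at startup; if the flag is absent (or the feature is disabled in the BIOS), hardware-assisted virtualization is unavailable.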


Virtualization has become the principal tool in the *data center* survival toolbox. No enterprise data center optimization can be effectively executed without the big V. This is sometimes referred to as virtualization 2.0, but like the web and many other 2.0 things, it is much more of a continuum between simple usage models (consolidating small servers) and advanced usage models (dynamic load balancing).


I met with three enterprise architects in the last week; all were looking at virtualization as the foundation for their dynamic, "utility-esque" compute platforms. To quote the chief architect at a major bank: "the most efficient and affordable server I run is a VM on a Xeon platform". Managed virtualization can deliver efficiency, affordability, and flexibility. At this point, you are either actively rolling out virtualization or you are not paying attention.

Admin Note: This is a repost on behalf of Ravi Subramaniam.


This is the first video in a three-part series. In this series, I touch on topics that are in the news - virtualization, grid computing, and cloud computing - each of which has had its day as the hot/hyped topic, or is having it now. In this first video, I focus on virtualization.


I am looking forward to an interesting dialogue on these videos and topics, and to learning from your insights as I hope you will from mine. I would really like to get your feedback and thoughts, as well as any other topics and considerations that would be relevant and important here.


The intent here is to try to demonstrate that these topics are in some way inter-related, though the implementations and embodiments are distinct and relevant to solving the problems in their respective domains. By understanding the connections, my hope is that one can envision new solutions and products (to solve new or higher-order problems) that may be created through appropriate compositions, or novel (re)organizations, of the implementations and technologies in these respective topics. Well... I am getting ahead of myself here...


To stimulate discussion for this blog, I would like to add and highlight a few points and questions...


  • Virtualization (at least for me) is a broad concept and, as highlighted in the video, has many modes, facets, or aspects - many of the topics of current interest are related through the application of some aspect of virtualization. For the sake of time and brevity, I chose to briefly mention the broader aspects and relate them quickly to the notion of virtualization that most people accept, i.e. what I would call 'machine virtualization'. Do you agree with the broad view of virtualization? An elaboration on your response (for or against) would be much appreciated.


  • Virtualization implies a relationship to the entity (physical or virtual) that it virtualizes - the ability to bind, manipulate, and manage these relationships is what helps realize virtualization benefits like agility, consolidation, right-sizing, etc. The foil in the video "How to create virtualization?" describes some of these relationships (i.e. creating a virtualization establishes the relationship describing the mode of creation). Do the ideas in the "How to create virtualization?" section of the video make sense - do you agree? Are there additional relationships (modes of construction) one may need to consider in the context of virtualization? Is there any product or product area that Intel could enhance by adding one of these virtualization modes or relationships, i.e. one that would solve (or improve the solution of) a problem you have (say, emulation)?


  • Machine virtualization is currently SW-based, with HW assists for performance and security. What do you see as the next inflection for machine virtualization? Is there an increased role for HW (as distinct from its current role of enhancing SW solutions)? Are there any models of virtualization that you see as better suited to implementation in silicon rather than SW?


Finally, I am also looking forward to any other feedback and discussion on the video and its content...


Thanks for your interest!


