Yes, Interop has Virtualization training.  It seems to be everywhere these days.  The question is, how much quality is in the quantity? 



Well, I am going to find out. 


I am scheduled to attend Interop next week (April 28 - May 2) and am signed up for over a dozen classes/sessions that have to do with Virtualization.  Here is a sampling:


- The ABC's of Virtualization: A shortcut Guide to Virtual Technology


- Virtualization and Security


- Virtualization beyond Consolidation: Driving down OPEX, Not just CAPEX


- Virtualization's Phantom Menace: Security


- Planning the move from physical to virtual: Migration and Deployment


- Storage Virtualization: What, Why, Where and How?


- Virtualized Data Centers - Beyond the Virtual Sum of Virtual Parts


- Microsoft's New Virtualization Strategy


- One for all and all for Xen



Here is the official Virtualization Track site for the event.



I'll post updates along the way... keep your browser running so you don't have to warm it up again. 



;o)

After coming back from IDF a couple of weeks ago, I've had some time to go through the mountains of online material (mostly presentations, plus a few interesting videos). This video is from Pat Gelsinger's keynote address and features Mendel Rosenblum from VMware. Pat and Mendel discuss new technologies in virtualization and demonstrate "Flex Migration". Just hit the play button below to view...

 

 

This is very interesting for those IT shops with multiple legacy platforms and new generation servers coming online. We will have more discussion on this topic in the future, and so in the meantime, let us know if you have questions on how this could benefit your datacenter.

ChrisPeters

45nm and Beyond

Posted by ChrisPeters Apr 23, 2008

Technology moves at such a rapid pace - it can often be mind-boggling. Even working directly with the product teams at Intel, I sometimes have difficulty keeping pace. The good news is that there is a tremendous opportunity today to be captured thanks to this rapid innovation, as well as a steady stream of advanced technology that IT can use to better support business and gain a competitive advantage. Recently I was interviewed by Tim Phillips from the Register about the current 45nm Quad-Core Intel Xeon products and the next generation Intel platforms based on the Nehalem processor.

 

A few years back, Intel fundamentally changed the way we design and develop our underlying microprocessor technology. We streamlined our innovation and accelerated its pace. Internally, we call this new model Tick-Tock. I like to call it shrink and innovate.

 

A "Tick" is a manufacturing process shrink that delivers smaller silicon with higher speeds, more transistors, and lower power consumption (example: moving from 65nm to 45nm process technology). The 45nm quad-core Xeon processors (available since Nov '07) utilize unique materials (a high-k dielectric) that are delivering industry-leading performance/watt as measured by the industry's first and only standard benchmark, SPECpower.

A "Tock" represents a more extensive architectural innovation (e.g., the Intel Core microarchitecture), introducing new micro-architecture features and functionality that fully utilize the higher transistor count set up by the shrink. For Intel Xeon-based servers, the next "tock" is Nehalem. In addition to the new micro-architecture based on 45nm, a system re-design will incorporate next generation memory, I/O, and virtualization technology for high performance, high bandwidth solutions compatible with today's leading software solutions.

Listen to my podcast interview to learn more about the benefits of using today's products and the timing of next generation Intel technology featuring Nehalem. Is this information useful to you? If so ... how? Have any questions?

 

I'd be happy to hear from you. Chris

 



 

 

Here's the 4th follow-up post in my 10 Habits of Great Server Performance Tuners series. This one focuses on the fourth habit: Know Your BIOS.

 

 

 

My last blog talked about beginning your system tuning by consulting a block diagram. The other thing you should always look at is your system's BIOS. Many server BIOSes these days allow you to configure options that affect performance. Like everything in the performance world, which BIOS options are best depends on your workload!

 

 

First things first, how do you find this "BIOS"? Most servers have a menu called "Setup" (or something similar) that you can access while the system is booting, before it starts loading the operating system. This "Setup" menu allows you to access your system's BIOS. Changes that you make here will affect how the operating system can utilize your hardware, and in some cases how the hardware works. If you change something here, you usually have to reboot and then the change will "stick" through all future reboots (until you change it again). As platforms grow increasingly sophisticated, they are offering a widening array of user-configurable options in Setup. So a good practice is to examine all the menu options available whenever you get a new platform. Here are some of the most common options on Intel platforms that could affect performance:

 

 

  • Power Management - Intel's power management technology is designed to deliver lower power at idle and better performance/watt (without significantly lowering overall performance) in most circumstances. There are two types: P-States, which manage power while the processor is active, and C-States, which work while the processor is idle. In some BIOSes, both of these features are combined into one option, which you should enable. In other cases they are separated. If they are separate, here's what to look for:

    • Intel EIST (or "Enhanced Intel Speedstep" or "Intel Speedstep" or "GV3" on older platforms) - This is the P-State power management that works while the processor is active. Leave it enabled unless directed to change it by an Intel representative.

    • Intel C-States - If you have this option or something similar, it refers to the power management used when the processor is idle. Enable all C-States unless directed otherwise by an Intel representative. (A quick way to check these power management settings from within Linux is sketched just after this list.)

  • Hardware Prefetch or Adjacent Sector Prefetch - These options try to lower overall latencies in your platform by bringing data into the caches from memory before it is needed (so the application does not have to wait for the data to be read). In many situations the prefetchers increase performance, but there are some cases where they may not. If you don't have time to test these options, then go with the default. Intel tests the prefetch options on a variety of server workloads with each new processor and makes a recommendation to our platform partners on how they should be set. If, however, you are tuning and you have the time to experiment, try measuring performance using each of the prefetch setting combinations.
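One practical follow-up: after changing these BIOS options, it helps to confirm from the operating system that they actually took effect. Below is a minimal sketch, assuming a Linux server that exposes the standard cpufreq and cpuidle sysfs entries; the exact paths and driver names vary by distribution and platform, so treat it as an illustration rather than an official tool.

    import glob
    import os

    CPU0 = "/sys/devices/system/cpu/cpu0"

    def read(path):
        """Return the stripped contents of a sysfs file, or None if it is missing."""
        try:
            with open(path) as f:
                return f.read().strip()
        except IOError:
            return None

    # P-states: with EIST enabled in the BIOS, a cpufreq driver normally binds
    # and advertises several operating frequencies; with EIST disabled, this
    # directory is often absent or shows only a single frequency.
    driver = read(os.path.join(CPU0, "cpufreq", "scaling_driver"))
    print("cpufreq driver :", driver or "none found (EIST may be disabled in the BIOS)")
    print("governor       :", read(os.path.join(CPU0, "cpufreq", "scaling_governor")))
    print("available freqs:", read(os.path.join(CPU0, "cpufreq", "scaling_available_frequencies")))

    # C-states: each idle state the OS can use shows up under cpuidle/state*.
    # If nothing prints here, the OS may not be seeing any C-states at all.
    for state_dir in sorted(glob.glob(os.path.join(CPU0, "cpuidle", "state*"))):
        print("idle state", read(os.path.join(state_dir, "name")),
              "- exit latency", read(os.path.join(state_dir, "latency")), "us")

Depending on your distribution, utilities such as powertop report similar information; the point is simply to verify the OS sees what you set in Setup before you start tuning.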

 

 

 

 

There are several other options that might affect performance on specific platforms. Some examples might be a snoop filter enable/disable switch, a setting to emphasize either bandwidth or latency for memory transactions, or a setting to enable or disable multi-threading. In these cases, if you don't have time to test, use your Intel or OEM representative's suggestion or go with the default setting.

 

 

Being familiar with how your system's BIOS is configured is another basic component of system tuning.

 

 

Keep watching The Server Room for information on the other 6 habits in the coming weeks.

Kirk was out at the Microsoft Server 2008 launch and talked about the "data center of the future". He discussed his thoughts on it with some particularly interesting tidbits on the predictive enterprise, the world of Tera, emerging technologies, and goings-on in Intel's IT shop.

 

 

Have you heard of the "predictive enterprise"? If you want to know more, let me know, as it is a very interesting topic.

Dynamic Power Management Has Significant Value - a Baidu Case Study

Jackson He, Intel Corporation

We have just completed a proof of concept (POC) project with Baidu.com, the biggest search portal company in China (60+% market share), using the Intel® Dynamic Power Node Manager Technology (Node Manager) to dynamically optimize server performance and power consumption and maximize the server density of a rack. We used Node Manager to identify optimal control points, which became the basis for setting power optimization policies at the node level. A management console, Intel® Datacenter Manager (Datacenter Manager), was used to manage servers at the rack level, coordinating power and performance optimization between servers to ensure maximum server density and performance yield for a given rack power envelope. We have shown significant benefit from the POC, and the customer liked the results:

 

  • At the single node level, up to 40W savings per system without performance impact when an optimal power management policy is applied

  • At the rack level, up to 20% additional capacity could be achieved within the same rack-level power envelope when an aggregated optimal power management policy is applied

  • Compared with today's datacenter operation at Baidu, using Intel Node Manager could improve rack density by 20-40% (a toy calculation of how a per-node saving turns into rack density follows this list)
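To see why a per-node number like 40W matters at the rack level, here is a back-of-the-envelope sketch. The rack budget and per-server draw below are assumptions chosen for illustration only; the actual Baidu configuration and measured results are in the published POC report.

    # Illustrative only: assumed rack envelope and per-server draw, with the
    # ~40W per-node savings from the POC applied as a power cap.
    rack_power_budget_w = 8000.0    # assumed rack-level power envelope
    uncapped_node_w = 300.0         # assumed per-server draw without a cap
    node_manager_savings_w = 40.0   # per-node savings observed in the POC

    capped_node_w = uncapped_node_w - node_manager_savings_w

    servers_uncapped = int(rack_power_budget_w // uncapped_node_w)
    servers_capped = int(rack_power_budget_w // capped_node_w)

    gain = (servers_capped - servers_uncapped) / float(servers_uncapped)
    print("servers per rack without capping:", servers_uncapped)      # 26
    print("servers per rack with capping   :", servers_capped)        # 30
    print("additional rack capacity        : %.0f%%" % (gain * 100))  # ~15%

Plug in your own envelope and server power numbers; the point is simply that holding each node to a lower, guaranteed ceiling lets you commit more nodes against the same rack budget.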

 

Some background on the technologies tested in this POC:

 

Intel® Dynamic Power Node Manager (Node Manager)

 

 

Node Manager is an out-of-band (OOB) power management policy engine embedded in the Intel server chipset. It works with the BIOS and OS power management (OSPM) to dynamically adjust platform power to achieve maximum performance/power at the node (server) level. Node Manager has the following features:

 

 

  • Dynamic Power Monitoring: Measures actual power consumption of a server platform within an acceptable error margin of +/- 10%. Node Manager gathers information from a PSMI-instrumented power supply, provides real-time power consumption data (point in time, or averaged over an interval), and reports it through the IPMI interface.

  • Platform Power Capping: Sets platform power to a targeted power budget while maintaining maximum performance for the given power level. Node Manager receives the power policy from an external management console through the IPMI interface and maintains power at the targeted level by dynamically adjusting CPU P-states. (A conceptual sketch of this kind of rack-level policy loop follows this list.)

  • Power Threshold Alerting: Node Manager monitors platform power against the targeted power budget. When the target power budget cannot be maintained, Node Manager sends out alerts to the management console.
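To tie the node-level and rack-level pieces together, here is a conceptual sketch of the kind of closed-loop policy a console like Datacenter Manager coordinates across a rack. The two helper functions are hypothetical stand-ins, not the real interfaces: in an actual deployment they would talk to each node's Node Manager over IPMI, and the budget and capping range are assumptions.

    import random

    RACK_BUDGET_W = 6500.0                 # assumed rack power envelope
    NODES = ["node%02d" % i for i in range(1, 31)]
    MIN_CAP_W, MAX_CAP_W = 150.0, 300.0    # assumed safe per-node capping range

    def read_node_power(node):
        """Hypothetical stand-in for reading a node's current draw via IPMI."""
        return random.uniform(180.0, 290.0)

    def set_node_power_cap(node, cap_w):
        """Hypothetical stand-in for sending a Node Manager power cap via IPMI."""
        print("%s -> cap at %.0f W" % (node, cap_w))

    def rebalance_once():
        draws = dict((n, read_node_power(n)) for n in NODES)
        total = sum(draws.values())
        # If the rack would exceed its budget, scale every node's cap down
        # proportionally, so busier nodes keep proportionally more headroom.
        scale = min(1.0, RACK_BUDGET_W / total) if total else 1.0
        for node, draw in sorted(draws.items()):
            cap = max(MIN_CAP_W, min(MAX_CAP_W, draw * scale))
            set_node_power_cap(node, cap)

    if __name__ == "__main__":
        rebalance_once()   # a real console repeats this on a policy interval
                           # and also reacts to threshold alerts from the nodes

The design point this illustrates is the division of labor described above: the node enforces a cap locally by adjusting P-states, while the console decides what each cap should be so the rack as a whole stays inside its envelope.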

 

More detailed findings from this POC are published in Intel Dynamic Power Node Manager POC with Baidu. We'd love to hear your comments and questions about this POC and Intel Dynamic Power Management Technology.

This is part three - the implication being that it is a sequel to part one and part two. It is. That said, each part has its own message and may or may not help your data center. The first part talked about the benefits of bringing in the latest hardware. Intel has been delivering performance increases at a pace beyond "Moore's Law". Getting rid of old, slow, inefficient servers can give you 2-12 times the capacity instantly. The second "episode" talked about getting everything you can from each server. Use virtualization and consolidation to make sure your servers are full and busy. The most efficient bus is a full bus (this is a metaphor; I am talking about the big yellow things carrying students, not the circuitry in the box).

 

My focus in part three is on density. My operating premise is that the data center manager wants to get everything out of the current data center and avoid, or at least defer, construction of a new data center. If you're in the data center construction business, this is not for you.

 

 

To get the most out of our data center we want to pack every server we can power into the space. You can do this by executing three actions: 1) use every watt, 2) build the right servers, and 3) optimize HVAC. In many cases twice the servers can be crammed into the existing rack space even without adding power. If you are able to redirect your HVAC power savings to your racks, your results could be even better.

 

 

So, we potentially got 5x capacity from new quad-core servers, 5x capacity from boosting utilization with consolidation, and 2x capacity with higher density. My math says 5x * 5x * 2x = 50x the capacity (in the same space and power!).

 

 

Here's a good primer animation on Virtualization.

 

In part one of this "series" (ok, mini-series) I spoke about the benefits of server refresh. It is pretty huge for most installed servers. In many cases an IT manager could see a 5x jump in compute capacity by replacing depreciated servers. If these are older single-core processor based servers, the number is probably even greater. Hopefully a 5x increase in capacity can push out your data center construction needs.

 

My next recommendation revolves around virtualization, or more specifically consolidation through virtualization. You can skip the words now and jump to the video below... but since you are still reading, here is an intro to the video. I have seen a lot of different data on "enterprise server utilization", but most of it pegs the meter at 10-15% utilization for volume landscape servers. (By the way, that is a low number, not something to be proud of.) Now, if you follow my advice and replace all these less-efficient older servers with cutting-edge, high-efficiency Intel quad-core machines on a one-for-one basis, you are going to see some pretty unpleasant utilization. Think single digits. In a nutshell, it is time to virtualize and consolidate. If you both virtualize and carefully manage and balance your workloads, it is reasonable to expect another 5x capacity boost through improved utilization. And 5x * 5x = 25x more capacity (in the same space and power!). (Try out the Intel consolidation calculator.) A rough sketch of that math follows.
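For the skeptics, here is the arithmetic behind that 25x written out as a tiny sketch. It is not the Intel consolidation calculator; the utilization and refresh numbers are the assumptions quoted above, so change them to match your own environment.

    # Illustrative consolidation math; inputs are assumptions, not measurements.
    old_utilization = 0.12      # ~10-15% utilization typical of legacy volume servers
    refresh_gain = 5.0          # assumed capacity of one new quad-core box vs. one old box
    target_utilization = 0.60   # assumed comfortable ceiling after consolidation

    # One-for-one refresh: the same workload now occupies a tiny slice of the new box.
    per_workload_share = old_utilization / refresh_gain
    print("utilization after 1:1 refresh : %.1f%%" % (per_workload_share * 100))  # single digits

    # Consolidation: stack workloads until the new box reaches the target utilization.
    workloads_per_new_server = target_utilization / per_workload_share
    print("legacy servers per new server : %.0f" % workloads_per_new_server)      # 25

    # Same result as the blog's multipliers: 5x (refresh) * 5x (utilization) = 25x.
    capacity_multiplier = refresh_gain * (target_utilization / old_utilization)
    print("capacity multiplier           : %.0fx" % capacity_multiplier)

If your shop starts from higher utilization or a more modest target ceiling, the ratio shrinks accordingly, which is exactly why workload measurement comes before consolidation planning.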

 

 

InfoWorld recently published some pretty scary data on the data center crunch. Excerpt: "Forty-two percent of the respondents said their datacenters would exceed power capacity within 12 to 24 months unless they carried out expansion. Another 23 percent said it would take 24 to 60 months to run out of power capacity. The managers reported similar figures for cooling: 39 percent said they would exceed cooling capacity in 12 to 24 months, and 21 percent said it would take 24 to 60 months."

 

I have done a series of blog entries on the topic: Almost Free Data Center Capacity and Big Numbers in the Data Center - The Data Tsunami

 

In these I have focused the solution (or at least the treatment) for data center pain on three strategies - Refresh, Virtualize, and Densification. I don't think I have used the word densification in a sentence before, but spell-check says it is real... For those who prefer a mixed media message, I agreed to record a series of short videos talking about each approach and its benefits, starting with the video on refresh.

 

 

 

The next two, virtualization and densification, will be posted soon.

 

Thanks for tuning in.
