The use models and benefits of Server Virtualization are as diverse as the poses in the art of Yoga.  Virtualization is boosting server utilization while creating more flexible use models that feature live application migration from server to server.

Over the past year, I found myself constantly talking with IT professionals about how they are using virtualization and how it is transforming their businesses.  And despite a more challenging economic environment for business and IT on the horizon, several industry analysts continue to predict investment growth in server virtualization during 2009.  It is easy to understand why.

The many customers I have talked with, and many of the case studies I’ve read, articulate that virtualization is lowering TCO through CapEx avoidance (data center construction, staff hiring) and OpEx reductions (power/cooling, management, and maintenance savings).  In addition, virtualization is improving time to service, simplifying management of server infrastructure, boosting server utilization, and accelerating the ROI of new hardware investments.  Beyond these “foundational” usages, I also found a set of new flexible use models that IT is considering, based on moving applications dynamically from server to server … in real time.

For Lesson 1 of a three-part series on virtualization, I interviewed Sudip Chahal from Intel IT, who explained the variety of terms, buzzwords, and use models of virtualization that are delivering the benefits above.

I invite you to view this short video (~5 min) and comment about how you are using or intending to use virtualization in your business.

Namasté

Chris

Here’s the 7th follow-up post in my 10 Habits of Great Server Performance Tuners series. This one focuses on the seventh habit: Document and Archive.

I hope the reason you need to document and retain data for any performance project is understood, so I won’t go into it. Nor will I recommend particular documentation solutions – just find a database or filing solution you like that gets the job done. What I will do is list what needs to be documented.

Normally, performance tuning consists of iterating through experiments. So, for each experiment, it is important to document the following (a sketch of gathering some of this data automatically appears after the list):

  • What changes were made – hopefully you weren’t trying too many things at once!
  • The purpose – why you tried this particular thing (including who requested it, if appropriate)
  • General information – date & location of testing, person conducting the test
  • Hardware configuration:
    • Platform hardware and version, BIOS version, relevant BIOS option settings
    • CPU model used, number of physical processors, number of cores per processor, frequency, cache size information, whether Hyper-Threading was used (CPU-Z can help document all of this)
    • Memory configuration – number of DIMMs and capacity per DIMM, model number of DIMMs used
    • I/O interfaces – model number and slot number of all add-in cards, driver version for all devices (on Windows*, msinfo32 can help with this; on Linux*, lspci)
    • Any other relevant hardware information, such as NIC settings, external storage configuration, external clients used, etc., if it affects your workload
  • Software configuration:
    • Operating system used, version, and service pack/update information (use msinfo32 on Windows* systems, uname -a on Linux* systems)
    • Version information for all applications relevant to your workload
    • Compiler version and flags used to build your application (if you are doing software optimization)
    • Any other relevant software information, such as third-party libraries, OS power management settings, pagefile size, etc., if it affects your workload
  • Workload configuration:
    • Anything relevant to how your experiment/application was run, for example, your application’s startup flags, your virtualization configuration, benchmark information, etc
  • Results and data – naturally, you would store all of the above information along with the results and data that accompany your experiment
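
To make the checklist concrete, here is a minimal sketch of a configuration-capture script for Linux* systems, building on the tools mentioned above (uname, lspci). The field names and output file are my own choices for illustration, not a standard format:

```python
"""Capture a hardware/software snapshot for one tuning experiment (Linux*).

A minimal sketch - field names and the output file are illustrative,
not a standard format.
"""
import json
import platform
import subprocess
import time

def run(cmd):
    """Run a shell command and return its output, or a note if it fails."""
    try:
        return subprocess.check_output(cmd, shell=True, text=True).strip()
    except (subprocess.CalledProcessError, OSError) as err:
        return "unavailable: %s" % err

snapshot = {
    "date": time.strftime("%Y-%m-%d %H:%M:%S"),
    "os": platform.platform(),                        # OS and kernel version
    "cpu_model": run("grep -m1 'model name' /proc/cpuinfo"),
    "cpu_count": run("grep -c ^processor /proc/cpuinfo"),
    "memory": run("grep MemTotal /proc/meminfo"),
    "pci_devices": run("lspci").splitlines(),         # add-in cards, NICs, etc.
}

with open("config_snapshot.json", "w") as f:
    json.dump(snapshot, f, indent=2)
```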

This blog entry is also the appropriate place for me to mention the role of automation in your tuning efforts. If you are going to be doing a significant number of experiments, invest the energy needed to set up an automation infrastructure – a way to run your tests and collect the appropriate data without human attention. I included links to automated ways to gather the above data where appropriate, and a simple harness might look like the sketch below.
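
This is a sketch only: the experiment command, the directory layout, and the capture_snapshot.py name (the snapshot script above, saved under that name) are all assumptions for illustration, not a prescribed setup.

```python
import os
import shutil
import subprocess
import time

# Hypothetical experiment command - replace with your own workload.
EXPERIMENT_CMD = ["./run_workload.sh", "--threads", "8"]

def run_experiment(change, purpose):
    """Run one experiment and archive its notes, configuration, and results."""
    stamp = time.strftime("%Y%m%d_%H%M%S")
    outdir = os.path.join("experiments", stamp)
    os.makedirs(outdir)

    # Record what changed and why - the first two checklist items.
    with open(os.path.join(outdir, "notes.txt"), "w") as f:
        f.write("change: %s\npurpose: %s\n" % (change, purpose))

    # Capture the hardware/software snapshot (the script above).
    subprocess.call(["python", "capture_snapshot.py"])
    shutil.move("config_snapshot.json", outdir)

    # Run the workload and store its output alongside everything else.
    with open(os.path.join(outdir, "results.txt"), "w") as f:
        subprocess.call(EXPERIMENT_CMD, stdout=f, stderr=subprocess.STDOUT)

run_experiment("increased NIC ring buffer to 4096", "reduce packet drops at high load")
```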

Keep watching The Server Room for information on the other 3 habits in the coming months.

When it comes to implementing virtualization, the saying ‘if you fail to plan, you plan to fail’ rings true. Upfront, detailed planning for a virtualization implementation in the datacenter can deliver the right results, both financially and operationally.

In the document below, I explain a simple six-phase approach to implementing virtualization, with a lot of emphasis on how to plan for the implementation. These phases are generalized best-known methods, not an exact blueprint for implementing virtualization. While it is true that a one-size-fits-all approach will not work here, knowing the basics will help guide you through the process, or help you ask the right questions if you are using a solutions provider.

http://communities.intel.com/docs/DOC-2355

 

-RK Hiremane

Here's a cool new site I came across where you can contribute to defining what Server Virtualization is all about: Virtualization Conversation

 

You can also listen in on some webcasts coming this month with Iddo Kadim, Director of Virtualization Technologies at Intel, and Bob Zuber of IBM:

Register Here

 

Check it out – there’s also a cool new widget that lets you draw your ideas on a whiteboard: Share Your Definition

These new widgets are really getting cool.

Javed Lodhi

SSD - Risky Business?

Posted by Javed Lodhi Dec 2, 2008

Now before we kick off on this topic, let us first decide what risk factor we are talking about – performance, stability, durability, or price?

After coming across Mario Apicella’s (Storage Advisor) article, I am not sure I agree with all of it, since my experience with SSDs is a bit different – though this varies from situation to situation. I do agree that SSDs cost a lot more than typical mechanical drives at the moment, but when you go for an SSD, price is not your primary concern, for one very simple reason: you are aiming for a better and far more stable solution. You see, Apicella benchmarked SATA SSD drives but did not provide any feedback on SAS-based SSDs. As you may know, SAS drives perform better than IDE, SATA, or SATA II drives: SATA is half duplex, while SAS is based on Serial Attached SCSI, and SCSI drives read and write in full duplex, outclassing SATA/II in performance. So you cannot call SSDs a risky trade simply because the price difference between SSDs and conventional mechanical drives is significant; not everyone would order an SSD in the first place. People who ask for an SSD in their solution are aiming for a far more stable, low-risk solution, and price matters to them only to the point where it does not compromise stability – they will pay more for a stable solution.

Now, I wouldn’t say that by plugging in an SSD you would get a phenomenal increase in I/O, but it does perform quite decently compared to the other members of the storage family. Then again, SSDs are not just of one type, and performance varies depending on the type.
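
To give a feel for how such comparisons are made, here is a minimal random-read timing sketch in Python. The test file path is a placeholder, and for the numbers to reflect the drive rather than the operating system’s cache, the file must be much larger than RAM – dedicated benchmarking tools handle this properly:

```python
import os
import random
import time

TEST_FILE = "/mnt/testdrive/testfile.bin"  # placeholder: large file on the drive under test
BLOCK_SIZE = 4096                          # 4 KB, a common random-I/O block size
NUM_READS = 1000

size = os.path.getsize(TEST_FILE)
f = open(TEST_FILE, "rb")

start = time.time()
for _ in range(NUM_READS):
    # Seek to a random block-aligned offset and read one block.
    block = random.randrange(size // BLOCK_SIZE)
    f.seek(block * BLOCK_SIZE)
    f.read(BLOCK_SIZE)
elapsed = time.time() - start
f.close()

print("%d random %d-byte reads in %.2f s (%.3f ms average)"
      % (NUM_READS, BLOCK_SIZE, elapsed, elapsed * 1000.0 / NUM_READS))
```

Run against a mechanical drive and an SSD in turn, a sketch like this makes the gap in random-read latency obvious, which is exactly where SSDs shine.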

One interesting point that I see a lot of SSD reviewers miss is virtualization. Plugging a 2.5” SATA or SAS-based SSD into an Intel blade server will give you much better performance than mechanical SATA or SAS, but when we talk about virtualized storage, especially with Intel’s Modular Servers, how well will these drives perform? It’s not a discussion we can wrap up in a couple of paragraphs before jumping to conclusions. In a follow-up to this blog, I will post my research on SSD technology and performance benchmarks with the price factor in mind. For now, I would say that SSDs are expensive, but they offer the better performance and stability over mechanical drives that a lot of customers are actually aiming for. Adding to this, the price of SSDs is dropping faster than that of mechanical drives, and as such the days of the mechanical hard drive are numbered. We are constantly hearing about the fruition of solid-state memory technologies (such as MRAM, which has been theorized since the 1970s) that provide more density and reliability, lower power, and faster write times. Mechanical hard drives are now once again bottlenecked by technology, since perpendicular heads have become mainstream, as has the corresponding areal density increase that accompanied them.

These are a few simple facts, and I think it would not be fair to say that a solid-state upgrade is a risky business at all. Perhaps the subject should have been, “Solid State Upgrades – Cost or Peace of Mind?”

--

Javed Lodhi
