Imho, efficiency initiatives (e.g., server consolidation) can be "Green" initiatives since they help reduce data center power and cooling. We technologists tend to seek creative, sexy solutions. Focusing on simple conservation initiatives (server consolidation, proactively monitoring and running chillers and CRAH units optimally, etc.) can reduce data center power consumption significantly.
You're so right. Which is where I've been trying to go with this. But as you may also know, once the industry gets a new tag line, it's difficult for it to change directions quickly. We've been touting efficiency for a while, but all of a sudden "Green" came in. I'm not trying to fight it, just wanting to ensure we all understand.
Totally agree. Whatever works. Call it green, purple, or Canada (sorry, I've watched too many MASH reruns)... Just as long as people start to focus on reducing energy consumption.
Server consolidation/virtualization and leveraging the latest power-efficient chips/technology have allowed us to reduce our data center power consumption by 50%. We continue to look for other opportunities. We were surprised to see that disk storage and network devices now consume a high percentage (~40%) of total data center power. Some disk vendors have recently announced sub-systems with flash drives, but they're very expensive. We are working on optimizing storage (leveraging de-dup, larger disk drives, deleting or archiving data, etc.). I haven't heard of any power initiatives from the network switch/router vendors (e.g., Cisco). I was wondering if you've heard of any ideas to help reduce power consumption in the storage and network layer 2 & 3 areas.
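To make a consolidation figure like that concrete, here's a back-of-the-envelope sketch. Every number in it (server counts, wattages, consolidation ratio, PUE) is an illustrative assumption, not anyone's actual measurement:

```python
# Back-of-the-envelope consolidation savings. All inputs below are
# illustrative assumptions, not real measurement data.

OLD_SERVERS = 400            # physical boxes before consolidation
WATTS_PER_OLD_SERVER = 350   # average draw of an older 1U box
CONSOLIDATION_RATIO = 8      # VMs packed onto each new host
WATTS_PER_NEW_SERVER = 450   # newer, denser host draws more per box
PUE = 1.8                    # power usage effectiveness: facility power / IT power

old_it_watts = OLD_SERVERS * WATTS_PER_OLD_SERVER
new_it_watts = (OLD_SERVERS / CONSOLIDATION_RATIO) * WATTS_PER_NEW_SERVER
# Each IT watt saved also saves the cooling/distribution overhead (PUE).
saved_facility_watts = (old_it_watts - new_it_watts) * PUE

print(f"IT load: {old_it_watts / 1000:.1f} kW -> {new_it_watts / 1000:.1f} kW")
print(f"Facility savings at PUE {PUE}: {saved_facility_watts / 1000:.1f} kW")
```

The point of multiplying by PUE is that every watt you remove at the server also removes the chiller/CRAH overhead carrying it, so the facility-level savings are larger than the IT-level savings.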
So in that thread of consolidation, virtualization and compact application design...what type of systems would you propose for a company to effectively monitor utilization? I mean, knowing what you consume, how much you have left, and what you can safely load up can help you figure out an effective solution for shared hosting.
Or is it easier to simply take a hardware set, load it with multiple virtual instances, and keep stacking until you are approaching the ceiling. Then waterfall over to the next set of hardware?
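That "stack until you approach the ceiling, then waterfall" idea is basically first-fit bin packing. A minimal sketch, with made-up loads and a made-up 20% headroom policy:

```python
# First-fit placement of VM loads onto hosts with a fixed headroom ceiling.
# Loads are fractions of one host's capacity; all numbers are illustrative.

def place_vms(vm_loads, host_capacity, headroom=0.2):
    """Assign each VM load to the first host with room below the ceiling;
    open ("waterfall to") a new host when none fits."""
    ceiling = host_capacity * (1.0 - headroom)  # leave headroom for peaks
    hosts = []  # each entry is the summed load currently on that host
    for load in vm_loads:
        for i, used in enumerate(hosts):
            if used + load <= ceiling:
                hosts[i] = used + load
                break
        else:
            hosts.append(load)  # no existing host fits: start a fresh one
    return hosts

loads = [0.30, 0.25, 0.40, 0.10, 0.35, 0.20]
print(place_vms(loads, host_capacity=1.0))
```

The real-world wrinkle, of course, is that the loads aren't constants: they peak at different times, which is exactly why the monitoring question matters.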
From what I've seen, we need to do a better job at load balancing and utilization. Sometimes the problem is not knowing.
In what detail would you like for me to share? I could do the "you know you need to buy an Intel server, with this much RAM, and this many cores" version. Or I could do the virtualization system types and generically share what we're using. To maintain my credibility, I'd like to give you the generic version. I'm not here to sell, but to help us all out.
Let's pretend that I work inside your company, within an engineering server support team. How do I work with our current customers to effectively monitor and shuffle hosted applications around to ensure that, at their peaks, they don't see a reduction in availability? Is the best way through load balancing or through automated virtualization? Are there tools to help out? What can I do to ensure that I do the right thing with regards to the "green space" without severely impacting critical and/or non-critical applications?
Like I said, part of the problem I see every day is the lack of effective methods to monitor and properly load up current hosting environments, and thus prevent custom single-point solutions for hosting. The more applications you can safely (and effectively) load onto one host, the cheaper it should be (in both monetary cost and carbon footprint).
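As a strawman for the monitoring piece, even a simple high/low water-mark classifier over utilization samples tells you which hosts to drain and which can absorb more. The thresholds and the sample source here are hypothetical (in practice the samples would come from your monitoring agent, e.g. an SNMP or IPMI poll):

```python
# Minimal sketch of threshold-based utilization monitoring for shared hosts.
# Thresholds are illustrative policy choices, not recommendations.

HIGH_WATER = 0.80  # above this peak, stop adding apps / plan a migration
LOW_WATER = 0.30   # below this average, the host is a consolidation candidate

def classify_host(samples):
    """Classify a host from a window of utilization samples (0.0 to 1.0)."""
    peak = max(samples)
    avg = sum(samples) / len(samples)
    if peak > HIGH_WATER:
        return "overloaded"      # peaks risk availability; move something off
    if avg < LOW_WATER:
        return "underutilized"   # candidate to absorb more apps or be retired
    return "healthy"

print(classify_host([0.55, 0.62, 0.85]))  # peak breaches the high-water mark
print(classify_host([0.10, 0.15, 0.20]))  # average sits below the low-water mark
```

Classifying on peak for overload but on average for underutilization is deliberate: availability is lost at the peaks, while consolidation savings come from chronically idle averages.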
I've read several articles and books lately that paint a doomed picture of the future, one in which our computing needs become restricted by our power supply capacity. This is a problem that can be directly addressed through consolidation.
However, in large companies (like Intel), it's not as easy as simply stacking books on the shelf until it's full. As you know, each book varies in size, and a varying number of people crowd in front of the shelf to view it at any given point in time.
My apologies. I don't work in the "Virtualization" area; I was merely looking at ways to better consolidate our application footprint and thus reduce power consumption (and thus be more green).
How about recycling heat?
Has anyone ever looked at recycling the heat generated by data centers?
It's rather funny when you think about heating one part of the building and cooling another. Why not implement measures to exchange some portion of the air?
Maybe some greenhouses?
OK, I'll go quiet for now.
Don't go quiet!! Come back
Now you're talking our language; as for the servers, I didn't understand where you were going. Reclaiming the heat...? Here's our latest information on that for one of our data centers.
I know we are mostly IT focused; however, has any consideration ever been given to co-locating these types of reclamation systems with the chilled-water systems and cooling towers used at manufacturing sites (or large office buildings)? There may be some low-power-generation potential in recycling heat or storing it for later use. My prior life was US Navy Nuclear Power, and we used every BTU of heat generated: first to create high-temperature, high-pressure water, then steam, then lower-pressure/temperature steam, all for something. Whether power generation, heat generation, water purification, or cooking -- something.
Every time I drive by a factory and see a cooling tower, I see energy escaping and not being used. When we cool a room instead of recycling the heat, we are doing the same thing -- losing energy.
I am a big believer in doing the right thing environmentally, and this includes what we do in the data center. The Green label has been pretty wounded by overuse, with almost everything now being called green ( http://en.wikipedia.org/wiki/Greenwash ). To that extent I prefer the term efficient. If the data center is "as efficient as it can be," I am reducing power, cooling, costs, people, etc. The reality is that this efficient data center can be a nasty roaring furnace and still be very efficient; the key is to maximize your return on the space and equipment. Like a bus, which uses lots of energy: if you fill it up, it is way more efficient than the best "Prius*" on the road. Getting the best efficiency from your data center means squeezing everything you can from every square foot and every watt. I wrote a few entries ( http://communities.intel.com/openport/blogs/server/2008/01/29/almost-free-data-center-capacity ) on how to use Intel bits to squeeze the most from your data center and will be posting some videos on the topic later this month (Maximizing data center ROI).
I had a shorthaired pointer bird dog; that dog just had an ingrained desire to find birds. Most engineers have a similar desire to find better and more efficient ways of doing things, and that includes reducing energy use.
"Waste Not, Want Not" was a saying of my Dad's. It can be applied to so many different things, and the meaning is much more intricate than most people think. Conservation is something everyone can do. Instead of buying something new, why not repair something old? Not only do you spare the landfill, you also save the energy required to build a new product.
You can do lots of little things to save energy and resources; nickels add up into dollars. We have been doing lots of little things to reduce energy use in our DCs, and these are starting to add up now.
The latest is energy reuse, where heat is collected from the DC and used to heat other buildings or potable water systems, resulting in a net savings of energy.
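The potential is easy to size roughly, since essentially every IT watt ends up as heat. A quick sketch, with an assumed IT load and an assumed capture efficiency for the recovery loop (both numbers invented for illustration):

```python
# Rough sizing of data-center heat-reuse potential. Inputs are assumptions.

IT_LOAD_KW = 500          # IT equipment draw; nearly all of it becomes heat
CAPTURE_EFFICIENCY = 0.6  # fraction of that heat a recovery loop actually captures
BTU_PER_KWH = 3412        # standard kWh-to-BTU conversion

recovered_kw = IT_LOAD_KW * CAPTURE_EFFICIENCY
btu_per_hour = recovered_kw * BTU_PER_KWH  # heat rate available for reuse

print(f"Recoverable heat: {recovered_kw:.0f} kW (~{btu_per_hour:,.0f} BTU/hr)")
```

Even at a modest capture efficiency, a mid-size room yields building-heating quantities of low-grade heat, which is why pairing the DC with an adjacent consumer of that heat matters so much.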
So I agree, any way of reducing energy use to the minimum is Green.
The bigger question is: Do we do everything possible to enable engineers to use their natural ability to be efficient and frugal, or are there hindrances that detract from this? The answer is much more complicated than you may think...
Hi Brently, I don't see virtualization and green as orthogonal but rather related. I see virtualization as an approach to improve utilization of servers (good for green) and reduce server footprint (also good for green) ... in the end, green, efficient, and virtualization are all related and complementary.