What a difference a few months can make.  Late last year, the data center technology market was reeling from one of the worst downturns ever, and there was a raging debate over the relevance and meaning of cloud computing.  High-profile industry executives and IT customers were public with their disdain for the hype of cloud computing, while others evangelized their view of cloud computing with ferocious zealotry.  The range of opinions on the definition of cloud computing was almost comical.  Some viewed cloud as a data center architecture approach, focusing on application and server virtualization.  Others viewed it as a sourcing model, where applications would be purchased as a service over the internet rather than via packaged apps and self-hosted infrastructure.  One partner told me that any company that had manually migrated a live virtual machine already had a cloud, while the visionaries suggested that anything short of a fully automated, fully federated, dynamically scalable multi-tenant environment was just business as usual. Cloud computing was marked by a cacophony of disparate and contradictory voices – in short, a mess.

 

While there is still a measure of cynicism and disparity of opinion on definitions of cloud computing, it’s amazing to me how much convergence I’ve observed over the past few quarters.  As a community of technology providers, solution providers, and leading IT users, we may still be wrong, but at least we’re no longer in doubt.  First, there is widespread acknowledgement that cloud computing is transformative, and first and foremost about an architecture for delivering IT.  The distinctions between cloud computing architectures and traditional methods of IT delivery are usually described in three areas: 1) Automation: policies rather than people optimize the infrastructure for cost and performance; 2) Dynamic scaling: the infrastructure can adapt, scaling up or down, to the demands of the workload; and 3) Multi-tenancy: a common infrastructure hosts the applications and services of multiple different internal or external customers.  It’s now common to refer to private clouds as infrastructure built by a company for its own use, and public clouds as infrastructure made available as a service over the internet.  The terms SaaS, PaaS, and IaaS are used with reasonable consistency to describe cloud offerings. Of course there are plenty of combinations of services that blur the lines between public and private clouds, but they all strive to offer the significant cost savings and scale advantages of cloud computing.

 

It’s possible I’m a bit optimistic about the degree of convergence of opinion on cloud computing, but even if I’m right, we have a long way to go before it’s practical and easy to deploy.  At a vision and definition level, you can now virtually interchange the logos of various companies in our industry, but when it comes to implementation, the recommendations look far more diverse and contradictory.  The overwhelming concern most organizations and even individuals have about cloud computing is security – how can you trust your data and critical business processes to infrastructure that is hosting someone else’s service as well (even if the “someone else” is a different division or function at your own company)?  Some may view this fear as unfounded, but everyone acknowledges the concern.  Customers are also worried about the tradeoff between a fully integrated but proprietary approach and an interoperable multi-vendor approach that may take more integration work.  It’s an exciting, but daunting, time for both technology providers and IT customers.  CIO's recent article on cloud computing presents one interesting view of how to navigate these challenges.

 

We’re in a unique position at Intel.  Our technology is prevalent in most data centers, representing a consistent foundation for servers and increasingly storage devices.  At the same time, we’re one of the only significant players in the cloud that doesn’t focus on full business solutions.   We remain committed to our role as an essential ingredient provider in the data center, which means our only option is to work with the entire industry to define and deliver solutions to make the promise of cloud computing a reality a bit sooner.

 

We’d love your input on the biggest needs for your organization – where are you in the development of your strategy for taking advantage of the cloud, and what would help you move faster?

There is a time-tested truism from golf: “drive for show, and putt for dough.”  The basic premise is that the big, seemingly impressive parts of the game (the booming long drives) pale in importance next to the little things that complete it, namely the final job of rolling the ball into the hole efficiently to score well.  Yes, I’m stealing and modifying this because of some parallels I’ve seen in how we market technology.  Let me explain.

Since we live and breathe in a high technology environment and work with some of the best and brightest technical minds, we tend to get “wowed” by big technology breakthroughs. These may be the technical equivalents of 450-yard golf drives: impressive, but only part of the game, and only useful if the drive lands on the fairway. What we sometimes underestimate is that much of the universe does not have our deep immersion in these technologies, and may not have the context to see how big a breakthrough something is.  They have businesses to run and grow; they don’t necessarily need or want to be technologists just to appreciate our “big drives”.

As marketers (and for all of us as a technology solutions company), we need to be focused on the context of the game. It is not about the big technology breakthroughs; it is about the way customers can use these technologies. In this scenario, our equivalent of solid and efficient putting is compelling, well-enabled use models and solution stacks.  I’ve personally experienced the power of the use model to transform the technology discussion into the solution discussion as we rolled out Intel Trusted Execution Technology (Intel® TXT).

One would think that a technology to reduce malware would need little explanation or justification.  Everyone knows that malware is bad and should want to keep it off their platform.  But discussing Intel TXT on these terms fell surprisingly (to me) flat and unexciting. Beginning at IDF San Francisco in September 2009, we changed the dialog.  We began to talk about how TXT could be used to support the types of activities that customers already want or need to do. The change in response to our messages was eye-opening. The levels of interest and engagement became materially different virtually overnight.

Being able to show the technology to customers as part of useful business solutions, comprised of hardware and software sold and supported by a number of vendors, was inherently more interesting. No longer is the dialog about cool anti-malware technology (which Intel TXT still is!); it is now about solutions that help control and secure dynamic virtual environments or provide security and compliance enhancements to make cloud computing more suitable for sensitive workloads.

Here is an example of how we team up with our ecosystem for solution demonstrations that actually minimize the technology discussion of Intel TXT but provide a very well-received use model that has generated significant customer interest.  Having a compelling use model focused on business needs and demonstrating this capability with companies like VMware and RSA really changed the equation, just as earlier demonstrations of “trusted pools” with VMware and HyTrust did at IDF 2009.  As a result of this shift in focus and these activities, we’re now in the process of engaging customers with proof-of-concept implementations of these very use models.

While we all still clearly love technology and Intel’s capacity for technical innovation, I’ve seen a growing number of my peers develop a similar appreciation for rich, solutions-driven discussions.  This means focused efforts to put the right ecosystem support around our technology building blocks and to engage customers to vet and refine use models.  With these changes, we’re helping move Intel from the big hitter on the driving range to a more complete player capable of staying on top of the leaderboard of companies that provide real business solutions.  We will be using upcoming venues such as VMworld, IDF, and others to continue extending this use-model-centric dialog and to better align Intel innovations with real business needs.  Come by the Intel booth (#509) at VMworld, Aug 30-Sept 2, for a first-hand look at what we have in store as we complement our booming long-drive technology leadership with the finesse of “putting” together complete solutions.

IDF, the Intel Developer Forum, is almost here. In fact, the event is less than four weeks away in beautiful San Francisco, and we hope you're making plans for your visit. While at IDF, Intel's Data Center Group would like to meet all of the Server Room community members and data center customers. We've put together a great class lineup, a Data Center Zone, opportunities for you to meet our experts, and a chance to win some awesome prizes!

 

Capture Your Experience at IDF 2010

Get a Flip. Win an Ultimate Home system. Become an IDF legend.

 

Our main event this year is Capture Your Experience, an opportunity for you to film your IDF experience and compete for one of two Ultimate Home systems. What’s the Ultimate Home system? A Boxee Box, 55" HDTV, Atom™ based home server, and more!

 

Don't have a video camera? Have no fear! We've partnered with Cisco to provide a Flip MinoHD™ video camera for your use (and to keep after you submit your contest entry). Take the camera and head into IDF to Capture Your Experience! Who’s presenting in the keynotes? What’s groundbreaking on the showcase floor? Once your video shoot is complete, visit our welcome desk or showcase booth and turn your video into something legendary at our video editing workstations.

 

To sign up, visit us at our Capture Your Experience Welcome Desk on Monday, Sept. 13th or Tuesday, Sept. 14th from 11am-5pm. Submissions are due Tuesday, Sept. 14th by 7pm. And don't delay: our Flip camera inventory is limited, and it's first come, first served!

 

UPDATE: More details available for you.

 

Data Center Dude comes to IDF

As you may have noticed, we recently launched a video series to share our thoughts on and experiences with the data center world. Get ready to meet the Data Center Dude, and follow @IntelXeon on Twitter to learn how you can be one of six to receive an Intel® SSD. More details will be shared soon.


Meet the Data Center Experts

As part of our “Data Center Experience” at IDF 2010, the Data Center Group would like to invite you to an informal evening of networking on the latest technologies, Tuesday, Sept. 14th, 7:30pm-10:30pm. To receive an invitation, stop by our Data Center Zone or the Capture Your Experience Welcome Desk on Monday or Tuesday before 5pm. Just mention the Server Room community, and we'll provide you with an invitation.

To ensure prompt, accurate processing of votes, Instituto Electoral del Estado de Mexico (IEEM) needed to streamline and modernize its IT infrastructure. IEEM had been managing elections using two Sun Ultra* 2 servers running Solaris* 2.6 and Oracle* 8i. But the old system lacked processing power—plus, maintenance and operating system costs were prohibitively high.

 

IEEM moved to new HP Integrity* servers based on the Intel® Itanium® processor with the cost-effective Red Hat Linux Advanced* operating system running an Oracle* 10g database and custom election processing software.

 

With a smaller footprint, the new system helps reduce maintenance, power, and cooling costs. IEEM can now count election votes more quickly and reliably, ensuring there are no questions about the results. The new system also lets IEEM’s 170 dispersed offices communicate reliably during elections.

 

“The HP Integrity* platform with [Intel] Itanium processors, from a price/performance and reliability standpoint, is far superior to the Sun and IBM alternatives we considered,” explained Pablo Carmona, head chief of IT and statistics for IEEM.

 

To learn more, read our new IEEM success story. As always, you can find this one, and many more, in the Intel.com Reference Room and IT Center.

 


*Other names and brands may be claimed as the property of others.

If you follow Intel, you know that efficiency is kind of a big deal for us.  Mantra might be the right word.  Years ago the company took a "right hand turn" in product design and raised efficiency up alongside pure performance as a design determinant for our platforms. If you think about this for a second, you realize that this is a very big change in thinking...kind of like turning Ferrari engineers into Prius designers.  But wait...we need to continue to deliver the performance of a Ferrari while providing the efficiency of a Prius - no small task. (If you want to learn more about the right hand turn, you can watch my friend John Skinner explain the change in detail.)

 

If you are a Chip Chat listener, you also know that we like to talk about efficiency on the program.  It's a personal passion of mine, and I have spent a good part of my "day job" at Intel on efficiency-related programs.  When I'm out talking about what Intel has delivered in the realm of computer efficiency, I'm often asked a single question:

 

What about software?

 

It's a good question.  The efficiency conversation has historically focused on how hardware consumes watts, but that is slowly changing.  Data center operators increasingly use a combination of system instrumentation and management tools to control watts more precisely...and more recently Intel released a tool called Intel Energy Checker to help measure how applications use energy to complete work, and how software developers can use this data to optimize the efficiency of their code.  A simple tool perhaps, but if you think of the implications of software developers making their own right hand turn in design, we may have an ability to drive even more efficiency into tomorrow's computing solutions.  Wow!
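
To make the idea concrete, here is a minimal sketch of the concept Energy Checker embodies: pair an application-defined counter of useful work with a measurement of energy consumed, and report work per joule. This is an illustration of the idea only, not the Energy Checker SDK API; the callables are placeholders you would wire to real instrumentation.

```python
import time

def energy_per_work(read_energy_joules, do_work, batches):
    """Report application-defined useful work completed per joule of platform energy.

    read_energy_joules: callable returning cumulative energy (J) from whatever
        instrumentation you have (metered PDU, instrumented PSU, etc.) -- assumed.
    do_work: the application's real work function, returning units completed.
    """
    e0, t0 = read_energy_joules(), time.time()
    work_done = 0
    for batch in batches:
        work_done += do_work(batch)        # "useful work" is whatever the app says it is
    e1, t1 = read_energy_joules(), time.time()
    joules = max(e1 - e0, 1e-9)            # guard against a zero reading
    print(f"{work_done} units in {t1 - t0:.1f}s using {joules:.0f} J "
          f"-> {work_done / joules:.3f} units per joule")
    return work_done / joules
```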

 

I couldn't wait to talk with one of our Energy Checker gurus, Kevin Bross, about this nifty little tool.  His insights into what it takes to deliver a tool that can be universally useful uncovered some of the unique challenges shaping this area of innovation.  I hope you enjoy this episode, and I look forward to your comments.

 

If you like Chip Chat please let us know!  Join our Facebook site, follow us on Twitter, or subscribe to the show on iTunes.

Art & Fact is a small French company that produces 3-D video content for its clients, which include pharmaceutical giants like Merck and Boehringer Ingelheim. Used for seminars, events, and the internet, these videos were taking considerable time to produce. To speed things up, Art & Fact added the processing muscle of the Intel® Xeon® processor 5600 series.

 

Using three Intel white-box servers powered by the Intel Xeon processor 5600 series cut the time to create a three-minute video roughly in half—from 56 hours to about 27. Now Art & Fact plans to add seven or eight new server systems to provide cloud services so its freelance staff can more easily work on 3-D content rendering.

 

“This Intel Xeon processor provides us with a robust foundation from which to strengthen client relationships and grow our business,” said Guillaume Philippon, production manager for Art & Fact.

For the whole story, read our new Art & Fact success story. As always, you can find this one, and many others, in the Intel.com Reference Room and IT Center.

The short answer is... it’s me: Greg Wagnon and my stuff.

 

The longer and perhaps more accurate answer is that it's me offering you a collection of insights and perspectives that you will not get anywhere else in the world.  The Data Center Dude is who I am as I show you the people I work with and the things we are all working on.

 

Why am I 'qualified' for such a title?

Well, I am not sure “Dude” is much of a title, but the “Data Center” part is certainly pertinent to my experience.  I've been at Intel for over 11 years now, and that time has been spent on Intel’s server products.  I've worked in server validation teams, performance benchmarking, competitive analysis, board design, and the marketing of these products to various customers around the world.  I've administered several servers along the way, supported just about every family member with their client computing needs, and played the IT administrator role by helping a small business manage its systems as well.  I have not managed a large Data Center, but I know people who do, and we'll see if we can get them to share their experiences with us.

 

I am by no means an expert on all things.  I do know a LOT of the experts, though.  Data Centers use servers, and knowing the various components, and the people who define what servers and Data Centers are, is what I am here to offer you.  So, between what I know and who I know around here, I make a good guide for you to learn more about Intel servers and the Data Center.

 

[Photo caption: Sometimes measuring the bottleneck can be interpreted in so many ways.]

 

What is a Data Center?

Intel holds the view that a Data Center is not just a giant building with thousands of servers racked and blinking away in the dark, cool environment they need.  These buildings essentially run large business applications.  They grind through amazing amounts of work that many of us will not appreciate until years later when, for instance, a product being modeled by all those servers finally reaches the market (see Intel’s Green Field Data Center video for an example of an Intel Data Center).  This is, however, only one type of Data Center.

 

A Data Center can also be that single server box under the desk in the back office of a law firm or an eye doctor's office.  That one computer is the center of their computing world and may be running all the mail, billing, web access, and application sharing that the entire office uses from their desktops.  It is the center of their computing needs and thus a Data Center for them.  The Data Center is both of these things, and every other implementation in between that you can think of where a server system, or many, many, many, many systems, do work.

 

What the Data Center Dude will bring to you.

Essentially, it will be videos, blog posts, and perspectives that you may not get otherwise.  I am going to various events and will report 'Live From' them to you with pictures, video, and information about what is going on.  You will see the server product demonstrations from both Intel and others that are key to our industry.  All in an effort to offer you a unique perspective and insights into what Intel is doing to enable the vast, yet often quiet, world of Data Centers and all the components involved.

 

So, sit back, check back frequently, watch what I have to offer and be sure to let us know what you think.  We’re not looking for video gold or uber viral videos, so be kind on the quality of our video production.  We’re doing this on our own, without much paid expertise, so look at the content more than the delivery.  This is just from me to you, nothing fancy.

 

Our first video highlights Total Cost of Ownership on servers.  This video was done with a coworker of mine, who wrote a blog post discussing how server performance drives down IT costs, and, as you will see, he sits right across from me.  These videos are of me and my coworkers at work, so you will get to see more than just the content of the day; you get to see where we work and a little about how we spend our time.

 

I will be monitoring Data Center Dude video comments and responding as well, so leave comments wherever you can.

 

Thanks,

Greg

At a recent show I demonstrated the new Intel Xeon 5600 series processors with Intel Intelligent Power Node Manager technology and active power capping.  The Xeon 5600 series processors are already highly efficient, delivering a 15:1 consolidation ratio compared to servers from 4-5 years ago.

 

Using SPECpower as our workload, you can see how power can be managed directly with Intel Node Manager technology in this server-to-server comparison.  At scale, with thousands of servers, this demo suggests the tremendous amount of power that could be saved across an entire data center.
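
To put rough numbers on that scale scenario, here is a back-of-the-envelope calculation. The fleet size, per-server savings, and electricity rate are assumptions of mine, not figures from the demo; plug in your own.

```python
servers        = 5000       # servers in the data center (assumption)
watts_saved    = 30         # average watts trimmed per server by power capping (assumption)
hours_per_year = 24 * 365
cost_per_kwh   = 0.10       # USD per kWh (assumption)

kwh_saved = servers * watts_saved * hours_per_year / 1000
print(f"{kwh_saved:,.0f} kWh/year saved, roughly ${kwh_saved * cost_per_kwh:,.0f}/year")
# -> 1,314,000 kWh/year saved, roughly $131,400/year
```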

 

 

Let me know your questions and feedback on this short demonstration of the Intel Xeon 5600 Series and Node Manager technology.

Slumberland Furniture needed to streamline its business and make its IT more efficient, so it decided to virtualize its environment and standardize on the Intel® Xeon® processor 5500 series.  The result? Slumberland was able to consolidate its servers at a 20:1 ratio. Now it’s moving to Cisco UCS* platforms based on the Intel Xeon processor 5670 and expects to run 60 virtual machines on a single full-width physical blade.

 

“We’re putting 20 virtual machines on a half-width blade with the Intel Xeon processor 5500 series and expect the full-width UCS blades with the Intel Xeon processor 5600 series to triple that number,” explained Seth Mitchell, Infrastructure Team manager for Slumberland.

 

Read the whole story in our new Slumberland Furniture success story. As always, you can find this one, and many others, in the Intel.com Reference Room and IT Center.

 

 

 

*Other names and brands may be claimed as the property of others.

In my last blog I talked generally about the relationship of computation output to efficiency.  In a world where energy growth is bounded, continued exponential growth in compute output will REQUIRE revolutionary changes in efficiency (the effectiveness with which energy is used).

 

The industry has already made some huge progress in addressing the efficiency of the data center. As James Hamilton recently wrote, the great thing about this industry is that when we focus, we get results. For instance, The Green Grid (*) has been instrumental in driving the use of PUE as a metric of data center efficiency among its partners, and the results have been astounding. While no metric is perfect, if used in the right way (as an incentive to decrease the sizeable fraction of energy used to maintain the environmental conditions in a data center) it can drive substantial changes in data center energy consumption. Examples from Intel IT, Microsoft, Data Center 2020, and many others show how this has been put to use.
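
For anyone new to the metric, the arithmetic behind PUE is simple: total facility energy divided by the energy delivered to the IT equipment. The figures below are purely illustrative.

```python
it_equipment_kwh   = 1_000_000   # servers, storage, networking (illustrative)
total_facility_kwh = 1_800_000   # IT load plus cooling, power distribution, lighting (illustrative)

pue = total_facility_kwh / it_equipment_kwh
print(f"PUE = {pue:.2f}")        # 1.80 -> 0.8 kWh of overhead for every kWh delivered to IT gear
```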

 

Another area where significant progress has been made is the creation of the SPECpower_ssj2008 benchmark. It was really a first-of-its-kind benchmark, looking at the efficiency of computation in the way computer servers are actually used. Early on, both "TDP" (often misunderstood - it is actually a thermal design power spec) and "idle" (equally misunderstood - what is an "idle" server?) were used as metrics of efficiency. While these are somewhat useful (there is no such thing as a perfect metric), SPECpower, which measures energy consumption at finite, but not full, workloads, does a much better job mimicking the energy a real user would see consumed.
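
As a rough sketch of what "measuring at finite workloads" means, the snippet below aggregates operations and power across several load levels into a single ops-per-watt figure. The measurements are invented, and the real benchmark's run rules are more involved; see the SPEC documentation for the actual methodology.

```python
measurements = [
    # (target load, operations completed, average watts) -- invented numbers
    (1.0, 1_000_000, 300), (0.8, 800_000, 260), (0.6, 600_000, 220),
    (0.4,   400_000, 190), (0.2, 200_000, 160), (0.0,       0, 120),  # active idle
]

total_ops   = sum(ops for _, ops, _ in measurements)
total_watts = sum(watts for _, _, watts in measurements)
print(f"overall ops/watt ~= {total_ops / total_watts:,.0f}")   # 2,400 with these made-up numbers
```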

 

A significant contributor to improving platform energy efficiency is the efficiency of power delivery. A few years ago, power delivery efficiency approaching 50% was common practice. Today, thanks to the efforts of Climate Savers Computing, substantial progress has been achieved (for servers and for client systems). In 2010 alone, almost 6 billion kWh will be saved because of this effort.

 

So we have a good start!

 

But the question is: where can we go? I believe that ultimately optimizing the efficiency of data centers will involve optimization far beyond these two metrics. Below is a picture I drew showing a "stack of energy uses" in the data center. Each block represents the "next level of loss" in the chain of energy consumption, all the way down to the gates of the transistors doing the calculations (the work output of the data center).

 

[Figure: Stack of energy uses in the data center]

 

 

This is a crude picture, and I am sure people will have comments. I've highlighted the energy usages that the two metrics I mention above seek to optimize.  As I stare at this, I am wondering what other kinds of metrics should be put in place to improve energy usage, and why. Does the duo of SPECpower and PUE provide ample indicators to drive the right behaviors?

 

In my next blog I'll weigh in with some ideas.

 

 

*As a disclaimer, I am an active participant in The Green Grid and an alternate on that organization's Board of Directors.

We are hosting a class at the Intel Developer Forum.  Here's a quick preview of what is to come.

 

 

Please see the IDF website for more information and class schedule. 


It’s easy for medium-sized companies to be caught up in IT complexity as they add systems to keep up with demands for processing power and storage. Virtualization is the ideal way to simplify IT and make the data center a driver for enterprise efficiency. Hennecke, a leader in processing equipment for polyurethane, recently put this lesson into practice.

 

Known for its technological innovation, Hennecke wanted to create a modern and environmentally sustainable IT infrastructure.

 

“Thanks to our virtualized Dell PowerEdge servers with Energy Smart technology and Intel® Xeon® processor 5520, we have reduced our server count by one-third, cutting power consumption by approximately 30 percent,” explained Peter Ruttka, network and server systems administrator for Hennecke. “This is important not just because it lowers our costs, but because it strengthens our standing as an environmentally aware manufacturer.”

 

For the whole story, read our new Hennecke case study.  As always, you can find this one, and many more, in the Intel.com Reference Room and IT Center.

Vernon Turner, the Senior Vice President of IDC's Enterprise Infrastructure, Consumer and Telecom research, told us something really interesting the other day.  “I think TCO is top of mind when customers evaluate server infrastructure,” he said. “Performance is important because that drives how many servers are required to achieve the customer’s productivity goals.  And that drives the downstream costs associated with software, power & cooling, rack/floor space, networking and maintenance fees.”

 

This resonates, and directly relates to questions I’ve heard from IT folks lately:

 

  • How much is it going to cost to deploy a key enterprise application?
  • What are all the key cost drivers and how can I improve my overall TCO?

 

I wanted to share with you how I’ve been answering these questions lately.  To illustrate, I’ll compare a couple of bigger servers and show you how to optimize the key cost drivers and lower TCO when you need to deploy a mission-critical enterprise database solution.  Next week I’ll compare a “value 4P” server against a high-performance two-processor server, with the goal of achieving the lowest total cost of ownership for server consolidation using virtualization software.

 

Kennedy Brown wrote last week about the importance of server performance for database projects.  Enterprise database customers demand a server platform that delivers performance, scalability, large memory capacity, and advanced reliability.  So let’s compare the costs of deploying Microsoft SQL Server 2008 R2 on two different 4-socket industry-standard servers from HP.  Pricing was derived from HP’s online system configuration tool on July 30, 2010.

 

 

Licensing costs (plus yearly Software Assurance) for Microsoft SQL Server 2008 R2 Enterprise Edition on a 4-socket server are about $136K.  (Licensing is ~$27K per processor; Software Assurance is ~$7K per processor per year.)

 

Given the above pricing, the Intel Xeon based DL580 G5 server has a lower acquisition price, so it must deliver lower TCO to IT, right?  Let me show you why it doesn’t…and it’s all because performance matters!  It matters so much that I had these t-shirts made, which I plan to start giving out on customer visits.

Got Performance?

 

High performance means you can get more done with less, and doing so has a huge impact on downstream costs.  Our estimate of OLTP (online transaction processing) performance on a typical 4S Xeon 7500 based server is ~3X the performance of the previous-generation 4S Xeon 7400 based server.  That means 3x more database transactions.  To achieve the same number of transactions as 2 new servers, you’d need 6 of the older ones.  If we apply this 3:1 ratio to comparing the two HP servers, the downstream cost savings from deploying fewer of the new DL580 G7s are startling (see the table below).

 

 

Equivalent Database Transaction Processing

                                         6 HP ProLiant DL580 G5s        2 HP ProLiant DL580 G7s
                                         (4 x X7460, 128GB memory)      (4 x X7560, 256GB memory)
Networking                               $360                           $120
Rack/Floor Space                         $7,500                         $2,500
Power/Cooling                            $42,000                        $14,000
Server Maintenance                       $3,000                         $1,000
Microsoft SQL Server 2008 R2 EE*         $1,332,000                     $444,000
Windows Server 2008 R2 Enterprise**      $24,000                        $8,000
Server Hardware                          $169,900                       $85,700
Total Estimated Costs                    $1,578,760                     $555,320

*Licensing is $27,495 x 4 CPUs per server; Software Assurance is $6,874 x 4 CPUs per server x 4 years.
**Licensing is $3,999 per server.

 

 

The new DL580 G7 solution would lower TCO by over $1M, and it comes with new advanced reliability features that enable data integrity and high availability.
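
For readers who want to sanity-check the totals, here is the same roll-up as a few lines of Python. The category figures are simply the estimates from the table above, reproduced only to show the arithmetic.

```python
costs = {
    #  category                          6x DL580 G5   2x DL580 G7
    "Networking":                        (      360,       120),
    "Rack/Floor Space":                  (    7_500,     2_500),
    "Power/Cooling":                     (   42_000,    14_000),
    "Server Maintenance":                (    3_000,     1_000),
    "SQL Server 2008 R2 EE (4 yr)":      (1_332_000,   444_000),
    "Windows Server 2008 R2 Ent.":       (   24_000,     8_000),
    "Server Hardware":                   (  169_900,    85_700),
}

g5_total = sum(v[0] for v in costs.values())
g7_total = sum(v[1] for v in costs.values())
print(f"DL580 G5 solution: ${g5_total:,}")              # $1,578,760
print(f"DL580 G7 solution: ${g7_total:,}")              # $555,320
print(f"TCO advantage:     ${g5_total - g7_total:,}")   # $1,023,440
```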

 

What drives the record-breaking server performance of the Xeon 7500?  Well, as I mentioned back in my March blog on bringing the Nehalem architecture to big servers, it certainly helps to have Intel QuickPath Interconnect (QPI) point-to-point connections between processors and the I/O hub, 8 cores / 16 threads per socket, a whopping 24MB of shared cache, and up to 1 terabyte of memory with 16 DIMM slots per processor socket.

 

One thing that doesn’t drive the record performance: core count.  2 of these new DL580 G7s have a combined 64 cores (8-core processors x 4 processors per server x 2 systems), whereas the older 6 DL580 G5s have a combined 144 cores (6-core processors x 4 processors per system x 6 systems).   According to IDC’s Vernon Turner “customers tend to look at application performance at the system level and not the number of cores in the processor.”  The new Intel® Data Center Dude video demonstrates the concept of server performance quite well.

 

 

 

 

Hopefully this helps you understand how performance drives TCO for a large back-end server workload like an enterprise database.  However, the majority of server deployments occur on less expensive, mainstream infrastructure servers.  Stay tuned next week, when I’ll show you how to reduce your business costs by deploying high-performing 2S servers for your infrastructure consolidation projects using virtualization software.
