The conventional view is that society has an insatiable appetite for energy. But that's not quite right; we have an insatiable appetite for the things that energy does for us. Nowhere is this more apparent than in the growth of the internet and data centers. Driven by growing demand for computation, multi-megawatt data centers, which barely existed a decade ago, today account for 2-5% of US energy consumption, according to the EPA (Koomey).

 

 

Since this is the first blog entry in this series, I'll start by introducing myself: I'm Winston Saunders, and my job in the Data Center Group at Intel involves keeping our data center energy efficiency initiatives on track. One reason I love my job is its immense scope; optimizing compute efficiency requires saving energy on time scales from nanoseconds to years. To put that in perspective, that spans almost seventeen orders of magnitude, about the same as the ratio of one second to the age of the Universe. Wow!
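That seventeen-orders-of-magnitude claim is easy to sanity-check with a couple of logarithms. Here is a quick back-of-the-envelope calculation in Python (the 13.8-billion-year age of the Universe is my assumption, not a figure from this post):

```python
import math

# Orders of magnitude between a nanosecond and a year.
ns = 1e-9                      # one nanosecond, in seconds
year = 365.25 * 24 * 3600      # one year, in seconds
print(round(math.log10(year / ns), 1))   # ~16.5 orders of magnitude

# Compare: one second vs. the age of the Universe (~13.8 billion years).
age_universe = 13.8e9 * year
print(round(math.log10(age_universe / 1.0), 1))  # ~17.6
```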

 

So why does data center efficiency matter? To keep pace with the growth of the internet and data demand, compute needs to keep growing - and growing exponentially. What cannot grow exponentially is energy consumption. Already at 2-5% of US production, server and data center energy consumption cannot grow much more without incurring the huge cost of additional generation capacity, not to mention risking regulatory scrutiny, carbon taxes, higher energy costs, and more. It’s in everyone’s interest to keep a lid on it. We must "Turn the Tide." The question is, how?


It's just a matter of some algebra to figure out the problems we need to solve. Normally we think of efficiency as the ratio of computations to energy. Since computations are the real deliverable, let's rewrite this as:


Computation = Efficiency * Energy


It's the same equation but an entirely different philosophy. If computation must continue to grow exponentially, and energy is constrained, our only lever is efficiency! Stated another way, to continue to deliver exponential growth of computation into the next decades, we MUST, from this perspective, drive evolutionary and revolutionary changes in our approach to Energy Efficiency.
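To make the arithmetic concrete, here is a minimal sketch of what a constrained energy budget implies. The 40% annual growth rate is an illustrative assumption, not a forecast from this post:

```python
# Computation = Efficiency * Energy.  If energy is held flat while
# computation must grow exponentially, efficiency has to carry the
# entire growth rate.  Numbers below are illustrative assumptions.
compute_growth = 1.40   # hypothetical 40% annual growth in computation demand
years = 10

energy = 1.0  # normalized, constant energy budget
required_gain = compute_growth ** years / energy
print(f"Efficiency must improve ~{required_gain:.0f}x over {years} years")
```

Whatever the actual growth rate turns out to be, the structure is the same: with energy capped, efficiency must compound at the same rate as demand.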

 

Of course, the transistor scaling inherent in Moore's Law will continue to play a huge role here, but more and more, what we do architecturally within and beyond the CPU, and all the way up to the data center itself, will come into play.

 

What do you think? Can efficiency really help us to "turn the tide?"

 

In coming blogs I’ll talk about some perspectives on why I think it may be possible.

Even the oldest bank in the world can benefit from today’s newest green technologies. That’s what Italy’s Banca Monte dei Paschi di Siena, founded in 1472, learned when it replaced its Siena data center’s old servers with new ones powered by the Intel® Xeon® processor X5570. The bank was able to cut energy consumption by 1.3 million kWh per year and save EUR 261,000 (USD 320,000) while reducing carbon emissions by 648 tons.
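As a quick sanity check on those figures (my arithmetic, not part of the case study), the stated savings imply an electricity rate and a grid carbon intensity that are both in a plausible range:

```python
# Back-of-the-envelope check on the case-study figures.
kwh_saved = 1_300_000   # kWh per year
eur_saved = 261_000     # EUR per year
tons_co2 = 648          # metric tons of CO2 per year

print(round(eur_saved / kwh_saved, 2))        # implied ~0.20 EUR per kWh
print(round(tons_co2 * 1000 / kwh_saved, 2))  # implied ~0.50 kg CO2 per kWh
```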

 

“We are committed to reducing energy consumption, not only in our data centers but also across the whole corporate network,” explained Gianluca Pancaccini, general manager of the Consorzio Operativo Gruppo Montepaschi, which owns the bank. “The Intel® Xeon® processor 5500 series has enabled us to save 1.3 million kWh in one year—the equivalent of more than EUR 261,000 (USD 320,000).”

 

Read the whole story in our new Banca Monte dei Paschi di Siena case study. As always, you can find this one, and many more, in the Intel.com Reference Room and IT Center.

Last week, Chris Graham, Jason Davidson, Bill Law, and I went to WPC to share our new platform, the Intel Hybrid Cloud.  If you’re interested in learning more, check out http://www.intel.com/go/hybridcloud.

 

Managing data is like herding cats.  There’s so much of it flying around your enterprise that it’s really hard to get your arms around it, and with the growth in “everything internet-connected” it’s only going to get worse.  But it’s your job to control it, secure it, and manage it, and databases can help.


One big challenge with every database project is this: how do I keep up with unpredictable growth and control costs at the same time?


Well, I’m here to tell you that the answer lies in one simple word: PERFORMANCE.  It’s kind of obvious that deploying higher-performing servers gives you the headroom to handle database growth: more horsepower = more transactions per minute = higher productivity and faster response times.  Higher-performing servers also let you build in that headroom with fewer machines to handle peak demands.


But you can’t have your cake and eat it too, can you?  In this case, the answer is yes, though it’s a little less obvious.  You usually have to pay for that performance in the form of higher-priced servers, but if you look at total cost of ownership (TCO), you will find that higher-performing servers actually reduce it.  Because you can get by with fewer servers, you get additional savings in rack space, power, cooling, and software licensing costs.  Whether you are replacing your aging servers or comparing alternatives, in many cases you will find that higher-performing database servers reduce the complexity of your data center and save you money.
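A toy TCO model makes the point.  All numbers below (server prices, power draw, electricity rate, license costs) are hypothetical placeholders chosen for illustration, not Intel figures:

```python
# Toy TCO comparison: many slower servers vs. fewer, faster ones.
def tco(servers, server_price, kw_per_server,
        kwh_cost=0.10, years=4, license_per_server_year=2000):
    """Hardware + software licensing + energy over the service life."""
    energy_kwh = servers * kw_per_server * 24 * 365 * years
    hw_and_sw = servers * (server_price + license_per_server_year * years)
    return hw_and_sw + energy_kwh * kwh_cost

old_fleet = tco(servers=20, server_price=4000, kw_per_server=0.45)
new_fleet = tco(servers=8, server_price=9000, kw_per_server=0.35)
print(f"old: ${old_fleet:,.0f}  new: ${new_fleet:,.0f}")
# Fewer, pricier, faster servers still come out ahead on TCO here.
```

Rack space and cooling would widen the gap further; the shape of the argument, not the specific numbers, is the point.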


Let’s take a look at some of the highest performing database servers in the industry.  In case you haven’t heard, Intel introduced two new server processors just four months ago and OEMs are setting new bars with industry standard database benchmarks.


First, The Big Kahuna – Intel® Xeon® processor 7500 series – is designed for expandable servers that support 2, 4, or even 8 or more processors with loads of memory (up to 1 terabyte in a 4-socket server!).  This platform delivered the biggest server performance leap in Intel’s history, with ~3X the performance of the previous generation.  That level of capability and performance was unheard of in any previous x86 server and is perfect for large mission-critical databases.  These workloads demand lots of efficient processing threads, maximum cache and memory capacity, and reliability features for data integrity and 24x7 operation.


Fujitsu recently published a 4-socket database benchmark result using TPC* Benchmark* E, which simulates online database transactions in a brokerage firm.  Their new PRIMERGY RX600 S5 rack server using the Xeon® 7500 set a new bar with 2,046 transactions per second (TpsE) @ 193.68 $/TpsE (4P/32C/64T), 36% better $/TpsE than the next best x86 4-socket result.  Imagine the TCO you can save with this baby!


The other processor launched – Intel® Xeon® processor 5600 – is targeted at energy-efficient 2-socket servers.  This processor is a refresh of the Nehalem-architecture servers introduced in 2009 based on the Xeon® 5500 (pdf).  With the Xeon® 5600 you can optimize for both server performance and energy efficiency in your mid-sized database deployments.  These servers can increase performance by up to 36%[1] at the same power level, or reduce power by up to 30%[2] at the same performance level, compared to Xeon® 5500 based servers.  The Xeon® 5600 also includes new security features designed to speed up database encryption, which is great for e-commerce transactions.


Hewlett Packard posted multiple record-setting 2-socket database performance results using the Xeon® 5600 with their new ProLiant DL380 G7 rack server.  The TPC Benchmark* C simulates order-entry transactions at a wholesale supplier managing stock from multiple warehouses.  HP’s result of 803,068 transactions per minute (TpmC) @ 0.68 $/TpmC (2P/12C/24T) delivered 14% better performance than the next best x86 2-socket result.  They also set a new 2-socket TPC Benchmark* E record with 1,110 TpsE @ 294 $/TpsE (2P/12C/24T).
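Since TPC price/performance is defined as the total cost of the priced configuration divided by throughput, you can recover the implied system cost from the published metrics.  A quick sketch (the rounding is mine):

```python
# TPC price/performance = total priced-configuration cost / throughput,
# so implied cost = throughput * price_performance.
def implied_cost(throughput, price_per_unit):
    return throughput * price_per_unit

print(round(implied_cost(2046, 193.68)))   # Fujitsu 4S TPC-E: ~396,269 USD
print(round(implied_cost(803068, 0.68)))   # HP 2S TPC-C: ~546,086 USD
print(round(implied_cost(1110, 294)))      # HP 2S TPC-E: ~326,340 USD
```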

 

Where do you go from here?


Systems based on these new processors are available today from the server vendor of your choice.  They’re the perfect solution for your next database challenge delivering more headroom, greater efficiency, and lower TCO.  For server refresh they offer a quick Return on Investment.  Check it out using the Refresh Savings Estimator Tool.


Let me know if you agree, and stay tuned to the server room for more on performance, efficiency and lower TCO.


[1] Comparing HP’s new TPC-E result of 1,110 TpsE @ $294/TpsE (2P/12C/24T) to IBM’s TPC-E result using Xeon X5570 of 817 TpsE @ $319.15/TpsE (2P/8C/16T).

[2] Source: Fujitsu performance measurements comparing Xeon L5650 vs. X5570 SKUs using SPECint_rate_base2006, http://docs.ts.fujitsu.com/dl.aspx?id=0140b19d-56e3-4b24-a01e-26b8a80cfe53, http://docs.ts.fujitsu.com/dl.aspx?id=4af74e10-24b1-4cf8-bb3b-9c4f5f177389

I live close to the Columbia Gorge, one of the most beautiful places in the world in my humble opinion.  From the banks of the Columbia river, green mountains rise up on both sides

 

…and data centers dot the landscape.  Yep, I said data centers, some of the largest in the world.  They’re here because of the attractive low cost of hydroelectric power, and inside these buildings data center managers are deploying the cutting-edge technologies that are defining the first wave of cloud computing.  I have often wondered what exactly goes on inside these massive structures, so when I recently had a chance to chat with Jason Waxman, our lead on high-density data center computing, I had to ask him how these massive data centers differ from the typical “enterprise” data center, and how key learnings from public cloud data centers will impact broader enterprise computing as we know it.  Jason painted a picture of future enterprise computing that, while built on the core tenets of computing today (Moore’s Law, industry-standard technology innovation, insatiable user demand for new capabilities), will be delivered a bit differently.  Listen to his vision in this week's Chip Chat interview: The Future of the Cloud.

 

If you like this interview, you can find all of our episodes here... and I'd love to hear what you'd like us to cover in future episodes.  Send me a note here on the Server Room or visit the Chip Chat Facebook page and become a fan.

If you had to summarize your data center needs in just one word, what would that word be? For two Korean companies, NHN Corporation and SK Communications, the word was performance—and the solution was Intel® Xeon® processor 5500 series-based servers.

 

NHN Corporation operates popular portal services, including game and search engine portals. It was looking for servers whose higher performance could translate into both faster response times for customers and lower operating costs for the company.

 

SK Communications, an Internet service provider, had a similar need—better server performance and response times and lower costs.

 

Both companies found the same solution.

 

“The Intel Xeon processor 5500 series platform gave us about 50 percent improvement in throughput and over 300 percent reduction in response time over the previous-generation platform,” explained Chan Song, chief of performance architecture for NHN Corporation.

 

For SK Communications, Intel’s optimization tools were a big asset, helping the engineers locate inefficient code and correct bad programming habits.

 

“The optimization was a great help,” said Jeong-Gon Kim, an SK Communications developer who worked on the project, “and we were able to achieve an over five times performance improvement.”

 

For the whole story, read our new NHN Corporation and SK Communications success stories. As always, you can find these, and many more, in the Intel.com Reference Room and IT Center.

“Extreme” is one of those words that can be either bad or good, depending on the context. But if you’re talking about IT performance, “extreme” isn’t just good, it’s great. That’s what Korean learning center Kstudy and online game hosting service GameServers both found out when they looked at the Intel® Xeon® processor 5500 series.

 

Kstudy needs to store and analyze, in real time, vast amounts of data ranging from students’ credit applications to their grades. The company improved its database server performance by 25 times using Intel Xeon processors 5500 series with Intel® Solid-State Drives.

 

“Choosing servers with the Intel® X25-E SATA Solid-State Drive and Intel Xeon processor X5560 not only improved our processing performance, it also enhanced our overall organizational efficiency and cut costs,” explained Choi Won Seok, CEO of Kstudy.

 

For GameServers, a key player in the highly competitive online gaming marketplace, the top priority is to attract and retain customers. That means offering an outstanding gaming experience at the lowest possible price. After refreshing its data centers with servers based on the Intel Xeon processor 5500 series, GameServers can now support four times as many customers as with its old servers. Even better, its data center footprint has shrunk by two-thirds.

 

“The exceptional raw compute performance and increased memory bandwidth of the Intel Xeon processor 5500 series make this series a perfect fit for online gaming,” explained Dave Aninowsky, CEO of GameServers. “Everything we purchase now has Intel Xeon processor 5500 series.”

 

For the whole story, read our new Kstudy and GameServers business success stories. As always, you can find both of these, and many more, in the Intel.com Reference Room and IT Center.

OSCON is just days away, and a team of us from Intel is preparing for the event. Intel has so much to offer in the open source space that we have to pick and choose the most relevant topics for the anticipated audience while painfully cutting out tons of attractive material.

 

From Intel’s broad repertoire, we picked subjects around MeeGo and servers.  The explosion in the number and variety of client and mobile technologies, in application availability, and in data variety and volume puts us under pressure to deliver the most powerful, efficient, and reliable server platforms.  As a member of the Data Center Group, I am, as usual, focused on servers and data center technologies.  Please find time to stop by and chat with me about servers in the open source space, and the opportunities and challenges described above, on top of browsing our server presence, such as:

 

  • A technical session “RAS for Intel® Xeon® processors” by Tony Luck
  • A technical session “Open source compliance meets supply chain management” by Andy Wilson
  • Demo at Intel exhibit “Novell SLES 11 SP1 and Intel Xeon processor RAS”
  • Demo at Intel exhibit “Red Hat RHEL 6 virtualization and RAS”
  • Chalk talk at Intel exhibit “Novell – Intel Xeon processor 7500 & SLES SP1, PlateSpin”
  • Chalk talk at Intel exhibit “Red Hat Enterprise Virtualization – virtual I/O and KVM”
  • Chalk talk at Intel exhibit “Intel Xeon processor RAS”
  • Chalk talk at Intel exhibit “Oracle MySQL”
  • Chalk talk at Intel exhibit “Oracle OpenSolaris”

 

and much more...

Great storytelling isn’t the only kind of power that’s essential to DreamWorks Animation, the studio behind blockbuster films like Shrek Forever After. For its state-of-the-art animation, DreamWorks needed to boost its rendering throughput while minimizing power consumption and improving backup and archiving service capabilities.

 

The solution it found was HP ProLiant* servers with Intel® Xeon® processor 5500 series, which improved throughput by more than 60 percent to help DreamWorks break new ground faster than ever.

 

“We had an unprecedented three movie releases in 2010 that were all stereoscopic 3D, and we couldn’t have done them without the Intel Xeon processor 5500 series-based HP ProLiant servers,” explained Ed Leonard, CTO of DreamWorks Animation.

 

You can read the whole story in the new HP DreamWorks Animation case study. As always, you can find this one, and many more, in the Intel.com Reference Room and IT Center.

 

 

 


*Other names and brands may be claimed as the property of others.

Products going EOL, being forced to pay more for services or to upgrade (for a fee) to the current platform revision, suffering performance, inefficient servers, roadmap uncertainty, costs that keep going up: this is very common feedback from customers using proprietary RISC/UNIX platforms.  Everybody understands this, and vendors from OEMs to SIs and VARs are offering services to provide the most efficient migration paths.  For this particular problem, Intel is the platform of choice.  It’s a win-win: customers obtain higher performance, efficiency, and a better cost structure in their server investments, and service providers make money on the services they deliver.  This will continue as long as the problem exists, and the team of Intel and Red Hat is committed to keep providing a solution in this space.

 

 

Especially given greater roadmap uncertainty, customers’ urgency was highest on Oracle/Sun SPARC/Solaris platforms, so we worked with customers to start tackling the problem immediately and move them into a mode of increasing performance and saving costs, rather than staying trapped in one of stagnating performance and rising costs.

 

Red Hat’s Solaris to RHEL Strategic Migration Planning Guide worked well to provide a framework to get the migration project going.

 

 

The same Intel and RHEL principle also applies to AIX environments.  Increasingly, customers are interested in moving from AIX to Intel/RHEL, and have started to do so.  As it did for Solaris, Red Hat is now making the AIX to RHEL Strategic Migration Planning Guide available for download.

 

Intel is teaming up with Red Hat to deliver migration guidance via workshops.  We are scheduled to visit the following cities in the upcoming weeks.  Please contact your Intel and/or Red Hat sales teams for more details and to register for the workshops.

 

 

Dates                          Location

July 20, 2010            Dallas, TX

July 21, 2010            Las Vegas, NV

August 12, 2010       Washington, DC

The way for an electronic stock exchange to stand out in a crowded marketplace is with its trading platform, something you migrate to a new processor architecture only when the performance advantages are just too compelling to resist.

 

That's what happened for National Stock Exchange, Inc. (NSX). When it premiered its NSX BLADE* trading platform in 2006, the platform ran on non-Intel-based infrastructure. Now NSX is shifting to the Intel® Xeon® processor 5570 because the performance was impossible to resist. By its own estimate, NSX is gaining end-to-end speed-ups of 20 percent to 25 percent and higher. It can also handle more customers, higher trade volumes, and steeper peaks of market activity. Plus, the new infrastructure reduces floor space requirements and mitigates power constraints, enabling NSX to cut overhead costs and generate savings for customers while continuing to expand its business.

 

“We are experiencing significant benefits in performance, uptime, stability, and headroom that will enable NSX to continue to be the most innovative exchange from a service availability perspective and the exchange with the lowest latency,” explains Saro Jahani, chief information officer for NSX.

 

For the whole story, read our new NSX case study. As always, you can find this one, and many more, in the Intel.com Reference Room and IT Center.

Processing muscle equals marketplace muscle. That’s what iMDsoft found out when it switched its MetaVision Suite*—which gives hospitals a single platform to automate hospital workflows in acute-care environments—to servers based on Intel® Xeon® processor 7500 series.

 

iMDsoft was offering its customers servers powered by Intel Xeon processors 5300 and 5400 series running Microsoft SQL Server*. Since making the switch to the Xeon processor 7500 series, iMDsoft can easily scale the MetaVision Suite to 16,000 users, far exceeding requirements. The new platform also delivers eight times the memory bandwidth of previous Intel Xeon processor generations and provides compelling virtualization capability, enabling iMDsoft to bundle this option into its customer offering.

 

“The Intel Xeon processor 7500 series has provided us with leading performance and scalability, which are enabling us to build our business out into large-scale hospital organizations,” explains Eran David, chief technical officer for iMDsoft. “In turn, doctors are also able to gain an entire patient history, even if that patient was admitted to another hospital in the group. This helps doctors make quick and informed decisions when a patient enters the hospital.”

 

For the whole story, read our new iMDsoft case study. As always, you can find this one, and many more, in the Intel.com Reference Room and IT Center.
