Intel IT began implementing our cloud roadmap back in 2006, starting with our Grid computing environment in Silicon Design. Since then, we have expanded that early roll-out to our Office and Enterprise environments. As we plan forward, we anticipate new bottlenecks in overall system performance, along with a shift away from compute toward storage and networking I/O. This is primarily driven by the increased capabilities of the latest generation of Xeon processors, which enable a greater number of denser VMs.

 

cloud roadmap.JPG

 

After speaking with peers who manage similar IT enterprises, my observation is that they tend to focus on capacity management in terms of CPU, memory, and storage capacity without adequate attention to I/O capacity. Properly monitoring and controlling network and storage I/O bottlenecks is paramount to maintaining overall system performance.

 

In a recent podcast on technology for tomorrow's cloud, I address the technologies Intel IT has on its cloud roadmap and outline how to build a cloud for an organization. What challenges and solutions do you have for maintaining overall system performance?

Steve Jobs in 1985 with Mac Computers

"The most compelling reason for most people to buy a computer for the home will be to link it to a nationwide communications network. We're just in the beginning stages of what will be a truly remarkable breakthrough for most people––as remarkable as the telephone."

 

– Steve Jobs, 1985, Legendary Technology Rock Star

 

 

As companies grow older, mature technologists (i.e., Technology Rock Stars) can become more skeptical of the value created in the lives of individuals throughout the world. My personal skepticism has led me to question my own work, question my own management chain, and question the experts (including myself) regarding the benefits of technology to society at large. As an aspiring Technology Rock Star (TRS), part of making progress in product development means questioning everything that has been done in the past.

 

Skepticism begets optimism, and the industry moves forward once the hype cycle "crosses the chasm". Cloud Computing has become core to growth in the financial markets for technology companies. Bad ideas with "Cloud" attached get funded, while more technically ambitious projects are left on the drawing board. Social Networking and Cloud have become the "arbiters of innovation" without actually creating technology; rather, they are "arbiters of content" and brilliant marketers.

 

In 1996, legendary Technology Rock Star Bill Gates wrote a letter stating that "content is king" and the future of computing is in content. One AOL-Time Warner merger and 15 years later, after several billionaires made and lost substantial amounts...he may still be right. However, this time around, it is not structured content but unstructured data, across a plethora of devices from the handheld to the data center, that drives the explosion of content.

 

For Intel, the explosion of unstructured and structured data, which is on track to increase 44 times over the next decade, presents a challenge that has forced us to create a new vision for our company. This vision, led by Kirk Skaugen, provides us with a depth of capabilities and technologies that are new to a traditionally silicon-focused company. At Intel, we have a dilemma in this age of Cloud Computing and Social Networking: how do we design technologies, solutions, and industry partnerships that solve the new world's most difficult technology concerns?

 

If “content is king” but context is value, where does Intel fit?

 

You now understand some of the dilemmas and concerns we have when we debate the funding of our next technology platforms. Does anyone care? The answer is yes…but context is everything.

 

We recently announced the Intel Xeon E7 family of processors. This new platform delivers breathtaking 2-, 4-, and 8+-socket performance for enterprise-class applications, with 10 cores and 20 threads per socket. For many of us, it is reminiscent of a "cluster on a chip". I had the opportunity to work regularly with the product management and engineering team for this platform to discuss Cloud Computing, virtualization, and, to a lesser degree, security. This is the start of Intel's Socially Reliable future: a future committed to virtualization, reliability, security, and performance delivered in a platform.

 

At VMworld 2011, I have the unique pleasure of co-presenting a session on Intel's vision for Cloud Computing with a dear friend and Technology Rock Star, Billy Cox, Director of Intel's Cloud Builders program. We will discuss the changing landscape of Open Data Centers, the trends for the industry, and how Intel invests in our cloud computing future.

 

Many argue that Cloud Computing is a business model, not a technology. I respectfully disagree. Cloud Computing provides a unique opportunity to deliver reliable, breakthrough technologies to an ever-changing content ecosystem in a socially responsible manner. If you plan to be at VMworld this year, come by the Intel Booth/Supersession and tell us your story.

 

If you cannot make VMworld this year in Las Vegas, please watch on Wikibon, where we discuss Intel's initiative in the Open Data Center, or send a comment.

 

August 31, 2011 - VMworld: 8:30am

IT Rock Stars Unite: Find Out What It Takes to Lead Your Organization Into Cloud Computing

Venetian • Palazzo Ballroom

For many people in IT, the road to cloud computing has one huge roadblock: security concerns. To get to the cloud, you’ve got to first get beyond all your security questions. And to do that, you’ve got to “get to know” your cloud.

 

One place where this can begin is at the hardware level. That’s where you can establish the integrity of the server and use that knowledge to create trusted compute pools. Like the concrete footers on a house, trusted compute pools provide a foundation for a secure environment.

 

So how do you go about creating this foundation? A good first step is to watch a new Intel video on YouTube. This video, “Securing a Cloud Infrastructure with Intel, HyTrust and VMware,” walks you through the process of configuration, policy creation, and implementation of a trusted compute environment.

 

The demonstration is based on an Intel® Cloud Builders reference architecture that was put into action in a lab setting. In the demo, we established a five-server configuration and then activated Intel® Trusted Execution Technology (Intel® TXT) on four VMware vSphere* hosts.

 

We configured one of the hosts as a management server, and then created three virtual machines on the server, with the following roles:

  • Infrastructure server
  • VMware vCenter* Server
  • HyTrust* Appliance

 

After we configured these VMs, we used VMware vCenter Server to create a VMware vSphere cluster with the three remaining hosts. We then used the HyTrust Appliance to set up a trusted compute pool consisting of two of the three servers in the cluster. We left the third server untrusted so we could demonstrate how HyTrust Appliance can garner platform trust status, use that for defining security policies, and then enforce those policies.

 

Along with the step-by-step demonstration, this video offers some great tips for building a trusted compute platform. So if you’re thinking about a cloud, you’ll definitely want to tune in to this video. It gives you a fast and easy way to see how you can get to know more about your cloud infrastructure, and then use that information to better protect your critical data and workloads.

 

For a deeper dive:

Watch Now

 

NCH.jpg
NCH Corporation, a major international marketer of maintenance products, expects to save USD 5.5 million over five years in hard costs like maintenance. How? By using servers based on the Intel® Xeon® processor 7500 series.


“One of the problems we had at NCH was really convincing our executives that yes, not only is it less expensive, but it’s also going to perform better,” explained David Kennedy, director of infrastructure for NCH. “The model of our company is definitely [to] drive costs down. We replaced our RISC-based systems with Intel Xeon processor 7500 series. It’s so much less expensive, the maintenance on our old equipment actually paid for a new rack of Intel Xeon processor-based servers. And when the project was completed and the system went live for the folks in Europe, we actually got the rarest call in the history of the IT Department, which was, ‘What did you do? Why is it so fast?’”


For the whole story, watch our new NCH Corporation video or read the NCH business success story. As always, you can find these, and many others, in the Intel.com Reference Room and IT Center.

Back in November, we told you about our IT Tune Up Contest designed to give small businesses an IT makeover. We sorted through contest entries from around the country, and chose 3 deserving winners. Each small business received a new Intel® Xeon® processor-based server, $5,000 worth of additional hardware and software, and integration help from a local technology expert.

 

Let’s meet our first winner…Button Dodge of Kokomo, IN. Josh Shannonhouse, the IT director of Button Dodge, sent us this appeal on YouTube describing his need for an IT makeover:

 

 

As Josh described to us in the attached case study, “pretty much all our networking equipment had to be gutted and replaced. When I started in this position, they had no server or any kind of network management to speak of. Basically, everything was just in one big workgroup.  There were some network printers and static IPs, but that's about it. No real file sharing, no way to manage the PCs.  It was just a mess.”

 

With such a decentralized configuration, network breakdowns that happened almost weekly placed a heavy burden on operations. As Josh told us, “We had some processes in place if the PCs go down to do things the old fashioned way—by paper and by hand. But it's very inefficient, and it takes at least twice as long. It really slows us down. In some departments it completely brings us to a standstill."

 

After winning an IT makeover from Intel, Button Dodge received a real server – and a whole lot more. With its new hardware, Button Dodge immediately saw improvements to their business operations, and their bottom line. According to Josh, “We’ve gained file sharing, printer sharing, roaming profiles, and the ability to centrally manage our systems.”

 

Does Button Dodge’s “before” situation sound familiar to you? If so, consider contacting your local Intel® Technology Pro who can help support your business with the IT infrastructure you need and at a cost that fits your budget.

These days, "Cloud" represents more than just a visible mass of water droplets suspended in the atmosphere…It's more than just a weather pattern or a new technology buzz-word; it's the future of an industry.

 

However, with new technology come new questions, which is why we would like to hear from you:
What do you want to know about Cloud computing?

 

During VMworld 2011, there will be a livecast with Billy Cox, Intel Corporation’s Director of Cloud Software Strategy.

 

BillyCox.jpg

 

Since joining Intel in 2007, Billy has led the Cloud strategy efforts for the Intel Software and Services Group. In addition to his strategy responsibilities, he is one of the main driving forces behind the Cloud Builders program. As an avid participant in Cloud Builders, Billy provides great insight on moving to the cloud, hybrid cloud, and usage models and technology. In his 30+ years of industry experience, Billy has led the design of compute, network, and storage solutions and actively participated in multiple standards efforts.

 

You can listen in and participate in the live discussion with Billy Cox on Cloud computing and Intel Cloud Builders on Tuesday August 30th at 4:40 PM PST.

 

This is your chance to learn about Cloud computing from an Intel expert, and have your questions answered!

 

What should we ask Billy? You tell us!

 

Submit your question as a comment below, or reply to us on Twitter at @IntelChipChat!

OK, I said I’d update the story in a week, and it turns out to have been two months.  Fortunately, that delay has given me more stuff to talk about!

 

Previously, I introduced a database technology called a triple, how they’re stored, and Franz Technology’s goal of loading and querying a trillion triples on a large scale Intel server.  Triplestores are perfect for making sense out of extremely complex data.  However, a triplestore is only useful if massive quantities of information can be loaded, updated and effectively queried in a reasonable amount of time.
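
For readers who missed that earlier post, here is a toy illustration of the idea: a triple is simply a subject-predicate-object statement, and a triplestore is a collection of them that can be queried by pattern. (This little sketch is only meant to convey the concept; it bears no resemblance to how Franz' engine actually stores or indexes data.)

```python
# Toy illustration of triples: each fact is a (subject, predicate, object) tuple.
# A production triplestore indexes billions of these; this shows only the concept.
triples = [
    ("alice", "works_for", "acme"),
    ("acme", "located_in", "boston"),
    ("alice", "knows", "bob"),
]

def match(s=None, p=None, o=None):
    """Return every triple matching the pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

print(match(s="alice", p="knows"))   # -> [('alice', 'knows', 'bob')]
```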

 

That is why Franz Technology’s announcement is so interesting.  Less than a month before the 6/7 announcement, Intel gave Franz access to one of our lab systems -  a high performance server from IBM.

 

The system was an IBM x3850 X5 8-socket server configured with:

  • 8 Xeon® E7-8870 processors, each with 10 cores and 30MB of L3 cache, running at 2.4GHz
  • 2 terabytes of 1066MHz DDR3 DRAM
  • 22TB of fast Fibre Channel SAN storage

This particular system can get much larger in terms of both memory and storage, but we had to go with what we had in the lab at the time.

 

Before Franz had an opportunity to work with this system, the largest triplestore they'd been able to assemble contained roughly 50 billion entries.

 

Running on the 8-socket Xeon E7 system, Franz was able to load and efficiently query more than 320 billion triples, and the factor limiting scale wasn't memory or processors--it was the amount of disk space available.  With some additional spindles and memory, Franz is confident that they can achieve the previously unthinkable result of a trillion triples.

 

It's difficult for the human mind to grasp a trillion anything - dollars, stars, or triples.  The important thing to understand here is that the amount of processing that goes into loading and querying a trillion triples is enormous.  Unless you have a hardware platform that can deliver a corresponding amount of concentrated processing power at an affordable cost, it's all kind of pointless.

 

What Franz demonstrated was that such a hardware platform exists, it performs even better than expected, and it delivers a level of capacity that allows customers to think about putting the full potential of the Semantic Web to use in important and creative ways.

 

The other thing that's so interesting about this example is that triplestores are perfect for making 'fuzzy' (i.e., probabilistic) decisions. Combine a triplestore with a Bayesian Belief Network (BBN) reasoning/machine learning application, and you've got a very powerful combination. Instead of just retrieving data that satisfies a predefined query, a BBN combined with a triplestore can 'discover' relationships in the data based on patterns of recurrence and feedback loops.

 

One of Franz' key customers, Amdocs, used the SemTech conference to present their vision of how they plan to use this technique to anticipate what a telephony services subscriber might be calling about before the customer service rep picks up the phone. If that actually works, at scale and affordably (which is what the Xeon E7 processor is all about), then I think it'll be a pretty amazing breakthrough.

 

On August 16, Franz announced their semantic web breakthrough: they achieved the trillion-triple mark. This time, they did it on an Intel-based HPC cluster-in-the-cloud provided by Stillwater SuperComputing. This achievement demonstrates that Franz' graph-oriented database engine is capable of scaling out as well as scaling up.

 

It’s always good to have choices in terms of deployment architecture, and Franz’ approach to doing triplestores provides an ideal test bed for comparing the scale-out vs. scale-up approaches for this workload.  My bet is that a cluster of large machines will prove to be better-suited to the realities of processing the Semantic Web than will a larger cluster of smaller machines.

 

The reason I say that is that with triplestores, it’s virtually impossible to predict in advance the path that any particular query will take through the data.  So if the data is sharded across a large number of nodes, a select/join operation is very likely to bottleneck on the network connection between nodes.  If all of the data is stored locally to a single large machine, then joins process at the full speed of memory, which is always orders of magnitude greater than any network.
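
To put rough numbers on that intuition, consider the back-of-envelope comparison below. The bandwidth and data-volume figures are illustrative assumptions, not measurements from the Franz work.

```python
# Back-of-envelope: serving join traffic from local memory vs. shipping it over a network.
# All figures here are assumptions for illustration, not benchmark results.
join_traffic_gb = 500.0        # hypothetical intermediate data touched by one large join

memory_bw_gb_s = 50.0          # rough per-socket DRAM bandwidth on a big Xeon server
network_bw_gb_s = 10.0 / 8     # a 10GbE link, about 1.25 GB/s

print(f"in-memory:  {join_traffic_gb / memory_bw_gb_s:6.0f} seconds")
print(f"over 10GbE: {join_traffic_gb / network_bw_gb_s:6.0f} seconds")
# Roughly a 40x difference before protocol and serialization overhead are counted.
```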

 

It will be interesting to monitor progress of the old scale-out vs. scale-up debate in this new arena of triplestores.  Watch this space for updates.  I promise to let you know when something interesting happens.

 

What do you think?  Are Triplestores interesting to your business?

Are you an IT professional who would like to get your name "out there"? Or would you like to recognize someone for great work? Intel would like to shine the spotlight on great work and great people! This is a chance to be, or to nominate someone to be, profiled on Intel.com as an IT Pro.

 

Recognition on Intel.com – sound interesting? Good! Here's what it takes:

 

 

  • E-mail submission of a short questionnaire that asks for:

 

    • Contact/business info of nominee
    • Experience/background of nominee
    • Project specific questions based on categories listed above

 

  • Approximately 20 minutes of your time to answer the questionnaire

 

If this sounds interesting please e-mail me or send me a message here on the Server Room Community (login-required).

Headed to San Francisco for Intel Developer Forum 2011? Well, start some thumb exercises and sleep with your smart phone next to your pillow because Intel will give away two Netgear home servers at the event!

 

All you need to do is follow @IntelXeon on Twitter and you could win!

 

During IDF 2011, we will tweet a time and the exact location of the prize. Once you see that tweet, sprint on over to the location. The first person who gets to the designated location at the proper time will win a Netgear home server! It's as simple as that!

 

Please be sure to read the official rules and information and start following @IntelXeon now!

 

Good luck!

Download Now


FXCM.jpg
In the foreign exchange market, a price change of one-hundredth of a penny can mean the difference between profit and loss for a trader. FXCM helps traders capitalize on these changes, which occur within milliseconds, by using technology that delivers rapid, intelligent trading executions and facilitates direct interactions with financial institutions. The company recently refreshed servers with the Intel® Xeon® processor 5600 and 7500 series to accelerate trades, accommodate periodic usage spikes, and ensure high application availability while controlling data center real estate.


“We do our homework,” explained Ivan Brightly, chief information officer for FXCM. “The performance, memory bandwidth, and energy efficiency of the Intel Xeon processor 5600 series made those processors a clear choice for our applications.”


To learn more, download our new FXCM business success story. As always, you can find this one, and many others, in the Intel.com Reference Room and Survival Kit.

Download Now

 

SK Telecom.jpg
Korea's SK Telecom, a leading mobile communications company, enhanced its security by deploying hardware-based Intel® AES-NI powered by Intel® Xeon® processors and plans to extend the solution to its data centers to offer customers reliable and secure services.


To quickly respond to security threats, SK Telecom needed a security system that encrypts data while minimizing server slowdown. The hardware-based Intel AES-NI, powered by Intel Xeon processors, performs the encryption easily, quickly, and completely in the hardware without affecting overall system performance.


“With the launch of new cloud services, we needed a more powerful security system to protect the user’s private data,” explained Nam-Seuk Han, head of the Information Technology R&D Center at SK Telecom. “That is why we chose the hardware-based Intel AES-NI powered by Intel Xeon processors.”

 

For the whole story, download our new SK Telecom business success story.

Intel is using this week's Hot Chips conference to disclose new details about its next-generation Itanium chip, codenamed Poulson.

 

The initial Poulson details (8 cores, 3.1 billion transistors, 32nm process) were disclosed at the International Solid State Circuit Conference earlier this year. While Itanium customers are always interested in coming attractions, it's also worthwhile for Intel Xeon server customers to keep an eye on the evolution of Itanium, as many features originally introduced on Itanium waterfall down to subsequent generations of Xeon CPUs. Remember that Poulson, like the current Intel Itanium 9300 processor, shares many common platform ingredients with Xeon, including the Intel QuickPath and Scalable Memory Interconnects, the Intel 7500 Scalable Memory Buffer and DDR3, and the Intel 7500 Chipset.

 

So, what's new? There are three key feature areas. The first is Intel Instruction Replay Technology, a major RAS enhancement. This is the first Intel processor with instruction replay RAS capability, and it uses a new pipeline architecture to expand error detection and capture transient errors in execution. Upon error detection, instructions can be re-executed from the instruction buffer queue to automatically recover from severe errors and improve resiliency.

 

The same instruction buffer capability also enables the second new feature, improved Hyper-Threading Technology. Dual Domain Multithreading support enables independent front-end and back-end pipeline execution to improve multi-thread efficiency. As the EPIC architecture is already known for its highly parallel nature, this enhancement will help take Poulson's overall parallelism to the next level.

 

Lastly, Poulson adds new instructions in four key areas. First, there are new integer operations (mpy4, mpyshl4, clz). In support of the higher parallelism and multithreading capabilities, there are expanded data access hints (mov dahr), expanded software prefetch (ifetch.count), and thread control (hint@priority). These new instructions lay the foundation for the Itanium architecture to grow with future needs.

 

As you can see, most of these features are designed to take  full advantage of the 8 core, 12-wide issue architecture by enabling the  maximum amount of parallel execution. Poulson is on track for 2012 delivery (if  you attended HP Discover you may have had a chance to actually see an active Poulson  system!)  and the follow-on future  Kittson processor is under development.

 

If you'd like to learn more details, check out the full Hot Chips presentation.

Download Now 

cloud4com.jpg

Cloud4Com delivers cloud computing solutions to medium and large enterprises in the Czech Republic. Its central offering is its Virtual Data Center* service, an infrastructure-as-a-service (IaaS) solution that provides customers with remote access to the resources typically found on demand in an enterprise data center.

 

Cloud4Com used reference architectures from Intel® Cloud Builders when planning the technology platform for its service. It identified the Cisco Unified Computing System* with 12 Intel® Xeon® processor 5600 series CPUs as the solution best able to offer the powerful, flexible, and energy-efficient server platform it needed. For unified networking, Cloud4Com deployed a Hitachi Data Systems Adaptable Modular Storage* 2300 solution following an Intel Cloud Builders reference architecture for unified networking over 10 gigabit Ethernet.

 

“The commercial insights we gained through our membership in the Open Data Center Alliance and the reference architectures provided by Intel® Cloud Builders proved invaluable to us when developing our Virtual Data Center service,” explained Jaroslav Hulej, sales director of Cloud4Com. “Using the support provided  by Intel, Cisco, and others, we were able to implement an infrastructure platform that delivers all we—and our customers—need from the service.”

 

For the whole story, download our new Cloud4Com business success story.

 

 

*Other names and brands may be claimed as the property of others.

You've done the Proof of Concept (PoC) and now everyone clamors to put the system into production. Some see the application running after the PoC and ask you to link it in to the network. Yet you know that you put this system together, and the finishing touches were done with chewing gum and baling wire.

 

Looking at the production system, everyone can see that the source application has been receiving data since you started the PoC. How is that new data going to get into the application? The bottom line is that it really can't. The referential integrity issues alone would stop you, and of course you can't go and apply the application's log files (unless they are in character form).

 

What are your options here?  You could export all of the data that has changed during the period, but how do you distinguish or even find the data that has changed?  Not to mention, when you apply the changed data, new data still comes in. What are you going to do with that?

 

The solution is a migration of the actual production system while it runs. There are various techniques to do this, but all require you to assure the boss that everything will be OK. You know now that you can do the migration (the PoC proved that), but how will it be done on the production system? Most importantly, how do you minimize downtime? How long is the downtime going to be?

 

So, now is the time to take those lessons learned in the PoC and apply them to the migration process. What does this mean? In the PoC process you probably moved a part of the system, and then moved another part. You discovered you needed to adjust the tuning of the server's operating system, and then the DBA told you that you forgot a link. This was followed by problems in the execution of the application that required analysis and correction. (See? Baling wire and chewing gum.)

 

Luckily, you documented all of the changes and determined the proper implementation sequence (or at least you think it is the proper implementation sequence). You also discovered that you can parallelize a lot of the steps and reduce the down time of the application.  Meanwhile, you have management asking when you can have the application migrated to the new platform.  For instance, at one site we were moving a retail program, RETEK, and Thanksgiving was fast approaching.  They needed the extra horsepower to handle the Christmas rush, and they had to input all of the Christmas marketing programs into the system.

 

To answer these questions and address the concerns, execute a rehearsal of what you plan to do for the actual migration.  Plan is the operative word here.

Take what you learned from the PoC and organize it into a coherent plan with the events sequenced correctly.  Plan on running as many tasks as you can in parallel.  Plan on doing as many tasks as you can before you have to shut down the production system for the cutover.

 

Back in the day, when export/import was all that was available for migrating an Oracle database, I would build the target database, create all of the production tablespaces as small as I could, and create the users. Then I could 'ADD DATA FILE' in parallel until each tablespace was big enough for the data. After that, I would create all of the tables with enough extents to hold all the data (we don't want Oracle to spend time extending the tables while the production system is down). With the tables in place, I could create and compile all of the Packages and Procedures. I then turned off all referential integrity constraints and Triggers. Only then was the production system put into single-user mode for the export of the data. All we needed to export were the rows of data; no indices. Once the data was inserted, we fired off the index create statements and ran them in parallel. Once the indices were completed, the referential integrity constraints were 'ENABLED', setting off another round of index building.
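
To show the shape of the "run the index creates in parallel" step, here is a minimal Python sketch. The script names, the connect string, and the choice of shelling out to SQL*Plus are all placeholders for this example; they are not the actual tooling we used.

```python
# Minimal sketch of firing off index-create scripts in parallel after the data load.
# The scripts, credentials, and use of SQL*Plus are placeholders for illustration.
import subprocess
from concurrent.futures import ThreadPoolExecutor

index_scripts = ["idx_orders.sql", "idx_items.sql", "idx_customers.sql"]

def run_ddl(script):
    # Runs one DDL script against the target database via SQL*Plus.
    return subprocess.run(["sqlplus", "-S", "user/password@target", f"@{script}"],
                          check=True)

with ThreadPoolExecutor(max_workers=4) as pool:
    for script, result in zip(index_scripts, pool.map(run_ddl, index_scripts)):
        print(f"{script}: finished (return code {result.returncode})")
```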

Now that you've moved the application, you'll need to run it in your test harness for regression testing. The testing will 'prove' that the migration is complete.

 

A final check of the database, and it opened up.    Here is an illustration of the process I just described.

 

Exp-imp Migration.jpg

 

Today, you can use GoldenGate or Streams for Oracle database migration or you can use Replication Server for Sybase. GoldenGate allows you to run the production and the new database in parallel, and even roll the migration back to the original database in the event that something goes wrong.

Wouldn’t it be better if that ‘something that goes wrong’ occurred when you executed the rehearsal of the migration of the Q/A database?   A little patience will result in greater likelihood of complete success.

 

Here are the rehearsal steps in a flow chart:

 

Production Rehearsal.jpg

 

 

Now that you’ve done the rehearsal, documented every step, and developed scripts to automate every process that can be automated, you are ready to plan the actual production migration and answer the question, ‘When is the application going to be ready on the cheaper platform?’

I recall a conversation not too long ago with my program manager, who had embarked on his maiden Lean Six Sigma certification and evangelized the elimination of waste from a given process cycle.

 

This prompted me to think about organizational IT functions and what they currently entail. What in the process isn't useful? What can be reduced without compromising deliverables?

 

Today’s cloud environment is hosted in mega data centers, and many companies host their private cloud in their enterprise data centers. Power efficiency is at the heart of every leading company.   Are our data centers designed correctly?

 

Walk into any data center and you will find that it is running at 21C to 22C. Data centers are over-cooled, designed to run at maximum cooling. ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) allows operating temperatures into the high thirties centigrade for certain equipment classes, and since 2008 its operating guidelines have advised that data centers can run at 27C (81F). However, few data center operators adhere to these guidelines, as evidenced by Service Level Agreements (SLAs) that state operating temperatures of 22C. Clearly, a paradigm shift must precede any technical corrections.

 

For now, let’s get the basics right, starting with 27C operations.  What does running at 27C actually mean? It refers to the supply temperature of the server intake cold aisle.  So, is the solution as simple as increasing the operating temperature? Is that all that is necessary?

 

It is a misconception that simply allowing the room to heat up will save on cooling costs. Proper equipment cooling requires an engineered solution that adheres to Delta T specifications in terms of cooling tonnage and airflow. Data centers must be designed with proper cooling capacity for high-density racks. Airflow needs to be segregated properly – hot air in the hot aisle and cold air in the cold aisle – for a closed-loop cooling path. It is essential to ensure sufficient airflow velocity to cool the equipment racks. The Computer Room Air Conditioning (CRAC) units must match the equipment heat load and ensure the Delta T target is met. Exceeding the Delta T specs has operational consequences, so understanding the specs and how your data center performs is crucial in this respect.

 

What is the next step?

Once IT equipment is being cooled effectively and you've ensured your data center is designed in accordance with basic cooling principles, the next step is to raise the temperature from 21C to 27C. This should be done progressively, raising the temperature one degree at a time. You can raise the sensor set points or switch off individual Computer Room Air Conditioning (CRAC) units. Measure the CRAC supply and return temperatures, equipment inlet and outlet temperatures, the temperatures under the raised floor and in the ceiling plenum, and air pressure. Use a Computational Fluid Dynamics (CFD) tool to perform an analysis prior to re-calibrating the room. Your building management system should track these key vital signs, including the CRAC and cooling-loop supply and return temperatures, before and after the change. This should be well documented and in line with your cooling equipment specifications.

 

What are the rewards?

To demonstrate the rewards, consider a data center with a 2MW IT load, a model that closely parallels a typical customer setup. At 21C operation, total annual power consumption is assessed at 19.6 GWh, with a Power Usage Effectiveness (PUE) of 1.60 and infrastructure overhead energy of 7.39 GWh. When the operating temperature is raised to 27C, overhead energy is reduced by 23%. The advantages are greater when an economizer is set up: overhead infrastructure energy drops by 37% and PUE reaches 1.38. Assuming a power cost of $0.08 per kWh, this translates to a conservative $212,000 in annual savings, which is no small change, and a carbon footprint reduction of 1,280 tons. All this can be accomplished with zero impact on applications.
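
As a sanity check on those figures, the short calculation below reproduces them from the stated totals. The 37% overhead reduction and the $0.08/kWh rate are taken as given; the small gap from the quoted $212,000 comes from rounding and the "conservative" framing.

```python
# Reproduce the 2MW example from the stated annual totals.
total_energy_gwh = 19.6                      # annual consumption at 21C operation
overhead_gwh = 7.39                          # infrastructure (non-IT) energy at 21C
it_energy_gwh = total_energy_gwh - overhead_gwh            # ~12.2 GWh of IT energy

pue_21c = total_energy_gwh / it_energy_gwh                 # ~1.60

overhead_27c = overhead_gwh * (1 - 0.37)                   # 37% savings with an economizer
pue_27c = (it_energy_gwh + overhead_27c) / it_energy_gwh   # ~1.38

saved_kwh = (overhead_gwh - overhead_27c) * 1e6            # GWh -> kWh
annual_savings = saved_kwh * 0.08                          # at $0.08 per kWh

print(f"PUE at 21C: {pue_21c:.2f}, PUE at 27C with economizer: {pue_27c:.2f}")
print(f"Annual savings: about ${annual_savings:,.0f}")     # ~$219k vs. the quoted $212k
```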

 

This also adheres to the 2008 operating guidelines, which set the recommended temperature range at 18C to 27C. Can the temperature be set higher? Absolutely! That is why ASHRAE defined Classes A1 through A4 in its 2011 data center white paper, which move the data center toward 40C high-ambient operation. That will be a topic for a future discussion.

 

For many enterprises, there is a renewed interest in setting up a solid and efficient cloud infrastructure that runs like a utility – always available. This means setting up the private cloud, proper sizing and hosting the applications in the appropriate DC.

 

For cloud computing hosting providers and builders, there is a need to provide high uptime with a reasonable cost structure. Operate your data center right, and the savings will flow to your bottom line. A 1C increase in operating temperature can translate to a 4% savings in chiller power. Run the data center efficiently, and your business will enjoy a competitive edge. It is critical to get the infrastructure right in order to ensure customer confidence in the cloud. This will have a multiplier effect, as customers often sign up with providers based on recommendations. Many customers are keen to sign up with hosting providers who are leaders in stewarding sustainability. Being green is not a choice; it is a mandate, and it makes sound business sense.

download_arrow.png
Start your week right with the Data Center Download here in the Server Room! This post wraps up everything new in the Server Room, the Cloud Builder Forum, and our Data Center Experts around the web. This is your chance to catch up on all of the blogs, podcasts, webcasts, and interesting items shared via Twitter from the previous week.

 

Here’s our wrap-up of the best of data center news in and around Intel since July 21st:

 

 

*Please note there is a  change on The Server Room Forums.  Ask an  Expert is no longer located on the Server Room.   You can find the server technical  support forum by going to: http://communities.intel.com/community/tech/servers

Please update any bookmarks you may  have. 

We apologize for any inconvenience this may cause.

 

 

In the Blogs:

Raejeanne  Skillern asked what  is really holding back the cloud?

Brian  Yoshinaka explained In  the Data Center: Open FCoE Brings Integrated Fibre Channel over Ethernet to  VMware vSphere 5

Robert Deutsche shared Cloud Computing Strategy: It's All for Naught if the Dogs Won't Eat

Jennifer  Sanati  described that Information  Security for Small Businesses Evolves

Bruno Domingues explained Cloud Computing & Capacity Planning for SaaS – Part II

Cory  Klatik  shared that Cloud  Computing Technology and Research Gets a Boost

Jason  Waxman on extending  our Cloud “Field of Vision” beyond 2015

Winston Saunders explained Power and Energy Efficiency: Double Your Benefit

Pauline Nist shared Performance & Benchmarks: Xeon E7 and DB2 Tops TPC-C Performance Results

Emily  Hutson  told us about better  Information Security for Small Business: Intel Xeon Processor E3 Family-Based  Servers

Kibibi  Moseley  shared highlights  from Intel Mission Critical Solutions @ Cisco Live 2011: World of Solutions

 

Broadcasted Across the Web:

 

On Chip Chat – Episode #144 Energy  Efficiency in the Home

 

Rich Libby from the Eco-Technology program office discusses how to make the office and home more energy efficient using smart grid technology and an upcoming technology called WEST (Wireless Energy Sensing Technology).

 

On Conversations  in the Cloud – Episode #20 Building  Clouds with StackIQ

 

In this Conversations in the Cloud podcast Greg Bruno, VP of  Engineering from StackIQ, talks about the company’s focus in the HPC, big data,  and the cloud markets and using StackIQ Rocks for managing the end-to-end  stack.

 

In the Cloud Builders Webcast: Anticipating Unknown IT Needs  with Flexible Building Blocks

 

When going to the cloud, what's important? Security, reliability, and flexibility. In this webcast, NetApp and Intel come together to share key learnings on how they built an open and flexible cloud reference architecture using VMware ESX, NetApp Unified Storage Systems, and Intel Unified 10 Gb Ethernet adapters. Learn how VMware, NetApp, and Intel are uniquely positioned to help you evolve your virtualized IT infrastructure to a cloud computing model.

 

On Channel Intel:

 

Intel  Lenovo Secure Cloud Access Overview

The  Intel® Server Products Edge -- Quality, Ingenuity & Commitment You Can  Trust

New  Intel Science and Tech Center for Cloud Computing

ISTC  Cloud: Carnegie Mellon University

ISTC  Embedded: Carnegie Mellon University

Cloud  computing at terminal velocity

Intelligent  Client in the Cloud with Intel Core vPro processors

Intel  Cloud 2015 Vision: Client Aware Cloud

Citrix  NetScaler Working with Intel and Microsoft Solutions

 

In the Social Stream:


Allyson Klein

  • #AMD #Nvidia losing market share to #Intel which now  commands 60% of GPU market - http://j.mp/opfuh9
  • Lightning takes down a #datacenter. Are solar explosions  next? http://ow.ly/6118w

 

 

Winston  Saunders

 

 

Dylan Larson

 

 

Raejeanne Skillern

  • Reading about Zynga's private cloud strategy - zCloud http://bit.ly/qIU9me #cloudstack &  #rightscale named as best of breed. Let's play!
  • Not a surprise > Solid State Drive Adoption Increasing:  via @eWeek.com http://bit.ly/pLoaB6
  • VMware starts to concede, PaaS on rise: VMware Licensing  Change Opens Doors for Competing Virtualization, Cloud Prvdrs http://bit.ly/obV97r
  • RT @bradshaf: Pretty good summary on value of OpenStack to  Rackspace « Data Center Knowledge: http://bit.ly/q9WrpX
  • A worthwhile read: Cloud Standards Get Customer Push via  @eWeek.com http://bit.ly/odl5nn
  • From #intel IT: "Our cloud investments have already  paid for themselves AND returned $17M in cash savings " http://bit.ly/nOEOqV

Download

 

dreamworks2.jpg
With the stakes higher for each new computer-generated (CG) animated 3D feature film, DreamWorks Animation has an ever-increasing need for computing performance. Using Intel® Xeon® processor 5600 series-based platforms, the studio is achieving more than a 60 percent performance increase over previous-generation systems. DreamWorks Animation is using that boost in performance to help deliver two stereoscopic 3D animated films in 2011: Kung Fu Panda 2* and Puss in Boots*.


“Taking advantage of technological innovation is the key to enabling our overall business ambitions,” explained Ed Leonard, CTO of DreamWorks Animation. “By working closely with technological leaders such as Intel, DreamWorks Animation is able to stay on the cutting edge. Our collaboration with Intel and its new technologies is allowing our artists to be more creative.”


To learn more, download our new DreamWorks Animation business success story. As always, you can find this one, and many others, in the Intel.com Reference Room and IT Center.

 

 

*Other names and brands may be claimed as the property of others.

I often mull over the challenges gating cloud adoption and how Intel can create better products to accelerate the efficiency and capability benefits the cloud delivers to businesses.  Security has always held the #1 position on my list based on customer discussion and supported by industry analyst surveys.  I recently changed my mind on this though and wrote a blog for Cloud Ave to discuss where I believe the industry needs to shift focus.  Interoperability of hardware and software solutions and cloud services as a whole is now top of my list and I've outlined a few ways for each of us to get involved and shape the evolution of cloud computing.  Please read and let me know if you agree!

 

Contact me via Twitter at @RaejeanneS for more info and questions.  - Raejeanne

On July 12th, VMware announced vSphere 5, the new version of its enterprise virtualization suite. There are many new features and capabilities in this product. From a networking point of view, one capability that’s pretty darn compelling is the native Fibre Channel over Ethernet (FCoE) support delivered through the integration of Open FCoE.

 

Not quite sure what that means? Read on.

 

In January, Intel announced the availability of certified Open FCoE support on our Intel Ethernet Server Adapter X520 family of 10GbE products as a free, standard feature. We had great support from our partners – Cisco, EMC, Dell, NetApp, and others – and we received plenty of positive press. Now, with the launch of vSphere 5, Intel and VMware have taken things a step further by the integration of Open FCoE in the industry’s leading server virtualization suite.

 

So what is Open FCoE and why is it important?

 

Open FCoE enables native support for FCoE in an operating system or hypervisor. Integrating a native storage initiator has some key benefits for customers who look to simplify their networks and converge LAN and storage traffic:

 

  • It enables storage over Ethernet support on a standard Ethernet adapter; no need for costly converged network adapters (CNAs) powered by hardware offload engines
  • Performance scales with server platform advancements, as opposed to CNA performance, which is limited by the capabilities of its offload processor
  • It enables FCoE on any compatible 10 Gigabit Ethernet adapter, which helps prevent vendor lock-in.

 

Open FCoE support in vSphere 5 means VMware customers can now use a standard 10 Gigabit Ethernet adapter, such as the Intel Ethernet Server Adapter X520, for 10GbE LAN and storage traffic (including NAS, iSCSI, and FCoE), which ultimately simplifies infrastructures and reduces equipment costs.

 

The idea of integrating native storage over Ethernet support isn’t new; most operating systems and hypervisors have included a native iSCSI initiator for several years. We’ve watched as dedicated iSCSI adapters gave way to iSCSI running on standard Ethernet adapters with performance that increases with each bump in processor speed and platform architecture improvement. We expect Open FCoE to bring similar benefits to FCoE traffic.

 

Intel worked closely with VMware to integrate Open FCoE in vSphere and to qualify it with the industry’s leading storage vendors. We’re excited to see it incorporated into vSphere 5, and we feel confident that VMware customers will appreciate its benefits.

 

VMware’s Vijay Ramachandran, Group Manager of Infrastructure Product Management and Storage Virtualization, offered some thoughts on Open FCoE in vSphere and why it’s a good thing for VMware customers.

 

Is unified networking and combining LAN and storage traffic on Ethernet important to your customers?

Absolutely. Virtualization is a major driver of 10 Gigabit adoption, and network convergence on 10GbE is very important for our customers who look to increase bandwidth and simplify their infrastructures.

 

There are other ways to support FCoE in vSphere. Why is Open FCoE integration significant?

Integrating Open FCoE into vSphere is important because it makes FCoE available to all of our customers, just as iSCSI has been for years. When customers upgrade to vSphere 5, they get FCoE support on any compatible 10GbE adapter they have installed. That’s important because choice is a key pillar of VMware’s private cloud vision. With its support for standard 10GbE adapters and compatibility with FCoE-capable network devices, Open FCoE supports that vision. vSphere 5 has several new storage and networking features that increase performance and improve management, and with Open FCoE, we have a native solution with performance that will scale with advancements in vSphere and server platforms.


Can you tell us about the work Intel and VMware did to enable Open FCoE in vSphere?

We wanted to implement FCoE in a way that offered the best benefits to our customers. VMware worked closely with Intel for over two years to integrate Open FCoE into vSphere and to validate compatibility with their 10GbE adapters. It’s nice to see the results of that work in vSphere 5.

 

I’d like to thank Vijay for taking the time to answer these questions for us.

 

If you’re interested in learning more, see Intel and VMware: Enabling Open FCoE in VMware vSphere 5.

 

Follow us on Twitter for the latest updates: @IntelEthernet

I just finished reading a book by Bob Lutz called Car Guys vs. Bean Counters: The Battle for the Soul of American Business. Lutz is a former vice chairman of General Motors as well as an ex-fighter pilot (VMA-133) and genuine gear head. In one chapter, he talks about an imaginary dog food company where the “food chemistry” is brilliant and the product is optimized for healthy contents and costs. The company’s marketing and advertising functions are fine-tuned and its operations and supply chain systems are the envy of all its competitors. The line staff is highly motivated and management is filled with high-achieving graduates of the nation’s finest business and engineering schools. There’s just one thing the company forgot to consider: Do dogs like the end product? Maybe this is a fundamental truth the company should have recognized earlier. After all, if the dogs won’t eat the food, the company is yesterday’s kibble.

 

As the world trumpets the praises of all things cloud, it's easy to ignore some fundamental truths that are similar to the dog who doesn't like the food. In my latest post on Data Center Knowledge, we identified and started to discuss eight inviolable core truths of any corporate cloud strategy. As in previous posts, I base these truths on my architect's view of the enterprise, and how factors both inside and outside your organization will impact industrial-level adoption of a cloud-based ecosystem. We discuss a cloud adoption cycle that seems much longer than the industry norm, suggest that cloud is actually a verb instead of a noun, and ask you to consider things like bandwidth, government policy, and other elements in your planning process.

 

As always, we welcome your feedback, especially on experiences with your own company’s cloud migration.

 

Until next time…

In the past, security meant that you locked up your business doors before heading home. Now, things have changed – and drastically. In an effort to stay competitive, aided by the explosion of e-commerce, businesses have shifted to an "always open" model where customers are serviced 24/7, long after the brick-and-mortar stores close.

 

New IT security risks have surfaced as a result of this evolving business model, and the need for small businesses to address these risks has grown due to a surge in security breaches. Research studies show that the impact of a security breach can be not only costly, but catastrophic. The data below shows the average cost incurred by activity as a result of a data breach.

 

DataBreach.png

 

A recent study conducted by the University of Texas revealed that 43% of small businesses that suffer a data breach never re-open. This study is one of many demonstrating that small businesses are not shielded from the risk of security breaches. Like large businesses, small businesses face threats and regulatory requirements; however, they often lack the resources to secure their data and IT systems. This leaves security holes, and makes it even more important for small businesses to take steps to prevent security breaches. Small businesses need tools to help them protect their data, and Intel Xeon-based servers can help.

 

One common way to protect data is through encryption. Encryption works by taking data in plain, readable text form and running it through an algorithm, converting it into an encrypted form that is unreadable without a special key. Historically, the encryption process was complex and computationally costly to execute, so many businesses simply did not encrypt their data. Today, with Intel® Advanced Encryption Standard New Instructions (AES-NI), this is no longer a problem: Intel AES-NI speeds up the encryption process, which makes encryption feasible where it was not before.
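
To make that concrete, here is a minimal encryption sketch in Python using the third-party cryptography package (chosen here only for illustration). Libraries like this sit on top of OpenSSL, which uses AES-NI automatically when the processor supports it, so applications get the speed-up without code changes.

```python
# Minimal AES-256-GCM example using the third-party 'cryptography' package
# (pip install cryptography). The OpenSSL backend underneath uses AES-NI
# automatically on processors that support it.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # the "special key" that must stay secret
nonce = os.urandom(12)                      # must be unique for every message

plaintext = b"Customer record: card ending 1234"
aesgcm = AESGCM(key)

ciphertext = aesgcm.encrypt(nonce, plaintext, None)    # unreadable without the key
recovered = aesgcm.decrypt(nonce, ciphertext, None)

assert recovered == plaintext
print(ciphertext.hex())
```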

 

There are several usages for encryption in small businesses where Intel AES-NI can speed up the process, including the following:

 

  • Whole disk encryption. Hard drive encryption is important for businesses to protect critical business and customer data from being accessed by hackers in the event of theft, or when it's time to decommission the drive.
  • Internet security. Encryption is often employed for securing internet and cloud-based transactions. Examples include online banking and e-commerce.
  • Application level encryption. Sometimes it is useful to implement more fine-grained encryption policies. Application level encryption allows you to encrypt a subset of fields in your business/customer database, fields containing sensitive data such as social security numbers, billing/credit card information, etc.

 

The encryption mechanisms above are but a few example usages of how encryption can enhance your small business' security and offer peace of mind. And with Intel AES-NI, supported by Xeon E3 servers, you can expect up to 58% faster encryption. You no longer need to sacrifice computational resources for security - now you can have both!

 

Many things have changed since the days when a key to the building meant your business was secure, and it will only continue to evolve from here. Will your business be prepared?

In my last post about Capacity Planning for SaaS, I explained how to understand a user's behavior and translate that behavior into mathematical language. The same principle can be applied to web applications as well.

 

o-grito.jpg

Analyze the Application


The second aspect of capacity planning is to understand application behavior under load. It is important to recognize how many computational resources will be required to execute a single "transaction". To do this, you will need to set up a lab or use a staging (homologation) environment where you can install the application at an equivalent, smaller scale. With this, you can simulate user interaction with the application at different load levels. There are several load-testing options available in the market; however, for an ad hoc test, I personally like the Microsoft Web Application Stress Tool (aka WAST).

 

As soon as you have the lab ready, you can start the load test. Whichever tool you decide to use, you will get a test report that shows how many requests there were, how many succeeded, how long they took, how many failed, etc. The most relevant metrics (considering a regular Microsoft IIS) are:

 

  • Web Services: Get Requests/sec
  • Web Services: Post Request/sec
  • System: % Total processor time
  • Active Server Pages: Request/sec

 

Fine-tuning the load is important so that it represents real user behavior. This should include latency between requests to simulate a real user navigating the website. Eventually, you will reach the point where you do not get HTTP errors. These errors mean that something went wrong, such as the load tester sending more requests than the website is able to manage, or application issues. Usually, I repeat the test until I find the point where HTTP Error 500 does not occur but appears with a little more load. Even if you identify that the target system has headroom in CPU, memory, I/O, and network bandwidth under load, this does not mean that the system is able to handle heavier loads. At this point, you should stop and start to work on tuning.

 

Let me give an example of why user demand can cause problems even when hardware capacity utilization looks low. By default, IIS can allocate up to 25 threads per CPU core. If the web application requires a remote call to any external component (such as a database, middleware, logs, etc.), that thread remains allocated until the external call returns. This round-trip takes time and does not consume CPU. In situations like this, there are many strategies to minimize the latency impact, such as increasing the thread count per core, making faster calls, etc.
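
A quick back-of-envelope calculation shows why this matters. The figures below (4 cores and 200 ms of external-call latency) are hypothetical, but they illustrate how blocked threads, not CPU, can become the ceiling on throughput.

```python
# Illustrative throughput ceiling when worker threads block on an external call.
# The core count and latency are assumptions for the example, not measurements.
cores = 4
threads_per_core = 25             # IIS default mentioned above
external_latency_s = 0.200        # time each thread spends waiting on the remote call

max_concurrent = cores * threads_per_core              # 100 requests in flight
max_throughput = max_concurrent / external_latency_s   # 500 requests/sec ceiling

print(f"Throughput ceiling: {max_throughput:.0f} requests/sec, "
      f"no matter how idle the CPU looks")
```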

 

With a test report in hand, how can we interpret it? Let’s assume that in a hypothetical test, we got these numbers:

 

  • System: % Total processor time: 92% average during the load, with a few peaks of 100%. This is enough to conclude that the CPU is not a bottleneck.
  • Active Server Pages: Request/sec: 152. A transaction is composed of 8 ASP requests (the user opens the first page, selects a couple of options that call ASP pages, etc.), so from an application standpoint the lab hardware is able to handle about 19 transactions per second (152/8).

 

Calculating the response time


Little's Theorem (also known as Little's Law) allows you to calculate the capacity to serve requests in a network, based on the ratio of the user request rate to the request-processing capability. This means that if the system reaches the point where that ratio is 1 or greater, the system is over capacity: requests are dropped and some users see HTTP Error 500 in their web browsers or wait in the queue to be processed.

 

The following equations represent the number of requests that can wait in the queue, and how long a request waits until it can be processed.

 

QueueEquations.PNG

Apply Little's Theorem (λ = request arrival rate from the Poisson model, μ = capability to process requests, ρ = λ/μ, the utilization).

 

At this point, Poisson's probability becomes very useful. You can select the arrival rate with the highest probability that multiple users access the system at the same time, the highest point on the Poisson curve. With this data, we can forecast not only how many users/transactions we can deal with, but also the maximum time required for them to be processed.
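
As a worked example, the sketch below applies the standard M/M/1 form of these queue equations (my assumption about the formulas pictured above), using the roughly 19 transactions per second measured in the lab as μ and a hypothetical peak arrival rate from the Poisson analysis as λ.

```python
# M/M/1 queue estimate: lam = peak arrival rate from the Poisson analysis (assumed),
# mu = transactions per second the system can process (about 19, measured in the lab).
lam = 15.0
mu = 19.0

rho = lam / mu           # utilization; must stay below 1 or the queue grows without bound
lq = rho**2 / (1 - rho)  # average number of requests waiting in the queue
wq = lq / lam            # average wait before processing (Little's Law)

print(f"utilization: {rho:.2f}")
print(f"average queued requests: {lq:.1f}")
print(f"average wait before processing: {wq*1000:.0f} ms")
```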

 

Defining the Environment:

 

At this point, you know enough about the system and how it works under load, which leads us to the following table and graphic:

 

QueueTable.png

 

QueueGraphic.png

 

As you know from my previous post on cloud computing, the probability of sustaining 20 or more concurrent users is less than 2%. However, it is not zero, so code review work or an improvement in computational capacity could be required.

 

 

Best Regards!

If you think cloud computing technology is moving fast now, we have quite an announcement for you. With today's announcement of the ISTC (Intel Science and Technology Center) multi-university investment in cloud computing and information security research, that development will accelerate beyond the cloud ceiling to the stratosphere.

 

This announcement builds upon the Intel Cloud 2015 vision and will have a profound effect on cloud computing technology, from mobile to the data center. In this video, Kirk Skaugen, Intel VP and General Manager of the Intel Data Center Group, discusses the opening of the new Intel Science and Technology Center for Cloud Computing. He explains how advanced academic research performed by leading universities with Intel will solve key challenges and drive cloud innovation forward.

 

 

For more information and news from Intel research about this announcement, follow Intel Labs on Twitter and Facebook. Also, for more cloud computing news and events, follow IntelXeon on Twitter. Be sure to keep up with the Server Room on Facebook as well!

It's been over a year since we started to talk about the Intel Cloud 2015 Vision to enable secure federation, automation of data centers, and client-aware cloud services. We originally articulated the vision as a way to highlight the capabilities that we felt were needed for cloud computing, and to serve as a rallying cry that drove our technology development at Intel.

 

We have introduced new virtualization, security, and management technologies in support of the cloud computing vision, but that is not enough. It became clear that we would not see breakthroughs in security, efficiency, data analytics, and device innovation without a concerted, forward-thinking research effort. Today, we announced a $15M multi-university research commitment called the Intel Science and Technology Center (ISTC) for Cloud Computing. Combined with the recently announced $15M investment in the ISTC for Secure Computing, this will accelerate cloud computing innovation. I think it's a great start, and I want to highlight a few of the outcomes I expect we'll see.

 

My first anticipated outcome is a new type of "hybrid" cloud. When most people think of a hybrid cloud, they generally mean the combination of public and private clouds. What I refer to here is a cloud composed of multiple, different types of computing devices, where each workload runs on the device that delivers the capability most efficiently. Conventional wisdom today is that a cloud should be as homogeneous as possible. While that may be effective for simplicity and consistency of service, it does not necessarily produce the most cost- or power-efficient results. We deliver architectures for highly parallelized workloads, for high-performance threads, and for lightweight or I/O-bound workloads, among others. We need to develop the automation to bring this best-architecture-for-the-job approach to cloud computing. This is one of the focus areas for the ISTC under the umbrella of "specialization": the ability to enable highly specialized workload placement in a cloud environment.

 

Second, "exa-scale" clouds. Parallel analytics and distributed databases were born out of the requirement to deal with large datasets in a cost-effective manner. Yet most data centers that deal with petabyte-sized databases still face programming challenges, and that is even before we add the substantial growth in video and large scientific data sets that we anticipate over the next several years. To address the next order of magnitude in datasets, the ISTC plans to research new tools that would facilitate debugging of big-data programs.

 

The third potential outcome is client-aware clouds. We've talked about the goal of a client-aware cloud that adapts to the proliferation and differing requirements of the 15 billion connected devices we anticipate by 2015. The ISTC aims to take this effort to the next level by enabling greater real-time adaptation of the cloud to mobile computing requirements. Moreover, the centers will perform research on how to mitigate limited uplink bandwidths. The result is a cloud that is aware of the needs, location, and context of a device, and that can deliver the right computing and service to it.

 

I believe the research from the ISTC investment will benefit the collective industry. We know the problems and limitations that exist in the cloud's ability to automate diverse workloads, to manage large data, and to adapt to devices across different networks. Because we are investing early and working with leading universities, I believe we are taking the right steps to deliver the research breakthroughs that will get us to Cloud 2020.


Cooking 101

Posted by rekharaghu Aug 2, 2011

Hello from Penang! Last week, Intel hosted an APAC Cloud Summit in Penang, Malaysia where we invited several of our customers and educated them on Intel's cloud vision and what Intel IT is doing in the cloud computing space. In addition, we walked through the current status of Intel Cloud Builders, a reference architecture program that we have in place to help address IT pain points in cloud with our partners. To date, we have more than 40 reference architectures published.

 

Penang is a melting pot of cuisine, blending Indian, Thai, Chinese, and other influences. Being in Penang and experiencing the flavors reminded me of an analogy a colleague of mine used to make: a Cloud Builders reference architecture is similar to a cookbook. A cookbook contains several recipes, with each recipe listing the ingredients and the steps required to make that dish in your kitchen. Similarly, Intel Cloud Builders reference architectures (recipes) describe the detailed hardware and software configuration (ingredients) and the steps to reconstruct that cloud in your data center (kitchen).

 

During this event, we also demonstrated several such cloud recipes with our partners, including: HP with Canonical and OpenStack, Microsoft System Center Virtual Machine Manager with Intel servers, Fujitsu with a Hyper-V-based cloud solution, unified networking with Cisco, power management with Dell and JouleX, trusted compute pools with VMware and HyTrust, unified networking 10GbE with VMware, scale-out storage with EMC, and cloud onboarding with Citrix. In addition, I walked through two of these reference architectures in detail in my session. Please visit the Intel Cloud Builders Reference Architecture Library to read each of these recipes.

 

Overall, it was a great event and we received great feedback from our customers. Stay tuned to hear more from my fellow bloggers. Sekian Lama!

Is there a difference between Power Efficiency and Energy Efficiency?

 

In many conversations here at Intel, we use the terms almost interchangeably. However, after I wrote a blog post about why the difference between energy and power matters to the CIO, I started to think about the implications for engineers. Just as the difference between power and energy should matter to the informed CIO, it should also matter to the engineers who define and execute strategies to improve server and data center efficiency.

 

A big breakthrough in thinking about server energy efficiency is the "efficiency load line," which gauges how well a system's energy use scales with workload. The folks at Google introduced the concept several years ago and labeled it energy-proportional computing. The idea is that, in an ideal world, if your compute load halves, your energy use should also drop by half.

 

Of course, in the real world, that's not the case. There are a variety of reasons for this, but here at Intel we've taken the goal of "making every watt compute" to heart. Below is a chart that shows the efficiency load lines of a server built in 2010 versus a server built in 2008. The data shows improvements in both power and energy efficiency.

 

Efficiency Load Line.jpg

 

The diagram below highlights how the power and energy efficiency improvements manifest themselves. Power efficiency involves the upper end of the utilization curve: the less power you consume at peak, the more efficiently you deliver compute (for the same performance level). On the other hand, the average utilization of your server will typically be 15-30%. This is where proportionality contributes to reducing the amount of energy your server uses.

 

Power Versus Server Utilization.jpg
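To make the distinction concrete, here is a small Python sketch with hypothetical server numbers (they are not the figures behind the charts above): power efficiency shows up at the peak of the load line, while energy efficiency shows up at the 15-30% utilization where servers spend most of their time.

# Hypothetical servers used only to illustrate the power/energy distinction.
# Power is modeled as an idle floor plus a component that scales with utilization.

def power_watts(idle_w, peak_w, utilization):
    return idle_w + (peak_w - idle_w) * utilization

old_server = {"idle_w": 200, "peak_w": 350}   # less proportional: high idle floor
new_server = {"idle_w": 100, "peak_w": 300}   # lower peak and a lower idle floor

avg_utilization = 0.20        # within the typical 15-30% range
hours_per_year = 8760

for name, s in (("2008-style", old_server), ("2010-style", new_server)):
    peak_draw = power_watts(s["idle_w"], s["peak_w"], 1.0)       # power efficiency: peak watts
    avg_draw = power_watts(s["idle_w"], s["peak_w"], avg_utilization)
    kwh_per_year = avg_draw * hours_per_year / 1000              # energy efficiency: annual kWh
    print(f"{name}: peak {peak_draw:.0f} W, average {avg_draw:.0f} W, {kwh_per_year:.0f} kWh/yr")

In this toy model, the newer profile saves 50 W at peak (power efficiency) and nearly 800 kWh per year at a 20% average load (energy efficiency), which is the two-sided benefit the diagram illustrates.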

 

So, what is this worth? The answer may surprise you.

 

As I discussed in an earlier blog, the "cost of power" is the capital cost of the data center, which is typically about $10 million per megawatt. Depreciated over a nominal ten-year life, that works out to about $1/Watt/Year, i.e. $1/(Watt*Year).

 

The surprising answer is that the cost of energy, based on $0.11 per kWh, also works out to about $1/(Watt*Year).

 

Buck a Watt a Year.jpg
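The arithmetic behind the "buck a watt a year" rule of thumb is short enough to show directly, using the $10 million per megawatt capital cost, the ten-year depreciation, and the $0.11 per kWh energy price quoted above:

# Capital side: data center construction cost amortized per watt per year.
capex_per_megawatt = 10_000_000              # dollars per MW of capacity
depreciation_years = 10
capex_per_watt_year = capex_per_megawatt / 1_000_000 / depreciation_years

# Energy side: cost of running one watt continuously for a year.
price_per_kwh = 0.11                         # dollars per kWh
hours_per_year = 8760
energy_per_watt_year = (1 / 1000) * hours_per_year * price_per_kwh   # kWh used x price

print(f"capital: ${capex_per_watt_year:.2f} per watt per year")      # $1.00
print(f"energy:  ${energy_per_watt_year:.2f} per watt per year")     # about $0.96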

 

Of course, there can be wide variability in the numbers. More expensive infrastructure will drive higher power costs, and energy costs vary widely around the globe.

 

Yet this analysis points to a key opportunity to reduce operating expenses in the data center. If you recognize and focus on both energy and power efficiency, you can, in a sense, double the efficiency benefit and the savings!

 

So, there you have it. Power efficiency and energy efficiency are different. Power efficiency is about doing more within a fixed power envelope, whereas energy efficiency is about making every kWh count.

 

Is one more important than the other? That, of course, depends on the end user’s needs. Nevertheless, I highly encourage you to consider both on your next server purchase.
