
Taiwan’s Taichung Veterans General Hospital (TVGH) needed to boost the performance of its computer system to meet the demands of a growing number of patients and new applications that needed faster response times. The hospital’s old mainframes just couldn’t deliver. The solution was multiple servers powered by the Intel® Xeon® processor E7 family.

“TVGH chose to deploy the new system on servers based on the Intel Xeon processor E7 family because its multiple cores deliver the high performance we require in our applications,” explained Yang Qingwen, Information Center director for TVGH. “It also supports the use of high-capacity memory chips. The application systems used in the outpatient clinic demand these features. More importantly, servers based on the Intel Xeon processor E7 family also provide enhanced RAS capabilities.”

For all the details, download our new TVGH business success story. As always, you can find many more like this on the Business Success Stories for IT Managers page or the Business Success Stories for IT Managers channel on iTunes. And to keep up to date on the latest business success stories, follow ReferenceRoom on Twitter.



3M is a science-based enterprise. Its high-performance computing (HPC) IT team works closely with the company’s scientists and engineers to provide optimal solutions to their computing needs. In April 2011, the HPC IT team installed an innovative Intel® Xeon® processor-based hybrid cluster that combines shared memory and cluster computing and a virtualization environment.

“We are a little different in how we support and provide services to our R&D organization,” explains Peter Bye, systems architect for 3M. “We don’t just give them off-the-shelf tools and say, ‘Go at it.’ We listen to them and try to come up with innovative ways to help them do their jobs better and faster. More often than not, we are able to help them solve what would have been an intractable problem maybe just a few months ago.”

For all the details, download our new 3M business success story. As always, you can find many more like this on the Business Success Stories for IT Managers page or the Business Success Stories for IT Managers channel on iTunes. And to keep up to date on the latest business success stories, follow ReferenceRoom on Twitter.

Please note: This blog originally appeared in the "Cloud" section as a sponsored blog.



With the rise of social media sites such as Facebook and YouTube, companies are getting hit with a blizzard of unstructured data. This data onslaught comes on top of rapid growth in the amount of customer data housed in enterprise systems. Companies now must find cost-effective ways to integrate and analyze the collective pool of big data to generate granular business insights.


A recent Intel white paper may have said it best: "We are in the midst of a revolution in the way companies access, manage, and use data. Simply keeping up with the explosive growth in data volumes will be an ongoing challenge, yet the true winners will be those that master the flow of information and the use of analytics throughout their value chain." ¹



To stay on top of it all, many companies are deploying Hadoop, an open-source parallel processing framework, to process and analyze social media data on distributed server clusters. They then integrate Hadoop with systems housing other customer data to gain rich insights. To support these efforts, solution providers are building Hadoop interfaces into database products to help with the performance of big data delivery, management, and usage.


What we have here is the intersection of the traditional methods of delivering, managing, and viewing information and a new approach that allows data of all types and formats to be quickly sorted for transactional and operational opportunity. This new era of data exchange requires next-generation compute, storage, and I/O technologies, like those found in the Intel® Xeon® processor E5 family.



This next-generation processor family is a great platform for running analytics on big data in private cloud solutions and enterprise data centers, as well as in cloud deployments. The raw compute power of the processors enables efficient, intelligent, and secure archiving, analysis, discovery, retrieval, and processing of critical data. And along with fast processing, the Intel Xeon processor E5 family accelerates throughput with PCIe 3.0 technology and Intel® Integrated I/O, which is designed to dramatically reduce I/O latency and eliminate data bottlenecks across the data center infrastructure.


On the storage side, the Intel Xeon processor E5 family incorporates accelerated RAID and Intel® AES New Instructions (Intel® AES-NI) to speed data encryption. This latter technology is particularly important in private cloud environments, where pervasive encryption is used to protect data from hackers and other threats.



The new Intel architecture also incorporates innovative technologies designed to reduce power and cooling costs and enable dense configurations with thousands of processors. This is a perfect chip for blade environments.


If your organization is moving ahead to the new architecture from the Intel® Xeon® processor 5600 series, Intel has a good ROI story to tell, on top of the performance story. And if your organization uses systems based on earlier-generation processors, the new Intel architecture can deliver spectacular ROI while moving you further along in your journey to the cloud.


While it's a great chip for public cloud environments, the Intel Xeon processor E5 family is also ideal for private clouds and enterprise data centers looking to accelerate the processing of large datasets while driving down the cost of computing. It's up to the challenges of big data.

Costs are being closely watched in every division of an enterprise, but particularly in the IT department: controlling IT spending is an objective in nearly every organization. There are also corporations where what has traditionally been described as IT is now the basis of the business, so limiting IT costs directly reduces the cost of operations. (Think of Amazon, Google, PayPal, etc.) This transition point in IT is being driven by the appearance of clouds, whether dark or with a silver lining. The choices for IT administrators tend to fall into three categories: a) use an internal, private cloud; b) use an external, public cloud; or c) use a hybrid cloud consisting of both private and public clouds.


It is readily apparent that running applications in the cloud reduces costs by providing IT customers with self-service provisioning. The paradigm shift in retailing that significantly reduced staffing and customer service representatives is now reaching IT. Increasingly, the individual, whether a staff member in a corporation or an entrepreneur starting a new Web-based business, can self-select the computing resources required for an application. Businesses and governments are moving into the cloud.


Furthermore, the cloud has turned the economics of backup and recovery on its head. In the public cloud, the cloud vendors maintain backups for virtual servers. It would be just as easy for administrators of private clouds to maintain standby backups of the applications at significantly lower cost than the traditional duplicate architecture used in many DR schemes.


(I’ll admit that the vision I embrace here doesn’t exist without glitches but the hurdles are technical and will be overcome. The ODCA is working to develop solutions to address these issues.)


The cloud is great for new and dynamic applications. Developers can work in test environments in the cloud, shortening development cycles dramatically. Because a virtual server in the cloud is dynamic in and of itself, testing performance parameters of applications, both new apps and upgrades, can result in previously unavailable accuracy in provisioning the eventual production server. Costs and time associated with application development can be significantly reduced.


Hybrid clouds, discussed here by Billy Cox, add to the mix by offering an alternative to building the entire infrastructure in-house. Many components of an application can be hosted outside the walls when the application does not use proprietary data.


The old consultant adage is "You want it right? You want it soon? You want it cheap? Choose only two." The cloud is allowing some managers to choose all three.


This is great news for those developing and deploying new applications. Legacy applications running on Linux* or Windows* servers can likely take advantage of cloud economics too. Migration to the cloud from Windows or Linux can be done carefully and efficiently using automated P2V tools from various vendors.


But what about all those applications running on AIX* or Solaris* platforms? How can they take advantage of the economies of the cloud? This may require a complex migration, as most clouds are built on servers from various manufacturers running Intel Xeon processors. With this we’re back to the old RISC-to-Xeon-processor migration issues (i.e., big-endian Power* and SPARC* processors versus little-endian Xeon processors).
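The endianness issue is easy to visualize with a short snippet (Python used here purely for illustration):

```python
import struct

value = 0x01020304

# The same 32-bit integer, laid out as bytes in each byte order.
big = struct.pack(">I", value)     # big-endian, as on Power/SPARC
little = struct.pack("<I", value)  # little-endian, as on Xeon

print(big.hex())     # 01020304
print(little.hex())  # 04030201
```

Any binary data files or network formats written with big-endian assumptions must be byte-swapped (or read with explicit byte-order handling) when the application moves to a little-endian platform.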


The first step in planning to move an application running on expensive legacy servers is to look at my migration spectrum. What sort of application is being targeted to run in the cloud? Where does it fit on the spectrum? Determining this will give you a 10,000-foot view of the effort that will be required.


Most applications running on expensive legacy servers are running in partitions. I suppose some of you were thinking: how does an application on a Power 5 server fit into a server carved out of the cloud? Isn’t that something like trying to fit the size 23 feet of Shaquille O’Neal into size 10.5 sneakers? But the applications are usually smaller than the entire server, because they are running in a partition, or the utilization rate is near the data center norm of 15%. The best news is that the performance difference between modern Intel Xeon processors and Power 5 servers is such that an application running in the legacy server can easily fit in a server carved out of the cloud.
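As a back-of-the-envelope illustration (all numbers hypothetical, including the per-core speedup factor, which is an assumption rather than a benchmark result):

```python
# Hypothetical legacy partition: 8 Power 5 cores at 15% average utilization.
legacy_cores = 8
utilization = 0.15
percore_speedup = 4.0  # assumed modern-Xeon-vs-Power-5 per-core factor

# Work actually being done, expressed in legacy-core equivalents.
busy_legacy_cores = legacy_cores * utilization

# Modern cores needed to carry the same load.
modern_cores_needed = busy_legacy_cores / percore_speedup

print(round(modern_cores_needed, 2))  # well under a single modern core
```

With numbers like these, even a partition on a sizable legacy box maps comfortably onto a small virtual server carved out of the cloud.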


Please share some of your experiences in migrating legacy UNIX* applications into the cloud. Did you find it difficult?



Think of today’s data center as a factory. IT is at the heart of productivity and revenue generation. That factory has to scale to deliver better performance for applications and business services to boost productivity, while at the same time constantly gain efficiencies to get more ROI from investments.


However, growing demands on IT, technological advancements, and increasing complexity make it progressively more challenging for IT to fuel productivity and generate greater revenue from existing assets. The IT department, the factory, is under heavy demand from every line of business to drive results in multiple ways.


That’s where Dell comes in.


Introducing innovations that deliver results faster

Our strategy to keep your IT factory humming along is grounded in listening to you. This direct model of customer interaction and feedback has been our foundation for more than 27 years. Frequent and deep interactions give us a unique perspective on bringing innovations to market to meet your needs.


In designing the next generation of Dell PowerEdge servers, we participated in more than 7,700 customer conversations worldwide to get input on what you need to power your business. We’ve focused everything we do, everything we invest in, everything we build to deliver innovations that provide the uncompromising performance and capabilities you need when and where it matters. The result is a portfolio of systems that help answer the biggest challenges IT faces. We think of the PowerEdge 12th generation servers as systems designed by you, and engineered by Dell.


Maximize efficiency

Data centers, under constant fiscal pressure, need to make more efficient use of IT resources, streamline and automate operational tasks, and leverage their existing investments. Dell 12th Generation PowerEdge servers help IT organizations improve efficiency, boost productivity, and get the most out of every dollar.


Dell and Intel have made great strides to improve energy efficiency. Our servers are equipped with the Dell OpenManage systems management portfolio, and we are now introducing OpenManage Power Center, which improves data center efficiency by letting you control power at the server, rack, row, or room level. It allows you to collect and aggregate power use by rack, row, or room, and implement power-reduction policies in response. With Intel Node Manager technology, it also provides fast power capping to proactively prevent outages. Power efficiency and standards-based innovation are becoming increasingly important to our enterprise customers. Intel and Dell have been working together to develop and deliver standards-based power management, and Dell is the first enterprise server vendor to support Intel Node Manager technology broadly across its server portfolio.


Our focus is to be at the forefront of driving standards in power management. This is part of our open-standards approach and ensures management tool interaction is effortless and non-proprietary. Dell does offer additional features that enhance the standard Node Manager experience, but through the same interface and communication protocols.


Turn data into insights for faster results

Better performance is always important, and on that front Dell and Intel have you covered. In addition to the massive performance gains provided by the new Intel® Xeon® Processor E5-2600 product family, Dell PowerEdge servers power business applications more effectively with several new innovations that improve system performance, increase throughput capacity, and speed access to data for faster results. Data is useless unless you can convert it into insights. We are extending Fluid Data Architecture storage technology to the server to give you the information you need in a flash. New innovations in PCIe solid-state disks, data accelerators, and scalable internal storage will give you significant improvements in your ERP, business intelligence, and database performance. We’re talking about increases of up to 18x more Microsoft SQL Server transactions per second and 28x faster query response time running Oracle Database 11g.


Ensure business continuity

Dell is committed to providing secure, continual access to the IT services that power your business. In addition to virus protection with Intel Trusted Execution Technology and pervasive encryption with Intel AES New Instructions, Dell provides reliability, availability, and serviceability features like hot-swappable fans, disks, and PSUs. Dell PowerEdge servers keep your data center running with rock-solid reliability.


Security in the virtual age is a complex, multi-faceted challenge. PowerEdge servers protect your data from accidental loss or malicious intrusions, not only with innovative security technology, but also with services from experienced, trusted advisors in the security realm. Dell now provides a wide spectrum of security and disaster recovery services, including assessment, consulting, design, and delivery.


Our highest performing, most manageable, most innovative servers ever

With Dell PowerEdge 12th generation servers as the foundation of your intelligent infrastructure, your IT factory is ready to increase its output and bring a return on your investment. New innovations will help you maximize efficiency by streamlining and automating operations, achieve more by turning data into insights for faster results, and ensure business continuity with security, availability, and reliability features.


I invite you to take a closer look at the new servers we are introducing. We are excited to launch our servers with the Intel Xeon E5 processors today and will introduce additional products in our PowerEdge 12th generation server portfolio with Intel in the coming months.

Companies everywhere are already learning how the new Intel® Xeon® processor E5 family can help make their IT more efficient, productive, and secure. Download these real-life business success stories to help you make your own IT decisions:

You can find more like these here. As always, you can find many more business success stories on the Business Success Stories for IT Managers page or the Business Success Stories for IT Managers channel on iTunes. And to keep up to date on the latest business success stories, follow ReferenceRoom on Twitter.

Two new business success stories explain how industry-leading companies have put the power of the Intel® Xeon® processor 5600 series to work in their enterprises, improving performance and reducing lead times:

As always, you can find many more business success stories like these on the Business Success Stories for IT Managers page or the Business Success Stories for IT Managers channel on iTunes. And to keep up to date on the latest business success stories, follow ReferenceRoom on Twitter.

One interesting aspect of working with real world implementations of database technology is that they often raise interesting questions about how that technology is used. One such recent question concerned the real performance benefits of the Intel Advanced Encryption Standard New Instructions (Intel AES-NI) set on Intel Xeon processors for improving Oracle database encryption performance with Oracle Transparent Data Encryption (TDE).


Oracle TDE provides security whereby data is automatically encrypted and decrypted as it is written to and read from the physical media. The Oracle TDE FAQ clarifies what happens to the data once it has been read and is held in memory. With TDE column encryption, data always remains encrypted in the Oracle SGA, so the benefits of encryption acceleration are clear. With TDE tablespace encryption, however, data is decrypted as it is read into the SGA, so once cached it is in clear text. This makes the question more precise:


If I am using TDE tablespace encryption for query-based data and have a large SGA to cache most of the data in decrypted form, what gains will AES-NI really bring me?


The best way to answer this question is to put it to the test, so I tested a system equipped with the Intel Xeon processor E5-2680 running Red Hat Enterprise Linux* Server release 5.6. These processors include AES-NI, and the "aes" entry in the flags section of /proc/cpuinfo confirms it. I installed Oracle and applied patch 10080579 to enable TDE to use AES-NI by default.
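The flag check mentioned above is a one-liner (a sketch; any of the usual grep variants works):

```shell
# Report whether the CPU advertises the AES-NI instruction set.
if grep -qw aes /proc/cpuinfo; then
    echo "AES-NI available"
else
    echo "AES-NI not available"
fi
```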


To set up TDE I then created an encryption wallet directory in my admin directory as follows:
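The commands themselves did not survive in this copy of the post; a typical sequence looks like the following (the paths and SID are illustrative assumptions, not the original values):

```shell
# Hypothetical locations; substitute your own ORACLE_BASE and SID.
ORACLE_BASE="${ORACLE_BASE:-$HOME/app/oracle}"
ORACLE_SID="${ORACLE_SID:-orcl}"
WALLET_DIR="$ORACLE_BASE/admin/$ORACLE_SID/wallet"

mkdir -p "$WALLET_DIR"
chmod 700 "$WALLET_DIR"   # keep the wallet directory private to the oracle user
echo "wallet directory: $WALLET_DIR"
```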





I ensured that the permissions were set correctly to keep the directory secure. I then created a sqlnet.ora file in my network admin directory,





and added the following line to this file (keeping the entry all on the same line):
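The entry itself was lost in this copy of the post; the standard form of the sqlnet.ora wallet entry is shown below (the directory path is hypothetical and should match wherever you created the wallet):

```
ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/u01/app/oracle/admin/orcl/wallet)))
```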




Then I created the wallet (you will see the wallet file appear in your wallet directory) and opened it as follows:


[oracle@sandep1 ~]$ sqlplus / as sysdba
SQL> alter system set encryption key authenticated by "oraclepassword";
System altered.
WALLET created


Next, I created an unencrypted tablespace as normal and an encrypted tablespace,


SQL> create bigfile tablespace TPCH_ENCRYPT datafile '+DATA' size 50g encryption using 'AES256' default storage(encrypt);
Tablespace created.


and checked the tablespace was indeed encrypted.


SQL> select tablespace_name, encrypted from dba_tablespaces;
(output listing each tablespace name and its ENC flag, with TPCH_ENCRYPT showing YES)
7 rows selected.


I then used Hammerora to create identical Scale Factor 10, 10GB schemas based on the TPC-H specification in both clear-text and encrypted forms, ensuring that with my 40GB buffer cache in the Oracle SGA there would be plenty of memory to cache the data. I also used Hammerora to run and capture an example query to use against this data and keep the predicates the same so the query run would be identical each time. I used autotrace and timing to test my query performance. First I took a look at the clear text schema with the following query (which is TPC-H Query 1).


SQL> connect tpch/tpch
SQL> set autotrace on;
SQL> set timing on;
SQL> select l_returnflag, l_linestatus,
       sum(l_quantity) as sum_qty,
       sum(l_extendedprice) as sum_base_price,
       sum(l_extendedprice * (1 - l_discount)) as sum_disc_price,
       sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge,
       avg(l_quantity) as avg_qty,
       avg(l_extendedprice) as avg_price,
       avg(l_discount) as avg_disc,
       count(*) as count_order
     from lineitem
     where l_shipdate <= date '1998-12-01' - interval '119' day (3)
     group by l_returnflag, l_linestatus
     order by l_returnflag, l_linestatus;


The query returned 4 rows and the timing value showed it took almost 29 seconds.


Elapsed: 00:00:28.80


Our Execution plan shows "TABLE ACCESS FULL" on "LINEITEM" which is a full table scan on our biggest table. Our statistics show that these were physical reads which means that the data was not cached in memory but instead read from disk.

1038269 physical reads


I then ran the same query again with the result showing the following timing.




Elapsed: 00:00:28.74


And the same execution plan value for physical reads.


1038269 physical reads


The first time, Oracle decided not to cache the LINEITEM table in the buffer cache in the SGA. When I ran the query a second time, it fetched the data from disk again, and it continued to do this every time the query was run, with consistent timing each time.


What about the same query on encrypted data? Oracle TDE is controlled by hidden parameters, so we need a query to see the parameters:


SQL> select a.ksppinm "Parameter", b.ksppstvl "Session Value", c.ksppstvl "Instance Value"
     from x$ksppi a, x$ksppcv b, x$ksppsv c
     where a.indx = b.indx and a.indx = c.indx and ksppinm like '%encryption%';


To use AES-NI, the parameter _use_platform_encryption_lib needs to be set to TRUE. To use AES-NI for both encryption and decryption, _use_hybrid_encryption_mode needs to be set to FALSE. We can test the impact of hardware encryption acceleration on performance by turning these parameters on and off, for example:


SQL> alter system set "_use_platform_encryption_lib"=FALSE scope=both;
System altered.


I restarted the database and opened the wallet before running queries against the encrypted data without AES-NI for hardware encryption acceleration. In this case Oracle is using software only to do the decryption and is not using the AES-NI at all.


SQL> alter system set wallet open identified by "oraclepassword";
System altered.



The query again returned 4 rows and the timing value is 136 seconds, including the time to do the software only decryption.


Elapsed: 00:02:16.41


Our Execution plan shows "TABLE ACCESS FULL" on "LINEITEM" and again our statistics show that the full table scan was based on physical reads. Each time the same statement was re-executed, the data was read from disk. This means that each time the data was decrypted in software only, the query took approximately 4.7x longer than the same query on clear text.


Next, I restarted the database but this time enabled hardware accelerated encryption to use AES-NI.


SQL> alter system set "_use_platform_encryption_lib"=TRUE scope=both;
System altered.


I re-ran the same query with the following timing value:


Elapsed: 00:00:45.30


As expected, our Execution plan again shows "TABLE ACCESS FULL" on "LINEITEM" with the full table scan based on physical reads. The same query was run each time (the data was read from disk and decrypted). In this case the difference was making use of AES-NI for acceleration.



Timing the same query consistently gave us the following results:


  • Query on Clear Text = 00:00:28.80
  • Software Only Encryption = 00:02:16.41
  • Accelerated with AES-NI = 00:00:45.30
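These ratios follow directly from the elapsed times (a quick arithmetic check, shown here in Python):

```python
# Elapsed times from the three runs, in seconds.
clear_text = 28.80             # 00:00:28.80, query on unencrypted tablespace
software_only = 2 * 60 + 16.41 # 00:02:16.41, TDE decryption in software only
aes_ni = 45.30                 # 00:00:45.30, TDE decryption with AES-NI

# Speedup of AES-NI-accelerated decryption over software-only decryption.
print(round(software_only / aes_ni, 2))   # 3.01

# Slowdown of AES-NI-accelerated decryption versus no encryption at all.
print(round(aes_ni / clear_text, 2))      # 1.57
```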



This means that with AES-NI the query completed 3x faster than with software-only decryption, and ran about 1.57x slower than with no encryption at all. Clearly, if you are using Oracle TDE for encryption, you are going to see significant performance gains from using AES-NI for acceleration.


If you are curious why Oracle decided not to cache the data at all and instead chose to read from disk and decrypt exactly the same query each time, we'll look into that in Part II of this post.



Based in the Czech Republic, WEDOS is a Web hosting and virtual private server (VPS) provider and online consultancy. It planned to build a new data center that needed to support a wide range of environments, applications, and databases. Demand for capacity and compute power varies significantly from customer to customer, so WEDOS needed a flexible and easily scalable infrastructure. At the same time, to keep service prices competitive, it had to ensure its own capital and operating expenses were under control. After assessing the infrastructure solutions available, WEDOS chose a Fujitsu* server fleet powered by Intel® Xeon® processors L5640.

Petr Štastný, CEO at WEDOS, explains: “It is important to us that we offer our customers the very best IT services. This means that both energy efficiency and performance were essential when building our new data center. By optimizing our power usage from the outset, we wanted to ensure that we not only demonstrated strong green credentials, but also that we could keep costs low and response time fast.”

For all the details, download our new WEDOS business success story. As always, you can find many more like this on the Business Success Stories for IT Managers page or the Business Success Stories for IT Managers channel on iTunes. And to keep up to date on the latest business success stories, follow ReferenceRoom on Twitter.


*Other names and brands may be claimed as the property of others.

Please note: This blog originally appeared as a sponsored blog post in the Cloud area.

I often get asked for my thoughts on cloud computing and other data center trends. While I'll stop short of calling anything a prediction, I can tell you what is top of mind for me and many of my colleagues this year.


Unrelenting data growth will continue.


There's no stopping data growth. IDC predicts that by 2015, the amount of information managed by enterprise data centers will grow by a factor of 50, and the number of files the data center will have to deal with will grow by a factor of 75. Mobile data traffic alone will increase 26 times between 2010 and 2015, reaching 6.3 exabytes per month by 2015, when nearly 70 percent of Internet users will use more than five network-connected devices.


As enterprises face an avalanche of data triggered by social media, application growth, and a proliferation of mobile devices, they need cost-effective ways to turn bits and bytes into meaningful information. Moreover, with 15 billion connected devices by 2015, the amount of data for manufacturing, retail, supply chain, smart grid, and many other applications will require new approaches to both batch and real-time analytics. Driven by this need, many organizations are developing distributed analytics platforms based on frameworks such as Hadoop.


An open-source framework for the distributed processing of large datasets across server clusters, Hadoop enables fast performance for complex analytics through massively parallel processing. It also allows database capacity and performance to be scaled incrementally through the addition of more server and storage nodes. This approach is not without challenges as the usability and scalability of distributed analytics frameworks currently inhibit broad adoption.
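The map-then-reduce idea at the heart of this processing model can be sketched in a few lines of plain Python (a toy stand-in for a real cluster, not actual Hadoop code):

```python
from collections import Counter
from functools import reduce

# Each "node" holds a shard of the records; hypothetical event data.
shards = [
    ["error", "login", "error"],
    ["login", "purchase"],
    ["error", "purchase", "purchase"],
]

def map_shard(records):
    """Map phase: each node independently produces partial counts."""
    return Counter(records)

# Reduce phase: merge the partial counts into one result.
partials = [map_shard(s) for s in shards]
totals = reduce(lambda a, b: a + b, partials)

print(totals["error"], totals["purchase"])  # 3 3
```

Because the map step touches each shard independently, adding nodes adds capacity, which is exactly the incremental scaling property described above.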


Best practices will drive efficiency gains.


Cloud computing is one of the keys to dealing with massive amounts of data in a cost-effective manner while creating a more agile IT infrastructure.


That's the case at Intel IT, where our enterprise private cloud is up and running and has realized $9 million in net savings to date. More than 50 percent of our servers are now virtualized. We've reduced provisioning time from 90 days to 3 hours, and we see the day coming where provisioning will take place in minutes.


Efficiency isn't important only in the software and compute layers; it's also a focus for best practices at the infrastructure and facility levels. One such best practice is high-temperature ambient (HTA) data center operation. HTA raises the operating temperature within a data center to decrease operational and capital costs for cooling, allowing the energy savings to be used to power servers.


Unfortunately, however, it's not as simple as turning off the air-conditioning. The system design, rack and facility controls, and even technology component choices are critical and part of the reason that we've developed a blueprint of best practices that we share openly.


Client-aware computing will become essential.


In response to the proliferation and wide array of devices, client-aware computing will be a key focus in cloud data centers. In a client-aware environment, cloud-based applications both recognize and take advantage of the capabilities of the client device.


Rather than providing services that are dumbed down to a lowest common denominator (the capabilities of the most basic client devices), the cloud service adapts to deliver optimal service based on the device at hand, making full use of the capabilities of both the client and the server. Understanding the compute, graphics, battery life, security, and other attributes of the device can greatly improve the user experience while efficiently using data center and network bandwidth.


Technology refresh will reinvigorate data centers.


Organizations will refresh data center technology to pack more computing power into each square foot, drive down power and cooling costs, and increase the security of data and applications.


With those goals in mind, I'm excited by the technology we're delivering in our new Intel® Xeon® processor E5 platforms. We're introducing new technology for the performance and scale of big data, power management for data center efficiency, and Intel® Trusted Execution Technology (TXT) to address some of the security requirements of cloud data centers.


I believe 2012 is going to be a year of tremendous growth and innovation enabled by cloud computing. At Intel, we are thrilled to be a part of it.


Follow @IntelITS for more news.



Thesys Technologies delivers a customizable, end-to-end trading solution that helps banks, brokerage houses, and other investment groups turn trading strategies into profits. Thesys runs its high-frequency trading (HFT) platform exclusively on Intel® Xeon® processors and says each round of advances from Intel helps Thesys enable its customers to make more money at less cost. Upgrading to the six-core Intel Xeon processor X5690 gave the Thesys HFT platform a 50 percent performance increase over its Intel Xeon processor 5500 series predecessors, and the company is looking to the Intel Xeon processor E5 family for its next performance leap.

“We push our infrastructure very hard,” explained Michael Beller, managing partner for Thesys Technologies. “When we survey the marketplace, we see Intel out in front of what’s available and providing a processor that can support what we’re doing. We see their continual advancements keeping them on the leading edge, and thereby helping keep us on the leading edge.”

For all the details, download our new Thesys Technologies business success story. As always, you can find many more like this on the Business Success Stories for IT Managers page or the Business Success Stories for IT Managers channel on iTunes. And to keep up to date on the latest business success stories, follow ReferenceRoom on Twitter.


*Other names and brands may be claimed as the property of others.

One of the Intel® Cloud Builder Reference Architectures outlines data center efficiency using Intel® Node Manager on Dell PowerEdge* C servers with JouleX Energy Manager* (JEM) and a VMware virtualization technology plugin.  In this use case, the primary goal is to optimize server power consumption with respect to the power load placed on each server by the virtual machines (VMs) it hosts.  When you balance the power across your servers, you can optimize the end-user experience as well as mitigate rising costs in the data center by monitoring (and potentially reducing) energy consumption.


In this scenario, multiple servers host multiple VMs, much like a modern "virtualized" data center.  Each virtual machine's CPU utilization information is collected via VMware vSphere*, and the power information is collected from each server node by the JEM console, which uses the Intel® Data Center Manager SDK over IPMI and pairs that power information with the server asset in the JEM console.


Here's the video showcasing the setup of the reference architecture:


Power policies, which are set by the data center administrator, allow the VMs to be migrated (using VMware vMotion*) between server systems to optimize and balance the power used across the systems as a group.  This helps mitigate power spikes, reduces overload on servers with high VM loads, and distributes work to server systems with more capacity to handle those loads.


By using the power data in conjunction with the VMware vSphere information on CPU utilization of the virtual machines, the administrator can migrate VMs based on power recommendations set by the policies in the plugin, optimizing the workloads across the servers and the data center.
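To make the policy idea concrete, here is a minimal sketch of how such a power-aware rebalancing decision could be computed. The host names, VM names, threshold, and function are all hypothetical illustrations, not the actual JouleX or vSphere plugin API:

```python
# Hypothetical sketch of a power-aware VM rebalancing decision.
# Host/VM names and the power limit are illustrative only; the real
# JEM plugin and vMotion expose their own interfaces.

def pick_migration(hosts, power_limit_watts):
    """Return (vm, source, target) if any host exceeds the power limit,
    else None. `hosts` maps host -> {"power": watts, "vms": {vm: cpu%}}."""
    over = [h for h, d in hosts.items() if d["power"] > power_limit_watts]
    if not over:
        return None
    # Pick the hottest host and its busiest VM...
    source = max(over, key=lambda h: hosts[h]["power"])
    vm = max(hosts[source]["vms"], key=hosts[source]["vms"].get)
    # ...and move it to the host currently drawing the least power.
    target = min(hosts, key=lambda h: hosts[h]["power"])
    return (vm, source, target) if target != source else None

hosts = {
    "esx1": {"power": 410, "vms": {"web1": 70, "db1": 55}},
    "esx2": {"power": 240, "vms": {"web2": 20}},
}
print(pick_migration(hosts, power_limit_watts=350))  # -> ('web1', 'esx1', 'esx2')
```

A production policy engine would of course also weigh memory pressure, affinity rules, and headroom on the target, but the core "most loaded to least loaded" shape is the same.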


You can download the detailed Intel Cloud Builders Reference Architecture to build your own configuration and see the power efficiency model at work in your own lab or data center.

Last week saw the official release of the Intel® Xeon® processor E5 family. Adoption among OEMs is already strong, with over 400 design wins registered.  This is twice the number for the Intel® Xeon® processor 5500 series (Nehalem) when it was released in 2009. This interest signals rapid take-up for what is effectively a ground-breaking processor that’s packed full of features and destined to lay the foundation for next-generation data centres. Below I explore some of the innovations which make this a true landmark for data centres and the cloud, while my companion blog examines the broader industry story.


The Xeon E5 is aimed squarely at midrange two-socket servers used in cloud and high-performance computing (HPC) environments.  Automotive giant and long-standing Intel customer BMW tested it over 18 months ago. The company was particularly interested in support for the Intel® Advanced Vector Extensions (Intel AVX) SIMD instruction set and its potential for significantly enhancing floating point-intensive calculations in multimedia, scientific and financial applications.
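The idea behind AVX is width: a 256-bit register packs eight 32-bit single-precision floats, and one packed instruction operates on all eight "lanes" at once. The sketch below uses plain Python purely to illustrate that data layout; it is a conceptual picture, not real AVX code (which lives in CPU registers and compiler intrinsics):

```python
# Conceptual illustration of AVX's 256-bit SIMD width: eight packed
# single-precision floats processed by one instruction. Pure Python is
# used only to show the lane structure, not to gain any speed.

LANES = 8  # 256 bits / 32 bits per single-precision float

def vmulps(a, b):
    """Packed multiply across all eight lanes, like a single AVX VMULPS."""
    assert len(a) == len(b) == LANES
    return [x * y for x, y in zip(a, b)]

a = [float(i) for i in range(LANES)]
b = [2.0] * LANES
print(vmulps(a, b))  # eight products from one "instruction"
```

On real hardware the speedup comes from the CPU executing those eight multiplies in a single cycle-level operation, which is why floating point-heavy simulation codes like BMW's benefit so directly.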


Following extensive tests, BMW was so impressed that it’s now planning to use the platform to build three new high-performance computing (HPC) clusters to add to its existing supercomputing muscle.  And it’s not alone. PRACE, the research infrastructure responsible for developing HPC capacity across Europe, has developed a two-petaflop cluster – the CURIE supercomputer – using the Xeon E5.


CURIE has 92,000 processor cores, 80,000 of which are Xeon E5 cores. This has effectively doubled European HPC capacity and, according to PRACE, will help Europe lead the world “in the quest for suitable solutions to societal challenges such as population ageing, climate change and energy efficiency.”


If CURIE were ranked in the world’s Top 500 supercomputers, it would be in the top five. That’s an impressive stat for the Xeon E5. However, in my view it’s in the area of cloud computing that the Xeon E5 will make its mark. Cloud computing is rapidly gaining currency across the entire computing sphere and is set to become increasingly widespread.  But this growth requires next-generation data centres with higher levels of energy efficiency, lower power consumption and greater performance than those of today.


This is where the Xeon E5 will really earn its stripes. You may already be aware of it, but benchmark testing revealed that, compared to the Intel Xeon processor 5600 series, the Xeon E5 can offer up to an 80 per cent performance improvement. That is quite a performance leap.


Looking a little closer at some of the new features of the Xeon E5, it’s worth noting that the PCI Express controller is integrated into the processor, with full support for the PCI Express 3.0 specification.  As a result, the Xeon E5 can potentially double the interconnect bandwidth over the PCI Express 2.0 specification. In practical terms this means lower power consumption and higher-density server implementations.
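That "potentially double" figure falls out of simple arithmetic: PCI Express 3.0 raises the signalling rate from 5 GT/s to 8 GT/s and swaps the old 8b/10b encoding (20 per cent overhead) for the far leaner 128b/130b scheme. A quick back-of-the-envelope check:

```python
# Effective per-lane bandwidth = transfer rate x encoding efficiency.
pcie2_lane = 5.0e9 * (8 / 10)      # 5 GT/s with 8b/10b   -> 4.0 Gbit/s per lane
pcie3_lane = 8.0e9 * (128 / 130)   # 8 GT/s with 128b/130b -> ~7.88 Gbit/s per lane

print(pcie3_lane / pcie2_lane)     # ~1.97, i.e. close to double
```

So nearly all of the headline doubling is real payload bandwidth, not just a faster clock being eaten by encoding overhead.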


New fabric controllers also take advantage of the PCI Express 3.0 specification to allow more efficient scaling of performance and data transfer, again an important requirement for next-generation data centres.


New Intel Data Direct I/O technology also increases I/O performance by up to 2.3 times that of the Intel Xeon processor 5600 series, reduces latency, and allows system memory to remain in a low-power state whilst data transfers take place directly to CPU cache memory. Intel AVX dramatically reduces compute time on large, complex data sets, and Intel Turbo Boost Technology 2.0 provides performance when it’s needed.


Taken together with embedded security features such as Intel Trusted Execution Technology and Intel AES New Instructions, the Xeon E5 platform is one of the most capable chips Intel has ever delivered and is clearly set to become the foundation for next-generation data centres and cloud computing.

Please note: This blog originally appeared as a sponsored post in the Cloud area of




Lately I’ve been getting a lot of questions on the recently announced Intel® Xeon® Processor E5 Family and how it will meet today’s evolving computing needs. To answer that question, I think it helps to take a step back and look at the big picture.


In the IT world, multiple trends are driving a whole new set of requirements for the data center. For starters, the Internet continues to grow. It has more users, more connected devices, and more types of interactions—whether it’s person-to-person, person-to-system, or system-to-system. In a parallel trend, global competition is increasing. Around the world, people can participate in commerce and interact in ways that were not possible in the past. And all the while, the pace of technology innovation is accelerating.


These trends are reframing the challenges for IT professionals. To keep the business competitive, the IT organization must transform itself to support cloud computing solutions, a wide variety of new user devices, and ever-larger amounts of data. This transformation requires an unprecedented level of scale to support not just hundreds or thousands of employees but millions of global customers.


The new Intel Xeon Processor E5 Family is built for this new world where scale is everything. It’s designed to help organizations accommodate today’s increasing numbers of customers, richer user experiences, and unpredictable demand.


Here are some of the ways the new Intel processors are built to scale:


  • Leadership performance. Performance is at the heart of the Intel heritage, and that remains true today. With the new generation of processors, we continue to increase server performance. For example, we have new instructions built into the processor to handle mathematical workloads that are important for things like simulation and analysis.


  • Breakthrough I/O innovation. It’s not just processor performance that we have improved. We are innovating throughout the system, including the integration of input/output (I/O) technologies onto the processor. Intel® Integrated I/O enables data to move in and out of the processor faster, which is one of the keys to scale. It helps ensure balanced system performance and balanced system scalability.


  • Trusted security. The Intel Xeon Processor E5 Family continues the Intel focus on security enhancements. To address today’s security challenges, it includes features that accelerate data encryption and enable the use of trusted computing pools—so you can be sure that the systems running your applications are exactly who they say they are.


  • Exceptional efficiency. As computing environments grow, power consumption becomes a bottleneck that can limit expansion. We’re helping organizations address this challenge with the most energy-efficient processor we have ever shipped. The new Intel Xeon processor E5 family offers better performance at the same power level. In addition, we’ve put more power management control into the hands of IT administrators with capabilities that enable them to set power policies.


The importance of the Intel Xeon Processor E5 Family is reflected in the response from the industry. It’s been truly remarkable. From cloud software providers to global hardware manufacturers, the industry is investing heavily in products and solutions derived from the Intel Xeon Processor E5 Family. That’s evident in both the number of systems and the range of solutions that incorporate the new processors.


The Intel technology, of course, is just a starting point. Where it really turns into a solution is when our hardware and software partners take our building blocks and create something that delivers business value.


To learn more, follow @IntelITS

Download Now


orcc.jpgOnline bill payment solutions from Online Resources (ORCC) power financial interactions among millions of consumers, financial institutions, and billing companies every day. The company needed to build a new database infrastructure to improve availability of its mission-critical solutions through clustering and reduce the hardware footprint without affecting performance. ORCC selected NEC* servers based on the Intel® Xeon® processor 7500 series. The new environment uses error correction capabilities built into the Intel Xeon processors, along with clustering software from Symantec, to help prevent outages. With a robust, dense processing architecture, the new environment is also helping ORCC consolidate resources and cut costs while boosting database performance.

“By using NEC servers based on the Intel Xeon processor 7500 series, we can run more, larger databases in less space,” explained Peter Cuenco, director of systems operations for ORCC. “As a result, we have reduced power, hardware acquisition, maintenance, and other costs.”

For all the details, download our new ORCC business success story. As always, you can find many more like this on the Business Success Stories for IT Managers page or the Business Success Stories for IT Managers channel on iTunes. And to keep up to date on the latest business success stories, follow ReferenceRoom on Twitter.

*Other names and brands may be claimed as the property of others.

SGI is the trusted leader in technical computing, and the Intel® Xeon® processor E5-2600 family is the perfect engine under the hood. SGI has been shipping servers based on the Intel® Xeon® processor E5-2600 family to our customers in the public cloud since September 2011.  Today we can ship more than 26 teraflops in a single rack with our SGI® ICE™ X platform, and we’ll be doubling that capability in the coming months.


The SGI® Modular InfiniteStorage server, designed from the ground up using Intel® Xeon® E5-2600 processors, is one of the densest storage platforms on the market. Finally, our Rackable™ line offers some of the most flexible rack-mount servers in the business and has been upgraded to the new processor, allowing the use of Intel® Node Manager for fine-grained power management across the entire line.


Our SGI ICE line allows customers to start small with an architecture that can scale to hundreds of racks, and then add further racks with similar or later technology, typically with no user interruption. NASA Ames participated in a case study using ICE that exhibited this exact situation. Pleiades, the #7 system on the Top 500 list, had more modest beginnings but has been upgraded through three generations of Intel® Xeon® processors, and in the next few months will receive another upgrade based on the Intel® Xeon® processor E5-2600. The machine is used to design spacecraft, conduct climate research, and explore the universe. Recently a scientist ran a code with over 25,000 threads on the machine to explore ‘space weather,’ the effect of the sun on the earth. To see how NASA takes advantage of ‘live integration’ of new ICE racks into Pleiades, you can read the case study on supercomputing at NASA.


Please see our complete line of servers and storage products based on Intel® Xeon® processor E5-2600 family and check out our Chip Chat with Bill Thigpen from NASA, discussing the work we’ve done together with our ICE™ X platform.

Intel® Node Manager was initially launched with the Intel® 5500 and 5520 chipsets and has evolved with the Intel® Xeon® processor E5 family to bring fine-grained monitoring capability and efficiency to the modern data center manager.  With the next generation of technology found in the Intel Xeon processor E5 family comes highly efficient energy management built into the CPU, chipset, and other components of the server platform.  Processor efficiency, along with these integrated platform technologies, brings new methods to monitor and manage power more efficiently in the data center.


The first release of Intel Node Manager provided power monitoring and control at the system (chassis) level.  Power was controlled via the Performance (P) and Throttle (T) states of the CPU.  This gave customers the ability to monitor and dial in specific power consumption via Intel Node Manager by limiting the CPU P-states, which helped meet power requirements.



Figure 1: Intel® Node Manager version 1.5 (for Intel® Xeon® processor 5500/5600)


In this 2nd generation of Intel Node Manager - part of the Intel Xeon processor E5 family - the platform provides more granular levels of monitoring and control of the overall system power using various sensors on the platform. Intel Node Manager controls power utilizing not only the CPU states, but also Running Average Power Limiting (RAPL) to control memory in unison with the processor changes.



Figure 2: Intel® Node Manager 2.0 (for Intel® Xeon® processor E5)


Here’s what’s new for Intel® Node Manager in 2012:

Power & Thermal Monitoring – Intel Node Manager periodically queries the power supply unit (PSU), hot-swap controller (HSC), or an external baseboard management controller (BMC), as well as the CPU and memory, for power consumption data. It can also query the inlet temperature. Based on this data, Intel Node Manager calculates various statistics and can simultaneously monitor total system power, CPU power, and memory power.


Platform-Level Power Monitoring – monitoring and statistics (min, max, avg) of the total board power. Intel Node Manager reads this either directly from the PSU or HSC using the PMBus protocol (PMBus v1.2 preferred) or from an external BMC using IPMI.


  • Processor subsystem power monitoring - monitoring and statistics (min, max, avg) of the processor subsystem power
  • Processor power limiting - ability to enforce a policy to limit the power consumption of the processor package (socket) subsystem
  • Memory subsystem power monitoring - monitoring and statistics (min, max, avg) on the memory subsystem power
  • Memory power limiting - ability to enforce a policy to limit the power consumption of the memory subsystem
  • Inlet air temperature monitoring - monitoring and statistics (min, max, avg) of the inlet air temperature
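Each of the monitoring domains above boils down to the same per-window computation over a stream of sensor samples. As a hypothetical sketch (the actual statistics are computed inside the Node Manager firmware, not in host software like this):

```python
# Illustrative sketch of the min/max/avg statistics Node Manager reports
# for each monitored domain (platform, CPU, memory, inlet temperature).
# The firmware computes these internally; this just shows the math.

def power_stats(samples):
    """Minimum, maximum, and average over a window of samples (watts)."""
    return {"min": min(samples), "max": max(samples),
            "avg": sum(samples) / len(samples)}

print(power_stats([210.0, 250.0, 240.0, 220.0]))
```

The same three statistics apply whether the samples are board watts from the PSU over PMBus or inlet-air degrees from a thermal sensor.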



Simple and Multi-Policy Power Limiting – an external BMC can set a single power limit, or a set of power-limiting policies, to be enforced by the Intel Node Manager firmware. Up to 16 active policies can be maintained on the server at any time.  The firmware measures power consumption and implements the Get Node Manager Statistics IPMI command, which the BMC uses to read the power consumption.
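When several of those policies are active at once, the node effectively has to resolve them into one enforced cap, with the most restrictive applicable limit winning. The sketch below is a hypothetical illustration of that resolution step; the policy fields and function here are invented for the example, and the real behavior and IPMI encoding are defined by the Node Manager firmware specification:

```python
# Hypothetical "most restrictive wins" resolution across active policies.
# Field names and the time-window model are illustrative only.

MAX_POLICIES = 16  # Node Manager maintains up to 16 active policies

def effective_cap(policies, now_hour):
    """Return the tightest power limit (watts) among policies whose time
    window covers `now_hour`, or None if no policy currently applies."""
    assert len(policies) <= MAX_POLICIES
    active = [p["limit_watts"] for p in policies
              if p["start_hour"] <= now_hour < p["end_hour"]]
    return min(active) if active else None

policies = [
    {"limit_watts": 350, "start_hour": 0, "end_hour": 24},  # always on
    {"limit_watts": 250, "start_hour": 9, "end_hour": 18},  # business hours
]
print(effective_cap(policies, now_hour=10))  # -> 250 (tighter limit wins)
print(effective_cap(policies, now_hour=20))  # -> 350
```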


Dynamic Core Allocation (also known as core idling) – Intel Node Manager can disable one or more CPU cores at runtime, based on policy power-limiting requirements. (Note: this feature depends on software support on the host and is available only on platforms running an OS that supports core idling, such as the Windows Server* 8 beta.)


Keep watching the Intel website and these communities for vendors providing Intel Xeon processor E5 family based servers with Intel Node Manager technology.

Coming into today’s announcement of the Intel® Xeon® processor E5-2600/1600 product family, Intel has been sharing some striking statistics for computing trends that point to a need to improve server networking, bandwidth, and overall performance.


According to Intel projections, by 2015 the number of networked devices will be in the neighborhood of 15 billion, with more than 3 billion users. Worldwide mobile data traffic alone is expected to have increased 18-fold by that time, driven by a jump in streamed content, mobile connections, more capable devices, faster mobile speeds, and the proliferation of mobile video, according to the recently released Cisco Visual Networking Index (VNI) Global Mobile Data Traffic Forecast for 2011 to 2016.


All of this means IT departments will need to increase data center bandwidth and enable faster data flow to servers if they want to meet the needs of demanding applications and avoid bandwidth meltdowns. Fortunately, a number of I/O innovations are here to help with that endeavor.


As part of the Intel Xeon processor E5 product family launch, Intel announced two significant I/O innovations: the Intel® Ethernet Controller X540 and an important server platform feature, Intel® Data Direct I/O Technology.


The Intel Ethernet Controller X540 is the industry’s first fully integrated 10GBASE-T controller and was designed specifically for low-cost, low-power 10 Gigabit Ethernet (10GbE) LAN on motherboard (LOM) and converged network adapter (CNA) designs. I’ve been dropping hints about this product for a while now, and I’m thrilled to say that 10 Gigabit LOM is finally here.



The Intel Ethernet Controller X540



With the Intel Ethernet Controller X540, Intel is delivering on its commitment to drive down the costs of 10GbE. We’ve ditched two-chip 10GBASE-T designs of the past in favor of integrating the media access controller (MAC) and physical layer (PHY) controller into a single chip. The result is a dual-port 10GBASE-T controller that’s not only cost-effective, but also energy-efficient and small enough to be included on mainstream server motherboards. Several server OEMs are already lined up to offer Intel Ethernet Controller X540-based LOM connections for their Intel Xeon processor E5-2600 product family-based servers. This new controller also powers the Intel® Ethernet Converged Network Adapter X540, a PCI Express* adapter for rack and tower servers.



The Intel Ethernet Converged Network Adapter X540-T2



10GBASE-T solutions based on the Intel Ethernet Controller X540 are backward-compatible with Gigabit Ethernet networks, giving customers an easy upgrade path to 10GbE by allowing them to deploy 10GBASE-T adapters in servers today and add switches when they’re ready. Another major benefit of 10GBASE-T is its ability to use the cost-effective, twisted-pair copper cabling that most data centers use today, meaning an expensive cabling upgrade won’t be required. In addition, 10GBASE-T’s support for cable distances of up to 100 meters provides flexibility for top-of-rack or end-of-row deployments in data centers.


The Intel Ethernet Controller X540, like other members of the 10 Gigabit Intel Ethernet product family, supports advanced I/O virtualization and unified networking, including NFS, iSCSI and Fibre Channel over Ethernet (FCoE). Intel Ethernet controllers and converged network adapters are also optimized for new I/O advancements in the Intel Xeon processor E5 product family. One key related feature that should generate some attention is Intel® Data Direct I/O Technology (Intel® DDIO).


Intel Data Direct I/O Technology


Intel DDIO allows Intel Ethernet controllers and adapters to talk directly to processor cache, avoiding the numerous memory transactions required by previous-generation systems. Eliminating these trips to and from system memory delivers improvements in server bandwidth, power consumption, and latency.


Intel DDIO is a key component of Intel® Integrated I/O, which integrates the PCI Express* controller directly into the processor to further reduce I/O latency. The level of improvement enabled by these technologies when combined with Intel Ethernet 10 Gigabit Controllers can be pretty extraordinary. Together they can deliver more than three times the bandwidth of a previous generation server, and tests in Intel labs have shown even higher peak performance levels[1].


There’s much more to say about the Intel Ethernet Controller X540 and Intel DDIO, so I’m going to spend my next couple of posts doing just that. Next week I’ll take a closer look at the Intel Ethernet Controller X540 and 10GBASE-T, and the following week, I will dig deeper into Intel DDIO and Intel Integrated I/O.


For the latest updates, follow us on Twitter: @IntelEthernet




[1] (I/O Bandwidth) Source: Intel internal measurements of maximum achievable I/O R/W bandwidth (512B transactions, 50% reads, 50% writes) comparing Intel® Xeon® processor E5-2680 based platform with 64 lanes of PCIe* 3.0 (66 GB/s) vs. Intel® Xeon® processor X5670 based platform with 32 lanes of PCIe* 2.0 (18 GB/s). Baseline Configuration: Green City system with two Intel® Xeon® processor X5670 (2.93 GHz, 6C), 24GB memory @ 1333, 4 x8 Intel internal PCIe* 2.0 test cards. New Configuration: Rose City system with two Intel® Xeon processor E5-2680 (2.7GHz, 8C), 64GB memory @1600 MHz, 2 x16 Intel internal PCIe* 3.0 test cards on each node (all traffic sent to local nodes).

AutoVAZ.jpgDownload Now


AutoVAZ is one of Russia’s largest auto manufacturers. Founded in the 1960s through a collaboration with Italian manufacturer Fiat, the company has grown into a worldwide player in the automotive market segment. In early 2011, the Renault-Nissan alliance increased its stake in AutoVAZ to 25 percent. In practical terms, this meant more information sharing among the three manufacturers, as each looks to use designs and specifications from the others to manufacture cars. When AutoVAZ wanted to upgrade the servers and information systems that ran its mission-critical enterprise resource planning (ERP) applications, it chose the Intel® Xeon® processor E7-4860.

“Thanks to the Intel Xeon processor E7-4860 and Intel® 10GbE Server Adapter, we can achieve our objectives by implementing higher-performing servers, converged storage, and server networks while also lowering total cost of ownership,” explained Yury Katyanov, CIO of AutoVAZ.

For all the details, download our new AutoVAZ business success story. As always, you can find many more like this on the Business Success Stories for IT Managers page and the Business Success Stories for IT Managers channel on iTunes.  To keep up to date on the latest business success stories, you can also follow ReferenceRoom on Twitter.
