
Intel will be at Cloud Connect from March 7-10 at the Santa Clara Convention Center in Santa Clara, CA. If you are attending this event, we would love to hear from you. To keep up to date on all the Intel news and videos from the event, please follow us here. While at Cloud Connect, we will be doing the following:

  • Intel class titled: "End to End Secure Client to Cloud Access - SSO, Strong Authentication and Client Aware Security"
    • March 9, 11:15 AM - 12:15 PM
    • Location: Grand Ballroom G
    • Speakers:
      • Vikas Jain, Director, Product Management, Cloud Identity and Security, Intel
      • Chuck Myrick, Director, Global Services, Acumen Solutions
      • Sujay Sen, Practice Head, Consulting Services, L&T Infotech


  • Intel Cocktail Reception
    • March 9, 4:30 PM - 6 PM
    • Mingle with Intel experts to learn how Intel is driving cloud computing


  • Intel booth
    • March 8 - 9, 11 AM - 6 PM
    • Intel AES-NI Demo
    • Intel Expressway Cloud Access 360 and ID Protection Demo (see attached brochure)
    • Nordic Edge OTP Demo
    • Experts on Intel Cloud Builders Program


If you're interested in learning more about how Intel is shaping the open data center of today and tomorrow for cloud computing, or in our Intel Cloud Builders program, please follow the links below:



Poland's Marie Curie-Skłodowska University recently deployed a high-performance computing (HPC) cluster based on SuperMicro SuperBlade* servers and powered by the Intel® Xeon® processor 5600 series. Intel® architecture provides the performance and flexibility required to support a diverse range of projects, tasks, and software codes. Intel® compilers were also used to optimize code and deliver even greater performance increases. The university is already evaluating the benefits of rolling out another HPC cluster based on the Intel® Xeon® processor E5 family.

“Here at the university, we work on a diverse range of projects and tasks that rely on many different types of codes,” explained Paweł Bryk, chemistry lecturer at Marie Curie-Skłodowska University. “For example, the parallel programs that run our molecular dynamics research scale well on multi-core architecture, while our other applications require great single-core performance.”

For all the details, download our new Marie Curie-Skłodowska University business success story. As always, you can find many more like this on the Business Success Stories for IT Managers page.  And to keep up to date on the latest business success stories, follow ReferenceRoom on Twitter.



*Other names and brands may be claimed as the property of others.



Japan's Nomura Research Institute, Ltd. (NRI) delivers solutions for finance, logistics, industry, and the public sector. Looking to develop a solution for high-speed, real-time processing of massive volumes of traffic information, Nomura conducted testing to verify the performance of the SAP HANA* in-memory analysis appliance working with a server powered by the Intel® Xeon® processor E7 family. The tests confirmed that 336 million pieces of data from approximately 13,000 taxis could be analyzed in just over one second.

“As the number of users of our service continues to grow, so too does the amount of position and speed data collected,” explained Aritaka Masuda, general manager of the Ubiqlink Department at Nomura. “Accordingly, we must now process this data faster than ever before. We look to the Intel Xeon processor E7 family to deliver ever higher levels of SAP HANA performance as advancements in processor technology are realized.”

For all the details, download our new Nomura business success story. As always, you can find many more like this on the Business Success Stories for IT Managers page.  And to keep up to date on the latest business success stories, follow ReferenceRoom on Twitter.



*Other names and brands may be claimed as the property of others.

My name is Pauline Nist (yes, the National Institute of Standards and Technology stole my name). I've been involved in the design and delivery of Mission Critical server systems for most of my life (there was a brief stint in IT early in my career; good training).


I worked on a lot of Vaxes and Alphas for DEC, then moved to Tandem, where I was responsible for the NonStop hardware and software, including the SQL MX database. Then I moved into the "merger" phase of my career, where Tandem was acquired by Compaq, which then also bought DEC. Finally it was all swallowed by HP. There was a lot of indigestion to go around during those years.


Looking for a significant change of pace, I moved to Penguin Computing, a clustered Linux server startup. Penguin sells to high-performance computing and web customers, but I found that startups also teach you a lot about cash accounting.


Now I'm at Intel. Quite the change, but in many ways a logical progression to what is now the emerging way to deliver Mission Critical computing.


It's been an exciting week here in the server world at Intel. The International Solid-State Circuits Conference was held here in San Francisco. It is the place to get previews and hints about future chip products (without the vendors actually announcing availability) from all the chip designers, including Intel, AMD, and IBM.


Intel presented a couple of exciting papers.


The first paper is on the next-generation 8-core Itanium chip, code-named Poulson. It's a 32nm, 3.1 billion transistor chip with a new 12-wide-issue microarchitecture. I haven't managed to find the papers posted publicly yet, so I'm including links to some reports from attendees for now. For an outside perspective on the Poulson chip design, here is a great summary of the processor technology. This chip is going to allow our OEM partners to deliver some powerful UNIX servers. It's socket-compatible with the previous Tukwila (Itanium 9300 series) generation, which should provide an easy upgrade path.


We also presented the new Westmere generation of Xeon server chips, which have 10 dual-threaded cores and will be available in 2-, 4- and 8-socket configurations, as well as in larger Xeon systems using node controller configurations. They all run on Intel's 32nm process and are compatible with the Boxboro-EX platform. This is going to provide a great server upgrade story for all of the OEMs who launched Nehalem servers approximately a year ago. Westmere will provide increased performance for workloads like database, ERP, BI, server consolidation and cloud. Here's a very pretty picture (literally):




Lastly, for extra credit, I wanted to make sure you caught President Obama's visit to our Hillsboro, Oregon fab (chip factory). Certainly the biggest thing to hit Hillsboro in a while.



Intel CEO Paul Otellini also announced a new $5B+ Intel factory in Arizona.

Business leadership takes a lot of things, not the least of which is processing power. And that world-class processing power is exactly what three leading organizations get from the Intel® Xeon® processor 7500 series.

A global leader in application hosting and infrastructure, Hostway reduces the complexity and cost of Web-based technologies for both small businesses and large enterprises. Hostway standardizes on Intel® server technologies and says Intel® Xeon® processors 5600 and 7500 series help the company stay ahead of rising demand and thrive in its dynamic market segment.

“We use Intel® technologies in our visible platforms because of the brand equity and the recognition of quality and reliability, and we don’t alter that when the systems are hidden from the customer,” explained Todd Benjamin, vice president of enterprise hosting for Hostway. “That says a lot about our confidence in the Intel® products.”

Academic leader Indiana University (IU) is committed to what its IT management calls “ruthless standardization.” It recently refreshed virtually all the university’s server infrastructure and now runs approximately 1,600 virtual machines on an internal cloud of 65 Dell PowerEdge* R810 servers with the Intel Xeon processor 7500 series and VMware vSphere* 4 Enterprise Plus. Accomplished in two months, the transition was invisible to users. The cloud includes 8 TB of DDR3 RAM, offers access to nearly half a petabyte of storage, and runs more than 100 Tier 1 Oracle Database* 11g instances.

“I have a mainframe background,” explained Mike Floyd, chief system architect for Indiana University, “so my expectations are high in the area of reliability, availability, and serviceability. IU is proof positive that Intel® X86 servers are ready for mission-critical applications and workloads. All our enterprise applications are virtualized and running on Intel Xeon processor 7500 series technology.”

NYSE Technologies, a leading provider of electronic trading solutions, offers a Trading-in-a-Box solution that uses advanced technology from industry leaders Intel and IBM to consolidate multiple tiers of a trading firm’s servers into a single server, delivering lightning-fast messaging and analytics and providing dramatic gains in efficiency, reductions in latency, and increases in productivity.

“NYSE Technologies’ use of the 64 high-performance cores in the IBM System x3850 X5 server [with Intel Xeon processor 7500 series] is a new model for algorithmic trading. The x3850 X5 has been engineered to provide enterprise-class reliability, and compute and memory scalability, and we’re excited to have NYSE Technologies create a new reference platform with it,” says Dave Weber, program director for IBM Wall Street Center of Excellence.

To learn more, read our new Hostway, Indiana University, and NYSE Technologies business success stories. As always, you can find all of these, and many more, in the Reference Room and IT Center.

Quantitative easing, stimulus packages, rising oil prices, and globally rising government spending and commodity costs have led many economists to predict wild swings in inflation on a global basis. Bloomberg reports that some are predicting double-digit inflationary scenarios in certain countries over the coming years. These same economists are also predicting that greater austerity measures are required by the world's governments in order to preserve sovereign viability, whilst simultaneously recommending huge loan packages to "bail out" troubled economies from "themselves."


Around the world, economists, government planners and politicians continue to insist on investment in infrastructure and services (health and social), primarily funded through taxation and loans against sovereignty, as the "path to prosperity". It is against this backdrop that the technology industry has entered the largest cyclical growth in our industry's history.


It was a truly humbling experience when President Obama visited Intel's Oregon manufacturing site to celebrate "American Innovation and American Manufacturing". I am proud to have been part of a series of great teams at Intel that have led the industry in technology innovation, manufacturing and supply chain leadership. However, the single most important factor in this innovation is deflation. We have lowered the cost of manufacture, the cost of acquisition and the cost to manage the supply chain. Let me explain...


Over the last 5 years, each generation of technology has been better and less expensive than the last. In fact, the cost of a laptop computer has gone from $1,600 in 2001 to $1,000 in 2005 to approximately $600 today, a roughly 2.7x reduction in end-user cost in 10 years. Not bad. The average price of an enterprise server has decreased over 50 percent in the last decade, while delivering over 10x the performance. We have introduced new capabilities in smartphones, device storage, networking capacity, PCs, TVs, spectrum propagation and server technology. Our data center technologies alone have reduced power consumption in the data center over 25 percent from previous generations just 3 years ago. Virtualization technologies have improved data center performance 8 times over the last 5 years. These improvements in efficiency, performance and virtualization have allowed us to realize billions of dollars, pounds, euros and yen in data center cost savings around the world. These savings are realized by IT departments whose budgets have been "frozen" as a percentage of revenue for over 8 years.
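The arithmetic above is easy to check. A quick sanity check of the laptop and server figures quoted in the paragraph (nothing here beyond those numbers):

```python
# Laptop end-user cost over a decade, using the figures quoted above.
cost_2001, cost_today = 1600, 600
print(round(cost_2001 / cost_today, 1))   # 2.7x cheaper in 10 years

# Enterprise servers: price halved while performance grew 10x,
# so price/performance improved by roughly 20x in the same decade.
price_ratio, perf_ratio = 0.5, 10
print(round(perf_ratio / price_ratio))    # 20x better price/performance
```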


In the last 5 years, over 1 billion new users have joined the digital age and are regularly accessing the internet. The cost of the equipment to use, view and input their thoughts and ideas has gone down almost 20 percent in that same time frame. As an industry of service providers, technology manufacturers and software developers, we have made it easier and more accessible for the world to consume our products. Yet ironically, we are climbing a mountain of uncertainty when it comes to realizing the benefits of our technological advancements in society at large. Social networking and communications have been at the heart of political change, yet globally economists are applying 19th-century economic theory to "manage the crisis". How can you apply 19th- and early 20th-century economic theory to a world that relies on Facebook, Google, Baidu and Microsoft to sift through more data in one hour than these same theorists could digest in a lifetime?


Over the next 5 years, Intel is committed to delivering technologies in the data center that will reduce carbon emissions by as much as 45 coal plants worldwide. Our data center efficiency technologies can reduce power consumption per 1,000 servers by as much as 20 percent from levels of just 18 months ago. Our Intelligent Power Node Manager technology is being developed into its second generation to provide increased instrumentation, so that virtualization and cloud management software vendors can provide interfaces for data center managers to optimize their workload and power consumption environments. We are delivering optimized virtualization silicon, with security (Intel® TXT), across our entire server lineup in 2011 to ensure trusted migration of workloads for virtualization and cloud computing implementations. We have introduced 10GbE Ethernet technologies (X520) for unified networking to reduce cable dependencies in data centers and hasten server, storage and networking consolidation, further reducing the copper, nickel and gold requirements in the data center. And we are delivering next-generation storage devices that reduce power consumption, mean time between failure and space requirements from previous generations by as much as 25 percent.


Just as importantly, we are committed to working with an industry of like-minded data center executives through the Open Data Center Alliance to develop the key usage models for future innovations across the data center that drive simplification, security and efficiency in the daily operations of some of the world's most complex environments.


Perhaps it is just us "geeks of the industry" who are committed to entering the "Age of Deflation", where efficiency, optimization and cost reduction are a requirement for survival, not a theory of "austerity". With over a billion people joining the digital discourse and multiple billions of devices coming online in the next 5 years, if we can extend to them cost-reduced technologies at the highest quality so that they can live in the most efficient manner possible, perhaps we can prove the "past" century's economic theorists wrong. This isn't the 18th, 19th or 20th century; we are manufacturing better technology with faster innovation at the most unprecedented level in human history. Let's lead the "Innovation in a Deflationary Economy" discourse over the next decade. It's a challenge to us all living in the 21st century.



Let me know your thoughts....

Today Microsoft is shipping RemoteFX.  Microsoft’s RemoteFX, part of Remote Desktop Protocol (RDP) and included in Windows Server 2008 R2 SP1, is designed to provide a complete, rich, local-like desktop experience for hosted desktops and applications.


A big part of delivering that experience is the ability to virtualize GPUs. Engineers at Intel and Microsoft have been working closely together to develop the technologies that make the virtualization experience better.

Microsoft’s RemoteFX technology makes use of new virtualization capabilities in the latest Intel® Xeon® processors, with the Next-Generation Intel® Microarchitecture (Nehalem). These virtualization capabilities, combined with Intel® QuickPath Technology and an integrated memory controller, speed traffic between processors, GPUs and I/O controllers and reduce latency.

Servers built with the Intel® Xeon® processor 5600 series or the Intel® Xeon® processor 7500 series offer IT intelligent, scalable performance and cost-saving energy efficiency for Microsoft RemoteFX, plus a superior user experience for the virtual desktop.  Windows Server 2008 R2 SP1 also includes Hyper-V Dynamic Memory.  Dynamic Memory uses the Next-Generation Intel Microarchitecture to optimize memory management and maximize scalability, helping to deliver near-native performance for virtual desktop VMs. Read more details on Intel and Microsoft at the Intel Alliance site.


Hyper-Threading Technology requires a computer system with a processor supporting HT Technology and an HT Technology-enabled chipset, BIOS and operating system. Performance will vary depending on the specific hardware and software you use. For more information including details on which processors support HT Technology, see here.

Tom Kilroy, Senior Vice President and General Manager of Intel's Sales and Marketing Group, is in Prague this week at an event sponsored by PRE (Prague Energy Utility Company), the third-largest energy utility in the Czech Republic, where he spoke about the energy efficiency efforts at Intel.


The objectives of the forum were to:


  • Inform the market about Intel’s relevance in the energy segment: the Home Dashboard, along with Intel's technology advancements and decreased power consumption


  • Have PRE inform the market about the challenges the energy segment is facing, why it is interested in the Home Dashboard, and its initiatives to actively advise customers on energy savings


  • Announce the Intel/PRE cooperation on the Home Dashboard pilot: PRE will issue a press release on the HEMS (Home Energy Management System) pilot and its cooperation with Intel



While the main focus of the event was the Home Energy Management System pilot with Intel, Tom spent some time talking about the energy reductions we have achieved in the data center with our Xeon processors.

Intel Data Center Efficiency at PRE Prague

Here's Tom in the room with some other folks. Our Intel Pillars on the screen in the background.




The big breakthrough in the energy efficiency of Xeon is in implementing Intel’s Energy Proportional strategy of scaling the platform power through the entire workload. When demand is high you benefit from the performance Xeon delivers, and when demand is low you get the benefit of Xeon’s energy saving features.



Intel Xeon Tick-tock Evolution

The evolution of Xeon platforms is shown in this graph, starting in 2004. Since then, Intel has delivered on the tick-tock cadence of major microarchitecture advances and significant improvements in the energy efficiency of Xeon. At usual usage levels of 10-30 percent average utilization, we have delivered over a 40 percent reduction in energy use in the last five years, while still delivering the performance gains of Moore's Law.



That’s energy savings you can take to the bank!



We’re very proud of our achievements. It is great to know that our international industrial partners, like PRE, are working with us to deliver the most efficient computing solutions to customers around the world.

When Sohu Company decided it needed to improve the storage capacity of its search servers, its solution was to repurpose three hard disks it was using as system disks for data storage. It also wanted to improve the performance stability of its search servers and the computing and data reading/writing ability of the data cache in its search servers during peak search hours.


Sohu found a solution that paid off in a big way: replacing the three SAS* hard disk arrays in the search server with one Intel® X25-V Value Solid State Drive so that it could repurpose the three hard disks for data storage.

“We deployed Intel® SSD as the startup disk for the search servers in our data center,” explained Zhang Shuguang, senior manager for Sohu. “This enabled us to release three hard disks per server...helping us reduce TCO on business applications up to 16 percent.”

For the whole story, read our new Sohu business success story. As always, you can find this one, and many more, in the Reference Room and IT Center.

Turkcell is not only the leading communications and technology company in Turkey but also the third-largest mobile operator in Europe. To keep its competitive advantage, the company needed to ensure its business-critical transaction management application could operate at peak performance to guarantee licensing efficiency. It was also looking to reduce costs by finding an alternative to its high-end UNIX* platform. Finally, Turkcell was looking for an IT platform that was scalable, so that it could handle business expansion without generating excessive hardware costs.

Turkcell tested a platform based on the Intel® Xeon® processor 5570 as an alternative to its high-end UNIX platform. Initial testing was verified on the replicated environment and supported by an in-house return on investment analysis.

The results? Turkcell expects the new platform to be 44 percent less costly to run than its original high-end UNIX platform. Application efficiency is also up, with performance increasing by as much as 35 percent.

“We carried out our own return on investment analysis prior to deployment of x86 architecture powered by Intel® technology and have calculated that we will see a 25 percent decrease in the system’s total cost of ownership in three years,” explained Zihni Uğurbil, division head of infrastructure operations and project sponsor at Turkcell.

To learn more, read our new Turkcell business success story. As always, you can find this one, and many more, in the Reference Room and IT Center.



*Other names and brands may be claimed as the property of others.

Recently I did a webcast with Intel where we focused on two areas:

  1. Cloud On-Boarding
  2. Cloud Bridging


During this webcast, Citrix and Intel covered these key topics around cloud computing:

  • Move applications and application components to a cloud without the need to re-architect any portion of the application stack
  • Dramatically reduce cost and complexity of moving to the cloud
  • Extend and secure access to applications hosted in the cloud


Let’s focus on Cloud Bridge for the moment:

So I was driving to work this morning and got to thinking: each morning we get into a car and drive down a street or a highway, or in some cases take public transportation.  At some point in your commute there is a river to cross or another road to pass over. Now imagine the world without bridges.  If there were no bridges, you could not get from point A to point B reliably.


Bridges on the way to work each day perform two simple functions:

  • Connect point A to point B so that whatever crosses can traverse reliably and without interruption.

  • Don't care what crosses from point A to point B.


Now, let's consider this: What if the Florida Keys were never connected via a bridge or what if the Golden Gate or the Bay Bridges were never built?  These big bridges serve a big purpose.


A network bridge is very similar to the bridges we drive across every day.  Network bridging allows you to connect multiple network segments at Layer 2 (network A to network B).  Bridging is very simple: it does not require an IP address and does not care what traffic crosses it.  In most use cases, bridging is typically leveraged in a LAN scenario.  Think of the datacenter and the cloud this way: a big gap that needs a big bridge.  Let's take this a step further and consider things like security, location and performance, and that is where OpenCloud Bridge is more than just a bridge!


Remember, migration to the cloud is not just about the application.  We need to consider three core things:


Network/Security Transparency

Let's simplify network transparency a bit: when building a bridge, you need a way to transport traffic (asphalt) but also a way to secure it (guard rails).  Imagine bridges with a road but no guard rails to keep cars safe.  A network bridge is very similar; you can establish a network bridge with or without security.  A network tunnel is great for transporting traffic but lacks security. Internet Protocol Security (IPsec) is great for securing traffic but does not always allow all traffic to flow across the bridge.  Combining these two protocols ensures the bridge is secured while allowing adequate traffic flow.  Network bridging makes the entire hybrid cloud appear as one contiguous network: a Layer 2 tunnel provides a routable connection between datacenters, while an IPsec VPN tunnel ensures the connection is secured.  In summary, by combining a network bridge with the security of IPsec, OpenCloud Bridge enables seamless network connectivity from the datacenter across to the cloud, enabling a true hybrid cloud.


Location Transparency

When you cross a bridge, do you worry about how to get to the other side?  Of course you do not!  You just drive and know that you will get to the other side.  When migrating applications to the cloud, should access to those applications be just as seamless?  I don't know about you, but I think it should!  When combined with OpenCloud Bridge, Citrix® NetScaler® provides consistent, seamless end-user access to cloud-based services no matter where within the hybrid cloud they're hosted.  Citrix® NetScaler® ensures that end users access applications the same way, even as workloads are moved around within hybrid clouds.


What if the bridge is being repaired or is closed?  Typically there is a detour route in place so that you can still get from point A to point B.  Like a detour, in the event of network or datacenter outages, NetScaler® transparently redirects requests to available datacenters. Global server load balancing improves performance by routing user sessions to the closest or best-performing datacenter available at any given time.  Site capacity global server load balancing transparently redirects user requests to the cloud or datacenter that is least busy in terms of concurrent connections, datacenter response time, packets handled or bandwidth consumed.
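The site-capacity behavior described above can be sketched in a few lines. To be clear, this is an illustrative sketch only, not the NetScaler API: the site names and metrics below are made up, and it simply picks the available site with the fewest concurrent connections, using response time to break ties.

```python
# Illustrative sketch of site-capacity global server load balancing:
# route each request to the healthy site that is least busy.

def pick_site(sites):
    """Return the available site with the fewest concurrent connections,
    breaking ties on response time (milliseconds)."""
    candidates = [s for s in sites if s["available"]]
    if not candidates:
        raise RuntimeError("no datacenter available")
    return min(candidates, key=lambda s: (s["connections"], s["response_ms"]))

# Hypothetical hybrid cloud: two healthy sites, one down for maintenance.
sites = [
    {"name": "datacenter-a", "available": True,  "connections": 1200, "response_ms": 40},
    {"name": "cloud-b",      "available": True,  "connections": 300,  "response_ms": 55},
    {"name": "datacenter-c", "available": False, "connections": 0,    "response_ms": 0},
]
print(pick_site(sites)["name"])  # cloud-b
```

A real GSLB implementation would also weigh packets handled and bandwidth consumed, as the paragraph above notes, but the selection logic is the same shape.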


Performance Transparency

Finally, imagine a bridge with only one lane, or one going from four lanes down to two.  Traffic would get pretty snarled up, and the bridge would become your primary bottleneck.  More lanes on the bridge mean better traffic flow, which in turn means a much faster commute (hopefully)!  Citrix® Branch Repeater™ addresses the high latency and low bandwidth of WAN links between datacenters. WAN optimization ensures reliable network performance across the hybrid cloud, even over severely congested networks.  Citrix® Branch Repeater™ provides breakthrough adaptive compression technology to reduce WAN bandwidth requirements. In turn, it reduces traffic for bandwidth-hungry applications such as file transfers, software distribution, backups and data replication.


So again, imagine a world without bridges.  It would be pretty hard to get from point A to point B reliably and without interruption!  The Citrix OpenCloud Bridge solution increases interoperability between your on-premise datacenters and off-premise clouds.  This interoperability increases your flexibility, enabling more choice around what applications you can move to the cloud.  Since the OpenCloud Bridge solution supports multiple virtualization environments, you also have more choice in cloud providers, enabling you to drive down costs.  In short, OpenCloud Bridge is the big bridge to fill the big gap between your datacenter and the cloud!

Five years ago, cries of catastrophe in the data center were rampant as people vocally worried about a data center meltdown due to unbridled energy use, inefficient buildings, and escalating server power use.


That, I think, is exactly the kind of challenge serious engineers enjoy! Today the industry has responded with efficiency metrics like Power Usage Effectiveness (PUE), improved uses of UPS systems that minimize power loss, improvements in data center design, higher-efficiency power supplies, and energy-proportional computing, among other serious improvements.


  • At the outset of the effort, a PUE of 1.5 was considered state of the art. Today we regularly see efficient data centers reaching a PUE of 1.1.
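As a reminder, PUE is simply total facility power divided by the power delivered to IT equipment, so 1.0 is the theoretical ideal. A quick back-of-the-envelope calculation makes the 1.5-versus-1.1 difference concrete (the kW figures below are made up for illustration):

```python
# Power Usage Effectiveness: total facility power / IT equipment power.
# A PUE of 1.0 would mean every watt entering the building reaches IT gear.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

# An efficient facility drawing 1650 kW in total to run a 1500 kW IT load:
print(round(pue(1650, 1500), 2))   # 1.1
# The same IT load in an older facility drawing 2250 kW overall:
print(round(pue(2250, 1500), 2))   # 1.5
```

Put another way, moving that facility from 1.5 to 1.1 saves 600 kW of overhead for the same IT work.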



The problem is, even with that major progress, the demands have only grown larger.


Energy efficient data centers and IT have moved from a good idea to a key area of differentiation for some businesses. What is the state of the art in energy efficiency metrics and technology?


Sustainable IT is emerging as a business necessity. What metrics for carbon and water usage can a data center manager use to measure and drive improvement?


California and the UK have adopted strict carbon regulations, such as the CRC. What is the data center industry doing to respond to these challenges?


What activity is going on in Europe and Japan in state of the art data center efficiency?


How do sustainability criteria affect data center site selection?


These kinds of questions will be addressed at the upcoming Green Grid Technical Forum in Santa Clara, CA. The Green Grid Technical Forum will provide new technical content, training, and discussions on industry trends.


I plan on attending, and so should you.


Register to secure your spot at this important industry event on March 1-2, 2011 in Santa Clara, CA. For more info and the full agenda go to the event page for The Green Grid Technical Forum or contact The Green Grid Administration via e-mail.

When it comes to adding the processing performance that critical HPC applications demand, companies worldwide are turning to the Intel® Xeon® processor 7500 series.

Consider, for example, French company AREVA, a leader in the energy sector. AREVA has experimented with a new computational method for the upper internals of nuclear reactors. ESI’s SYSTUS* solvers, running on servers with the Intel® Xeon® processor 7500 series, can simulate models of nuclear components 10 times larger and four times faster than AREVA’s previous systems, with greater precision.

“Some high-performance computing (HPC) workloads rely on large data sets and complex calculations that are not easily distributed across large numbers of smaller servers,” explained Laurent Duhem, software engineer for HPC at Intel. “Intel Xeon processor 7500 series-based servers and their large shared memory capabilities are ideal for these demanding applications. This super HPC node delivers the necessary compute power, large memory capacity and memory bandwidth performance to solve big science faster.”

DuPont, one of the world’s premier science companies, also chose the powerful processor. It wanted to provide the company’s scientists with the power and flexibility of cloud-enhanced computing for its scientific application portfolio, choosing the Intel Xeon processor 7500 series as the foundation of its science HPC cloud. Now the processor’s performance, memory capacity, and virtualization capabilities are providing up to a 20x speed-up for critical applications compared to DuPont’s legacy systems. This enhanced capability helps DuPont to create a more dynamic research environment which, in the words of Tim Mueller, CIO for CR&D, “is transformational.”

And then there's Japan’s Nomura Research Institute, Ltd., a leader in the financial services industry. The company needed to overcome the processing capacity limitations of its existing servers to cope with a rapid increase in transaction volume, and to implement a platform with excellent expandability that could deliver HPC reliably over the long term.

After adding a four-way server based on the Intel Xeon processor 7500 series, the company now has performance that can scale linearly as additional processors are added. Nomura also expects major cost reductions and a greater choice of platforms for its mission-critical applications.

For the whole story, read our new AREVA, DuPont, and Nomura business success stories. As always, you can find these, and many more, in the Reference Room and IT Center.



*Other names and brands may be claimed as the property of others.

When we launched our Cloud Builders community in October 2010, we had about 20 Cloud Builders reference architectures. Since then we have been adding several new reference architectures to our library. One quick primer: let's see if someone can answer "What is in a reference architecture?" Hint: you will find the answer on the Cloud Builders Reference Architecture Library page.


The reason for this blog is to introduce our brand-new RAs: Huawei and Nimbula. Let me go over each in detail:


Huawei: The Huawei SingleCLOUD* solution is designed for cloud computing data centers. Using the SingleCLOUD solution, cloud service providers can construct network-based office environments that provide “pay as you go” server and storage services for enterprises, especially small and medium enterprises. This reference architecture discusses the Huawei SingleCLOUD solution optimized on Intel® Xeon® processor-based platforms and describes how to implement a base solution on which to build a more elastic and complex cloud computing environment.


Nimbula: Based on Nimbula’s Cloud Operating System, Nimbula Director allows customers to efficiently manage both on- and off-premise resources by transforming under-utilized private data centers into easily configurable compute capacity and providing controlled access to external clouds. For this paper, Nimbula and Intel worked together to prototype a private cloud test bed running on a cluster of 12 Intel® Xeon® processor-based servers. Using the Nimbula Director API, via both the command line and the Web-based interface, we created accounts based on organizational hierarchies, assigned appropriate permissions to users and groups, monitored and managed resources, demonstrated the elastic nature of resource scaling, and configured cloud federation to enable bursting into external clouds during times of high demand.


Visit the ever-growing reference architecture library to learn more about these RAs. Also, please visit the Fellow Travellers page to learn more about these partners.


Stay tuned to hear from me on the new RAs that we will be announcing soon.

itc_cs_chinatelecom_xeon_carousel_preview.jpgTwo Chinese companies are climbing to cloud computing success thanks to the Intel® Xeon® processor 5600 series.


Focused on R&D and business applications for cloud computing services, 21ViaNet launched China’s first business computing service platform, providing Internet infrastructure services to customers. To improve its service and reduce operating costs, 21ViaNet upgraded to servers with the Intel Xeon processor 5600 series in a cloud computing model using Intel® Virtualization Technology. The advanced virtualization features have doubled computing performance compared with older-generation processors.

“The significant advantage and superior performance provided by Intel® Virtualization Technology and the computing performance of the Intel Xeon processor 5620 meet our needs to upgrade our cloud computing platform servers and enable us to deliver an Internet infrastructure service to our customers with a cloud computing mode while reducing costs and improving services at the same time,” explained Jiang Jianping, CTO of CloudEx Technology Co., Ltd., the division that operates the business cloud computing service platform.

China Telecom urgently needed to implement cloud computing research and optimize its data center to reduce operating costs and energy consumption. Incorporating Intel® Data Center Manager into its cloud computing experimental platform, which was based on the Intel Xeon processor 5600 series, laid a solid foundation.

“Built on the Intel Xeon processor and Intel Data Center Manager, China Telecom’s cloud computing prototype experimental platform utilized the improved computing and energy-efficient features of the Intel Xeon processor 5600 series and provided reliable testing data for the scaled deployment of China Telecom’s cloud computing framework,” said Ding Shengyong, project manager for China Telecom.

For the whole story, read our new 21ViaNet and China Telecom business success stories. As always, you can find these, and many others, in the Reference Room and IT Center.

itc_cs_eurocar_xeon_carousel_preview.jpgHow can you grow your business while shrinking the size of your data center? Just ask Europcar. This South African rental car firm was able to create a new productivity-boosting IT infrastructure based on the Intel® Xeon® processor 5500 series to support unified communications, after consolidating its servers by 46 percent through virtualization.


“Key goals included bringing the physical IT infrastructure back in-house, which meant building a new data center and virtualizing both servers and storage,” explained Shaun Phillips, general manager of IT operations and infrastructure for Europcar. “Furthermore, we wanted to establish a platform for unified communications so that we could enable real-time communication in the future.”


Thanks to virtualization, end users now gain access to new servers in 45 minutes, and easier management has increased IT productivity by around 85 percent.


For the whole story, read our new Europcar business success story. As always, you can find this one, and many more, in the Reference Room and IT Center.


Pushing Apps Uphill

Posted by peted Feb 8, 2011

Recently I did a webcast with Intel where we focused on two areas:

  1. Cloud On-Boarding
  2. Cloud Bridging


Let’s focus on On-Boarding for the moment:


Hey Systems Administrators: Does cloud computing keep you awake at night, wondering how the heck you can leverage it? Are you currently running VMware in your datacenter and wondering whether the cloud can support the many VMDKs you have? Do you have resources back in the datacenter that applications in the cloud will need access to? In the end, the cloud does not look too promising for you!


Hey CTOs/CIOs: Are you trying to find ways to save money in your infrastructure but need to understand which applications can move to the cloud? Are you afraid of vendor lock-in? Want to move to cloud computing but need to understand the ROI and TCO?


Recently, I co-authored a whitepaper for the Intel Cloud Builders program talking about this topic! Check it out:


Moving applications to the cloud can be complex and, depending upon the application and the target cloud environment, may require re-architecting the application and network stack. Many factors must be considered when moving an application to the cloud: application components, network stack, management, security, and orchestration. The solution helps solve all these issues by leveraging a robust virtual platform, virtual machine migration, the Open Virtualization Format (OVF), and key cloud technologies from leading cloud providers. In short, moving application workloads to the cloud should be seamless and require minimal manual effort, giving system administrators a straightforward path to the cloud and making cloud computing a reality.
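To make the OVF piece of this concrete, here is a minimal sketch of inspecting an OVF descriptor before on-boarding, so you can see which virtual disks a packaged workload carries. The embedded descriptor is a hypothetical example for illustration only, not taken from any real appliance; it uses only the standard DMTF OVF envelope namespace and Python's standard library.

```python
import xml.etree.ElementTree as ET

# DMTF OVF envelope namespace (shared by elements and attributes).
OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"

# Hypothetical minimal OVF descriptor, for illustration only.
SAMPLE_OVF = """<?xml version="1.0"?>
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <References>
    <File ovf:id="file1" ovf:href="app-disk1.vmdk"/>
  </References>
  <DiskSection>
    <Info>Virtual disks</Info>
    <Disk ovf:diskId="vmdisk1" ovf:fileRef="file1"
          ovf:capacity="20" ovf:capacityAllocationUnits="byte * 2^30"/>
  </DiskSection>
</Envelope>"""


def list_disks(ovf_xml: str):
    """Return (file href, capacity, units) for each disk in an OVF descriptor."""
    root = ET.fromstring(ovf_xml)
    ns = {"ovf": OVF_NS}
    # Map file ids (from the References section) to their hrefs.
    hrefs = {
        f.get(f"{{{OVF_NS}}}id"): f.get(f"{{{OVF_NS}}}href")
        for f in root.findall("ovf:References/ovf:File", ns)
    }
    # Join each Disk entry back to the file it references.
    return [
        (
            hrefs.get(d.get(f"{{{OVF_NS}}}fileRef")),
            d.get(f"{{{OVF_NS}}}capacity"),
            d.get(f"{{{OVF_NS}}}capacityAllocationUnits"),
        )
        for d in root.findall("ovf:DiskSection/ovf:Disk", ns)
    ]


for href, capacity, units in list_disks(SAMPLE_OVF):
    print(f"{href}: {capacity} ({units})")
```

A pre-flight check like this is one small part of what an on-boarding solution automates: inventorying the disks and components of a workload before deciding how to move or convert them for the target cloud.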


A proper on-boarding solution will enable:


  • Migration and Virtualization Heterogeneity
  • Enabling Hybrid Cloud Computing
  • Powerful Cloud Partnerships


Put simply, on-boarding can mean moving more than just an application to a cloud; it can mean moving the whole application stack, or an application workload.


The key areas that will be addressed are:


  1. Applications are on-boarded to the cloud with minimal effort without the need to re-architect the application and/or network stacks.
  2. Applications can be bundled into application workloads that encompass key components, such as LDAP, storage, data access, and web, that the application may need once on-boarded to the cloud.
  3. There is no dependency on the on-premise datacenter's virtualization platform when moving to the cloud; various virtual machine formats are supported, enabling heterogeneous environments.


Once on-boarded to a target cloud, application workloads will function as if they were still in the on-premise datacenter, coupled with robust management capabilities leveraging Citrix’s OpenCloud Bridge.


Vendors like Citrix and Intel are trying to make moving to the cloud seamless! The goal is for application workloads to move to the cloud without the need to re-architect the whole application and network stack, making cloud migration a reality.


Introduction: Pete Downing

Posted by peted Feb 8, 2011

Hello!  My name is Pete Downing!


I joined Citrix in December 2006 with over 10 years of experience in technology.  With a diverse background in IT, desktop management, networking, virtualization, server based computing, application deployment, profile management, Microsoft Active Directory and systems administration, I bring a vast knowledge of the industry to my role as Principal Product Manager.


Starting out in IT, I worked a full-time job while in college as a systems administrator for a Boys and Girls Club located in Fall River, Massachusetts.  Also while in college, I worked various jobs with the campus networking team.  After college, I worked for a medium-sized biotechnology company, TKT (now Shire), as an IT systems administrator.  From IT, I decided to enter the software world, joining ManageSoft (now Flexera Software) as a Pre-Sales Engineer.  After almost three years with ManageSoft, I moved on to join Ardence as a Senior Pre-Sales Engineer.  In December 2006, Ardence, Inc. was acquired by Citrix, and during the transition I took on the role of Senior Product Manager, thus beginning my career as a product manager.


Currently I am involved with Citrix’s cloud computing initiatives working specifically on the Citrix OpenCloud Bridge, the Citrix OpenCloud On-Boarding Solution stack and other key strategic cloud initiatives.

itc_cs_cggveritas_xeon_carousel_preview.jpgWhen three organizations recently investigated the best way to upgrade their high-performance computing, they all found the same answer: the Intel® Xeon® processor 5600 series.


Looking for more compute power to deliver improved seismic imaging services to its clients, international geophysical company CGGVeritas extensively tested its existing high-performance computing hardware against the latest processors from both Intel and other companies. It found the Intel Xeon processor 5600 series offered the best performance and value for the money.


“By upgrading our high-performance computing systems with the Intel Xeon processor 5600 series, we have been able to improve the quality of the seismic processing and imaging services we deliver to our clients,” explained Jean-Yves Blanc, chief IT architect for CGGVeritas.


Université Montpellier 2 wanted the most advanced technology available in its state-of-the-art, high-performance computing center to ensure high memory throughput for both parallel programming and future software optimization. It also needed low power consumption. Comparative benchmark tests showed the Intel Xeon processor 5600 series had both the power and efficiency it needed.


“In developing a new computing cluster, we are looking into the future to assess the direction of computing technologies and ensure we are keeping pace with new developments,” explained Anne Laurent, director of the HPC@LR Center. “The Intel Xeon processor 5600 series and IBM iDataPlex* platform certainly help us achieve this.”


For the University of Erlangen-Nürnberg, the goal was to build a new HPC platform that could process applications faster and enable researchers to run more complex calculations. It also wanted to boost the overall capacity of its platform so more people could use it at the same time, and to replace the random mixture of hardware and processors in its existing HPC system with a homogeneous computing environment that would be easier to manage and easier for researchers to use.


“Upgrading our high-performance computing platform with the Intel Xeon processor 5600 series has enabled us to better support research activities within the University,” said Dr. Gerhard Wellein, head of the University’s HPC group. “Researchers can now perform more complex computing tasks in significantly reduced timeframes, allowing them quicker access to the information they need for their studies.”


To learn more, read our new CGGVeritas, Université Montpellier 2, and University of Erlangen-Nürnberg business success stories. As always, you can find these, and many more, in the Reference Room and IT Center.



*Other names and brands may be claimed as the property of others.
