
It is not too late to register for a joint webinar delivered by Red Hat, Intel, and Dell.

This webinar covers how to plan and execute your migration from SPARC/Solaris to Dell/Intel/Red Hat, popular content the companies also delivered at seminars across North America in October. You'll get:

  • A clear roadmap showing the estimated timeframe and costs for your migration
  • Effective training for you and your IT staff
  • Proven best practices to ensure a smooth implementation

Join Red Hat, Intel, and Dell on December 2, 2009 at 2pm ET to see how these open source pioneers can help you move from a RISC/UNIX environment to Red Hat Enterprise Linux.


Register here.



Nehalem-EX: Big Memory for Big Science



I was at SuperComputing’09 last week in Portland, Oregon. I talked with some brilliant people, and saw some fantastic stuff.


It was good timing on my part, because last week Intel also announced that it would offer a 6-core, frequency-optimized version of its Nehalem-EX product, due out next year. This part is intended for tackling the types of high performance computing (HPC) workloads prominently displayed at SC’09.


Most people know that the majority of HPC workloads today run on clusters of relatively small-memory, 2-socket systems. That is because most HPC workloads can be broken into smaller, discrete units of work that such clusters process efficiently. For these workloads, the primary hardware selection criterion is typically a balance of memory bandwidth and compute FLOPs (floating point operations per second).


But there are other types of HPC workloads: those that deal with very large datasets (some as large as a terabyte) and those that involve non-sequential memory access. These workloads simply aren’t easily divisible (or are inefficient to divide) into the relatively small memory footprints used in traditional clustered 2-socket HPC solutions. Examples of these bigger-memory applications can be found in a variety of fields such as weather prediction, manufacturing structure analysis, and financial services.


The high-speed processing requirements and size of these workloads put a greater premium on system memory capacity/bandwidth than on compute FLOPs.


If the larger dataset won’t fit into available memory, and dividing the dataset to spread across multiple nodes cannot easily be done, then data has to be moved between memory and hard disk. But hard disk drives are many times slower than RAM, and using them can drastically impair performance.


There are now two better alternatives to hard drives. Solid state drives (SSDs) offer fairly high data density compared to RAM, and much faster access than hard disk drives, albeit still markedly slower than RAM. The other alternative is simply to have more of the faster RAM, and that larger memory footprint is what the Nehalem-EX HPC part is aimed at.
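To see why the tier matters, here is a rough, illustrative sketch of how long it takes simply to stream a 1 TB dataset from each tier. All bandwidth figures are assumed orders of magnitude, not benchmarks:

```python
# Time to stream a 1 TB dataset from each storage tier.
# Bandwidths below are rough, illustrative assumptions, not measured numbers.
dataset_gb = 1024.0
tiers = {
    "RAM": 20000.0,   # MB/s (assumed aggregate memory bandwidth)
    "SSD": 250.0,     # MB/s (assumed sequential read rate)
    "HDD": 100.0,     # MB/s (assumed sequential read rate)
}
for name, mb_per_s in tiers.items():
    seconds = dataset_gb * 1024 / mb_per_s
    print(f"{name}: {seconds / 60:.1f} minutes")
```

And sequential streaming is the best case; the non-sequential access patterns described above widen the gap for mechanical disks dramatically.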


Nehalem-EX is the Expandable Class of Nehalem. The Expandable Class brings all the goodness of the Nehalem architecture (Xeon 5500 product line) to the HPC market, but in the form of a “super node” that has greater:




  • Core/thread count


  • Socket scaling


  • I/O and memory capacity (up to 1 terabyte in a 4 socket system)


  • Bandwidth and reliability features


  • Other features


The 6-core, frequency-optimized Nehalem-EX part has also been tuned to offer the highest core frequency possible for this chip. In creating this part, Intel is meeting the needs of HPC customers who want higher scalar performance along with the benefit of large memory capacity and bandwidth per core.


Of course the 8-core version of NHM-EX is still an option for those HPC workloads that scale well with more cores while still looking for the high memory capacity of the expandable class.


Having both 8-core and frequency-optimized 6-core versions of the NHM-EX class of processors means HPC researchers have greater choice in selecting the processor best suited to their specific workloads.


After talking with some of the researchers at SC’09 last week I’m really excited to see how the Nehalem-EX “super node” will deliver the necessary compute and memory capabilities to help those researchers solve some of their biggest challenges.




Change is hard, but it can be done, and the benefits of change usually outweigh the concerns we had before making it.


When moving a solution from a RISC architecture to a Xeon architecture, the biggest concern usually relates to whether that solution will run at the same level as it did on the previous architecture. I'm not talking about performance specifically; the question is usually whether operating systems like Linux, Windows, and Solaris on Xeon will meet your business needs for your mission-critical solutions.


Like the underlying improvements in the microprocessor, I believe there have also been major fundamental improvements in the operating systems that run on both today's and the soon-to-come next generation of microprocessors (sorry, my obligatory Nehalem-EX advertisement... coming soon in 2010). A decision made many years ago to run your solution on Unix/RISC was based on comparing all the different variables at that time and picking what was right for your business. Back then, you likely decided that your solution would not run on these operating systems, or that they were not suitable for your mission-critical workloads. That was probably the right decision at that point, but like everything else, decisions get revisited based upon the here and now, and what was the right solution (and right decision) in the past may not be the right solution for your needs now.


I wanted to share some thoughts specifically on Red Hat Linux today. Let's take a look at Red Hat Enterprise Linux. Current versions of Red Hat can deliver what is required for your critical solutions. RHEL is ready, and here are some of the reasons cited by Red Hat in recent webinars on this topic, along with my interpretation of their comments:


  • Hosts real-time, global, mission-critical infrastructures and operations 24x7 - it's tried and tested by other enterprises
  • Enables 5x9s availability in highly secure environments - pretty important to most critical solutions
  • Contributes measurable reductions to TCO and enables agile, standardized, and virtualized infrastructures - TCO benefits through standardization
  • Has major ISVs on board, with the majority of third-party Unix applications having Linux and/or Windows versions available - the ISVs that traditionally delivered Unix-based applications to you also have versions supported on Linux/Windows
  • Many customer-unique applications are developed in programming languages such as C, C++, or Java (including J2EE) and can be migrated to Linux and/or Windows - your applications can be moved
  • Hosts most major database systems standard in your infrastructure - all the major databases run, and run well, on Linux



One of the other things we encounter a lot is the question of whether the technical considerations of moving from one operating system environment to another are too high to overcome and outweigh the benefits of moving. There are always technical considerations and things you need to know to move from one environment to another. However, you are not alone in trying to understand them. Red Hat has done a phenomenal job of documenting the challenges of moving from, say, Solaris to Linux, and has developed a great Strategic Migration Planning Guide, available on request. In recent webinars, Red Hat outlined some of the things you need to consider in the following technical categories:



  • Development Environment
  • Kernel tuning
  • Security
  • Filesystems
  • Debugging, tracing, and profiling
  • Command Differences
  • Deployment methods
  • Software Management
  • Virtualization
  • Application considerations



In addition to the current versions of Red Hat running on Intel architecture, we are also working very closely on future versions that will take advantage of the 20+ new RAS features planned for Nehalem-EX - more on that in a future blog.


You are not alone: resources, tools, and expertise exist to help you make that move and reap the business benefits while still delivering on the requirements of your business. Check out Red Hat's online tools for more information that dives deeper into all the areas for consideration.

We think Red Hat Linux and Xeon are ready to run your mission-critical workloads and solutions... What do you think?

I have to admit, I have never had the opportunity to be involved in HPC, supercomputing, or the communities that have evolved around them. My first real experience was yesterday at SC09 in Portland, Oregon. A conference like any other, was my thinking. But when I started walking around the exhibition area (booths), I was amazed at the number of universities and education-based solutions that were represented.


Here is a quick montage of images that I put together of the educational facilities I saw and took a picture of...


I am sure there are many, many more, but I only captured the few that are represented here.


The one on the bottom left is not really an educational institution, but rather a company I stopped and talked with for a few minutes. They essentially offer their datacenter and supercomputer infrastructure to all the educational facilities in the state of Alabama... K-12 and universities too.


Here is another picture I took of a scout troop that was visiting the event.  What a great opportunity for them to see and hear all about the most powerful computers in the world.

SC09 038.jpg


Here they are again, listening (attentively) to a speaker from NASA talking about the expansion of the universe and how we study it.

SC09 054.jpg


Education is and should remain a priority for all of us.  This is hopefully a good reminder of that.  It certainly was for me.




10 Gigabit iWARP @ SC’09!

Posted by tstachura Nov 19, 2009


What is iWARP?  (Click here to find out)


Ethernet Tom here.  I’ve recently come into the Intel Ethernet group – marketing Ethernet products in the HPC, Financial, and Cloud verticals. 


And, very relevant to HPC, I've been very busy getting things lined up for Super Computing '09 and wanted to share what we have happening:

  • 3 demos:
    • Intel Booth:  Two 6-node clusters (96 total cores!) running NYSE’s Data Fabric* middleware – showing iWARP vs. non-iWARP
    • Supermicro Booth:  4-node cluster running Fluent
    • EA Booth:  4-node cluster running Linpack in a converged Ethernet environment
  • 5 presentations:
    • Data Transportation at iWARP Speed – Feargal O’Sullivan – NYSE Technologies
    • Memory Virtualization over iWARP for Radical Application Acceleration – Tom Matson – RNA Networks
    • A 10 Gigabit Ethernet iWARP Storage Appliance – Paul Grun – System Fabric Works
    • CD-adapco Benchmarking Performance using Intel iWARP – William Meigs – Intel Corporation
    • iWARP – What & Why? – Tom Stachura – Intel Corporation

If you are here today (final day of SC’09!), stop by and check it out.  If not, I’ll come back later with some video links.


Green Storage

Posted by cebruns Nov 18, 2009





It’s not just about energy-sipping systems—it’s also about your storage footprint



Most of us are familiar with the concept of green IT: increasing energy efficiency across the enterprise to trim costs and optimize resources. While you hear a lot about servers helping to reduce energy usage, not as much is said about storage. Intel and the storage industry are working together to provide green storage solutions, too.



For the storage community, every system has to be cost-effective as well as performance-driven, which means energy efficiency is a key consideration. It starts at the processor level, where the Intel® Xeon® processor 5500 series is extending the boundaries of energy efficient performance.



Many storage system providers have picked up on the Intel Xeon processor 5500 series since it was introduced last March. For example, the HP StorageWorks XP10000* Disk Array and 3000 Enterprise* Virtual Array are based on the new processors. Schooner Information Technology appliances leverage quad-core Intel Xeon 5500 processors and half a terabyte of Intel® X25-E flash memory. The bottom line for the Schooner appliances is an 80 percent decrease in power and cooling requirements versus ordinary servers.



But green storage isn’t just about power consumption at the processor or system level. An equally important green strategy is to reduce the overall storage footprint, and a number of technologies are available to help IT organizations implement this strategy.


Virtualization is driving huge data center energy savings by greatly reducing the number of physical machines in the data center. As Bob Fine, director of product marketing at Compellent, pointed out at the 2009 Storage Networking World conference last spring, many large enterprises realize that they’re approaching a cap. “They can only get a certain amount of power in their data centers and see virtualization as a way to reduce their power requirements,” says Fine. “Instead of building new data centers, they can stay in the ones they have, saving millions of dollars in the process.”



Many IT managers tell Intel that storage can be a big gating factor when it comes to scaling virtual environments. The Intel Xeon processor 5500 series uses Intel® HT Technology within each processor core, doubling the number of threads that can be processed at the same time. This option permits more efficient workloads and enables storage servers to virtualize more applications. Intel HT Technology is also more energy efficient than traditional threaded processing.



Compellent and Hitachi Data Systems (HDS), both users of the Intel Xeon processors, recommend reducing the storage footprint in other ways as well. “Limit the amount of content you need to store by using technologies like data deduplication,” advises Asim Zaheer, vice president of product and competitive marketing at HDS. “Also, don’t have wasted capacity or wasted systems—that’s where tiered storage and virtualization come into play.” 



Compellent’s Fine sees tiered storage as especially important when using expensive disk resources like solid-state drives (SSD). By limiting SSD to the top tier, a company could save on drive costs and increase storage efficiency. “Only the active data would sit on SSD, and all the inactive data would go onto a tier-three SATA drive,” says Fine. “Since SSD drives are about 10 times the cost of Fibre Channel, it’s very important to gain those kinds of efficiencies.”



Isilon Systems, another user of Intel processors, has a pay-as-you-grow model for its clustered storage products that makes it easier to avoid over-provisioning and wasting power. If a customer needs to add more performance, Isilon can provide nodes with Intel processors and memory, but no storage. If the customer requires capacity only, Isilon sells nodes with just disks. In addition, Isilon uses ColdWatt power supplies, which it says are about 30 percent more efficient than traditional power supplies.



As Intel works with the storage industry to deliver more energy-efficient and high-performance storage solutions, we’d like to know what IT organizations are doing to implement green storage technologies in the data center. If you work in IT and have fresh perspectives to make your organization more efficient, you’re invited to share your ideas here. 


"Live From Super Computing 2009"

Posted by whlea Nov 18, 2009

This week I'm in Portland, Oregon, the place I call home. It's interesting for me since this is my first Super Computing conference, and so far I'm really impressed, not only by the intense knowledge and the plethora of scientific discovery all around, but also by the fact that this conference is so well attended. There's a huge trade show floor, filled to capacity, where you can see everything from genome research to oil and gas exploration to bio-computing. It's very cool to see NASA, Oak Ridge National Laboratory, and many top universities all showing off the latest in High Performance Computing, some very cool stuff indeed. From the point of view of higher learning and how supercomputers are changing the world, this is the place to be. Here are a few shots of the Intel booth in case you get a chance to come by and see us.


SC09-Intel Booth01.JPG


SC09-Intel Booth02.JPG


SC09-Intel Booth03.JPG

SC09-Intel Booth05.JPG


I'll be capturing some cool videos from the conference and you should keep a look out for these on Channel Intel at YouTube. Thanks for stopping by The Server Room.

I have never heard our storage processors called sexy. Ever. We just recently announced the details of our next-gen processor, Jasper Forest, which will officially start shipping in early 2010. We integrated a lot of key storage features into the Intel® Xeon® processor 5500 series, and the storage industry seems to really like it. Our animation is posted on YouTube, and one viewer called it a sexy beast. We also had an executive from HP, Dave Roberson, Senior Vice President and General Manager of the StorageWorks division, talk about how HP is designing it into future storage products. Check out the video, we are pretty excited about it!


I attended VMWorld in San Francisco and captured some video on Isilon, a great Intel-based scale-out solution. The first link is John Gallagher, Director of Product Marketing at Isilon, giving an overview of their products and how Intel adds to their solution.  He also talks about some of their more successful markets. 



The second link is a chalktalk provided by Nick Kirsch, Senior Product Manager at Isilon, in which he discusses how Isilon storage delivers scale out storage for large scale server virtualization.  I am also looking for any great Isilon success stories, so let me know!



In order to deliver on the continued promise of Moore’s Law, Intel’s Information Technology team needs to enable Intel’s silicon designers with the tools, capabilities, and streamlined processes to bring higher-performing processors to market every year. The latest generation of 45nm products (i.e., the Intel microarchitecture codenamed Nehalem) was an especially challenging project for us.



With Intel design computing demand growing an average of 45% year over year, coupled with the rich technology capabilities in the 45nm-based Nehalem microarchitecture, the computational requirements of silicon tape-out (the last stage of design before manufacturing) represented an approximately 13x increase in demand over prior 65nm processors. Staring at this demand (1.2 million hours of compute demand per day), plus a need to bring products to market faster and more efficiently, our IT team realized we needed to do something different - our standard grid computing solution, sufficient for earlier-stage design work, was insufficient for tape-out.



Solution: Intel IT built a High Performance Computing (HPC) solution that currently ranks in the Top 500 list of supercomputers (#261 and #308, Nov 09) and features a new parallel storage environment to support our 45nm silicon tape-out process. The details of this effort are captured in this whitepaper.


In summary, the Intel IT HPC solution employs two of the world’s fastest supercomputers to create the fastest microprocessors, helping Intel achieve the following results:



  • Completed 45nm tape-out in 10 days, less than HALF the time of prior products

  • Delivered an estimated incremental value of $44M to Intel



I can’t wait for what tomorrow will bring, as Intel IT is already upgrading and evolving this HPC solution to support our future generations of microprocessor designs. Tune in tomorrow at SuperComputing 2009 in Portland, where Shesha Krishnapura from Intel IT will present more details on our HPC environment, or join us December 8, 2009, from 10-12am PST for a live chat with Intel IT experts in the Server Room.


Chris (twitter)


HPC roadmap.JPG


OK, it may be that your IT department or enterprise applications are limiting your opportunity to adopt the 64-bit version of your favorite CAD application, but that inability can be very limiting to your productivity.

Here is an example from a recent discussion with some end users involved in a workstation pilot with Intel. When they moved to a 64-bit version of their favorite CAD application, the time to open a 2.5GB file dropped from 20 minutes to less than 1.

Question - How many files does your engineering team open a day? What is the cost of those 20 minutes?
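As a back-of-the-envelope illustration (every input below is a hypothetical assumption, not data from the pilot), here is what those waits can add up to over a year:

```python
# Back-of-the-envelope cost of slow file opens.
# All inputs are illustrative assumptions, not measured figures.
engineers = 20        # team size (assumed)
opens_per_day = 4     # large-file opens per engineer per day (assumed)
minutes_saved = 19    # 20 minutes (32-bit) down to ~1 minute (64-bit)
hourly_rate = 75.0    # fully loaded cost per engineer-hour (assumed)
work_days = 230       # working days per year (assumed)

hours_saved = engineers * opens_per_day * minutes_saved / 60 * work_days
annual_savings = hours_saved * hourly_rate
print(f"{hours_saved:.0f} engineer-hours/year, ~${annual_savings:,.0f}")
```

Plug in your own numbers; even conservative assumptions tend to dwarf the price of a new workstation.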

Customers operating in a 32 bit world are forced to work with smaller models.  You knew that.  And of course smaller file sizes will open faster.


Rather than working with the chassis, engine, and transmission in a single view, you need to work on each one independently. The result is that you may miss a design interference, a misalignment, or another obvious design issue, because you only had a partial view of the entire design. More rework and more delays.

Yes, but... many of the enterprise applications you use are 32-bit, and you need a 32-bit workstation environment to access those tools. That may have been true once, but with technology like Intel® Virtualization Technology for Directed I/O and Parallels™ Workstation Extreme software, you now have the opportunity for an uncompromised workstation experience. You get all the benefits of a 64-bit CAD application, and you can still work within a 32-bit environment when you need to. You can even pass data between workstation environments.


Do not be too slow to adopt a 64-bit version of your favorite CAD application; just opening files faster and working with a complete design can make the cost of a new workstation irrelevant.


To learn more about Intel® Xeon® based workstations, visit \go\workstation

Are you ready to innovate faster or explore more design options in less time than ever before?

The digital workbench, powered by two Intel Xeon 5500 processors, gives you the opportunity to create, test, and modify your idea right at your workstation. Have no doubt: workstations powered by two processors, with eight total cores, sixteen computational threads, and memory capacities up to 192GB, are proving extremely capable at analysis-driven design.

Today’s digital workbench is nothing at all like last year’s workstation, which may have struggled to design and simulate. This new breed of workstation presents you with the capability to rapidly play “what if?”

What is driving the interest in the digital workbench?

Organizations of all shapes and sizes are looking for opportunities to reduce design cycle times and associated costs without negatively impacting product performance. One potential method of achieving this is by enabling designers to consider the validity of a greater number of design concepts earlier in the design cycle. This may not only shorten design cycles, but it may also enable you to ultimately deliver a more favorable product configuration.

The product development rules are changing.

Manufacturers are recognizing that by reordering product design activities, they may be able to achieve a more efficient product development process. By empowering engineers with easy-to-use and powerful 3D conceptual design tools, together with early access to CAE applications, engineers may be able to develop the most advantageous designs before committing them to labor-intensive detailed design processes.

Isn’t this old news?

Many manufacturers agree the greatest opportunity to impact product development cost is by bringing simulation forward. That is old news. Manufacturers know that when product analysis or simulation results trail the detailed design process then product changes become extremely expensive and negatively impact new product release schedules. Worse yet, they also realize that changes made downstream in a design cycle are “last minute” and almost always imply compromises on original design goals. This, of course, cuts into the product performance and profits of the new or updated product.

Using simulation and getting results before the detailed design process begins helps ensure that the CAD models meet performance requirements, mitigating last-minute and expensive design changes.

OK, the product development rules may be changing, but I still need an expert.

No doubt, the expert is still needed. However, advancements at companies like ALTAIR, ANSYS, SIMULIA, MSC, SpaceClaim and others are all making it easier to bring simulation and analysis further upstream in the design process.

As one example, let’s look at the ANSYS Workbench platform. This solution provides an easy-to-use framework that guides the user through even complex multi-physics analyses with drag-and-drop simplicity. It supports bi-directional CAD connectivity and enables the idea of simulation-driven product development.

ANSYS is an example of what ISVs are doing to create tools that learn from the experts and export them to others who need access to their knowledge. Yes, the expert is still very much needed, but leveraging the expert’s knowledge and driving it upstream in the design process is needed even more.

The new model

Using the combined hardware and software technologies delivered through a digital workbench, engineers can now create a single digital model that gives them the ability to design, visualize and simulate their products faster than ever.

This hardware and software suite enables users to create a digital prototype and can help engineers to reduce their reliance on costly physical prototypes and get more innovative designs to market faster.

The digital workbench helps users bring together design data from all phases of the product development process into a single digital model that can be rapidly changed, tested and validated.

What can you do to test the promise of the digital workbench?

Today’s workstation can provide you with a magnificent digital canvas to create tomorrow today. You need to decide if you want to explore reordering your product design activities and potentially achieve a more efficient product development process.

Today’s workstation gives engineers a new tool that can be likened to a digital workbench. This tool, powered by two Intel Xeon 5500 series processors, hosts a suite of software applications that engineers can employ to create and test their ideas. The pliers, hammer and nails found on a workbench in a garage or basement have now been replaced with digital tools that promise to accelerate innovations via a process known as digital prototyping. Its enablers include application tools like detailed CAD, CAE and PIM. Together they represent the new digital workbench—a powerful innovation tool you can use to bring your ideas forward faster than ever before.

Are you ready to use a digital workbench?

Visit to see which workstation is right for you.


Interactive Modeling and Simulation – Come on you are kidding!!

Recent advancements in mathematical modeling, computational algorithms, and the speed of computers based on technologies like the Intel® Xeon® processor 5500 series have brought the field of computer simulation to the threshold of a new era.  While not quite interactive, simulation and analysis can now occur at a pace that impacts decisions further upstream in the design process. 

Simulation and analysis tools are also no longer the domain of the expert.  Organizations can now potentially achieve a more efficient product development process by considering a reordering of product design activities and empowering engineers with easy-to-use and powerful 3D conceptual design tools and early access to CAE applications.

Why consider reordering your product development process?

This is not new news. Manufacturers know that when product analysis or simulation results trail the detailed design process, product changes become significantly more expensive and will most likely negatively impact new product release schedules. Worse yet, they also realize that changes made downstream in any design cycle are often “last minute” and almost always imply compromises on original design goals. This, of course, cuts into the product performance and profits of the new or updated product.

By reordering product design activities, manufacturers may be able to achieve a more efficient product development process and reduce overall product development cost, time, and risk.

No experts needed.

Don’t be fooled. While ISVs such as ANSYS, ALTAIR, MSC, PTC, Siemens PLM, SIMULIA, SolidWorks, and others have made tremendous strides in making their simulation products easier to use, you probably still need an expert. However, their collective advancements in tools, wrappers, and easy-to-use frameworks that guide engineers through complex multi-physics analyses with drag-and-drop simplicity make it easier to move analysis further upstream.

That means your expert can now focus on the really hard problems.

Workgroup Computing – Bringing “Real” HPC Computing To Your Department

Using analysis and simulation to get results before the detailed design process begins will help ensure the CAD models meet performance requirements and will almost always mitigate last-minute and expensive design changes.

Large-scale, compute-intensive jobs used to require investment in, or access to, a divisionally shared, large-scale cluster housed in a controlled data center environment supporting hundreds of users.

While this may have been true a few years ago, advancements in mathematical modeling, computational algorithms, and the speed of computers based on technologies like the Intel® Xeon® processor 5500 series now make it possible to quickly and efficiently solve large-scale problems closer to the engineers responsible for them, on compute clusters supporting small workgroups or departments rather than large-scale clusters shared by hundreds of engineers.

As an example, let’s look at the Cray CX1™ deskside personal supercomputer. Like others in this new usage category, it presents an organization with a solution that is the "right size" in performance, functionality, and cost for individuals and departmental workgroups who want to harness HPC without the complexity of traditional clusters. Equipped with powerful Intel Xeon 5500 series processors, the Cray CX1 delivers the power of a high performance cluster with the ease of use and seamless integration of a workstation.

OK, You Can Give Me The Performance, But The Support Can Be A Nightmare

Intel® Cluster Ready makes HPC simpler, boosting productivity and making it easier to experience the power of high-performance computing to solve new problems.

Intel Cluster Ready presents HPC users with a certification program designed to establish a common specification among original equipment manufacturers, independent software vendors (ISVs), and others for designing, programming, and deploying high performance clusters built with Intel components.

For users, this certification means that certified HPC systems will run a wide range of Intel Cluster Ready ISV applications right out of the box. Tested, validated, and simple.

By selecting a certified Intel Cluster Ready system for your registered Intel Cluster Ready applications you can be confident that hardware and software components will work together, right out of the box. Software tools such as Intel® Cluster Checker help ensure that those components continue to work together, delivering a high level of quality and a low total cost of ownership over the course of the cluster’s lifetime.

To learn more about Intel HPC Technology visit

When intelligent hardware meets smart software, something amazing happens. It helps you to lower the cost of data. And not just a little. A lot. It’s no secret that businesses can gain strategic advantages from turning data into insights faster than their competitors. But the exponential growth of data threatens any effort to reduce costs and lower the data center’s environmental footprint. IBM is one of the leading companies helping customers optimize these trade-offs.

IBM’s next-generation database software, DB2* 9.7, offers sophisticated features designed to increase business performance and flexibility and reduce the operational costs of managing data. IBM’s deep compression technology yields compression rates of up to 83 percent, lowering storage-related costs. DB2 is fully optimized for the Intel Xeon processor 5500 series and delivers 78 percent more performance and 52 percent better performance per watt than on the Intel Xeon processor 5400 series. That’s the largest single-generation improvement since IBM and Intel began collaborating in 1996 to optimize DB2 performance on Intel-based servers. The result is faster reports and responses at a lower cost and with a smaller environmental footprint.

And the performance is easy to get. “Not only can you achieve superb performance results by combining the DB2 product with the Intel® processor, but we were able to do that with an absolute minimum amount of tuning,” said Berni Schiefer, distinguished engineer at IBM. “Through an out-of-the-box experience, anyone can achieve those results.”
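A quick back-of-envelope shows what that "up to 83 percent" compression figure can mean for storage. The 10 TB database size and per-TB cost below are hypothetical illustration values, not figures from IBM:

```python
# Back-of-envelope storage savings from deep compression, using the
# "up to 83 percent" rate quoted above. Database size and storage cost
# are hypothetical illustration values.

raw_size_tb = 10.0
compression_rate = 0.83          # fraction of space eliminated
cost_per_tb = 5_000.0            # assumed storage cost, $ per TB

compressed_tb = raw_size_tb * (1 - compression_rate)
saved_tb = raw_size_tb - compressed_tb

print(f"Compressed size: {compressed_tb:.1f} TB")
print(f"Storage saved:   {saved_tb:.1f} TB "
      f"(${saved_tb * cost_per_tb:,.0f})")
```

At these assumed numbers, a 10 TB warehouse shrinks to 1.7 TB, and the savings compound across backups and replicas.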


The Intel® Xeon® processor 5500 series includes intelligent performance that can increase frequency on demanding workloads when conditions allow and turn off processors to save energy when they’re not being used. IBM DB2 9.7 automates many time-consuming database administration tasks. For example, DB2 9.7’s self-tuning memory manager allocates system memory for top performance depending on the type of workload. In a head-to-head comparison between DB2 9.7’s self-tuning memory manager and some of IBM’s best performance engineers, the self-tuning memory manager won. Mark Budzinski, vice president and general manager for WhereScape USA, which builds data warehouses, summed it up very well: “When you consider what’s going on now with Intel’s intelligent performance and what IBM is up to with DB2 9.7, this is not business as usual. This is really game-changing technology.” Check out this video for more on this.


So how does all this stack up? According to a recent ITG report, companies that upgrade from IBM x335 servers running DB2 8.2 to new IBM x3550 M2 servers running DB2 9.7 benefit from a 59 percent reduction in total cost of ownership (TCO), a 6:1 average consolidation ratio, and a payback period of less than eight months. The bottom line: if you have four-year-old servers running a previous version of DB2, you can substantially lower your costs, reduce your environmental footprint, and achieve a rapid payback. Now is the time to upgrade your infrastructure to lower the cost of your data.
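The consolidation math is easy to run for your own fleet. Here is a minimal sketch using the 6:1 ratio and 59 percent TCO reduction quoted above; the fleet size and per-server power draws are hypothetical illustration values:

```python
# Rough consolidation math using the ITG figures quoted above (6:1
# consolidation, 59% TCO reduction). Fleet size and per-server power
# draw are hypothetical illustration values.

old_servers = 60
consolidation_ratio = 6          # old servers replaced per new server
tco_reduction = 0.59

new_servers = old_servers // consolidation_ratio
old_power_w = old_servers * 400  # assume ~400W per old server
new_power_w = new_servers * 300  # assume ~300W per new server

print(f"{old_servers} old servers -> {new_servers} new servers")
print(f"Power: {old_power_w} W -> {new_power_w} W")
print(f"TCO after refresh: {(1 - tco_reduction) * 100:.0f}% of before")
```

Swap in your own server counts and measured power numbers to estimate your payback window.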

Prior to the Intel Xeon 5500 series server platforms*, measuring server power required expensive equipment and could only be performed in a discrete fashion.  Unless you had a lot of monitoring equipment to mash up your power data, it was a tedious process.  Now, using Intel DCM and Node Manager, you can pull power information from multiple servers to make some important power decisions in your datacenter.


First of all, you need to baseline your workload.  If you're confident that you can replicate workload patterns, then you've got a starting point.  Otherwise, it's usually a good idea to start monitoring and looking for cyclical patterns and/or common data points (time, power, thermals, etc.) to keep track of.
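The baselining step can be sketched in a few lines: collect power samples over time, then pull out the idle floor, the loaded peak, and the delta between them. The sample values below are hypothetical readings shaped like the 4-server rack discussed in this post; in practice they would come from whatever Intel DCM / Node Manager query your setup exposes.

```python
# Minimal baselining sketch: summarize a series of rack power readings.
# The sample values are hypothetical, modeled on the 4-server rack
# discussed in this post (782W idle, 1174W under load).
import statistics

def baseline(samples_w):
    """Summarize a list of rack power readings (watts)."""
    return {
        "idle_w": min(samples_w),                 # approximate idle floor
        "peak_w": max(samples_w),                 # loaded peak
        "mean_w": statistics.mean(samples_w),
        "delta_w": max(samples_w) - min(samples_w),
    }

samples = [782, 785, 1120, 1170, 1174, 1168, 1150, 790, 783]
summary = baseline(samples)
print(summary)
```

With a few days of samples like this, cyclical patterns (nightly batch jobs, weekday peaks) become obvious and give you a repeatable baseline to cap against.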


In this scenario (as in my last blog) we're using a SQL workload that can be modified to run the CPU at high levels for a relatively fixed amount of time.  The base workload runs for 7 minutes 30 seconds, as shown in the Intel DCM screen capture below.



In this test case, idle power for the 4 servers is 782W, and under load the power increases to 1174W, a delta of 392W.  This increase occurs when work is given to the servers and the P/T states react to the workload, raising power and voltage to the system to increase performance.  Exactly what we've been used to seeing ever since EIST was introduced several years ago.


Now, what I'll show you is something that may be very interesting at scale... I will power cap the servers by 20W each, setting the Intel DCM power policy to allow only 1095W for the 4 servers in the rack.




What is awesome here is that we still finish the workload in the same 7 minutes 30 seconds.  So essentially, we save 80W of power for each set of 4 servers and still get the same amount of work completed!  In a large datacenter this can be HUGE in energy savings.
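Putting numbers on a single run: the capped rack draws 80W less (20W x 4 servers) for the same 7 minute 30 second workload, so the energy saved per run is straightforward to compute.

```python
# Energy saved per workload run under the cap: the 4-server rack draws
# 80W less for the same 7 minute 30 second run.

cap_savings_w = 20 * 4            # watts shaved off the rack
run_seconds = 7 * 60 + 30         # workload still finishes in 7:30

energy_saved_wh = cap_savings_w * run_seconds / 3600
print(f"{energy_saved_wh:.2f} Wh saved per run")   # 10.00 Wh
```

Ten watt-hours per run sounds small until you multiply by runs per day and racks per datacenter, which is exactly where the next bit of math goes.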



Let's do some quick math:  20W power savings per server x 10,000 servers = 200kW of power savings, and you still get the work done.  I hope I just helped some of you server admins get some new ideas for your next "I need a raise" talk with your manager.
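Scaled out, the same per-server cap savings becomes serious money. The $0.10/kWh electricity rate below is a hypothetical illustration value; plug in your own.

```python
# Scaling the per-server cap savings across a large fleet: 20W per
# server across 10,000 servers is 200kW of continuous demand removed,
# with the same work still getting done.

savings_per_server_w = 20
fleet = 10_000

total_w = savings_per_server_w * fleet
print(f"{total_w / 1000:.0f} kW saved")            # 200 kW

# At a hypothetical $0.10 per kWh, running year-round:
annual_kwh = total_w / 1000 * 24 * 365
print(f"~${annual_kwh * 0.10:,.0f} per year")      # ~$175,200 per year
```

And that is before counting the cooling power you no longer spend removing that heat.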


*Your mileage may vary, so test your own workloads and report back!


Most of the time, server ROI is measured at the data center scale: replacing tens, hundreds, or even thousands of servers with fewer, higher-performing, more energy-efficient servers.

But...have you ever wondered how much power you could save if you replaced every 4-year-old server in an entire country with Xeon 5500 (Nehalem-based) systems?  Or how much CO2 could be eliminated for those same 4-year-old servers, and how many cars that effectively removes from the road?

Well, wonder no more!  Check out this short paper for an eye-opening comparison of the UK, Germany, and France, and how big an ROI each could realize if the entire country refreshed ALL of its 4-year-old servers.  It looks at power savings, land reclamation, and monetary savings in slightly different terms, like how much floor space could be saved compared to the floor area of Notre Dame Cathedral.  You’ll need to read on to find out more... :)

Additionally, all calculations were done using the Xeon ROI tool, so check it out and come up with some more interesting comparisons based on your own city, state, or country data.  Be sure to post them here!
