What if every server in your virtualized data center was driving 10Gbps of traffic?

My team just completed a test with an end user where we drove nearly 10Gbps of traffic over Ethernet through a single Xeon 5500-based server running ESX 4.0. The workload was secure FTP. Our results will be published in the next 30 days. We’ve seen 10Gbps through a server in several other cases (notably, video streaming and network security workloads), but this is the first time we’ve really tried to run a 10Gbps “enterprise” workload in a virtualized environment. It took a fair amount of work to get the network and the solution stack working (we had to get a highly threaded open-source SSH implementation from the Pittsburgh Supercomputing Center, for example, to make it scale). We also found good value in some of our specialized network virtualization technologies (i.e., the VT-c feature known as VMDQ). But, regardless, with moderately diligent effort we got it to work at 10Gbps, and we don’t see any real barriers to doing the same in real production environments.

We also found that the solution throughput is not particularly CPU-bound; it’s “solution stack bound”. That means workloads that are more “interesting” than virtualized secure FTP and video streaming are likely to be able to source and sink more than 10Gbps per server, too. And when we get to converged fabrics like iSCSI and FCoE that put storage traffic on the same network path (or at least the same medium) as LAN traffic, we’d expect the application demand for higher Ethernet throughput to increase.

So what? Well, if you accept that virtualized servers can do interesting things and still drive 10Gbps of Ethernet traffic, you have to wonder what’s going to happen to the data center backbone network. If you have racks with 20 servers each, putting out a nominal 6Gbps of Ethernet traffic, each rack will have a flow of 120Gbps, and a row of 10 racks will need to handle 1.2Tbps. I’m not sure what backbone data center network architecture will be able to handle that kind of throughput. Fat-tree architectures help, especially if there are lots of flows between servers in close proximity in the same data center. But fat-tree networks are very new and not widely deployed. Thoughts?
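If you want to play with the assumptions, here is the same aggregation arithmetic in a few lines of Python; the per-server, per-rack, and per-row figures are just the nominal numbers from the paragraph above, nothing more:

```python
# Back-of-envelope sketch of the backbone aggregation math above.
SERVERS_PER_RACK = 20
GBPS_PER_SERVER = 6          # nominal sustained Ethernet traffic per server
RACKS_PER_ROW = 10

rack_gbps = SERVERS_PER_RACK * GBPS_PER_SERVER
row_tbps = rack_gbps * RACKS_PER_ROW / 1000

print(f"Per rack: {rack_gbps} Gbps")   # 120 Gbps
print(f"Per row:  {row_tbps} Tbps")    # 1.2 Tbps
```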

I’ve spent a lot of time talking about efficiency lately…but there’s nothing more fun than talking about pure performance.  Performance is the hum of a well-tuned engine as you downshift around a mountain curve, the wind in your face as you race down a hill on your bike, the thrill of a jet plane’s roar as it streaks across the sky.  It’s also the smoking speed of a workstation when you’re used to a standard desktop PC…and that speed just got a lot faster with the introduction of our Xeon 5500 series workstation platforms.  I recently talked to Thor Sewell and Wes Shimanek from our technical computing organization about what is shaping workstations and digital workbenches today, the key technologies driving workstation performance, and what users can expect from the compute geek’s version of the muscle car.  Check out what they had to say here.

ChrisPeters

Nehalem On Another Planet

Posted by ChrisPeters Apr 27, 2009

OK... well, still Earth. I found this Xeon 5500 proof point through my Twitter feed and thought I'd share: a cool video from The Planet, the world's largest privately held dedicated hosting company, which recently deployed the first PowerEdge R710 servers based on Intel's Xeon 5500 series processors. Check out the video here, where Urvish Vashi, general manager for The Planet's Dedicated Server Hosting business, talks about why Nehalem and why Dell 11G.

 

Chris

The Internet is abuzz over the newly launched Intel Xeon processors, with reviews showing manifold increases in server performance; for some types of applications the number is 150%. We have seen multiple records being shattered. The Xeon 55XX series is doing for the server world exactly what the Core 2 Duo did for the desktop back in 2006. The beauty of the new Xeon is that it brings something for everybody: database applications, web servers, business logic servers, IT infrastructure applications, virtualization, HPC, and so on. While IT administrators are busy reading reviews and calculating how much money they can save by replacing their aging infrastructure, I'd like to share a little information about a less-discussed feature in the new Xeon called the PCU.

 

While the new Xeon sports a brand-new architecture, the much-discussed features are the Integrated Memory Controller, QuickPath Interconnect, and Turbo Mode (anybody remember the turbo switch on computer cases back in the old days? Turbo Mode gets you turbo speed without needing the switch). But there is one thing our architects added to the Xeon architecture that is quite interesting and yet not much talked about: the Power Control Unit, or PCU. I am going to provide a simple explanation of this feature without delving into the complicated terminology of gates, phase-locked loops, and the like.

While desktop users tend not to bother much about power usage, things work differently in the server world. Data center architects and managers spend hundreds of hours crunching numbers on how to keep their data centers running cool without paying hefty electricity bills. So having a power-efficient processor under the hood of the server, one that can efficiently manage its own power consumption, means saving money not only on the server's actual power draw but also on the related cooling costs of the data center. Now that you know why an intelligent microprocessor is a big deal, let's see what this PCU thing is.

 

The PCU is an on-die microcontroller dedicated to managing the power consumption of the processor. It comes with its own firmware, gathers data from temperature sensors, monitors current and voltage, and takes inputs from the operating system. And not to forget: it takes almost a million transistors to put this microcontroller on-die. While a million sounds like a drop in the ocean in a billion-transistor processor, consider that the old Intel 486 processor had a similar transistor count and ran Windows 3.x quite well.

 

In simple terms, the PCU uses sophisticated algorithms to control the voltage applied to the individual cores, sending idle cores to an almost-shut-off level and reducing power consumption. Let me explain this in a more elaborate manner. In older-generation CPUs it wasn't possible to run each core at a different voltage, since the cores shared the same source, and idle cores still leaked power. With the new generation of Xeon, even though the four cores get voltage from a common source, the manufacturing materials Intel uses let us run each core at a different voltage level and clock each one independently at a different speed. The PCU can decide to nearly shut off an idle core by cutting voltage to it, and can intelligently increase the voltage to one or more active cores, bumping up their clock speed and making them run faster; this is what we call Turbo. To make this simpler to understand, here's a water-tap example of how it works: suppose we have a long water pipe with four taps connected to it. When only one tap is busy filling a bucket, we can turn off the other three taps and divert the water pressure to the running tap, letting it fill the bucket faster.

 

One could ask why there is a need for on-die power management when the same thing can be achieved by any operating system using ACPI power states. The PCU accepts power-state requests from the operating system, but uses its own built-in logic to doubly ensure that the OS request holds merit. There are instances where the operating system instructs the CPU to go to a lower power state only to wake it up the next moment; the PCU brings fine-grained efficiency to this process and helps our customers' data centers run much cooler.
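For the programmers in the audience, here is a toy sketch of the idea. To be clear, this is a cartoon of the concept, not Intel's actual algorithm, and every number in it is made up for illustration:

```python
# Toy illustration of the PCU idea described above: power-gate idle cores
# and spend the freed power budget as extra frequency ("Turbo") on the
# busy ones. All constants are invented for the demo.

BASE_GHZ = 2.93          # nominal per-core clock (illustrative)
TURBO_STEP_GHZ = 0.133   # one "turbo bin" per gated core (illustrative)
MAX_TURBO_BINS = 2       # cap on how far a busy core may be boosted

def pcu_decide(core_busy):
    """core_busy: list of booleans, one per core.
    Returns the clock each core runs at after 'PCU' arbitration."""
    idle = core_busy.count(False)
    bins = min(idle, MAX_TURBO_BINS)
    return [BASE_GHZ + bins * TURBO_STEP_GHZ if busy else 0.0  # 0.0 = power-gated
            for busy in core_busy]

print(pcu_decide([True, True, True, True]))    # all busy: everyone at base clock
print(pcu_decide([True, False, False, False])) # one busy: idles gated, active core boosted
```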

The Intel Xeon processor 5500 series is the new world-record holder in more than 30 top performance benchmarks for 2-socket servers. Check out this video with Pat Gelsinger at the launch event in Santa Clara.

 

 

You can also check out all the performance results here: Server Performance Summary - Intel® Xeon® Processor

In 1934, the legendary investment educators Ben Graham and David Dodd published the seminal stock-investment book of the 20th century. "Security Analysis" provides a clear framework for value investing, unlocking the capital value of a company and creating a "margin of safety" in investments. In many respects, these principles became the backbone of Warren Buffett's investment criteria and of the great wealth that his companies have created for their shareholders and employees alike.

 

...But what does that have to do with virtualization, cloud computing, and VMware's vSphere launch this week? Everything!

 

As global economic conditions careen on a daily basis, it has become incumbent on IT professionals, technologists, and designers to constantly re-evaluate the product development, deployment, and usage models we enable. VMware's launch of vSphere, in my opinion, is a leading indicator of this transition. After years of technology collaboration and design with Intel, vSphere introduces new usage models for disaster recovery, zero-downtime maintenance, and flexibility that have yet to be realized in the x86 environment. The Distributed Resource Scheduler continues to add features for virtual machine mobility, management, and power optimization. Many customers I speak to regard the rapid application deployment capability of virtualization as a differentiating factor for deploying virtualization widely. vSphere 4 enhances this capability while adding additional storage, high availability, and 10 Gigabit Ethernet support for the new Intel platforms being introduced.

 

All of these features were designed to increase performance and efficiency: to create value that is worth more deployed than it costs to maintain your current assets. vSphere combined with the new Intel Xeon 5500 series processor family quite honestly outperforms all the expectations we had originally forecast. We suspected that the architectural enhancements, combined with the virtualization technology collaboration, would allow for an 80-100% performance improvement over previous generations. We suspected that the page-table optimizations, QuickPath Interconnect, and Hyper-Threading would be the key drivers...we were right. However, the new product launch has already delivered performance increases of 160% and more over previous generations of VMware when combined with the Intel Xeon 5500 series processor family from Dell, HP, Cisco, and IBM...exceeding our expectations by 60%. The performance results are very much like the disciples of "Security Analysis": ahead of expectations and ahead of all others in the marketplace.

 

Next week I will be delivering an online webcast, one of the many I do each year, to discuss this further, and I will also spend some time discussing cloud computing direction (link is below). vSphere is being marketed as the industry's first operating system for cloud computing, and on this point I have to disagree. vSphere is a foundational technology that will help enable enterprise cloud deployments; Paul Maritz and team have a solid vision for the cloud, but vSphere falls short of being a cloud operating system for several reasons. It does not support clear integration with mobile clients and mobile data. vSphere does not have desktop operating system integration or the image management capabilities IT administrators need. Does that mean I wouldn't use vSphere in a cloud? No, it just means I would not look to vSphere as the sole technology for cloud operating environments. For managing my server, storage, and networking deployments in the enterprise cloud, this product delivers as advertised. Cloud computing is in its infancy; VMware and Intel are playing important roles in bringing to market the technologies that will become its foundation. Like Ben Graham and David Dodd, it will be critical to evaluate your investment criteria for success, find the tools that create the best value, and make the consistent investment that has a "margin of safety" you and your organization can sustain. If you follow those key steps when evaluating your decision on vSphere and the Intel Xeon 5500 series processor family, you will continue to create incremental value in your virtualization deployments without incremental expense.

 

 

Jake Smith Virtualization Webcast:

 

http://www.brighttalk.com/webcasts/3761/attend - April 29th, 2009 Webcast

 

Intel Xeon 5500 Series Processor Virtualization Performance Results:

http://www.vmware.com/products/vmmark/results.html

A friend just passed me this whitepaper that shows a 75.9% increase in database performance from just buying a slightly higher-priced processor.

 

The test was done by Principled Technologies on a new Dell PowerEdge R710 server and compares 2x Xeon E5520 (Nehalem) against the same R710 with 2x Xeon E5506 (Nehalem), running Microsoft SQL Server 2008.

 

 

I did a quick price check on Dell's online system configuration tool today, and the price difference was only $300 ($150 per processor) in the standard R710 configuration. With a base server price of $5,345, the price delta is a little more than 5% for a 75% increase in performance.
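Here is the back-of-envelope check, using only the figures quoted above:

```python
# Sanity check of the price/performance claim, using the whitepaper's
# gain and the Dell configurator prices quoted in this post.
base_server_price = 5345.0   # standard R710 configuration
cpu_upgrade_delta = 300.0    # 2 x $150 to step from E5506 to E5520
perf_gain = 0.759            # 75.9% database performance increase

price_increase = cpu_upgrade_delta / base_server_price
print(f"Price increase: {price_increase:.1%}")                          # ~5.6%
print(f"Perf per dollar: {(1 + perf_gain) / (1 + price_increase):.2f}x") # ~1.67x
```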

 

 

Wow! … a 75% gain in database performance for just $300

 

 

 

Thanks Dell.

 

Of course, you'll need a Nehalem-based server first - which can pay for itself in an estimated 8 months.

 



       

Back in the ‘dot-com’ days, many companies would build data centers across the globe with one thing in mind: performance. Costs weren’t an issue; it was all about getting the job done. Well, times have changed, and companies have become more energy conscious, not only to be better stewards of natural resources, but because consumers are looking for companies that can design and develop products to meet their own ‘green energy’ power needs. It’s no longer enough to build something ‘best of class’; it also has to be ‘efficient’ while being the best.

 

Corporate initiatives to reduce power but still “KTBR” (Keep the Business Running) are imperative to sustaining business today.  Not only do you need the best-performing servers, but they also need to be efficient at what they do.  Most of us would rather cut overhead costs through energy efficiency than through headcount cuts.  It’s better for the environment, better for the business, and benefits everyone.

 

Enterprise companies have been ‘going green’ for a while now.  Initiatives like Climate Savers, LessWatts.org, and others have been pushing the technical envelope on how to reduce power usage for businesses large and small.  Intel has a large hand in driving power conservation.  People need computers, computers need power, and power gets used; but can that power be reduced while still delivering the same experience?  With the Xeon 5500 series platforms, the answer is a resounding YES!

 

You’ve most likely read about the performance stats around the Intel Xeon 5500 (Nehalem) processors, but I want to show you how their efficiency can give back to your enterprise.  Not only does the Xeon 5500 series give you performance and efficiency; there is more technology ‘under the hood’ worth a look.  Intel Intelligent Power Node Manager has been released with the Intel 5520 and 5500 chipsets (previously codenamed Tylersburg-EP).

 

There are several scenarios we could go through concerning server workloads, but let’s take a real-world example from Company “X” (I can’t tell you who just yet), whose workloads have been stable and growing over the past few years.  One of their issues is that the servers are heavily worked at the beginning of the week, pushing the server farm to 85-90% utilization.  The work tapers off gradually over the week, and by Friday server utilization is around 10-20%.

[Chart: weekly server utilization and power draw before the Xeon 5500 deployment]

       

As you can see in the chart above, the server farm would start the week at 85-90% load, and the servers would run at full power the entire week. That burned energy at the 90% level even though the workload had died off to about 15% by the end of the week - not very efficient.  It’s like leaving the stove on all day long and using it for only the few minutes it takes to cook a meal.

 

       

Once we brought in the Xeon 5500-based systems, we also enabled ACPI power management, whose effect is much more pronounced on the Xeon 5500 because of the increased number of P-states.  We were then able to add power capping using Node Manager to limit the racks’ power usage to the daily requirements of the customer workloads.  This let us have the power available when needed, and reduce it when the workloads aren’t as power hungry.

[Chart: weekly server utilization and power draw after the Xeon 5500 deployment, with power capping]

       

Another key goal we’re going after is increasing server density using Node Manager and Intel Data Center Manager, by measuring power usage and capping the maximum power used by the entire rack.  The benefit is that with Intel Intelligent Power Node Manager the data comes in real time, and we can modify the power curves on a regular basis through the server console.
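To make the idea concrete, here is a minimal sketch of what a daily capping policy amounts to. The `Rack` class and its methods are hypothetical stand-ins invented for this illustration; real Node Manager capping is driven over IPMI, typically through a console like Intel Data Center Manager, not through an API like this:

```python
# Minimal sketch of a rack power-capping policy following the weekly
# workload curve described above. Everything here is illustrative.

import time

# Illustrative caps per weekday (Mon=0 ... Sun=6), in watts
WEEKDAY_CAP_WATTS = {0: 9000, 1: 9000, 2: 8000, 3: 7000, 4: 5500, 5: 4500, 6: 4500}

class Rack:
    """Stub standing in for a Node Manager-managed rack (hypothetical)."""
    def read_power_watts(self):
        return 6200                      # canned real-time reading for the demo
    def set_power_cap_watts(self, watts):
        self.cap = watts                 # a real console would push this via IPMI

def apply_daily_rack_cap(rack):
    """Lower the rack's power ceiling when the workload curve says we can."""
    weekday = time.localtime().tm_wday
    cap = WEEKDAY_CAP_WATTS[weekday]
    print(f"day={weekday} power={rack.read_power_watts()}W -> cap={cap}W")
    rack.set_power_cap_watts(cap)

apply_daily_rack_cap(Rack())
```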

 

As for many customers, power savings can have a large impact on your overhead costs.  If this sounds like a solution your company could benefit from, then definitely ask your favorite OEM when their Node Manager-enabled Xeon 5500 series platform will be available.  Intel Server Products are available today and ready for your data center; the estimated ROI is a short 8 months, so a little green goes a long way.









An interesting new article posted on Computerworld explains how the State of Indiana saved nearly $14M by consolidating their seven data centers into one (plus a second for disaster recovery) while also reducing server count by one-third through virtualization.

 

We’ve learned that they did this by standardizing on the 4-socket Intel Xeon 7300 processor-based Dell PowerEdge R900.  This is a great example of an innovative IT department conducting large-scale server consolidation to improve operational efficiency and reduce costs.

 

Dell IT’s own global standard virtualization model incorporates the 4-socket Intel® Xeon® 7300 processor-based Dell PowerEdge R900 as its centerpiece; they have virtualized more than 5,000 servers and saved the company over US$29 million using a scalable, worldwide virtualization model.  Click here to learn more about that as well.

 

Come talk to us if you’re not seeing similar consolidation benefits or savings; we’d like to help.

 

bryce

[This is cross-posted from my Cisco blog]

 

 

As mentioned on the IPTV broadcast yesterday, the preliminary benchmarking for our new blade servers for the Unified Computing System is pretty darn good: something along the lines of 164% faster than previous-generation Intel-based two-socket systems.  I think this not only makes a clear case for upgrading to the Intel Xeon 5500 series processor; in the same way you would not put a Ferrari engine in a Cavalier, you also want to upgrade to a system designed to take advantage of that kind of performance, not one just retrofitted to deal with it.

Here is a rundown of our preliminary results for some key industry benchmarks that cover a variety of workloads:

 

[Table: Cisco UCS preliminary results across key industry benchmarks]

 

 

The VMware VMmark® benchmark indicates a system’s capacity to scale when running virtualized environments. Preliminary benchmark results using a release candidate of VMware’s next version of ESX show a score of 24.14 running 17 benchmark tiles, an improvement of 164 percent over the prior top-scoring two-socket system based on previous-generation Intel processors.

Cisco UCS performance on the SPECfp® rate_base2006 benchmark is 194, showing high performance for multiple floating-point workloads running in parallel, and demonstrating an improvement of 125 percent over previous-generation systems. Similarly, a SPECint® rate_base2006 result of 239 demonstrates high performance on integer compute-intensive workloads, delivering an improvement of 71 percent over the prior top-scoring system.

The SPECjbb® benchmark demonstrates performance on Java software workloads that place intensive multithreaded workload demands on systems, indicating performance on multi-tier Web server environments and Web 2.0 applications. Cisco UCS performance of 556792 demonstrates breakthrough performance on Oracle JRockit running on the Microsoft Windows 2008 operating system.

Cisco UCS B-Series Blade Server performance extends beyond enterprise applications and into high-performance scientific and engineering workloads. The SPEComp® MBase2001 benchmark is designed to test the limits of shared-memory, symmetric multiprocessing systems. The Cisco UCS B-Series Blade Server score of 43593 (currently under review) demonstrates industry leadership along with a performance increase of 154 percent over previous-generation systems.

The key thing to point out is that while we are providing Ferrari performance, we are doing it with a Prius energy footprint.  As I noted in my recent conversation with Intel’s Ed Groden, we see up to 9:1 consolidation with the Intel Xeon 5500 processors, so 184 single-core servers could be collapsed into 21 Intel Xeon 5500 series systems, cutting energy costs by 90%.





Sure, Intel® Xeon® 5500 series processors represent a quantum leap forward in both performance and energy efficiency. That has been proven in a number of test results and reviews.  But for your back-end, data-demanding enterprise app deployments, large-scale server consolidation, or virtualization of business-critical applications, Intel® Xeon® 7400 series processors offer outstanding performance and performance per watt in 4-socket servers. So which platform do you choose, especially when this decision is likely to be the key determining factor for capital savings, efficiency, and TCO in your data center infrastructure? Well, you’ve read a lot about Xeon 5500 series Nehalem servers over the last few weeks.  Let me share some reasons to consider a Xeon 7400 series 4-socket server when you are presented with the choice between Intel’s two best-of-breed products for virtualization.

4-socket and larger servers (Xeon 7400) are purpose-built, just like a large truck: they’re purpose-built for your most data-demanding enterprise applications, like database and ERP, and for large-scale server consolidation using virtualization. Large trucks are purpose-built for hauling large loads over long distances.   Now, you don’t buy a large truck to commute to work in.  You also don’t take your everyday commuter car and attempt to haul large loads with it, because if you did you would be significantly undersized (you’ve all seen those cars on the road with rear tires about to pop under the weight of a pallet of heavy goods tied on top).

More Resources Matter for 4 Socket MP Workloads:
The apps and workloads listed above benefit from the expanded feature set of 4-socket Xeon 7400-based servers: more processors (4 vs. 2), more cores (24 vs. 8), more memory (32 DIMMs vs. 18 DIMMs), more I/O capacity (7 slots vs. 4), and larger cache (16MB vs. 8MB).  These features, and what they enable, are why MP server buying patterns have remained stable with IT for the last 5 years and, according to IDC, will continue to be stable for the foreseeable future.

But in today’s economy there may be MP customers out there who will want to push the envelope and attempt to deploy less expensive 2S systems for traditional 4S solutions. Would doing so pencil out from a TCO perspective? Let’s take a look at two virtualization usage examples and find out.

Large Scale Server Consolidation: Where almost 2x the memory matters.

In this scenario, an IT manager is dealing with numerous corporate acquisitions made across the country prior to the economic downturn, with servers that now need to be consolidated to cut costs quickly.  The goal is to convert 1000 older, underutilized 2S servers.  He (or she) converts these to 1000 VMs and transfers them electronically to the central data center.   He determines that these infrastructure apps, when consolidated, generally run into memory constraints before they run into processor constraints, so for his candidate solutions he compares a 4-socket server with Xeon X7460 processors vs. a new 2-socket server with Xeon X5570 processors.   He fully loads both systems with 4GB DIMMs (128GB on 4S vs. 72GB on 2S) and assigns 4GB of memory to each VM deployed (32 VMs per server, resulting in about 32 new 4S servers, vs. 18 VMs per server, resulting in 56 new 2S servers).

Now, he populates the 4S solution with only 2 Xeon 7400 processors, which still lets him use all 128GB of memory on the 4S servers while paying lower VMware licensing costs.  Price these systems out on Dell’s, HP’s, IBM’s, or Sun’s websites, and the Xeon X7460 servers will be in the $15k-$20k range vs. the $10k-$12k range for the Xeon X5570-based servers (i.e., roughly 1.5x higher for the 4S server).  Add VMware license costs, power/cooling, LAN/SAN cabling, and system maintenance costs, and you’ll see the 4S solution offers a lower cost per VM (a rough sizing sketch follows below).
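Here is the sizing arithmetic as a sketch. The prices are simply midpoints of the list-price ranges quoted above, and the output covers hardware only; per-server licensing, power/cooling, and cabling costs widen the 4S advantage further:

```python
# Memory-bound consolidation sizing for the example above.
import math

VMS_NEEDED = 1000
VM_MEM_GB = 4

def servers_needed(server_mem_gb):
    vms_per_server = server_mem_gb // VM_MEM_GB
    return vms_per_server, math.ceil(VMS_NEEDED / vms_per_server)

# Prices are assumed midpoints of the quoted ranges, for illustration only.
for name, mem_gb, price in [("4S Xeon X7460", 128, 17500),
                            ("2S Xeon X5570", 72, 11000)]:
    vms, servers = servers_needed(mem_gb)
    print(f"{name}: {vms} VMs/server, {servers} servers, "
          f"~${servers * price / VMS_NEEDED:,.0f} hardware per VM")
# 4S: 32 VMs/server, 32 servers, ~$560/VM
# 2S: 18 VMs/server, 56 servers, ~$616/VM
```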

Virtualizing Business Critical Workloads: Where 3x the Processor Cores matter.

In the previous example, we were looking to maximize consolidation ratios.  In this example, we’re looking to achieve predictable high performance for a business-critical app.  Solutions like ERP that are put into a virtualized environment perform best when run without oversubscription, where you set the total number of virtual CPUs equal to the number of physical cores available on the platform.  This delivers relatively more predictable performance for all VMs and is the way IT@Intel intends to deploy ERP in a virtualized environment as they begin to test it (read more about this in the new whitepaper).  In this example, we’ll convert ~100 non-production ERP instances (i.e., the instances used for QA, dev, and production break-fix).  We’ll assign 2 virtual CPUs and 8GB of memory to each instance.  The 4-socket Xeon 7400 processor-based systems (with 96GB of memory) have a total of 24 cores and a list price of about $25k.  This allows us to run 12 virtual machines without oversubscription on the MP servers and enables the 100 ERP instances to be consolidated down to about 8 MP (4-socket) servers.  Since the Xeon 5500-based servers have just 8 cores, the IT manager avoids oversubscription by deploying 4 virtual machines per server, consolidating down to 25 DP (2-socket) servers with 32GB of memory and a list price of about $8k per server.  Include the costs of the hardware, VMware ESX licenses, power/cooling, cabling, and server maintenance, and the MP (4-socket) solution here also offers a lower cost per VM than the Xeon 5500-based DP (2-socket) solution, thanks to having 3x the processor cores per server.
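The same arithmetic, now core-bound rather than memory-bound. The per-socket license and per-server operating figures here are illustrative assumptions, not quotes, and rounding server counts up gives 9 and 25 (the post rounds the first to "about 8"):

```python
# Core-bound (no oversubscription) ERP sizing for the example above.
import math

INSTANCES = 100
VCPUS_PER_VM = 2
PER_SOCKET_LICENSE = 2800   # assumed ESX license per socket (illustrative)
PER_SERVER_OPEX = 1500      # assumed power/cooling/cabling/maintenance per server

for name, sockets, cores, price in [("4S Xeon 7400", 4, 24, 25000),
                                    ("2S Xeon 5500", 2, 8, 8000)]:
    vms_per_server = cores // VCPUS_PER_VM            # vCPUs == physical cores
    servers = math.ceil(INSTANCES / vms_per_server)   # 9 vs. 25
    total = servers * (price + sockets * PER_SOCKET_LICENSE + PER_SERVER_OPEX)
    print(f"{name}: {servers} servers, ~${total / INSTANCES:,.0f} per VM")
# 4S: 9 servers, ~$3,393 per VM
# 2S: 25 servers, ~$3,775 per VM
```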

When you are deploying your most data-demanding enterprise applications and implementing large-scale server consolidation, Xeon 7400-based servers represent a very intelligent choice.

Let me know what you think.

bryce

We all live in the information age and are constantly bombarded by more messages and information than we can realistically consume (sorry for adding to it with this blog). I love Facebook, am involved in LinkedIn, and am exploring Twitter (as a newbie). These forums and tools are really cool, and if I wanted to, I could spend my life playing with them.  Last week, someone described Twitter as a river of information and advised me to jump in and paddle downstream, not upstream.

 

So, that is the input I’m looking for on this blog. I just posted 2 documents as resources in the Server Room about the new Intel Xeon processor 5500 (Nehalem) product.

 

• Sales Brief – 2-page brief focused on the benefits of purchasing new servers

• Product Brief – 12-page brief covering all the usages, technology, and benefits inside the new servers

 

I’m told that people don’t like “sales” briefs.  Personally I like short and sweet, but give me the tools to dig into more detail if I want. 

That’s just my style – what’s yours?

 

1) Which do you like more, if you have a preference?

• Sales Brief

• Product Brief

 

2) What do you do?

• IT – I deploy server technology to benefit my business.

• Sales – I sell or re-sell technology to IT and business owners.

• Developer – I design and build servers or use them to design software products.

I'll be up front: I really don't know what Britney Spears, Miley Cyrus, or Susan Boyle would say about moving from RISC to the Xeon 5500 processor! What I can share is the feedback I'm getting directly from customers. I'm currently out on the road and have gotten some real feedback, straight from customers, on why they are looking at migrating their solutions from RISC processors to Xeon processors.

 

Over the past couple of days I have had the opportunity to meet with individual customers and to host a roundtable with several customers to discuss their plans to replace their RISC-based infrastructure. The conversation has been very open and frank, and has not been about 'should I move' but rather 'how do I make the move'. As you might expect, the down economy is placing a big tax on the ability of IT organizations to support their business units' need for organic growth in a flat-to-down IT spending environment. A big priority for most of the customers I spoke with is how to reduce their overall TCO while still meeting the increased demands placed on IT by their business partners. Most of these customers are already engaged in active projects to assess moving from RISC, or are building their plans to make the migration.

 

During the roundtable I had the opportunity to share the latest Xeon 5500 processor performance comparisons vs. the main SPARC- and POWER-based solutions out there. There was great rejoicing and joy (OK, I'm taking poetic license here) in the roundtable when we shared some of the results we highlighted at the Xeon 5500 processor launch just over 3 weeks ago. So I want to spread the joy and let you read the performance and price-performance benefits for yourself.

 

We compared the Xeon 5570 processor vs. the top UltraSPARC T2+ in a 2-socket configuration. We took the best published results on spec.org and SAP (so no funny games at play). The results, with the best UltraSPARC T2+ taken as the baseline of 1 against the best Xeon 5500, were amazing:

- 20% better on SAP SD

- 62% better Java performance on SPECjbb2005

- 69% better integer performance on SPECint_rate2006

- 75% better floating-point performance on SPECfp_rate2006

But the best bit was the cost competitiveness of the Xeon 5500 solutions. Comparing both solutions with 32GB of memory, the Xeon 5500-based solutions are offered at approximately $11,000, whereas the UltraSPARC T2+ is at $36,000.

 

Comparing the Xeon 5570 processor vs. the top POWER6 in a 2-socket configuration gave even more staggering results. At the roundtable today, customers were amazed. They keep hearing that POWER6 has leading performance, and that more GHz means better performance. Right? Wrong, and I noticed many customers scribbling down the comparisons. Again taking 1 as the baseline for the POWER results:

- 150% better on SAP SD

- 190% better Java performance on SPECjbb2005

- 126% better integer performance on SPECint_rate2006

- 90% better floating-point performance on SPECfp_rate2006

But again the best bit was cost competitiveness: comparing both solutions with 32GB of memory, the Xeon 5500-based solutions are 92% less expensive than equivalent POWER6 offerings.

 

I only shared the specific comparisons vs. RISC and have not gone into the architectural advancements of the Xeon 5500 processor and how it addresses the real business needs that have been flagged to us. There have been lots of other blogs out in cyberspace over the last few weeks on the improvements in I/O, low latency, etc., so you don't need my 2 cents.

 

I think now is the time to make the move from RISC, what do you think?

In this economy, saving money is job one. I came across an invitation today to an online e-seminar titled “Simplify and Save – Refresh Done Right with Dell”, hosted by Ziff Davis and scheduled for April 28, 2009 at 12 p.m. EST.

 

The introduction to this e-seminar discusses how challenging the status quo and older beliefs, and changing your purchase paradigm, can yield dramatic savings and benefits for IT and the business.

 

The "old" approach to saving money on server upgrades was to put them off - that's not only no longer true - it's actually more expensive. … Utilizing the latest server technology improves your IT business agility, making you more responsive to the business while substantially lowering administrative and TCO lifecycle costs.

If you followed my previous blog post titled Xeon 5500 (Nehalem): An Intelligent Server? where I discussed the ROI savings that can be achieved from replacing older servers with new Intel Xeon processor 5500 series, I think you’ll like this e-seminar.  Hear from a leading server manufacturer on the benefits you can get with Dell’s PowerEdge Servers.

Unfortunately, I will miss this e-seminar, since I will be traveling on business next week talking to customers about this same topic. So I’m interested in hearing back from you about it.

 

Register for this event today and come back and let me know your impressions.

 

Chris

http://twitter.com/Chris_P_Intel

I've been in Las Vegas this week for the Blades Systems Insight event, talking about data center transformation and efficiency (no white tiger sightings...just technology this week in Vegas).  This event draws attendees who are deploying high-density compute platforms in their data centers and dealing with the power and cooling challenges that come with these environments. So I was excited to share some of Intel's thoughts on power and cooling optimization beyond pure system refresh.  If you read the blogs on the Server Room you know plenty about the compelling financial benefits associated with refresh...and if you haven't seen this yet, check out my friend Chris Peters' blog here.

 

But back to the show and the shower curtains... If you dig a bit deeper into the challenge of data center efficiency, three primary focus areas emerge:

 

Power: The underlying power cabling and infrastructure coming into your data center.  Ultimately you want the most efficient power delivery possible.

 

Cooling: The HVAC systems, fans, and ducting installed to remove heat from your data center and let you avoid thermal environments that make Las Vegas feel chilly.

 

Compute: The server, network, and storage gear that drives business productivity for your organization.  This is why you have data centers to begin with, so the ultimate goal is to maximize the percentage of power flowing to compute, and the productivity gained from every kW of power within your compute infrastructure.

 

At the Blades event we discussed the impact of high-density environments on this fragile ecosystem.  High-density environments require more power: more than the typical 750W per square foot that an average rack requires, and far more than the 75-100W per square foot that a typical data center facility supports.  They also produce a lot of heat that must be handled by cooling systems that are often close to their capacity.  So how much density is a good thing for data centers, and how do we close the gap between power delivered and power required?  I'd like to offer a few concepts, but ultimately every data center is different...so I'd love to hear how you've dealt with this as well. In this blog I'm going to start with cooling capacity, as there are a lot of options to consider:

 

#1 Warmer data centers.  ASHRAE recently updated its data center temperature and humidity recommendations to a range of 18-27°C.  This means server inlet temps can be set higher than many data centers run today.  The first step is to measure your server inlet temp to get a picture of what your facility is operating at, check your manufacturer's warranty spec, and measure the difference in power usage when you alter the data center temp; remember to take before-and-after readings of your cooling power usage.

 

#2 Cool aisle containment: This is a pretty simple concept: placing barriers to control cool air and confine it to the area where servers need it.  Think of it as constructing a wall or ceiling around the cool aisle to control air flow.  So what are these walls made of? I've seen them made of plexiglass and plastic sheeting...and this week at the conference I heard about one of the largest banks in America, which is experimenting with shower curtains to control air flow and reporting a 15-degree drop in temperature after installation.  Now...last time I checked, a shower curtain costs a few bucks, so we're not talking about a major investment to test this in your data center.

 

#3 Ambient air cooling: Even in Las Vegas, data centers are using outside, filtered ambient-air economizers instead of their chillers to deliver cooled air for at least part of the year.  The concept is simple: it's like using your furnace's fan setting to cool your house instead of the AC, and in many regions of the country you can use it much of the year at a fraction of the cost of running a chiller.

 

#4 Liquid-cooled cabinets: Think of these essentially as a good Sub-Zero for the data center, especially applicable to the high-density environments we were focused on at the blade conference.  They contain a rack of compute equipment and chill it using liquid cooling.  This is a great way to isolate highly dense racks from your data center cooling equation completely, and it works especially well in heterogeneous environments where cooling requirements vary from rack to rack.

 

I will be back to you on the power and compute vectors next...in the meantime I'd love to hear whether your data center has implemented any of these approaches, and any results you've been able to measure.

OK, so we launched the Xeon 5500 processor-based servers and workstations a couple of weeks ago. While I don’t have direct quotes of support from Britney, Miley, Susan, or any country’s president who has signed economic stimulus into law, I am pretty confident that if they were ever actually considering purchasing a server or workstation, they would conclude that the new Xeon 5500 platforms are their best choice.

 

I had the privilege of being at one of the thirty-seven worldwide Xeon 5500 launch events. I was on Wall Street and attended the NASDAQ launch event on March 31st. Depending on which data source estimate you look at, financial services as a whole represents about 20% of the worldwide market for servers. It was also evident when meeting with customers in the NYC area that they are passionate about performance and power consumption. Most of them had received pre-production seed systems and had already done extensive testing prior to the launch event. I have been in Intel’s Server Platform Group for over a decade now, and I have never seen so much enthusiasm for a product launch.

 

I won’t rehash the performance benchmarks and performance-per-watt data; there are many benchmarks, blogs, and press articles doing that. What I took away from the conversations was a feeling of optimism among the end users I spoke to. Some felt that these new products would be what it takes to deliver solutions that give them a performance advantage over their competition. In few markets does that pay off more, and translate almost directly to the bottom line, than in financial services. Others felt these systems would help them keep adding to their existing data centers without needing to build a new one, thanks to the performance-per-watt improvements and the ability to replace many old servers and workstations with a few new ones.

 

Lastly, human nature being what it is, we are seeing that IT professionals want to work on cool new projects. These Xeon 5500 servers and workstations represent a shiny new toy that IT professionals can use to have a material impact on their companies’ bottom lines. To some degree the same applies to virtualization, in that it is disruptive and provides a new cost-effective way to deliver legacy solutions while enabling flexibility for future growth. The IT folks I have met who familiarize themselves with virtualization, new hardware, and advanced management techniques (power, systems, virtualization) are generally viewed within their companies as visionary leaders.

 

As we all work through this economic morass I am hopeful that with new technology introductions, and a relentless focus on efficiency, we will all emerge with a greater level of capability and a higher degree of flexibility. I also believe IT will emerge as a key asset of differentiation for companies from Wall Street to Main Street and this will place an even greater burden on delivering solutions to meet those unique needs.

 

What do you think?

 

Shannon

 

shannon.poulin@intel.com

There are literally thousands of ways a new Intel® Xeon® processor 5500 series-based server can benefit your business. I’ll spare you the complete list, but let’s start with the major ones. When you ask yourself the following questions about your business and its needs, you may just find that getting a server right now is the right decision for you.

 

  • Are your servers getting old? Things change, and with the latest server processors, things have changed a lot. The new Intel Xeon processor 5500 series offers up to nine times the performance of a server purchased just four years ago.1 Even though you’ll be outlaying cash for new equipment, spending wisely on a server now will probably save you money in the long run. From energy usage to maintenance costs to software licensing fees, it adds up. Plus, with a new server you get a new warranty and compatibility with the latest applications. That means fewer hassles for you. Don’t wait until your server breaks; you don’t want to discover the cost of lost data and business downtime.
  • Are increasing employee and data demands taxing your systems and your staff? If so, you need the processing power, energy efficiency, and reliability a Xeon-based server can deliver: 24/7 uptime, industry-leading performance, memory protection, and a server that automatically shuts down to save energy.
  • Do you want to improve productivity? The increased performance of the latest Intel Xeon-based servers enables your IT equipment and your staff to do more with less.
  • Is cost-cutting a high priority? With an Intel Xeon processor 5500 series based server, you can benefit from significant energy savings, the reduced costs of easier maintenance, and the need for fewer servers. By consolidating servers, you can save on your utility bill. Check out the estimator tool at www.intel.com/go/xeonestimator to see how much you could save.
  • Are you ready to implement new software? Some of the latest software advances demand newer server capabilities. If you’re looking to implement VMware* or Microsoft Windows Small Business Server 2008*, an Intel Xeon processor 5500 series server delivers the performance your new business applications will need.
  • Is your company still using a desktop as a server? Then now is the time to step up to a real server. Down markets are when smaller companies can take advantage of their agility. Plus, you definitely can’t afford downtime when customer service is so critical.

 

 

 

Ultimately, the biggest question is: Can you afford NOT to invest in the newest Intel Xeon processor-based servers in this economy?

 

 

 

Learn more about our new server processors:

  • Read this brochure to learn more about the advantages of Intel® processor-based servers for small and medium businesses.
  • And talk to your IT solutions provider.

 

 

 

 

Also, I’d love to hear your best reasons for buying new servers, so I can add them to the list. If you have already made the transition to the new Intel Xeon processor 5500 series, please share your story.

 

 

 

 

 

 

1 Performance increase based on Intel comparison using SPECjbb2005 business operations per second (bops) between four-year-old single-core Intel® Xeon® processor 3.8GHz with 2M cache based servers and a new Intel Xeon processor X5570 based server. Intel consolidation estimate based on replacing nine four-year-old single-core Intel Xeon processor based servers with one new Intel Xeon processor X5570 based server while maintaining SPECjbb2005 performance. Costs and return on investment have been estimated based on internal Intel analysis and are provided for information purposes only. Performance tests and ratings are measured using specific computer systems and/or components and reflect the approximate performance of Intel products as measured by those tests.  Any difference in system hardware or software design or configuration may affect actual performance.  Buyers should consult other sources of information to evaluate the performance of systems or components they are considering purchasing. For more information, visit www.intel.com/performance/server.

 

K_Lloyd

Nehalem Rocks - now use it

Posted by K_Lloyd Apr 17, 2009

It has been a couple of weeks now, and just in case anyone has forgotten: Nehalem rocks.  In my job I talk to customers every day, and even though I have become a bit jaded by the numbers associated with the new Xeon 5500 series processors, customers constantly remind me just how significant this change is.  The leap in performance is unprecedented in the history of the Xeon family, and the opportunity it creates for businesses is tremendous.  Chris has blogged a lot about the economics of refresh, and anyone who is not paying attention has a job that is just too cushy.  For the rest of you who actually worry about performance, data center power capacity, data center space, and the like: please pay attention.

 

Data center space is, for many businesses, the single most expensive "office" space they own. Couple this with the reality that demand for computing continues to grow, and that 81% of businesses report line of sight to data center capacity (power or space) overflow.

 

Any data center owner who is facing capacity challenges and not aggressively refreshing and consolidating should be "made redundant".  (opinion)

 

Some very, very round numbers to consider:

If you have servers that are 4 or 5 years old, the new Xeon 5500 series processor-based servers can be as much as 10 times faster.

Those old servers (if they are typical enterprise servers) are sitting at about 10% utilization.

 

When you refresh and consolidate you are going to virtualize. So now, let's do the simple back-of-napkin math on the opportunity:

You have 1000 servers at 10% utilization.

With virtualization you could boost utilization to 50%: a 5-to-1 consolidation, so now you have 200 servers.

The new servers are 10 times faster, so with an aggressive refresh you're down to 20 servers (see the sketch below).
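In code, for anyone who wants to try different assumptions; every figure here is just the post's round number:

```python
# The napkin math above, spelled out.
servers = 1000
old_util, target_util = 0.10, 0.50   # 10% today, 50% after virtualization
refresh_speedup = 10                 # new boxes up to 10x faster

after_virt = servers * old_util / target_util   # 5:1 consolidation -> 200
after_refresh = after_virt / refresh_speedup    # 10x faster boxes  -> 20
print(int(after_virt), int(after_refresh))      # 200 20
```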

 

Demand is not going away, and eventually you will fill up all this new capacity; of course, in the real world this isn't all going to happen on day one.

BUT, anyone complaining of capacity issues AND using old hardware must not be paying attention.  Please wake them up.

In my blog titled “Why buy for the little guy?”, I shared that I was in the market for a new home desktop to replace my existing one.  Well, today I spent my 2009 tax refund check on a new computer: an iMac.

 

So why the secrecy?  ... The iMac is a surprise for my wife’s birthday.  My wife has wanted a Mac since we left college, and I’ve always been a PC guy (grew up with one, always used them at work, etc.); when I started working at Intel almost 10 years ago, I couldn’t justify a Mac given my corporate loyalties and who was paying the bills.  When Apple adopted Intel architecture a couple of years ago, my options opened up (really, my old excuses were no longer valid).

 

I’m very excited to set up the new iMac, as the limitations and headaches of my old technology will be gone - and I think I’ll finally earn some good-guy points with my wife.

 

Chris

Hello.  My name is Omar Sultan, and I am from the Cisco Data Center Solutions team.  I am a fairly regular blogger over on the Cisco Data Center blog, and I'll be spending some time over here too.  The development of the Cisco Unified Computing System launch gave us the opportunity to work with some seriously smart folks at Intel, and I got a chance to capture some of those conversations.

 

Here is a podcast that I recorded with Jake Smith from the Intel Advanced Server Technologies team.  One of the things that makes Cisco UCS work is that Intel and Cisco share a common vision for the value of virtualization in the data center and how it can be practically applied to address customer challenges.

 

Over on my blog, you can also check out a conversation with Ed Groden on why it makes sense to invest in new platforms in these challenging times--he makes a pretty good case.

 

Anyway, hope the podcasts are helpful, be sure to also check out my blog, and definitely let me know what you would like to hear about from a Cisco perspective.

 

Omar

 

Omar Sultan

Cisco Systems

With the introduction of the Intel Xeon processor 5500 series last month, I wrote a blog post arguing that server refresh is an intelligent investment because it can deliver a rapid payback. For the past few years I have been working to understand the costs and benefits of server replacement, and there are a few conclusions I can draw.

 

1) Server refresh is not a new concept.  This approach has existed for decades.  People replace technology as it ages because new software and new technologies enable better business capabilities, and because as technology ages the warranty expires and the incidence of failure increases. How many of you still have your first MP3 player?

 

2) ROI and refresh rates vary. The rate of refresh is a balance of the investment required (purchase, install, removal, validation, etc.) and the savings achieved (operational costs, cost avoidance, employee productivity), weighed against the business opportunities available to you (business growth or new business markets, cost of capital, revenue-generating investments).

 

3) One size does not fit all.  Every business looks at its financials and opportunities a little differently, and calculates its costs and savings differently.

 

So a few months ago I embarked, with some of my peers, with Intel IT, and with the industry-leading ROI and TCO consultancy Alinean, on applying what I have been learning to build an interactive tool that helps you model your savings opportunity for server refresh and replacement.

 

We identified and modeled eleven cost and savings categories (both pluses and minuses) in the server refresh ROI calculation, and made these cost-category assumptions available for you to include, exclude, or modify.  You can model and view scenario output in real time and print or email reports to share with others.
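To give a feel for the mechanics, here is a minimal sketch of the payback arithmetic a calculator like this performs. The categories and every dollar figure below are illustrative assumptions, not the Alinean model or the actual estimator:

```python
# Minimal sketch of refresh payback arithmetic (all numbers illustrative).
costs = {                      # one-time investment per new server
    "purchase": 6000,
    "install_validate": 500,
    "old_server_removal": 150,
}
annual_savings = {             # recurring savings per new server
    "energy_and_cooling": 900,
    "maintenance_contracts": 1200,
    "software_licensing": 7000,  # fewer servers -> fewer licenses
}

investment = sum(costs.values())
savings = sum(annual_savings.values())
print(f"Payback: {12 * investment / savings:.1f} months")  # ~8.8 with these inputs
```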

 

I invite you to learn more about the tool with this informal how-to-use guide, or better yet, use the tool and estimate how much you could save by replacing old servers with new ones.  Try the new Intel® Xeon® processor-based Server Refresh Savings Estimator today.

 

You can provide feedback through the tool’s registration process or by responding directly on this blog. I look forward to hearing from you either way.

 

Thanks, Chris

There has been a lot of discussion recently about whether it's better to replace or upgrade the existing CPUs in your installed base of servers rather than purchase new servers. I wanted to share some thoughts that might clarify why a new server purchase is the better option for most IT departments.


Here are some of the challenges that an IT department must face when considering replacing CPUs rather than buying new servers.

  1. Does the existing system support the new CPU? Most CPUs require specific BIOS versions; is there a BIOS update available for the server that supports the new CPU? The server motherboard may also not have been tested by the OEM with the new CPU.
  2. Has the software stack you are running on the server been validated on the new CPU?
  3. Replacing a CPU is a non-trivial exercise: it takes time, and you run the risk of damaging a working server.
    • The server must be shut down and dismantled to access the existing CPU.
    • The existing CPU/heatsink combo must be removed. The heatsinks used by OEMs in branded servers are specifically designed for the server in which they are used. These heatsinks typically have significant mass, so they are usually very firmly attached to the server chassis to prevent damage from shock and vibration in transit and in use.
      • The existing heatsink must be removed from the current CPU, which may not be easy: if the system has been in use for some time, the thermal bonding may have hardened, permanently attaching the heatsink to the CPU.
      • The heatsink must be attached to the new CPU with the appropriate thermal bonding.
      • The CPU/heatsink combo must then be correctly re-installed in the system and the system re-assembled.
    • It is also necessary to take into account that removing or changing a CPU may void or otherwise affect the system warranty.


It is conceivable that some IT folk may want to consider this approach, but the risks associated with the operation are very high, and many IT departments take the approach of not touching working systems.  If you are still not convinced, it's also worth considering:


    • Replacing the CPU in an old server may not significantly improve its energy efficiency. The latest-generation server designs not only use the latest CPUs but also incorporate many new features that improve the overall energy efficiency of the complete platform, making them a much better proposition when looking to reduce overall data centre utility bills and OpEx costs.
    • Upgrading the CPUs in an old server may expose other limitations of the server in terms of memory and I/O; this could mean having to upgrade many other parts of the server, resulting in an overall higher cost than replacing the server with a new, purpose-designed solution.

     

So, as far as I can see, very few IT departments are going to seriously consider replacing CPUs in their existing installed base, and they will look instead to deploy the latest generation of high-performance, energy-efficient server designs, i.e., servers based on the Xeon 5500 or Xeon 7400.


What’s your opinion? Are you prepared to attempt to upgrade your CPUs, or will you refresh the complete server system to get the latest technology in all elements of the server platform?

gwagnon

The buzz around Nehalem

Posted by gwagnon Apr 8, 2009

A part of my job these days is to interact with and track online press content for our server products.  The launch of the Intel Xeon processor 5500 series (codenamed Nehalem) was a big one.  Big by any standard of measurement (...except perhaps geologic time).  I thought I would share a peek at some of the metrics we have looked at following last week's product launch.

     

Metrics are always tricky, because the source of the data is always something you can question and frequently find holes in.  But if you take a bunch of data from different sources, stand back a bit, then look at it with your hands cupped over your eyes to block the shiny distractions (think big picture), you often get some actionable tidbits out of it.

     

Something as simple as pageviews is a metric of success.  The idea is that you are measuring the number of people who look at your page... then you look deeper and find that bots and search sites are also looking at the new content to categorize it, log it, and have it ready for people to search on.  So pageviews are a bit of a can of worms.  Good can, good worms, but not necessarily what you were expecting to find when you opened it.  So we look at them with some measure of caution.  Again, big picture: you do see some trends that tell a little bit of the story.

     

    Note: I only discuss and cover a few items... there are many, many, many more.

     

    First here are just a few of the landing pages that I personally keep an eye on...

    Community sites: 'The Server Room' and its new sub-community, the 'Server Solutions Insider', where you are right now.

    On Facebook, we have some fan pages: 'The Server Room' and 'Intel Xeon 5500 "Nehalem"'

     

    For the Community sites we can use a simple tool like Google Analytics to see Pageviews.  The following shows the pageview trend of 'The Server Room' over a few months.  Again, numbers are not really important as much as the pattern you can see.

    TSR_Pageviews.JPG

    Results: Weekend traffic is lower than mid-week, a couple of product launches and the 'buzz' around them drive a fair amount of traffic, and finally that last peak was pretty big compared to anything previous.  All good trends to be aware of.  The next big step is to do something with that information... and that is something to share another day.

     

    For the Facebook pages, we get some nice metrics directly from the admin tools.  Here is some trend data on the 'Intel Xeon 5500 "Nehalem"' page.

    IX5500NHM_Fans.JPG

    Results: The total number of 'Fans' (dark blue) is growing, but on a daily basis (light blue) we only see some peaks and not consistent growth.  The actionable item (assuming we consider a metric like growing Facebook fans to be of vital importance) is that we look into how to promote the site more, and make it worth people's time to join.  But, you have to look at the forum and the point of it all... not be pushy.  Then decide what to do (if anything) from there.

     

    Now we switch gears a bit and look at some external (non-Intel) sources that we like to keep an eye on.  These are journalist websites and specifically we look for certain articles and product reviews.  The ones that actually test hardware, then give their results and analysis are key to watch since their influence is vital to knowing how well a product might be perceived.  Here are some articles in particular that I found especially interesting (no particular order):

     

    The Tech Report - Intel's Xeon W5580 processors

    AnandTech.com - The Best Server CPUs part 2: the Intel "Nehalem" Xeon X5570

    The Inquirer - Nehalem proves its server mettle

    IT168.com (Chinese Language) - Intel Nehalem-EP处理器首发深度评测

    CRN - Review: Intel's New Nehalem Historic, Game Changing

    InfoWorld - Test Center: Intel's Nehalem simply sizzles

    2cpu.com - Nehalem: Xeon Gets Core i7 Upgrade

    Tecchannel (German Language) - Test: Intel Xeon X5570 Nehalem-EP

    RealWorldTech.com - Nehalem Performance Preview

    Hardware.info (Dutch Language): - Intel Xeon X5570 'Nehalem' test

    The Inquirer - Double Nehalem for double power

     

    The metrics you gather from these (as with anything) depend on what you want to measure.  If you simply want to count the number of reviews... ok, there are 11 (many more are out there).  If you want to look at the number of benchmarks where our product came out on top... ok, a large majority.  If you want to look at the 'tone' of the article... ok, that is very dependent upon the reader's mood (I'm feeling pretty good actually) and even more on the mood of the writer at the time it was written.  So, what do we get from all this?  We take all of these things and give it the 'take a few steps away' view (big picture again).  Hey, it all looks good.  Actionable items... something else to share with you another day.  ;o)

     

    All in all, a lot of good stuff to consume around the launch of the Intel Xeon Processor 5500 Series (codenamed Nehalem) products.  It is a joy to follow it, a deeper joy to be a part of it, and this product represents a 'new normal' for those of us that interact with the social media aspect of things.

     

    Leave me a note, I would be happy to explore this topic with you more.

    - GW

    In my last blog I talked about working on great projects which were “special”. Special in that everyone enjoyed coming to work, they worked well together, and part of the “magic” was we all knew we were working on something revolutionary.  Well that special, revolutionary project is now available for all to see, and it is known as the Intel® Xeon® processor 5500 series and Intel Xeon® 5500 series chipset.

     

     

    What amazes me the most about this project/platform is the incredible leap in performance compared to its previous generation platform, which was based on the Intel Xeon processor 5400 series. For a new generation platform, 20-30% improvements in performance are typical.  And 50% vs. the previous generation platform is above normal, but the new Xeon 5500 series platform outperforms the previous generation platform (Xeon 5400) by 2X or more on many benchmarks.  That’s right…nearly twice the performance!  (Click here for performance details.)

     

     

    So how did they achieve this monster leap in performance? 

     

    -          Did they double the core frequency? 

    -          No…in fact core frequency has gone down slightly.

     

     

    -          Did they double the number of cores?

    -          No…same number of cores.

     

     

    -          Did they make major changes to the CPU micro-architecture…like issuing and retiring many more instructions per clock? 

    -          No.  Same 4-instruction per clock issue/retire capabilities.

     

     

    -          Did they use a new silicon process technology? 

    -          No…both use the same 45nm process.

     

     

    -          Did they increase cache size? 

    -          No…total L2 + L3 cache size actually went down (9MB vs 12MB).

     

     

    So how did they “double” the performance?  This is what truly amazes me.  This team was able to essentially double the performance of the platform without changing the most obvious things (e.g. # of cores, CPU frequency, major micro-architecture, Si process technology or cache size).  Instead, they made many changes and optimizations to the entire “platform” as well as some incremental enhancements to the processor micro-architecture (like deeper queues)…which collectively removed bottlenecks in many different places, and the results are nothing short of fantastic. 

     

     

    The various changes added up to “major” improvements in performance.  Some of these changes are listed below…shown in comparison to the previous generation Xeon 5400 series platform, which was/is no slouch.  Even today, more than a year after its introduction, the Xeon 5400 was still the highest performing 2-socket platform on many benchmarks.  That is, until the Xeon 5500.

     

     

    Feature                              | Xeon 5400          | Xeon 5500
    # Cores                              | 4                  | 4
    Core Frequency                       | 3.4GHz             | 3.2GHz
    Instructions per clock               | 4                  | 4
    Process Technology                   | 45nm               | 45nm
    L2 Cache Size                        | 2 x 6MB            | 4 x 256KB
    L3 Cache Size                        | N/A                | 8MB
    Intel® Hyper-Threading Technology    | No                 | Yes
    Intel® Turbo Boost Technology        | No                 | Yes
    Queues and execution resources       | Baseline           | Deeper queues & more resources
    Bus Connection                       | FSB – 1.6GHz       | QPI – 6.4GT/s
    Memory Controller                    | Discrete           | Integrated
    Memory Channels (2 Socket platform)  | 4                  | 6
    Memory Type                          | DDR2 FB-DIMM       | DDR3
    Max # DIMMs (2 Socket platform)      | 16                 | 18
    Memory Frequency                     | 533, 667, 800MHz   | 800, 1066, 1333MHz
    Virtualization Features              | Intel VT           | Intel VT + Enhancements
    PCI Express                          | Gen 1              | Gen 2

     

     

    All I can say is wow!  And all this performance comes in a lower platform power envelope than the Xeon 5400.  The performance and power savings are a true testament to this team’s ability to work together and deliver a truly revolutionary product. Congratulations to the entire “Nehalem” team (aka Xeon 5500)!    Click the link below to find out more about “Nehalem”. http://www.intel.com/products/processor/xeon5000/index.htm

    kW of Power.  BTU of Cooling. Square feet of Datacenter Space.  What do they have in common?

     

    Power, Cooling and Space are resources – more specifically, constrained resources that are available to support the delivery of compute capability. As datacenter managers look at their projected compute capacity in the coming years, it becomes clear that these scarce resources will eventually run out – in fact, it’s estimated that 70% of datacenters will run out of power or cooling capacity in the next two years. Adding power or cooling capacity is expensive, and there likely isn’t any budget for that, especially in today’s economic environment. If there isn’t budget for adding resources, there surely isn’t funding to build additional datacenter space. So how does IT get past this impasse?

     

    By making more efficient use of the constrained resources with Intel® Xeon® Processor 5500.

     

    In the launch announcements and blogs over the course of the last week, you have heard about the cool features and improvements delivered by the Xeon® 5500.   9X the performance of older single core products.  Significantly reduced power consumption at all points of the load line between idle and max utilization.  Interesting nuggets by themselves, but when taken in the context of the datacenter, they are powerful capabilities that IT can use to address their resource constraint issues.  Let’s look at two scenarios at opposite ends of the spectrum.

     

    Scenario 1: Same Compute, Fewer Resources – Assuming an installed base of single core servers, you can replace the legacy servers with approximately 89% fewer Xeon® 5500 servers – fewer servers take up less datacenter space, consume less power and require less cooling – in fact, now you have headroom to add servers to meet growing compute requirements moving forward.

     

    Scenario 2: More Compute, Same Resources – For those that crave all of the compute capacity they can get their hands on, deploying Xeon® 5500 servers would increase compute capacity by 9X within the same power, cooling and floor space constraints (again, assuming a single core installed base) while consuming approximately 18% less power than the legacy servers.

     

    The kicker is that although it seems somewhat counterintuitive, when you run the numbers it actually makes financial sense to refresh old servers with Xeon® 5500 servers.  We estimate that the power and OS savings associated with Scenario 1 can pay off the investment in as little as 8 months, and those OPEX savings continue for the life of the server.
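
    To make the payback arithmetic concrete, here is a back-of-the-envelope sketch of Scenario 1. Every number in it (server counts, wattages, energy and licence costs) is an illustrative assumption, not our measured data:

    ```python
    # Hypothetical refresh-payback sketch for Scenario 1 (illustrative numbers only).
    legacy_servers = 90
    consolidation_ratio = 9          # one Xeon 5500 server per ~9 single-core servers
    new_servers = legacy_servers // consolidation_ratio   # ~89% fewer boxes

    legacy_power_w = 400             # assumed average draw per legacy server
    new_power_w = 350                # assumed average draw per new server
    kwh_cost = 0.10                  # assumed $/kWh, including cooling overhead
    os_license = 800                 # assumed annual OS licence cost per server
    cost_per_server = 6000           # assumed purchase price of a new server

    hours_per_year = 24 * 365
    power_savings = (legacy_servers * legacy_power_w - new_servers * new_power_w) \
                    / 1000 * hours_per_year * kwh_cost
    license_savings = (legacy_servers - new_servers) * os_license
    capex = new_servers * cost_per_server

    payback_months = capex / ((power_savings + license_savings) / 12)
    print(f"{new_servers} new servers, payback in ~{payback_months:.1f} months")
    ```

    With these made-up inputs the payback lands in the same ballpark as the estimate above; plug in your own power, licence and hardware numbers to see where your datacenter falls.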

     

    For both scenarios, Xeon® 5500 also delivers improved energy efficiency with Integrated Power Gates and Automated Lower Power States, which automatically and dynamically adjust power and performance to the specific needs of the work being done. Throw in system level instrumentation capabilities to report and cap system power, and you can further reduce your operating costs by adjusting the HVAC output to the specific heat output of the servers in real time.

     

    Power, cooling and space resources aren’t likely to start growing on trees, but Xeon® 5500 is a key to enabling a more efficient datacenter, to getting more out of every kilowatt, BTU and square foot available, and to driving Datacenter Efficiency.

    During a keynote at the recent VMworld EMEA event in Cannes, Dr. Wolfgang Krips, VP, SAP Managed Services postulated that the Cloud Computing industry could become like the airline industry - not in terms of its energy consumption, as has been speculated by various environmental groups and analysts, but in terms of the way IT managers buy Cloud Computing services.

     

    • Today there are full service airlines (seat reservation, in-flight meals, luggage handling - the works) and low cost airlines (open seating, bring-your-own food & pay extra for hold baggage) - you pay your money and take your choice as to the type of service you want.
    • Ticket prices vary enormously depending on routing and day/time
    • Over-booking is an accepted practice and having a ticket does not always guarantee a seat
    • Departure/Arrival times are variable - weather, air-traffic delays, etc.
    • You can buy your tickets from the airline directly, via a portal (www.expedia.com, www.opodo.com etc.), as part of a complete package - flight, hotel, car etc. - last minute, or discounted from a bucket shop.

     

    When you think forward as to where the Cloud Computing industry is going, it’s quite easy to imagine that all of these elements could be applied to future cloud offerings:

     

    • Prices will depend on the SLA offered - guaranteed uptime, data integrity, or just taking the lowest cost compute resource available.
    • Portal sites will act as brokers for the various services available and sell capacity - we are already seeing this from companies like Zimory (www.zimory.com)
    • Underutilised data centres may sell off excess capacity at discounted rates just to fill their facilities, or the popular services may raise prices to limit demand
    • Response time/completion time of a job run in the cloud will be non-deterministic - dependent on network traffic and system loading

     

     

    So, definitely food for thought as to what the future of Cloud Computing will bring and how IT might interact with the various providers in the marketplace.

     

    Are there other business models being proposed for Cloud services? I would be interested in hearing your opinions.

    The recently introduced Intel® Xeon® 5500 Series Processor, formerly code named Nehalem, brings a number of power management features that improve energy efficiency over previous generations, such as a more aggressive implementation of power proportional computing.  Depending on the server design, users of Nehalem-based servers can expect idle power consumption that is about half of the power consumed at full load, down from about two thirds in the previous generation.

     

    A less heralded capability of this new generation of servers is that users can actually adjust the server power consumption and therefore trade off power consumption against performance.  This capability is known as power capping. The power capping range is not insignificant.  For a dual socket server consuming about 300 watts at full load, the capping range is on the order of 100 watts; that is, for a fully loaded server consuming 300 watts, power consumption can be ratcheted down to about 200 watts.  The actual numbers depend on the server implementation.

    The application of this mechanism for servers deployed in a data center leads to some energy savings.  However, perhaps the most valuable aspect of this technology is the operational flexibility it confers to data center operators.

    This value comes from two capabilities:  First, power capping brings predictable power consumption within the specified power capping range, and second, servers implementing power capping offer actual power readouts as a bonus: their power supplies are PMBus(tm) enabled and their historical power consumption can be retrieved through standard APIs.

    With actual historical power data, it is possible to optimize the loading of power limited racks, whereas before the most accurate estimation of power consumption came from derated nameplate data.  The nameplate estimation for power consumption is a static measure that requires a considerable safety margin.  This conservative approach to power sizing leads to overprovisioning of power.  This was OK in those times when energy costs were a second order consideration.  That is not the case anymore.

    This technology allows dialing the power to be consumed by groups of over a thousand servers, allowing a power control authority of tens of thousands of watts in data centers.  How does power capping work?  The technology implements power control by taking advantage of the CPU voltage and frequency scaling implemented by the Nehalem architecture.  The CPUs are one of the most power consuming components in a server.  If we can regulate the power consumed by the CPUs, we can have an effect on the power consumed by the whole server.  Furthermore, if we can control the power consumed by the thousands of servers in a data center, we'll be able to alter the power consumed in that data center.

    Power control for groups of servers is attained by composing the power control capabilities of each individual server.  Likewise, power control for a server is attained by composing CPU power control, as illustrated in the figure below.  We will explain each of these constructs in the rest of this article.

    hierarchy.png

    Conceptually, power control for thousands of servers in a data center is implemented through a series of coordinated set of nested mechanisms.

     

    The lowest level is implemented through frequency and voltage scaling: the laws of physics dictate that for a given architecture, power consumption is proportional to the CPU's frequency and to the square of the voltage used to power the CPU.  There are mechanisms built into the CPU architecture that allow a certain number of discrete combinations of voltage and frequency.  Using the ACPI standard nomenclature, these discrete combinations are called P-states; the highest performing state is nominally identified as P0, and the lower power consumption states are identified as P1, P2 and so on.  A Nehalem CPU supports about ten states, the actual number depending on the processor model.  For the sake of an example, a CPU in P0 may have been assigned a voltage of 1.4 volts and a frequency of 3.6 GHz, at which point it draws about 100 watts.  As the CPU transitions to lower power states, it may have a state P4 using 1.2 volts running at 2.8 GHz and consuming about 70 watts.
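
    As a rough illustration of the relationship, here is a minimal sketch of the model P = static + k * f * V^2; the P-state table, the static power floor, and the dynamic share are all hypothetical values, chosen only so the endpoints line up with the 100-watt and 70-watt figures in the example above:

    ```python
    # Minimal sketch of the P-state power relationship: dynamic power scales
    # roughly with frequency times voltage squared, on top of a static
    # (leakage) floor. All numbers below are hypothetical.

    P_STATES = {  # state: (voltage_V, frequency_GHz)
        "P0": (1.40, 3.6),
        "P1": (1.35, 3.4),
        "P2": (1.30, 3.2),
        "P3": (1.25, 3.0),
        "P4": (1.20, 2.8),
    }

    STATIC_W = 30.0      # assumed leakage/uncore floor
    DYNAMIC_P0_W = 70.0  # assumed dynamic power at P0 (100 W total at P0)

    def estimated_power(state):
        """Estimate CPU power in a P-state using P = static + k * f * V^2."""
        v, f = P_STATES[state]
        v0, f0 = P_STATES["P0"]
        return STATIC_W + DYNAMIC_P0_W * (f * v * v) / (f0 * v0 * v0)

    for s in P_STATES:
        print(f"{s}: ~{estimated_power(s):.0f} W")   # P0 -> ~100 W, P4 -> ~70 W
    ```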

     

    The P-states by themselves can't control the power consumed by a server; the CPU itself has no mechanism to measure the power it consumes.  That mechanism is implemented by firmware running in the Nehalem chipset. This firmware implements the Intel(r) Dynamic Node Power Management technology, or Node Manager for short.  If what we want is to measure the power consumed by a server, looking only at CPU consumption does not provide the whole picture.  For this purpose, the power supplies in Node Manager-enabled servers provide actual power readings for the whole server.  It is now possible to establish a classic control feedback loop where we compare a target power against the actual power indicated by the power supplies.  The Node Manager code manipulates the P-states up or down until the desired target power is reached.  If the desired power lies between two P-states, the Node Manager code rapidly switches between the two states until the average power consumption meets the set power.  This is an implementation of another classic control scheme, affectionately called bang-bang control for obvious reasons.

    NM.png
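
    The control loop in the figure can be caricatured in a few lines of code. This is a toy illustration of the bang-bang idea, not actual Node Manager firmware; read_power() and set_pstate() here stand in for the PMBus readout and the P-state interface, which are platform-specific:

    ```python
    # Toy bang-bang power capping loop. All values are hypothetical.
    import random

    P_STATE_COUNT = 10       # assumed number of P-states for this model
    current_pstate = 0       # start at P0 (highest performance)

    def read_power():
        """Placeholder: pretend each P-state step saves ~10 W from a 300 W peak."""
        return 300 - 10 * current_pstate + random.uniform(-2, 2)  # noisy reading

    def set_pstate(p):
        global current_pstate
        current_pstate = max(0, min(P_STATE_COUNT - 1, p))

    def control_step(target_w):
        """One loop iteration: step down if over target, step up if under."""
        actual = read_power()
        if actual > target_w:
            set_pstate(current_pstate + 1)   # throttle: lower-power P-state
        elif actual < target_w:
            set_pstate(current_pstate - 1)   # relax: higher-performance P-state
        return actual

    for _ in range(50):                      # the firmware runs this continuously
        control_step(target_w=250)
    print(f"settled near P{current_pstate}")
    ```

    When the target falls between two P-states, the loop oscillates between them, which is exactly the rapid switching described above: the instantaneous power jitters, but the average meets the set point.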

    From a data center perspective, regulating power consumption of just a single server is not an interesting capability.  We need the means to control servers as a group, and just as we were able to obtain power supply readouts for one server, we need to monitor the power for the group of servers to allow meeting a global power target for that group of servers.  This function is provided by a software development kit (SDK), the Intel(r) Data Center Manager or Intel DCM for short. Notice that DCM implements a feedback control mechanism very similar to the mechanism that regulates power consumption for a single server, but at a much larger scale.  Instead of watching one or two power supplies, DCM oversees the power consumption of multiple servers or "nodes", whose number can range up to thousands.

     

    dcm.png
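
    To give a flavor of the group-level idea, here is a sketch that turns a group power budget into per-node caps. The simple proportional split is an assumption for illustration only; the actual DCM SDK applies its own policies:

    ```python
    # Sketch of group-level capping: split a group power budget across nodes
    # in proportion to their current draw. Node names and readings are made up.

    def apportion_budget(group_budget_w, node_readings_w):
        """Return a per-node cap for each node, proportional to current draw."""
        total = sum(node_readings_w.values())
        return {node: group_budget_w * draw / total
                for node, draw in node_readings_w.items()}

    readings = {"node-01": 280.0, "node-02": 240.0, "node-03": 310.0}  # PMBus readouts
    caps = apportion_budget(group_budget_w=750.0, node_readings_w=readings)
    for node, cap in caps.items():
        print(f"{node}: cap at {cap:.0f} W")  # each cap is enforced by Node Manager
    ```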

     

    Intel DCM was purposely architected as an SDK to serve as a building block for industry players to build more sophisticated and valuable capabilities for the benefit of data center operators.  One possible application is shown below, where Intel DCM has been integrated into a Building Management System (BMS) application.  Some Node Manager-enabled servers come with inlet temperature sensors.  This allows the BMS application to monitor the inlet temperature of a group of servers, and if the temperature rises above a certain threshold, it can take a number of measures, from throttling back the power consumed, to reduce the thermal stress on that particular area of the data center, to alerting system operators.  The BMS can also coordinate the power consumed by the server equipment with, for instance, the CRAC fan speeds.

     

    DataCenter.png
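
    A hedged sketch of the BMS policy just described might look like the following; the threshold, the throttling step and the alerting hook are all assumed values and placeholders, not a real BMS or DCM API:

    ```python
    # Illustrative inlet-temperature policy. Thresholds and hooks are assumptions.
    ALERT_INLET_C = 27.0    # assumed inlet temperature threshold
    THROTTLE_STEP_W = 50.0  # assumed power cap reduction applied per alert

    def notify_operators(server, temp_c):
        print(f"ALERT: {server} inlet at {temp_c:.1f} C")  # placeholder hook

    def on_inlet_reading(server, temp_c, current_cap_w):
        """React to one inlet temperature sample for one server."""
        if temp_c <= ALERT_INLET_C:
            return current_cap_w                 # within limits: leave cap alone
        notify_operators(server, temp_c)         # escalate to the operators
        return current_cap_w - THROTTLE_STEP_W   # cut power to reduce heat output

    new_cap = on_inlet_reading("node-17", 29.3, current_cap_w=250.0)
    print(f"node-17 capped at {new_cap:.0f} W")  # -> 200 W
    ```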

    With this discussion we have barely begun to scratch the surface of the capabilities of the family of technologies implementing power management.  In subsequent notes we'll dig deeper into each of the components and explore how they are implemented, how these technologies can be extended, and the extensive range of uses to which they can be applied.

     

     

    Two weeks ago, I flew to Mexico City to present on virtualization technologies to both government agencies and private industry.  In both cases their issues were the same.  They are trying to do more with less.  In these times of global economic uncertainty, businesses are being challenged to reduce spending while still improving infrastructure to keep up with business demand.  This is true especially in the US, where in one case the construction of a 300 million dollar data center was put on hold and IT was instead tasked with reducing the server footprint in an at-capacity data center. The new focus: find ways to reduce overall power and cooling costs.  Almost every company is looking at virtualization as one key component of the equation to finding solutions to these data center problems.

     

    The combination of a managed virtualization solution coupled with an efficient Intel processor based server is one highly effective means to solve the “do more with less” mandate.  Let’s start by talking about the new Xeon 5500 processor that was just unveiled last Monday.  You have a need to reduce power and consolidate servers?  A Xeon 5500 based server can effectively replace eight to nine older single-core servers. 9x performance improvements have been seen using things like Turbo Boost.  The processor idle power drops to only 10 watts, enabling a 50% reduction in system idle power compared to our previous generation chip.  Everything I’m seeing on this is that you can recoup your capital investment in around 8 to 9 months from reduced maintenance, power use, software licensing, and cooling costs. Your energy savings alone can be as high as a 90% reduction!  That’s big! 

    Check out more details on the launch of the new Xeon 5500 processor with Intel’s press release.

    http://www.intel.com/pressroom/archive/releases/20090330corp_sm.htm?iid=pr1_releasepri_20090330smr#story

    Second, let’s talk about Intel’s power management embedded in the chipset.  This component is the key to rapidly recouping power costs and maximizing your server consolidation efforts.  For a good introduction to Intel’s power management system for server power capping in the data center, take a look at Jackson He’s blog “Datacenter Dynamic Power Management – Intelligent Power Management on Intel Xeon® 5500”.

    http://communities.intel.com/community/openportit/server/blog/2009/03/31/datacenter-dynamic-power-management-intelligent-power-management-on-intel-xeon-5500

    Lastly, virtualization management software drives ROI, but the challenge in managing large virtual infrastructures is that there are no clear boundaries between network, storage and datacenter management teams.  These boundaries need to be defined, along with an emphasis on a holistic management, or “Service Management”, approach.  We have to get beyond just monitoring the uptime or resource usage levels of virtual machines (VMs) and physical hosts. Along with Intel’s announcement of our latest Xeon 5500, there have also been a number of new product announcements in the past two months.  From VMworld Europe 2009, we heard about vSphere 4.0 and Citrix Essentials for Hyper-V, and at ManageFusion, Symantec touted improved virtualization functionality and management with CMS/SMS 7.0 integrating Intel’s vPro functionality.

     

    Are the current products providing a holistic management approach with virtualization?

     

    Is it the right strategy to integrate power management with virtualization management?

     

    I’ve got my opinion on this, what’s yours?

     

     

    Mark

    Sometimes the next step up is a big one. The Intel® Xeon® processor 5500 series (formerly codenamed “Nehalem”) is one of those kinds of steps.

     

     

     

    Over the last few years 10 Gigabit has started to take off, but there have always been some negative mutterings: “Why do I need 10 Gigabit?”, “Why do we need this much bandwidth?” or “My server can’t support 10 Gigabit per second bidirectional traffic anyway.” Despite the volume of 10 Gigabit products shipped, there is still the reality that if you intend to use the entire 20 Gbps (10G in both directions) or, heaven forbid, try to use 40 Gbps with a dual port product, you will likely be sorely disappointed with the results.

     

     

     

     

    The reason for this is simple. Most current mainstream servers and 10 Gigabit products don’t support the intense usage models needed to drive that much network I/O and they also don’t have the memory architecture to unleash the full potential of dual 10 Gigabit links.

     

     

     

     

    Luckily, that all just changed with Intel® Xeon® processor 5500 series.

     

     

     

    In addition to the great processing improvements that the Intel® Xeon® processor 5500 series brings to the table, Intel has also introduced our third generation 10 Gigabit product, the Intel® 82599 10 Gigabit Ethernet Controller, which provides two ports along with new capabilities and enhancements that help unshackle the new processor from its predecessor’s network I/O handcuffs and unleash blazing performance in a variety of usage models. These improvements, coupled with the new architecture of the Xeon 5500, provide a symbiotic processor-networking combination that makes new usages possible and expands server and datacenter computing by a big leap… not just a baby step.

     

     

     

     

    One of the key changes with the Intel® Xeon® processor 5500 series architecture is a step function improvement in internal system I/O. The new local memory controller design, faster cache architecture, and support for DDR3 push the Xeon 5500 to a peak memory bandwidth of ~32 Gigabytes per second, per socket. In a dual socket system this provides ~64 Gigabytes per second of bandwidth, which is dramatically more than the previous generation server configuration. In addition, the new Intel® QuickPath Interconnect (Intel® QPI) improves the speed both of inter-processor communication and of the path to the I/O hub. Finally, PCI Express* 2.0 I/O bus support has been added to improve the entire data path from processor to the 10 Gigabit Ethernet link.

     

     

     

     

    Taken together, the above improvements are a performance game changer for 10 Gigabit Ethernet.

     

     

    The chart below** shows the previous generation Intel® Xeon® paired with the previous generation Intel 10 Gigabit Ethernet Controller compared to the latest platform using the newest Intel silicon for both processor and networking. Not only is the performance better in 1-4 port configurations, but the performance scales dramatically better to above 50 Gigabits per second of total LAN throughput in a four port configuration vs. *just* 17 Gigabits on the previous generation! A complete platform architecture solution makes this huge improvement possible.

    82599 + Xeon 5500.jpg

     

     

     

     

     

     

    Now, it’s great that Intel® Xeon® processor 5500 series coupled with the Intel® 82599 10 Gigabit Ethernet can deliver such raw performance, but there is the forever nagging question of usage model. Luckily, the new headroom breathes new life into both Virtualization and storage over Ethernet usages (both of which I’ve talked about here and here) and provides new opportunities to more efficiently utilize your network link.

     

     

     

     

    Intel® Xeon® processor 5500 series allows the vision of consolidation in the datacenter to scale new heights, increasing the number of Virtual Machines (VMs) that can effectively live inside a single system enclosure. Each incremental VM adds network I/O to a load that is already starting to exhaust a 4 or 8 port single gigabit interface configuration with today’s server capabilities. As more VMs are added in the Xeon 5500 generation, 10 Gigabit will no longer be seen as optional; it will be required. For its part, the Intel® 82599 10 Gigabit Ethernet Controller supports Intel® Virtualization Technology for Connectivity (Intel® VT-c) to improve overall system performance in virtualized server environments. Intel VT-c includes hardware optimizations that help reduce I/O bottlenecks, boost throughput and reduce latency. Components of Intel VT-c include VMDq and VMDc. VMDc consists of SR-IOV, which I’ve covered before, and the ability to support VM mobility, a critical usage model for a modern IT deployment. Altogether, server systems can support more VMs, more throughput, more flexibility and better performance in a datacenter environment.

     

     

     

     

    Finally, the additional capabilities of the Intel® 82599 10 Gigabit Ethernet Controller surrounding support for FCoE offloads, together with full support for the new Data Center Bridging (DCB) standards, provide an opportunity for storage convergence over Ethernet, whether in a datacenter using a Fibre Channel SAN environment or in an IT environment more focused on iSCSI. On the performance side of things, iSCSI acceleration along with FCoE data path offloads are supported in the Ethernet controller, and on the processor there is support for the CRC instruction, which ensures iSCSI data integrity while minimizing processor overhead.
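
    For the curious, the digest in question is CRC32C (the Castagnoli polynomial), which iSCSI uses for its header and data digests. The bitwise sketch below is purely illustrative, to show what is being computed; on the new processor the same polynomial is evaluated in hardware by a single instruction rather than a software loop:

    ```python
    # Pure-Python sketch of the CRC32C (Castagnoli) checksum used by iSCSI
    # data digests. Bit-by-bit and slow on purpose; real stacks use hardware
    # or table-driven implementations.

    CRC32C_POLY = 0x82F63B78  # reflected form of the Castagnoli polynomial

    def crc32c(data: bytes) -> int:
        crc = 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ (CRC32C_POLY if crc & 1 else 0)
        return crc ^ 0xFFFFFFFF

    assert crc32c(b"123456789") == 0xE3069283  # standard CRC32C check value
    ```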

     

     

     

     

    The ability to converge at least part of the additional storage infrastructure onto Ethernet is just another factor driving massively increased data rates over Ethernet… luckily, the Intel® Xeon® processor 5500 series and the Intel® 82599 10 Gigabit Ethernet Controller solutions are up to the task.

     

     

     

    Over the past few days, there has been a lot of noise around the Intel® Xeon® processor 5500 series and the many other platform components that help it shine its brightest. Improved processing power, memory controller bandwidth, a faster redesigned interconnect (Intel QPI in place of the FSB), and improved 10 Gigabit networking all converge to provide a fantastic performance, convergence, scalability and power story. Intel’s strong history in the server and processor markets, coupled with over 25 years in Ethernet, makes this latest release a natural evolution of technology. Together these capabilities, along with the improved 10 Gigabit features and performance, are helping to transform the datacenter. It will be denser, more power efficient, more performant, and more consolidated with capabilities like FCoE and iSCSI.

     

     

     

     

    As for “Why do I need 10 Gigabit?” We have the answer, and it’s the new Xeon®.

     

     

     

     

    Ben Hacker

     

    -----

     

    ** Source. Intel. Mar 2009. Up to 2.5x performance compared to Intel® Xeon® processor 5300 series. Performance result of a bandwidth intensive network benchmark (IxChariot). Network throughput was measured on 64KB I/O size transfers between the test system and multiple network targets. Intel pre-production system with two Quad-Core Intel® Xeon® processor 5500 series CPUs (2.93 GHz), 12 GB memory (6 x2GB DDR3 - 1066MHz) vs. Intel Production system with two Intel® Xeon® processors X5365 (3.0GHz, 1333MHz FSB), 8 GB memory (8 x 1 GB DDR 2 - 667). Windows Server 2008, stock unmodified installation.

    Intel® has just launched their latest server processor, the Intel® Xeon® processor 5500 series. It really is a breakthrough processor for Intel and a clearly phenomenal solution for HPC. I was watching a keynote presentation this week and our Vice President was downright giddy about it. What makes this processor such a phenomenal solution for HPC? The answer is really easy; it expands capabilities and shortens users’ time to results. The real question is how does this processor perform so much better than other solutions out there? This answer is a bit more complicated but really fun to answer. Here we go…

    Intel® QuickPath Interconnect (QPI) – This is the technology that has replaced the front side bus used in previous generation Xeon® processors. Our previous generation architecture had a bandwidth of 21 GB/s vs. the QPI bandwidth of 46.1 GB/s. This is a speedup of 2.2X, very impressive. For applications that require lots of I/O this is huge. It’s like going from a country back road to an expressway!

    Integrated memory controller – Intel has moved the memory controller from the MCH (Memory Controller Hub) into the processor.  In addition to integrating the memory controller, Intel is now using native DDR3 with speeds up to 1333MHz and three memory channels per processor; this is a total of 6 memory channels and 64 GB/s of total memory bandwidth for a 2S HPC node.  This is a 3x jump in memory bandwidth from the previous generation memory controller, which only supported speeds up to 1066MHz and 4 memory channels. By integrating the memory controller, the memory is in closer contact with the processor for lower latency reads and writes.  Intel added two additional memory channels (one per socket) to increase memory capacity and enable faster reads and writes.
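
    That bandwidth figure is easy to sanity-check from the channel specs quoted above: transfers per second, times eight bytes per 64-bit transfer, times channels, times sockets:

    ```python
    # Back-of-the-envelope check of the theoretical peak memory bandwidth.
    mt_per_s = 1333e6       # DDR3-1333: 1333 million transfers per second
    bytes_per_transfer = 8  # each channel moves 64 bits per transfer
    channels_per_socket = 3
    sockets = 2

    per_socket = mt_per_s * bytes_per_transfer * channels_per_socket / 1e9
    total = per_socket * sockets
    print(f"~{per_socket:.0f} GB/s per socket, ~{total:.0f} GB/s for a 2S node")
    ```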

    Energy efficient design – The new Intel® Xeon® processor 5500 series has the dynamic capability of turning off cores when they are not required. There are more power states, and the processor can transition between power states faster than ever before. Net net, this means less power consumption. By consuming less power and providing world class performance, Intel has created a solution that cries out HPC!

    By taking advantage of the power saving, Intel has introduced another feature called Intel® Turbo Boost Technology. Intel® Turbo Boost Technology automatically increases processor frequency to boost application performance if thermal headroom is available. Depending on the environment Turbo Boost can increase the processor frequency by as much as 400 MHz!

    Another technology supported in the Intel® Xeon® processor 5500 series is Hyper-Threading. Intel® Hyper-Threading Technology enables users to run multiple threads on each processing core to increase total application performance while requiring only a fraction of the power that would be necessary to support additional cores. For highly threaded HPC applications this is showing performance gains of over 25%.

    The Intel® Xeon® processor 5500 series is considered a general purpose processor. However, a closer look at the features and capabilities show that this is one heck of an HPC solution. You can’t help but think Intel knew HPC was an important market segment for servers and they had this in mind as they created the architecture and developed the features.

    Well, is Intel pounding their chest again? They should be. The introduction of the Intel® Xeon® processor 5500 series is a breakthrough architecture for HPC users. The industry hasn’t seen generation-to-generation performance gains like this since the Pentium® Pro was introduced back in the mid ’90s. Congratulations Intel, and go ahead and pound that chest, you deserve it!

    Every day, people are facing exploding volumes of data that they need to manage. As model sizes continue to grow, they need to figure out how to maximize the efficiency of data movement and, where possible, to move processing to data rather than data to processing. I see this as a constant issue for most of my customers, and therefore part of my job is to provide system benchmarks to help people understand how to choose the most efficient platforms for their data-intensive computations. I recently co-authored a technical white paper on Data Intensive Computing that will provide you a bit more insight on this topic. Feel free to download it at: http://www.sgi.com/pdfs/4154.pdf.

    This week Silicon Graphics posted a large number of standard and application benchmarks for the SGI® Altix® ICE platform and the Intel® Xeon® Processor 5500 Series (codenamed Nehalem). You can find both the standard and application benchmarks on http://www.sgi.com/products/servers/altix/ice.

    As we ran the benchmarks we were able to achieve superior application performance through a combination of different factors, most important being the innovative memory system of the new Intel processor family and the improved ICE platform network topology and I/O. On the website you will find a variety of application benchmarks including MD.Nastran, LS-Dyna, Abaqus, Radioss, Fluent, Gaussian and NAMD just to name a few.

    I was out at the Santa Clara Launch Event on Monday. I had the distinct pleasure to capture some comments from the top OEM server suppliers. Check out the videos below to hear what the OEMs are saying. (Note: I added one from VMware, so not only OEMs)

     

    Sally Stevens from Dell:

     

     

    Shekar Ayyar from VMware:

     

     

    Paul Gottsegen from HP:

     

     

     

    Dimitris Dovas from SUN:

     

     

    David Lawler from Cisco:

     

     

    And check out this link if you want to know more about the Cisco servers http://blogs.cisco.com/news

     

     

    Hope you enjoyed the videos.

       

    I’ve written about the big data problem and asserted some ideas on what makes up an ideal infrastructure. Let’s look at some progress I think is relevant.

    At SGI we’ve been wrestling with the big data problem for many years now, and we’re continuing to build and integrate systems with the attributes we feel are ideal for data intensive computing. More recently we’ve been encouraged by the potential of Intel’s Xeon Processor 5500 series (code named Nehalem) to take on the data intensive computing problem. We have run various “Data Intensive” performance benchmarks using the SGI Altix ICE platform along with the Intel Xeon Processor 5500 Series to see how well the combination would handle real world Data Intensive Computing.

    The results have been outstanding and represent material progress in sustainable efficiency for big data problems. The new system delivers reliably scalable performance gains of up to 140 percent over current generation systems across a variety of data-intensive applications.

    So, can we outrun the data avalanche? We can discuss that more in the intel.com/server discussion room, but I think the answer is that we don’t really have a choice if we want to survive. It is just a matter of figuring out the best approach to keeping one step ahead of the huge amounts of data cascading towards us - and, if I have my way, a better quality of life through feeling less stress from that data yoke on my shoulders.

    I recently co-authored a technical white paper on Data Intensive Computing that will provide you a bit more insight on this topic. Feel free to download it at http://www.sgi.com/pdfs/4154.pdf.

    Here is a question that needs an answer.  Should new technology change the way we work?

    Tom Peters, author and consultant, may have captured the idea of innovation perfectly when he said “experiment fearlessly” and “innovation is bloody random.”  Hmm, experiment fearlessly – that is hard to do with an impending product development deadline.  However, with the performance of today’s new dual socket Intel® Xeon® 5500-based workstations, engineers can explore more and test creative ideas faster than ever before.

    It’s not just a design station limited to CAD; it is a digital workbench capable of much more.

    You are probably asking by now what this digital workbench idea is.  Well, like a real workbench with hammers, screwdrivers and pliers, the digital workbench replaces analog tools with digital ones and gives users access to powerful integrated software suites running on a workstation platform with two powerful Intel® Xeon® processors.  The digital workbench provides engineers with an opportunity to do more than just CAD; it gives them an opportunity to do CAD quickly and efficiently while concurrently testing their innovative ideas for form, fit, and function against the initial design requirements. The digital workbench maximizes the value of engineers’ time and capital investments for increased productivity.   With these two-socket workstations, engineers have the tools right at their fingertips to bring model analysis or rendering into their workflow earlier than ever before.

     

     

    Ever hear of Algorithmic Design?

    The digital workbench just got busier.  With the software advancement of strategic players involved in design and engineering the idea of digital prototyping, analysis driven design, and design based simulation are about to become common place.  What better way to attack the randomness of innovation than by providing technology, in the form of a digital workbench powered by two Intel® Xeon® processors, to innovators so they can execute many more experiments.

    Carl Bass, CEO at Autodesk, recently noted that Boeing used a process known as algorithmic design as “another way in which designers can access new options and ideas.”  Boeing’s result was a vehicle that was counterintuitive and may have been overlooked had it not been for algorithmic design.

     

     

    Do you need a digital workbench?

    I would say yes, but to get the real answer visit http://www.intel.com/products/workstation/processors and use the configuration tool to see which workstation may impact your productivity the most.

    Hank Lea & I will be talking about a recent interview with Sally Stevens (Director of Server Platform Marketing for Dell) and Kirk Skaugen (Vice President of the Digital Enterprise Group and General Manager of the Server Platforms Group for Intel). They are talking about their new platforms - Dell 11G & Intel Xeon Processor 5500 Series.
