Question: Is this thing for real?

Answer: Yes, we are here to answer your questions. Thanks for joining.

 

That's real dialogue from our recent "Live Chat" forum that brought together Intel experts on the Xeon 7400-series product, Intel architecture, server platforms, visual computing, energy, and many other interesting topics. Live Chat is so new to tech enthusiasts that many weren't sure we were real people and not internet bots....

 

Question: Are you for real or is this a bot?

Answer: I'm real... no bots today. :^)

 

The topics included virtualization, Intel architecture, gaming, processor TDP, and many others. Check out the transcripts to see what all the chatting is about...

 

Live Chat: North America

 

Live Chat: Asia Pacific

This is the third and final part of a three-part series exploring virtualization, grids, and cloud computing, and the exponential value obtained by integrating them to realize enterprise objectives.

 

The links to earlier parts are:

 

Part 1 - Virtualization

 

Part 2 - Grids and Cloud computing

 

Here is the video for Part 3


For the sake of brevity, the video does not explore the full range of potential synergies that are possible, but it motivates a few. Some other points to consider:

 

  • One key aspect of the synergy of these paradigms that I have not touched on is the "private" or "on-premise" cloud, where the IT environment is implemented as a cloud but within the confines of the enterprise firewall - there is currently a lot of discussion in the media on this topic. This is the next frontier of cloud that will get the CIO jazzed, and grids and virtualization will play a very important role in implementing these "private clouds". The primary difference between IT today and the private-cloud paradigm is that the cloud requires a layer (a virtualization, if you will) between the user and the "traditional" resources that users access directly in today's IT. Separating the concerns in this manner (i.e. with the layer) improves the consistency and reliability of the services users need and use, while allowing IT to make changes to the infrastructure without the user noticing. The resulting infrastructure opacity (to the user) allows IT to manage both workloads and resources proactively, where previously it had management control only over resources. This layer is a win-win if IT can manage the "opaque infrastructure" in a way that gives users the desired elasticity, performance, and reliability - and it is in this management that grids and virtualization will play a key role.

 

  • The synergy of these 3 paradigms now allows the merging of "use" and "management" into a single framework, where until now they were treated as separate disciplines. This merging makes "IT agility" a reality, where "proactivity" is as much a part of "agility" as "reactivity". Driving "agility" as a proactive exercise will also lead to improved TCO over that delivered by a reactive exercise.

 

  • The biggest challenge is whether IT can change its models, processes, and mindset to really recognize and leverage the value that is placed before it - furthermore, IT cannot be a mere deployer of solutions but now has to get into the business of design, development, and integration to a degree it has not attempted or been comfortable with before.

 

I hope you found this series interesting and found some things to ponder. I would like to hear your thoughts, and I would also be very interested to hear other examples of folks bringing these paradigms together.

 

There is always a sense of apprehension and skepticism when a new processor comes out in the market. And there is nothing wrong with that. Customers would like to see not just benchmarks (with simulated real-life workloads) but also real applications demonstrating performance and scaling on the new technology.


So I decided to share with you the gains our software friends are seeing on 6-core Intel Xeon processor 7400-based servers.


OMNIEnterprise is a core banking solution from a leading ISV, InfrasoftTech. Running on the Intel Xeon processor E7430, this application showed a 22% performance gain at about 50% lower processor utilization than on the previous-generation Xeon E7330 - clearly giving customers the headroom to grow the load on the server.


SARAS is an e-learning suite of applications from Excel-Soft. This application was able to handle 50% more requests in a virtual environment using the VMware VMM compared to the previous-generation platform.


TCS, a leading financial services ISV, ran a number of banking and financial applications on the Xeon processor X7460 and saw 20% to 50% higher throughput than on its predecessor.


We are now seeing many ISVs and customers get this kind of performance and scaling from 45nm, 6-core Xeon 7400 processor-based servers.


So what are you waiting for? Grab a Xeon 7400-based server now and put it to work for you!

 

 

S_Poulin

When to Buy?

Posted by S_Poulin Oct 4, 2008

I was on a plane flying somewhere the other day and I happened to be seated next to someone who ran consumer sales for a large multi-national corporation. We had a great conversation about technology and discussed his specific focus on client computing. During the course of the conversation we talked about what computers we carried around, what we had at home, and some of the exciting things happening in the mobile space. To keep a long story short, we debated the best time to buy something. One of the dangers of being an Intel employee is that you always know there is something great coming right around the corner. It can create paralysis when deciding to buy that next computer for my wife or that next mobile device for one of my two daughters. Buy today and Nehalem is coming tomorrow. Buy tomorrow and 32nm products are coming soon after. When I apply this thinking to my position in the Server group, I realize that system admins and IT professionals are making the same sorts of decisions every day. The difference is their penalties for waiting are much more severe. They could lose profit, lose share, or put their existence in jeopardy if they decide to wait and fall behind their competitors. Likewise, if they are on the leading edge with their technology purchases and cannot extract value from them, they expose themselves to wasted opportunity cost. Now if I decide not to buy my wife and my kids a new computer, the consequences are severe but not quite visible on the bottom line of a balance sheet. I have also not seen the downside of buying them a new computer ahead of their normal replacement cycle. I'm sure there is a lesson in there somewhere, but I don't have time to dig for it.

 

When we looked at this phenomenon in the enterprise, we wanted to minimize the risk of being a leading technology adopter. That meant trying to find a way that our customers could adopt server technology today and extend and blend the use of that technology in the future with their next-generation hardware. One example of this is what we have done for years with Intel Architecture: the very nature of the instruction sets we develop allows old and new software alike to run on next-generation hardware. As enterprises evolve and virtualization grows in its adoption, we developed another feature called FlexMigration that allows someone to start virtualization pools with today's hardware and grow the size of the pool with the next-generation hardware we will be delivering soon. The positive feedback we have received has been amazing for a feature that, in essence, isn't about a performance enhancement (Intel's Moore's Law cadence) but rather about giving customers better investment protection. Look for more of these types of advancements from Intel in the future, because while we realize the need for absolute performance leadership in all segments, we also know there are features just as important to an IT professional when it comes to the bottom line.

For those of us who have lived through the cyclical nature of enterprise technology innovation, the last month has seen a public and private backlash against an "emerging" compute model known as Cloud Computing. Industry veterans claiming marketing hype, others claiming leadership in this space, and some committed to changing the landscape of computing have begun their public positioning of this new compute paradigm. So what has changed, and why all the "backlash"? I'll outline my thoughts, remind people of some history, and describe what I believe the future holds.

 

Why now?

 

1. Moore's Law, Metcalfe's Law and Buffett's law (I'll explain later).

 

A. Moore's Law and the era of multi-core give all of us in the industry the capability to grow our compute infrastructures in a more scalable, cost-effective fashion than at any time in the history of our industry. This isn't hype; this is reality. We are delivering better than 2x the performance in our recently announced Intel Xeon 7400 series product compared to the Intel Xeon 7100 series product launched just 2 years ago. I might add that the Intel Xeon 7400 platform is the industry's first x86 architecture to reach 1 million tpmC. Moore's Law is alive and well - perfectly suited for the physical layer of Cloud Computing.

 

B. Ethernet performance and innovation have outstripped Fibre Channel and InfiniBand. This has allowed for a Gigabit revolution from desktop to data center, and the Gigabit revolution is driving a transformation of how communication networks are being built around the world. This law will be constant, and Intel is committed to being a leader here as well.

(Note: I was the NUMA-Q systems architect for the first-ever production Fibre Channel solution deployed for commercial use, at the NASD in 1997 with EMC... at the time Ethernet was a 10/100 Mb controller.)

 

C. Buffett's Law: The value of an enterprise is directly correlated with its ability to deliver consistent return on invested capital, regardless of market conditions. Enterprises are (and will be) required to drive consistent returns with an intelligent, expandable, and cost-effective IT infrastructure. Information technology leadership is key to all successful businesses in a global economy. Cloud computing has the potential to provide a flexible internal capability with external application growth capabilities. This is not yesterday's outsourcing model. Warren Buffett is driving change through value investing, which rewards the conservative innovator more than the flamboyant visionary of a "boom" cycle. I'm not sure he even understands the significance his investment models have begun to place on our industry...

 

2. Virtualization for x86 compute technologies, which now account for over 80% of the world's server shipments.

 

 

A. Expansion of virtualization technologies to increase the utilization of the compute infrastructures has increased the flexibility of these environments. Introduction of flexible migration technologies for virtual machines provides meaningful cost savings for software licensing, energy efficiencies and scale for viral workloads.

 

 

B. Educating tomorrow's Web 2.0 innovators is easier on our platforms, compilers, and technologies, which are available in every market in the world... within State Department-allowed countries. :>

 

 

C. Broadband Access around the world. Wimax, 3G, Fiber to Home, xDSL and Cable...thanks!

 

 

3. Google's Law: Viral IS the market.

 

 

A. Our children fortunate enough to afford access live in an online world where "viral" isn't a sickness. Communities expand and contract based upon their ability to meet the needs of their constituency. John Stuart Mill would be proud.

B. Cloud Computing provides the necessary compute infrastructure for tomorrow's software artists to build a community in their own likeness - even the founders of Google are growing increasingly out of touch as they grow older.

 

 

What's different from the past?

 

 

1. Clusters aren't a requirement. Cluster technologies are difficult to manage and administer, and they are unpredictable. In addition, their software licensing costs have traditionally been prohibitive for wide-scale deployment across an enterprise.

 

 

2. Intel's compute requirements and capabilities are like never before. See above or visit www.intel.com.

 

 

3. Client/server architectures, open source, and Web 2.0 have unlocked a world of programming models capable of acting differently.

 

 

4. It has been 10 years since Siebel Net, the first client/server ASP, was architected from a deal between Tom Siebel and Casey Powell... yes, it was groundbreaking, and no, it didn't make a lot of money... but it did spark an industry that has given us Salesforce.com, Corio, OracleOnline, and others. We designed those infrastructures knowing they had meaningful limitations, yet with the industry's best technology available at the time. We have had a first act.

 

5. Latency improvements have reduced virtual machine migration to minutes and application deployment from weeks to hours. As we decrease the latencies of the physical layer from minutes to milliseconds across CPU, memory, networking, and storage, we are providing the basis for breakthrough programming models such as Cloud Computing.

 

 

What does the future hold?

 

My top 9 predictions... (time for the station announcement: these opinions are not necessarily the opinions of Intel; the opinions expressed below are the author's.)

 

1. Cloud Computing infrastructures will provide the basis for a new compute and development infrastructure that will give the world's smartest technologists access to more compute resources than they have ever been able to afford. The cloud will enable breakthrough research in the following areas in the next 5 years: DNA mapping, FDA regulation, energy consumption and exploration modeling, retail supply chain, high-definition content creation, and human rights.

 

2. Cloud Computing will deliver the most consistent, scalable, and available compute infrastructure ever created. Translation... there is failure, but there is not downtime. It will also mark the end of tape backup technologies and proprietary storage interfaces over the next decade. I'm not sure why we need clustering technology in the future either, but... that is the subject for another blog.

 

 

3. Cloud Computing will make device compute capabilities even more important than they are today. The easier the access to high-value content, the more important the compute capabilities of the end-user device used to enjoy that content.

 

 

4. Cloud Computing will change the Entertainment Industry over the next 5 years. More access, more distribution, less cost, more profits with less risk.

 

 

5. Cloud computing will have the single biggest effect on traditional software licensing models of any environmental change in our industry - more of an impact than multi-core, clustering, or Google.

 

 

6. Cloud Computing will drive innovation and interoperability. Innovation in programming models for business and consumer applications. Interoperability across the physical, application and user interface layers of a cloud infrastructure.

 

 

7. Cloud Computing infrastructures will consume up to 50% of all x86 server CPU cycles by 2016.

 

 

8. Whichever entrepreneurs emerge as the leaders of the "Age of Cloud Computing" will be the most technically well-rounded technologists our industry has ever created. Hopefully, they will be equally well-rounded in their business planning processes.

 

 

9. Cloud Computing will begin to unearth a global data standards initiative. As an industry we will have to work with the regulatory leadership around the world to define acceptable and secure standards of data integrity across Cloud infrastructures.

 

 

For the non-believers... I must apologize. Explanations I can give; understanding I cannot. Cloud Computing is not a fad. It is a desired state of end users who face real business challenges with unpredictable compute requirements, budgets, and human resources.

 

 

Where there are clouds there is usually rain.....

 

One of the common questions I get from customers is whether their applications will be able to take advantage of so many cores in their server. And it's not just about running the application without changes, but also about being able to scale in performance. I would like to address this concern in three parts:


  • What does having many cores on a server bring to me? Applications that run on x86 servers today have been written to take advantage of multiple processors (SMP) since the 90s. These applications have been threaded over time and fine-tuned to deliver optimum performance. When such applications are run on a many-core platform (such as the 4-way Intel(r) Xeon(r) 7400 processor-based server), they show an instantaneous performance gain (see the threading sketch after this list). Database applications such as Oracle 10g, IBM DB2, Microsoft SQL Server 2005, and Microsoft SQL Server 2008 have shown significant performance gains when run on a 6-core Xeon 7400 processor-based server. As an example, check out the TPC-C performance of the 4P platform. We have seen SQL Server 2005 performance gains of up to 68% compared to the previous-generation processor - the highest 4P database performance on a Windows Server platform today. We have also seen the highest DB2 database performance on a Linux OS. Similarly, using the Oracle OASB benchmark with Oracle 10g R2 DB and Oracle E-Business Suite v12, the IBM x3850 M2 delivered unparalleled processing of a 10,000-employee payroll batch update in 5.37 seconds (wall-clock duration). So clearly, the benefits of using multi-core processors such as the 6-core Xeon 7400 are immense. It's performance and more performance all the way for enterprise workloads.


  • Do I have to get the latest version of my application to get optimum performance? In most cases for enterprise workloads, the application gets a performance boost as more cores are added to the system. However, the performance may not be optimum. As newer tools emerge to take advantage of the Intel® Core™ microarchitecture in a parallel environment, ISVs may make changes to their software to give their applications an additional performance boost. Tools such as compilers and libraries from Intel, Microsoft, Oracle, Sun, and others are constantly updated to provide optimum scaling and performance on Intel Architecture-based servers.


  • Does my application cost more on a 6-core Xeon 7400 processor-based server? In most cases, NO. You need to assess the software licensing model currently used on your servers and then decide if the price/performance is worth moving to the Xeon 7400 processor-based platform. From what we have seen over the past many years, the performance of a multi-core platform far outweighs the price of the platform. In short, YOU PAY LESS TO GET MORE. Today multi-core processing has become the norm for enterprise applications. ISVs are constantly evaluating their application licensing models for SMP multi-core environments, and ISVs such as Oracle, Microsoft, SAP, and VMware have made their applications multi-core friendly, giving you more for less.
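
To make the threading point above concrete, here is a minimal Python sketch of the pattern these applications rely on: splitting an embarrassingly parallel batch across one worker per core so that a many-core server stays busy. The workload, function name, and sizes are invented for illustration and are not taken from any of the ISV applications mentioned in this post.

```python
# Minimal sketch: an embarrassingly parallel batch job spread across all cores.
# Hypothetical workload; the function name and sizes are illustrative only.
import multiprocessing as mp

def score_account(account_id: int) -> float:
    # Stand-in for per-record work (e.g., an interest calculation or risk score).
    total = 0.0
    for i in range(1, 1_000):
        total += (account_id % i or 1) ** 0.5
    return total

if __name__ == "__main__":
    accounts = range(20_000)
    # One worker per core: on a 4-socket, 6-core Xeon 7400 box that is 24 workers.
    with mp.Pool(processes=mp.cpu_count()) as pool:
        results = pool.map(score_account, accounts, chunksize=500)
    print(f"scored {len(results)} accounts on {mp.cpu_count()} cores")
```

A well-threaded enterprise application does essentially the same thing internally, which is why adding cores translates so directly into throughput.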


Another area that you MUST consider is virtualization or server consolidation, where multi-core servers have been known to provide the optimum use of compute resources in your environment. You can read about virtualization benefits in blogs from Sudip Chahal, dave_hill, RK_Hiremane, and K_Lloyd.


To summarize, enterprise applications running on 6-core Intel Xeon 7400 processor-based servers will see performance scaling as the number of cores increases. And it WILL get better over time. I hope you have enjoyed reading this. Let me know what you think.

 

 

ChrisPeters

A 45nm 6-core QnA

Posted by ChrisPeters Oct 3, 2008

Following my earlier blog, I promised to share answers to some of the more common questions I get from customers about 45nm, mostly about our newest 6-core 45nm product: the Xeon processor 7400 series.

 

1. What does 45nm really mean? A nanometer is a distance one billionth of a meter in length. 45nm describes the smallest feature size in the manufacturing process - roughly the width of a single transistor - and is used to describe the technology Intel uses to create our latest generation of processors. Because of the small 45nm transistor size, Intel is able to fit 2 million transistors on the period at the end of this sentence.
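
As a rough sanity check on that last claim, here is a back-of-envelope calculation. Both inputs are my own assumptions (a printed period about half a millimeter across and roughly a tenth of a square micron per 45nm transistor), not Intel figures, but they land in the right ballpark:

```python
# Back-of-envelope check of the "2 million transistors on a period" claim.
# Both numbers below are rough assumptions, not Intel specifications.
import math

period_diameter_um = 500.0    # a printed period ~0.5 mm across, in microns (assumed)
transistor_area_um2 = 0.1     # rough 45nm transistor footprint in square microns (assumed)

period_area_um2 = math.pi * (period_diameter_um / 2) ** 2
print(f"~{period_area_um2 / transistor_area_um2 / 1e6:.1f} million transistors fit")
# prints roughly 2.0 million, consistent with the claim above
```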

 

2. Are all 45nm transistors the same? No. The materials used in the silicon manufacturing process can vary from manufacturer to manufacturer. Intel switched over to a hafnium-based high-k dielectric material that helps dramatically reduce leakage current, improving the performance/watt characteristics of our processors.

 

3. What OEM products feature 6-core 45nm products? Servers based on the processor are expected to be announced from over 50 system manufacturers around the world, including four-socket rack servers from Dell, Fujitsu, Fujitsu-Siemens, Hitachi, HP, IBM, NEC, Sun, Supermicro and Unisys. There are four-socket blade servers from Egenera, HP, Sun and NEC and there are server designs that scale up to 16-sockets from IBM, NEC and Unisys.

 

4. How does 6-core affect my software licensing? Just like with other multi-core processors, licensing will depend on the software vendor. With quad-core, most ISVs elected to license by socket or processor, meaning that the performance enhancements came "for free" as the number of cores increased. Recently VMware updated their definition of a "processor" to include up to 6 cores per processor (learn more), meaning that with VMware ESX 3.5 update 2 and the Intel Xeon processor 7400 series, IT can deploy a higher density of virtual machines per server without an incremental increase in licensing costs. Everyone does it differently - so do your homework.
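
To see why the licensing definition matters, here is a toy calculation for a 4-socket, 6-core server; the per-unit price is invented purely for illustration and is not a quote from any vendor.

```python
# Hypothetical licensing math for a 4-socket, 6-core Xeon 7400 server.
# The $8,000 per-unit price is made up purely for illustration.
SOCKETS, CORES_PER_SOCKET = 4, 6
PRICE_PER_UNIT = 8_000  # assumed list price per licensed "unit"

per_socket = SOCKETS * PRICE_PER_UNIT                   # vendor licenses per socket/processor
per_core = SOCKETS * CORES_PER_SOCKET * PRICE_PER_UNIT  # vendor licenses per core

print(f"per-socket licensing: ${per_socket:,}")   # $32,000
print(f"per-core licensing:   ${per_core:,}")     # $192,000
```

Under a per-socket definition the extra cores really do come "for free"; under a per-core definition the same physical server costs six times as much to license.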

 

Other common questions circle around IT usage trends and how this technology can really be applied. Here is an interesting (and somewhat long) video where Intel VP and CIO Diane Bryant discusses with executives from Yahoo, Oracle, MySpace, and Verisign the challenges they face and how technology is helping them. If you choose to listen, you will find answers to questions (paraphrased) like:

 

  • What are some of the top challenges IT faces today? How can technology help?

  • Is 6 core performance too much? Does IT have the ability inside their environment to take advantage of this additional compute capacity?

  • Is the software ecosystem ready for multi-core? Can today's applications take advantage of it?

  • How are customers using Virtualization today and how do they see it changing over time?

  • When virtualizing ... how does IT view MP servers (4 socket) vs DP (2 socket)?

  • When deploying next generation technology, how important is the power capacity of the IT environment when selecting technology?

  • Are Intel Xeon servers powerful and reliable enough to consider moving away from RISC or other proprietary architectures?

 

If I missed your burning question, just ask … I’d be happy to share. Chris

In my last I/O virtualization blog earlier this year, I discussed a fundamental problem with virtualizing I/O and one solution that Intel and VMware have teamed up to deliver: VMDq and VMware NetQueue. Together, these queuing technologies can help offload some of the virtual switching (vswitch) functionality from the hypervisor to the network adapter. VMDq provides a method for the hypervisor to do less work and also provides a way to share the I/O processing across multiple cores, improving system bandwidth and more fully utilizing the platform's processing power.
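
For readers new to VMDq, here is a deliberately simplified toy model (in Python) of the classification step the adapter performs in hardware: sorting incoming frames into per-VM queues by destination MAC so the hypervisor doesn't have to. The MAC addresses and VM names are invented, and real NIC hardware and drivers obviously look nothing like this sketch.

```python
# Toy model of the VMDq idea: the NIC sorts incoming frames into per-VM queues
# by destination MAC address, so the hypervisor vswitch no longer has to.
# Conceptual sketch only, not how a real driver or adapter is written.
from collections import defaultdict

vm_queues = defaultdict(list)
mac_to_vm = {                       # hypothetical MAC-to-VM assignments
    "00:1b:21:aa:00:01": "vm-web",
    "00:1b:21:aa:00:02": "vm-db",
}

def nic_classify(frame: dict) -> None:
    """Place a frame on the queue of the VM that owns its destination MAC."""
    vm = mac_to_vm.get(frame["dst_mac"], "default")  # unknown MACs go to a default queue
    vm_queues[vm].append(frame)

# Each queue can then be serviced by a different core, spreading the I/O load.
nic_classify({"dst_mac": "00:1b:21:aa:00:02", "payload": b"SELECT 1"})
print({vm: len(q) for vm, q in vm_queues.items()})
```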

 

Now, VMDq and NetQueue are a great solution together that scale well, support Vmotion, and are relatively simple to manage. However, is there a way to get even better performance from your Virtualized I/O?

 

What if there was a way to completely cut the hypervisor software switch out of the picture and remove the associated latency and CPU overhead? The ideal scenario for optimum performance is for the VM to communicate directly with the LAN hardware itself and bypass the vswitch completely. For example, you could have a single 10 Gigabit port expose multiple LAN interfaces at the hardware level (on the PCIe bus), and each VM could be assigned directly to a hardware interface. Alternatively, you could have multiple physical NICs in the system, each directly assigned to a given VM. Below is a diagram that summarizes the three main variations of I/O attachment in a virtualized server; after the diagram we will get into more detail to put it in context.

[Diagram: three I/O attachment models - hypervisor vswitch with VMDq, direct assignment of a physical NIC to a VM, and SR-IOV virtual functions shared from a single NIC]

In the diagram above, the left side represents an implementation of a virtualized environment with a standard I/O setup using the hypervisor vswitch and VMDq for I/O performance enhancement. In the middle is an example of direct I/O assignment between a single physical LAN interface and a single virtual machine. The implementation on the right shows what is possible with a single NIC that supports SR-IOV (we'll discuss this later) for a fuller, hardware-level I/O virtualization. After taking a moment to understand the basic differences between these three implementations, a few obvious benefits of bypassing the hypervisor vswitch and going with either of the two directly assigned designs become immediately apparent...

 

 

By allowing the Virtual Machines to talk directly to the networking hardware, throughput, latency, and CPU utilization of the I/O traffic processing will be greatly improved. So the question is, "why hasn't this been done before?" Well, the answer is that there are several gotchas to make this implementation work well...

 

 

First, in order to implement this properly, the LAN hardware needs to support some physical capabilities to successfully route the networking traffic in this kind of virtualized system. In addition, the server platform itself must support VT-d so that the mapping between the virtual machine's PCIe I/O memory addresses and the system's physical memory addresses is correlated correctly.

 

 

Finally, and this is a big one, this kind of implementation, while very good for performance, happens to break the ability to move a VM from one physical server to another (VMware VMotion). This is one of the most widely used aspects of VMware's software and has been utilized heavily by most IT shops. Seamless VMotion support is critical for making any I/O performance improvement deployable in the real world.

 

 

Now, if you stop at the second design and use separate NICs for each VM, you will also miss out on a few key advantages of new Ethernet capabilities. You won't be able to allocate your overall bandwidth between your VMs (each VM will get a whole Gigabit or 10 Gigabit port), and more importantly, you won't be able to effectively share higher-bandwidth pipes. For example, a server with a few 10 Gigabit ports may have enough I/O horsepower to handle traffic for 30 VMs, but there would be no way to assign only a portion of the bandwidth of a pipe to an individual VM.

 

 

Additionally, the LAN hardware needs to support bandwidth segregation for each virtual function of the LAN device (think QoS per VM) and multiple queues and traffic classes per LAN virtual function. This last piece matters for those who remember the discussion on Fibre Channel over Ethernet (FCoE), as the ability to support multiple traffic classes and dedicated-bandwidth links is a key need for the storage-over-Ethernet market.

 

 

Now that I've set up what is needed to make this directly assigned virtualized I/O environment work, and called out the potential problems, you don't need to worry; I won't throw cold water on this idea. In fact, most of the pieces are in place today and there is already work being done to complete the solution as we speak.

 

 

First, Intel network adapters now support some fancy hardware capabilities related to virtualization. In addition to all the hooks for VMDq, our newest NICs support PCI-SIG SR-IOV (I know... technologists love acronyms), which provides the ability to virtualize the LAN at the lowest hardware level. The networking hardware also includes some smart logic to function properly in a virtualized system. For example, VM-to-VM communication within the same server must be looped back before it gets to the wire, or the switch connected to the machine won't know how to route the packet. This is all taken care of in the LAN hardware. And of course, all the support for bandwidth segregation and for multiple queues and traffic classes is there as well, to make sure storage and other QoS-sensitive applications will still work well.
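
As a concrete illustration of what "virtualizing the LAN at the hardware level" looks like from the host's point of view, here is a minimal sketch of enabling SR-IOV virtual functions through the standard sysfs interface on a current Linux kernel. This is an assumption-laden example: the PCI address is made up, it requires a NIC driver that exposes the sriov_totalvfs/sriov_numvfs files, and the VMware ESX flow discussed in this post manages VFs differently.

```python
# Minimal sketch: enabling SR-IOV virtual functions on a Linux host via sysfs.
# Assumes a current Linux kernel and a NIC driver exposing the standard
# sriov_totalvfs / sriov_numvfs files; the PCI address below is hypothetical.
from pathlib import Path

PF = Path("/sys/bus/pci/devices/0000:04:00.0")  # hypothetical physical function (the NIC port)

def enable_vfs(requested: int) -> int:
    supported = int((PF / "sriov_totalvfs").read_text())   # how many VFs the device offers
    count = min(requested, supported)
    (PF / "sriov_numvfs").write_text(str(count))            # NIC now exposes `count` VFs on PCIe
    return count

if __name__ == "__main__":
    n = enable_vfs(8)
    print(f"enabled {n} virtual functions; each can be assigned to a VM as a PCI device")
```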

 

 

As for VT-d support, Intel platforms now come with this basically as standard, so there is no issue there. But the last and most important piece is the ability for an individual VM to be moved between physical servers while still being able to 'renegotiate' with its physical network connection. This capability is under development by Intel, VMware, and others in the industry, and the end goal is to have an architectural framework in place to make this kind of handoff seamless from a hardware and software perspective.

 

 

This architectural framework will be the topic of a future post, as I think I've used up all the lines I can before I start putting my readers to sleep. Until next time!

 

 

Ben Hacker

 

 

At Oracle OpenWorld in the Dell booth on September 22nd - 24th, we educated a large number of IT managers and Oracle database administrators about how best to harness the power of the new Xeon 7400 processor for their Oracle middleware and database environments. Check out this video and learn about the Xeon 7400-based Dell PowerEdge R900, the features and benefits of this new platform, the virtualization performance advantages, and the energy efficiency benefits of Intel's 45nm manufacturing process.

 

 

Posted by Eoin McConnell Oct 2nd, 2008

Back in May I shared some thoughts about how I would choose between different servers based on RISC architecture and Intel-based architecture. My decision making was based on three basic tenets for choosing the right CPU architecture:

1) Choice and the ability to pick between multiple suppliers.

2) Performance.

3) System cost and total cost of ownership.

 

As you probably know by now, we launched the Xeon processor 7400 series (codename: Dunnington) on September 15th. The performance results delivered by systems based on the Xeon 7400 processor are astounding when you actually compare them with the performance delivered by systems based on RISC architecture. Who would have thought that you could get this level of performance from Xeon at a fraction of the cost of comparable RISC-based architectures?

The Xeon 7400 is designed for high-end enterprise workloads like your typical database, so I decided to look at the latest database results. If you get a chance, check these out for yourself at tpc.org. Amazing performance, at a fraction of the cost, and you can choose from multiple vendor and operating system combinations.

- An HP ProLiant DL580 4S system delivered 634,825 tpmC at $1.10/tpmC. This compares with an equivalent POWER6-based system at 629,159 tpmC at $2.49/tpmC.

 

I also decided to look at how many users a Xeon 7400-based system could support in an SAP environment. For this comparison I took a slightly different approach and compared a 4S Xeon 7400-based system to a 2S UltraSPARC T2 system. You may ask why I made this strange comparison; well, to me a 2S UltraSPARC T2 system is a 4S system in disguise in terms of system capability, memory supported, and most of all the price!

- An HP ProLiant DL580 4S system supported 5,155 users. This compares with an equivalent UltraSPARC T2-based system at 4,170 users.

Oh, and a similarly configured system with 64GB of memory is about $32,000 for the HP DL580, while a T5240 is about $56,000.
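
If you want to sanity-check how large that gap is, here is a quick worked calculation using only the numbers quoted above. Keep in mind that the TPC-C price metric broadly covers the whole priced configuration (hardware, software, and support), so the first two results are total solution costs rather than server list prices.

```python
# Working the numbers quoted above back into total price and price per user.
xeon_tpmc,  xeon_dollars_per_tpmc  = 634_825, 1.10   # HP ProLiant DL580 (Xeon 7400) result
power_tpmc, power_dollars_per_tpmc = 629_159, 2.49   # comparable POWER6 result

print(f"Xeon priced configuration   ~ ${xeon_tpmc * xeon_dollars_per_tpmc:,.0f}")    # ~ $698,308
print(f"POWER6 priced configuration ~ ${power_tpmc * power_dollars_per_tpmc:,.0f}")  # ~ $1,566,606

# SAP comparison: price per supported user at the quoted system prices.
print(f"DL580: ${32_000 / 5_155:.2f} per SAP user")   # ~ $6.21
print(f"T5240: ${56_000 / 4_170:.2f} per SAP user")   # ~ $13.43
```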

 

Ok, I’ll stop doing direct comparisons now as I can understand how this could read as Intel marketing. I’m really excited by these results and wanted to share with you, please check these performance results out here at intel.com.

 

Here are also some links to articles written about the Intel Xeon 7400 offering 'RISC-class performance at a fraction of the cost': Wall Street Journal, Internet News, The Register.

 

In the next few weeks I will share some further thoughts on comparing Xeon with RISC, but in the meantime, what do you think?

 

Related Blog Links:

 

It's official - Intel Xeon Processor 7400 Series (Dunnington) has launched

Six More Benefits of 45nm

HP Announces World Record 4-Socket TPC-C Result

IBM Announces World Record 8-Socket TPC-C Result

 

Previous Blog links:

So what does RISC really mean to you?

Recently our team completed a comprehensive analysis comparing two- and four-socket server platforms based on both 65nm and 45nm processors. The comparison included the just-launched Intel® Xeon® X7460 processor featuring 6 high-performance Core microarchitecture-based cores. Servers based on this new processor have been setting all kinds of performance world records across a wide range of benchmarks, and we were very interested to see how these servers would fare in our virtualization tests. Needless to say, our server platform based on this new high-end processor did not disappoint - it delivered on all its promise, both in terms of absolute performance and performance/watt. We have included these results in a brand new IT@Intel whitepaper published recently.

 

In addition to performance comparisons, this paper also includes a comprehensive comparative analysis spanning a wide range of commonly occurring virtualization deployment scenarios in the enterprise. For instance, are you looking to select a server platform that will enable you to host the maximum number of VMs for a given TCO (we discuss the key components of the TCO model in the paper)? Assuming that you are interested in optimizing your TCO, are you aware that the answer might change depending on the specifics of your situation, e.g., whether the majority of your workloads are performance SLA-centric, memory-capacity focused, or a combination of the two (we cover what these terms mean in the paper)? What happens if your primary concern is not overall TCO but a lack of datacenter power and/or cooling capacity - is a particular server platform preferable to the others in that situation? How about if the primary constraint you are concerned with is a limited number of available LAN/SAN connectivity ports in the datacenter - does that change the "answer"? What if your enterprise architect says that maximizing resource pool capacity is his primary objective and TCO is a secondary concern - does that have any implications for your server platform selection? What if you want to ensure the most predictable performance scaling in the event of unanticipated workload spikes - does your choice of server virtualization platform make a difference?
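
To give a flavor of the kind of trade-off the paper walks through, here is a deliberately simplified TCO-per-VM toy calculation. Every input below is invented for illustration; none of it comes from the IT@Intel model, which is far more detailed.

```python
# Toy TCO-per-VM comparison with entirely made-up inputs. The real model in the
# IT@Intel paper also covers SLAs, memory headroom, LAN/SAN ports, and more.
def tco_per_vm(server_cost, vms_per_server, watts, licenses, years=3,
               dollars_per_kwh=0.10):
    energy = watts / 1000 * 24 * 365 * years * dollars_per_kwh  # energy cost over the period
    return (server_cost + licenses + energy) / vms_per_server

two_socket  = tco_per_vm(server_cost=15_000, vms_per_server=12, watts=450, licenses=10_000)
four_socket = tco_per_vm(server_cost=40_000, vms_per_server=30, watts=900, licenses=20_000)
print(f"2-socket: ~${two_socket:,.0f} per VM over 3 years")
print(f"4-socket: ~${four_socket:,.0f} per VM over 3 years")
```

Change the VM density, power draw, or licensing assumptions and the "winner" can flip, which is exactly the point the paper makes about matching the platform to your deployment scenario.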

 

We cover the important deployment scenarios relevant for this comparison (including answering the questions in the preceding paragraph) in this new IT@Intel paper.  This paper and the companion short video summarizing the key findings can be accessed at www.intel.com/it.  Please check it out and let us know if you agree or if we have missed the mark!

As an Intel PR manager who works regularly with Sun Microsystems, I started doing some math when Sun introduced two new Netra servers based on Intel® Xeon® processors. For those keeping count, the new servers bring its total to 10 new Intel Xeon-based servers, or roughly one every other month, since the companies formed their alliance in January of last year. These most recent servers, which are aimed at the telecommunications industry, include the first carrier-grade server, the Sun Netra X4450, powered by four quad-core Intel Xeon 7000 series processors. The energy-efficient performance of the Xeon processors helps Sun solve three growing problems in the telco datacenter: limited space, energy consumption, and cooling costs. The 4U rackmount Sun X4450 takes advantage of the robust technology of its four Intel Xeon E7338 processors to create an excellent platform for consolidation and virtualization. Features such as 32 memory DIMM slots, more than 1 TB of storage, and 10 PCI slots enable telco data-center managers to consolidate Solaris OS, Linux, and Windows applications on a single NEBS-certified server. Each processor dissipates a maximum of 80W of power.

The new Sun Netra X4250 2U rackmount server is powered by two LV Intel Xeon 5000 series processors that offer power savings as well as performance. The Sun Netra X4250 server is designed to be energy-efficient, supporting up to 16 memory slots and four internal disk drives in a 2U, 20-inch-deep carrier-grade package. The low-power Intel Xeon processor L5408 dissipates a maximum of 40W of power.
