The debate over how best to increase system capacity for growing applications has raged for years: “scale up” with more CPU, memory, and I/O, or “scale out” with loosely connected systems. Scaling out by adding networked systems has been an economical solution for many IT managers because it lets them grow with less expensive, industry-standard building blocks. There are some notable exceptions to this line of thought, however. Applications that require shared memory and large database support, typically transaction processing, business intelligence, and ERP solutions, are much better suited to a single, expandable system that scales up. Until now, IT managers running applications that need scale-up systems larger than 4 or 8 CPUs have had limited platform choices, most of them proprietary and expensive RISC-based servers.

The other problem with the scale-out approach is the people, facilities, software, and overhead cost and the complexity of managing very large numbers of servers, which can grow to the point where they outweigh the performance and system-cost benefits. The industry’s answer for better ROI has been to consolidate multiple scale-out servers onto single industry-standard scale-up servers using virtualization. That works well, but it is limited by the number of application loads an IT manager feels comfortable placing on one server while maintaining peak performance and availability for each application.

Well, it looks like the scale-up, scale-out debate is about to take another turn. In the server product update Intel gave on May 26th, the company described new levels of system scalability and choice supported by the upcoming Nehalem-EX processor. It will support systems that scale up to 8 sockets natively (shared memory, without any additional silicon), and to 16 sockets and beyond with node controllers from system manufacturers that let a single system share memory across more than 8 sockets. So far there are over 15 different designs from 8 OEMs that offer 8-socket or higher scalability.

Of course, for the class of applications where scaling is important, socket count doesn’t tell the whole story of what’s needed for scalable performance. Thread support, key for transaction processing and virtualization, scales at 16 threads per socket with 8 cores and Hyper-Threading (2 threads per core); that works out to 128 threads for an 8-socket system and 256 threads for 16 sockets. To keep those threads fed with data close to the CPU, each processor supports up to 24 MB of shared cache (1.5X the current generation of Xeon) and an impressive 16 memory slots per socket, or 128 DIMMs on an 8-socket system. In addition, the Scalable Memory Interconnect gives these systems 9 times the memory bandwidth of today’s top Xeon processor. Finally, four QuickPath Interconnect links per socket allow high-bandwidth sharing of data across the system.
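
To make the arithmetic concrete, here is a minimal Python sketch of how those totals fall out. The per-socket figures (8 cores, 2 threads per core via Hyper-Threading, 16 DIMM slots, 24 MB shared cache) are the ones quoted above; the script is illustrative arithmetic only, not a representation of any Intel sizing tool.

    # Per-socket figures from the Nehalem-EX specs discussed above.
    CORES_PER_SOCKET = 8
    THREADS_PER_CORE = 2    # Hyper-Threading
    DIMMS_PER_SOCKET = 16
    CACHE_MB_PER_SOCKET = 24

    def system_totals(sockets):
        """Return (hardware threads, DIMM slots, shared cache in MB) for a system."""
        threads = sockets * CORES_PER_SOCKET * THREADS_PER_CORE
        dimms = sockets * DIMMS_PER_SOCKET
        cache_mb = sockets * CACHE_MB_PER_SOCKET
        return threads, dimms, cache_mb

    for sockets in (4, 8, 16):
        threads, dimms, cache_mb = system_totals(sockets)
        print(f"{sockets:>2} sockets: {threads:>3} threads, "
              f"{dimms:>3} DIMM slots, {cache_mb} MB shared cache")

    # Output:
    #  4 sockets:  64 threads,  64 DIMM slots, 96 MB shared cache
    #  8 sockets: 128 threads, 128 DIMM slots, 192 MB shared cache
    # 16 sockets: 256 threads, 256 DIMM slots, 384 MB shared cache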

So the net of it is that the industry is going to see a broad selection of highly scalable, next-generation servers that significantly extend the economic advantage of industry-standard scale-up solutions for business-critical, large-database, and high-end virtualization/consolidation deployments. I expect these systems to give IT managers a very cost-effective alternative to the much more expensive, proprietary RISC-based servers they use today.

What are your thoughts?

Mike