
Ethernet never seems to stop evolving to meet the needs of the computing world.  First it was a shared medium; then computers needed more bandwidth, and multi-port bridges became switches.  Over time, Ethernet speeds increased by four orders of magnitude.  The protocol was updated for the telecom network, then for data centers.

Now it's scaling down to meet the needs of microserver clusters.


Microserver clusters are boards with multiple low-power processors that can work together on computing tasks.  They are growing in popularity to serve a market that needs fast I/O but not necessarily the highest processing performance.


Through its many evolutions, Ethernet has gained the right combination of bandwidth, routing, and latency to be the networking foundation for the microserver cluster application.


Bandwidth: For certain workloads, congestion can occur and reduce system performance when processors are connected over 1GbE, so the preferred speed is 2.5GbE.  If you've never heard of 2.5GbE, that's because it was derived from the XAUI spec, using only a single XAUI lane.  XAUI was conceived so that four lanes could carry 10GbE signals from chip to chip over distances longer than the alternative allowed (which capped out at about 7 cm).  Each XAUI lane runs at 3.125 Gbps, which covers the 8b/10b encoding overhead and leaves a 2.5 Gbps full-duplex data path.  XAUI had its claim to fame in 2005, when it became a leading technology for improving backplane speeds in ATCA chassis designs.  Used one lane at a time, it is a natural fit for a microserver cluster.
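The lane arithmetic can be checked directly; a minimal sketch, assuming only the 8b/10b line coding that XAUI uses:

```python
# Each XAUI lane signals at 3.125 Gbaud, but 8b/10b encoding carries
# 8 data bits in every 10 transmitted bits, leaving 2.5 Gbps of payload.
XAUI_LANE_GBAUD = 3.125
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b line code

payload_per_lane_gbps = XAUI_LANE_GBAUD * ENCODING_EFFICIENCY
print(payload_per_lane_gbps)      # one lane  -> 2.5 GbE
print(payload_per_lane_gbps * 4)  # four lanes -> 10 GbE
```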


Density: Density and performance are key attributes for microserver clusters. Bandwidth is related to the total throughput of the switch in that the packet-forwarding engine must be robust enough to switch all ports at full speed.  For example, the new Intel® Ethernet Switch Family FM5224 has up to 64 ports of 2.5GbE plus another 80 Gbps of inbound/outbound bandwidth (either eight 10GbE ports or two 40GbE ports).  Thus the FM5224 packet-processing engine handles 360 million packets per second to provide non-blocking throughput.
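That forwarding rate follows from the port count; a back-of-the-envelope sketch, assuming worst-case 64-byte minimum frames plus the standard 20 bytes of preamble and inter-frame gap each frame occupies on the wire:

```python
# Aggregate switch bandwidth: 64 ports of 2.5 GbE plus 80 Gbps of uplinks.
total_gbps = 64 * 2.5 + 80          # 240 Gbps aggregate
bits_per_min_frame = (64 + 20) * 8  # 672 bits on the wire per minimum frame

required_mpps = total_gbps * 1e9 / bits_per_min_frame / 1e6
print(round(required_mpps, 1))      # ~357.1 Mpps, i.e. roughly 360 Mpps
```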


Distributed Switching: Some proprietary microserver cluster technologies advocate layer 2 switching only, perhaps on the assumption that the limited number of devices on a shelf eliminates the need for layer 3 switching. But traffic will need to exit the shelf, perhaps even to be processed by another shelf, so the shelf would depend on an outside top-of-rack (TOR) switch to route traffic between shelves or back out onto the Internet.  This would change the nature of the TOR switch, which today is a "pizza box" style switch with 48 to 72 ports.  With an average of 40 servers per shelf, a TOR would need 400 or more ports to connect all of the shelves in a rack.  But Ethernet routing can instead be placed in every cluster (microserver shelf) to provide advanced features such as load balancing and network overlay tunneling while reducing the dependency on the TOR switch.
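The port arithmetic is straightforward; a quick sketch (the shelves-per-rack figure is an assumption for illustration, not a number from the product literature):

```python
# With ~40 server modules per shelf, aggregating every module at a
# central TOR switch quickly exhausts a conventional 48-72 port box.
servers_per_shelf = 40
shelves_per_rack = 10  # hypothetical rack build-out

tor_ports_needed = servers_per_shelf * shelves_per_rack
print(tor_ports_needed)  # 400 ports, far beyond a 48-72 port TOR
```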


Latency: To be considered for high-performance data center applications, Ethernet vendors needed to reduce latency from microseconds to nanoseconds (Intel led this industry effort and is the low-latency leader at 400 ns).  That work is paying dividends in the microserver cluster.  Low latency means better performance with small-packet transmissions and also with storage-related data transfers.  For certain high-performance workloads, the processors in microserver clusters must communicate constantly with each other, making low latency essential.
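To see why nanosecond-class switch latency matters, compare it with the time a small frame spends on the wire; a rough sketch assuming 2.5GbE links and 64-byte frames:

```python
# At 2.5 GbE, even a minimum-size frame takes only a few hundred
# nanoseconds to serialize, so a microsecond-class switch would
# dominate small-packet latency instead of the wire itself.
link_gbps = 2.5
frame_bits = (64 + 20) * 8  # 64-byte frame plus preamble and inter-frame gap

serialization_ns = frame_bits / link_gbps  # bits / Gbps == ns
print(round(serialization_ns, 1))  # ~268.8 ns, same order as 400 ns switch latency
```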


With this combination of bandwidth, routing, and latency, Ethernet is the right technology for microserver networks.  Check out our product page to take a look at the Intel® Ethernet Switch FM5224, which is built just for this application.

With the growing popularity of microservers, networking has moved from connecting computers on desktops, or servers in a rack, to connecting processors on a board.


That's a new way of thinking about networking, and it calls for a new kind of switch chip, which is why we've recently introduced the Intel® Ethernet Switch FM5224. Meeting these new market needs takes a device that combines a new design with legacy networking strengths.


What’s New

Microservers are part of an emerging computing platform architecture that packs many low-power processor modules into a single enclosure.  The microserver operating system parcels out computing tasks to the various processors and coordinates their work. There is a lot of interest in this approach for certain distributed data center workloads.


However, designing a dense microserver cluster calls for significant uplink bandwidth combined with high-port-count interconnectivity between the processor modules.  Enter the new FM5224, which we call a high-port-count Ethernet switch, to meet these needs.


The device supports up to 64 non-blocking ports of 2.5GbE along with up to eight 10GbE uplink ports (or two 40GbE ports).


Why 2.5GbE?  This speed was popularized by blade server systems but was never adopted as an official Ethernet standard. Our analysis of the bandwidth needs of microservers shows that many workloads need more than 1GbE per server module, which makes our 2.5GbE switch ports ideal for this application.


What’s the Same

While microservers are new, they still communicate using Ethernet.  The FM5224 is built on Intel's Alta switch architecture, which brings a number of advanced features to microserver applications.


The I/O-heavy nature of microservers makes non-blocking, low-latency performance very important.  The FM5224 is built with Intel's FlexPipe packet-processing technology, which delivers a 360 million packet-per-second forwarding rate.  The device also offers less than 400 ns of latency, independent of packet size or the features enabled.


Combined, this performance makes it possible for each processor to pass data at wire rate, even for small packets, which are expected to make up most of the traffic between processors. In addition, the FM5224 has excellent load-distribution features that can be used to efficiently spread the workload across multiple microserver modules.
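Hash-based distribution is the usual way such load spreading works; a minimal illustrative sketch (the flow key and CRC32 hash here are assumptions for illustration, not the FM5224's actual mechanism):

```python
# Illustrative hash-based load distribution: a flow identifier is hashed
# and mapped onto one of the available server modules, so packets of the
# same flow always land on the same module.
import zlib

NUM_MODULES = 64  # e.g. one module per 2.5GbE port

def pick_module(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    """Map a 4-tuple flow to a server module index."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % NUM_MODULES

# Packets of the same flow are always sent to the same module.
a = pick_module("10.0.0.1", "10.0.0.2", 12345, 80)
b = pick_module("10.0.0.1", "10.0.0.2", 12345, 80)
print(a == b)  # True
```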


For the microserver chassis uplinks, OEMs have their choice of eight 10GbE or two 40GbE ports that can directly drive direct-attach copper cables up to 7 meters, without the need for an external PHY.


With the FM5224, OEMs have a tremendously flexible chip that is fine-tuned for microserver applications.

