Ethernet never seems to stop evolving to meet the needs of the computing world.  First it was a shared medium; then computers needed more bandwidth, and multi-port bridges became switches.  Over time, Ethernet speeds increased by four orders of magnitude.  The protocol was updated for the telecom network, then for data centers.

Now it’s scaling down to meet the needs of microserver clusters.


Microserver clusters are boards with multiple low-power processors that can work together on computing tasks.  They are growing in popularity in a market that needs fast I/O but not necessarily the highest processing performance.


Through those many evolutions, Ethernet has developed the right combination of bandwidth, routing, and latency to be the networking foundation for the microserver cluster application.


Bandwidth: For certain workloads, congestion can occur and reduce system performance when processors are connected over 1GbE, so the preferred speed is 2.5GbE.  If you’ve never heard of 2.5GbE, that’s because it was derived from the XAUI specification, using just a single XAUI lane.  XAUI was designed so that four lanes could carry 10GbE signals from chip to chip over distances longer than the XGMII interface it extended allowed (which capped out at about 7 cm).  Each XAUI lane runs at 3.125 Gbps, which covers the encoding overhead and leaves a 2.5 Gbps full-duplex data path.  XAUI had its claim to fame in 2005, when it became a leading technology for improving backplane speeds in ATCA chassis designs.  Used one lane at a time, it’s the perfect technology for a microserver cluster.
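The arithmetic behind that 2.5 Gbps figure comes from XAUI’s 8b/10b line coding: every 8 data bits travel as a 10-bit code group, so usable bandwidth is 80% of the raw signaling rate.  A minimal Python sketch of the calculation (the names are mine, for illustration):

```python
# Effective data rate of a serial lane that uses 8b/10b encoding:
# each 8 data bits are sent as a 10-bit code group, so usable
# bandwidth is 8/10 of the raw line rate.

XAUI_LANE_GBPS = 3.125  # raw signaling rate of one XAUI lane

def effective_rate_gbps(line_rate_gbps: float, efficiency: float = 8 / 10) -> float:
    """Usable data rate after encoding overhead."""
    return line_rate_gbps * efficiency

print(effective_rate_gbps(XAUI_LANE_GBPS))      # 2.5  -> one lane = 2.5GbE
print(4 * effective_rate_gbps(XAUI_LANE_GBPS))  # 10.0 -> four lanes carry 10GbE
```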


Density: Density and performance are key attributes for microserver clusters.  Bandwidth is tied to the total throughput of the switch: the packet-forwarding engine must be robust enough to switch all ports at full speed.  For example, the new Intel® Ethernet Switch Family FM5224 has up to 64 ports of 2.5GbE plus another 80 Gbps of inbound/outbound bandwidth (either eight 10GbE ports or two 40GbE ports).  Accordingly, the FM5224 packet-processing engine handles 360 million packets per second to provide non-blocking throughput.
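To see where a figure like 360 million packets per second comes from, consider the worst case: every port saturated with minimum-size 64-byte frames, each of which also costs a 7-byte preamble, a 1-byte start-of-frame delimiter, and a 12-byte inter-frame gap on the wire.  A quick sanity check using the port counts above:

```python
# Worst-case packet rate for a non-blocking switch: all ports saturated
# with minimum-size (64-byte) Ethernet frames.  Each frame occupies
# 64 + 7 (preamble) + 1 (SFD) + 12 (inter-frame gap) = 84 bytes on the wire.

WIRE_BITS_PER_MIN_FRAME = (64 + 7 + 1 + 12) * 8   # 672 bits per frame

total_gbps = 64 * 2.5 + 80   # 64 ports of 2.5GbE plus 80 Gbps of uplink
packets_per_second = total_gbps * 1e9 / WIRE_BITS_PER_MIN_FRAME

print(f"{packets_per_second / 1e6:.0f} Mpps")     # ~357 Mpps, i.e. ~360M
```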


Distributed Switching: Some proprietary microserver cluster technologies advocate Layer 2 switching only, perhaps on the assumption that the limited number of devices on a shelf eliminates the need for Layer 3 switching.  But traffic still needs to exit the shelf, perhaps to be processed by another shelf, so this approach makes the shelf dependent on an external top-of-rack (TOR) switch to route traffic between shelves or back out onto the Internet.  That would change the nature of the TOR switch, which today is a “pizza box” style switch with 48 to 72 ports.  At an average of 40 servers per shelf, a TOR would need 400 or more ports to connect all of the shelves in a rack.  Instead, Ethernet routing can be placed in every cluster (microserver shelf) to provide advanced features such as load balancing and network overlay tunneling while reducing the dependency on the TOR switch.
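The port math here is a back-of-the-envelope argument; a sketch of it follows, where the shelf count and uplink count are assumptions chosen for illustration, not product figures:

```python
# TOR port count with Layer 2-only shelves vs. shelves that route locally.
# Assumed numbers: 40 servers per shelf, 10 shelves per rack, 2 uplinks
# per shelf -- illustrative only.

servers_per_shelf = 40
shelves_per_rack = 10

# Layer 2-only shelves: the TOR must terminate every server link to route it.
tor_ports_flat = servers_per_shelf * shelves_per_rack         # 400 ports

# Routing inside each shelf: the TOR only sees aggregated shelf uplinks.
uplinks_per_shelf = 2                                          # e.g. two 10GbE
tor_ports_distributed = shelves_per_rack * uplinks_per_shelf   # 20 ports

print(tor_ports_flat, tor_ports_distributed)                   # 400 vs 20
```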


Latency: To be considered for high-performance data center applications, Ethernet vendors needed to reduce switch latency from microseconds to hundreds of nanoseconds (Intel led this industry effort and is the low-latency leader at 400 ns).  That work is paying dividends in the microserver cluster.  Low latency means better performance for small-packet transmissions and for storage-related data transfers.  For certain high-performance workloads, the processors in a microserver cluster must communicate with each other constantly, making low latency essential.
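Per-hop latency compounds across a fabric, which is why the nanosecond-class figure matters.  A rough model (the hop count and message size are assumptions chosen for illustration):

```python
# End-to-end latency across a small fabric: each hop pays the switch's
# cut-through latency plus the time to serialize the frame onto the link.

SWITCH_LATENCY_NS = 400   # per-hop cut-through latency cited above

def serialization_ns(frame_bytes: int, link_gbps: float) -> float:
    """Time to clock one frame onto a link, in nanoseconds."""
    return frame_bytes * 8 / link_gbps   # bits / Gbps == ns

hops = 3           # e.g. shelf switch -> TOR -> shelf switch (assumed)
frame_bytes = 256  # a small message between processors (assumed)

total_ns = hops * (SWITCH_LATENCY_NS + serialization_ns(frame_bytes, 2.5))
print(f"{total_ns:.0f} ns end to end")   # ~3,658 ns for this example
```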


With the right combination of bandwidth, routing, and latency, Ethernet is the right technology for microserver networks.  Check out our product page to take a look at the Intel Ethernet Switch FM5224, which was built just for this application.