One challenge for Ethernet in the data center is the increasing amount of “east-west” data traffic (i.e., server-to-server traffic), which demands much lower latency than traditional north-south traffic (i.e., server-to-user/Internet traffic).

 

Server virtualization and new web applications are driving a dramatic increase in east-west traffic. For example, delivering web content can trigger hundreds of server-to-server workflows in order to provide a timely, customized experience for each client.

 

As we describe in a recent article in EE Times, iWARP technology combined with low-latency switching is the way to optimize Ethernet for these environments.

 

Historically, InfiniBand* has led a pack of proprietary, low-latency network protocols optimized for east-west traffic, delivering end-to-end latency measured in nanoseconds, versus microseconds for traditional Ethernet networks.

 

The article presents results of tests that combined NetEffect™ Ethernet Server Cluster Adapters from Intel with Intel® Ethernet switching technology, showing latency performance that matched InfiniBand.

 

At the server level, the network adapter must move data quickly from the server onto the network, a step that can add significant latency in conventional Ethernet networks. The NetEffect™ Ethernet Server Cluster Adapters from Intel support the Internet Wide Area RDMA Protocol (iWARP), which improves efficiency by eliminating the copy of received data from an intermediate receive buffer into server memory. Instead, the adapter places data directly into server memory without staging it in a buffer, which improves overall application performance.
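
To make the zero-copy idea concrete, here is a minimal sketch of the memory registration an RDMA-capable application performs through the OpenFabrics verbs API (libibverbs): the application registers its buffer once, and an iWARP-capable adapter can then place arriving payloads directly into it. This is illustrative only, not Intel driver or NetEffect-specific code; the queue-pair setup and work requests needed for an actual transfer are omitted, and the file name and build command (gcc reg_buffer.c -libverbs) are assumptions.

/* reg_buffer.c -- illustrative sketch: register an application buffer so an
 * RDMA-capable (e.g. iWARP) adapter can DMA received data directly into it. */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

#define BUF_SIZE (1 << 20)              /* 1 MiB application buffer */

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA-capable (e.g. iWARP) device found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;
    void *buf = malloc(BUF_SIZE);       /* the application's own buffer */
    if (!pd || !buf) {
        fprintf(stderr, "device open / allocation failed\n");
        return 1;
    }

    /* Registration pins the buffer and hands its address translation to the
     * adapter, so incoming payloads can be placed straight into 'buf' with no
     * intermediate receive-buffer copy. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, BUF_SIZE,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        fprintf(stderr, "memory registration failed\n");
        return 1;
    }
    printf("buffer registered: lkey=0x%x rkey=0x%x\n", mr->lkey, mr->rkey);

    /* A real transfer would now create a queue pair, exchange the rkey with
     * the remote peer, and post RDMA read/write work requests. */
    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}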

 

Low latency is also important in switches, as even the most efficient data center network design involves two or three switch “hops” to provide enough ports and bandwidth to connect all of the data center’s servers.

 

Intel Ethernet switch silicon provides low-latency Ethernet through its unique use of a true output-queued, shared-memory architecture. Because every input port has full-bandwidth access to every output queue, no blocking occurs within the switch. In addition, since each packet is queued only once, cut-through latencies of a few hundred nanoseconds can be achieved independent of packet size.
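
To see why cut-through matters, consider serialization delay alone: a store-and-forward switch must receive an entire frame before it can begin transmitting it, so its per-hop latency grows with frame size, while a cut-through switch forwards as soon as the header has been examined. The short sketch below works through that arithmetic; the 300 ns cut-through figure is an assumed placeholder consistent with “a few hundred nanoseconds,” not a published specification.

/* Back-of-the-envelope sketch: cut-through latency stays roughly constant
 * while store-and-forward latency grows with frame size. */
#include <stdio.h>

int main(void)
{
    const double line_rate_bps  = 10e9;   /* 10 Gb/s port */
    const double cut_through_ns = 300.0;  /* assumed fixed per-hop latency */
    const int    frame_bytes[]  = { 64, 512, 1500, 9000 };

    for (int i = 0; i < 4; i++) {
        /* A store-and-forward switch must buffer the whole frame before it
         * can start sending it on the output port. */
        double serialization_ns = frame_bytes[i] * 8 / line_rate_bps * 1e9;
        printf("%5d-byte frame: store-and-forward >= %7.1f ns, "
               "cut-through ~ %5.1f ns per hop\n",
               frame_bytes[i], serialization_ns, cut_through_ns);
    }
    return 0;
}

Across the two or three switch hops of a typical data center path, those per-hop differences multiply accordingly.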

 

Data Center Bridging (DCB) is being specified by the IEEE to enable converged data center fabrics. Intel switches support key DCB features and include an advanced TCAM-based classification engine that uses ACL rules to assign a traffic class to each frame. Based on its traffic class, each frame can be placed into one of several logical shared-memory partitions in the switch and flow-controlled separately. For example, iSCSI or iWARP traffic can be placed into one memory partition while other data traffic goes into others, ensuring that storage or HPC traffic is not delayed or dropped if general data traffic becomes congested in the switch.
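
To illustrate the classification step, the sketch below expresses an ACL-style rule set as a simple lookup that maps a frame’s TCP destination port to a traffic class, and hence to a shared-memory partition. The real switch performs this match in TCAM hardware, not in software; TCP port 3260 is the registered iSCSI port, while the iWARP port shown is purely a hypothetical placeholder for this example.

/* Illustrative sketch of ACL-style traffic classification. */
#include <stdint.h>
#include <stdio.h>

enum traffic_class { TC_DATA = 0, TC_STORAGE = 1, TC_HPC = 2 };

/* Assign a traffic class (and thus a memory partition) per frame. */
static enum traffic_class classify(uint16_t tcp_dst_port)
{
    switch (tcp_dst_port) {
    case 3260:  return TC_STORAGE;  /* iSCSI (registered port) */
    case 5040:  return TC_HPC;      /* hypothetical iWARP service port */
    default:    return TC_DATA;     /* everything else */
    }
}

int main(void)
{
    uint16_t ports[] = { 80, 3260, 5040 };
    for (int i = 0; i < 3; i++)
        printf("dst port %5u -> traffic class %d\n",
               ports[i], classify(ports[i]));
    return 0;
}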

 

Network performance has always been a critical element of data center computing, and using the well-known Ethernet protocol in data center networks offers many benefits. Delivering the necessary performance, however, requires the right technology at the server and in the switch. Intel’s combination of NetEffect adapters and Intel Ethernet switch silicon is a complete solution that delivers on the promise of converged data center Ethernet.