
Most top-of-rack (ToR) data center switches are installed exactly where their name suggests: at the top of the server rack. This means their 10G downlink ports connect to servers within a few meters of the switch. In data center applications where low latency is critical, SFP+ direct attach (DA) copper cables can connect servers up to 7 meters from the switch, but this requires high-quality PHYs inside the ToR switch.

 

Switching ASICs used in ToR switches can have up to 72 10G ports on a single piece of silicon. But high-quality 10G SerDes are difficult to design, and many of these large chips are built on the assumption that each SerDes only needs to drive a locally connected PHY chip, which then takes on the burden of driving DA copper cables or backplanes.

 

At Intel, we took a different approach. Knowing that many of our customers are designing low-latency ToR switches using DA copper cabling, we chose to embed high-quality 10G PHYs within our Intel® Ethernet FM6000 series switch silicon. These PHYs can drive up to 7m of SFP+ DA copper on 10GbE ports or up to 5m of QSFP DA copper on 40GbE ports. With up to 72 10G SerDes on the FM6000, this eliminates up to 18 external quad PHY chips that would otherwise be required when lower-quality SerDes are used within the switch ASIC. Eliminating these external PHYs saves cost, power, and board area, all of which are critical in today’s large, flat data center installations.
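The arithmetic behind that "up to 18" figure is straightforward: each external quad PHY covers four 10G lanes, so 72 SerDes would need 72 / 4 = 18 of them. A minimal sketch of the savings math, using the port counts from this post (the per-chip power figure is a placeholder assumption for illustration, not FM6000 or quad-PHY data):

```python
# Rough illustration of the external-PHY savings described above.
# Port counts come from the post; the per-chip power figure is a
# placeholder assumption, not a measured number.

SERDES_PORTS = 72            # 10G SerDes on the switch ASIC
LANES_PER_QUAD_PHY = 4       # one external quad PHY covers four 10G lanes
ASSUMED_WATTS_PER_PHY = 3.0  # hypothetical per-chip power, illustration only

quad_phys_eliminated = SERDES_PORTS // LANES_PER_QUAD_PHY   # 72 / 4 = 18
print(f"External quad PHYs eliminated: {quad_phys_eliminated}")
print(f"Illustrative power saved: {quad_phys_eliminated * ASSUMED_WATTS_PER_PHY:.0f} W")
```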

Last month at Cisco Live* in San Diego, Cisco announced the first 10GBASE-T member of its Cisco Nexus* 5000 switch family, the Nexus 5596T. This new switch is a great complement to 10GBASE-T LAN on motherboard (LOM) connections powered by the Intel® Ethernet Controller X540.  Intel is a major proponent of 10GBASE-T, and we believe it will help drive 10 Gigabit Ethernet (10GbE) adoption by lowering costs and giving IT organizations an easy migration path from Gigabit Ethernet (GbE).

 

I caught up with Kaartik Viswanath, product manager for the Nexus 5000 family, at the show last week and asked him a few questions about the Nexus 5596T and Cisco’s views on 10GBASE-T. Here’s an excerpt from the interview:

 

BY: Kaartik, thanks for taking the time to answer a few questions today. Tell me about the Nexus 5596T switch and 10GBASE-T module that Cisco announced yesterday.

KV: Sure. We’re very excited about the new Nexus 5596T switch. It’s the first 10GBASE-T member of the Nexus 5000 family, and it’s coming at the perfect time, with 10GBASE-T LOM connections now being integrated onto mainstream server motherboards. LOM integration will help drive 10GbE adoption, and all those new 10GBASE-T ports need a high-performance, high-port-density switch to connect to. The Nexus 5596T has 32 fixed 10GBASE-T ports, and through the addition of the new 12-port 10GBASE-T Cisco Generic Expansion Module (GEM), it can support up to 68 total 10GBASE-T ports in a two-RU (rack unit) design. Plus, customers can deploy any of the existing GEMs in any of the Nexus 5596T’s three GEM slots.
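As an aside, the arithmetic behind that 68-port maximum is simply the 32 fixed ports plus the three GEM slots fully populated with 12-port modules. A quick sketch using only the numbers quoted above:

```python
# Port-count arithmetic for the Nexus 5596T figures quoted above.
FIXED_10GBASET_PORTS = 32      # fixed 10GBASE-T ports on the chassis
GEM_SLOTS = 3                  # expansion slots
PORTS_PER_10GBASET_GEM = 12    # ports on the 12-port 10GBASE-T GEM

max_10gbaset_ports = FIXED_10GBASET_PORTS + GEM_SLOTS * PORTS_PER_10GBASET_GEM
print(max_10gbaset_ports)      # 32 + 3 * 12 = 68
```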

 

The Nexus 5596T also includes 16 fixed SFP+ ports, which customers can use to connect to aggregation switches, servers, or Nexus 2000 Fabric Extenders using optical fiber or direct attach copper connections. With the Nexus 5596T switch, our customers have the flexibility to deploy both 1/10GBASE-T Ethernet on Copper and FC/FCoE/Ethernet on SFP+ ports on the same chassis.

 

BY: Are you hearing a lot of interest in 10GBASE-T from your customers?

KV: Yes, definitely, and I think there are a couple of major reasons for that. First, 10GBASE-T offers the easiest path for folks looking to migrate from Gigabit Ethernet (GbE). 10GBASE-T uses the same twisted-pair copper cabling and RJ-45 connectors as existing GbE networks, and it’s backwards-compatible with all the 1000BASE-T products out there today. That means you can replace your existing 1000BASE-T switch with a Nexus 5596T and connect to both 10GBASE-T and 1000BASE-T server connections. And as you’re ready, you can upgrade servers to 10GBASE-T.

 

I think the other big reason 10GBASE-T is so appealing is the deployment flexibility it offers: 100 meters of reach is sufficient for the vast majority of data center deployments, whether it’s top-of-rack, middle-of-row, or end-of-row. Plus, twisted-pair copper cabling is much more cost-effective than the fiber or direct-attach copper cabling used in the majority of 10GbE deployments today.

 

BY: Cisco and Intel both support multiple 10GbE interfaces in their products. How do you see 10GBASE-T fitting into the mix?

KV: We’ll support whichever interfaces our customers want to use. However, there are some general guidelines that most folks seem to be following. For longer distances – over 100 meters – SFP+ optical connections are really the only choice, but fiber’s cost doesn’t lend itself to broad deployment. Today, most 10GbE deployments use the top-of-rack model, where servers connect to an in-rack switch using SFP+ direct attach copper (DAC) connections. DAC reach is only seven meters, but that’s plenty for intra-rack connections.

 

10GBASE-T hits sort of a sweet spot because of its distance capabilities. It can connect switches to servers in top of rack deployments, with cables that are less expensive than SFP+ DAC, or it can be used for the longer runs where fiber is being used today – up to 100 meters, of course.

 

There are cases where SFP+ has some advantages, particularly for latency-sensitive applications or if the customers are sensitive to power consumption, but when it comes to deployment flexibility, costs, and ease of implementation, 10GBASE-T is well-positioned as the interface of choice for broad adoption.

 

BY: How about Fibre Channel over Ethernet? Does the Nexus 5596T switch support FCoE over 10GBASE-T?

KV: Great question. FCoE is a key ingredient in Cisco’s unified fabric vision, and it’s supported in our 10 Gigabit Nexus and UCS product lines. The Nexus 5596T hardware is FCoE-capable like all of our Nexus 5000 switches, and we’re working on FCoE characterization in our labs. We’ve been working closely with Intel to verify FCoE interoperability with the Intel® Ethernet Controller X540.

 

There’s been a fair amount of discussion in the industry around whether 10GBASE-T is a suitable fabric for FCoE. Our collaboration with our ecosystem partners, including Intel, network cable vendors, and storage vendors, will help ensure there aren’t any issues before we enable the feature on the Nexus 5596T. Assuming everything goes well, we’ll also enable FCoE over 10GBASE-T in our 12-port 10GBASE-T GEM Module as well as our fabric extender line with the upcoming Nexus 2232TM-E Fabric Extender.

 

You can read the full interview and sample my witticisms in this post in the Intel Data Stack community.

 

 

For the latest, follow us on Twitter: @IntelEthernet.

Last month, I was part of the Intel team that participated in Interop Tokyo and demonstrated Intel’s software-defined networking (SDN)-compatible switch silicon.

 

As in the US, interest in SDN is high in Japan, but based on what I saw at the show and in meetings with several customers, I believe the market for SDN products will emerge more quickly in Japan.

 

The market is excited about SDN, and about the leading SDN protocol, OpenFlow, because it gives users a vendor-neutral way to control and manage the network. Instead of relying on L2/L3 protocols to route data, an SDN controller manages the data flows. This should provide the same level of service as an IP network without the vendor-proprietary protocols that can limit advanced network services in a multi-vendor network.
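To make the "controller manages the data flows" idea concrete, here is a minimal sketch of the OpenFlow-style match/action model: the controller pushes flow rules down to the switch, and the switch simply applies the highest-priority rule that matches each packet. The classes and field names below are illustrative and not tied to any particular controller API or OpenFlow version.

```python
# Illustrative model of OpenFlow-style flow rules pushed by an SDN controller.
# These classes are a sketch of the concept, not a real controller's API.
from dataclasses import dataclass

@dataclass
class FlowRule:
    priority: int
    match: dict       # header fields to match, e.g. ingress port, EtherType, IP
    actions: list     # what the switch should do with matching packets

# The controller decides the path; the switch just executes the rules.
flow_table = [
    FlowRule(priority=200,
             match={"in_port": 1, "eth_type": 0x0800, "ipv4_dst": "10.0.0.2"},
             actions=["output:2"]),
    FlowRule(priority=100,
             match={},                      # table-miss entry: matches everything
             actions=["send_to_controller"]),
]

def handle_packet(pkt_fields: dict) -> list:
    """Return the actions of the highest-priority rule the packet matches."""
    for rule in sorted(flow_table, key=lambda r: r.priority, reverse=True):
        if all(pkt_fields.get(k) == v for k, v in rule.match.items()):
            return rule.actions
    return ["drop"]

print(handle_packet({"in_port": 1, "eth_type": 0x0800, "ipv4_dst": "10.0.0.2"}))
```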

 

In Japan, local networking providers have strong market share, and their emphasis hasn’t been on high-margin routing software. As a result, these companies are aggressively embracing SDN.

 

The proof was in the strong turnout at our booth, where an estimated 200 people came to see Intel and NTT Data* demonstrate a network built using NTT Data’s OpenFlow controller and Intel’s Barcelona 10/40GbE switch reference platform. The Barcelona platform uses Intel® Ethernet FM6000 switch silicon, which provides 72 10GbE/18 40GbE ports and supports non-blocking switching and routing with latency of less than 400 ns.

 

Interop Tokyo was my first visit to Japan and I really appreciated the hospitality and the food in Tokyo.  If my prediction about the success of SDN in Japan holds true, I don’t think it will be my last trip.

One challenge for Ethernet in the data center is the increasing amount of “east-west” data traffic (i.e., server-to-server traffic), which demands much lower latency than traditional north-south traffic (i.e., server-to-user/Internet).

 

Server virtualization and new web applications are spawning a dramatic increase in east-west traffic. For example, web content delivery can spawn hundreds of server-to-server workflows in order to provide a timely, customized experience for each unique client.

 

As we describe in a recent article in EE Times, iWARP technology combined with low-latency switching is the way to optimize Ethernet for these environments.

 

Historically, InfiniBand* has led a pack of proprietary, low-latency network protocols optimized for east-west traffic, delivering latency measured in nanoseconds from end-to-end vs. microseconds for traditional Ethernet networks.

 

The article shows results of tests that combined NetEffect™ Ethernet Server Cluster Adapters from Intel with Intel® Ethernet switching technology; the combination delivered latency performance that matched InfiniBand.

 

At the server level, the network adapter must speed the transfer of data from the server onto the network, a process that can add significant latency in conventional Ethernet networks. The NetEffect™ Ethernet Server Cluster Adapters from Intel support the Internet Wide Area RDMA Protocol (iWARP), which improves efficiency by eliminating the need to copy data from a receive buffer into server memory. Instead, iWARP places data directly into server memory without caching it in an intermediate buffer, which improves overall application performance.
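The latency difference comes down to how many times received data is handled before the application can use it. The sketch below contrasts the two receive paths conceptually; it is an illustration of the buffer-copy argument only, not the iWARP verbs API or any real driver code.

```python
# Conceptual contrast between a conventional receive path and an
# RDMA/iWARP-style receive path. Purely illustrative; real iWARP uses
# hardware RDMA verbs, not Python.

def conventional_receive(nic_data: bytes) -> bytes:
    kernel_buffer = bytes(nic_data)    # copy 1: NIC -> kernel receive buffer
    app_buffer = bytes(kernel_buffer)  # copy 2: kernel buffer -> application memory
    return app_buffer

def rdma_style_receive(nic_data: bytes, app_buffer: bytearray) -> bytearray:
    # The adapter places data directly into pre-registered application memory,
    # so there is no intermediate buffering or extra copy for the CPU to do.
    app_buffer[:len(nic_data)] = nic_data
    return app_buffer

payload = b"east-west traffic"
print(conventional_receive(payload))
print(bytes(rdma_style_receive(payload, bytearray(len(payload)))))
```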

 

Low latency is also important in switches, since even the most efficient data center network design involves two or three switch “hops” to provide enough ports and bandwidth to connect all the servers in a data center.

 

Intel Ethernet switch silicon provides low-latency Ethernet through its unique use of a true output-queued, shared-memory architecture. Because every input port has full-bandwidth access to every output queue, no blocking occurs within the switch. In addition, since each packet is queued only once, cut-through latencies of a few hundred nanoseconds can be achieved, independent of packet size.
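A quick way to see why cut-through matters is to compare it with store-and-forward, where the switch must receive an entire frame before transmitting it. Serialization delay alone for a 1500-byte frame at 10 Gb/s is 1.2 µs per hop, which dwarfs a few hundred nanoseconds of cut-through latency; over two or three hops the gap compounds. A back-of-the-envelope sketch (the 400 ns figure comes from this post; the frame sizes and the store-and-forward comparison are illustrative assumptions):

```python
# Back-of-the-envelope comparison of per-hop switch latency.
# The ~400 ns cut-through figure comes from the post; frame sizes and the
# store-and-forward comparison are illustrative assumptions.

LINE_RATE_BPS = 10e9     # 10 Gb/s port
CUT_THROUGH_NS = 400     # roughly constant, independent of frame size

def serialization_ns(frame_bytes: int) -> float:
    # A store-and-forward switch must absorb the whole frame first,
    # so its latency grows with frame size (serialization delay).
    return frame_bytes * 8 / LINE_RATE_BPS * 1e9

for size in (64, 512, 1500):
    print(f"{size:>5} B frame: store-and-forward adds >= {serialization_ns(size):7.1f} ns "
          f"per hop vs ~{CUT_THROUGH_NS} ns cut-through")
```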

 

Data Center Bridging (DCB) is being specified by the IEEE to enable converged data center fabrics. Intel switches support key DCB features and include an advanced TCAM-based classification engine that uses ACL rules to assign a traffic class to each frame. Based on its traffic class, a frame can be placed into one of several logical shared-memory partitions in the switch and flow-controlled separately. For example, iSCSI or iWARP traffic can be placed into one memory partition while other data traffic is placed into others. This ensures that storage or HPC traffic will not be delayed or dropped if data traffic becomes congested in the switch.
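As an illustration of that classification step, the sketch below maps ACL-style rules to traffic classes and then to shared-memory partitions, so storage or HPC frames land in their own partition and can be flow-controlled independently of bulk data traffic. The rule format, port numbers (other than the well-known iSCSI port), and partition names are made up for illustration and do not reflect the actual TCAM rule syntax.

```python
# Illustrative sketch of ACL-based traffic-class assignment and memory
# partitioning, in the spirit of the DCB behavior described above.
# Rule fields and partition names are hypothetical, not the real TCAM format.

ACL_RULES = [
    # (match fields, traffic class); first matching rule wins
    ({"l4_dst_port": 3260}, "storage"),   # iSCSI traffic (well-known port 3260)
    ({"l4_dst_port": 5001}, "hpc"),       # hypothetical port for HPC/cluster traffic
    ({}, "data"),                         # default: everything else
]

PARTITION_FOR_CLASS = {
    "storage": "partition_storage",
    "hpc": "partition_hpc",
    "data": "partition_default",
}

def classify(frame: dict) -> str:
    """Return the shared-memory partition for a frame, first matching rule wins."""
    for match, traffic_class in ACL_RULES:
        if all(frame.get(k) == v for k, v in match.items()):
            return PARTITION_FOR_CLASS[traffic_class]
    return PARTITION_FOR_CLASS["data"]

print(classify({"l4_dst_port": 3260}))   # -> partition_storage
print(classify({"l4_dst_port": 80}))     # -> partition_default
```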

 

Network performance has always been a critical element of data center computing, and using the well-known Ethernet protocol in data center networks brings a lot of benefits. Delivering the necessary performance, however, requires the right technology at the server and in the switch. Intel’s combination of NetEffect adapters and Intel Ethernet switch silicon is a complete solution that delivers on the promise of converged data center Ethernet.
