
Wired Ethernet



The industry is abuzz about the specification under development by the NVM Express Working Group called "NVMe over Fabrics," and for good reason.  The goal of this specification is to extend the highly efficient and scalable NVMe protocol beyond direct-attached storage to networked storage.  For an excellent, up-to-date background on NVMe over Fabrics, I strongly recommend the December 2015 SNIA Ethernet Storage Forum webcast, Under the Hood with NVMe over Fabrics, presented by J Metz of Cisco and Dave Minturn of Intel.


SNIA-ESF is following up on this webcast with another on Tuesday, January 26, 2016, at 10 a.m. Pacific that focuses on how Ethernet RDMA fabrics specifically fit into this new specification: How Ethernet RDMA Protocols iWARP and RoCE Support NVMe over Fabrics.  It will be co-presented by John Kim from Mellanox and yours truly from Intel.  The webcast is free, and registration is open.  If you are interested but can't make it at that time, the webcast will be posted immediately afterwards on the SNIA ESF website.


David Fair


Chair, SNIA Ethernet Storage Forum

Ethernet Networking Mktg Mgr, Intel


by David Fair, Product Marketing, Networking Division, Intel Corporation


Odd title for a networking article, don't you think?  It's odd for a couple of reasons, but reasons that reveal the vibrancy of Ethernet.  For four decades, Ethernet advanced on a "powers-of-ten" model, from an initial 10 Mbps to 100 Mbps to 1GbE to 10GbE.  Part of why that worked was that the ratified IEEE Ethernet speeds kept well ahead of most market requirements.  Moving an entire Ethernet ecosystem to a new speed is expensive for everyone, and the "powers-of-ten" model helped control those costs.


What changed?  Well, my theory is that Ethernet simply got too successful for the powers-of-ten model.  By that I mean that the volumes got large enough for some specific requirements at more fine-grained speeds to warrant infrastructure upgrades to support those speeds. 


It is the rapid growth of wireless access points, and the increase in their speeds specifically, that creates the problem driving the desire for next-generation enterprise-access BASE-T.  Not in the data center, but in the office.  Most enterprises have built out a wireless infrastructure with CAT 5e or CAT 6 in the ceilings, connecting wireless access points at 1GbE in addition to connecting wired desktops and workstations.  But the latest wireless spec, IEEE 802.11ac, can drive bandwidth back onto the wire well beyond 1GbE.  And some of those desktops and workstations may be chomping at the bit as well, so to speak, to go faster than 1GbE.  The problem is that the next "powers-of-ten" solution from the IEEE, 10GBASE-T, won't work on CAT 5e and works on CAT 6 only to 55 meters.


As often happens in these situations, alliances establish themselves to build momentum to influence the IEEE to consider their proposal.  In this case, there are now two such groups, calling themselves the "NBASE-T Alliance" and the "MGBASE-T Alliance" respectively.  Both are proposing intermediate "step-down" speeds of 2.5 Gbps and 5 Gbps.


To learn more about 2.5G/5G technology and related standardization efforts, please join the Ethernet Alliance for its upcoming "Ethernet 104: Introduction to 2.5G/5G BASE-T Ethernet" webinar on Thursday, May 21, 2015, at 10am PDT. Additional information is available, and registration is now open.

The industry continues to advance the iWARP specification for RDMA over Ethernet, first ratified by the Internet Engineering Task Force (IETF) in 2007.  This article in Network World, "iWARP Update Advances RDMA over Ethernet for Data Center and Cloud Networks," co-authored by Chelsio Communications and Intel, describes two new features that have been added to help software developers of RDMA code by aligning iWARP more tightly with the RDMA technologies based on the InfiniBand network and transport, i.e., InfiniBand itself and RoCE.  By bringing these technologies into alignment, we realize the promise that the application developer need not concern herself with which of them is the underlying network technology: RDMA will "just work" on all.  - David Fair
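
To make that promise concrete, here is a minimal sketch, in C against the librdmacm API from rdma-core, of a client connection setup that contains nothing fabric-specific.  The address 192.168.1.10, port 7471, and the queue sizing are placeholder values of my choosing.  Whether the address resolves to an iWARP, RoCE, or InfiniBand adapter is invisible to this code.

    /* Minimal sketch, not a complete application: connect to an RDMA peer
     * using librdmacm (link with -lrdmacm).  Address and port are placeholders. */
    #include <rdma/rdma_cma.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        struct rdma_addrinfo hints, *res = NULL;
        struct ibv_qp_init_attr attr;
        struct rdma_cm_id *id = NULL;

        memset(&hints, 0, sizeof(hints));
        hints.ai_port_space = RDMA_PS_TCP;   /* reliable, connected service */
        if (rdma_getaddrinfo("192.168.1.10", "7471", &hints, &res)) {
            perror("rdma_getaddrinfo");
            return 1;
        }

        memset(&attr, 0, sizeof(attr));
        attr.cap.max_send_wr = attr.cap.max_recv_wr = 1;
        attr.cap.max_send_sge = attr.cap.max_recv_sge = 1;
        attr.sq_sig_all = 1;

        /* Nothing below names a fabric: the same calls run whether the
         * resolved device is an iWARP, RoCE, or InfiniBand adapter. */
        if (rdma_create_ep(&id, res, NULL, &attr)) {
            perror("rdma_create_ep");
            return 1;
        }
        if (rdma_connect(id, NULL)) {
            perror("rdma_connect");
            return 1;
        }
        printf("connected via %s\n", id->verbs->device->name);

        rdma_disconnect(id);
        rdma_destroy_ep(id);
        rdma_freeaddrinfo(res);
        return 0;
    }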



Certainly one of the miracles of technology is that Ethernet continues to be a fast-growing technology 40 years after its initial definition.  That was May 23, 1973, when Bob Metcalfe wrote his memo to his Xerox PARC managers proposing "Ethernet."  To put things in perspective, 1973 was the year a signed ceasefire ended the Vietnam War.  The U.S. Supreme Court issued its Roe v. Wade decision. Pink Floyd released "Dark Side of the Moon."  In New York City, Motorola made the first handheld mobile phone call (and, no, it would not fit in your pocket).   1973 was four years before the first Apple II computer became available, and eight years before the launch of the first IBM PC. In 1973, all consumer music was analog: vinyl LPs and tape.  It would be nine more years before consumer digital audio arrived in the form of the compact disc, which ironically has long since been eclipsed by Ethernet packets as the primary way digital audio gets to consumers.


The key reason for Ethernet's longevity, imho, is its uncanny, Darwinian ability to evolve and adapt to ever-changing technology landscapes.  A tome could be written about the many technological challenges to Ethernet and its evolutionary responses, but I want to focus here on just one of them: the emergence of multi-core processors in the first decade of this century.  The problem Bob Metcalfe was trying to solve was how to get packets of data from computer to computer, and, of course, to Xerox laser printers.  But multi-core challenges that paradigm, because Ethernet's job as Bob defined it is done when data reaches a computer's processor, not when it reaches the specific core in that processor waiting to consume it.


Intel developed a technology to help address that problem: Intel® Ethernet Flow Director.  We implemented it in all of Intel's most current 10GbE and 40GbE controllers. What Intel® Ethernet Flow Director does, in a nutshell, is establish an affinity between a flow of Ethernet traffic and the specific core in a processor waiting to consume that traffic. I encourage you to watch the two-and-a-half-minute video explanation of how Intel® Ethernet Flow Director works.  If that, as I hope, just whets your appetite to learn more about this Intel technology, we also have a white paper that delves into deeper detail, with an illustration of what Intel® Ethernet Flow Director does for a "network stress test" application like Memcached.  We hope you find both the video and white paper enjoyable and illuminating.
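
As a rough illustration of what that affinity means for an application, here is a minimal sketch in C for Linux.  It shows only the software half: the consuming thread pins itself to the core whose receive queue the NIC steers its flow to.  Core 4, port 11211 (a Memcached-style port), the interface name eth0, and the ethtool commands in the comments are all hypothetical choices of mine, and the Flow Director filter itself is configured out of band.

    /* Minimal sketch, assuming a hypothetical setup done outside this program:
     *   ethtool -K eth0 ntuple on
     *   ethtool -U eth0 flow-type udp4 dst-port 11211 action 4
     * steers the flow to RX queue 4, and queue 4's interrupt is assumed to be
     * affinitized to core 4.  Pinning the consuming thread to that same core
     * completes the flow-to-core affinity. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        /* Pin this thread to core 4 so it consumes packets where they arrive. */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(4, &set);
        int rc = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
        if (rc != 0) {
            fprintf(stderr, "affinity: %s\n", strerror(rc));
            return 1;
        }

        /* Plain UDP sink on port 11211; the interesting part is that the NIC,
         * not the OS scheduler, decides which core first sees each packet. */
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_addr.s_addr = htonl(INADDR_ANY),
            .sin_port = htons(11211),
        };
        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind");
            return 1;
        }
        char buf[2048];
        for (;;) {
            ssize_t n = recv(fd, buf, sizeof(buf), 0);
            if (n < 0)
                break;
            /* Process the request here, on the core the NIC steered it to. */
        }
        close(fd);
        return 0;
    }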


David Fair

David Fair, Unified Networking Mktg Mgr, Intel Networking Division


iWARP was on display at IDF14 in multiple contexts.  If you're not familiar with iWARP, it is an enhancement to Ethernet, based on an IETF standard, that delivers Remote Direct Memory Access (RDMA).  In a nutshell, RDMA allows an application to read or write a block of data from or to the memory space of another application, which can be in another virtual machine or even in a server on the other side of the planet.  It delivers high bandwidth and low latency by bypassing the system software kernel, avoiding interrupts, and eliminating extra copies of data.  A secondary benefit of kernel bypass is reduced CPU utilization, which is particularly important in cloud deployments.  More information about iWARP has recently been posted to Intel's website if you'd like to dig deeper.
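
For a feel of what RDMA looks like to the programmer, here is a minimal sketch of an RDMA read using the standard libibverbs API, the same verbs interface iWARP exposes.  It assumes a queue pair qp that is already connected and a local buffer registered as mr, with the peer's buffer address and rkey exchanged out of band; all of that setup plumbing is omitted.

    /* Minimal sketch: pull len bytes from a remote application's memory into
     * our registered local buffer (link with -libverbs).  Connection setup
     * and memory registration are assumed to have happened already. */
    #include <infiniband/verbs.h>
    #include <stdint.h>
    #include <string.h>

    int rdma_read_remote(struct ibv_qp *qp, struct ibv_mr *mr,
                         uint64_t remote_addr, uint32_t rkey, uint32_t len)
    {
        struct ibv_sge sge = {
            .addr   = (uint64_t)(uintptr_t)mr->addr,  /* where data lands  */
            .length = len,
            .lkey   = mr->lkey,
        };

        struct ibv_send_wr wr, *bad_wr = NULL;
        memset(&wr, 0, sizeof(wr));
        wr.opcode              = IBV_WR_RDMA_READ;    /* read peer memory  */
        wr.sg_list             = &sge;
        wr.num_sge             = 1;
        wr.send_flags          = IBV_SEND_SIGNALED;
        wr.wr.rdma.remote_addr = remote_addr;         /* at this address   */
        wr.wr.rdma.rkey        = rkey;                /* with this key     */

        /* The NIC moves the bytes: no per-byte kernel calls, no remote CPU
         * involvement, and no intermediate copies. */
        return ibv_post_send(qp, &wr, &bad_wr);
    }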


Intel is planning to incorporate iWARP technology in future server chipsets and systems-on-a-chip (SoCs).  To emphasize our commitment and show how far along we are, Intel showed a demo using the RTL from that future chipset in FPGAs, running Windows* Server 2012 SMB Direct and performing a boot and virtual machine migration over iWARP.  Naturally it was slow (about 1 Gbps) since it was FPGA-based, but Intel demonstrated that our iWARP design is already very far along and robust.  (That's Julie Cummings, the engineer who built the demo, in the photo with me.)


Jim Pinkerton, Windows Server Architect at Microsoft, joined me in a poster chat on iWARP and Microsoft's SMB Direct technology, which scans the network for RDMA-capable resources and uses RDMA pathways to automatically accelerate SMB-aware applications.  No new software and no system configuration changes are required for system administrators to take advantage of iWARP.



Jim Pinkerton also co-taught the “Virtualizing the Network to Enable a Software Defined Infrastructure” session with Brian Johnson of Intel’s Networking Division.  Jim presented specific iWARP performance results in that session that Microsoft has measured with SMB Direct.


Lastly, the NVMe (Non-Volatile Memory Express) community demonstrated "remote NVMe" made possible by iWARP.  NVMe is a specification for efficient communication with non-volatile memory, like flash, over PCI Express.  NVMe is many times faster than SATA or SAS, but like those technologies it targets local communication with storage devices.  iWARP makes it possible to securely and efficiently access NVM across an Ethernet network.  The demo showed remote access achieving the same throughput as local access (~550k IOPS), with a latency penalty of less than 10 µs.

Intel is supporting iWARP because it is layered on top of the TCP/IP industry standards.  iWARP goes anywhere the internet goes and does it with all the benefits of TCP/IP, including reliable delivery and congestion management. iWARP works with all existing switches and routers and requires no special datacenter configurations to work. Intel believes the future is bright for iWARP.
