
Wired Ethernet



We are interested in using the X540 Twinville dual-port 10GbE MAC/PHY in our application. The marketing data sheet


Intel® Ethernet Controllers and PHYs


lists the operating temperature at 0-55C. Yet the data sheet




on pg 1188 lists a maximum case temperature of Tcase Max = 107C.


Please elaborate on the difference between these two figures and what each means. We need the 0-70C temperature range if we are to use this part.


Thank you.

From Dawn Moore, General Manager of the Networking Division, read her latest blog:  Better Together: Balanced System Performance Through Network Innovation


The IT environment depends on hyperscale data centers and virtualized servers, making it crucial that upgrading to the latest technology be viewed from a comprehensive systems viewpoint. Need more data center performance? Maximize investment by upgrading the CPU, network and storage.

Due to the rapid growth of ever-more powerful mobile devices, enterprise networks need to keep pace. NBASE-T™ technology boosts the speed of twisted-pair copper cabling up to 100 meters in length well beyond its designed limit of 1 gigabit per second (Gbps). Capable of reaching 2.5 and 5 Gbps over 100m of Cat 5e cable, NBASE-T solutions implement a new type of signaling over twisted-pair cabling. The upcoming Intel® Ethernet Controller code named Sageville, a single-chip dual-port 10GBASE-T and NBASE-T controller, can auto-negotiate to allow the selection of the best speed: 100 Megabit Ethernet (100MbE), 1 Gigabit Ethernet (GbE), 2.5GbE, and 5GbE over Cat 5e or Cat 6, and 10GbE over Cat 6A or Cat 7. Watch our recent demo from Cisco Live.
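As a rough illustration of the auto-negotiation described above (this is a conceptual sketch, not Intel's implementation), each link partner advertises the speeds it supports and the link settles on the highest speed common to both:

```python
# Conceptual sketch of BASE-T speed auto-negotiation: the link runs at the
# highest speed advertised by both partners. Not an actual driver algorithm.
SPEEDS_MBPS = [100, 1000, 2500, 5000, 10000]  # 100MbE .. 10GbE

def negotiate(local_advertised, partner_advertised):
    """Return the highest speed (in Mbps) both ends advertise, or None."""
    common = set(local_advertised) & set(partner_advertised)
    return max(common) if common else None
```

For example, a 10GBASE-T/NBASE-T port facing a partner that only advertises up to 2.5GbE would settle on 2500 Mbps.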




This year, I attended Cisco Live for the first time and it was quite a large event. At Intel’s booth, we showcased a network service chaining demo, which was a combination of Cisco’s optimized UCS platform and Intel’s Ethernet Controller XL710 along with Intel’s Ethernet SDI Adapter using our new 100GbE silicon code named Red Rock Canyon. By using network service headers (NSH) to forward packets to virtualized network functions, virtual packet processing pipelines can be established on top of physical networks. But with the exponential increase in networking bandwidth, high performance forwarding solutions are needed. This demo showed how packets with NSH headers can be forwarded to virtual machines running on UCS platforms using the latest generation Intel adapters operating at 40GbE and 100GbE. Watch our recent demo from Cisco Live.



If you want to learn more about service creation using NSH, see the Cisco and Intel webinar from April 2015. Register here to view the replay.


Dynamic Service Creation (Making SDN Work for Your Success with Network Service Header)


Host: Dan Kurschner, Sr. Manager, SP Mobility Marketing
Speakers:
Paul Quinn, Cisco Distinguished Engineer, Cloud Systems Development
Humberto La Roche, Cisco Principal Engineer
Uri Elzur, Intel Engineer


Overview: We all like to talk about creating new customized services for the end user at “web speed”.  But today there is no way to automate service creation or to dynamically effect changes (augmentation) to existing services without touching the network topology.  This is because we use physical service chains across the data plane. To achieve automated flexibility in service creation, we must logically decouple the service plane from the transport plane—a software abstraction from specific network nodes. Cisco and Intel are leading a fast-growing ecosystem of network technology vendors, which includes Citrix and F5, to drive the Internet Engineering Task Force (IETF) standardization of the Network Services Header (NSH) protocol. Open source NSH implementations are available today for Open Virtual Switch (OVS) and OpenDaylight (ODL).
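The NSH that carries packets along a service chain is a small header prepended to the packet: a service path identifier (SPI) names the chain, and a service index (SI) tracks position in it. The sketch below packs and unpacks that 8-byte base + service path header following the layout the IETF later ratified in RFC 8300 (the 2015-era drafts differ slightly in the base-header bits); it is illustrative only:

```python
import struct

def pack_nsh(spi, si, md_type=2, next_proto=3, ttl=63, length=2):
    """Pack a minimal NSH base + service path header (8 bytes, RFC 8300 layout).

    spi: 24-bit Service Path Identifier; si: 8-bit Service Index,
    decremented by each service function along the path.
    """
    # Ver(2)=0 | O(1)=0 | U(1)=0 | TTL(6) | Length(6) | U(4) | MD Type(4) | Next Proto(8)
    first = (ttl << 22) | (length << 16) | (md_type << 8) | next_proto
    second = (spi << 8) | si  # SPI(24) | SI(8)
    return struct.pack("!II", first, second)

def unpack_nsh(data):
    first, second = struct.unpack("!II", data[:8])
    return {"spi": second >> 8, "si": second & 0xFF,
            "md_type": (first >> 8) & 0xF, "next_proto": first & 0xFF}
```

A forwarder classifies a packet once, stamps it with an SPI/SI, and every subsequent hop steers on those two fields instead of the physical topology — which is exactly the decoupling of service plane from transport plane described above.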


At Interop Las Vegas in April 2015, Intel took part in the NBASE-T Alliance public multi-vendor interoperability demonstration. Carl Wilson, Product Marketing Engineer, walks through the demo to show how it leveraged Intel's next generation single-chip 10GBASE-T controller supporting the NBASE-T intermediate speeds of 2.5Gbps and 5Gbps.


The demonstration showed NBASE-T™ technology deployed in the three key components of an enterprise network: wireless access points, switches and client devices. Specific products on display included NBASE-T technology-enabled wiring closet/campus LAN switches, 802.11ac Wave 2 Wireless LAN Access Points (WLAN APs), Network Interface Controller (NIC) in Personal Computer (PC), Network-Attached Storage (NAS), Field-Programmable Gate Array (FPGA), network and embedded processors and Power-over-Ethernet (PoE) chipsets. Connectivity between these products was based on a wide range of cabling configurations including Cat5e, Cat6 and Cat6A, with lengths extending up to 100m. For more information, check out the NBASE-T Alliance press release.


Intel Network Division is pleased to deliver Release 20.0 (codenamed FVL3), a package that contains new NVM images and software that provide customers with numerous new features and benefits when using the Intel® Ethernet XL710 and X710 controllers and adapters.


Highlights of Release 20.0 include:  

  • QSFP Configuration Utility (QCU) to allow customers to migrate from 4x10 to true 40GbE
  • Intel NVM Update Package (NUP), allowing customers to update older NVMs in the field
  • Support for Intel® Ethernet Modular Optical Cables (MOCs) and Active Optical Cables
  • XLAUI backplane support for our valued embedded customers
  • Major performance and maintenance improvements


Release 20.0 download links:

NVM Update Utility for Intel® Ethernet Converged Network Adapter XL710 

The NVM Update Package (NUP) must be used with Intel® Network Connections software release 20.0.  This package is intended to update existing LOM/embedded NVMs that are using the default dev-starter NVMs, and it can also be used to update Intel® Ethernet Controller XL710-based network adapter cards.  It will update the NVM version to 4.42.


Intel® Network Connections software release 20.0 CD download

This Zip file contains all of the Intel® Ethernet network drivers and software for currently supported versions of Windows*, Linux* and FreeBSD* for most Intel® Ethernet adapters, as found on the CD.


Administrative Tools for Intel® Network Adapters 

Includes the QSFP Configuration Utility. This requires that the NVM has already been updated to version 4.42.


Intel® Ethernet Connections Boot Utility, Preboot images, and EFI Drivers

Includes updated Preboot images and EFI drivers.


Please note that Intel recommends updating the NVM, software driver and preboot images together, as they are tightly coupled on the XL710.  For more details, see the documentation provided at the link above for the NVM Update Utility.
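Because the QCU and the preboot images gate on a minimum NVM version (4.42 in this release), a deployment script might check the installed version before attempting configuration. The helper below is a hypothetical sketch, not part of Intel's tools:

```python
# Hypothetical version gate (not an Intel utility): compare dotted NVM
# version strings component-by-component so "4.42" > "4.5" numerically.
def qcu_supported(nvm_version: str, minimum: str = "4.42") -> bool:
    """Return True if the installed NVM meets the minimum required version."""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(nvm_version) >= parse(minimum)
```

A script would run the NVM Update Package first whenever this check fails, then proceed with the QSFP Configuration Utility.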



Matt Eszenyi, Intel® XL710 (Fortville) PME



by David Fair, Product Marketing, Networking Division, Intel Corporation


Odd title for a networking article, don’t you think?  It’s odd for a couple of reasons, but reasons that reveal the vibrancy of Ethernet.  For four decades, Ethernet advanced on a “powers-of-ten” model from an initial 10 Mbps to 100 Mbps to 1GbE to 10GbE.  Part of why that worked was that the ratified IEEE Ethernet speeds kept well ahead of most market requirements.  Moving an entire Ethernet ecosystem to a new speed is expensive for everyone.  The “powers-of-ten” model helped control those costs.


What changed?  Well, my theory is that Ethernet simply got too successful for the powers-of-ten model.  By that I mean that the volumes got large enough for some specific requirements at more fine-grained speeds to warrant infrastructure upgrades to support those speeds. 


It is specifically the rapid growth of wireless access points, and increases in their speed, that creates the problem driving the desire for Next Generation Enterprise Access BASE-T.   Not in the data center, but rather in the office.  Most enterprises have built out a wireless infrastructure with CAT 5e or 6 in the ceilings connecting wireless access points at 1GbE, in addition to connecting wired desktops and workstations.  But the latest wireless spec, IEEE 802.11ac, can drive bandwidth back on the wire well beyond 1GbE.  And some of those desktops and workstations may be chomping at the bit as well, so to speak, to go faster than 1GbE.  The problem is that the next “powers of 10” solution from the IEEE, 10GBASE-T, won’t work on CAT 5e and will work on CAT 6 only to 55 meters.
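The cabling constraints described above can be summarized in a small table (an illustrative summary of the standardized limits discussed here, not an exhaustive specification):

```python
# (max standardized BASE-T speed in Gbps, reach at that speed in meters)
# per installed cable category, per the constraints discussed above.
BASE_T_LIMITS = {
    "Cat5e": (1, 100),   # 10GBASE-T is not specified over Cat 5e
    "Cat6":  (10, 55),   # 10GbE only to ~55 m; 1GbE runs the full 100 m
    "Cat6A": (10, 100),  # 10GbE at the full 100 m channel length
}
```

The gap is plain: the enormous installed base of Cat 5e and Cat 6 ceiling runs cannot reach 10GbE at 100 m, which is exactly the niche the 2.5/5 Gbps intermediate speeds target.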


As often happens in these situations, alliances establish themselves to build momentum to influence the IEEE to consider their proposal.  In this case, there are now two such groups calling themselves the “NBASE-T Alliance” and the “MGBASE-T Alliance” respectively.  Both are proposing intermediate “step-down” speeds of 2.5 Gbps and 5 Gbps.


To learn more about 2.5G/5G technology and standardization related efforts, please join the Ethernet Alliance for its upcoming “Ethernet 104: Introduction to 2.5G/5G BASE-T Ethernet” webinar on Thursday, May 21, 2015, at 10am PDT. Additional information is available, and registration is now open, at http://bit.ly/Ethernet104.

From Dawn Moore, General Manager of the Networking Division, read her latest blog on 10GbE in the Intel® Xeon® processor D product family: The Intel® Ethernet 10GbE revolution that was 12 years in the making

Read two recent blogs from Dawn Moore, General Manager of the Networking Division.


Intel's demo with Cisco at Mobile World Congress illustrates the latest in network virtualization overlays and Ethernet’s role in the data center.


Intel® Ethernet demos at the OCP Summit shows the performance and low-latency needed for Rack Scale Architecture data centers.

It has been a while since I’ve made a blog posting.  That is because I moved from working on virtualization and manageability technologies to Intel switching products.  Last week I was fortunate to be at the Open Compute Summit in San Jose, CA.


I was only able to attend one actual session while there, because the rest of my time was spent in the Intel® booth presenting a technology preview of Intel’s upcoming Red Rock Canyon switch product and the accompanying quick video.  It was exciting to be able to demonstrate and discuss Red Rock Canyon with people.


We made a quick video of me doing my chat. It’s not my most fluid discussion, but it gets the point across, and luckily the pretty demo GUI distracts from my ugly mug.

Red Rock Canyon will be available in Q3 of this year.  At that time I will have more videos, blogs, papers, etc. Until then, I hope this video will give you some insight.


From Dawn Moore, General Manager of the Networking Division, read her latest blog on the future of Ethernet and the market developments that will ensure it remains ubiquitous.


The industry continues to advance the iWARP specification for RDMA over Ethernet, first ratified by the Internet Engineering Task Force (IETF) in 2007.  This article in Network World, “iWARP Update Advances RDMA over Ethernet for Data Center and Cloud Networks,” co-authored by Chelsio Communications and Intel, describes two new features that have been added to help software developers of RDMA code by aligning iWARP more tightly with RDMA technologies based on the InfiniBand network and transport, i.e., InfiniBand itself and RoCE.  By bringing these technologies into alignment, we realize the promise that the application developer need not concern herself with which of these is the underlying network technology -- RDMA will "just work" on all.  - David Fair



Certainly one of the miracles of technology is that Ethernet continues to be a fast growing technology 40 years after its initial definition.  That was May 23, 1973, when Bob Metcalfe wrote his memo to his Xerox PARC managers proposing “Ethernet.”  To put things in perspective, 1973 was the year a signed ceasefire ended the Vietnam War.  The U.S. Supreme Court issued their Roe v. Wade decision. Pink Floyd released “Dark Side of the Moon.”  In New York City, Motorola made the first handheld mobile phone call (and, no, it would not fit in your pocket).   1973 was four years before the first Apple II computer became available, and eight years before the launch of the first IBM PC. In 1973, all consumer music was analog: vinyl LPs and tape.  It would be nine more years before consumer digital audio arrived in the form of the compact disc—which ironically has long since been eclipsed by Ethernet packets as the primary way digital audio gets to consumers.


The key reason for Ethernet’s longevity, imho, is its uncanny, Darwinian ability to evolve to adapt to ever-changing technology landscapes.  A tome could be written about the many technological challenges to Ethernet and its evolutionary response, but I want to focus here on just one of these: the emergence of multi-core processors in the first decade of this century.  The problem Bob Metcalfe was trying to solve was how to get packets of data from computers to computers, and, of course, to Xerox laser printers.  But multi-core challenges that paradigm, because Ethernet’s job as Bob defined it is done when a packet reaches a computer’s processor—before it reaches the correct core in that processor waiting to consume the data.


Intel developed a technology to help address that problem, and we call it “Intel® Ethernet Flow Director.”  We implemented it in all of Intel’s most current 10GbE and 40GbE controllers. What Intel® Ethernet Flow Director does in-a-nutshell is establish an affinity between a flow of Ethernet traffic and the specific core in a processor waiting to consume that traffic. I encourage you to watch a two and a half minute video explanation of how Intel® Ethernet Flow Director works.  If that, as I hope, just whets your appetite to learn more about this Intel technology, we also have a white paper that delves into deeper details with an illustration of what Intel® Ethernet Flow Director does for a “network stress test” application like Memcached.  For that, click here.  We hope you find both the video and white paper enjoyable and illuminating.
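As a software analogy for the affinity described above (this is an illustrative sketch, not the hardware mechanism inside the controller), a flow director can observe which core an application transmits a flow from, then steer received packets of the same flow to that core's receive queue:

```python
# Software analogy of flow-to-core affinity: learn the consuming core from
# the transmit side, then steer matching received packets back to it.
# Illustrative only; the real mechanism lives in the NIC hardware.
flow_table = {}

def note_transmit(flow_tuple, core):
    """Record that the application handling this flow runs on `core`."""
    flow_table[flow_tuple] = core

def steer_receive(flow_tuple, default_queue=0):
    """Return the RX queue (pinned to a core) for an incoming packet."""
    return flow_table.get(flow_tuple, default_queue)
```

The payoff is that a packet arrives on a queue whose interrupt is serviced by the very core running the application that wants the data, avoiding cross-core cache traffic—the effect the Memcached white paper quantifies.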


David Fair

I was going through folders on my laptop in an effort to free up some space when I came upon a presentation I was working on before my transition to new responsibilities here within the Intel Networking Division.  The presentation was going to be the basis for a new video related to a blog and white paper I did regarding network performance for BMCs.


Seemed a shame to let all that work go to waste, so I finished up the presentation and quickly recorded a video.


The paper discussing this topic is located at https://www-ssl.intel.com/content/www/us/en/ethernet-controllers/nc-si-overview-and-performance-notes.html


And the video can be found at https://www.youtube.com/watch?v=-fA7_3-UlYY&list=UUAug6KFsT_2tC1zLwe2h6uA


Hope it is of use.






David Fair, Unified Networking Mktg Mgr, Intel Networking Division


iWARP was on display at IDF14 in multiple contexts.  If you’re not familiar with iWARP, it is an enhancement to Ethernet based on an IETF standard that delivers Remote Direct Memory Access (RDMA).  In a nutshell, RDMA allows an application to read or write a block of data from or to the memory space of another application, which can be in another virtual machine or even a server on the other side of the planet.  It delivers high bandwidth and low latency by bypassing the system software kernel, avoiding interrupts, and avoiding extra copies of data.  A secondary benefit of kernel bypass is reduced CPU utilization, which is particularly important in cloud deployments.  More information about iWARP has recently been posted to Intel’s website if you’d like to dig deeper.


Intel is planning to incorporate iWARP technology in future server chipsets and systems-on-a-chip (SOCs).  To emphasize our commitment and show how far along we are, Intel showed a demo using the RTL from that future chipset in FPGAs running Windows* Server 2012 SMB Direct and doing a boot and virtual machine migration over iWARP.  Naturally it was slow (about 1 Gbps) since it was FPGA-based, but Intel demonstrated that our iWARP design is already very far along and robust.  (That’s Julie Cummings, the engineer who built the demo, in the photo with me.)


Jim Pinkerton, Windows Server Architect, from Microsoft joined me in a Poster Chat on iWARP and Microsoft’s SMB Direct technology, which scans the network for RDMA-capable resources and uses RDMA pathways to automatically accelerate SMB-aware applications.  No new software and no system configuration changes are required for system administrators to take advantage of iWARP.



Jim Pinkerton also co-taught the “Virtualizing the Network to Enable a Software Defined Infrastructure” session with Brian Johnson of Intel’s Networking Division.  Jim presented specific iWARP performance results in that session that Microsoft has measured with SMB Direct.


Lastly, the NVMe (Non-Volatile Memory Express) community demonstrated “remote NVMe” made possible by iWARP.  NVMe is a specification for efficient communication to non-volatile memory like flash over PCI Express.  NVMe is many times faster than SATA or SAS, but like those technologies, targets local communication with storage devices.  iWARP makes it possible to securely and efficiently access NVM across an Ethernet network.  The demo showed remote access occurring with the same bandwidth (~550k IOPS) with a latency penalty of less than 10 µs.

Intel is supporting iWARP because it is layered on top of the TCP/IP industry standards.  iWARP goes anywhere the internet goes and does it with all the benefits of TCP/IP, including reliable delivery and congestion management. iWARP works with all existing switches and routers and requires no special datacenter configurations to work. Intel believes the future is bright for iWARP.
