
Wired Ethernet


Read two recent blogs from Dawn Moore, General Manager of the Networking Division.

 

Intel's demo with Cisco at Mobile World Congress illustrates the latest in network virtualization overlays and Ethernet’s role in the data center.

 

Intel® Ethernet demos at the OCP Summit show the performance and low latency needed for Rack Scale Architecture data centers.

It has been a while since I've posted a blog.  That's because I moved from working on virtualization and manageability technologies to working on Intel switching products.  Last week I was fortunate to be at the Open Compute Summit in San Jose, CA.

 

I was only able to attend one actual session while there, because the rest of my time was spent in the Intel® booth presenting a technology preview of Intel's upcoming Red Rock Canyon switch product.  It was exciting to demonstrate and discuss Red Rock Canyon with people.

 

We made a quick video of my booth chat.  It's not my most fluid discussion, but it gets the point across, and luckily the pretty demo GUI distracts from my ugly mug.

Red Rock Canyon will be available in Q3 of this year.  At that time I will have more videos, blogs, papers, and so on.  Until then, I hope this video will give you some insight.

 

Read the latest blog from Dawn Moore, General Manager of the Networking Division, on the future of Ethernet and the market developments that will ensure it remains ubiquitous.

https://communities.intel.com/community/itpeernetwork/datastack/blog/2015/03/01/ubiquitous-ethernet-poised-for-greater-success-in-the-future

The industry continues to advance the iWARP specification for RDMA over Ethernet, first ratified by the Internet Engineering Task Force (IETF) in 2007.  This article in Network World, “iWARP Update Advances RDMA over Ethernet for Data Center and Cloud Networks,” co-authored by Chelsio Communications and Intel, describes two new features added to help developers of RDMA software by aligning iWARP more tightly with the RDMA technologies based on the InfiniBand network and transport, i.e., InfiniBand itself and RoCE.  By bringing these technologies into alignment, we realize the promise that application developers need not concern themselves with which underlying network technology is in use -- RDMA will "just work" on all of them.  - David Fair

 

 

Certainly one of the miracles of technology is that Ethernet continues to be a fast-growing technology 40 years after its initial definition.  That was May 23, 1973, when Bob Metcalfe wrote the memo to his Xerox PARC managers proposing “Ethernet.”  To put things in perspective, 1973 was the year a signed ceasefire ended U.S. involvement in the Vietnam War.  The U.S. Supreme Court issued its Roe v. Wade decision. Pink Floyd released “Dark Side of the Moon.”  In New York City, Motorola made the first handheld mobile phone call (and, no, it would not fit in your pocket).   1973 was four years before the first Apple II computer became available, and eight years before the launch of the first IBM PC. In 1973, all consumer music was analog: vinyl LPs and tape.  It would be nine more years before consumer digital audio arrived in the form of the compact disc, which ironically has long since been eclipsed by Ethernet packets as the primary way digital audio gets to consumers.

[Photo: Motorola's first handheld mobile phone]

The key reason for Ethernet’s longevity, imho, is its uncanny, Darwinian ability to adapt to ever-changing technology landscapes.  A tome could be written about the many technological challenges to Ethernet and its evolutionary responses, but I want to focus here on just one of them: the emergence of multi-core processors in the first decade of this century.  The problem Bob Metcalfe was trying to solve was how to get packets of data from computers to computers and, of course, to Xerox laser printers.  But multi-core challenges that paradigm: Ethernet’s job, as Bob defined it, is done when data reaches a computer’s processor, not when it reaches the specific core in that processor that is waiting to consume it.

 

Intel developed a technology to help address that problem, and we call it Intel® Ethernet Flow Director.  We implemented it in all of Intel’s most current 10GbE and 40GbE controllers. In a nutshell, Intel® Ethernet Flow Director establishes an affinity between a flow of Ethernet traffic and the specific core in a processor waiting to consume that traffic. I encourage you to watch a two-and-a-half-minute video explanation of how Intel® Ethernet Flow Director works.  If that, as I hope, just whets your appetite to learn more about this Intel technology, we also have a white paper that delves into deeper detail, with an illustration of what Intel® Ethernet Flow Director does for a “network stress test” application like Memcached.  We hope you find both the video and the white paper enjoyable and illuminating.
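
To make the flow-to-core idea concrete, here is a minimal Python sketch. It is purely conceptual: it is not Intel's implementation or driver interface, and the core count and flow tuple are invented for illustration. It contrasts hash-based spreading (RSS-style) with an exact-match table that steers each flow to the core where its consuming application last transmitted, which is the kind of affinity Flow Director maintains in hardware.

```python
# Conceptual sketch only: illustrates flow-to-core affinity, not the actual
# Intel Ethernet Flow Director hardware or driver interface.
import hashlib

NUM_CORES = 8

def rss_core(flow):
    """RSS-style spreading: hash the 5-tuple; the chosen core may not be
    the one running the application that owns this flow."""
    digest = hashlib.sha256(repr(flow).encode()).digest()
    return digest[0] % NUM_CORES

class FlowAffinityTable:
    """Flow-Director-style idea: remember which core last transmitted on a
    flow and steer subsequent receives for that flow to the same core."""
    def __init__(self):
        self.flow_to_core = {}

    def note_transmit(self, flow, core):
        # Learn the affinity when the application sends on this flow.
        self.flow_to_core[flow] = core

    def receive_core(self, flow):
        # Exact-match hit steers to the consuming core; a miss falls back to RSS.
        return self.flow_to_core.get(flow, rss_core(flow))

# Hypothetical example: an application pinned to core 5 owns a memcached-style flow.
table = FlowAffinityTable()
flow = ("tcp", "10.0.0.1", 40000, "10.0.0.2", 11211)
table.note_transmit(flow, core=5)
print("RSS-style hash would pick core", rss_core(flow))
print("Affinity table steers to core ", table.receive_core(flow))
```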

 

David Fair

I was going through folders on my laptop in an effort to free up some space when I came upon a presentation I was working on before my transition to new responsibilities here within the Intel Networking Division.  The presentation was going to be the basis for a new video related to a blog and white paper I did regarding network performance for BMCs.

    

Seemed a shame to let all that work go to waste, so I finished up the presentation and quickly recorded a video.

 

The paper discussing this topic is located at https://www-ssl.intel.com/content/www/us/en/ethernet-controllers/nc-si-overview-and-performance-notes.html

 

And the video can be found at https://www.youtube.com/watch?v=-fA7_3-UlYY&list=UUAug6KFsT_2tC1zLwe2h6uA

    

Hope it is of use.

    

Thanx,

 

 

Patrick

David Fair, Unified Networking Mktg Mgr, Intel Networking Division

 

iWARP was on display at IDF14 in multiple contexts.  If you’re not familiar with iWARP, it is an enhancement to Ethernet, based on an IETF standard, that delivers Remote Direct Memory Access (RDMA).  In a nutshell, RDMA allows an application to read or write a block of data from or to the memory space of another application, which can be in another virtual machine or even a server on the other side of the planet.  It delivers high bandwidth and low latency by bypassing the kernel of system software, avoiding interrupts and extra copies of data.  A secondary benefit of kernel bypass is reduced CPU utilization, which is particularly important in cloud deployments.  More information about iWARP has recently been posted to Intel’s website if you’d like to dig deeper.
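
Real iWARP applications are written against RDMA verbs interfaces, but as a loose, local-only analogy for the core idea (data landing directly in a buffer the other application already owns, with no per-message copy through the kernel socket path), here is a short Python sketch using shared memory. The region name and size are made up for illustration.

```python
# Loose local analogy for RDMA-style direct memory placement (not iWARP code):
# a writer places bytes directly into a memory region the reader already
# mapped, rather than copying them through a socket buffer.
from multiprocessing import Process, shared_memory

REGION = "rdma_analogy_demo"   # made-up region name for this illustration
SIZE = 64

def writer():
    shm = shared_memory.SharedMemory(name=REGION)
    shm.buf[:13] = b"hello, world!"      # the "remote write" lands in place
    shm.close()

if __name__ == "__main__":
    region = shared_memory.SharedMemory(name=REGION, create=True, size=SIZE)
    p = Process(target=writer)
    p.start()
    p.join()
    print(bytes(region.buf[:13]).decode())   # reader sees data in its own buffer
    region.close()
    region.unlink()
```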

 

Intel is planning to incorporate iWARP technology in future server chipsets and systems-on-a-chip (SoCs).  To emphasize our commitment and show how far along we are, Intel showed a demo using the RTL from that future chipset in FPGAs, running Windows* Server 2012 SMB Direct and performing a boot and virtual machine migration over iWARP.  Naturally it was slow (about 1 Gbps) since it was FPGA-based, but it demonstrated that our iWARP design is already very far along and robust.  (That’s Julie Cummings, the engineer who built the demo, in the photo with me.)

 

Jim Pinkerton, Windows Server Architect at Microsoft, joined me in a poster chat on iWARP and Microsoft’s SMB Direct technology, which scans the network for RDMA-capable resources and uses RDMA pathways to automatically accelerate SMB-aware applications.  No new software and no system configuration changes are required for system administrators to take advantage of iWARP.

 

 

Jim Pinkerton also co-taught the “Virtualizing the Network to Enable a Software Defined Infrastructure” session with Brian Johnson of Intel’s Networking Division.  Jim presented specific iWARP performance results in that session that Microsoft has measured with SMB Direct.

 

Lastly, the NVMe (Non-Volatile Memory Express) community demonstrated “remote NVMe” made possible by iWARP.  NVMe is a specification for efficient communication with non-volatile memory, like flash, over PCI Express.  NVMe is many times faster than SATA or SAS, but like those technologies it targets local communication with storage devices.  iWARP makes it possible to securely and efficiently access NVM across an Ethernet network.  The demo showed remote access running at the same throughput (~550k IOPS) as local access, with a latency penalty of less than 10 µs.

Intel is supporting iWARP because it is layered on top of the TCP/IP industry standards.  iWARP goes anywhere the internet goes and does it with all the benefits of TCP/IP, including reliable delivery and congestion management. iWARP works with all existing switches and routers and requires no special datacenter configurations to work. Intel believes the future is bright for iWARP.

Hi, my name is Craig Pierce and I support Intel’s Unified Networking Program.

 

As an Applications Engineer with Intel’s Networking Division, I work in Folsom, California, and am responsible for end-user customer support for “storage over network” on Intel Ethernet products. This includes functionality support for Fibre Channel over Ethernet (FCoE), iSCSI, and Network File System (NFS) for secondary storage attachment, as well as SAN boot, which includes iSCSI and FCoE legacy boot.

 

In 2014, I began testing as many configurations and options for secondary and tertiary storage and for storage boot, using as many operating systems as my schedule allows. I plan to post the configurations, videos, and use cases that I am evaluating in my lab.

 

There are several reasons I am doing this: my job is enablement and support for Intel products; I want to document storage and storage-boot setup processes for Intel® Converged Network Adapters; and the more research I do, the better I can support my customers.

 

I may entertain some recommendations from the community; however, I have a pretty good list of items that I want to test.  If I get support issues from the communities, I will always point you toward the standardized process with your OEM.

 

In my lab, I have both 1 Gigabit and 10 Gigabit network switches with some limited Layer 3 capability. I plan to set up a multi-routed environment with SAN access to the storage frames that will supply my targets.

 

Here’s an example of the kind of content I want to share with the community.  This is a quick connect guide for FCoE on Windows, and a Linux version will follow soon.

 

http://www.intel.com/content/www/us/en/ethernet-controllers/fcoe-windows-server-quick-connect-guide.html

 

Currently, I am putting an iSCSI boot video together that shows SAN-Boot with VLANs and primary/secondary ports. I will post the link to the video in this blog as soon as it’s ready.

 

The next steps will be to walk through other supported operating systems and post the results. 

 

Talk to you soon,

Craig

dougb

Intel® Ethernet and Security

Posted by dougb Apr 16, 2014

There has been a very famous tale of a security issue in the news lately.  Others have done a great job explaining it, so I won't even try.   Some people are concerned that Intel® Ethernet products might be at risk.  Coming to this blog is a great first step, but there is a much better place for you to look: the Intel® Product Security Center.  It lists all the advisories that Intel is currently tracking.  A great resource, it has clear version and update suggestions for products that have issues.  It even has a mailing list option so you can get updates when they come out.  The issue in the news is listed as "Multiple Intel Software Products and API Services impacted by CVE-2014-0160," and (spoiler alert) Intel Ethernet doesn't have any products listed.  If we did have any security-related issues, you would find them there.  I strongly suggest you add the Intel® Product Security Center to your bookmarks and sign up for the e-mail.  Vigilance is the first step to better security, and Intel tries to make it easier for busy IT professionals to stay informed.

The following blog post was originally published on NetApp’s SANbytes blog to commemorate the launch of the NetApp X1120A-R6 10GBASE-T adapter – the latest milestone in the long and fruitful relationship between the two companies. We’re reposting it here because it's a good overview of the state of the art in Ethernet storage networking.

 

When two leaders like Intel and NetApp work together on storage networking, the industry should expect big things. Intel® Xeon® processor-based storage systems from NetApp, for example, are delivering new levels of performance for customers around the world who are trying to keep up with the ever-increasing amounts of data generated by their users and applications. Intel and NetApp have also collaborated on many engineering efforts to improve performance of storage protocols including iSCSI and NFS.

 

This week’s announcement of the NetApp X1120A-R6 10GBASE-T adapter, which is based on the Intel® Ethernet Controller X540, is another significant development for Ethernet-based storage. Virtualization and converged data and storage networking have been key drivers of the migration to 10 Gigabit Ethernet (10GbE), and NetApp was an early adopter of the technology. Today, many applications are optimized for 10GbE. VMware vSphere, for example, allows vMotion (live migration) events to use up to eight gigabits per second of bandwidth and to move up to eight virtual machines simultaneously. These actions rely on high-bandwidth connections to network storage systems.
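
As a rough, hypothetical illustration of why that bandwidth matters (the 16 GB memory footprint is an assumed example value, not a measured figure), compare the time to copy one VM's memory at Gigabit versus near-10-Gigabit rates:

```python
# Rough illustration of live-migration copy time vs. link speed.
# The 16 GB VM memory footprint is an assumed example value.
vm_memory_gb = 16

def seconds_to_copy(gigabytes, gbps):
    return gigabytes * 8 / gbps        # gigabytes -> gigabits, then divide by rate

print(f"{seconds_to_copy(vm_memory_gb, 1.0):.0f} s at 1 Gbps")    # ~128 s
print(f"{seconds_to_copy(vm_memory_gb, 8.0):.0f} s at ~8 Gbps")   # ~16 s
```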

 

10 Gigabit connectivity in these systems isn’t new, so why is the NetApp X1120A-R6 adapter special? For starters, it’s the first 10GBASE-T adapter supported by NetApp storage systems (including the FAS3200, FAS6200, and the new FAS8000 lines), and we believe 10GBASE-T will have huge appeal for data center managers who are looking to upgrade from Gigabit Ethernet to a higher-speed network.

 

There are a few key reasons for this:

 

  • Cost-effective cabling: 10GBASE-T allows IT to use their existing Category 6/6A twisted-pair copper cabling. And for new installations, this cabling is far more cost-effective than other options.
  • Distance flexibility: 10GBASE-T supports distances up to 100 meters and can be field-terminated, making it a great choice for short or long connections in the data center.
  • Backward compatibility: Support for Gigabit Ethernet (1000BASE-T) allows for easy, phased migrations to 10GbE.

The NetApp X1120A-R6 adapter gives data center operators a new option for cost-effective and flexible high-performance networking. For the first time, they’ll be able to use 10GBASE-T to connect from server to switch
to storage system.

 

Intel and NetApp have worked together to drive the market transition to 10GbE unified networking for many years, and this announcement is another example of our commitment to bringing these technologies to our customers.

 

If you’d like to learn more about the benefits of 10GBASE-T, here are a couple of great resources:

 

 

Follow me on Twitter @Connected_Brian 

Intel is pleased to announce the Intel® Ethernet Server Adapter X520 Series for the Open Compute Project.

 

Available in both single- and dual-port SKUs, these adapters deliver a proven, reliable solution for high-bandwidth, low-cost 10GbE network connections.  Increased I/O performance with Intel® Data Direct I/O Technology (DDIO) and support for intelligent offloads make this adapter a perfect match for scaling performance on Intel® Xeon® processor E5/E7 based servers.

 

The best-selling Intel® Ethernet Converged Network Adapter X520 Series is known for its high performance, low latency, reliability, and flexibility.  The addition of the Intel® Ethernet Server Adapter X520 Series for Open Compute Project to the family delivers all the X520 capabilities in an Open Compute Project (OCP) form factor.  OCP is a Facebook* initiative to openly share custom data center designs to improve both cost and energy efficiency across the industry.  OCP takes a minimalist approach to system design, reducing complexity and cost and allowing data centers to scale out more effectively.  By publishing the designs and specifications of this low-power, low-cost hardware, the project can reduce the cost of infrastructure for businesses large and small.

 

For more information on the Intel® Ethernet Server Adapter X520 for Open Compute Project, visit:  http://www.intel.com/content/www/us/en/network-adapters/converged-network-adapters/server-adapter-x520-da1-da2-for-ocp-brief.html

 

For more general information on the Open Compute Project initiative, visit:  http://opencompute.org/

The Intel Developer Forum (IDF) was held last week and it was an amazing collection of the brightest minds in the industry looking to push the technology envelope.

 

As in years past, when it comes time for Intel to debut its new products at IDF, they are always better and more powerful.  But this year we had one announcement that bucked that trend: a new product that was better and less powerful.

 

I’m talking about microservers, a new trend in computing where multiple lower-performance, lower-power processors are used as servers for a new class of computing tasks. It was one of the topics I presented at two poster chats on Tuesday and to about 60 attendees during my technical session on Wednesday.

 

The microserver initiative fits with Intel’s strategy of developing “workload-optimized” solutions.  There are a lot of computing tasks, such as memory caching, dedicated web hosting, and cold storage, where the processing and I/O demands per server are light.

 

To meet these needs, we formally introduced the Intel® Atom™ C2000 at a special event one week before IDF.

 

The density of microservers on a board makes networking these new systems a challenge. The power profile of the Atom™ C2000, for example, allows data center shelves with 48 microservers each. A standard telecom rack can hold 12 of these shelves, for a total of 576 microservers per rack. That’s more network connections than many enterprise workgroups have.

 

However, by using the Intel® Ethernet Switch FM5224 chip, all of the processors on a shelf can be internetworked so that only a few uplink connections are needed to a top-of-rack switch.  This makes it manageable from a connectivity perspective.
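
A quick back-of-the-envelope sketch in Python, using the shelf and rack counts above; the number of uplinks per shelf is an assumption for illustration, not a product specification:

```python
# Back-of-the-envelope port math for the rack described above.
# The "uplinks_per_shelf" value is an assumption for illustration.
servers_per_shelf = 48
shelves_per_rack = 12
uplinks_per_shelf = 2          # assumed: a couple of 10GbE/40GbE uplinks

servers_per_rack = servers_per_shelf * shelves_per_rack
print(f"Microservers per rack: {servers_per_rack}")                 # 576

# Direct attach: every microserver needs its own top-of-rack port.
print(f"ToR ports, direct attach: {servers_per_rack}")              # 576

# With an embedded shelf switch, only the uplinks reach the ToR.
tor_ports_aggregated = shelves_per_rack * uplinks_per_shelf
print(f"ToR ports, shelf-level switching: {tor_ports_aggregated}")  # 24
```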

 

But there’s still traffic from all of the processors that needs to be routed.  That is why we’re evolving our Open Network Platform (ONP) software-defined networking (SDN) architecture to support microservers. 

 

My colleague Recep recently wrote a post on this blog describing how SDN works, so I’ll just summarize here: SDN shifts the packet-processing intelligence from the switch to a centralized controller. This reduces the complexity of network design and increases throughput.
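
As a toy illustration of that split (this is not ONP code or any real controller API; the subnet-to-port policy is invented), the sketch below models a switch that only performs fast table lookups while a controller decides which rule to install on a table miss:

```python
# Toy model of the SDN split: the switch does fast match/action lookups,
# while the controller holds the decision logic and installs rules on a miss.

class Controller:
    """Centralized policy: map destination prefixes to output ports."""
    def __init__(self, policy):
        self.policy = policy            # e.g. {"10.1.": 1, "10.2.": 2}

    def decide(self, dst_ip):
        for prefix, port in self.policy.items():
            if dst_ip.startswith(prefix):
                return port
        return 0                        # default/drop port

class Switch:
    """Data plane: consult the local flow table; punt misses to the controller."""
    def __init__(self, controller):
        self.controller = controller
        self.flow_table = {}

    def forward(self, dst_ip):
        if dst_ip not in self.flow_table:            # table miss -> ask controller
            self.flow_table[dst_ip] = self.controller.decide(dst_ip)
        return self.flow_table[dst_ip]               # later packets hit the fast path

ctrl = Controller({"10.1.": 1, "10.2.": 2})
sw = Switch(ctrl)
print(sw.forward("10.1.0.7"))   # first packet consults the controller -> port 1
print(sw.forward("10.1.0.7"))   # subsequent packets use the installed rule -> port 1
```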

 

Many microservers today are sold with proprietary networking software.  The issue with this is vendor lock-in and the potential for a slower pace of innovation.  This last point is important since many of the applications for microservers are cloud-based, and that market is evolving very quickly.

 

Intel’s Open Network Platform combines the performance of our Intel Ethernet Switch FM5000/FM6000 high-volume, high-performance switching hardware with ONP software based on Wind River Linux.  In addition, there are open APIs for drivers to eliminate vendor lock-in, as well as APIs for controllers and a wide variety of third-party and OEM apps.

 

What this means for microserver OEMs is that they can bring their own unique, differentiating software to their products while integrating with an SDN controller or other third-party app.  Cost is kept to a minimum while functionality and differentiation are maximized.

 

The reception to this message at IDF13 was good, and several OEMs are already planning to develop microserver products.  We’ll take a look at some of those designs in a future blog post.

Software-defined networking (SDN) is a major revolution in networking that is just starting to move from bleeding edge to leading edge customer adoption. 

 

But already, Intel® is starting to think about what comes next and how the software defined model can be more pervasive in cloud-scale data centers. A key factor in those plans is the Open Networking Platform (ONP) that we announced in April.

 

That was my takeaway from the announcement about Intel’s cloud infrastructure data center strategy.  If you read the press release and the related presentations and watch the videos, you will see that the emphasis is on the strategy and several new microprocessors, including the Avoton and Rangely 22nm Atom processors, and the 14nm Broadwell SoC.

 

I want to unpack a bit about how the ONP fits in this next-generation data center strategy.  The architecture of the next-generation cloud infrastructure data center is built on three technology pillars:

  • Workload-optimized technologies: Examples here include deploying servers with different CPU, memory, and I/O capabilities based on the workload.
  • Composable resources: Moving from building server racks out of discrete servers, networking equipment, etc., to deploying a more integrated solution. Intel is making strides here with its Rack Scale Architecture initiative.
  • Software-defined infrastructure: Using a software controller to direct data flows to available resources, which helps overcome bottlenecks and keeps data centers from overprovisioning.

 

The ONP initiative combines our low-latency network processing, switching and interface hardware with a customizable software stack that works with third-party SDN controllers and network applications.

 

Already, the ONP “Seacliff Trail” 10GbE / 40GbE SDN top-of-rack switch plays a key role in the Rack Scale Architecture.

 

But the ONP also provides the foundation for a future where the SDN controller evolves into a workload orchestration controller – directing data flows to network resources and orchestrating computing, memory, and storage resources as well.

 

Our open approach means that ONP infrastructure can support new controllers or orchestration applications.  The switching architecture of the Intel Ethernet Switch FM6000 chip family is designed for evolving network standards, with industry-low L3 latency (400 ns), high throughput, and microcode programmability that gives it plenty of headroom to support future standards.

 

Like the Intel strategy for next-generation cloud data center infrastructure, ONP is both comprehensive and high performance, with the openness and flexibility that allows our customers to innovate as well. 

I’m putting the final touches on my presentation for IDF13 (session #CLDS006) which will look at the emerging networking requirements for new server form-factors, specifically microservers and rack scale architectures.

 

Microservers contain multiple servers on a single board and are an emerging solution for some targeted data center workloads requiring a high number of low-power processors.  As these servers evolve and new CPUs are introduced (like the Intel® Atom™ C2000 processor family), CPU density is increasing.  At IDF13, I’ll talk about the emerging networking requirements in this category and how the Intel Ethernet Switch family and the Open Networking Platform address these needs.

 

Another hot topic for cloud data centers is the concept of rack scale architecture systems – which are pre-configured, high-performance data center racks that can be rapidly deployed to meet new workload requirements.  In this part of my talk, I’ll cover how the Open Networking Platform is being extended to provide efficient connectivity within these high-performance systems.

 

Here’s an outline of my presentation:

  • Data Center Trends
  • New Microserver Solutions
  • Intel® Ethernet Switch Silicon Architecture
  • Rack Scale Architecture
  • Software-defined Infrastructure
  • Example applications and proof points

 

I hope to see you at my IDF session at 3:45PM on September 11th in San Francisco. You are also invited to my poster chats on Switching for High Density Microservers from 11:30-1:00 and from 3:00-4:30 on September 10th.

 

If you are still on the fence about the value of attending IDF – or now want to register – I have included some links to the IDF website below.

 

Why Attend IDF13:

https://www-ssl.intel.com/content/www/us/en/intel-developer-forum-idf/san-francisco/2013/why-attend-idf-2013-san-francisco.html

 

Register now:

https://secure.idfregistration.com/IDF2013/

 

More information on the IDF13 keynote:

http://www.intel.com/content/www/us/en/intel-developer-forum-idf/san-francisco/2013/idf-2013-san-francisco-keynotes.html

 

Main IDF13 landing page:

https://www-ssl.intel.com/content/www/us/en/intel-developer-forum-idf/san-francisco/2013/idf-2013-san-francisco.html

 

What’s New at IDF13:

https://www-ssl.intel.com/content/www/us/en/intel-developer-forum-idf/san-francisco/2013/idf-2013-san-francisco-whats-new.html?

Ethernet never seems to stop evolving to meet the needs of the computing world.  First it was a shared medium; then computers needed more bandwidth, and multi-port bridges became switches.  Over time, Ethernet speeds increased by four orders of magnitude.  The protocol was updated for the telecom network, then for data centers.

 

Now it’s scaling down to meet the needs of microserver clusters.

 

Microserver clusters are boards with multiple low-power processors that can work together on computing tasks.  They are growing in popularity to serve a market that needs fast I/O but not necessarily the highest processing performance.

 

Through its many evolutions, Ethernet has gained the right combination of bandwidth, routing, and latency to be the networking foundation for the microserver cluster application.

 

Bandwidth: For certain workloads, congestion can occur and reduce system performance when processors are connected using 1GbE, so the preferred speed is 2.5GbE.  If you’ve never heard of 2.5GbE, it’s because it was derived from the XAUI spec but uses only a single XAUI lane.  The XAUI standard was created so that four lanes of XAUI could carry 10GbE signals from chip to chip over distances longer than the alternative allowed (which capped out at 7 cm).  XAUI specifies 3.125 Gbps per lane, which, after encoding overhead, provides a 2.5 Gbps full-duplex data path.  XAUI had its claim to fame in 2005, when it became a leading technology for improving backplane speeds in ATCA chassis designs.  By using a single lane of XAUI, it’s the perfect technology for a microserver cluster.
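
The 3.125-to-2.5 relationship is simply the lane's 8b/10b line coding overhead (8 data bits carried in every 10 line bits); a quick check in Python:

```python
# XAUI lane arithmetic: 8b/10b coding carries 8 data bits in every 10 line bits.
line_rate_gbps = 3.125          # signaling rate of one XAUI lane
data_rate_gbps = line_rate_gbps * 8 / 10
print(data_rate_gbps)           # 2.5 Gbps of payload per lane

# Four such lanes were bundled to carry 10GbE chip to chip:
print(4 * data_rate_gbps)       # 10.0 Gbps
```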

 

Density: Density and performance are key attributes for microserver clusters. Bandwidth is related to the total throughput of the switch in that the packet-forwarding engine must be robust enough to switch all ports at full speed.  For example, the new Intel® Ethernet Switch FM5224 has up to 64 ports of 2.5GbE plus another 80 Gbps of inbound/outbound bandwidth (either eight 10GbE ports or two 40GbE ports).  Thus the FM5224 packet-processing engine handles 360 million packets per second to provide non-blocking throughput.
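
Those two figures are consistent. Here is a minimal sanity check, assuming worst-case minimum-size Ethernet frames with preamble and inter-frame gap:

```python
# Sanity check: aggregate bandwidth vs. the quoted 360 Mpps forwarding rate.
downlink_gbps = 64 * 2.5        # 64 ports of 2.5GbE
uplink_gbps = 80                # eight 10GbE or two 40GbE ports
total_gbps = downlink_gbps + uplink_gbps          # 240 Gbps

# Worst case is minimum-size frames: 64B frame + 8B preamble + 12B gap = 84B.
bits_per_min_frame = 84 * 8
mpps = total_gbps * 1e9 / bits_per_min_frame / 1e6
print(f"{total_gbps} Gbps -> {mpps:.0f} Mpps")    # ~357 Mpps, i.e. roughly 360 Mpps
```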

 

Distributed Switching: Some proprietary microserver cluster technologies advocate Layer 2 switching only, perhaps under the assumption that the limited number of devices on a shelf eliminates the need for Layer 3 switching. But traffic will need to exit the shelf – maybe even to be processed by another shelf – so the shelf would depend on an outside top-of-rack (TOR) switch to route traffic between shelves or back out onto the Internet.  This would change the nature of a TOR switch, which today is a “pizza box” style switch with 48-72 ports.  With an average of 40 servers per shelf, a TOR would need 400 or more ports to connect all of the shelves.  Instead, Ethernet routing can be placed in every cluster (microserver shelf) to provide advanced features such as load balancing and network overlay tunneling while reducing the dependency on the TOR switch.

 

Latency: To be considered for high-performance data center applications, Ethernet vendors needed to reduce latency from microseconds to nanoseconds (Intel led this industry effort and is the low-latency leader at 400 ns).  That work is paying dividends in the microserver cluster.  Low latency means better performance with small packet transmissions and also with storage-related data transfers.  For certain high-performance workloads, the processors in microserver clusters must communicate constantly with each other, making low latency essential.

 

With the perfect combination of bandwidth, routing and latency, Ethernet is the right technology for microserver networks.  Check out our product page to take a look at the Intel Ethernet Switch FM5224 that is built just for this application.
