
The Data Stack

16 Posts authored by: Brian Yoshinaka

Ethernet marked its 40th birthday in 2013, capping a remarkable journey from a hand-drawn sketch to one of many competing networking protocols to the de facto networking standard for data centers and campus networks across the planet.

 

[Image: composite featuring the original hand-drawn Ethernet sketch]

 

In recent years, Ethernet has made the jump from computers and servers to consumer electronics (TVs, Blu-ray players, game consoles), and there are plenty of places you might be surprised to find the little standard that could: telecom infrastructure, in-vehicle entertainment systems and, heck, even the Space Shuttle.

 

Over the past couple of years, I've spent a fair amount of time blogging about Ethernet in the data center. With Embedded World coming up next week in Nuremberg, Germany, however, I thought I’d take some time to discuss one of those areas where you might not expect to find Ethernet – the machines and robots that make up the world of industrial automation.

 

It’s a great example of how adaptable Ethernet is and how it can be used in applications beyond the data center.

OK, for starters, what is industrial automation? Here’s a quick definition from PC Magazine:

 

Making products under the control of computers and programmable controllers. Manufacturing assembly lines as well as stand-alone machine tools (CNC machines) and robotic devices fall into this category.

 

In popular culture, this is where the Charlie Chaplin "Modern Times" image of a production line dissolves into that of the J.A.R.V.I.S.-powered robot that helped create Iron Man.

 

Over its long history, Ethernet has shown an amazing ability to accommodate expanded functionality. So, how has it evolved to make it a great choice for robotic automation?

 

The first step was to augment standard best-effort TCP/IP with industrial protocols that provide more predictable packet delivery and better performance.

 

These are needed because on a high-tech assembly line, each machine has a specific task (or tasks) that needs to be completed in a specific order. You've seen the videos of the conveyor belt moving along as jointed, robotic arms whir in and out doing their work, right? That's high-precision stuff.

With standard TCP/IP, packets can be delayed or arrive at a destination out of order, which is no problem for many network applications but causes havoc in industrial applications. So industrial Ethernet turns to purpose-built protocols such as PROFINET and Ethernet PowerLink to provide determinism. Intel has been working closely with leaders in industrial Ethernet to ensure that Intel® Ethernet Controllers bring outstanding performance to industrial applications.

 

PROFINET utilizes three protocols simultaneously, applying the right protocol depending on the application requirements. TCP/IP is used if the application needs a 100 ms response time, a real-time (RT) protocol kicks in for data needing 10 ms response times and, finally, the PROFINET isochronous real-time (IRT) protocol is used for data with cycle times of less than 1 ms. Want to see PROFINET in action? Check out this video showing a robot solving a Rubik’s cube from last year’s Embedded World.
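To make that tiering concrete, here is a minimal Python sketch of the selection logic described above. It is purely illustrative; the function and its thresholds simply mirror the response times mentioned in this post, not PROFINET's actual API or wire protocol.

```python
# Illustrative only: maps an application's required response time to the
# PROFINET traffic class described above. Thresholds follow the blog text,
# not an official specification.
def profinet_traffic_class(required_response_ms: float) -> str:
    if required_response_ms < 1.0:
        return "IRT"     # isochronous real-time, sub-millisecond cycle times
    elif required_response_ms <= 10.0:
        return "RT"      # real-time protocol for ~10 ms responses
    else:
        return "TCP/IP"  # standard best-effort traffic, ~100 ms is fine

for req in (0.5, 8, 150):
    print(f"{req} ms -> {profinet_traffic_class(req)}")
```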

 

Ethernet PowerLink uses a centralized Ethernet polling approach combined with time slicing to differentiate between data types and ensure that the most time-sensitive data gets to its destination. The PowerLink managing node is a central controller that initiates a data cycle by polling every node in the network. The poll features two data phases: an isochronous phase for all real-time data and an asynchronous phase for ad hoc data. Separating the two makes it easier to prioritize the real-time data and deliver it in less than 200 microseconds.
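Here is a rough Python sketch of that cycle with a hypothetical set of node names. It only illustrates the ordering of the isochronous and asynchronous phases, not real PowerLink framing or timing.

```python
# Toy model of one Ethernet PowerLink cycle: the managing node polls every
# controlled node for its real-time data first (isochronous phase), then
# services any ad hoc requests (asynchronous phase). Purely illustrative.
def powerlink_cycle(nodes, async_queue):
    isochronous_results = {}
    for node in nodes:                      # isochronous phase: one poll per node
        isochronous_results[node] = f"real-time data from {node}"
    async_results = []
    while async_queue:                      # asynchronous phase: ad hoc traffic
        async_results.append(f"handled {async_queue.pop(0)}")
    return isochronous_results, async_results

rt, ad_hoc = powerlink_cycle(["drive-1", "drive-2", "io-block"], ["firmware query"])
print(rt)
print(ad_hoc)
```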

 

The availability of an isochronous data channel in both of these protocols helps to eliminate jitter – the variation in packet delivery times caused by congestion and other factors. With robots depending on each data packet to know what to do next, jitter can upset the entire operation.
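If you want to see what jitter looks like numerically, a quick way is to measure how much packet inter-arrival times wander from their average. The sketch below does exactly that for a made-up set of arrival timestamps.

```python
# Jitter here = standard deviation of inter-arrival times (milliseconds).
# The timestamps are invented for illustration.
import statistics

arrivals_ms = [0.0, 1.02, 2.01, 3.25, 4.02, 5.01]             # packet arrival times
gaps = [b - a for a, b in zip(arrivals_ms, arrivals_ms[1:])]  # inter-arrival deltas
print(f"mean gap: {statistics.mean(gaps):.3f} ms")
print(f"jitter (std dev): {statistics.stdev(gaps):.3f} ms")
```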

 

Maybe industrial Ethernet seems boring, but I have to admit that having robots on the network is cool. Industrial Ethernet is also a foundational element of the emerging "Internet of Things," which lets machines and non-traditional connected devices leverage the 'Net for communication. To find out more about Intel's technology for industrial Ethernet and the Internet of Things, see our intelligent systems page.

Several months ago, our flagship 10 Gigabit Ethernet (10GbE) server adapters underwent a big change. We retired the Intel® Ethernet 10 Gigabit Server Adapter family name and replaced it with Intel® Ethernet 10 Gigabit Converged Network Adapter family. The adapters themselves didn’t change a bit; they’re the same reliable products that have led us to the top spot among 10GbE adapter suppliers[1].

 

Why did we rename such a successful product line? Quite simply, Converged Network Adapter (CNA) is a much more accurate description of our 10GbE adapters and the features they offer. As IT organizations upgrade their data center networks, we want to make sure they know that these Intel Ethernet adapters meet not only their Ethernet networking needs, but also their converged networking needs.

 

For you non-networking folks, a CNA is a 10GbE adapter that supports standard LAN traffic as well as Fibre Channel over Ethernet (FCoE) traffic and iSCSI traffic. Traditional LANs and Fibre Channel (FC) storage area networks (SANs) use completely separate network infrastructures, requiring storage-specific network adapters, switches, and cabling. Converged or unified networks allow LAN and SAN traffic to use or even share a 10GbE fabric, greatly simplifying the infrastructure. CNAs connect servers to these converged networks and eliminate the need for separate, dedicated storage network adapters.

 

So with all of these great benefits, why didn't we call our adapters CNAs from the start? Answering that one requires a history lesson.

 

Work on the FCoE standard began in 2007, and the first CNAs appeared several months later. For these early designs, the traditional storage adapter vendors modified their FC host bus adapter (HBA) designs to include the Intel® 82598 10 Gigabit Ethernet Controller alongside their proprietary FC processors. With the FCoE standard still in draft form and a thin ecosystem, these first-generation adapters were little more than proof of concept vehicles.

 

Meanwhile, we at Intel were working to enable FCoE on the Intel 82598 10 Gigabit Ethernet Controller – the same controller that provided the Ethernet functionality for those early CNAs. We realized, however, that enabling FCoE on that controller (and adapters based on it) would require releasing our own FCoE software stack. Introducing yet another proprietary FCoE solution into the market would have made life harder for IT, and that’s the last thing we wanted to do. So we launched our new adapters as 10GbE adapters, not CNAs, and set about making FCoE easier for IT to deploy.

 

In 2008, Intel founded the Open FCoE project and released our FCoE initiator code to the open source community. Our goal was to get the Open FCoE initiator integrated into the Linux* kernel and help accelerate the adoption of FCoE. Any adapter vendor could use that native support to develop a CNA, giving customers more hardware options and allowing them to use a common set of OS-based management tools. The industry had gone through a similar process with the successful integration of iSCSI, another storage protocol, in every major OS and hypervisor.

 

In March 2009, after modifications from the Linux community, the Open FCoE initiator was integrated into version 2.6.29 of the Linux kernel and soon found its way into major distributions, including Red Hat and SLES.

 

In early 2011, we announced that our newest 10 Gigabit controller, the Intel® 82599 10 Gigabit Ethernet Controller, and the integrated FCoE initiators in Linux and Windows had been qualified by EMC, Cisco, and NetApp. The Intel® Ethernet Server Adapter X520 family, which is powered by the Intel 82599 10 Gigabit Ethernet Controller, was included in this announcement.

 

And in August 2011, VMware announced Open FCoE integration as part of the vSphere 5.0 launch. With that launch, the Intel Ethernet Server Adapter X520 and Intel 82599 10 Gigabit Ethernet Controller had FCoE support in every major operating system and hypervisor. We felt it was important for customers to understand that Intel Ethernet 10 Gigabit Server Adapters were full CNAs, so later that year, we decided to rename them, and the Intel Ethernet Converged Network Adapter family was born.

 

Our 10GbE CNAs are the industry’s top-selling 10GbE adapters. Earlier this year, we expanded the family by adding the Intel® Ethernet Converged Network Adapter X540, our fourth-generation 10GBASE-T adapter. Through our Open FCoE efforts, we have given the industry another option for enabling FCoE – an option that doesn’t depend on proprietary hardware and software, can use standard OS-based tools, and scales with advancements in server architectures.

 

It has been a long journey getting to this point, but sometimes it takes time to do things right.

 

If you’d like to learn more about the advantages of Open-FCoE-based solutions, check out this blog post on Open FCoE in VMware vSphere.



[1] Crehan Research, Server-class Adapter and LOM, 2Q12

Back in March, I introduced you to the new I/O and networking-related advancements in Intel® Xeon® Processor E5 family-based server platforms: Intel® Integrated I/O, Intel® Data Direct I/O Technology (Intel® DDIO), and the Intel® Ethernet Controller X540. I’ve since provided a closer look at the Intel Ethernet Controller X540 and how integrating 10GBASE-T connectivity in server motherboards will help drive 10 Gigabit Ethernet adoption.

 

But what happens to network data inside the server, before or after it’s on the network? That’s where Intel Integrated I/O and Intel DDIO come in. Intel designed these architectural changes to the Intel® Xeon® processor specifically to improve I/O performance. They are significant advancements, and it’s worth taking a closer look to understand how they work and why they’re important.

 

Intel Integrated I/O, a feature of all Intel Xeon processor E5-based servers, consists of three features:

  • An integrated I/O hub
  • Support for PCI Express* 3.0
  • Intel DDIO

 

The graphic below shows the architectural differences between a previous-generation Intel Xeon processor-based server and a server based on the Intel Xeon processor E5 family.

 

[Diagram: I/O data flow in a previous-generation Intel Xeon processor-based server (left) vs. an Intel Xeon processor E5-based server (right)]

 

In older systems, as shown on the left, a discrete I/O hub managed communications between the network adapter and the server processor. With Intel Integrated I/O, the I/O hub has been integrated into the CPU, making for a faster trip between the network adapter and the processor. Tests performed by Intel have shown a reduction in latency of up to 30 percent for these new systems.

 

The integrated I/O hub includes support for PCI Express (PCIe*) 3.0, the latest generation of the PCIe specification. The PCIe bus is the data pathway used by I/O devices, such as network adapters, to communicate with the rest of the system, and PCIe 3.0 delivers effectively twice the bandwidth of PCIe 2.0. This greater bandwidth will be important when quad-port 10GbE adapters start finding their way into servers in the coming months.
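The "effectively twice" claim comes from two changes: a faster signaling rate and a more efficient encoding scheme. A quick back-of-the-envelope calculation using the standard published PCIe figures (rounded) shows how they combine.

```python
# Per-lane throughput: transfer rate (GT/s) * encoding efficiency / 8 bits per byte.
pcie2_gbps_per_lane = 5.0 * (8 / 10)     # PCIe 2.0: 5 GT/s with 8b/10b encoding
pcie3_gbps_per_lane = 8.0 * (128 / 130)  # PCIe 3.0: 8 GT/s with 128b/130b encoding

for lanes in (8, 16):
    gen2 = pcie2_gbps_per_lane * lanes / 8   # GB/s
    gen3 = pcie3_gbps_per_lane * lanes / 8   # GB/s
    print(f"x{lanes}: PCIe 2.0 ~{gen2:.1f} GB/s vs PCIe 3.0 ~{gen3:.1f} GB/s")
```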

 

The remaining component of Intel Integrated I/O is Intel DDIO. It’s another significant architectural change to the Intel Xeon processor, and perhaps the best way to explain its benefits is to again compare an Intel Xeon processor E5-based platform to a previous-generation system.

 

[Diagram: I/O data copies in a previous-generation server (left) vs. an Intel Xeon processor E5-based server with Intel DDIO (right)]

In those older systems, as shown on the left, data coming into the system via the network adapter was copied into memory and then written to processor cache when the processor was ready to use it. Outbound data took a similar journey through memory before reaching the network adapter.

 

Not very efficient, but that model originated when CPU cache allotments were relatively small and network speeds weren’t fast enough to be affected by these potential bottlenecks in the server. Today, however, we’re in a much different place, with growing 10GbE adoption and Intel Xeon processors that support caches of up to 20MB.

 

With Intel DDIO, Intel has re-architected the processor to allow Intel Ethernet adapters and controllers to talk directly to cache, eliminating all of those initial memory copies. (Once the cache is filled, least-recently-used data will be retired to system memory.) You can see the trips data takes in and out of the system on the right side of the graphic above. Much simpler, isn't it?
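To make the difference tangible, here is a toy Python model that just counts the memory round trips in each path. The step lists are simplified from the description above and are not a cycle-accurate model of the hardware.

```python
# Simplified data paths for an inbound packet. The "DRAM" steps are the extra
# memory copies that Intel DDIO eliminates by letting the adapter write
# straight into processor cache.
classic_path = ["NIC -> DRAM", "DRAM -> CPU cache", "CPU processes packet"]
ddio_path    = ["NIC -> CPU cache", "CPU processes packet"]

def dram_touches(path):
    return sum("DRAM" in step for step in path)

print("classic path DRAM touches:", dram_touches(classic_path))  # 2
print("DDIO path DRAM touches:   ", dram_touches(ddio_path))     # 0
```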

 

What does all of this mean for your servers? In a nutshell, better performance. In our labs, we’ve seen a 3x improvement in I/O bandwidth headroom within a single Intel Xeon processor when compared to previous-generation systems. And with fewer trips to memory and a direct path to cache, you’ll see reduced power utilization and lower latency. Specific performance improvements will vary based on application type.

 

If you’d like to learn more, we’ve put together some great resources:

 

For the latest and greatest, follow us on Twitter: @IntelEthernet

The summer IT industry event season has kicked into high gear, and this week I'm at Cisco Live! 2012 in San Diego. Networking products and technologies are, of course, one of the primary focus areas of the show, so it's no surprise that Cisco's announcement earlier this week included a number of new networking products. One particularly significant product is the Nexus 5596T, a new data center switch and the first 10GBASE-T member of the Nexus 5000 family.

 

If you’ve read some of my previous posts, you know that 10GBASE-T is 10 Gigabit Ethernet over twisted-pair copper cabling – the stuff that’s installed in nearly every data center today – and that it uses the familiar RJ-45 connector. 10GBASE-T has been a big topic for us here at Intel, especially with the launch of the Intel® Ethernet Controller X540 in March.

 

Need proof? Here’s a word cloud of my posts for the last year. Other than the common words like “Intel,” “Ethernet,” and “10GbE,” the term I used the most was “10GBASE-T.” See it there on the left?

 

[Word cloud: A Year of Ethernet Blog Posts]

 

But I digress. (And I drop in the occasional overused phrase.)

 

Kaartik Viswanath, product manager for the Nexus 5000 family, was kind enough to take a few minutes to answer some questions about the new switch and Cisco’s views on 10GBASE-T.

 

BY: Kaartik, thanks for taking the time to answer a few questions today. Tell me about the Nexus 5596T switch and 10GBASE-T module that Cisco announced yesterday.


KV: Sure. We’re very excited about the new Nexus 5596T switch. It’s the first 10GBASE-T member of the Nexus 5000 family, and it’s coming at the perfect time, with 10GBASE-T LOM (LAN on motherboard) connections now being integrated onto mainstream server motherboards. LOM integration will help drive 10GbE adoption, and all those new 10GBASE-T ports need a high-performance, high-port-density switch to connect to. The Nexus 5596T has 32 fixed 10GBASE-T ports, and through the addition of the new 12-port 10GBASE-T Cisco Generic Expansion Module (GEM), it can support up to 68 total 10GBASE-T ports in a two-RU (rack unit) design. Plus, customers can deploy any of the existing GEMs in any of the Nexus 5596T’s three GEM slots.

 

The Nexus 5596T also includes 16 fixed SFP+ ports, which customers can use to connect to aggregation switches, servers, or Nexus 2000 Fabric Extenders using optical fiber or direct attach copper connections. With the Nexus 5596T switch, our customers have the flexibility to deploy both 1/10GBASE-T Ethernet on Copper and FC/FCoE/Ethernet on SFP+ ports on the same chassis.

 

BY: Are you hearing a lot of interest in 10GBASE-T from your customers?


KV: Yes, definitely, and I think there are a couple of major reasons for that. First, 10GBASE-T offers the easiest path for folks looking to migrate from One Gigabit Ethernet (GbE). 10GBASE-T uses the same twisted-pair copper cabling and RJ-45 connectors as existing GbE networks, and it’s backwards-compatible with all the 1000BASE-T products out there today. That means you can replace your existing 1000BASE-T switch with a Nexus 5596T and connect to both 10GBASE-T and 1000BASE-T server connections. And as you’re ready, you can upgrade servers to 10GBASE-T.

 

I think the other big reason 10GBASE-T is so appealing is the deployment flexibility it offers; 100 meters of reach is sufficient for the vast majority of data center deployments, whether it’s top-of-rack, middle-of-row, or end-of-row.  Plus, twisted-pair copper cabling is much more cost-effective than the fiber or direct-attach copper cabling that’s used in the majority of 10GbE deployments today.

 

BY: Cisco and Intel both support multiple 10GbE interfaces in their products. How do you see 10GBASE-T fitting into the mix?


KV: We’ll support whichever interfaces our customers want to use. However, there are some general guidelines that most folks seem to be following. For longer distances – over 100 meters – SFP+ optical connections are really the only choice, given their longer reach.  But fiber costs really don’t lend themselves to broad deployment. Today, most 10GbE deployments use the top of rack model, where servers connect to an in-rack switch using SFP+ direct attach copper (DAC) connections. DAC reach is only seven meters, but that’s plenty for any intra-rack connections.

 

10GBASE-T hits sort of a sweet spot because of its distance capabilities. It can connect switches to servers in top of rack deployments, with cables that are less expensive than SFP+ DAC, or it can be used for the longer runs where fiber is being used today – up to 100 meters, of course.

 

There are cases where SFP+ has some advantages, particularly for latency-sensitive applications or if the customers are sensitive to power consumption, but when it comes to deployment flexibility, costs, and ease of implementation, 10GBASE-T is well-positioned as the interface of choice for broad adoption.

 

BY: How about Fibre Channel over Ethernet? Does the Nexus 5596T switch support FCoE over 10GBASE-T?


KV: Great question. FCoE is a key ingredient in Cisco’s unified fabric vision, and it’s supported in our 10 Gigabit Nexus and UCS product lines. The Nexus 5596T hardware is FCoE-capable like all of our Nexus 5000 switches, and we’re working on FCoE characterization in our labs. We’ve been working closely with Intel to verify FCoE interoperability with the Intel Ethernet Controller X540.

 

There’s been a fair amount of discussion in the industry around whether 10GBASE-T is a suitable fabric for FCoE. Our collaboration with our ecosystem partners, including Intel, network cable vendors, and storage vendors, will help ensure there aren’t any issues before we enable the feature on the Nexus 5596T. Assuming everything goes well, we’ll also enable FCoE over 10GBASE-T in our 12-port 10GBASE-T GEM Module as well as our fabric extender line with the upcoming Nexus 2232TM-E Fabric Extender.

 

BY: Kaartik, a couple of quick final questions for you. Before you joined the Nexus team, you worked on the campus network side of Cisco, correct?


KV: Yes, that’s right. I was the Product Manager in the Unified Access Business Unit, managing the Fixed 10/100 business.

 

BY: How is the data center networking world different than the campus networking world?


KV: One thing that stands out is the faster speeds of the interconnects in data center switching products. In campus networks, the vast majority of connections are at Gigabit or Fast Ethernet (100Mbps) speeds. Our Nexus product line, by contrast, has switches with 96 10 Gigabit ports and 40 Gigabit and 100 Gigabit uplinks. So the speed of individual links is greater, but a campus network typically connects many more machines than a data center network, as there are more client PCs than there are servers.

 

Another big difference is the technologies that are supported in each class of product. There’s certainly some overlap, but technologies like Fibre Channel over Ethernet, Data Center Bridging, I/O virtualization – those are mostly confined to the data center world. Similarly, technologies like Power over Ethernet Plus and Universal Power over Ethernet today are more prevalent in campus access type of deployments and are not so common in the data center world.

 

BY: Thanks for taking the time to chat today, Kaartik. We’re looking forward to continuing our work with Cisco.


KV: No problem. I’m looking forward to it, too.

 

 

We at Intel have been talking about 10GBASE-T for a long time now, and it’s great to see the ecosystem continuing to grow with new products like the Nexus 5596T switch. I’d like to thank Kaartik for taking the time to answer these questions for us.

 

For the latest, follow us on Twitter: @IntelEthernet.

When I last discussed network technologies, I said the launch of the Intel® Ethernet Controller X540 ushered in the age of 10 Gigabit Ethernet (10GbE) LAN on motherboard (LOM). That might sound a bit grandiose, but to networking and IT folks who have been anticipating 10GbE LOM for several years, this is an important milestone.

 

LOM integration is one of the keys to bringing a new generation of Ethernet to the masses, because it means customers no longer need to buy an add-in adapter to get a faster network connection. This, of course, leads to greater and accelerated adoption of the new technology, placing it on track to eventually overtake its predecessor. We saw this play out with Fast Ethernet (100 megabits per second) and Gigabit Ethernet (GbE), and we’ll see the same thing with 10GbE.

 

But if 10GbE LOM is so important, why did it take us so long to get here? And what do the Intel Ethernet Controller X540 and 10GBASE-T bring to the show that wasn't here before?

 

Prior to the launch of the Intel Ethernet Controller X540, 10GBASE-T solutions required two chips: a media access controller (MAC) and a physical layer controller (PHY). Adapters based on these two-chip designs were notoriously power hungry, with a single-port card consuming nearly the 25 watt maximum allowed by the PCI Express* specification. These early products were also expensive, costing around $1,000 per port. With power requirements and costs like those, no server vendor was going to include 10GBASE-T LOM. Newer generations of 10GBASE-T products retained two-chip designs and power needs that, while lower, still weren’t suitable for LOM.

 

[Photo: A first-generation 10GBASE-T adapter. Note the cooling fan.]

 

 

The Intel Ethernet Controller X540 is the first 10GBASE-T product to fully integrate the MAC and PHY in a single-chip package. As a result, it’s the first 10GBASE-T controller that has the proper cost, power, and size characteristics for LOM implementation. Each of its two ports draws a quarter of the power required by first-generation 10GBASE-T adapters, and its 25mmx25mm package is cost-effective and requires minimal real estate. Add advanced I/O virtualization, storage over Ethernet (including NFS, iSCSI, and Fibre Channel over Ethernet), and support for I/O enhancements on the new Intel® Xeon® processor E5 family, and you can see why we’re excited about this product.
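Using the figures in this post (a first-generation single-port card drawing close to 25 watts, and each X540 port drawing "a quarter of the power"), a rough estimate of the adapter-level savings looks like this. These are approximations derived from the numbers above, not measured specifications.

```python
# Back-of-the-envelope power comparison based on the figures cited in this post.
first_gen_watts_per_port = 25.0                       # early two-chip single-port adapter
x540_watts_per_port = first_gen_watts_per_port / 4    # "a quarter of the power"

dual_port_first_gen = 2 * first_gen_watts_per_port    # two single-port cards
dual_port_x540 = 2 * x540_watts_per_port              # one dual-port controller

print(f"Two first-gen ports: ~{dual_port_first_gen:.0f} W")
print(f"Two X540 ports:      ~{dual_port_x540:.1f} W")
```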

 

[Photo: The Intel Ethernet Controller X540]


 

But that’s just part of the story. Let’s talk about 10GBASE-T, the 10GbE standard supported by the Intel Ethernet Controller X540; it’s going to play a major role in the growth of 10GbE.

 

Last year I described the various 10GbE interface standards. They all have their strong points, but limitations such as reach or cost have prevented each from achieving mainstream status. 10GBASE-T hits a sweet spot, making it a logical choice for broad 10GbE deployments:

 

  • 10GBASE-T supports the twisted-pair copper cabling and RJ-45 connectors used in most data centers today, meaning expensive “rip and replace” infrastructure upgrades aren’t necessary.
  • It’s compatible with existing GbE equipment, providing a simple upgrade path to 10GbE. You can connect 10GBASE-T-equipped servers to your current GbE network, and they’ll connect at GbE speeds. When you’re ready to upgrade to 10GbE, you can replace your GbE switch with a 10GBASE-T switch, and your servers will connect at the higher speed.
  • It supports distances of up to 100 meters, giving it the flexibility required for various data center deployment models, including top of rack, where servers connect to a switch in the same rack, and middle of row or end of row, where servers connect to switches some distance away.

 

So with 10GBASE-T LOM, are we seeing the start of something big? It sure looks that way. Crehan Research projects 10GbE adapter and LOM port shipments will rise to nearly 40 million in 2015, compared to five and a quarter million in 2011[1]. And 10GBASE-T will account for over half of those 40 million ports.
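For context, a quick compound-growth check of that forecast (treating 2011 to 2015 as four growth years) works out to roughly a 65 to 70 percent annual growth rate.

```python
# Implied compound annual growth rate for 10GbE adapter/LOM port shipments,
# using the forecast figures cited above.
ports_2011 = 5.25e6
ports_2015 = 40e6
years = 2015 - 2011

cagr = (ports_2015 / ports_2011) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.0%}")   # about 66%
```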

 

Impressive numbers, aren’t they? We certainly think so.

 

2012 is going to be a big year for 10GbE. 10GBASE-T LOM is getting us off to a great start, and you can expect to see a new generation of 10GBASE-T switches to connect servers to the network as the ecosystem continues to grow.

 

If you’d like to learn more about advancements in 10GBASE-T product design, check out this article in EE Times (see page 38), penned by Intel architect and technical chair for The Ethernet Alliance 10GBASE-T Subcommittee, Dave Chalupsky.  Or listen to Brian Johnson discuss the latest Intel Ethernet Technologies on a recent episode of Intel Chip Chat below.

 

   [Podcast: A Consolidated Fabric for the Data Center – Intel® Chip Chat episode 178]

 

 

For the latest and greatest, follow us on Twitter: @IntelEthernet.

 

 

[1] Crehan Research Server-class Adapter & LOM/Controllers Long-range Forecast, 1/31/2012

Coming into today’s announcement of the Intel® Xeon® processor E5-2600/1600 product family, Intel has been sharing some striking statistics for computing trends that point to a need to improve server networking, bandwidth, and overall performance.

 

According to Intel projections, by 2015, the number of networked devices will be in the neighborhood of 15 billion, with more than 3 billion users. Worldwide mobile data traffic alone is expected to have increased 18-fold by that time, driven by a jump in streamed content, mobile connections, more capable devices, faster mobile speeds and the proliferation of mobile video, according to the recently released Cisco Visual Networking Index (VNI) Global Mobile Data Traffic Forecast for 2011 to 2016.

 

All of this means IT departments will need to increase data center bandwidth and enable faster data flow to servers if they want to meet the needs of demanding applications and avoid bandwidth meltdowns. Fortunately, a number of I/O innovations are here to help with that endeavor.

 

As part of the Intel Xeon processor E5 product family launch, Intel announced two significant I/O innovations: the Intel® Ethernet Controller X540 and an important server platform feature, Intel® Data Direct I/O Technology.

 

The Intel Ethernet Controller X540 is the industry’s first fully integrated 10GBASE-T controller and was designed specifically for low-cost, low-power 10 Gigabit Ethernet (10GbE) LAN on motherboard (LOM) and converged network adapter (CNA) designs. I’ve been dropping hints about this product for a while now, and I’m thrilled to say that 10 Gigabit LOM is finally here.

 

[Photo: The Intel Ethernet Controller X540]

 

 

With the Intel Ethernet Controller X540, Intel is delivering on its commitment to drive down the costs of 10GbE. We’ve ditched two-chip 10GBASE-T designs of the past in favor of integrating the media access controller (MAC) and physical layer (PHY) controller into a single chip. The result is a dual-port 10GBASE-T controller that’s not only cost-effective, but also energy-efficient and small enough to be included on mainstream server motherboards. Several server OEMs are already lined up to offer Intel Ethernet Controller X540-based LOM connections for their Intel Xeon processor E5-2600 product family-based servers. This new controller also powers the Intel® Ethernet Converged Network Adapter X540, a PCI Express* adapter for rack and tower servers.

 

[Photo: The Intel Ethernet Converged Network Adapter X540-T2]

 

 

10GBASE-T solutions based on the Intel Ethernet Controller X540 are backward-compatible with Gigabit Ethernet networks, giving customers an easy upgrade path to 10GbE by allowing them to deploy 10GBASE-T adapters in servers today and add switches when they’re ready. Another major benefit of 10GBASE-T is its ability to use the cost-effective, twisted-pair copper cabling that most data centers use today, meaning an expensive cabling upgrade won’t be required. And 10GBASE-T’s support for cable distances of up to 100 meters provides flexibility for top-of-rack or end-of-row deployments.

 

The Intel Ethernet Controller X540, like other members of the 10 Gigabit Intel Ethernet product family, supports advanced I/O virtualization and unified networking, including NFS, iSCSI and Fibre Channel over Ethernet (FCoE). Intel Ethernet controllers and converged network adapters are also optimized for new I/O advancements in the Intel Xeon processor E5 product family. One key related feature that should generate some attention is Intel® Data Direct I/O Technology (Intel® DDIO).

 

Intel Data Direct I/O Technology

 

Intel DDIO allows Intel Ethernet controllers and adapters to talk directly to processor cache, avoiding the numerous memory transactions required by the previous-generation systems. Eliminating these trips to and from system memory delivers improvements in server bandwidth, power consumption, and latency.

 

Intel DDIO is a key component of Intel® Integrated I/O, which integrates the PCI Express* controller directly into the processor to further reduce I/O latency. The level of improvement enabled by these technologies when combined with Intel Ethernet 10 Gigabit Controllers can be pretty extraordinary. Together they can deliver more than three times the bandwidth of a previous generation server, and tests in Intel labs have shown even higher peak performance levels[1].

 

There’s much more to say about the Intel Ethernet Controller X540 and Intel DDIO, so I’m going to spend my next couple of posts doing just that. Next week I’ll take a closer look at the Intel Ethernet Controller X540 and 10GBASE-T, and the following week, I will dig deeper into Intel DDIO and Intel Integrated I/O.

 

For the latest updates, follow us on Twitter: @IntelEthernet

 


 


 

[1] (I/O Bandwidth) Source: Intel internal measurements of maximum achievable I/O R/W bandwidth (512B transactions, 50% reads, 50% writes) comparing Intel® Xeon® processor E5-2680 based platform with 64 lanes of PCIe* 3.0 (66 GB/s) vs. Intel® Xeon® processor X5670 based platform with 32 lanes of PCIe* 2.0 (18 GB/s). Baseline Configuration: Green City system with two Intel® Xeon® processor X5670 (2.93 GHz, 6C), 24GB memory @ 1333, 4 x8 Intel internal PCIe* 2.0 test cards. New Configuration: Rose City system with two Intel® Xeon processor E5-2680 (2.7GHz, 8C), 64GB memory @1600 MHz, 2 x16 Intel internal PCIe* 3.0 test cards on each node (all traffic sent to local nodes).

If you’ve read any of my previous posts, it should be pretty clear that I think 10 Gigabit Ethernet (10GbE) is where the action is today. It’s growing (over one million server ports shipped in each quarter of 2011, and an estimated 90 percent growth vs. 2010(1)), big things are happening (you’ll see 10GBASE-T LAN on Motherboard connections soon), and 10GbE adapter ports are projected to outship GbE ports in the data center in 2014(2). In a recent article, Network World called 10GbE “perhaps the hottest growth segment of data center networking.” I agree, and clearly, there’s a lot to talk about when it comes to 10GbE.

 

If you follow networking, however, you’ve probably heard some discussion of 40 Gigabit Ethernet (40GbE) and 100 Gigabit Ethernet (100GbE). Those are some big numbers. Are 40GbE and 100GbE just hype or is there a real need for this much bandwidth?

 

[For you wonks, the IEEE 802.3ba standard, which includes both 40 and 100GbE, was ratified in June 2010 and marked the first time two Ethernet speeds were defined in a single standard.]


For starters, let’s take a simplified (or maybe simplistic) look at a traditional data center architecture.

 

In a typical data center, servers (often dozens of them) connect to an access switch (or multiple switches for redundancy purposes), sometimes referred to as an “edge” switch, which resides in the lowest layer in a multi-tiered switching environment. Access switches, in turn, connect to upstream switches, typically called “aggregation” or “distribution” switches. These switches literally aggregate traffic from multiple access switches. Uplinks from an access switch to an aggregation switch often operate at a higher speed than the access switch’s base ports (the ones connecting to servers). This higher rate of speed requires fewer cables to connect the two switches, simplifying connectivity.
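One way to see why faster uplinks matter is oversubscription, the ratio of downstream server bandwidth to upstream uplink bandwidth on an access switch. The port counts below are hypothetical, but the arithmetic is the kind a network designer would run.

```python
# Hypothetical access switch: 48 servers at 10GbE down, four 40GbE uplinks up.
def oversubscription(server_ports, server_speed_gbps, uplinks, uplink_speed_gbps):
    downstream = server_ports * server_speed_gbps
    upstream = uplinks * uplink_speed_gbps
    return downstream / upstream

ratio = oversubscription(server_ports=48, server_speed_gbps=10,
                         uplinks=4, uplink_speed_gbps=40)
print(f"Oversubscription: {ratio:.1f}:1")   # 3.0:1 in this example
```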

 

Similarly, connections from an aggregation switch to a core switch typically require the fastest available speeds. The core switch is the final aggregation point in the network and ensures traffic can get from one point to another as quickly as possible. You can think of it as the nerve center of a network.

 

These switch-to-switch links, access to aggregation and aggregation to core, are where we’ll see the bulk of 40 and 100GbE used in the near and medium term. Integrated 10GBASE-T connections on the next generation of Intel® Xeon® processor-based servers will bring 10GbE to the mainstream, and that means we’ll see many more servers connecting to the network using 10GbE. This, of course, will require more 10GbE switch ports, and getting all that traffic to the aggregation switches is going to need some big pipes – 40GbE and 100GbE pipes. Recent product announcements regarding cloud ready switches and product launches around cloud networking from companies such as Cisco are addressing these bandwidth needs.

 

And how about 40 or 100GbE server connectivity? The transition to 10GbE has been gradual (the original 802.3ae standard was published in 2002), but 10GbE is going mainstream now –  we’re even seeing systems shipping with up to four 10GbE ports. Clearly, servers need their bandwidth, and those needs will continue to grow. We anticipate a faster transition from 10GbE to 40GbE than we saw with GbE to 10GbE, and you’ll likely see 40GbE server connectivity adoption start in the next few years, coinciding with the next major server refresh.

 

40 and 100GbE are definitely for real, especially for infrastructure connectivity, and they’ll have their day. But like I said at the top, 10GbE is where the action is today – bandwidth needs are increasing, costs are coming down, and customers can choose from multiple 10GbE interface options to meet their needs. And stay tuned – there’s lots of good stuff coming this year.

 

For the latest information, follow us on Twitter: @IntelEthernet

 

 

 

1) Crehan Research: Server-class Adapter and LOM Market, November 2011

2) Crehan Research: Server-class Adapter and LOM – Long-Range Forecast, December 2011

*Please note: a version of this blog first appeared as an Intel industry perspective on Data Center Knowledge as Tips for Simplifying Your Cloud Network.

 

 

"Ethernet is the backbone of the Cloud."

 

 

Bold statement? Not at all. Any data center, cloud or otherwise, depends on its Ethernet network to allow servers, storage systems, and other devices to talk to each other. No network means no data center. Today, as IT departments prepare to deploy internal cloud environments, it’s important for them to consider how network infrastructure choices will impact their cloud’s ability to meet its service level agreements (SLAs). Terms commonly used to describe cloud computing capabilities, such as agility, flexibility, and scalability, should absolutely apply to the underlying network as well.

 

With that in mind, I’d like to take a look at some recommendations for simplifying a private cloud network. You can consider this post a sort of CliffsNotes* version of a white paper we completed recently; you’ll get a basic idea of what’s going on, but you’ll need to read the full piece to get all the details. It’s a great paper, and I recommend reading it.

 

Consolidate Ports and Cables


Most cloud environments are heavily virtualized, and virtualization has been a big driver of increasing server bandwidth needs. Today it’s common to see virtualized servers sporting eight or more Gigabit Ethernet (GbE) ports. That, of course, means a lot of cabling, network adapters, and switch ports. Consolidating the traffic of those GbE connections onto just a couple of 10 Gigabit Ethernet (10GbE) connections simplifies network connectivity while lowering equipment costs, reducing the number of possible failure points, and increasing the total amount of bandwidth available to the server.
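The arithmetic behind that consolidation is straightforward; here is a minimal sketch with hypothetical port counts (eight GbE ports consolidated onto two 10GbE ports).

```python
# Comparing an eight-port GbE server to a dual-port 10GbE server.
gbe_ports, gbe_speed = 8, 1          # Gbps each
tengbe_ports, tengbe_speed = 2, 10   # Gbps each

print("Cables/adapter ports before:", gbe_ports, "-> after:", tengbe_ports)
print("Total bandwidth before:", gbe_ports * gbe_speed, "Gbps",
      "-> after:", tengbe_ports * tengbe_speed, "Gbps")
```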

 

 

[Diagram: With 10 Gigabit Ethernet, you can consolidate this multi-port GbE configuration . . .]

 

[Diagram: . . . down to this 10GbE configuration.]

 

 

Converge Data and Storage onto Ethernet Fabrics


10GbE’s support for storage technologies, such as iSCSI and Fibre Channel over Ethernet (FCoE), takes network consolidation a step further by converging storage traffic onto Ethernet. Doing so eliminates the need for storage-specific server adapters and infrastructure equipment. IT organizations can combine LAN and SAN traffic onto a single network or maintain a separate Ethernet-based storage network. Either way, they’ve made it easier and more cost-effective to connect servers to network storage systems, reduced equipment costs, and increased network simplicity.

 

Maximize I/O Virtualization Performance and Flexibility


Once you have a 10GbE unified network connecting your cloud resources, you need to make sure you’re using those big pipes effectively. Physical servers can host many virtual machines (VMs), and it’s important to make sure bandwidth is allocated and balanced properly between those VMs. There are different methods for dividing a 10GbE port into smaller, virtual pipes, but they’re not all created equal. Some methods allow these virtual functions to scale and use the available bandwidth of the 10GbE connection as needed, while others assign static bandwidth amounts per virtual function, limiting elasticity and leaving unused capacity in critical situations.
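Here is a simplified Python illustration of the difference between static partitioning and demand-based sharing of a 10GbE port across VMs. The allocation policies and the demand numbers are invented for illustration and don't represent any specific product's algorithm.

```python
# Divide a 10 Gbps port among VMs, statically vs. in proportion to demand.
PORT_GBPS = 10.0

def static_split(demands):
    share = PORT_GBPS / len(demands)                 # fixed slice per VM
    return {vm: min(d, share) for vm, d in demands.items()}

def demand_based_split(demands):
    total = sum(demands.values())
    scale = min(1.0, PORT_GBPS / total)              # only throttle when oversubscribed
    return {vm: d * scale for vm, d in demands.items()}

demands = {"vm1": 6.0, "vm2": 1.0, "vm3": 0.5}       # Gbps each VM wants right now
print("static:      ", static_split(demands))        # vm1 capped at ~3.3 Gbps
print("demand-based:", demand_based_split(demands))  # vm1 gets its full 6 Gbps
```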

 

Enable a Solution That Works with Multiple Hypervisors


It’s likely that most cloud deployments will consist of hardware and software, including hypervisors, from multiple vendors. Different hypervisors take different approaches to I/O virtualization, so it’s important that network solutions optimize I/O performance for each of those software platforms; inconsistent throughput in a heterogeneous environment could result in bottlenecks that impact the delivery of services. Intel® Virtualization Technology for Connectivity, built into Intel Ethernet server adapters and controllers, includes Virtual Machine Device Queues (VMDq) and support for Single Root I/O Virtualization (SR-IOV) to improve network performance across all major hypervisors.

 

Utilize Quality of Service for Multi-Tenant Networking


Like a public cloud, a private cloud provides services to many different clients, ranging from internal business units or departments within the company to customers, and they all have performance expectations of the cloud.  Quality of Service (QoS) helps ensure that clients’ requirements are met.

 

Technologies are available that provide QoS on the network and within a physical server. QoS between devices on the network is delivered by Data Center Bridging (DCB), a set of standards that defines how bandwidth is allocated to specific traffic classes and how those policies are enforced. For traffic between virtual machines in a server, QoS can be controlled in either hardware or software, depending on the hypervisor. When choosing a network adapter for your server, support for these types of QoS should be taken into consideration.
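A simplified sketch of the bandwidth-allocation idea behind DCB's Enhanced Transmission Selection: each traffic class is guaranteed a share of the link, and unused headroom can be borrowed by busier classes. The class names and percentages here are just an example configuration, not a recommendation or a model of any particular switch.

```python
# Toy ETS-style allocation on a 10 Gbps link: guaranteed shares per class,
# with unused guarantee redistributed to classes that want more.
LINK_GBPS = 10.0
guarantees = {"FCoE": 0.4, "LAN": 0.4, "management": 0.2}   # must sum to 1.0
demand = {"FCoE": 2.0, "LAN": 9.0, "management": 0.5}       # Gbps offered load

alloc = {c: min(demand[c], guarantees[c] * LINK_GBPS) for c in guarantees}
spare = LINK_GBPS - sum(alloc.values())
for c in alloc:                                   # hand spare capacity to hungry classes
    extra = min(spare, demand[c] - alloc[c])
    alloc[c] += extra
    spare -= extra
print(alloc)   # FCoE keeps its floor; LAN borrows the unused headroom
```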

 

Again, keep in mind that these are high-level looks at the recommendations. The white paper I’m summarizing goes into much greater detail on the hows and whys behind each recommendation. If you’re thinking about deploying a cloud data center, it’s highly recommended reading.

 

 

Follow @IntelEthernet for the latest updates on Intel Ethernet technologies.

If you were in Seattle last week, you shared the Emerald City with Supercomputing 2011, one of the high-performance computing (HPC) community’s biggest shows. Attendees heard about the latest generation of supercomputers, the technologies that enable them, and advances that will make future high-performance systems even faster than the current models. In this post, I’ll take a quick look at the role of networking in HPC and share some Intel Ethernet information from the show.

 

Today’s typical HPC systems are custom-designed clusters of many physical servers, often hundreds or even thousands of them. These many compute nodes process and analyze massive amounts of data for traditional supercomputer tasks, such as climate research, quantum physics, and aerodynamic simulations, as well as financial applications, including high-frequency stock trading and data warehousing. As with any compute cluster environment, the network is a critical element of an HPC cluster. Each machine communicates constantly with its peers in the cluster.

 

Specialized fabrics have been the traditional choice for networking HPC clusters, but in recent years many organizations have turned to Ethernet for these environments. The reasons are pretty straightforward:

 

  • Ethernet is everywhere. It’s a familiar, well-understood technology, and practically every server includes an integrated Ethernet connection. Ethernet’s ubiquity and a solid, established ecosystem make it easy to deploy in any environment, including HPC.

 

  • Ethernet is flexible and adaptable. Over the years, Ethernet expanded to incorporate additional traffic types, including video, voice over IP, and storage (NFS, iSCSI, and Fibre Channel over Ethernet). Enhancements such as 10 Gigabit Ethernet (10GbE) and iWARP (for ultra-low latency performance) have made Ethernet a viable solution for HPC.

 

  • Ethernet enables simplification. Consolidating multiple fabrics onto Ethernet eliminates the need to maintain and manage disparate network fabrics and the required equipment.

 

Looking for an example of someone deploying 10 Gigabit Ethernet in an HPC environment? Want to know more about upcoming products? Let me give you a few examples of what the Intel Ethernet team showcased at SC11:

 

NASA Case Study: HPC in the Cloud

Yep, HPC in the cloud. In the Intel booth, Hoot Thompson from NASA’s Center for Climate Simulation discussed how NASA moved modeling and simulation applications to a cloud-based cluster. This cloud deployment offered greater elasticity and agility than their bare-metal cluster, but they wanted to know how their applications would perform in the cloud. NASA combined the cloud cluster’s management and backbone networks onto 10GbE using Intel Ethernet 10 Gigabit Server Adapters, and found network performance to be comparable to the bare-metal cluster. It even exceeded it in some tests. Support for single-root I/O virtualization (SR-IOV) in Intel Ethernet 10 Gigabit Server Adapters played a key role in these performance levels. Results like these show that not only is Ethernet a viable fabric for HPC, but also that cloud environments are suitable for HPC deployments.

 

Low Latency Switching: Intel® Ethernet Switch FM6000

The Intel® Ethernet Switch FM6000 is the latest 10GbE and 40GbE Ethernet switch controller developed by Fulcrum Microsystems, which Intel acquired earlier this year. One of this product’s key features is its low latency performance. Switch latency is the amount of time it takes to forward a received packet on to its destination, and it’s an important ingredient in high performance computing, where nodes in a cluster constantly communicate with each other through the switch. At SC11, we displayed a reference board that will make it easier for switch vendors to design products based on the Intel Ethernet Switch FM6000. We’ll have someone from that product team blog more about it soon, so stay tuned.

 

The Public Debut of Integrated 10GBASE-T

If you’ve read any of my previous blog posts, you’ve heard of Intel’s upcoming 10GBASE-T controller, which brings integrated 10GbE to rack and tower servers in 2012. At SC11, Supermicro and Tyan showcased next-generation motherboards that feature this controller, marking the first time motherboard integration has been shown publicly. We’re excited about this product because it will allow you to connect mainstream servers to 10GbE networks, including HPC clusters, without the expense of an add-in adapter.

 

And now, ladies and gentlemen, I give you integrated 10GBASE-T powered by Intel® Ethernet.

 

[Photo: Supermicro motherboard with integrated 10GBASE-T (the two ports on the right)]


[Photo: Tyan motherboard with integrated 10GBASE-T ports on the upper left]

 

10GbE in the Top 500

Need more proof that Ethernet is ready for HPC? Twice each year, the Top 500 organization publishes a list of the 500 fastest supercomputers on the planet. Number 42 on the list published last week is an Amazon EC2 cluster. It’s powered by the Intel® Xeon processor X5570 series, contains 7,040 cores, and is connected together by 10GbE. Check out High Performance Computing Hits the Cloud by Amazon Web Services’ James Hamilton to learn how Amazon used their 10GbE pipes.

 

Those are just a few of the technologies that help make Ethernet a viable network fabric in HPC environments. While HPC systems perform vastly different tasks than typical data center servers, many of those systems rely on the same Ethernet technologies and products used in data centers. We expect that number to grow as more organizations recognize the benefits of moving to Ethernet and use it to connect their HPC systems – systems that predict how big the next hurricane will be, process thousands of stock transactions in less than a second, or model what happens when two black holes collide.

 

 

Follow us on Twitter to learn more: @IntelEthernet

There are always plenty of new terms looking to join the data center vernacular. You’re familiar with many that have taken root over the past several years: virtualization, cloud, consolidation, mission critical... the list goes on and on. A hot term you’ve probably heard recently is Big Data. There’s a lot of buzz around this one, as evidenced by the many companies introducing products designed to meet the challenges of Big Data. In this post, I’m going to take a look at Big Data and see what its implications could be for data center networks.

 

[Photo: rack server aisle]

 

First things first: what is Big Data? The specifics may vary depending on whom you ask, but the basic idea is consistent: Big Data means large sets of data that are difficult to manage (analyze, search, etc.) using traditional methods. Of course, there’s more to it than that, but that’s a decent enough answer to make you sound semi-intelligent at a cocktail party. Unless it’s a Big Data cocktail party.

 

A logical follow-up question is “why is Big Data so...big?” The main cause for these massive data sets is the explosive growth in unstructured data over the past several years. Digital images, audio and video files, e-mail messages, Word or PowerPoint files – they’re all examples of unstructured data, and they’re all increasing at a dizzying rate. Need a first-hand example? Think about your home PC. How many more digital photos, MP3s, and video files are on your hard drive compared to a few years ago? Tons, right? Now imagine that growth on an Enterprise scale, where thousands of employees are each saving gigabytes worth of presentations, spreadsheets, e-mails, images, and other files. That’s a lot of data, and it’s easy to see how searching, visualizing, and otherwise analyzing it can be difficult.

 

[Structured data, for those of you wondering, is data organized in an identifiable structure. Examples include databases or data within a spreadsheet, where information is grouped into columns and rows.]

 

So what to do? There’s no shortage of solutions billed as the answer to the Big Data problem. Let’s take a look at one that’s getting a lot of attention these days: Hadoop.

 

Hadoop is an open source software platform used for distributed processing of vast amounts of data. A Hadoop deployment divides files and applications into smaller pieces and distributes them across compute nodes in the Hadoop cluster. This distribution of files and applications makes it easier and faster to process the data, because multiple processors are working in parallel on common tasks.

 

Let’s take a quick look at how it works.

 

Hadoop comprises two major software components: the Hadoop Distributed File System (HDFS) and the MapReduce engine.

  • HDFS runs across the cluster and facilitates the storage of portions of larger files on various nodes in the cluster. It also provides redundancy and enables faster transactions by placing a duplicate of each piece of a file elsewhere in the cluster.
  • The MapReduce engine divides applications into small fragments, which are then run on nodes in the cluster. The MapReduce engine attempts to place each application fragment on the node that contains the data it needs, or at least as close to that node as possible, reducing network traffic (see the sketch after this list).
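To give a feel for the MapReduce model itself, here is the classic word-count example. This is a generic, in-memory Python illustration of the programming pattern, not Hadoop's Java API, and everything runs locally rather than distributed across a cluster.

```python
# Minimal MapReduce-style word count. In a real Hadoop cluster the map and
# reduce calls run in parallel on many nodes near the data; here everything
# runs locally just to show the shape of the computation.
from collections import defaultdict

def map_phase(document):
    return [(word.lower(), 1) for word in document.split()]

def reduce_phase(pairs):
    counts = defaultdict(int)
    for word, count in pairs:        # shuffle/group happens implicitly here
        counts[word] += count
    return dict(counts)

documents = ["big data is big", "data about data"]
intermediate = [pair for doc in documents for pair in map_phase(doc)]
print(reduce_phase(intermediate))    # {'big': 2, 'data': 3, 'is': 1, 'about': 1}
```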

 

So who’s using Hadoop today and why? You’ve heard of the big ones – Yahoo!, Facebook, Amazon, Netflix, eBay. The common thread? Massive amounts of data that need to be searched, grouped, presented, or otherwise analyzed. Hadoop allows organizations to handle these tasks at lower costs and on easily scalable clusters. Many of these companies have built custom applications that run on top of HDFS to meet their specific needs, and there’s a growing ecosystem of vendors selling applications, utilities, and modified file systems for Hadoop. If you’re a Hadoop fan, the future looks bright.

 

What are the network implications of Hadoop and other distributed systems looking to tackle Big Data? Ethernet is used in many server clusters today, and we think it will continue to grow in these types of deployments, as Ethernet's ubiquity makes it easy to connect these environments without using specialized cluster fabric devices. The same network adapters, switches, and cabling that are being used for data center servers can be used for distributed system clusters, simplifying equipment needs and management. And while Hadoop is designed to run on commodity servers, hardware components, including Ethernet adapters, can make a difference in performance. Dr. Dhabaleswar Panda and his colleagues at Ohio State University have published a research paper in which they demonstrate that 10GbE makes a big difference when combined with an SSD in an unmodified Hadoop environment. Results such as these have infrastructure equipment vendors taking notice. In its recent Data Center Fabric announcement, for example, Cisco introduced a new switch fabric extender aimed squarely at Big Data environments. You can expect to see more of this as distributed system deployments continue to grow.

 

Big Data isn’t going away. It’s going to keep getting bigger. We’ll see more products, both hardware and software, that will be designed to make your big data experience easier, more manageable, and more efficient. Many will use a distributed model like Hadoop, so network infrastructure will be a critical consideration.

 

So, here are some questions for you, dear reader: Have you deployed a Hadoop cluster or are you planning to do so soon? What network considerations did you take into account as you planned your cluster?

 

We’d love to hear your thoughts.

 

Follow us on Twitter: @IntelEthernet

On July 12th, VMware announced vSphere 5, the new version of its enterprise virtualization suite. There are many new features and capabilities in this product. From a networking point of view, one capability that’s pretty darn compelling is the native Fibre Channel over Ethernet (FCoE) support delivered through the integration of Open FCoE.

 

Not quite sure what that means? Read on.

 

In January, Intel announced the availability of certified Open FCoE support on our Intel Ethernet Server Adapter X520 family of 10GbE products as a free, standard feature. We had great support from our partners – Cisco, EMC, Dell, NetApp, and others – and we received plenty of positive press. Now, with the launch of vSphere 5, Intel and VMware have taken things a step further by the integration of Open FCoE in the industry’s leading server virtualization suite.

 

So what is Open FCoE and why is it important?

 

Open FCoE enables native support for FCoE in an operating system or hypervisor. Integrating a native storage initiator has some key benefits for customers who look to simplify their networks and converge LAN and storage traffic:

 

  • It enables storage over Ethernet support on a standard Ethernet adapter; no need for costly converged network adapters (CNAs) powered by hardware offload engines
  • Performance scales with server platform advancements, as opposed to CNA performance, which is limited by the capabilities of its offload processor
  • It enables FCoE on any compatible 10 Gigabit Ethernet adapter, which helps prevent vendor lock-in.

 

Open FCoE support in vSphere 5 means VMware customers can now use a standard 10 Gigabit Ethernet adapter, such as the Intel Ethernet Server Adapter X520, for 10GbE LAN and storage traffic (including NAS, iSCSI, and FCoE), which ultimately simplifies infrastructures and reduces equipment costs.

 

The idea of integrating native storage over Ethernet support isn’t new; most operating systems and hypervisors have included a native iSCSI initiator for several years. We’ve watched as dedicated iSCSI adapters gave way to iSCSI running on standard Ethernet adapters with performance that increases with each bump in processor speed and platform architecture improvement. We expect Open FCoE to bring similar benefits to FCoE traffic.

 

Intel worked closely with VMware to integrate Open FCoE in vSphere and to qualify it with the industry’s leading storage vendors. We’re excited to see it incorporated into vSphere 5, and we feel confident that VMware customers will appreciate its benefits.

 

VMware’s Vijay Ramachandran, Group Manager of Infrastructure Product Management and Storage Virtualization, offered some thoughts on Open FCoE in vSphere and why it’s a good thing for VMware customers.

 

Is unified networking and combining LAN and storage traffic on Ethernet important to your customers?

Absolutely. Virtualization is a major driver of 10 Gigabit adoption, and network convergence on 10GbE is very important for our customers who look to increase bandwidth and simplify their infrastructures.

 

There are other ways to support FCoE in vSphere. Why is Open FCoE integration significant?

Integrating Open FCoE into vSphere is important because it makes FCoE available to all of our customers, just as iSCSI has been for years. When customers upgrade to vSphere 5, they get FCoE support on any compatible 10GbE adapter they have installed. That’s important because choice is a key pillar of VMware’s private cloud vision. With its support for standard 10GbE adapters and compatibility with FCoE-capable network devices, Open FCoE supports that vision. vSphere 5 has several new storage and networking features that increase performance and improve management, and with Open FCoE, we have a native solution with performance that will scale with advancements in vSphere and server platforms.


Can you tell us about the work Intel and VMware did to enable Open FCoE in vSphere?

We wanted to implement FCoE in a way that offered the best benefits to our customers. VMware worked closely with Intel for over two years to integrate Open FCoE into vSphere and to validate compatibility with their 10GbE adapters. It’s nice to see the results of that work in vSphere 5.

 

I’d like to thank Vijay for taking the time to answer these questions for us.

 

If you’re interested in learning more, see Intel and VMware: Enabling Open FCoE in VMware vSphere 5.

 

Follow us on Twitter for the latest updates: @IntelEthernet

Hello again. Welcome to my second post from Cisco Live! 2011. It has been a great show so far, loaded with activity at the Intel booth and plenty of exciting showcases of new technologies. With today’s post, I’d like to talk about a technology that seems to come up at every show: the 10GBASE-T Ethernet standard.

 

In previous posts, I’ve mentioned that Intel’s “Twinville” 10GBASE-T controller will be the industry’s first single-chip 10GBASE-T controller and will power the 10GBASE-T LAN on motherboard (LOM) connections for mainstream servers later this year. This integration, along with 10GBASE-T’s backwards compatibility with Gigabit Ethernet and support for already deployed copper cabling, leads us to believe that 10GBASE-T will ultimately be the dominant 10GbE interface in terms of ports shipped.

 

On Tuesday, Cisco nudged all of us a bit closer to that reality by announcing the Nexus 2232TM Fabric Extender, its first Nexus platform that supports 10GBASE-T. Let’s take a closer look.

 

Cisco Nexus 2000 Fabric Extenders behave as remote line cards for Nexus parent switches. They connect to the parent switch via 10GbE fiber uplinks and are centrally managed by that parent, creating a distributed modular switch – distributed because the parent switch and fabric extenders are not physically constrained by a chassis, and modular because additional fabric extenders can be added to increase switch capacity.

 

The Nexus 2232TM has 32 10GBASE-T ports as well as eight SFP+ ports for connecting to its parent, in this case a Nexus 5000 series switch. With that many 10GBASE-T ports, the Nexus 2232TM can connect to every server in a typical rack. Integration of 10GBASE-T LAN on motherboard (LOM) ports on these servers will drive adoption of 10GbE and 10GBASE-T over the next few years. Since 10GBASE-T is backwards-compatible with existing Gigabit Ethernet (GbE) equipment, IT departments can upgrade to 10GBASE-T all at once or even server-by-server and use the same fabric extender for all of the servers.
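
To put that scale in perspective, here’s a quick back-of-the-envelope sketch in Python. The ports-per-extender count comes from the Nexus 2232TM itself; the number of fabric extenders a single parent switch can manage is an assumption for illustration, so check the limits for your specific parent switch model and software release.

# Scale math for a fabric-extender deployment. Ports per extender comes
# from the Nexus 2232TM described above; the number of fabric extenders
# per parent switch is an assumption -- check the limits for your
# specific parent switch model and software release.

ports_per_fex = 32     # 10GBASE-T host ports on each Nexus 2232TM
fex_per_parent = 24    # assumed fabric extenders per parent switch

managed_ports = ports_per_fex * fex_per_parent
print(f"Host ports managed from a single parent switch: {managed_ports}")  # 768

With 24 extenders, that works out to the 768-port figure Cisco quotes in the interview below.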

 

Here at Cisco Live, the Nexus 2232TM and Twinville-based 10GBASE-T adapters are key ingredients in a joint demo from Intel, Cisco, and Panduit. There’s quite a bit more to the demo (iSCSI, FCoE, live migration), but I don’t really have space to cover it here. I’ll see if I can post a short video of the demo in the near future.

 

Earlier this week, I spoke with Aurelie Fonteny, product manager for the Nexus 2232TM, who kindly answered a handful of questions.

 

BY: Aurelie, how does Cisco see 10GBASE-T growing over the next few years?


AF: The past few years have been marked by a trend towards 10 Gigabit Ethernet at the server access. Virtualization, consolidation of multiple 1GbE cables, price, and higher-performance CPUs have all been drivers of that trend. 10GBASE-T will accelerate it by adding flexibility in connectivity options. Ultimately, LOM integration will drive exponential volume growth for 10GBASE-T platforms.

 

BY: What benefits does the Nexus 2232TM offer to your customers?


AF: Cisco is excited to introduce the Nexus 2232TM, the first 10GBASE-T product in the Nexus family of data center switches. In total, 768 1/10GBASE-T ports can be managed from a single point of management. As such, the Nexus 2232TM combines the benefits of the FEX architecture with the benefits of 10GBASE-T: 10G consolidation, simple 1G-to-10G migration, and cabling simplicity.

 

 

BY: How do you anticipate customers deploying this product vs. the SFP+ version of the Nexus 2232?


AF: The Nexus 2232PP (the fiber version with Direct Attach copper options) and the Nexus 2232TM share the same architecture and have the same number of host interfaces and network interfaces. The choice between the two platforms will be a trade-off between power, latency, cabling type, price, and FCoE support (not supported on the Nexus 2232TM at FCS).

 

BY: We’ve heard folks say that 10GbE is too expensive. Can you talk about pricing for the Nexus 2232TM?


AF: The Nexus 2232TM is priced at a small premium over the Nexus 2232PP (the fiber version with Direct Attach copper options). Total cost of ownership includes not only the product itself but also cabling, server adapters, and power. As such, the two solutions, 10G servers attached to the Fabric Extender via Direct Attach copper (Twinax) or via 10GBASE-T, are about the same price today. Decision factors include requirements for FCoE consolidation, distances between servers and network access, cabling preference and existing cabling structure, and the mix of 1G and 10G ports required at the top of rack.

 

BY: How have Cisco and Intel’s work together advanced 10GBASE-T?


AF: Our strong collaboration with Intel from the early stages of 10GBASE-T on Catalyst and Nexus platforms has been critical to the ecosystem interoperability at the server access and to the overall high level quality achieved in 10GBASE-T product integration.

 

We are working with Cisco and Panduit on a white paper that includes key deployment models for 10GBASE-T in the data center. I will post a short blog when the paper is available.

 

Follow us on Twitter for the latest updates: @IntelEthernet

Greetings from Cisco Live! 2011!

 

 

I’m in Las Vegas at another of the IT industry’s big shows. Here, I’ll meet with customers and partners to talk about Intel Ethernet products and some of the new technologies that are changing the data center. Over the next few days, I’ll bring you updates on some of the significant network technology announcements taking place here and explain why they’re important.

 

Today’s topic: Energy Efficient Ethernet (also known as IEEE 802.3az, EEE, or triple-E).

 

The name is fairly self-explanatory, but I’ll give you a bit of detail. The EEE standard allows an Ethernet device to transition to and from a low-power state in response to changes in network demand. This means an Ethernet port that supports EEE can drop into its low-power state (known as Low Power Idle, or “LPI”) during periods of low activity and then return to normal operation when conditions require it. That’s as deep as I’m going to go, but if you want nuts and bolts, check out “Energy Efficient Ethernet: Technology, Application, and Why You Should Care” from Intel’s Jordan Rodgers.
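
To make the LPI idea concrete, here’s a minimal back-of-the-envelope model in Python. All of the numbers are illustrative placeholders, not measured values for any particular controller.

# Back-of-the-envelope model of EEE savings for a single port.
# All numbers are illustrative placeholders, not measured values.

def average_port_power(active_power_w, lpi_reduction, idle_fraction):
    """Average power of one EEE-capable port.

    active_power_w -- power draw while the link is active (watts)
    lpi_reduction  -- fraction of power saved while in Low Power Idle
    idle_fraction  -- fraction of time the link sits in LPI
    """
    lpi_power_w = active_power_w * (1.0 - lpi_reduction)
    return idle_fraction * lpi_power_w + (1.0 - idle_fraction) * active_power_w

# Hypothetical 1 W port that saves 50 percent in LPI and is idle 80
# percent of the time, as a lightly used client link might be.
avg = average_port_power(active_power_w=1.0, lpi_reduction=0.5, idle_fraction=0.8)
print(f"Average power per port: {avg:.2f} W")  # 0.60 W vs. 1.00 W without EEE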

 

Intel supports EEE across our Intel® Ethernet Gigabit product family for both client and server connections, including the recently launched Intel® Ethernet Controller I350 and the Intel® 82579 Gigabit Network Connection:

 

  • The Intel Ethernet Controller I350 supports EEE across four Gigabit Ethernet ports integrated onto a single chip. In LPI state, power consumption drops by 50 percent. This controller powers the Intel Ethernet Server Adapter I350 family, and all of those adapters enjoy the same EEE benefits.

 

  • The Intel® 82579 Gigabit Network Connection is designed for client systems, and in the LPI state its power consumption drops by nearly 90 percent. It’s also a key power-saving component of second-generation Intel® Core™ vPro™ processor family systems, which also include power-saving enhancements in the CPU and chipset. For a real-world example of how these features help save power and money, check out this case study: Pioneering Public Sector IT.

 

That takes care of the client side, but what about the other end of the wire? EEE only works when devices on both sides are EEE-compliant. That’s where Cisco comes in.

 

Earlier today, Cisco announced a number of enhancements (including EEE) to its Catalyst 4500E switch family. These switches are designed for campus deployments, which means they connect primarily to client systems – desktops and laptops. When you consider that many companies deploy thousands of client systems, the benefits of EEE are obvious: big energy savings.
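
Here’s a rough sketch of what that can add up to across a campus. The link count, per-link saving, and electricity price are all assumptions; the roughly 1 W-per-link figure echoes the average per-link savings cited in the interview below.

# Rough fleet-level estimate for a campus. Link count, per-link saving,
# and electricity price are assumptions chosen only for illustration.

links = 10_000                 # EEE-capable client links on the campus
watts_saved_per_link = 1.0     # assumed average saving per link
hours_per_year = 24 * 365
price_per_kwh = 0.10           # assumed electricity price in USD

kwh_saved = links * watts_saved_per_link * hours_per_year / 1000.0
dollars_saved = kwh_saved * price_per_kwh
print(f"Energy saved per year: {kwh_saved:,.0f} kWh (~${dollars_saved:,.0f})")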

 

Earlier this week I asked Anoop Vetteth, senior product manager for the Catalyst 4000 switch family, a few questions about Cisco’s support for EEE.

 

BY: Anoop, why is EEE support in the Catalyst 4500E line important to customers?


AV: Energy efficiency and minimizing power consumption to meet corporate sustainability goals seem to be top of mind for most of Cisco’s enterprise customers. To that end, Cisco has delivered energy-efficient platforms targeted at both the campus and the data center. Moreover, through applications like EnergyWise, Cisco has also enabled its customers to look beyond networking equipment to monitor and regulate the power consumed by the entire campus. The piece that was missing was a mechanism to dynamically reduce network power consumption based on link utilization. Energy Efficient Ethernet, or IEEE 802.3az, addresses this and is probably the only green standard in the industry today. The EEE standard was ratified late last year, and we expect to see market leaders for end devices like Intel offer EEE as a standard feature starting mid to late 2011. We also expect EEE to quickly become a requirement from certification agencies for corporate compliance.

 

The Catalyst 4500E is the world’s most widely deployed modular access switch and Cisco’s leading campus access-layer switch. This platform has leapfrogged the industry time and again by being first to deliver many industry-standard and pre-standard technologies. With EEE support, the Cisco Catalyst 4500E delivers the most energy-efficient platform in its class in the campus and future-proofs customer deployments for compliance with emerging regulatory requirements.

 

BY: Can you tell me how the collaboration between Cisco and Intel is making a difference for energy efficiency across the network?


AV: From the get-go, Cisco and Intel have been working closely together to deliver a solution that is compliant with the IEEE standard and to weed out any deployment-impacting issues. The EEE end-to-end solution will first be offered for the enterprise campus because of the traffic profile and the high impact, in terms of power savings, that we expect in this environment. The sheer volume of end devices coupled with the low link utilization in campus environments makes it ideal for the introduction of EEE technology. Testing with real-life traffic profiles on Cisco Catalyst 4500E switches and Intel EEE-capable network controllers reveals that EEE can help save, on average, as much as 1 W per link. EEE in conjunction with Cisco EnergyWise translates to considerable savings in campus environments with tens of thousands of end devices.

 

BY: Can you tell me how Intel and Cisco have worked together to support EEE?


AV: Cisco and Intel have been big proponents of the EEE standard at the IEEE 802.3 working group, and our representatives contributed collaboratively toward the successful culmination and ratification of this standard. The collaboration did not stop there. The EEE standard defines a new signaling mechanism between link partners (the switch port and the end device) to communicate EEE capability and negotiate precise timing parameters, including when to enter the LPI state and for how long. With no precedent and no governing body to check compliance, it became necessary to form an alliance to test and validate each other’s implementation. Cisco and Intel have been in lock step during this validation process to ensure that each implementation complies with the IEEE 802.3 standard. Finally, both companies have also come together to engage top customers collaboratively as part of an Early Field Trial (EFT), or beta, program.

 

BY: What results have you seen from your early field trial customers?


AV: We are collaboratively running EFT programs with some of our key customers in North America and Europe. The program started in mid-June and is well underway. Cisco and Intel provided the technical support required to get the setups up and running so that customers can run their own traffic patterns/profiles and measure the power savings with and without EEE enabled. The feedback from customers has been overwhelming, both in terms of interest in the technology and the power savings they are seeing. Both Cisco and Intel have also incorporated enhancements into our products based on valuable suggestions from our EFT customers.

 

BY: Will Cisco support EEE in other switches?


AV: Cisco considers EEE to be a strategic technology and will extend EEE support beyond the Catalyst 4500E platform. Next-generation stackable Catalyst switches are expected to support EEE, which will extend EEE support across all of Cisco’s campus access platforms. We expect the relevance of EEE in the data center to become more prominent and pronounced as customers transition to 10GBASE-T links for server access.

 

For more information, see this white paper from Intel and Cisco: IEEE 802.3az Energy Efficient Ethernet: Build Greener Networks.

 

Watch for another update from Cisco Live! 2011 later this week.

 

Follow us on Twitter for the latest updates: @IntelEthernet

Back in March 2008, my colleague Ben Hacker wrote up a blog post that compared and contrasted the 10 Gigabit Ethernet (10GbE) interface standards. It was a geeky dive into the world of fiber and copper cabling, media access controllers, physical layers, and other esoteric minutiae. It was also a tremendously popular edition of Ben’s blog and continues to get hits today.

 

So here we are three years later. How have things shaken out since that post? Which interfaces are most widely deployed and why? Ben has left the Ethernet arena for the world of Thunderbolt™ Technology, so I’ve penned this follow-up to bring you up to date. I’ll warn you in advance, though – this is a long read.

 

Still here? Let's go.

 

In “10 Gigabit Ethernet – Alphabet Soup Never Tasted So Good!” Ben examined six 10GbE interface standards: 10GBASE-KX4, 10GBASE-SR, 10GBASE-LR, 10GBASE-LRM, 10GBASE-CX4, and 10GBASE-T. I won’t go into the nuts and bolts of each of these standards; you can read Ben’s post if you’re looking for that info. I will, however, take a look at how widely each of these standards is deployed and how they’re being used.

 

10GBASE-KX4/10GBASE-KR

These standards support low-power 10GbE connectivity over very short distances, making them ideal for blade servers, where the Ethernet controller connects to another component on the blade. Early implementations of 10GbE in blade servers used 10GBASE-KX4, but most new designs use 10GBASE-KR due to its simpler design requirements.

 

Today, most blade servers ship with 10GbE connections, typically on “mezzanine” adapter cards. Dell’Oro Group estimates that mezzanine adapters accounted for nearly a quarter of the 10GbE adapters shipped in 2010, and projects they’ll maintain a significant share of 10GbE adapter shipments in the future.

 

10GBASE-CX4

10GBASE-CX4 was deployed mostly by early adopters of 10GbE in HPC environments, but shipments today are very low. The required cables are bulky and expensive, and the rise of SFP+ Direct Attach, with its compact interface, less expensive cables, and compatibility with SFP+ switches (we’ll get to this later), has left 10GBASE-CX4 an evolutionary dead end. Dell’Oro Group estimates that 10GBASE-CX4 port shipments made up less than two percent of total 10GbE shipments in 2010, which is consistent with what Intel saw for our CX4 products.

 

The SFP+ Family: 10GBASE-SR, 10GBASE-LR, 10GBASE-LRM, SFP+ Direct Attach

This is where things get more interesting. All of these standards use the SFP+ interface, which allows network administrators to choose different media for different needs. The Intel® Ethernet Server Adapter X520 family, for example, supports “pluggable” optics modules, meaning a single adapter can be configured for 10GBASE-SR or 10GBASE-LR by simply plugging the right optics module into the adapter’s SFP+ cage. That same cage also accepts SFP+ Direct Attach Twinax copper cables. This flexibility is the reason SFP+ shipments have taken off, and Dell’Oro Group and Crehan Research agree that SFP+ adapters lead 10GbE adapter shipments today.

 

10GBASE-SR

“SR” stands for “short reach,” but that might seem like a bit of a misnomer; 10GBASE-SR has a maximum reach of 300 meters using OM3 multi-mode fiber, making it capable of connecting devices across most data centers. A server equipped with 10GBASE-SR ports is usually connected to a switch in a different rack or in another part of the data center. 10GBASE-SR’s low latency and relatively low power requirements make it a good solution for latency-sensitive applications, such as high-performance compute clusters. It’s also a common backbone fabric between switches.

 

For 2011, Dell’Oro Group projects SFP+ fiber ports will be little more than a quarter of the total 10GbE adapter ports shipped. Of those ports, the vast majority (likely more than 95 percent) will be 10GBASE-SR.

 

10GBASE-LR

10GBASE-LR is 10GBASE-SR’s longer-reaching sibling. “LR” stands for “long reach” or “long range.” 10GBASE-LR uses single-mode fiber and can reach distances of up to 10km, though there have been reports of much longer distances with no data loss. 10GBASE-LR is typically used to connect switches and servers across campuses and between buildings. Given their specific uses and higher costs, it’s not surprising that shipments of 10GBASE-LR adapters are much lower than shipments of 10GBASE-SR adapters. My team tells me adapters with LR optics modules account for less than one percent of Intel’s 10GbE SFP+ adapter sales. It’s an important one percent, though, as no other 10GbE interface standard provides the same reach.

 

10GBASE-LRM

This standard specifies support for 10GbE over older multimode fiber (up to 220 m), allowing IT departments to get more life out of older cabling. I’m not aware of any server adapters that support this standard, but there may be some out there. Some switch vendors ship 10GBASE-LRM modules, but support for this standard will likely fade away before long.

 

SFP+ Direct Attach

SFP+ Direct Attach uses the same SFP+ cages as 10GBASE-SR and LR but without active optical modules to drive the signal. Instead, a passive copper Twinax cable plugs into the SFP+ housing, resulting in a low-power, short-distance, low-latency 10GbE connection. Supported distances for passive cables range from five to seven meters, which is more than enough to connect a switch to any server in the same rack. SFP+ Direct Attach also supports active copper cables, which reach greater distances at the cost of slightly higher power and latency.
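
To keep the SFP+ variants straight, here’s a quick reference in Python form, using the nominal media and reach figures described above. Treat it as a summary of this post, not a cabling guide.

# Quick-reference summary of the SFP+ family described above. Reach
# figures are the nominal values mentioned in this post; confirm against
# the relevant specs and your cabling plant before deploying.

SFP_PLUS_FAMILY = {
    "10GBASE-SR": {
        "media": "OM3 multi-mode fiber",
        "max_reach_m": 300,
        "typical_use": "in-data-center links, latency-sensitive clusters, switch backbones",
    },
    "10GBASE-LR": {
        "media": "single-mode fiber",
        "max_reach_m": 10_000,
        "typical_use": "campus and building-to-building links",
    },
    "10GBASE-LRM": {
        "media": "legacy multi-mode fiber",
        "max_reach_m": 220,
        "typical_use": "reusing older fiber plant (rare on server adapters)",
    },
    "SFP+ Direct Attach (passive)": {
        "media": "Twinax copper",
        "max_reach_m": 7,
        "typical_use": "server to top-of-rack switch within the same rack",
    },
}

def options_for_distance(meters):
    """Return the SFP+ variants whose nominal reach covers a given distance."""
    return [name for name, info in SFP_PLUS_FAMILY.items()
            if info["max_reach_m"] >= meters]

print(options_for_distance(5))    # everything works at in-rack distances
print(options_for_distance(250))  # only SR and LR remain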

 

A common deployment model for 10GbE in the data center has a "top-of-rack" switch connecting to servers in the rack using SFP+ Direct Attach cables and 10GBASE-SR ports connecting to end-of-row switches that aggregate traffic from multiple racks.

 

This model has turned out to be tremendously popular thanks to the lower costs of SFP+ Direct Attach adapters and cables. In fact, Dell’Oro Group estimates that Direct Attach adapter shipments overtook SFP+ fiber adapter shipments in 2010 and will outsell them by more than 2.5:1 in 2011.

 

10GBASE-T

Last, let’s take a look at 10GBASE-T. This is 10GbE over the twisted-pair cabling that’s deployed widely in nearly every data center today, using the familiar RJ-45 connector that plugs into almost every server, desktop, and laptop.

 

RJ-45 Cable End

RJ-45: Look familiar?

Alternate title: Finally, a picture to break up the text

 

 

In his post, Ben mentioned that 10GBASE-T requires more power than other 10GbE interfaces. Over the last few years, however, manufacturing process improvements and more efficient designs have reduced power needs to the point where Intel’s upcoming 10GBASE-T controller, codenamed Twinville, will support two 10GbE ports at less than half the power of our current dual-port 10GBASE-T adapter.

 

This lower power requirement, along with a steady decrease in costs over the past few years, means we’re now at a point where 10GBASE-T is ready for LOM integration on mainstream servers – mainstream servers that you’ll see in the second half of this year.

 

I’m planning to write about 10GBASE-T in detail next month, but in the meantime, let me give you some of its high-level benefits:

  • It’s compatible with existing Gigabit Ethernet network equipment, making migration easy. SFP+ Direct Attach is not backward-compatible with GbE switches.
  • It’s cost-effective. The list price for a dual-port Intel Ethernet 10GBASE-T adapter is significantly lower than the list price for an Intel Ethernet SFP+ Direct Attach adapter. Plus, copper cabling is less expensive than fiber.
  • It’s flexible. Up to 100 meters of reach make it an ideal choice for wide deployment in the data center.

 

We at Intel believe 10GBASE-T will grow to become the dominant 10GbE interface in the future for those reasons. Crehan Research agrees, projecting that 10GBASE-T port shipments will overtake SFP+ shipments in 2013-2014.

 

If you’re interested in learning about what it takes to develop and test a 10GBASE-T controller, check out this Tom's Hardware photo tour of Intel’s 10 Gigabit “X-Lab.” It's another long read, but at least there are lots of pictures.

 

In the three years that have passed since Ben’s post, a number of factors have driven folks to adopt 10GbE. More powerful processors have enabled IT to achieve greater consolidation, data center architects are looking to simplify their networks, and more powerful applications are demanding greater network bandwidth. There’s much more to the story than I can cover here, but if you are one of the many folks who read that first article and have been wondering what has happened since then, I hope you found this post useful.

 

 

Follow us on Twitter for the latest updates: @IntelEthernet

Greetings from Interop 2011, here in Las Vegas.  For those of you not in the know, Interop is billed as “the meeting place for the global business technology community,” and it’s one of the IT industry’s major tradeshows. Technology companies from all over the world are showcasing their latest products here this week, and networking companies are no exception. Bandwidth, Gigabit, 10 Gigabit, iSCSI, Fibre Channel over Ethernet, I/O virtualization – all of these networking terms (and many more) can be heard as one walks through the exhibitor expo. Why? Because networking is an essential element of many of the technology areas being highlighted here this week, and people want to understand how new networking technologies will benefit and affect them.

 

So with that in mind, I thought I’d share a handful of the questions we on the Intel Ethernet team are hearing at this show. And I’ll answer them for you, of course.

 

What Ethernet solutions are available from Intel?

The Intel® Ethernet product line offers pretty much any adapter configuration you could want – Gigabit, 10 Gigabit, copper, fiber, one port, two ports, four ports, custom blade form factors, support for storage over Ethernet, enhancements for virtualization . . . the list goes on and on. We’ve been in the Ethernet business for 30 years, and we’re the volume leader for Gigabit Ethernet (GbE) and 10 Gigabit Ethernet (10GbE) adapters. We’ve shipped over 600 million Ethernet ports to date. I have to think even Dr. Evil would be happy with that number.

 

 

interop rack3.jpg

The latest and greatest: a display of our 10GbE adapters for rack and blade servers

 

 

Why do I need 10GbE?

Quite simply, deploying 10GbE helps simplify your network while reducing equipment needs and costs. A typical virtualized server today contains 10 or 12 GbE ports plus two storage network ports, often Fibre Channel. 10GbE allows you to consolidate the traffic of those dozen or more ports onto just two 10GbE ports. This consolidation means fewer network and storage adapters, less cabling, and fewer switch ports to connect that server to the network, and those reductions translate into lower equipment and power costs.
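
Here’s the consolidation math from that answer as a tiny sketch, using the port counts mentioned above.

# Consolidation math for the example above: a virtualized server with a
# dozen GbE ports plus two Fibre Channel ports, collapsed onto two 10GbE
# ports that carry both LAN and storage traffic.

gbe_ports = 12         # GbE NIC ports on the legacy server
fc_ports = 2           # dedicated Fibre Channel HBA ports
converged_ports = 2    # 10GbE ports after convergence

cables_removed = (gbe_ports + fc_ports) - converged_ports
lan_bw_before = gbe_ports * 1          # Gbps of LAN bandwidth before
lan_bw_after = converged_ports * 10    # Gbps shared by LAN and storage after

print(f"Cables and switch ports eliminated per server: {cables_removed}")
print(f"LAN bandwidth: {lan_bw_before} Gbps -> {lan_bw_after} Gbps (now shared with storage)")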

 

What role will Ethernet play in the cloud?

Ethernet is the backbone of any data center today, and that won’t change as IT departments deploy cloud-optimized infrastructures. In fact, Ethernet is actually extending its reach in the data center. Fibre Channel over Ethernet for storage network traffic and iWARP for low latency clustering traffic are two examples of Ethernet expanding to accommodate protocols that used to require specialized data center fabrics. The quality of service (QoS) enhancements delivered by the Data Center Bridging standards are largely responsible for these capabilities.

 

As I mentioned above, converging multiple traffic types onto 10GbE greatly simplifies network infrastructures. Those simpler infrastructures make it easier to connect servers and storage devices to the network, and the bandwidth of 10GbE will help ensure the performance needed to support new cloud usage models that require fast, flexible connectivity.

 

How quickly is 10GbE growing?

10GbE is growing at a healthy rate as more IT departments look to simplify server connectivity and increase bandwidth. According to the Dell’Oro Group’s Controller & Adapter Report for 4Q10, 10GbE port shipments rose to over 3,000,000 in 2010, a 250 percent increase over 2009.

 

All the major network adapter and switch vendors are showing 10GbE products here this week. One of those companies, Extreme Networks, announced two new switches on Tuesday as a part of their Open Fabric Data Center Architecture for Cloud-Scale Networks. The BlackDiamond* X8 switch platform supports up to 768 10GbE ports per chassis, and the Summit X670 switches are available in 64- and 48-port configurations. Sounds like a big vote of confidence in 10GbE, doesn’t it?

 

Isn’t 10GbE still pretty expensive?

It might seem that way, but 10GbE prices have fallen steadily over the past few years. You’ll have to check with other companies for their pricing info, but I can tell you that at less than $400 per port, Intel Ethernet 10 Gigabit server adapters are less expensive than our GbE adapters in terms of cost per Gigabit.
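
To make the cost-per-gigabit point concrete, here’s the arithmetic with placeholder prices. Only the sub-$400 10GbE figure comes from the discussion above; the GbE price is purely illustrative.

# Cost-per-gigabit comparison. The 10GbE price reflects the "less than
# $400 per port" figure above; the GbE adapter price is a placeholder
# chosen only to show the calculation.

def cost_per_gigabit(price_per_port, gbps_per_port):
    return price_per_port / gbps_per_port

ten_gbe = cost_per_gigabit(price_per_port=400.0, gbps_per_port=10)  # $40 per Gbps
gbe = cost_per_gigabit(price_per_port=100.0, gbps_per_port=1)       # $100 per Gbps (placeholder)

print(f"10GbE: ${ten_gbe:.0f}/Gbps vs. GbE: ${gbe:.0f}/Gbps")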

 

When will I see 10GbE shipping as a standard feature on my servers?

10GbE connections are available in many blade systems today, and integrated 10GbE LAN on motherboard (LOM) connections will be widespread in rack servers with the launch of Sandy Bridge servers in the second half of 2011. When 10GbE becomes the default connection on those volume servers, all of the benefits of 10GbE – simpler networks, higher performance, lower costs – will be free, included with the cost of the server. There’s a ton I could say in this answer, and I’ll go into more detail in a future post.

 

There you go - that’s a quick sample of the questions we’ve heard here at the show. Many of them, including those above, will make for some interesting blog posts. I’ll get to as many as I can in the coming weeks.
