
Data center traffic is poised for a six-fold increase over the next four years, reaching 6.6 zettabytes. To understand how this will affect data center infrastructure, though, we need a clearer picture of which types of data are growing fastest.


In the latest Cisco* Global Cloud Index report, we learn that almost two-thirds of that 6.6 zettabytes is cloud computing traffic. But even more interesting is that 76% of data center traffic will stay within the data center. This is the so-called “east-west” traffic that results from data exchanges and requests between servers, or between servers and storage.


Why so much east-west traffic? The Cisco report does not break down the details, but we can surmise that this comes from applications such as web transaction processing, recommender systems, cloud clustering services and big data analytics. 


The response time for these applications can be impacted by network latency, which means low-latency switches (like our Intel® Ethernet Switch FM6000 family) will play a key role in the data centers that will be built to support this data explosion.

It used to be said that low-latency networks weren’t needed for the data centers that ran big e-commerce or social media sites: most people are willing to wait an extra few microseconds for the latest update from their friends, and the network itself wasn’t a gating factor in performance.


Product recommendation technology, which powers the “you might also like” messages on e-commerce sites, however, is changing that. New “recommender” systems require increased computing performance to factor more data into their recommendations. And they need to do all of this in the time it takes a webpage to load.


An article in the October 2012 issue of IEEE Spectrum by professors, and recommender system pioneers, Joseph A. Konstan and John Riedl chronicles the evolution of the technology and its dramatic impact on e-commerce sales.


The most popular recommender systems use either user-user or item-item algorithms. That is, they compare your purchases, likes, clicks, and page views with other people’s (user-user), or they compare the items you like with other items to see what buyers of those items also purchased (item-item).
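To make the item-item idea concrete, here is a minimal sketch in Python (the food names, users, and ratings are invented for illustration): each item becomes a vector of the ratings users gave it, and the system recommends the items most similar, by cosine similarity, to one a shopper liked.

```python
from math import sqrt

# Hypothetical ratings: user -> {item: rating on a 1-5 scale}
ratings = {
    "alice": {"filet_mignon": 5, "short_ribs": 5, "tofu_rolls": 1},
    "bob":   {"filet_mignon": 4, "short_ribs": 5},
    "carol": {"filet_mignon": 1, "tofu_rolls": 5, "edamame": 4},
}

# Invert to item -> {user: rating}, so each item is a sparse rating vector.
items = {}
for user, prefs in ratings.items():
    for item, r in prefs.items():
        items.setdefault(item, {})[user] = r

def cosine(a, b):
    """Cosine similarity between two sparse rating vectors."""
    dot = sum(a[u] * b[u] for u in a if u in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def most_similar(item, n=2):
    """Items most similar to `item`, ranked by cosine similarity."""
    scores = [(cosine(items[item], vec), other)
              for other, vec in items.items() if other != item]
    return [other for _, other in sorted(scores, reverse=True)[:n]]
```

In this toy data, the buyers who rated filet mignon highly also rated short ribs highly, so `most_similar("filet_mignon")` ranks short ribs first. Production systems work the same way in principle, just over millions of items and users, which is where the computation (and the east-west traffic) comes from.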


The two main problems with these approaches are that the algorithms are rigid and that tastes and preferences change, both of which lead to bad recommendations.


Dimensionality reduction is a new way to make both algorithms much more accurate. This method builds a massive matrix of people and their preferences, then assigns attributes, or dimensions, to the items in order to reduce the number of elements in the matrix.


Let’s take food for example.  A person’s matrix might show that they rated filet mignon, braised short ribs, Portobello mushrooms and edamame with sea salt very highly.  At the same time, they give low ratings to both fried chicken wings and cold tofu rolls. The dimensionality reduction then seeks to determine that person’s taste preferences: 


“But how do you find those taste dimensions? Not by asking a chef. Instead, these systems use a mathematical technique called singular value decomposition to compute the dimensions. The technique involves factoring the original giant matrix into two “taste matrices”—one that includes all the users and the 100 taste dimensions and another that includes all the foods and the 100 taste dimensions—plus a third matrix that, when multiplied by either of the other two, re-creates the original matrix.”


So in our example, the recommender might conclude that you like beef, salty things and grilled dishes, but that you dislike chicken, fried foods and vegetables.
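A toy version of this factorization can be sketched in plain Python. Rather than a full singular value decomposition, the sketch below uses the closely related trick of learning two small “taste matrices” by gradient descent; the ratings, the choice of two taste dimensions, and all parameter values are invented for illustration.

```python
import random

random.seed(0)

# Hypothetical user x food rating matrix (1-5); None marks an unknown rating.
R = [
    [5, 5, None, 1],   # likes the beef dishes, dislikes tofu
    [4, 5, 2, None],
    [None, 1, 5, 5],   # the opposite taste
    [1, None, 4, 5],
]
n_users, n_items, k = len(R), len(R[0]), 2  # k = number of taste dimensions

# P: users x k taste matrix; Q: foods x k taste matrix; tiny random init.
P = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
Q = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]

lr, reg = 0.02, 0.01
for _ in range(5000):                      # stochastic gradient descent
    for u in range(n_users):
        for i in range(n_items):
            if R[u][i] is None:
                continue
            err = R[u][i] - sum(P[u][d] * Q[i][d] for d in range(k))
            for d in range(k):
                P[u][d], Q[i][d] = (
                    P[u][d] + lr * (err * Q[i][d] - reg * P[u][d]),
                    Q[i][d] + lr * (err * P[u][d] - reg * Q[i][d]),
                )

def predict(u, i):
    """Multiplying the two taste matrices back together fills in the blanks."""
    return sum(P[u][d] * Q[i][d] for d in range(k))
```

After training, multiplying the two small matrices approximately re-creates the observed ratings and, more usefully, produces predictions for the `None` cells, which is exactly the “you might also like” signal.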


But the number of calculations grows dramatically with matrix size. A matrix of 250 million customers and 10 million products takes roughly 1 billion times as long to factor as a matrix of 250,000 customers and 10,000 products: each dimension grows by a factor of 1,000, and factorization cost scales roughly with the cube of that growth. And the process needs to be repeated frequently, because the accuracy of the recommendations decreases as new ratings are received.
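The billion-fold figure follows from simple arithmetic under a rough cost model for dense factorization, on the order of m × n × min(m, n) operations (real sparse solvers scale differently, but the cubic intuition matches the article’s numbers):

```python
def factor_cost(customers, products):
    """Rough dense-factorization cost model: m * n * min(m, n) operations."""
    return customers * products * min(customers, products)

small = factor_cost(250_000, 10_000)          # 250K customers, 10K products
large = factor_cost(250_000_000, 10_000_000)  # 250M customers, 10M products
ratio = large // small                        # each dimension grew 1,000x
```

Each dimension grew by 1,000, so the cost grew by 1,000³, i.e. a factor of one billion.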


Completing these large matrix calculations can spawn a lot of east-west data center traffic. And because users don’t spend much time on a given web page, data center network latency is critical to delivering recommendations in a timely manner (time is money).


Intel® Ethernet Switch Family FM6000 ICs are perfect for these types of data centers because of their pioneering layer 3 cut-through switching latency of less than 400 ns. So, the next time you get a great book recommendation, there might just be an Intel switch helping to power that suggestion.

In 2008, we launched our first single-port PCI Express* (PCIe*) silicon designed specifically for the embedded and entry-level markets. The Intel® 82574 Gigabit Ethernet Controller was an immediate and lasting success. You can find this controller on thousands of different motherboards offered by hundreds of vendors. The Intel 82574 Gigabit Ethernet Controller is used in information kiosks, medical equipment, household appliances, rail cars, and tons of other applications.

But after four years, it is starting to show its age.


We spoke to our Intel 82574 Gigabit Ethernet Controller customers and with their input came up with our newest product:


The Intel® Ethernet Controller I210.


The Intel® Ethernet Controller I210 is a fully integrated MAC/PHY in a single low-power package that supports Embedded, High-End Desktop (HEDT), Server, and MicroServer designs. The device offers a fully integrated Gigabit Ethernet (GbE) media access controller (MAC), physical layer (PHY) ports (for the I210-AT, I210-IT, and I211-AT models), and an SGMII/SerDes port (for the I210-IS model) that can be connected to an external PHY or a backplane. The Intel® Ethernet Controller I211-AT is also available in the same low-power package and is targeted at HEDT and Embedded designs.

The following are key features of the Intel® Ethernet Controller I210-AT, I210-IS, and I210-IT:

  • Small Package: 9mm x 9mm
  • PCIe v2.1 Gen1 (2.5GT/s) x1, with iSVR (integrated Switching Voltage Regulator)
  • SGMII/SerDes (I210-IS Only)
  • Platform Power Efficiency
  • IEEE 802.3az Energy Efficient Ethernet (EEE)
  • Proxy: ECMA-393 & Windows Logo for proxy offload
  • DMA Coalescing
  • Converged Platform Power Management (CPPM) Support Ready (requires platform-level tuning)
  • Advanced Features:
    • 0 to 70°C ambient temp (I210-AT)
    • -40 to 85°C industrial temp (I210-IT and I210-IS)
    • Audio-Video Bridging
    • IEEE 1588/802.1AS Precision Time Synchronization
    • IEEE 802.1Qav traffic shaper (w/SW extensions)
    • Time-based transmission
    • Jumbo Frames
    • Interrupt Moderation, VLAN support, IP checksum offload
    • Four Software Definable Pins (SDPs)
    • 4 Transmit and 4 Receive queues
    • RSS & MSI-X to lower CPU utilization in multi-core systems
    • Advanced Cable Diagnostics, Auto MDI-X
    • ECC – Error-Correcting Memory in packet buffers
  • Manageability:
    • NC-SI for greater bandwidth pass through
    • SMBus low-speed serial bus to pass network traffic
    • Flexible FW Architecture w/secure NVM update
    • MCTP over SMBus/PCIe
    • PXE and iSCSI Boot

The following are key features of the Intel® Ethernet Controller I211-AT:

  • Small Package: 9mm x 9mm
  • PCIe v2.1 Gen1 (2.5GT/s) x1, with iSVR (integrated switching voltage regulator)
  • Integrated non-volatile memory (iNVM)
  • Platform Power Efficiency
  • IEEE 802.3az Energy Efficient Ethernet (EEE)
  • Proxy: ECMA-393 & Windows Logo for proxy offload
  • Advanced Features:
    • 0 to 70°C ambient temp
    • IEEE 1588/802.1AS Precision Time Synchronization
    • Jumbo Frames
    • Interrupt Moderation, VLAN support, IP checksum offload
    • 2 Transmit and 2 Receive queues
    • RSS & MSI-X to lower CPU utilization in multi-core systems
    • Advanced Cable Diagnostics, Auto MDI-X
    • ECC – Error-Correcting Memory in packet buffer


The Intel Ethernet Controller I210 family can be used in server system configurations, such as rack-mounted or pedestal servers, in an add-on NIC or LAN on Motherboard (LOM) design, in blade servers, and in various embedded platform applications. The Intel Ethernet Controller I211-AT is also available for cost-conscious customers looking for a reduced feature set and OS support.

Until this month, hot start-up companies - one of which was recently bought for $1.2 billion - have dominated the market for software-defined network (SDN) controllers. 


But HP’s announcement of its Virtual Application Networks SDN Controller signaled a sea change.  The credibility and worldwide reach that HP brings to the market opens the door for more mainstream customers to consider SDN.


For HP and other switch manufacturers, though, the growth in the number of controllers increases the need for flexible switch architectures. 


The Intel® Seacliff Trail SDN switch reference design can give these manufacturers the flexibility for the controller diversity that is emerging in this market.


One of Seacliff Trail’s key flexibility benefits is the use of the server-class Crystal Forest platform featuring the Gladden CPU.  With this computing power, the switch can be programmed for multiple controllers in the event that the data center supports multiple vendors.


Another interesting use case is running the controller on the switch itself.  This may seem counterintuitive, since the point of SDN is to separate the controller from the hardware, but in many large data centers there will be a need for multiple or hierarchical controllers, and the ability to deploy these without adding multiple servers is an attractive alternative.


Seacliff Trail’s second key adaptability advantage is the Intel® Ethernet Switch Family FM6000 switch chip itself. These chips feature the programmable Intel® FlexPipe™ frame-processing pipeline, which a switch maker can update to support its controller as well as IP traffic.


The success of these early SDN controller companies will breed more competition. Intel is ready to offer switch platforms and reference designs that can support the changes that these new players will bring to this market.

Now that I’m finding time to focus on my passion – manageability – I’ve been able to spend some time working on some documents. This is the announcement for my most recent whitepaper.


This paper discusses the problem that occurs when a server power action (on/off/reset) momentarily drops the Ethernet link on a network port being used by the BMC/MC. Losing the link is a normal part of this process: it usually goes down once or twice during the power-up sequence, once when the platform is initialized and again when the software driver loads and initializes the network device. Each loss of link usually lasts much less than a second.


During normal operations a momentary loss of link causes no issues. However, if you have the Spanning Tree Protocol (STP) deployed in your network, even a momentary loss of physical link can cause a loss of network connectivity to the BMC for over a minute.
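The minute-plus figure is consistent with classic IEEE 802.1D spanning tree defaults, sketched below: a port that regains link must sit through the listening and learning states (15 seconds each by default) before it forwards traffic again, and an indirect topology change adds the 20-second max-age timer on top of that.

```python
# Classic IEEE 802.1D STP timer defaults, in seconds.
FORWARD_DELAY = 15   # time spent in each of the listening and learning states
MAX_AGE = 20         # how long stale topology information is retained

def stp_outage_seconds(direct_link=True):
    """Worst-case time before a re-appearing port forwards traffic again."""
    listening_plus_learning = 2 * FORWARD_DELAY   # 30 s of non-forwarding
    if direct_link:
        return listening_plus_learning
    # Indirect failures must first wait for the max-age timer to expire.
    return MAX_AGE + listening_plus_learning
```

With the link also flapping once or twice per power cycle, a 30 to 50 second STP penalty per flap easily adds up to the outages described above. (Modern alternatives such as RSTP, or edge-port settings like “portfast,” avoid most of this delay.)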


This can present some pesky problems. Say, for example, you rebooted your server to do some BIOS configuration using KVM or Serial over LAN (SoL); if the connection is out for a minute or more, it is very difficult to press the keys to enter BIOS setup from your remote management console.


Intel® Ethernet devices have a feature called ‘Critical Session’ that the BMC can configure to keep the link up during these transitions, maintaining any active connections to the BMC.


To learn how to use this feature, I invite you to read the whitepaper:




-        Patrick

Just a quick note to let you know that we’ll be shutting down the Intel® Ethernet Twitter handle (@IntelEthernet) this week.


Does that mean you’ll no longer be able to get information on Intel Ethernet? Hardly. If you want to be in the loop for the latest and greatest, follow these two handles:

@thehevy: Brian Johnson, Intel® Ethernet solutions architect. Brian is one of our technology gurus and is responsible for many of our white papers, event speaking sessions, and customer education efforts.

@IntelITS: Intel® IT Solutions – Data Center & IT best practices, strategies, and tools from experts at Intel. That includes Intel Ethernet, of course.


Twitter was and remains a great way for you to learn about our latest products and technologies, as well as our thoughts on new developments in Ethernet and networking.


I hope you’ll continue to follow us using these new handles.


I’ll leave you with a few of my favorite pictures that we shared over the last few years.



The story in data center networking has always been about low latency. But the increasing importance of software-defined networks (SDN) and network virtualization is adding a new element to the narrative: flexibility.


That was driven home in the launch of the Arista 7150S* switch series, which is powered by the Intel® Ethernet Switch FM6000 family.  Network World* said that Arista “…lowered the latency and upped the software programmability of its switches with the introduction of the Arista 7150S series.”


Part of the reason for its increased programmability is the Intel® FlexPipe™ frame processing technology that is a key innovation in the FM6000 series.  FlexPipe has the performance to keep up with the new protocols used in SDN and is programmable to continue to evolve with network standards.


According to Arista’s press release, the 7150S is a new series of next-generation top-of-rack data center switches for SDN networks. The series features up to 64 10GbE ports, supports 40GbE ports, delivers 1.28 Tb/s of throughput, and can switch 960 million packets per second with 350 ns of latency. In addition to OpenFlow, the switch includes API hooks to third-party SDN and virtualization controllers from Arista partners.


The nature of data center traffic demands low latency, but the nature of SDN is what makes programmability important. SDN moves the control plane from the switch to an SDN controller using open communication standards such as OpenFlow; the controller has a network-wide view of data traffic and can shape it across the switches to respond to congestion problems.


OpenFlow makes the job of the switch much simpler: it only needs to examine the characteristics of incoming packets and switch them into an SDN-defined flow. It no longer needs to maintain the state of the entire network using earlier protocols such as spanning tree or TRILL. FlexPipe supports both SDN protocols and IP switching simultaneously. Its performance and programmability mean the switch is agile both in supporting today’s traffic and in tracking changes to SDN standards over time. Arista’s Martin Hull, a senior product manager, summed up this benefit in a news report:


“The real issue, says Hull, is that it takes too long for new protocols to be implemented because they are often tied very tightly to specific custom chips (ASICs) in the switches. So what Arista has created is a switch dog that can be taught new tricks as it gets old.”
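The “simpler job” described above can be sketched as a tiny flow table in Python (the field names, port labels, and entries are simplified stand-ins for real OpenFlow match structures): the switch compares packet header fields against controller-installed match rules and applies the action of the highest-priority hit, falling back to a table-miss entry that punts to the controller.

```python
# Controller-installed flow entries: (priority, match fields, action).
# A field absent from a match dict is a wildcard.
flow_table = [
    (200, {"ip_dst": "10.0.0.5", "tcp_dst": 80}, "output:port2"),
    (100, {"ip_dst": "10.0.0.5"}, "output:port3"),
    (0,   {}, "send_to_controller"),  # table-miss entry
]

def switch_packet(packet):
    """Return the action of the highest-priority matching flow entry."""
    for priority, match, action in sorted(flow_table,
                                          key=lambda e: e[0], reverse=True):
        if all(packet.get(field) == value for field, value in match.items()):
            return action
    return "drop"  # unreachable while a table-miss entry is installed
```

All of the topology-wide reasoning lives in the controller that installs the entries; the switch just matches and forwards, which is what lets a FlexPipe-style pipeline stay both fast and programmable.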

Performance for today’s networks, and flexibility for tomorrow’s networks.  That’s a great way to summarize the benefits of the FlexPipe architecture.
