
Wired Ethernet


The Intel Developer Forum (IDF) was held last week and it was an amazing collection of the brightest minds in the industry looking to push the technology envelope.

 

As in years past, when Intel® debuted its new products at IDF they were better and more powerful. But this year one announcement bucked that trend: a new product that was better and less powerful.

 

I’m talking about microservers, a new trend in computing where multiple lower-performance, lower-power processors are used as servers for a new class of computing tasks. It was one of the topics I presented at two poster chats on Tuesday and to about 60 attendees during my technical session on Wednesday.

 

The microserver initiative fits with Intel’s strategy of developing “workload-optimized” solutions.  There are a lot of computing tasks, such as memory caching, dedicated webhosting and cold storage, where the processing and I/O demands per server are light.

 

To meet these needs, we formally introduced the Intel® Atom™ C2000 at a special event one week before IDF.

 

The density of microservers on a board makes networking these new systems a challenge. The power profile of the Atom™ C2000, for example, allows data center shelves with 48 microservers per board. A standard telecom rack can hold 12 of these shelves for a total of 576 microservers per rack. That’s more network connections than many enterprise workgroups.
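
The cabling arithmetic behind that comparison is simple. Here is a quick sanity check in Python, using the shelf and rack counts from this post and assuming one network connection per microserver:

```python
# Shelf and rack counts from this post; assumes one network port per microserver.
MICROSERVERS_PER_SHELF = 48
SHELVES_PER_RACK = 12

connections_per_rack = MICROSERVERS_PER_SHELF * SHELVES_PER_RACK
print(connections_per_rack)  # 576 network endpoints in a single rack
```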

 

However, by using the Intel® Ethernet Switch FM5224 chip, all of the processors on a shelf can be internetworked so that only a few uplink connections to a top-of-rack switch are needed. This makes the rack manageable from a connectivity perspective.

 

But there’s still traffic from all of the processors that needs to be routed.  That is why we’re evolving our Open Network Platform (ONP) software-defined networking (SDN) architecture to support microservers. 

 

My colleague Recep recently wrote a post on this blog describing how SDN works, so I’ll just summarize here: SDN shifts packet-processing intelligence from the switch to a centralized controller. This reduces the complexity of network design and increases throughput.
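
For readers who want that split in concrete terms, here is a minimal conceptual sketch in plain Python (not any real controller or switch API): the switch only matches packets against rules it has been given, and anything it does not recognize is punted to the central controller that owns the forwarding policy.

```python
# Conceptual sketch only: a controller holds the forwarding policy and pushes
# match/action rules down; the switch just applies them.

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = []          # (match_fn, action) pairs installed by the controller

    def handle(self, packet):
        for match, action in self.flow_table:
            if match(packet):
                return action
        return "send_to_controller"   # unknown flow: punt to the central controller

class Controller:
    def install(self, switch, match, action):
        switch.flow_table.append((match, action))

tor = Switch("shelf-switch-0")
ctrl = Controller()
ctrl.install(tor, lambda p: p.get("dst_ip", "").startswith("10.0.1."), "forward:uplink1")

print(tor.handle({"dst_ip": "10.0.1.7"}))   # forward:uplink1
print(tor.handle({"dst_ip": "192.0.2.9"}))  # send_to_controller
```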

 

Many microservers today are sold with proprietary networking software. The issues with this are vendor lock-in and a potentially slower pace of innovation. This last point is important since many of the applications for microservers are cloud-based and that market is evolving very quickly.

 

Intel’s Open Network Platform combines the performance of our Intel Ethernet Switch FM5000/FM6000 high-volume, high-performance switching hardware with ONP software based on Wind River Linux. In addition, there are open driver APIs to eliminate vendor lock-in, plus APIs for controllers and a wide variety of third-party and OEM apps.

 

What this means for microserver OEMs is that they can bring their own unique, differentiating software to their products while integrating with an SDN controller or other app from a third party. Cost is kept to a minimum while functionality and differentiation are maximized.

 

The reception to this message at IDF13 was good, and several OEMs are already planning to develop microserver products. We’ll take a look at some of those designs in a future blog post.

I’m putting the final touches on my presentation for IDF13 (session #CLDS006) which will look at the emerging networking requirements for new server form-factors, specifically microservers and rack scale architectures.

 

Microservers contain multiple servers on a single board and are an emerging solution for some targeted data center workloads requiring a high number of low-power processors.  As these servers evolve and new CPUs are introduced (like the Intel® Atom™ C2000 processor family), CPU density is increasing.  At IDF13, I’ll talk about the emerging networking requirements in this category and how the Intel Ethernet Switch family and the Open Networking Platform address these needs.

 

Another hot topic for cloud data centers is the concept of rack scale architecture systems – which are pre-configured, high-performance data center racks that can be rapidly deployed to meet new workload requirements.  In this part of my talk, I’ll cover how the Open Networking Platform is being extended to provide efficient connectivity within these high-performance systems.

 

Here’s an outline of my presentation:

•      Data Center Trends

•      New Microserver Solutions

•      Intel® Ethernet Switch Silicon Architecture

•      Rack Scale Architecture

•      Software-defined Infrastructure

•      Example applications and proof points

 

I hope to see you at my IDF session at 3:45PM on September 11th in San Francisco. You are also invited to my poster chats on Switching for High Density Microservers from 11:30-1:00 and from 3:00-4:30 on September 10th.

 

If you are still on the fence about the value of attending IDF – or now want to register – I have included some links to the IDF website below.

 

Why Attend IDF13:

https://www-ssl.intel.com/content/www/us/en/intel-developer-forum-idf/san-francisco/2013/why-attend-idf-2013-san-francisco.html

 

Register now:

https://secure.idfregistration.com/IDF2013/

 

More information on the IDF13 keynote:

http://www.intel.com/content/www/us/en/intel-developer-forum-idf/san-francisco/2013/idf-2013-san-francisco-keynotes.html

 

Main IDF13 landing page:

https://www-ssl.intel.com/content/www/us/en/intel-developer-forum-idf/san-francisco/2013/idf-2013-san-francisco.html

 

What’s New at IDF13:

https://www-ssl.intel.com/content/www/us/en/intel-developer-forum-idf/san-francisco/2013/idf-2013-san-francisco-whats-new.html?

Ethernet never seems to stop evolving to meet the needs of the computing world. First it was a shared medium; then computers needed more bandwidth and multi-port bridges became switches. Over time, Ethernet speeds increased by four orders of magnitude. The protocol was updated for the telecom network, then for data centers.

 

Now it’s scaling down to meet the needs of microserver clusters.

 

Microserver clusters are boards with multiple low-power processors that can work together on computing tasks. They are growing in popularity to serve a market that needs fast I/O but not necessarily the highest processing performance.

 

From its many evolutions, Ethernet has the right combination of bandwidth, routing, and latency to be the networking foundation for the microserver cluster application.

 

Bandwidth: For certain workloads, congestion can occur and reduce system performance when processors are connected using 1GbE, so the preferred speed is 2.5GbE. If you’ve never heard of 2.5GbE, it’s because it was derived from the XAUI spec, but uses only a single XAUI lane. The XAUI standard was created so that four lanes of XAUI could carry 10GbE signals from chip to chip over distances longer than the alternative (which capped out at 7 cm). XAUI specifies 3.125 Gbps per lane, which allows for encoding overhead and a 2.5 Gbps full-duplex data path. XAUI had a claim to fame in 2005, when it became a leading technology for improving backplane speeds in ATCA chassis designs. A single lane of XAUI is the perfect technology for a microserver cluster.
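
The 2.5 Gbps figure falls straight out of the XAUI line rate and its 8b/10b encoding; a quick check:

```python
# XAUI runs 8b/10b line coding, so a 3.125 Gbaud lane carries 2.5 Gbps of data;
# four lanes together carry a full 10GbE stream.
lane_rate_gbaud = 3.125
encoding_efficiency = 8 / 10          # 8b/10b coding: 8 data bits per 10 line bits
data_rate_gbps = lane_rate_gbaud * encoding_efficiency

print(data_rate_gbps)        # 2.5 Gbps per lane
print(data_rate_gbps * 4)    # 10.0 Gbps across four lanes
```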

 

Density: Density and performance are key attributes for microserver clusters. Bandwidth is related to the total throughput of the switch: the packet-forwarding engine must be robust enough to switch all ports at full speed. For example, the new Intel® Ethernet Switch FM5224 has up to 64 ports of 2.5GbE plus another 80 Gbps of inbound/outbound bandwidth (either eight 10GbE ports or two 40GbE ports). Thus the FM5224 packet-processing engine handles 360 million packets per second to provide non-blocking throughput.
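
As a rough check of where the 360 million packets-per-second figure comes from, assume minimum-size 64-byte frames plus the standard 8-byte preamble and 12-byte inter-frame gap:

```python
# Aggregate switch bandwidth from the port counts in this post.
downlink_gbps = 64 * 2.5              # 64 ports of 2.5GbE
uplink_gbps = 80                      # eight 10GbE or two 40GbE uplinks
total_bps = (downlink_gbps + uplink_gbps) * 1e9

# Minimum-size Ethernet frame on the wire: 64B frame + 8B preamble + 12B inter-frame gap.
bits_per_min_frame = (64 + 8 + 12) * 8

print(total_bps / bits_per_min_frame / 1e6)   # ~357 Mpps, in line with the ~360 Mpps spec
```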

 

Distributed Switching: Some proprietary microserver cluster technologies advocate layer 2 switching only, perhaps under the assumption that the limited number of devices on a shelf eliminates the need for layer 3 switching. But traffic will need to exit the shelf – maybe even to be processed by another shelf – so in that model the shelf would depend on an outside top-of-rack (TOR) switch to route traffic between shelves or back out onto the Internet. This would change the nature of a TOR switch, which today is a “pizza box” style switch with 48 to 72 ports. With an average of 40 servers per shelf, a TOR would need 400 or more ports to connect all of the shelves. Instead, Ethernet routing can be placed in every cluster (microserver shelf) to provide advanced features such as load balancing and network overlay tunneling while reducing the dependency on the TOR switch.
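
To put numbers on that, here is a rough port-count comparison; the 12 shelves per rack and two uplinks per shelf are illustrative assumptions, not product specs:

```python
SERVERS_PER_SHELF = 40        # average figure from this post
SHELVES_PER_RACK = 12         # assumption for illustration
UPLINKS_PER_SHELF = 2         # assumption for illustration

flat_l2_tor_ports = SERVERS_PER_SHELF * SHELVES_PER_RACK        # every server needs a TOR port
routed_shelf_tor_ports = UPLINKS_PER_SHELF * SHELVES_PER_RACK   # only shelf uplinks reach the TOR

print(flat_l2_tor_ports)       # 480: well beyond a 48-72 port pizza-box TOR
print(routed_shelf_tor_ports)  # 24: fits a standard TOR with room to spare
```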

 

Latency: To be considered for high-performance data center applications, Ethernet vendors needed to reduce latency from microseconds to nanoseconds (Intel led this industry effort and is the low-latency leader at 400 ns). That work is paying dividends in the microserver cluster. Low latency means better performance with small packet transmissions and also with storage-related data transfers. For certain high-performance workloads, the processors on microserver clusters must communicate constantly with each other, making low latency essential.

 

With the perfect combination of bandwidth, routing and latency, Ethernet is the right technology for microserver networks.  Check out our product page to take a look at the Intel Ethernet Switch FM5224 that is built just for this application.

With the growing popularity of micro servers, networking has moved from connecting computers on desktops or servers in a rack to connecting processors on a board. 

 

That’s a new way of thinking about networking that takes a new kind of switch chip, which is why we’ve recently introduced the Intel® Ethernet Switch FM5224. To meet these new market needs takes a device that features a new design, mixed with legacy networking strengths.

 

What’s New

Micro servers are part of an emerging computing platform architecture that includes many low power processor modules in a single enclosure.  The micro server operating system parcels out computing tasks to the various processors and coordinates their work. For certain distributed data center workloads, there is a lot of interest in this approach.

 

However, designing a dense micro server cluster calls for a significant uplink data bandwidth combined with high port count interconnectivity between the processor modules.  Enter the new FM5224, which we call a high port-count Ethernet switch, to meet these needs.

 

The device can support up to 64 nonblocking ports of 2.5GbE along with up to eight 10GbE uplink ports (or two 40GbE ports). 

 

Why 2.5GbE?  This speed was popularized by blade server systems, but never made an official Ethernet standard. Our analysis of the bandwidth needs of micro servers shows that many workloads need more than 1 GbE per server module, which makes our 2.5GbE switch ports ideal for this application.

 

What’s the Same

While micro servers are new, they still communicate using Ethernet.  The FM5224 is built using Intel’s Alta switch architecture, which brings the benefit of some advanced features for micro server applications.

 

The I/O-heavy nature of micro servers makes non-blocking, low latency performance very important. The FM5224 is built with Intel’s FlexPipe packet processing technology, which delivers a 360 million packets-per-second forwarding rate. The device also offers less than 400 ns of latency, independent of packet size or features enabled.

 

Combined, this performance makes it possible for each processor to pass data at wire rate, even small packets, which are expected to make up most of the data passed between processors. In addition, the FM5224 has excellent load distribution features that can be used to efficiently spread the workload across multiple micro server modules.

 

For the micro server chassis uplinks, OEMs have their choice of four 10GbE or two 40GbE ports that can directly drive direct-attach copper cables up to 7 meters without the need for an external PHY.

 

With the FM5224, OEMs have a tremendously flexible chip that is fine tuned for micro server applications.

 

The Intel® Developer Forum in Beijing took place two weeks ago and the interest in my two SDN-related presentations was very high.

 

My poster chat drew about 40 people who stopped by in groups of 6-8 to hear a high-level overview of the Intel SDN story. The attendance more than doubled for my conference session, where I went a bit deeper into Intel’s new data center and telecom network transformation initiative – giving a preview of the three product announcements made at the Open Networking Summit.

 

One challenge that is unique to China is scaling new web services for a potential market of 1.3 billion people – almost four times that of the U.S.  There were a lot of questions on this topic from top service providers, which I took to indicate that scaling is very important.

 

The other difference I noticed is that with less of a legacy network infrastructure than in the U.S., Chinese network managers are very open to trying new things to get the scalability and performance they need to deliver great service levels.

 

One key element of my presentation was a deep dive into why new networking platforms, like the ONP introduced at ONS, are so necessary to advancing the state of the art of SDN and to providing ease of scaling in these high-performance data centers.

 

As the server virtualization trend expanded into network virtualization, building high-performance, low-latency networks became much more complex for enterprises and data center operators.  New IP protocols like TRILL helped, but maintaining server/network coherency became very labor intensive.

 

To network managers dealing with this challenge, the SDN promise of separating out the network control plane into a central network controller was an immediate solution to a nagging problem. And first-generation networking products delivered on this promise by layering SDN onto existing switches.

 

But the promise of SDN is much bigger than that; it’s nothing short of opening networks to a wave of innovation around new software functionality along with additional network cost-per-bit reductions. That’s the total story that ONP delivers on.

 

The potential of SDN for network innovation mirrors the transition from proprietary minicomputers to the PC, which spawned countless innovations thanks to its combination of standard processors, operating systems and value-added applications.

 

In the network version of this story, enterprises evolve from vertically integrated networking platforms that are closed and slow to innovate, to a more open system with standardized switch silicon that has an open API to the control plane (or control planes for specialized applications).  These control planes then communicate through another API with apps running on a virtual server. 

 

This means that a network that had to be architected around special appliances to do packet inspection or provide security can now have those applications running on a high-performance server. The global controller will know what packets need to be processed by that application and will direct them to the application before forwarding them to their destination.

 

This architecture breaks down many of the barriers to entry in this market.  For new players, all they need are their software skills to develop their application. They can sell it into any network that supports the open API – regardless of the manufacturer.  On the other side of the coin, an existing software company can use standard hardware to easily develop its own complete solution, speeding time to market.

 

For every company that needs to scale quickly and keep network costs and complexity low – especially in fast growing economies like China – this is really good news.

Later this week, I’ll board a 13-hour flight from Los Angeles to Beijing to take part in the Intel® Developer Forum on April 10-11.

 

If you are going to the event and are interested in what Intel is doing in the data center and connected systems market, I recommend that you first go hear our General Manager Diane Bryant give her keynote talk about the future of our business, on April 10 between 9 and 11:00am.

 

Then, you can hear me talk at two times during the conference: at my Poster Chat on April 10 at 2pm, and at my April 11 session presentation at 3:45pm (where I will pair up with Shashi Gowda of our Wind River Systems division).

 

In both talks, I’m going to be sharing how Intel sees the future of the software defined network (SDN) market and what product plans are in place to help OEMs participating in this market.

 

In this blog post, I’ll touch on my poster chat, and next week, I’ll provide an overview of the session presentation.  If you’ve never been to a poster chat, it is exactly what the words say – I have created a large poster and I describe it and answer any questions that come up.

 

My poster for IDF Beijing covers the following topics:

  • The evolution from traditional IP networks to SDN networks and the advantages that come from that.
  • A description of the Intel Ethernet Switch FM6000 functionality.  Here I will talk about how we get low latency and discuss our Seacliff Trail 48-port 10GbE/40GbE Ethernet switch reference design.
  • From there, I’ll go into a discussion of our software architecture that starts with APIs to open the FM6000 to SDN controllers, and contains the operating system and other software components necessary to fully implement SDN switching.
  • Then, I want to dig deeply into our FlexPipe frame forwarding architecture, which is built with advanced frame header processing that makes it flexible for an evolving standard like SDN.

 

It’s a lot to talk about in an hour, but I’m looking forward to providing a high-level overview that I can then explore further in my workshop.  More on that next week.

The success of Arista Networks* is proof that there’s always room for an innovative start up – even in markets dominated by large players that execute well. But great innovation often requires market disruption to gain a foothold with customers.

 

In a recent interview with Network World, Arista President and CEO Jayshree Ullal talked about the market trends that helped Arista take off.  She said:

 

“Arista saw three disruptions in the market: a hardware disruption; a software disruption; and a customer buying disruption, which in my mind is the most important thing.”

 

Two of these trends are interesting to me because we’ve been participating in them.  First, the hardware disruption she mentions is the rise of merchant network switch silicon that has performance and features comparable to ASIC switches.

 

Our Intel® Ethernet switch family is pioneering these merchant switch devices.  We not only provide throughput that is equal to or better than that of an ASIC, but our layer 3 latency is the industry’s lowest as well.

 

With a merchant switch chip providing competitive throughput and features, Arista didn’t need to spend the large amount of resources required for an ASIC development program, unlike some of its larger competitors.

 

Instead, they were able to differentiate themselves with software – the second disruption on Jayshree’s list.  Arista developed its own operating system – the Extensible Operating System – leveraging Linux as the foundation.

 

Our Intel Ethernet switch FM6000 series silicon contributes to software innovation through its programmable FlexPipe™ frame processing technology.  FlexPipe’s configurable microcode allows switch manufacturers to update features or support new standards even on systems that are already in the field.

 

In order for our customers to evaluate the advanced FM6000 series features, we also provide our Seacliff Trail reference design, which has a Crystal Forest-based control plane processor on board. Crystal Forest can be used as a standard control plane processor, as an SDN controller host, or even to experiment with Intel’s Data Plane Development Kit (DPDK™).

 

It’s been great to have played a role in the market changes that have given Arista – and other companies – a chance to launch and to flourish.  Viva la Disruption!

I am pleased to announce the release of an SDK update for the Intel® Ethernet Switch FM6000 Family that adds support for several advanced data center standards.

 

The key new features in SDK version 3.3.0 include better support for network virtualization, improved network reliability and precision time stamping for data center latency measurement.  Here are some more details about some of these new features:

 

VxLAN Support: Large cloud data centers are hosting virtual networks for each tenant and now need to support tens of thousands or more of these tenants. Traditionally, these tenants were logically separated using unique VLAN identifiers, but with only 4,096 VLANs available, new methods are needed. VxLAN is a new protocol championed by VMware* and Cisco* among others that provides encapsulation (tunneling) for millions of tenants while also providing increased virtual network flexibility.
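
A short sketch of why the encapsulation scales so much further than VLANs: the VXLAN header carries a 24-bit VXLAN Network Identifier (VNI) rather than a 12-bit VLAN ID. The header packing below follows the published VXLAN format, but it is an illustration only, not production code:

```python
import struct

print(2 ** 12)   # 4,096 possible VLAN IDs
print(2 ** 24)   # 16,777,216 possible VXLAN segments

def vxlan_header(vni: int) -> bytes:
    """8-byte VXLAN header: flags byte with the I bit set, reserved bytes,
    and the 24-bit VNI in the upper bits of the final 32-bit word."""
    return struct.pack("!B3xI", 0x08, vni << 8)

print(vxlan_header(5000).hex())   # 0800000000138800 -> VNI 5000
```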

 

Edge Virtual Bridging (EVB) Support using VEPA: Server virtualization is improving data center efficiency, but it needs the cooperation of the top-of-rack switch to properly interconnect all of the virtual machines using the same sets of rules that are used elsewhere in the network. The virtual Ethernet port aggregator (VEPA) standard utilizes the rich set of resources available in the Ethernet bridges attached to the servers to redirect all traffic (including local VM-to-VM traffic) to the correct attached bridge.

 

TRILL Support: One of the changes needed for Ethernet to really work in the data center was the replacement of the spanning tree protocol, which ensured loop-free networks but did so by blocking redundant links, resulting in wasted bandwidth.  The successor protocol, Transparent Interconnection of Lots of Links (TRILL), gets around the limitations of spanning tree. It establishes loop-free multi-link connections between RBridges (TRILL-capable switches) using a special encapsulation protocol.

 

Time Stamping Support:  Time stamps can now be added to data packets within 10 ns of when they ingress or egress the FM6000 switch. This allows attached FPGAs or CPUs to access information on precisely when packets enter or leave the switch. This can be used in applications such as the IEEE 1588 Precision Time Protocol, which can distribute master clock time signals throughout the network, or to measure latency within a data center network.
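
Conceptually, latency measurement with these stamps is just a per-packet subtraction; the sketch below is illustrative only and does not reflect the actual SDK data structures:

```python
# Hypothetical per-packet hardware timestamps, in nanoseconds.
ingress_ns = {"pkt-42": 1_000_000_120}   # stamped when the packet entered the switch
egress_ns = {"pkt-42": 1_000_000_510}    # stamped when it left

for pkt_id, t_in in ingress_ns.items():
    print(pkt_id, egress_ns[pkt_id] - t_in, "ns")   # pkt-42 390 ns of switch latency
```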

 

Technology standards are evolving rapidly to keep up with the needs of data centers. We want to stay ahead of the technology curve, and with this SDK update, the Intel Ethernet Switch FM6000 Family of switches offers one of the most comprehensive data center feature sets available.

What do hero pilot Sully Sullenberger, humorist Dave Barry and software-defined networking have in common? They all packed the house at the 2012 Gartner Data Center Summit event that I attended earlier this month. While Sullenberger talked about leadership and Barry kept it funny, Gartner analysts presented research showing that a rethinking of data center infrastructure and operations can lead to dramatically reduced costs. Part of that rethinking includes adopting SDN.

              

That got a lot of data center managers thinking and asking questions about what SDN is and what it can do.  At least that’s the response I saw as we staffed the Intel® booth at the summit solution showcase.  We were there showing our Seacliff Trail (SCT) 10 Gbps/40 Gbps top-of-rack switch reference design along with our 10G Ethernet converged network adapters. You can read more about the SCT reference design here, which is based on the Intel Ethernet Switch FM6700 series.

 

The FM6700 series provides up to 72 10GbE ports or up to 18 40GbE ports and can forward frames at 960 Mpps, while maintaining L3 latencies of around 400 ns under all conditions. This product line is part of our FM6000 family, which continues our history of providing Ethernet switching silicon optimized for the data center. The 6700 series has been enhanced with advanced features for SDN such as large flow tables and support for VxLAN and NVGRE tunneling.

 

The data center managers I spoke with had heard Gartner’s message about reducing cost and improving network efficiency and had a lot of questions about how to turn the theory into action. This is another sign of the extreme excitement around SDN, and it was nice to see that many were becoming aware of Intel’s commitment to providing advanced SDN-enabled components.

 

Gartner itself is famous for its “hype cycle,” a graph that tracks the hype of a product over its lifecycle.  Exciting products emerge from a “technology trigger” and rise to the “peak of inflated expectations” before dropping in the “trough of disillusionment,” then emerging into the upward “slope of enlightenment.” In Gartner’s model, it's only after the products emerge from the trough that the market becomes real.

 

I’m not sure where SDN is along that curve, but after a few days at the summit it sure felt like the attendees were seeking enlightenment for how they could apply SDN in their data centers.

Data center traffic is poised for a six-fold increase over the next four years, reaching 6.6 zettabytes. To know how this will impact data center infrastructure, though, means better understanding what types of data are growing fastest.

 

In the latest Cisco* Global Cloud Index report, we learn that almost two-thirds of that 6.6 zettabytes is cloud computing traffic.  But even more interesting is that 76% of data center traffic will stay within the data center.  This is the so-called “east-west” data traffic that is the result of data exchanges and requests between servers or between servers and storage.

 

Why so much east-west traffic? The Cisco report does not break down the details, but we can surmise that this comes from applications such as web transaction processing, recommender systems, cloud clustering services and big data analytics. 

 

The response time for these applications can be impacted by network latency, which means low-latency switches (like our Intel® Ethernet Switch FM6000 family) will play a key role in the data centers that will be built to support this data explosion.

It used to be said that low-latency networks weren’t needed for the data centers that ran big e-commerce or social media sites: most people were willing to wait an extra few microseconds for the latest update on their friends, and the network itself wasn’t a gating factor in performance.

 

Product recommendation technology, which powers the “you might also like” messages on e-commerce sites, is changing that, however. New “recommender” systems require increased computing performance to factor more data into their recommendations. And they need to do all of this in the time it takes a webpage to load.

 

An article in the October 2012 issue of IEEE Spectrum by professors, and recommender system pioneers, Joseph A. Konstan and John Riedl chronicles the evolution of the technology and its dramatic impact on e-commerce sales.

 

The most popular recommender systems use either user-user or item-item algorithms – that is, they compare your purchases, likes, clicks and page views with those of other people (user-user), or they compare the items you like with other items to see what buyers of those items also purchased (item-item).

 

The two main problems with these approaches are that the algorithms are rigid and that tastes and preferences change, both of which lead to bad recommendations.

 

Dimensionality reduction is a new way to make both algorithms much more accurate.  This method builds a massive matrix of people and their preferences. Then it assigns attributes or dimensions to these items to reduce the number of elements in the matrix.

 

Let’s take food for example.  A person’s matrix might show that they rated filet mignon, braised short ribs, Portobello mushrooms and edamame with sea salt very highly.  At the same time, they give low ratings to both fried chicken wings and cold tofu rolls. The dimensionality reduction then seeks to determine that person’s taste preferences: 

 

“But how do you find those taste dimensions? Not by asking a chef. Instead, these systems use a mathematical technique called singular value decomposition to compute the dimensions. The technique involves factoring the original giant matrix into two “taste matrices”—one that includes all the users and the 100 taste dimensions and another that includes all the foods and the 100 taste dimensions—plus a third matrix that, when multiplied by either of the other two, re-creates the original matrix.”

 

So in our example, the recommender might conclude that you like beef, salty things and grilled dishes, but that you dislike chicken, fried foods and vegetables.
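
To make that concrete, here is a toy NumPy sketch of the same idea: factor a small, made-up user-by-food ratings matrix with singular value decomposition, keep only two taste dimensions, and use the low-rank reconstruction as the estimated preference for every user/item pair.

```python
import numpy as np

# Made-up ratings: rows are users, columns are foods.
ratings = np.array([
    [5, 4, 5, 1, 1],    # user 0: likes the first three foods
    [4, 5, 4, 2, 1],
    [1, 2, 1, 5, 4],    # user 2: the opposite taste
    [2, 1, 1, 4, 5],
], dtype=float)

U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
k = 2                                        # keep two "taste" dimensions
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

print(np.round(approx, 1))                   # low-rank estimate of every user/item score
```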

 

But the number of calculations grows dramatically as the matrices grow in size.  A matrix of 250 million customers and 10 million products takes 1 billion times as long to factor as a matrix of 250,000 customers and 10,000 products.  And the process needs to be repeated frequently as the accuracy of the recommendations decreases as new ratings are received.

 

This can spawn a lot of east-west data center traffic, which is needed to complete these large matrix calculations. Because users don’t spend much time on a given web page, the data center network latency is critical to providing recommendations in a timely manner (time is money).

 

Intel® Ethernet Switch Family FM6000 ICs are perfect for these types of data centers because of their pioneering low layer 3 cut-through switching latency of less than 400 ns.  So, the next time you get a great book recommendation, there might just be an Intel switch helping to power that suggestion.

The story in data center networking has always been about low latency. But the increasing importance of software-defined networks (SDN) and network virtualization is adding a new element to the narrative: flexibility.

 

That was driven home in the launch of the Arista 7150S* switch series, which is powered by the Intel® Ethernet Switch FM6000 family.  Network World* said that Arista “…lowered the latency and upped the software programmability of its switches with the introduction of the Arista 7150S series.”

 

Part of the reason for its increased programmability is the Intel® FlexPipe™ frame processing technology that is a key innovation in the FM6000 series.  FlexPipe has the performance to keep up with the new protocols used in SDN and is programmable to continue to evolve with network standards.

 

According to Arista’s press release, the 7150S is a new series of next-generation top-of-rack data center switches for SDN networks. The series features up to 64 10GbE ports along with 40GbE support, delivers 1.28 Tb/s of throughput, and can switch 960 million packets per second with 350 ns of latency. In addition to OpenFlow, the switch includes API hooks to other third-party SDN and virtualization controllers from Arista partners.

 

The nature of data center traffic demands low latency, but the nature of SDN is where programmability becomes important. SDN moves the control plane from the switch to an SDN controller, using open communication standards such as OpenFlow; the controller can better see data traffic and shape it across the switches to respond to congestion problems.

 

OpenFlow makes the job of the switch much simpler as it only needs to examine the characteristics of the incoming packets and switch them into an SDN-defined flow. It no longer needs to maintain the state of the entire network using earlier protocols such as spanning tree or TRILL. FlexPipe supports both SDN protocols and IP switching simultaneously.  Its performance and programmability mean that the switch is agile in both supporting today’s traffic and changes to SDN standards over time.  Arista’s Martin Hull, a senior product manager, summed up this benefit in a news report:

 

“The real issue, says Hull, is that it takes too long for new protocols to be implemented because they are often tied very tightly to specific custom chips (ASICs) in the switches. So what Arista has created is a switch dog that can be taught new tricks as it gets old.”

Performance for today’s networks, and flexibility for tomorrow’s networks.  That’s a great way to summarize the benefits of the FlexPipe architecture.

Last week at IDF, we took the wraps off of two exciting new products that OEMs and ODMs can use to develop switch systems for the emerging market for software-defined networks (SDN) in the data center.

 

At the chip level, we launched the new Intel® Ethernet Switch FM6700 series, which is a 10G/40G SDN-optimized switch family that provides up to 64 10GbE or up to 16 40GbE ports. 

 

The FM6700 series can support both SDN and legacy networks.  Thus it can be used in top-of-rack switch SDN applications in the data center, or in network appliances or video distribution switches (thanks to its built in load balancing features).

 

For all applications, the switch features a pioneering low-latency architecture, built on the programmable Intel® FlexPipe™ frame-processing pipeline and a single output-queued shared memory architecture.  Both of these technologies combine to deliver highly deterministic packet forwarding with a maximum layer 3 latency of about 400 ns.

 

The switch supports NAT and IP tunneling features for use in both IP and SDN applications.  For the SDN networks, the FlexPipe frame processor can be used to parse and process SDN packets.  The switch also supports 4,000 complete OpenFlow 12-tuple table entries that can be searched in a single pass for added performance.  There are also flexible tagging and tunneling options, including the ability to provide both an SDN and tunneling proxy for connected hosts.
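
For reference, the 12-tuple in question is the classic OpenFlow 1.0 match. The sketch below simply enumerates those fields in a Python dataclass, with None standing in for a wildcarded field; it illustrates the match structure, not the switch’s actual API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OpenFlow12Tuple:
    """The 12 match fields of an OpenFlow 1.0 flow entry; None means wildcard."""
    in_port: Optional[int] = None
    eth_src: Optional[str] = None
    eth_dst: Optional[str] = None
    eth_type: Optional[int] = None
    vlan_id: Optional[int] = None
    vlan_priority: Optional[int] = None
    ip_src: Optional[str] = None
    ip_dst: Optional[str] = None
    ip_proto: Optional[int] = None
    ip_tos: Optional[int] = None
    l4_src_port: Optional[int] = None
    l4_dst_port: Optional[int] = None

# Example entry: match HTTP traffic to one host, wildcarding everything else.
entry = OpenFlow12Tuple(eth_type=0x0800, ip_dst="10.1.2.3", ip_proto=6, l4_dst_port=80)
print(entry)
```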

 

At the platform level, we’ve introduced Seacliff Trail, a top-of-rack switch network reference platform for OEMs and ODMs that is based on the FM6764.  It offers 48 SFP+ 10GbE ports and four QSFP+ 40GbE ports, and can drive up to 7m of direct-attach copper without the need for additional PHY chips on the board.

 

It’s an all-Intel platform as well, with a control plane based on the Crystal Forest AMC module that features an Intel® Xeon® processor, an Intel communications chipset and an Intel® 82599 10GbE controller. Intel’s Wind River subsidiary provides the open and extensible software framework based on its Linux OS.  This provides both easy SDN integration and direct API access for adding third-party apps for rapid innovation.

 

Seacliff Trail is a major step forward in fulfilling Intel’s SDN vision of the next-generation of networking.  That vision combines standardized, high-volume hardware with an open and extensible software framework that allows OEMS/ODMs to add their own value-added functionality.

 

Last week we had good crowds coming to see these products at our IDF booth, along with two sessions where we presented both the FM6700 series’ SDN features and its server load balancing features.

The Intel Developer Forum is this week and the excitement is high. We’ve talked a lot about the Intel® Switch and Router Division’s solutions for software-defined networks (SDN), but here’s a chance for IDF attendees to see them first-hand and talk to some of the brains behind the technology. 

 

Why SDN? As the enterprise data center has evolved into the virtualized cloud data center, the supporting networks have grown increasingly complex. In many cases, data center designers are bound to a single vendor because there is no interoperability for the advanced functionality required. SDN promises to provide the needed management without the vendor lock-in.  SDN orchestrates the network from an independent software controller, which allows data center operators to pick the best networking equipment for each part of the network.

 

If you are going to IDF and want to hear more about the Intel SDN story, stop by our booth (#1122) to see our new SDN-enabled Intel Ethernet FM6764 switch silicon first hand in a top-of-rack switch reference design code named Seacliff Trail.  Also, you can join us at the following presentations to learn more:

 

Enabling Cloud Networks with Software Defined Networking: As part of the Cloud Computing Evolution of the Data Center track at IDF you can hear from our own Mike Zeile on what switches need to deliver to enable the next generation of cloud network fabric using SDN.  This presentation will be on Sept. 13 at 10:15 a.m.

 

Server Load Balancing in the ToR Switch Using Intel Ethernet Switch Silicon: Server load balancing is critical in the modern data center to help distribute heavy loads among multiple servers to achieve faster response times.  In this presentation, SRD’s Oscar Ham will talk about the server load balancing features that are built into the Intel Ethernet Switch FM6000 chips and what that means for networks. This presentation will be on Sept. 13 at noon.

 

Poster Chat: And finally, you can come talk to me at a poster chat on how SDN and server load balancing enables cloud networks.  I’ll be at poster chat station #7 at 11:45 am and 3:00 pm on Sept. 12.

 

IDF promises to be a great show and a great opportunity to show just how Intel can help networking manufacturers to implement SDN in systems they sell to next-generation data centers.  And stay tuned here as well for more information on the new products that we’ll launch at IDF.

In my last blog post, I discussed virtualized network protocols NVGRE and VXLAN – two essential components in data centers that are transforming into virtualized environments. 

 

Another important component is balancing the traffic on each virtual server to optimize response time and overall resource loading.  Many data centers have installed expensive load balancing appliances in the network, and their operators are often surprised to find that many of these same features are built into our Intel® Ethernet Switch FM6000 Series products.  Here’s a little bit more about how it works.

 

Load balancing in the FM6000 Series architecture is done using advanced symmetric hashing mechanisms along with network address translation (NAT) to convert the IP address of the load balancer to the IP address of the virtual machine (VM) after determining the optimal virtual machine (or virtual service) to process the request.  After the transaction is processed by the VM, the load balancer modifies the source IP address to its own address so that the client sees it as a single, monolithic server. 
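
A behavioral sketch of that hash-plus-NAT flow (plain Python, not the FM6000 microcode; the addresses and field names are made up): hash the flow identity to pick a backend VM, rewrite the destination address on the way in, and rewrite the source address back to the virtual IP on the way out so the client only ever sees one server.

```python
import hashlib

VIP = "203.0.113.10"                         # the load balancer's virtual IP
BACKENDS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]   # virtual machines behind it

def pick_backend(src_ip, src_port, dst_ip, dst_port, proto):
    # Hash the flow identity so a given flow always lands on the same VM.
    flow = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
    digest = hashlib.sha256(flow).digest()
    return BACKENDS[int.from_bytes(digest[:4], "big") % len(BACKENDS)]

def nat_inbound(pkt):
    # Client -> VIP: rewrite the destination to the chosen VM.
    pkt["dst_ip"] = pick_backend(pkt["src_ip"], pkt["src_port"],
                                 pkt["dst_ip"], pkt["dst_port"], pkt["proto"])
    return pkt

def nat_outbound(pkt):
    # VM -> client: the reply appears to come from the load balancer.
    pkt["src_ip"] = VIP
    return pkt

req = {"src_ip": "198.51.100.7", "src_port": 51515,
       "dst_ip": VIP, "dst_port": 443, "proto": "tcp"}
print(nat_inbound(dict(req)))                # same flow always hashes to the same VM
```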

 

The FM6000 also provides fine-grain bandwidth allocation and fail-over mechanisms to each egress port using a flexible hash-based load distribution architecture. This avoids round-robin service distribution schemes, which may be less than optimal, and provides the ability to monitor the health of VMs and virtual services, so that failed ones can be quickly removed from the resource pool.  These switches also come with connection persistence intelligence to know when not to load balance, as in the case of FTP requests that must stay connected to the same virtual service.

 

Some other load balancing functionality built into the switches includes:

 

Network Security:  The frame filtering and forwarding unit (FFU) inside FM6000 Series can be used for network security, in addition to frame forwarding. It can be configured using bit masks to read any part of the L2/L3/L4 header. If there is a match, the switch can route, deny, modify, count, log, change VLAN or change priority of the packet to protect the network.  The switch can also use access control lists to prevent denial of service attacks and other security violations.
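
Conceptually, each FFU rule is a value/mask comparison against arbitrary header bytes plus an action. The sketch below is illustrative only; the offsets assume a plain Ethernet II + IPv4 frame, and the rule format is not the real FFU configuration interface:

```python
RULES = [
    # (offset, mask, value, action) -- offsets into the raw frame bytes
    (23, 0xFF, 0x06, "count"),        # IPv4 protocol byte == TCP: count it
    (30, 0xFF, 0x0A, "deny"),         # destination IP starts with 10.x: drop it
]

def apply_ffu(frame: bytes) -> str:
    for offset, mask, value, action in RULES:
        if offset < len(frame) and (frame[offset] & mask) == value:
            return action
    return "forward"                   # default: normal forwarding

# A fake 64-byte frame whose destination IP begins with 10 at the usual IPv4 offset.
frame = bytearray(64)
frame[30] = 0x0A
print(apply_ffu(bytes(frame)))         # deny
```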

 

Performance: The FM6000 series switches are the lowest-latency switches on the market, which means they can connect to the network, to servers and to storage arrays with real-time performance. In addition, their extremely low L3 latency means that the load balancing and NAT functions act as a “bump on the wire,” minimizing the impact on network performance compared to coupling a ToR switch with a discrete load balancer.

 

Fail Over: FM6000 series chips use a link mask table to determine how to distribute the load across multiple egress ports. They also contain several mechanisms to detect link failure such as loss-of-signal (LOS) or CRC errors. As the packet header is processed, the forwarding unit resolves to the address of a pointer, which points to an entry in the mask table. If a link or connected device fails, this pointer can be quickly changed by software so that the failing link is no longer part of the load distribution group. Since distribution is flow based, only flows to the failed device will be affected.
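
The mask-table mechanism can be sketched as a small indirection table: the flow hash picks a slot, the slot names an egress port, and failover rewrites only the slots that referenced the failed link. The values below are illustrative, not the actual table format:

```python
TABLE_SIZE = 8
link_table = [1, 2, 3, 4, 1, 2, 3, 4]     # distribution group spread over ports 1-4

def egress_port(flow_hash: int) -> int:
    return link_table[flow_hash % TABLE_SIZE]

print(egress_port(0x9bd3))                # this flow always resolves to the same slot

# Port 3's link fails: software rewrites only the slots that pointed at it.
failed, spare = 3, 1
link_table[:] = [spare if p == failed else p for p in link_table]

print(link_table)                         # [1, 2, 1, 4, 1, 2, 1, 4]
print(egress_port(0x9bd3))                # unchanged: this flow wasn't on the failed port
```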

 

As you can see, the FM6000 Series switches have full-featured, low latency load balancing capabilities, another feature that makes them the ideal solution for top-of-rack switch systems.
