
Wired Ethernet


Software-defined networking (SDN) is a major revolution in networking that is just starting to move from bleeding edge to leading edge customer adoption. 

 

But already, Intel® is starting to think about what comes next and how the software-defined model can become more pervasive in cloud-scale data centers. A key factor in those plans is the Open Networking Platform (ONP) that we announced in April.

 

That was my takeaway from the announcement about Intel’s cloud infrastructure data center strategy.  If you read the press release and the related presentations and watch the videos, you will see that the emphasis is on the strategy and several new microprocessors, including the Avoton and Rangely 22nm Atom processors, and the 14nm Broadwell SoC.

 

I want to unpack a bit about how the ONP fits in this next-generation data center strategy.  The architecture of the next-generation cloud infrastructure data center is built on three technology pillars:

  • Workload-optimized technologies: Examples here include deploying servers with different CPU, memory and I/O capabilities based on the workload.
  • Composable resources: Moving from building server racks out of discrete servers, networking equipment, etc., to deploying a more integrated solution. Intel is making strides here with its Rack Scale Architecture initiative.
  • Software-defined infrastructure: Using a software controller to direct data flows to available resources, which helps overcome bottlenecks and keeps data centers from having to overprovision.

 

The ONP initiative combines our low-latency network processing, switching and interface hardware with a customizable software stack that works with third-party SDN controllers and network applications.

 

Already, the ONP “Seacliff Trail” 10GbE / 40GbE SDN top-of-rack switch plays a key role in the Rack Scale Architecture.

 

But the ONP also provides the foundation for a future where the SDN controller evolves into a workload orchestration controller – directing data flows not only to network resources but also orchestrating computing, memory and storage resources as well. 

 

Our open approach means that ONP infrastructure can support new controllers or orchestration applications.  The switching architecture of the Intel Ethernet Switch FM6000 chip family is designed for evolving network standards, with industry-leading low L3 latency (400 ns), high throughput and microcode programmability that give it plenty of headroom to support future standards.

 

Like the Intel strategy for next-generation cloud data center infrastructure, ONP is both comprehensive and high performance, with the openness and flexibility that allows our customers to innovate as well. 


It can be argued that there’s never been a time when innovation was needed more in data center networks.

 

The increased reliance on information served up from cloud or enterprise data centers has made it vital to bring new security, performance management, privacy, traffic segregation, service-level agreement and revenue generation applications to the network.

 

Software-defined networking (SDN) and the Intel® Open Network Platform (ONP) help to open the network to this innovation, which brings us to the last in a series of blog posts about SDN use cases.  This one focuses on the way this new network paradigm helps to simplify network application / service deployment.

 

SDN’s primary innovation is that it separates a network’s control plane from its data plane in order to centralize packet-processing decisions in a software controller. That makes it easier to deploy network-wide policies or services, because the controller has a comprehensive perspective on data flows, including congestion, QoS and security information.
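
To make that split concrete, here is a minimal sketch in plain Python (no real controller framework; the Controller and Switch classes and the topology map are illustrative assumptions, not any product’s API). The switch only looks up flow rules; any decision it cannot make locally is punted to the central controller, which holds the network-wide view.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class FlowRule:
        """A match plus an action, installed by the controller."""
        dst_mac: str
        out_port: int

    class Switch:
        """Data plane: forwards by table lookup only; no routing logic."""
        def __init__(self, name, controller):
            self.name = name
            self.controller = controller
            self.table = {}  # dst_mac -> out_port

        def handle_packet(self, dst_mac):
            if dst_mac in self.table:
                return self.table[dst_mac]  # fast path: rule already installed
            # Table miss: ask the central controller for a decision.
            rule = self.controller.packet_in(self.name, dst_mac)
            self.table[rule.dst_mac] = rule.out_port
            return rule.out_port

    class Controller:
        """Control plane: network-wide view, makes all forwarding decisions."""
        def __init__(self, topology):
            self.topology = topology  # (switch name, dst_mac) -> out_port

        def packet_in(self, switch_name, dst_mac):
            return FlowRule(dst_mac, self.topology[(switch_name, dst_mac)])

    ctrl = Controller({("tor1", "aa:bb:cc:00:00:01"): 7})
    tor1 = Switch("tor1", ctrl)
    print(tor1.handle_packet("aa:bb:cc:00:00:01"))  # miss -> controller -> 7
    print(tor1.handle_packet("aa:bb:cc:00:00:01"))  # hit, answered locally

The point of the sketch is the division of labor: once the controller has installed a rule, the data plane never consults it again for that flow.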

 

Logically, the SDN controller sits above the infrastructure layer, but below an application layer with standardized APIs that allow easy integration of business applications.

 

Thus, applications that used to require standalone hardware and could see only the traffic passing through them can now run on a server and manage an entire network using information available from the controller.

 

It's the disaggregation of the network into these logical layers that makes this possible. Intel has taken that concept one step further with its Open Network Platform announcement, which opens up key elements at each layer so that any OEM can leverage this infrastructure in the development of an SDN application. Take a look at www.intel.com/onp to get more information about ONP.

 

Applications bring added user features to a network, but also impact the manageability and performance of the network.  SDN lays the foundation for companies like Intel to open up the network for this innovation.

In recent months, I’ve been to several events where software-defined networking has been the main topic of discussion.  Now that I think about it, all of those events have been in Silicon Valley, the epicenter of SDN.

 

Recently, I left that bubble to go to Interop in Las Vegas, and I found that while SDN is a big deal, it’s not the only thing going on in networking.

 

Of course, I talked about SDN because I’m focused that way.  But the overall show was about all of the topics that networking managers are concerned with like application intelligence, staffing, cloud vendor selection and how to kill Spanning Tree.  In fact, my recent search of the Interop site for the acronym SDN turned up only 19 references.

 

Why is this important?  While there is a huge amount of fire behind SDN in certain circles, the market at large has only a passing acquaintance with the technology.  I felt that Interop was a great opportunity for me to educate the wider audience on the benefits of the technology.

 

I took advantage of that opportunity by giving an overview of SDN technology and why it’s important, and then discussing the Intel solution in a bit more detail.  This includes a top-of-rack switch reference design called the Open Network Platform, the software elements and open APIs we offer OEMs, dubbed Open Network Software, and finally the Intel Ethernet Switch FM6000 series, the low-latency 10G/40G switch IC at the heart of the entire solution.

 

With this solution, we want to unleash the power of SDN for data center networking. One key step is education and that’s why the presentation at Interop was important.

“An inflection point is an event that changes the way we think and act.”

 

With this quote from Andy Grove in her keynote speech, Rose Schooler, VP of Intel Architecture Group and GM of Intel's Communications and Storage Infrastructure Group, launched the next phase of Intel’s SDN-based data center and telecom network transformation initiative at the Open Networking Summit this week.

 

The speech capped a great show for me and for the Intel Communications and Storage Infrastructure Group.  Our goal going into the event was to help the industry see how to get the full value out of the open networking paradigm shift. 

 

In just under an hour, Rose accomplished that by providing an overview of our data center and telecom network strategies, introducing three new products, and letting partner VMware and customer Verizon demonstrate how they are using products from Intel.

 

Here’s a summary of the key products that were launched at the show (you can read more in our press release):

 

Intel® Open Network Platform Switch Reference Design: You’ve read about “Seacliff Trail” in this blog before.   What’s new is a tight integration with Wind River Open Network Software and a customizable software stack using Wind River Linux.  Seacliff Trail is now the ideal reference design for OEMs, offering a combination of customizable software and fast, low latency merchant silicon for next-generation SDN data center networking systems.

 

Intel® Data Plane Development Kit Accelerated Open vSwitch: With this project, Intel wants to help Open vSwitch developers speed up performance on small packets. The bulk of the packets in virtual machine networking applications are 64 bytes, and with the accelerated Open vSwitch we’re able to boost small-packet port-to-port performance by 10 times and accelerate virtual machine-to-virtual machine performance by five times.
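
To see why 64-byte packets are the stress case, here is a back-of-the-envelope calculation (standard Ethernet framing overhead; these are theoretical line-rate limits, not Intel benchmark results):

    def max_frames_per_second(link_bps, frame_bytes):
        """Theoretical max Ethernet frame rate, counting per-frame overhead:
        7-byte preamble + 1-byte start delimiter + 12-byte inter-frame gap."""
        bits_on_wire = (frame_bytes + 7 + 1 + 12) * 8
        return link_bps / bits_on_wire

    print(f"{max_frames_per_second(10e9, 64):,.0f} frames/s at 64 bytes")
    print(f"{max_frames_per_second(10e9, 1518):,.0f} frames/s at 1518 bytes")
    # ~14.9 million frames/s at 64 bytes vs. ~0.8 million at full size:
    # small packets leave the software switch roughly 18x less time per packet.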

 

Intel® Open Network Platform Server Reference Design: This innovative virtual network server reference design called “Sunrise Trail” will allow OEMs to create virtual appliances on standard Intel-architecture servers.  The reference design combines the DPDK Open vSwitch, support for SDN and NFV standards along with the Intel® Xeon® processor, Intel 82599 Ethernet Controller and Intel Communications Chipset 89xx series.  More to come on this new reference design as the first alpha units are due later this year.

 

Rose closed her speech with another Andy Grove maxim about inflection points – that they reflect a change in customers’ values and preferences. 

 

As you can see from these significant new products, Intel is moving fast to position itself to address these changes in customer values and preferences. 

Last year Intel® participated in its first demonstration of software-defined networking (SDN) at the Open Network Summit.  This year we’re coming back to the show with an expanded SDN product strategy and a new Open Networking Platform that will open up the entire SDN value chain for data centers.

 

The show opened yesterday in the Santa Clara Convention Center with a full day of tutorials and then an evening reception amongst the exhibits.  The program starts today and runs through Wednesday.

 

Last year, we said that the SDN revolution “starts at the switch.” This year, we’re bringing together the other elements of an SDN solution that will enable customers to dramatically boost the fabric performance in virtualized data center environments.

 

Starting on Monday, I’ll be staffing the booth at the show to demonstrate our SDN solution. In addition, our Seacliff Trail 48-port 10GbE top-of-rack switch will take part in a multivendor demonstration of NEC’s ProgrammableFlow Controller. 

 

On Wednesday, Rose Schooler, Vice President of the Intel Architecture Group and General Manager of the Communications and Storage Infrastructure Group, will make a keynote presentation that will cover key aspects of our Open Network Platform. She will announce some new components of the platform and talk about the customer need it meets. 

 

If you are not going to the show, but would like to hear Rose speak, you can watch the live webcast of her presentation.

 

The world of open networking moves fast.  The progress we’ve made since ONS 2012 shows that Intel is keeping pace.  Check back for more blog updates later this week.

We started our blog series on software-defined networking (SDN) use cases by looking at how SDN might enable network virtualization (http://communities.intel.com/community/wired/blog/2013/03/01/sdn-use-cases-network-virtualization).  We continue in a similar vein by looking at SDN and virtual network appliances. 

 

Many networks utilize specialized appliances, such as firewalls, load balancers, WAN accelerators and others, to provide specialized network packet processing and functionality.

 

Many of these are now being turned into virtual appliances.  The term “virtual appliance” was coined to describe a self-contained virtual machine powered by an operating system that had a pre-configured application on top.

 

In the enterprise data center, one of the challenges of the virtual network appliance is that it often needs to be in the data path – either before the router, in the case of a firewall, or before a server, in the case of a load balancer. 

 

SDN replaces the IP routing functionality on each switch with a network controller that can see all data and resources on the network and that directs data flows.  With this global view, it can redirect data packets to virtual network appliances directly. In essence, it creates a data path for each data flow in order to direct it to the right virtual appliance.
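
A hypothetical sketch of that per-flow steering (plain Python; the policy, appliance names and path format are illustrative assumptions): the controller computes a distinct path for each flow, splicing the right virtual appliance into it.

    # Illustrative only: steer matching flows through virtual appliances.
    def build_path(flow, default_path, appliances):
        """Return the data path for one flow, inserting the appliance hops
        required by policy; non-matching flows keep the default path."""
        path = list(default_path)
        if flow["dst_port"] == 80:
            path.insert(1, appliances["load_balancer"])  # ahead of the server
        if flow["external"]:
            path.insert(0, appliances["firewall"])       # ahead of everything
        return path

    appliances = {"firewall": "fw-vm", "load_balancer": "lb-vm"}
    web_flow = {"dst_port": 80, "external": True}
    print(build_path(web_flow, ["edge", "tor", "server"], appliances))
    # ['fw-vm', 'edge', 'lb-vm', 'tor', 'server']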

 

This is also important in cloud applications where multi-tenant virtual servers may need dramatically different resources and where virtual data paths complicate the sharing of virtual appliances.

 

The network flow flexibility that SDN brings to datacenter and cloud networks makes virtual network appliances an even more viable and cost-effective way to deliver the network processing needed for secure and high-performance networks.

2013 is the year that software-defined networking gets its early installations, having gone through product trials and technology evolution over the past several years. Now, as vendors roll out their SDN networking offerings and companies start the buying process, the question for many is: “How can I best use this technology in my network?”

 

Network use cases, in fact, were one of the most requested items at the Gartner Data Center Summit that my colleague Gary Lee attended in December 2012. To give a sense of what problems SDN can solve in your network, I am starting a series of blog posts on key SDN use cases, beginning with network virtualization.

 

First, let’s discuss server virtualization, which has helped to power cloud services by allowing data centers to scale computing power at a lower cost. Server virtualization dramatically reduces the cost of computing services and allows multiple customers to share a single server. Server virtualization and SDN are alike in being high-profile technologies that can have a dramatic impact on a data center.

 

Network virtualization logically divides a 10Gb Ethernet connection into multiple lower-speed connections so that each virtual machine in a server can have its own dedicated connection without requiring a separate NIC and cable.

 

In an IP network, this is done using virtual LANs, but that doesn’t scale well across a heterogeneous network unless each vendor supports the same VLAN protocols. By replacing the per-switch IP decision making with a central SDN controller over the entire network, virtualized network connections can be more easily made from one end of the network to the other.

 

[Figure: three logical virtual networks mapped onto the physical switching infrastructure]

 

How does this work? In the diagram, the three virtual networks are shown in their logical grouping, but below that we see how the actual switching infrastructure is organized to make this happen. Top-of-rack switches have connections to virtual machines located in the different servers. The connections through the physical infrastructure are guided by the SDN controller. The controller must map the logical network connections to the physical network across all of the switches. This is a very complex state-management task that SDN is particularly good at.
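
A toy model of that state-management task, with made-up tenants and port numbers, might look like this: the controller compiles each tenant’s logical network into per-switch forwarding entries, and must recompile them whenever a VM moves.

    def compile_rules(tenants, vm_location):
        """tenants: name -> list of its VMs; vm_location: VM -> (switch, port).
        Returns the forwarding entries each physical switch needs."""
        rules = {}
        for tenant, vms in tenants.items():
            for dst in vms:
                switch, port = vm_location[dst]
                rules.setdefault(switch, {})[(tenant, dst)] = port
        return rules

    tenants = {"red": ["vm1", "vm3"], "blue": ["vm2"]}
    vm_location = {"vm1": ("tor1", 1), "vm2": ("tor1", 2), "vm3": ("tor2", 5)}
    for switch, table in compile_rules(tenants, vm_location).items():
        print(switch, table)
    # tor1 {('red', 'vm1'): 1, ('blue', 'vm2'): 2}
    # tor2 {('red', 'vm3'): 5}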

 

Going forward, as server connections increase from 10Gb Ethernet to 40Gb Ethernet, there will be even more headroom for virtualized network connections, making for dramatically more complex network designs. But SDN is intended to tame that complexity, so that all cloud networks can use this technology to maximize their networking investment.

I was at the Open Compute Project Summit last week where the news broke about expanded details on the Open Compute Project.

 

While OCP is focused on making servers more flexible, it also will have an impact on data center networking. If you want to know more about OCP, this InfoWorld article has some good details.

 

What I want to concentrate on are the networking aspects of the proposed new system. In a nutshell, the OCP initiative will result in new standards for interoperable components (like processor boards, power supplies, etc.) that allow more flexibility in server designs. So you could imagine a common processor slot, for instance, that allows a company to define a very granular level of processing power.

 

On the networking front, the OCP proposal envisions a board-level switch that has a network connection to each processor / microserver on the board, plus another network connection of up to 100 Gbps based on Intel’s silicon photonics technology. This provides a fast, very low latency connection for up to 50 meters, easily reaching access switches.

 

So what happens to top-of-rack switches? Nothing … for now. First off, many commentators say that OCP equipment could be limited to large Internet and cloud service providers – like OCP founder Facebook. And thus, the TOR switch will remain in other data center networks indefinitely.

 

Even if that is not the case, there are still several years before OCP-based network servers will hit the market, as the Open Compute Project is still building out its ecosystem and talking to partners about the details of the various server components.

 

That leaves TOR switches, like our Seacliff Trail 10G/40G top-of-rack switch reference design, as the chief building block for data center networks.

 

But this new architecture has a lot of promise and so companies concerned about the future transition to OCP switches should look again at their plan for deploying software-defined networking (SDN).

 

Because SDN moves network flow control and management from within the switches to a controller on a server, it can easily integrate SDN-enabled OCP systems into the network and allow data centers to migrate to the new architecture at their own speed.

On most days, I talk with people who are focused on the latest in open networking in the data center. But this week I’m seeing that data center openness means different things if you design servers.

 

That’s because I’m at Open Compute Summit with the Intel® Seacliff Trail 10G/40G top-of-rack switch reference design, tucked into a small corner of a large Intel booth filled with boards and server designs, including microservers.  It feels like I’m one of the few networking folks in a sea of server experts. 

 

The annual event is a Facebook-initiated effort to establish open hardware standards for data center servers. Some key initiatives include new open standards for virtual IO, storage, hardware management, data center racks and power supplies.  There’s also an OCP standard for an Intel motherboard.  Server virtualization has increased the efficiency of data center servers, and the Open Compute projects are designed to extend that efficiency to every element of the hardware design.

 

I think the Seacliff Trail will be a great fit at this show.  Servers and switches are the two essential elements in the data center, so it makes sense for them to be together at OCP Summit.  And Seacliff Trail is a very open switch design – most notably with support for software-defined networking. 

 


Recently, I participated in a webinar on software-defined networks (SDN) that reminded me of the importance of interoperability and performance testing for SDN.

 

Like many people, I find it easy to get caught up in the features of SDN technology and the excitement of how it will change networking.  But that is only part of the story.

 

During the webinar, which was put on by network test leader Ixia*, industry analyst Jim Metzler reminded the audience that one of the potential market inhibitors for SDN, according to his surveys of IT leaders, is a fear that the technology will become proprietary.  Jim emphasized that a high degree of interoperability is needed for the technology to become mainstream.

 

This is why Intel® is committed to ensuring our Intel® Ethernet Switch FM6000 Family is fully interoperable with SDN controllers from all manufacturers.

 

Most recently, our Barcelona OEM reference design was part of the Open Network Forum Plug Fest that took place at the Interop conference last spring. The test involved managing the switch using SDN controllers from a variety of vendors and demonstrating discovery, topology detection and fail-over functionality. 

 

Barcelona tested as fully interoperable.  But since we had the stage, we also wanted to demonstrate how the switch could execute these tests while operating at full throughput with our low latency of 300 ns.

 

SDN can change networking, but only if vendors continue to deliver on the promise of adopting the standard and contributing new features back to the community. Proving interoperability of controllers from all vendors is where the rubber meets the road in ensuring that SDN works as advertised.

Data center network architectures are in a state of flux, with new technologies driving some very significant and far-reaching changes.  The data center network of the future could be based on software-defined networking (SDN). Or it could be IP.  Or it could be a hybrid.  In any case, there will be new protocols to support and constant changes to improve security and support new applications.

 

When it comes to network equipment for this environment, more flexibility is better.  We’ve written before about how the Intel Ethernet FM6000 Series switch silicon has microcode upgradability for flexible support of new and changing network standards.  Now, we’d like to look at the flexibility built into our Seacliff Trail (SCT) top-of-rack switch customer reference design that features a high-performance processor.

 

The Seacliff Trail customer reference board was announced at IDF in September 2012, and provides 48x10GbE and 4x40GbE connections using the Intel Ethernet FM6000 Series switch silicon. This customer evaluation board also comes with a two-core Gladden (Sandy Bridge class) Intel processor and the Cave Creek chipset, which provides, among other things, CPU offload features such as encryption and decryption for security. The CPU subsystem consisting of Gladden and Cave Creek is also known as the Crystal Forest Platform.

 

The Crystal Forest Platform is connected to the rest of the system using an AMC connector, allowing OEMs or ODMs to swap in a different AMC with a different Intel processor, depending on whether they need more performance or less power. An Intel processor in a switch opens up many possibilities. Another way to look at it is to think of it as a server with switching capabilities.

 

In legacy networking use cases, in which traditional L2 and L3 network stacks run on the CPU to provide autonomous functionality, Gladden can provide more than enough processing. Things get more interesting in an SDN environment: The separation of the data and control planes, in simple terms, means that the SDN controller can be moved to any host in the network with the switch being reduced to a data forwarding device with little need for a powerful CPU.

 

The Intel Ethernet FM6000 series switch silicon provides unparalleled SDN support at ultra-low latency and supplies the data-plane functionality.

 

In fairly large data centers, a single SDN controller may become inefficient, resulting in the need for a hierarchical solution involving a number of SDN controllers. Rather than using valuable servers in the data center, these tiered controllers can be run right on the switch with the power, familiar programming ecosystem and x86 instruction set provided by Gladden. Moving the controller to the switch CPU also provides lower latency in reacting to network changes as well as reduces the time to populate the pattern-matching tables of the switch silicon.
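
Here is a minimal sketch of that two-tier arrangement (illustrative class names and topology; not a real controller API). The local controller running on the switch CPU answers rack-local questions itself, at low latency, and escalates only cross-rack flows to the root controller:

    class RootController:
        """Top of the hierarchy: knows which rack every host lives in."""
        def __init__(self, rack_of):
            self.rack_of = rack_of  # host -> rack name

        def route(self, src, dst):
            return f"spine path {self.rack_of[src]} -> {self.rack_of[dst]}"

    class LocalController:
        """Runs on the switch CPU; owns only its own rack's state."""
        def __init__(self, rack, hosts, root):
            self.rack, self.hosts, self.root = rack, hosts, root

        def route(self, src, dst):
            if dst in self.hosts:                # decided locally, low latency
                return f"intra-rack path in {self.rack}"
            return self.root.route(src, dst)     # escalate cross-rack flows

    root = RootController({"h1": "rack1", "h2": "rack1", "h3": "rack2"})
    local = LocalController("rack1", {"h1", "h2"}, root)
    print(local.route("h1", "h2"))  # handled on the switch itself
    print(local.route("h1", "h3"))  # escalated to the root controller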

 

This may sound like we’re moving the switching intelligence back into the switch, but that’s not really the case.  The controller is still centralized because the processor is still separate from the switch silicon and the controller would only be needed on a few switches in the network.  The other switches could be equipped with simpler IA-based processors for network management, security or other non-controller applications.

 

Security is a critical factor for both SDN and IP networks, and the Crystal Forest Platform gives switch designers some options for improved security.  In SDN environments, the control messaging between the controller, which can be anywhere in the data center, and the switch silicon may need to be encrypted for security purposes. In Seacliff Trail, Cave Creek can offload this from Gladden.  In an IP network, since the CPU is also a server, we can run security applications on the switch. For example, we may want to have an ACL or firewall application on the switch.
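
As a rough illustration of the kind of channel protection meant here, a switch-side agent could wrap its controller connection in TLS, which is exactly the sort of crypto work a chipset offload can absorb. This sketch uses Python’s standard ssl module; the certificate file and controller hostname are placeholders:

    import socket
    import ssl

    # Trust only the controller's CA; the file name is a placeholder.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations("controller-ca.pem")

    def open_control_channel(host, port=6653):  # 6653: IANA OpenFlow port
        """Open an encrypted, authenticated channel to the controller."""
        raw = socket.create_connection((host, port))
        return ctx.wrap_socket(raw, server_hostname=host)

    # chan = open_control_channel("sdn-controller.example.net")
    # chan.sendall(flow_mod_bytes)  # control messages now travel encrypted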

 

The possibilities for new functionality go on and on.  Building this power and performance into SCT means our customers have the opportunity to build something very differentiated that meets the needs of their customers.

In the last several blog posts we’ve been focusing on the evolution of data center networks to more virtualization and to software-defined networking (SDN). Another source of the transformation is big data and the emergence of the Hadoop application.

 

Hadoop is distributed processing on steroids to deal with the incredibly large data sets that are needed to solve business, scientific, law enforcement and other real world problems.

 

Some data center applications are called on to process data sets on the order of exabytes, which strains the capabilities of traditional relational database systems.  Hadoop emerged as a way to distribute the workload of these jobs to various servers or virtual servers for processing, then to compile the responses and present the result to the user.

 

From a network perspective, Hadoop needs the ability to scale up to thousands of processors in many cases, and that means a layer-3 routed network.  With a data set of this size spread across so many servers, end-to-end latency can become prohibitive unless a low-latency network is in place.  One of the reasons Hadoop is popular is that it reduces processing time; without a low-latency network, that advantage will disappear.
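
A back-of-the-envelope comparison (made-up but plausible hop and transfer counts) shows how per-hop switch latency compounds across a job’s many small transfers:

    def path_latency_us(per_hop_ns, hops):
        """One-way switching latency of a path, in microseconds."""
        return per_hop_ns * hops / 1e3

    for per_hop_ns in (400, 10_000):  # cut-through vs. a slow hop
        per_rpc = path_latency_us(per_hop_ns, 3)  # 3 switch hops per transfer
        total_s = per_rpc * 1_000_000 / 1e6       # a million serialized transfers
        print(f"{per_hop_ns:>6} ns/hop: {per_rpc:5.1f} us per transfer, "
              f"{total_s:5.1f} s over 1M serialized transfers")
    # 400 ns/hop adds ~1.2 s of network wait; 10 us/hop adds ~30 s.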

 

With the Intel® Ethernet FM6000 switch silicon family, the first switch silicon to feature latency of less than 400 ns, Intel is ready to help networking system vendors develop products for the new needs of a big data / Hadoop world.

Until this month, hot start-up companies, one of which was recently bought for $1.2 billion, have dominated the market for software-defined network (SDN) controllers. 

 

But HP’s announcement of its Virtual Application Networks SDN Controller signaled a sea change.  The credibility and worldwide reach that HP brings to the market opens the door for more mainstream customers to consider SDN.

 

For HP and other switch manufacturers, though, the growth in the number of controllers increases the need for flexible switch architectures. 

 

The Intel® Seacliff Trail SDN switch reference design can give these manufacturers the flexibility for the controller diversity that is emerging in this market.

 

One of Seacliff Trail’s key flexibility benefits is the use of the server-class Crystal Forest platform featuring the Gladden CPU.  With this computing power, the switch can be programmed for multiple controllers in the event that the data center supports multiple vendors.

 

Another interesting use case is running the controller on the switch itself.  This may seem counterintuitive, since the point of SDN is to separate the controller from the hardware, but in many large data centers there will be a need for multiple or hierarchical controllers, and the ability to deploy these without adding multiple servers is an attractive alternative.

 

The second key adaptability advantage of Seacliff Trail is the Intel® Ethernet Switch FM6000 family chip itself.  In terms of flexibility, these chips feature the programmable Intel® FlexPipe™ frame-processing pipeline, which a switch maker can update to support its controller as well as IP traffic.

 

The success of these early SDN controller companies will breed more competition. Intel is ready to offer switch platforms and reference designs that can support the changes that these new players will bring to this market.

Last month, I was part of the Intel team that participated in Interop Tokyo, and demonstrated Intel’s software-defined networking (SDN)-compatible switch silicon.

 

As here in the US, interest in SDN is high in Japan, and from what I saw during my time at the show and in meetings with several customers, I believe the market for SDN products will emerge more quickly in Japan.

 

The market is excited about SDN, and the leading SDN protocol OpenFlow, because it gives users a vendor-neutral way to control and manage the network.  Instead of using L2/L3 protocols for routing data, an SDN controller manages the data flows.  This should provide the same level of services as an IP network without vendor-proprietary protocols that can limit advanced network services in a multi-vendor network.

 

In Japan, local networking providers have good market share and their emphasis hasn’t been on high-margin routing software.  Therefore, these companies are aggressively embracing SDN.

 

The proof of this was in the strong turnout at the booth (estimated at 200 people) to see Intel and NTT Data* demonstrate a network built using NTT Data’s OpenFlow controller and Intel’s Barcelona 10/40GbE switch reference platform.  The Barcelona platform uses the Intel® Ethernet FM6000 switch silicon, which provides 72 10GbE / 18 40GbE ports and supports non-blocking switching and routing with a latency of less than 400 ns.

 

Interop Tokyo was my first visit to Japan and I really appreciated the hospitality and the food in Tokyo.  If my prediction about the success of SDN in Japan holds true, I don’t think it will be my last trip.

A lot has happened since the Interop* Las Vegas conference in May, where Intel demonstrated its support for software-defined networking (SDN).

 

Now, as we get ready to take the demos to Interop Tokyo, nearly all of the major switching vendors have pledged to support SDN, and International Data Corporation (IDC)* has predicted that the market will grow from $200 million in 2013 to $2 billion in 2016.

 

The primary growth driver, according to IDC Group Vice President Lee Doyle, is “highly virtualized network environments and customers who need programmable networks. Customers have always wanted to tune the network, but network management tools have been poor or non-existent."

 

SDN, and its leading protocol OpenFlow, change all of that, giving more management control to multi-vendor networks.

 

At Interop Tokyo, we will team up with NTT Data* to demonstrate that management control using a combination of NTT Data’s OpenFlow controller and the Intel Barcelona 10GbE top-of-rack switch.

 

The demo will show how Barcelona performs when it is controlled by the NTT Data OpenFlow software control plane.  We’re expecting that Barcelona will deliver its full non-blocking performance on all 48 10GbE ports and four 40GbE ports with latency of only 400ns for L3 switching.

 

Interop Tokyo will run from June 12-15, and I will be there, along with other technical and local experts to answer your questions.
