
Wired Ethernet


Intel® Ethernet and Security

Posted by dougb Apr 16, 2014

There has been a very famous security issue in the news lately.  Others have done a great job explaining it, so I won't even try.  Some people are concerned that Intel® Ethernet products might be at risk.  Coming to this blog is a great first step, but there is a much better place for you to look: the Intel® Product Security Center.  It lists all the advisories that Intel is currently tracking, and it's a great resource, with clear version and update guidance for affected products.  It even has a mailing list option so you can get updates as they come out.  The issue in the news is listed as "Multiple Intel Software Products and API Services impacted by CVE-2014-0160," and (spoiler alert) Intel Ethernet doesn't have any products listed.  If we did have any security-related issues, you would find them there.  I strongly suggest you add the Intel® Product Security Center to your bookmarks and sign up for the e-mail updates.  Vigilance is the first step to better security, and Intel tries to make it easy for busy IT professionals to stay informed.

The following blog post was originally published on NetApp's SANbytes blog to commemorate the launch of the NetApp X1120A-R6 10GBASE-T adapter – the latest milestone in the long and fruitful relationship between the two companies. We're reposting it here because it's a good overview of the state of the art in Ethernet storage networking.

 

When two leaders like Intel and NetApp work together on storage networking, the industry should expect big things. Intel® Xeon® processor-based storage systems from NetApp, for example, are delivering new levels of performance for customers around the world who are trying to keep up with the ever-increasing amounts of data generated by their users and applications. Intel and NetApp have also collaborated on many engineering efforts to improve performance of storage protocols including iSCSI and NFS.

 

This week's announcement of the NetApp X1120A-R6 10GBASE-T adapter, which is based on the Intel® Ethernet Controller X540, is another significant development for Ethernet-based storage. Virtualization and converged data and storage networking have been key drivers of the migration to 10 Gigabit Ethernet (10GbE), and NetApp was an early adopter of the technology. Today, many applications are optimized for 10GbE. VMware vSphere, for example, allows vMotion (live migration) events to use up to eight gigabits per second of bandwidth and to move up to eight virtual machines simultaneously. These actions rely on high-bandwidth connections to network storage systems.

 

10 Gigabit connectivity in these systems isn't new, so why is the NetApp X1120A-R6 adapter special? For starters, it's the first 10GBASE-T adapter supported by NetApp storage systems (including the FAS3200, FAS6200, and the new FAS8000 lines), and we believe 10GBASE-T will have huge appeal for data center managers looking to upgrade from Gigabit Ethernet to a higher-speed network.

 

There are a few key reasons for this:

 

  • Cost-effective cabling: 10GBASE-T allows IT to use their existing Category 6/6A twisted-pair copper cabling. And for new installations, this cabling is far more cost-effective than other options.
  • Distance flexibility: 10GBASE-T supports distances up to 100 meters and can be field-terminated, making it a great choice for short or long connections in the data center.
  • Backwards compatibility: Support for Gigabit Ethernet (1000BASE-T) allows for easy, phased migrations to 10GbE.

The NetApp X1120A-R6 adapter gives data center operators a new option for cost-effective and flexible high-performance networking. For the first time, they’ll be able to use 10GBASE-T to connect from server to switch
to storage system.

 

Intel and NetApp have worked together to drive the market transition to 10GbE unified networking for many years, and this announcement is another example of our commitment to bringing these technologies to our customers.

 

If you’d like to learn more about the benefits of 10GBASE-T, here are a couple of great resources:

 

 

Follow me on Twitter @Connected_Brian 

Intel is pleased to announce the Intel® Ethernet Server Adapter X520 Series for the Open Compute Project.

 

Available in both single- and dual-port SKUs, these adapters deliver a proven, reliable solution for high-bandwidth, low-cost 10GbE network connections.  Increased I/O performance with Intel® Data Direct I/O Technology (DDIO) and support for intelligent offloads make this adapter a perfect match for scaling performance on Intel® Xeon® processor E5/E7 based servers.

 

The best-selling Intel® Ethernet Converged Network Adapter X520 Series is known for its high performance, low latency, reliability, and flexibility.  The addition of the Intel® Ethernet Server Adapter X520 Series for Open Compute Project to the family delivers all the X520 capabilities in an Open Compute Project (OCP) form factor.  OCP is a Facebook* initiative to openly share custom data center designs in order to improve both cost and energy efficiency across the industry.  OCP takes a minimalist approach to system design, reducing complexity and cost and allowing data centers to scale out more effectively.  By publishing the designs and specifications of this low-power, low-cost hardware, the project helps reduce infrastructure costs for businesses large and small.

 

For more information on the Intel® Ethernet Server Adapter X520 for Open Compute Project, visit:  http://www.intel.com/content/www/us/en/network-adapters/converged-network-adapters/server-adapter-x520-da1-da2-for-ocp-brief.html

 

For more general information on the Open Compute Project initiative, visit:  http://opencompute.org/

The Intel Developer Forum (IDF) was held last week and it was an amazing collection of the brightest minds in the industry looking to push the technology envelope.

 

As in years past, the new products Intel debuted at IDF were better and more powerful than the ones before.  But this year we had one announcement that bucked that trend: a new product that is better and less powerful.

 

I'm talking about microservers, a new trend in computing where multiple lower-performance, lower-power processors are used as servers for a new class of computing tasks. It was one of the topics I presented at two poster chats on Tuesday and to about 60 attendees during my technical session on Wednesday.

 

The microserver initiative fits with Intel’s strategy of developing “workload-optimized” solutions.  There are a lot of computing tasks, such as memory caching, dedicated webhosting and cold storage, where the processing and I/O demands per server are light.

 

To meet these needs, we formally introduced the Intel® Atom™ C2000 at a special event one week before IDF.

 

The density of microservers on a board makes networking these new systems a challenge. The power profile of the Atom™ C2000, for example, allows data center shelves with 48 microservers each. A standard telecom rack can hold 12 of these shelves for a total of 576 microservers per rack. That's more network connections than many enterprise workgroups.
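
For a quick sanity check of that math, here it is as a short Python snippet (the shelf and rack counts are the figures quoted above):

    SERVERS_PER_SHELF = 48   # Atom C2000-class microservers per shelf
    SHELVES_PER_RACK = 12    # shelves in a standard telecom rack

    # Every microserver needs its own network connection.
    print(SERVERS_PER_SHELF * SHELVES_PER_RACK)  # 576 connections per rack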

 

However, by using the Intel® Ethernet Switch FM5224 chip, all of the processors on a shelf can be internetworked so that only a few uplink connections to a top-of-rack switch are needed.  This makes the rack manageable from a connectivity perspective.

 

But there’s still traffic from all of the processors that needs to be routed.  That is why we’re evolving our Open Network Platform (ONP) software-defined networking (SDN) architecture to support microservers. 

 

My colleague Recep recently wrote a post on this blog describing how SDN works, so I'll just summarize here: SDN shifts the packet-processing intelligence from the switch to a centralized controller. This reduces the complexity of network design and increases throughput.

 

Many microservers today are sold with proprietary networking software.  The issues with this are vendor lock-in and the potential for a slower pace of innovation.  This last point is important since many of the applications for microservers are cloud-based, and that market is evolving very quickly.

 

Intel's Open Network Platform combines our high-volume, high-performance Intel Ethernet Switch FM5000/FM6000 switching hardware with ONP software based on Wind River Linux.  In addition, there are open APIs for drivers to eliminate vendor lock-in, plus APIs for controllers and a wide variety of third-party and OEM apps.

 

What this means for microserver OEMs is that they can bring their own unique, differentiating software to their products while at the same time integrating with an SDN controller or other app from a third party.  Cost is kept to a minimum while functionality and differentiation are maximized.

 

The reception to this message at IDF13 was good, and several OEMs are already planning to develop microserver products.  We'll take a look at some of those designs in a future blog post.

Software-defined networking (SDN) is a major revolution in networking that is just starting to move from bleeding edge to leading edge customer adoption. 

 

But already, Intel® is starting to think about what comes next and how the software-defined model can become more pervasive in cloud-scale data centers. A key factor in those plans is the Open Network Platform (ONP) that we announced in April.

 

That was my takeaway from the announcement about Intel’s cloud infrastructure data center strategy.  If you read the press release and the related presentations and watch the videos, you will see that the emphasis is on the strategy and several new microprocessors, including the Avoton and Rangely 22nm Atom processors, and the 14nm Broadwell SoC.

 

I want to unpack a bit about how the ONP fits in this next-generation data center strategy.  The architecture of the next-generation cloud infrastructure data center is built on three technology pillars:

  • Workload-optimized technologies: Examples here include deploying servers with different CPU, memory, and I/O capabilities based on the workload.
  • Composable resources: Moving from building server racks out of discrete servers, networking equipment, etc., to deploying a more integrated solution. Intel is making strides here with its Rack-Scale Architecture initiative.
  • Software-defined infrastructure: Using a software controller to direct data flows to available resources, which helps overcome bottlenecks without forcing data centers to overprovision.

 

The ONP initiative combines our low-latency network processing, switching and interface hardware with a customizable software stack that works with third-party SDN controllers and network applications.

 

Already, the ONP “Seacliff Trail” 10GbE / 40GbE SDN top-of-rack switch plays a key role in the Rack Scale Architecture.

 

But the ONP also provides the foundation for a future where the SDN controller evolves into a workload orchestration controller – not only directing data flows to network resources but orchestrating computing, memory, and storage resources as well.

 

Our open approach means that ONP infrastructure can support new controllers or orchestration applications.  The switching architecture of the Intel Ethernet Switch FM6000 chip family is designed for evolving network standards, with industry-low L3 latency (400 ns), high throughput, and microcode programmability that gives it plenty of headroom to support future standards.

 

Like the Intel strategy for next-generation cloud data center infrastructure, ONP is both comprehensive and high performance, with the openness and flexibility that allows our customers to innovate as well. 

I'm putting the final touches on my presentation for IDF13 (session #CLDS006), which will look at the emerging networking requirements for new server form factors, specifically microservers and rack scale architectures.

 

Microservers contain multiple servers on a single board and are an emerging solution for some targeted data center workloads requiring a high number of low-power processors.  As these servers evolve and new CPUs are introduced (like the Intel® Atom™ C2000 processor family), CPU density is increasing.  At IDF13, I'll talk about the emerging networking requirements in this category and how the Intel Ethernet Switch family and the Open Network Platform address these needs.

 

Another hot topic for cloud data centers is the concept of rack scale architecture systems – pre-configured, high-performance data center racks that can be rapidly deployed to meet new workload requirements.  In this part of my talk, I'll cover how the Open Network Platform is being extended to provide efficient connectivity within these high-performance systems.

 

Here’s an outline of my presentation:

•      Data Center Trends

•      New Microserver Solutions

•      Intel® Ethernet Switch Silicon Architecture

•      Rack Scale Architecture

•      Software-defined Infrastructure

•      Example applications and proof points

 

I hope to see you at my IDF session at 3:45PM on September 11th in San Francisco. You are also invited to my poster chats on Switching for High Density Microservers from 11:30-1:00 and from 3:00-4:30 on September 10th.

 

If you are still on the fence about the value of attending IDF – or now want to register – I have included some links to the IDF website below.

 

Why Attend IDF13:

https://www-ssl.intel.com/content/www/us/en/intel-developer-forum-idf/san-francisco/2013/why-attend-idf-2013-san-francisco.html

 

Register now:

https://secure.idfregistration.com/IDF2013/

 

More information on the IDF13 keynote:

http://www.intel.com/content/www/us/en/intel-developer-forum-idf/san-francisco/2013/idf-2013-san-francisco-keynotes.html

 

Main IDF13 landing page:

https://www-ssl.intel.com/content/www/us/en/intel-developer-forum-idf/san-francisco/2013/idf-2013-san-francisco.html

 

What’s New at IDF13:

https://www-ssl.intel.com/content/www/us/en/intel-developer-forum-idf/san-francisco/2013/idf-2013-san-francisco-whats-new.html?

Ethernet never seems to stop evolving to meet the needs of the computing world.  First it was a shared medium; then computers needed more bandwidth and multi-port bridges became switches.  Over time, Ethernet speeds increased by four orders of magnitude.  The protocol was updated for the telecom network, then for data centers.

 

Now it’s scaling down to meet the needs of microserver clusters.

 

Microserver clusters are boards with multiple low-power processors that can work together on computing tasks.  They are growing in popularity to serve a market that needs fast I/O but not necessarily the highest processing performance.

 

Through its many evolutions, Ethernet has gained the right combination of bandwidth, routing, and latency to be the networking foundation for the microserver cluster application.

 

Bandwidth: For certain workloads, congestion can occur and reduce system performance when processors are connected using 1GbE, so the preferred speed is 2.5GbE.  If you've never heard of 2.5GbE, that's because it was derived from the XAUI spec, but uses only a single XAUI lane.  The XAUI standard was created with the idea that four lanes of XAUI could be used to transmit 10GbE signals from chip to chip over distances longer than the alternative (which capped out at 7 cm).  XAUI specifies 3.125 Gbps per lane, which provides for encoding overhead and a 2.5 Gbps full-duplex data path.  XAUI had a claim to fame in 2005, when it became a leading technology to improve backplane speeds in ATCA chassis designs.  By using a single lane of XAUI, it's the perfect technology for a microserver cluster.
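
To make the lane arithmetic concrete, here is the derivation in a few lines of Python.  The 8b/10b line coding (10 bits on the wire for every 8 data bits) is a standard XAUI detail; it is the "overhead" the paragraph above refers to:

    LANE_RATE_GBPS = 3.125        # XAUI signaling rate per lane
    ENCODING_EFFICIENCY = 8 / 10  # 8b/10b coding: 10 wire bits per 8 data bits

    data_rate = LANE_RATE_GBPS * ENCODING_EFFICIENCY
    print(data_rate)      # 2.5  -> one lane gives a 2.5 Gbps data path
    print(4 * data_rate)  # 10.0 -> four lanes carry a full 10GbE signal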

 

Density: Density and performance are key attributes for microserver clusters. Bandwidth is related to the total throughput of the switch in that the packet-forwarding engine must be robust enough to switch all ports at full speed.  For example, the new Intel® Ethernet Switch FM5224 has up to 64 ports of 2.5GbE plus another 80 Gbps of inbound/outbound bandwidth (either eight 10GbE ports or two 40GbE ports).  Thus the FM5224 packet-processing engine handles 360 million packets per second to provide non-blocking throughput.
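
That 360 Mpps figure lines up with the port counts above if you assume worst-case minimum-size frames.  A rough check in Python (the 8-byte preamble and 12-byte inter-frame gap are standard Ethernet wire overhead):

    downlink_gbps = 64 * 2.5   # 64 ports of 2.5GbE
    uplink_gbps = 80           # eight 10GbE or two 40GbE uplinks
    total_bps = (downlink_gbps + uplink_gbps) * 1e9

    wire_bits = (64 + 8 + 12) * 8  # 64B frame + preamble + inter-frame gap
    print(total_bps / wire_bits / 1e6)  # ~357 Mpps, i.e. roughly 360 Mpps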

 

Distributed Switching: Some proprietary microserver cluster technologies advocate layer two switching, perhaps under the assumption that the limited number of devices on a shelf eliminates the need for layer three switching. But traffic will need to exit the shelf – maybe even to be processed by another shelf – and so the shelf would depend on an outside top-of-rack (TOR) switch to route traffic between shelves or back out onto the Internet.  This would change the nature of a TOR switch, which today is a "pizza box" style switch with between 48 and 72 ports.  With an average of 40 servers per shelf, a TOR would need 400 or more ports to connect all of the shelves directly (see the quick math below).  Instead, Ethernet routing can be placed in every cluster (microserver shelf) to provide advanced features such as load balancing and network overlay tunneling while reducing the dependency on the TOR switch.
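
The port math, assuming the 10- to 12-shelf racks discussed in earlier posts:

    SERVERS_PER_SHELF = 40
    for shelves in (10, 12):
        print(shelves, "shelves ->", SERVERS_PER_SHELF * shelves, "TOR ports")
    # 10 shelves -> 400 TOR ports
    # 12 shelves -> 480 TOR ports

Either figure is well beyond what a 48-72 port "pizza box" TOR can provide, which is why routing inside each shelf matters.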

 

Latency: To be considered for high-performance data center applications, Ethernet vendors needed to reduce latency from microseconds to nanoseconds (Intel led this industry effort and is the low-latency leader at 400 ns).  That work is paying dividends in the microserver cluster.  Low latency means better performance with small packet transmissions and also with storage-related data transfers.  For certain high-performance workloads, the processors in microserver clusters must communicate constantly with each other, making low latency essential.

 

With the perfect combination of bandwidth, routing and latency, Ethernet is the right technology for microserver networks.  Check out our product page to take a look at the Intel Ethernet Switch FM5224 that is built just for this application.

With the growing popularity of micro servers, networking has moved from connecting computers on desktops or servers in a rack to connecting processors on a board. 

 

That's a new way of thinking about networking, and it takes a new kind of switch chip, which is why we've recently introduced the Intel® Ethernet Switch FM5224. Meeting these new market needs takes a device that mixes a new design with legacy networking strengths.

 

What’s New

Micro servers are part of an emerging computing platform architecture that includes many low power processor modules in a single enclosure.  The micro server operating system parcels out computing tasks to the various processors and coordinates their work. For certain distributed data center workloads, there is a lot of interest in this approach.

 

However, designing a dense micro server cluster calls for significant uplink bandwidth combined with high port-count interconnectivity between the processor modules.  Enter the new FM5224, which we call a high port-count Ethernet switch, to meet these needs.

 

The device can support up to 64 nonblocking ports of 2.5GbE along with up to eight 10GbE uplink ports (or two 40GbE ports). 

 

Why 2.5GbE?  This speed was popularized by blade server systems but was never made an official Ethernet standard. Our analysis of the bandwidth needs of micro servers shows that many workloads need more than 1GbE per server module, which makes our 2.5GbE switch ports ideal for this application.

 

What’s the Same

While micro servers are new, they still communicate using Ethernet.  The FM5224 is built using Intel’s Alta switch architecture, which brings the benefit of some advanced features for micro server applications.

 

The I/O-heavy nature of micro servers makes non-blocking, low-latency performance very important.  The FM5224 is built with Intel's FlexPipe packet processing technology, which delivers a 360 million packets per second forwarding rate.  The device also offers less than 400 ns of latency, independent of packet size or features enabled.

 

Combined, this performance makes it possible for each processor to pass data at wire rate, even small packets, which are expected to make up most of the data passed between processors. In addition, the FM5224 has excellent load distribution features that can be used to efficiently spread the workload across multiple micro server modules.

 

For the micro server chassis uplinks, OEMs have their choice of four 10GbE or two 40GbE ports that can drive direct-attach copper cables up to 7 meters without the need for an external PHY.

 

With the FM5224, OEMs have a tremendously flexible chip that is fine-tuned for micro server applications.

 


It can be argued that there’s never been a time when innovation was needed more in data center networks.

 

The increased reliance on information served up from cloud or enterprise data centers has made it vital to bring new security, performance management, privacy, traffic segregation, service-level agreement and revenue generation applications to the network.

 

Software-defined networking (SDN) and the Intel® Open Network Platform (ONP) help to open the network to this innovation, which brings us to the last in a series of blog posts about SDN use cases.  This one focuses on the way this new network paradigm helps to simplify network application / service deployment.

 

SDN's primary innovation is that it separates a network's control plane from its data plane in order to centralize packet processing in a software controller. That makes it easier to deploy network-wide policies or services, because the controller has a comprehensive view of data flows, including congestion, QoS, and security information.
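
As a toy illustration of that split (hypothetical names, not any real controller API), the data plane reduces to a prioritized match/action table that the controller programs; anything the table doesn't cover goes back to the controller:

    flow_table = [
        # (match criteria, action) - installed by the central controller
        ({"dst_ip": "10.0.1.5", "tcp_port": 80}, "forward:port3"),
        ({"dst_ip": "10.0.1.5"},                 "forward:port2"),
        ({},                                     "send_to_controller"),  # table miss
    ]

    def apply_flow_table(packet):
        """Return the action of the first rule whose fields all match."""
        for match, action in flow_table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "drop"

    print(apply_flow_table({"dst_ip": "10.0.1.5", "tcp_port": 80}))  # forward:port3
    print(apply_flow_table({"dst_ip": "192.168.9.9"}))               # send_to_controller

The switch stays simple and fast; all of the policy intelligence lives in the controller that writes the table.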

 

Logically, the SDN controller sits above the infrastructure layer, but below an application layer with standardized APIs that allow easy integration of business applications.

 

Thus, applications that used to require standalone hardware – and could only see the traffic passing through them – can now run on a server and manage an entire network using information available from the controller.

 

It's the disaggregation of the network into these logical layers that makes this possible. Intel has taken that concept one step further with its Open Network Platform announcement, which opens up key elements at each layer so that any OEM can leverage this infrastructure in the development of an SDN application. Take a look at www.intel.com/onp to get more information about ONP.

 

Applications bring added user features to a network, but also impact the manageability and performance of the network.  SDN lays the foundation for companies like Intel to open up the network for this innovation.

In recent months, I've been to several events where software-defined networking has been the main topic of discussion.  Now that I think about it, all of those events have been in Silicon Valley, the epicenter of SDN.

 

Recently, I left that bubble to go to Interop in Las Vegas - and I’m finding that SDN is a big deal but it’s not the only thing going on in networking.

 

Of course, I talked about SDN because I'm focused that way.  But the overall show was about all of the topics that networking managers are concerned with, like application intelligence, staffing, cloud vendor selection, and how to kill Spanning Tree.  In fact, a recent search of the Interop site for the acronym SDN turned up only 19 references.

 

Why is this important?  While there is a huge amount of fire behind SDN in certain circles, the market at large has only a passing acquaintance with the technology.  I felt that Interop was a great opportunity for me to educate the wider audience on the benefits of the technology.

 

I took advantage of that opportunity by giving an overview of SDN technology and why it's important, and then discussing the Intel solution in a bit more detail.  This includes a top-of-rack switch reference design that is part of the Open Network Platform, the software elements and open APIs we offer OEMs (dubbed Open Network Software), and finally the Intel Ethernet Switch FM6000 series, the low-latency 10G/40G switch IC at the heart of the entire solution.

 

With this solution, we want to unleash the power of SDN for data center networking. One key step is education and that’s why the presentation at Interop was important.

The Intel® Developer Forum in Beijing took place two weeks ago, and the interest in my two SDN-related presentations was very high.

 

My poster chat drew 40 or so people, who stopped by in groups of 6-8 to hear a high-level overview of the Intel SDN story.  The attendance more than doubled for my conference session, where I went a bit deeper into Intel's new data center and telecom network transformation initiative – giving a preview of the three product announcements made at the Open Networking Summit.

 

One challenge that is unique to China is scaling new web services for a potential market of 1.3 billion people – almost four times that of the U.S.  There were a lot of questions on this topic from top service providers, which I took to indicate that scaling is very important.

 

The other difference I noticed is that with less of a legacy network infrastructure than in the U.S., Chinese network managers are very open to trying new things to get the scalability and performance they need to deliver great service levels.

 

One key element of my presentation was a deep dive into why new networking platforms, like the ONP introduced at ONS, are so necessary to advance the state of the art of SDN and to provide ease of scaling in these high-performance data centers.

 

As the server virtualization trend expanded into network virtualization, building high-performance, low-latency networks became much more complex for enterprises and data center operators.  New IP protocols like TRILL helped, but maintaining server/network coherency became very labor intensive.

 

To network managers dealing with this challenge, the SDN promise of separating out the network control plane into a centralized network controller architecture was an immediate solution to a nagging problem.  And first-generation networking products delivered on this promise by layering SDN onto existing switches.

 

But the promise of SDN is much bigger than that; it’s nothing short of opening networks to a wave of innovation around new software functionality along with additional network cost-per-bit reductions. That’s the total story that ONP delivers on.

 

The potential of SDN for network innovation mirrors the transition from proprietary minicomputers to the PC, which spawned countless innovations thanks to its combination of standard processors, operating systems, and value-added applications.

 

In the network version of this story, enterprises evolve from vertically integrated networking platforms that are closed and slow to innovate, to a more open system with standardized switch silicon that has an open API to the control plane (or control planes for specialized applications).  These control planes then communicate through another API with apps running on a virtual server. 

 

This means that a network that had to be architected around special appliances to do packet inspection or provide security can now have those applications running on a high-performance server. The global controller will know what packets need to be processed by that application and will direct them to the application before forwarding them to their destination.

 

This architecture breaks down many of the barriers to entry in this market.  For new players, all they need are their software skills to develop their application. They can sell it into any network that supports the open API – regardless of the manufacturer.  On the other side of the coin, an existing software company can use standard hardware to easily develop its own complete solution, speeding time to market.

 

For every company that needs to scale quickly and keep network costs and complexity low – especially in fast growing economies like China – this is really good news.

“An inflection point is an event that changes the way we think and act.”

 

With this quote from Andy Grove in her keynote speech, Rose Schooler, VP of Intel Architecture Group and GM of Intel's Communications and Storage Infrastructure Group, launched the next phase of Intel’s SDN-based data center and telecom network transformation initiative at the Open Networking Summit this week.

 

The speech capped a great show for me and for the Intel Communications and Storage Infrastructure Group.  Our goal going into the event was to help the industry see how to get the full value out of the open networking paradigm shift. 

 

In just under an hour, Rose accomplished that by providing an overview of our data center and telecom network strategies, introducing three new products, and letting partner VMware and customer Verizon demonstrate how they are using products from Intel.

 

Here’s a summary of the key products that were launched at the show (you can read more in our press release):

 

Intel® Open Network Platform Switch Reference Design: You’ve read about “Seacliff Trail” in this blog before.   What’s new is a tight integration with Wind River Open Network Software and a customizable software stack using Wind River Linux.  Seacliff Trail is now the ideal reference design for OEMs, offering a combination of customizable software and fast, low latency merchant silicon for next-generation SDN data center networking systems.

 

Intel® Data Plane Development Kit Accelerated Open vSwitch: With this project, Intel wants to help Open vSwitch developers speed up performance on small packets. The bulk of the packets in virtual machine networking applications are 64 bytes, and with the accelerated Open vSwitch we're able to boost small-packet port-to-port performance by 10 times and accelerate virtual machine-to-virtual machine performance by five times.
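
Why do 64-byte packets matter so much?  At that size a single 10GbE port runs at nearly 15 million packets per second, so per-packet software overhead dominates.  A quick line-rate check in Python (standard Ethernet preamble and inter-frame gap assumed):

    LINK_GBPS = 10
    wire_bits = (64 + 8 + 12) * 8  # 64B frame + preamble + inter-frame gap

    print(LINK_GBPS * 1e9 / wire_bits / 1e6)  # ~14.88 Mpps per 10GbE port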

 

Intel® Open Network Platform Server Reference Design: This innovative virtual network server reference design called “Sunrise Trail” will allow OEMs to create virtual appliances on standard Intel-architecture servers.  The reference design combines the DPDK Open vSwitch, support for SDN and NFV standards along with the Intel® Xeon® processor, Intel 82599 Ethernet Controller and Intel Communications Chipset 89xx series.  More to come on this new reference design as the first alpha units are due later this year.

 

Rose closed her speech with another Andy Grove maxim about inflection points – that they reflect a change in customers' values and preferences.

 

As you can see from these significant new products, Intel is moving fast to be in a good position to address these customer value and preference changes. 

Last year Intel® participated in its first demonstration of software-defined networking (SDN) at the Open Network Summit.  This year we’re coming back to the show with an expanded SDN product strategy and a new Open Networking Platform that will open up the entire SDN value chain for data centers.

 

The show opened yesterday in the Santa Clara Convention Center with a full day of tutorials and then an evening reception amongst the exhibits.  The program starts today and runs through Wednesday.

 

Last year, we said that the SDN revolution “starts at the switch.” This year, we’re bringing together the other elements of an SDN solution that will enable customers to dramatically boost the fabric performance in virtualized data center environments.

 

Starting on Monday, I'll be staffing the booth at the show to demonstrate our SDN solution. In addition, our Seacliff Trail 48-port 10GbE top-of-rack switch will be part of a multivendor demonstration of NEC's ProgrammableFlow Controller.

 

On Wednesday, Rose Schooler, Vice President of the Intel Architecture Group and General Manager of the Communications and Storage Infrastructure Group, will make a keynote presentation that will cover key aspects of our Open Network Platform. She will announce some new components of the platform and talk about the customer need it meets. 

 

If you are not going to the show, but would like to hear Rose speak, you can watch the live webcast of her presentation.

 

The world of open networking moves fast.  The progress we’ve shown since ONS 2012 shows that Intel is keeping pace.  Check back for more blog updates later this week.

We started our blog series on software defined networking (SDN) use cases by looking at how SDN might enable network virtualization (http://communities.intel.com/community/wired/blog/2013/03/01/sdn-use-cases-network-virtualization).  We continue on a somewhat similar application by looking at SDN and virtual network appliances. 

 

Many networks utilize specialized appliances, such as firewalls, load balancers, WAN accelerators and others, to provide specialized network packet processing and functionality.

 

Many of these are now being turned into virtual appliances.  The term “virtual appliance” was coined to describe a self-contained virtual machine powered by an operating system that had a pre-configured application on top.

 

In the enterprise data center, one of the challenges of the virtual network appliance is that it often needs to be in the data path – either before the router, in the case of a firewall, or before a server, in the case of a load balancer. 

 

SDN replaces the IP routing functionality on each switch with a network controller that can see all data and resources on the network and direct data flows accordingly.  With this global view, it can redirect data packets to virtual network appliances directly. In essence, it creates a data path for each data flow in order to direct it to the right virtual appliance.
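
Here is a minimal sketch of that idea in Python (all names are illustrative, not a real controller interface): the controller computes a per-flow path, splicing in whatever appliances the flow's policy requires before the final destination.

    appliances = {"firewall": "vm-fw-01", "load_balancer": "vm-lb-01"}

    def build_path(flow):
        """Return the hop list for a flow, inserting required appliances."""
        path = []
        if flow.get("external"):          # traffic entering from outside
            path.append(appliances["firewall"])
        if flow.get("service") == "web":  # fan web traffic across servers
            path.append(appliances["load_balancer"])
        path.append(flow["destination"])
        return path

    print(build_path({"external": True, "service": "web", "destination": "web-07"}))
    # ['vm-fw-01', 'vm-lb-01', 'web-07']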

 

This is also important in cloud applications where multi-tenant virtual servers may need dramatically different resources and where virtual data paths complicate the sharing of virtual appliances.

 

The network flow flexibility that SDN brings to datacenter and cloud networks makes virtual network appliances an even more viable and cost-effective way to deliver the network processing needed for secure and high-performance networks.

Later this week, I’ll board a 13-hour flight from Los Angeles to Beijing to take part in the Intel® Developer Forum on April 10-11.

 

If you are going to the event and are interested in what Intel is doing in the data center and connected systems market, I recommend that you first go hear our General Manager Diane Bryant give her keynote talk about the future of our business on April 10 between 9:00 and 11:00am.

 

Then, you can hear me talk at two times during the conference: at my Poster Chat on April 10 at 2pm, and at my April 11 session presentation at 3:45pm (where I will pair up with Shashi Gowda of our Wind River Systems division).

 

In both talks, I’m going to be sharing how Intel sees the future of the software defined network (SDN) market and what product plans are in place to help OEMs participating in this market.

 

In this blog post, I'll touch on my poster chat, and next week I'll provide an overview of the session presentation.  If you've never been to a poster chat, it is exactly what the words say: I've created a large poster, and I'll describe it and answer any questions that come up.

 

My poster for IDF Beijing covers the following topics:

  • The evolution from traditional IP networks to SDN networks and the advantages that come from that.
  • A description of the Intel Ethernet Switch FM6000 functionality.  Here I will talk about how we get low latency and discuss our Seacliff Trail 48-port 10GbE/40GbE Ethernet switch reference design.
  • From there, I’ll go into a discussion of our software architecture that starts with APIs to open the FM6000 to SDN controllers, and contains the operating system and other software components necessary to fully implement SDN switching.
  • Then, I want to dig deeply into our FlexPipe frame forwarding architecture, which is built with advanced frame header processing that makes it flexible for an evolving standard like SDN.

 

It’s a lot to talk about in an hour, but I’m looking forward to providing a high-level overview that I can then explore further in my workshop.  More on that next week.
