Full Unified Network Functionality at No Additional Cost


Today, Intel extends its leadership in Ethernet and Unified Networking with the addition of qualified Open Fibre Channel over Ethernet (Open FCoE) support to the Intel® Ethernet Server Adapter X520 and Intel® 82599 10 Gigabit Controller product family.


Adding Open FCoE to Intel’s robust 10 Gigabit Ethernet (10GbE) unified networking solutions enables IT departments to create data center superhighways that could reduce global IT spending by three billion dollars a year. A simple, unified data center network is a cornerstone of Intel’s Cloud 2015 vision, which was announced in October.


Intel Open FCoE is available as a free upgrade on Intel 10 Gigabit products and is qualified by EMC* E-lab and NetApp*. Industry support for Intel Open FCoE includes Cisco*, Dell*, EMC, NetApp, Oracle*, Red Hat*, and Novell*.


Benefits of Open FCoE

The Open FCoE architecture combines FCoE initiators in the Microsoft* Windows* and Linux* operating systems and in the VMware* ESX hypervisor to deliver high-performance FCoE solutions over standard 10GbE Ethernet adapters.


This approach allows IT managers to simplify the data center and standardize on a single adapter for all LAN and SAN connectivity. Further, the Intel 10GbE server products (LOM and NIC) fully offload the FCoE data path to deliver full-featured Converged Network Adapter (CNA) functionality without compromising power efficiency or interoperability. What's more, the Open FCoE solution is available on Intel® Ethernet 10GbE products at no additional charge.


Key advantages of the Intel Open FCoE solution are:

  • Scalable Performance: Because there are no proprietary hardware offloads, Intel Open FCoE performance scales naturally with the server processor. For real-life applications, Open FCoE delivers the performance IT managers expect. See the third-party performance evaluation by Demartek* comparing the Intel® X520 Server Adapter with competitor CNAs.
  • Ease of Use: The Open FCoE approach uses standard 10GbE adapters, so IT can leverage existing knowledge to configure and manage the adapters for FCoE deployments and standardize on Intel 10GbE server adapters.
  • Cost Effectiveness: The intelligent combination of hardware data-plane offloads and software initiators delivers full CNA functionality at a fraction of the price.
  • Reliability: Intel's 30 years of Ethernet experience, combined with certified FCoE initiators, make for a reliable Open FCoE solution that just works.


Ecosystem Support

The Intel® Open FCoE solution is supported by leading OEMs and storage vendors.

Certification Status (Open FCoE Compatibility)

  • EMC E-lab: Available Now
  • NetApp Proven: Available Now
  • Microsoft WHQL: Available Now


Operating Systems Supported

  • Microsoft Windows Server* 2008 SP2 - Standard, Datacenter, or Enterprise
  • Microsoft Windows Server* 2008 R2 - Standard, Datacenter, or Enterprise
  • Novell SUSE* Linux Enterprise 11 SP1
  • Red Hat* Enterprise Linux 6


More Information

www.intel.com/go/unifiednetworking


Look for more FCoE articles in the coming weeks!


Joe Talks About Tom's

Posted by dougb Jan 24, 2011

Here at the Wired community, you know we like to talk to you about our Ethernet products. We aren't the only ones talking about them, though. The good guys over at Tom's Hardware* have been talking about them too. Since there is almost always more to the story, I asked Joe Edwards, who is featured in the article, what it was like to be profiled in such detail.


Q:  You really showed them around, how long were you at it?

Joe Edwards:  The writer came by the office around 9am and we talked his ear off until a bit after 4pm.  I think by 1:30 he was more than a little glazed over.


Q:  Glazed?

JE: <laughs> The Tom’s Hardware audience demands a lot of detail and we gave it to him!  Most people would have been overloaded by noon, but he kept with us for a while.


Q: So how did he overcome the glazing?

JE:  He had a digital recorder with him and we filled it up. The writer commented that he had thought he was a technical writer before, but after Pete went on about signal theory he wasn't sure anymore.


Q: Was it just the writer?

JE:  Samantha, the PR consultant, tagged along, as did Brian from Wired Ethernet Comms, the photographer, William the writer, Pete, and I, with others joining us at times, all in the X-Lab.


Q:  That’s a lot of people!

JE:  It really made the moment feel important since this wasn’t just the normal tour of the X-Lab.   You could tell this was going to be bigger.


Q: What didn’t get into the article?

JE:  We talked a bit about the Intel® 82599 10 Gigabit Ethernet Controller and high speed serial interface characterization work that also happens in the X-Lab.  That is my area of expertise and I was hoping it would make an appearance in the article.  But the Intel® Ethernet Controller 10 Gigabit X540 is a game changer and was the clear choice for the article.  There is a possibility of a follow-up article on Fibre Channel over Ethernet and High Speed Serial topics, but nothing planned.  (Editor’s note:  If you want to see that type of article, please comment!!)


Q:  What does the X-Lab mean to Wired Intel® Ethernet quality?

JE:  The X-Lab, as amazing as it is, represents just one part of our validation coverage: the platform physical layer. We have an army of people who do software testing, interoperability testing, and higher-level protocol testing, not to mention physical testing. We have labs of many shapes and sizes, not just the X-Lab. The X-Lab might be the coolest, but it takes all of them to ship a quality product.


Joe Edwards is a high speed serial conformance engineer at Intel.

Updated SR-IOV Primer now Available

 

I began as the TME (Technical Marketing Engineer) for the Intel® Ethernet Virtualization Technologies late in 2007. I spent the first months trying to learn what the technologies, Virtual Machine Device Queues (VMDq) and Single Root I/O Virtualization (SR-IOV), were and how they worked. I find the best way to learn something is to teach it, and one way TMEs 'teach' is to write white papers. I wrote a paper or two on VMDq, which was the 'hot' topic at the time.

 

In mid 2008, I began writing a document designed to give a mid-level overview of what SR-IOV is, why it was created and how it works.  In December of 2008 I published the PCI-SIG* SR-IOV Primer paper.  I learned a lot while writing it.

 

Since then, this document seems to have become popular. If you search on Google* for “SR-IOV,” it comes up first in the list (#4 on Bing* and Yahoo*, and #1 on ASK*). Does this mean that millions of people are reading it? I don't know. All I know is that it is review time around here at Intel, and it seemed like a good idea to mention it. :)

 

So, back to seriousness: the document has become a bit dated, and my understanding of the technology has improved as Intel continues to add more support in our Ethernet controllers and I continue to speak with folks much brighter than I am. So I have updated the Primer doc for your reading pleasure. The core content is the same, but I have filled out a few sections more fully, added some new content, and made many new pretty pictures.

 

The doc is here, enjoy!

This is another in the series of chalk talks about parameters you'll find in the Intel® PROSet for Windows* GUI that modify the behavior of the driver.

 

This time out: the impressive-sounding but misunderstood Adaptive Inter-Frame Spacing.

 

Here is the shot from my desktop.

[Screenshot: AIFS_PROSET.jpg, the Adaptive Inter-Frame Spacing setting in the Intel® PROSet GUI]

 

 

(Yes, I have a laptop and a desktop. You should too.)

 

What is it?

Adaptive Inter-Frame Spacing is a way for the driver to adjust the Inter-Frame Spacing dynamically as time goes by.

 

What is Inter-Frame Spacing?

Inter-Frame Spacing (IFS) is sometimes called wait-after-win, but it is really about the space between Ethernet frames. Ethernet frames don't run back to back without a gap; a minimum idle gap is required so the collision-detection machinery left over from Ethernet's half-duplex origins can work. The time between frames has a legal minimum, but since frames might be forever apart, there is no maximum; you just don't get any more frames. The minimum case is the interesting one for this discussion, because if you shorten the time between frames, you can fit more frames into a given time period.

Imagine cars moving down a busy highway. The cars are the Ethernet frames, and the space between cars is the IFS. The driver, in both cases, can adjust the space between the cars or the frames. And just like on the highway, if you get the frames too close together you can't tell where one ends and the next begins. On a highway this leads to crashes; in Ethernet it's far less messy, but the frames can get concatenated and thereby discarded. If one of those frames carried a TCP/IP packet in the middle of a long file transfer, the extra packets gained by shrinking the IFS are lost to retransmitting the datagram. Don't be tempted to be penny wise and pound foolish.
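
To put rough numbers on the highway analogy, here is a quick back-of-the-envelope sketch in Python. It is my own illustration, not anything the driver exposes, and the "tighter gap" value is purely hypothetical; it just shows how much theoretical frame rate a smaller gap can buy on a gigabit link, using the standard numbers (8 bytes of preamble, a 64-byte minimum frame, and a 12-byte, 96-bit-time minimum gap).

# Back-of-the-envelope: how many minimum-size frames fit on a gigabit link
# for a given inter-frame gap. Illustrative only; the driver does not let
# you set the gap in bytes like this.

LINK_BPS = 1_000_000_000     # gigabit Ethernet line rate, bits per second
PREAMBLE = 8                 # preamble + start-of-frame delimiter, bytes
MIN_FRAME = 64               # minimum Ethernet frame, bytes

def frames_per_second(ifg_bytes: int) -> float:
    """Theoretical maximum frame rate for minimum-size frames."""
    bits_per_frame = (PREAMBLE + MIN_FRAME + ifg_bytes) * 8
    return LINK_BPS / bits_per_frame

standard = frames_per_second(12)   # the legal minimum gap (96 bit times)
squeezed = frames_per_second(8)    # a hypothetical tighter gap

print(f"standard gap: {standard:,.0f} frames/s")
print(f"tighter gap:  {squeezed:,.0f} frames/s "
      f"({(squeezed / standard - 1) * 100:.1f}% more)")

For minimum-size frames, that four-byte squeeze is worth roughly five percent; with full-size 1518-byte frames it is well under one percent, which is why the gain only matters when you are pushing small frames at line rate.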

 

When should I turn it on?

Adaptive Inter-Frame Spacing is always 'Disabled' by default. This isn't because we don't love the feature. We do. But Gigabit Ethernet is full duplex, and the amount gained by turning AIFS on is usually minor unless you're running at full line rate for extended periods. For a few bursts a day, the reliability outweighs the extra packets you could fit in, and if you're running that long at line rate, consider 10 Gigabit. Back in the 10/100 days, AIFS could get you a nice performance gain across your network, but in a switched environment it becomes more risk than reward. It can also lower latencies, since the packets have less time between them.

 

What if I do turn it on?

If you do turn it on, know your infrastructure well. On a race course, race cars can get really close together, but that's because the course is a bunch of well-known pieces. If you know all the pieces of your infrastructure can handle a lower IFS, and you need the extra performance, turn on AIFS. But keep an eye on discards and other error counters to make sure the attempt at higher performance isn't backfiring on you. Race cars can run that close together because there aren't a lot of bumps in the road and all of the drivers are highly trained professionals. If you put me on the race track, you'd better bet that everyone would give my car a bigger packet gap. The same rule applies on a network: if you don't know it can take it, it's not worth the risk.
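
If you want something simple to watch while you experiment, here is a rough sketch of the kind of check I mean. It is written in Python with the third-party psutil package (my assumption, not an Intel tool), and the interface name is a placeholder you would replace with your adapter's name. It polls the per-NIC error and drop counters every ten seconds and flags any increase.

# Rough watcher for receive errors and drops while experimenting with AIFS.
# Assumes the psutil package is installed (pip install psutil).
import time
import psutil

NIC = "Ethernet 2"  # placeholder; use the name of your Intel adapter

def snapshot(nic):
    # psutil exposes per-interface counters, including errors and drops
    counters = psutil.net_io_counters(pernic=True)[nic]
    return counters.errin, counters.dropin

prev_err, prev_drop = snapshot(NIC)
while True:
    time.sleep(10)
    err, drop = snapshot(NIC)
    if err > prev_err or drop > prev_drop:
        print(f"{NIC}: +{err - prev_err} errors, +{drop - prev_drop} drops "
              "in the last 10 seconds; consider backing AIFS off")
    prev_err, prev_drop = err, drop

If the counters stay flat under your real traffic, the tighter spacing is probably safe for your gear; if they start climbing, flip the setting back to 'Disabled'.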

 

Thanks for using Intel® Ethernet.
