tstachura

iWARP? Gesundheit!

Posted by tstachura Jan 28, 2010

iWHAT?  iWHO?  What is iWARP?

The acronym breakdown is internet Wide-Area RDMA Protocol.

And the acronym within the acronym…  RDMA is Remote Direct Memory Access.

In English, iWARP is ultra low-latency Ethernet. It’s worth a look if you are building a high-performance cluster on Ethernet.

Click here for more detailed information on how iWARP works and here for NICs Intel offers with this technology.
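For the programmers in the audience, here is a rough sketch of what the RDMA programming model looks like. It uses the OFED verbs API, which iWARP-capable NICs generally expose; treat it as a simplified illustration (no connection setup or actual data transfer), not a complete iWARP application:

    /* Minimal sketch: open an RDMA-capable device and register a buffer
     * so the NIC can read/write it directly, with no kernel copy on the
     * data path. Connection setup and the RDMA ops themselves are omitted. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        struct ibv_device **devs = ibv_get_device_list(NULL);
        if (!devs || !devs[0]) {
            fprintf(stderr, "no RDMA-capable devices found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        /* Register 4 KiB that a remote peer could target with RDMA writes. */
        size_t len = 4096;
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_WRITE);

        /* The rkey is what the remote side uses to address this memory. */
        printf("registered %zu bytes, rkey=0x%x\n", len, mr->rkey);

        ibv_dereg_mr(mr);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        free(buf);
        return 0;
    }

The point of all that ceremony: once memory is registered, data moves NIC-to-NIC without the kernel touching every packet, and that is where the latency savings come from.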

ericadams

Why test to IEEE specs?

Posted by ericadams Jan 21, 2010

It is because you want your network product to work seamlessly with networking equipment from other manufacturers. What does it really mean, though, to pass the IEEE 802.3® tests? Does it guarantee that your network card or LOM (LAN on Motherboard) is actually going to work?  There are no guarantees in life, and this is one of them!  However, if you have a NIC from Intel and a NIC from another manufacturer, and both of them pass all of the IEEE tests, then there is a very high probability that both will pass traffic at near line speed over the 100m required by the spec, and that they will play nicely with each other in the networking sandbox. The spec is robust enough that even a very badly designed NIC would still pass traffic at a reasonable speed at less than 100m. The spec for 10 Megabit is so robust you can practically use a chain-link fence as your network cable.  In fact, Cisco* has demonstrated 10Mb over barbed wire! The point is that you can fail some 10 Megabit IEEE tests and still link up at 200m. The gigabit tests are more sensitive to failure, though.

 

Here at Intel we design our NICs and LOMs to be 100% conformant to the IEEE spec.  If our results are marginal on a particular IEEE test parameter, it goes into a spec update. To help ensure that our customers’ designs are interoperable, we have set up IEEE test labs at three sites around the world, where we validate a multitude of customer designs. Some designs are better than others, and you can see the direct results of those design decisions in the IEEE test results. It is possible to have a couple of IEEE tests marginally fail but still perform well past 100m in bit error rate testing.
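To put a number on what “bit error rate testing” means in practice, here is a back-of-the-envelope check. It assumes the commonly cited 802.3 objective for 1000BASE-T of a bit error ratio no worse than 1e-10; treat that figure as my recollection, not a spec quote:

    /* Back-of-the-envelope: how often does a conforming gigabit link
     * see a single bit error when running at full line rate? */
    #include <stdio.h>

    int main(void)
    {
        double line_rate_bps = 1e9;   /* 1000BASE-T, bits per second */
        double ber           = 1e-10; /* assumed worst-case bit error ratio */

        double errors_per_second = line_rate_bps * ber;
        printf("about one bit error every %.0f seconds\n",
               1.0 / errors_per_second); /* prints: ... every 10 seconds */
        return 0;
    }

So when a design still hits numbers like that past 100m, a marginal result on one transmitter parameter is not the end of the world.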

Over the next few months I will be writing a series of blogs comparing different types of designs and how they can impact the IEEE tests, which ultimately impacts how well your product will work within a LAN. In addition, I am working with a core group of expert engineers with over 50 years of combined network experience to completely review every single design and placement requirement we have and revalidate them for our current and future products. We will be experimenting with EMI, near-field noise, electrostatic discharge, etc., to further tune our recommendations and help designers build better boards.  Designing the analog front end for LAN into a noisy board optimized for digital traffic is no small feat!

For our next-generation products, our design guidance is only getting better and more accurate.  If you follow Intel’s schematic and layout design guidelines, which are available to our customers under NDA, then you will have a very high probability of passing all of these IEEE tests. This will lead to a better experience for the end user and provide a solid foundation for an Ethernet connection that just works.

dougb

iSCSI Webcast Announcement

Posted by dougb Jan 12, 2010

This coming Thursday, January 14, Intel and Microsoft* will host a webcast on server connectivity to iSCSI SANs and announce a breakthrough in iSCSI networking performance.

 

We’ll discuss how:

·      Every server ships ready for immediate iSCSI connectivity, with iSCSI support embedded in Intel® Ethernet Server Adapters and the Microsoft server operating system, in both physical-boot and virtualized environments (see the example just after this list).

·      Intel and Microsoft collaborated on the latest releases (Windows Server® 2008 R2, the Intel® Xeon® 5500 Processor Server Platform, and Intel® Ethernet 10GbE Server Adapters) to deliver breakthrough performance and new I/O and iSCSI features.

·      Intel and Microsoft are ensuring that native iSCSI products can scale to meet the demands of cost-conscious medium-size businesses and large Enterprise-class data centers.

·      Intel® Ethernet Server Adapters provide hardware acceleration for native Windows Server network and storage protocols to deliver great iSCSI performance, while allowing IT administrators to use native, trusted, fully compatible OS protocols.  This is critical: for any protocol or technology to be successful on Ethernet, it must adopt the plug-and-play, economies-of-scale characteristics of the ubiquitous Ethernet network.  Native iSCSI in the OS plus Intel adapters does just this, which has led to the tremendous growth iSCSI has experienced over the last 3 years.

·      Intel and Microsoft are addressing the challenges of server virtualization network connectivity with Windows Server 2008 R2 to deliver near-native performance and features.
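“Ships ready” really does mean no extra software. As a purely illustrative example (the portal address and target IQN below are made up, not from the webcast), discovering and logging into an iSCSI target from Windows Server can be as simple as a few commands with the built-in iscsicli tool:

    C:\> iscsicli QAddTargetPortal 192.168.1.50
    C:\> iscsicli ListTargets
    C:\> iscsicli QLoginTarget iqn.1991-05.com.microsoft:san-target-1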

 

Please join Jordan Plawner, Intel Senior Product Planner for Storage Networking, and Suzanne Morgan, Microsoft Senior Program Manager for Windows Storage, to hear the big news.

 

Use this link to register for the webcast: http://msevents.microsoft.com/CUI/WebCastEventDetails.aspx?EventID=1032432956&EventCategory=4&culture=en-US&CountryCode=US

A common question we get around the farm revolves around which driver supports which silicon.  In the old days it was pretty easy: we had one driver and it did everything.  But that turned into a sustaining, development, and release nightmare.  With three separate vectors (client development, server development, and sustaining) all trying to act on the code at once, it became clear we needed to break it up.  Leaving the PCI-X generation in its own driver let the newer drivers pick up some really neat code (like MSI-X support) that would be extra baggage for a lot of older adapters.  Plus, some of the operating system (O/S) guys are really selective about what they put into the kernel.  By breaking it up, we didn’t have to worry about getting all of our products kicked out if one hardware combination had problems.

 

Warning: HTML tables!

        Linux* (sourceforge.net/projects/e1000)

Family              Driver Name   Supported Silicon
10GbE PCIe          ixgbe         82598, 82599
10GbE PCI-X         ixgb          82597
1GbE PCIe Server    igb           82575, 82576, 82580, i350
1GbE PCIe Client    e1000e        82574, ICH10, 82567, 82571, 82563, 82573E/V, 82573L, 82577, 82578, 82583, 82579
PCI/PCI-X           e1000         82544, 82545, 82546, 82540, 82541, 82547, older stuff
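Not sure which of these drivers actually claimed your interface? Ask the kernel. Here is a minimal sketch using the Linux ETHTOOL_GDRVINFO ioctl (the interface name "eth0" is just an example); it reports the same driver, version, and bus info you would get from running ethtool -i eth0 at the shell:

    /* Print which driver is bound to a network interface, via the
     * same ETHTOOL_GDRVINFO ioctl that "ethtool -i" uses. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>

    int main(void)
    {
        struct ethtool_drvinfo info;
        struct ifreq ifr;

        memset(&info, 0, sizeof(info));
        memset(&ifr, 0, sizeof(ifr));
        info.cmd = ETHTOOL_GDRVINFO;
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1); /* example name */
        ifr.ifr_data = (char *)&info;

        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0 || ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
            perror("SIOCETHTOOL");
            return 1;
        }
        /* e.g. "driver=e1000e version=1.0.2-k2 bus=0000:00:19.0" */
        printf("driver=%s version=%s bus=%s\n",
               info.driver, info.version, info.bus_info);
        close(fd);
        return 0;
    }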

 

Microsoft* Windows is a little bit messier because of the need for WHQL certification and some extra requirements that it places on the driver model.

 

        Windows (networking.intel.com)

Family             Driver Name   Supported Silicon
10GbE PCIe         ixe           82598
10GbE PCIe         ixn           82599
10GbE PCI-X        ixgb          82597
2007+ 1GbE PCIe    e1q           82575, 82576, 82574, 82583
1GbE Server        e1r           82580, i350
1GbE Client        e1c           82579V, 82579LM
1GbE Client        e1y           82567
1GbE Client        e1k           82577, 82578
1GbE PCIe          e1e           82573E/V, 82573L, 82571, 82563

 

The FreeBSD* drivers do things their own way, just like the OS guys.

 

        FreeBSD

Family                         Driver Name   Supported Silicon
10GbE PCIe                     ixgbe         82598, 82599
10GbE PCI-X                    ixgb          82597
1GbE PCIe Server               igb           82575, 82576, 82580, i350
1GbE PCIe Client, PCI/PCI-X    em            ICH8, ICH9, 82574, ICH10, 82567, 82571, 82563, 82573E/V, 82573L, 82546, 82541, 82540, 82545, older stuff like 82547, 82543, 82544, etc.

 

There are some extras supported in some of these, like the 82543, but that’s pretty old and I don’t want to be too verbose.   For stuff that old, there aren’t any newer drivers.  I only list the 82597 because it’s on SourceForge; otherwise I wouldn’t list it.

The following operating systems are supported by the O/S vendor; please see them for the driver for our products.  You may have to upgrade the O/S version in some cases to get support.   These drivers are created by the O/S vendor, but we do provide guidance and access to our best known methods.  If you want your custom O/S added to this list, drop me a private e-mail and I’ll be sure to add it!

 

Solaris*

OpenSolaris*

VxWorks*

QNX*

VMware*

 

Time to wrap it up:

1)      Using our handy charts, it’s easy to map each driver to its silicon product

2)      Intel® and others provide a sizable number of drivers for numerous operating systems

3)      Operating system vendors that elect to provide their own driver usually get assistance from Intel

4)      Thanks for using Intel® Ethernet

 

(Updated on Feb 9, 2010 to clean up the table layout and add in 82580)

(Updated again on June 3, 2011 to add the i350 product and 82579 products into all three tables.)
