
            Even though it powers most networked computing, IPv4 is showing its age. At almost 30 years old, it was bound to happen. IPv6 will help cure a lot of what ails IPv4, starting with a staggering number of addresses: 128-bit addressing gives roughly 3.4 x 10^38 of them, versus the 4.3 billion possible with IPv4's 32 bits. It is the future of the Internet, but it's also something that most people are not yet ready to work with. Intel® Ethernet wired networking products have supported IPv6 for many years now, and hopefully by the end of this blog entry you'll feel better about your IPv6 future with Intel® Ethernet wired networks.

You’re probably awaiting a long-winded answer to the IPv6 question. The problem is, it’s very easy: where the O/S supports IPv6, the driver will too.

Now that you have the answer, I bet you want some time to look for the fine print, since the devil is always in the details.

            Here are the details: IPv6 is very different from IPv4, and some of the notions that go along with IPv4 just don’t play in v6ville. Offloads are the biggest difference. The IPv6 header carries no checksum at all, so a checksum offload engine has nothing left to do there, and with IPv6 packet reassembly (large receive coalescing) delivers much of the same benefit without all the offload engine costs.

Check it out. Here are the two headers side by side:

[Figure: IPv4 vs. IPv6 header layouts. The IPv4 header carries Version, IHL, Type of Service, Total Length, Identification, Flags, Fragment Offset, TTL, Protocol, Header Checksum, and the 32-bit source/destination addresses; the IPv6 header slims down to Version, Traffic Class, Flow Label, Payload Length, Next Header, Hop Limit, and the 128-bit source/destination addresses.]
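To make the checksum point concrete, here’s a minimal sketch (my own illustration, not code from any Intel driver) of the RFC 1071 ones’-complement checksum that IPv4 requires over its header and that IPv6 omits entirely:

```python
import struct

def ipv4_header_checksum(header: bytes) -> int:
    """RFC 1071 ones'-complement checksum over an IPv4 header.

    Zero the checksum field (bytes 10-11) before calling. Every router
    hop decrements TTL and must recompute this value; IPv6 dropped the
    field, so that per-hop (and per-offload-engine) work disappears.
    """
    if len(header) % 2:                      # pad odd-length input
        header += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    while total >> 16:                       # fold carries into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# A well-known sample header, with its checksum field zeroed out:
hdr = bytes.fromhex("45000073" "00004000" "4011" "0000"
                    "c0a80001" "c0a800c7")
assert ipv4_header_checksum(hdr) == 0xB861
```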

 

IPv6 is really more of an operating system problem. In the worst case, from a network driver perspective, it is just sent as raw data; the driver never looks inside the packet. IPv6, IPv4, IPX: as long as it’s Ethernet, we can send and receive it. In our more recent products, we have made our data routing, filters, and manageability engines aware of IPv6.

The operating system is the biggest challenge, and pretty much every one released since 2000 supports IPv6 in some form or another. Check Microsoft*, Linux*, BSD*, or Solaris* documentation for details.
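If you want to check what your own stack offers before worrying about drivers, a quick probe (a generic sketch, not an Intel tool) is to ask the OS for an IPv6 socket:

```python
import socket

def os_supports_ipv6() -> bool:
    """Return True if the OS will hand us a working IPv6 TCP socket."""
    if not socket.has_ipv6:        # was IPv6 compiled in at all?
        return False
    try:
        # Binding to the IPv6 loopback proves the stack is actually live.
        with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s:
            s.bind(("::1", 0))
        return True
    except OSError:
        return False

print("IPv6 ready!" if os_supports_ipv6() else "IPv4 only for now.")
```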

Wrap up!

1)      Intel® Ethernet products co-exist nicely with IPv6

2)      IPv6 is something you should be getting ready to support sooner rather than later

3)      Thanks for your interest in Intel® Ethernet wired networking products.

 

 

All information looks like noise until you break the code.
—Hiro in Neal Stephenson's (1959–present) Snow Crash (1992)

 

 

 

Ben Hacker posts occasionally to the Server Room located elsewhere in communities.intel.com land. Here is a classic Ben posting, helping you break the 10 Gigabit link-types code. It has been updated by yours truly to reflect the current state of the art and to add links that provide illumination and diversion.

 

Ethernet (IEEE 802.3) has evolved over the years from a new standard linking computers at modest speeds into something much faster, moving from 10 Megabits per second (Mbps) to 100 Mbps, to 1 Gigabit per second (Gbps), and, a few years ago, to 10GbE unidirectional throughput. Over time there have been several physical connection types for Ethernet. The most common is copper (Cat 3/4/5/6/7 cabling as the physical medium), but fiber has also been prevalent, along with more esoteric media such as BNC coax. Until very recently, the most common 10GbE adapter has been optical only, due to the difficulty of making 10GbE function properly over copper cabling.

But this post isn't meant to discuss the past; it's meant to decode the present and future of 10 Gigabit Ethernet and the variety of flavors that are available. Below I'll cover a number of acronyms for 10GbE IEEE standards that are often lumped together as '10 Gigabit' and discuss some of the differences and usages for each. After that, I'll also try to clear up some of the confusion about ‘form factor' standards for optical modules (which are separate from IEEE) and some of the terms and technologies in that realm:


 

 

10GBase-T (aka: IEEE 802.3an):

This is a 10GbE standard for copper-based networking deployments. Networking silicon and adapters that follow this specification are designed to communicate over CAT6 (or 6a/7) copper cabling up to 100 meters in length. To enable this capability, a 10GbE MAC (media access controller) and a PHY (Physical Layer) designed for copper connections work in tandem.

 

10GBase-T is viewed as the holy grail for 10GbE because it works within the Cat 6/7 copper infrastructure that is already widely in place. For this flexibility, 10GBase-T trades off higher power and higher latency.

 

10GBase-KX4 (aka: IEEE 802.3ap):

This is a pair of standards (10GBase-KX4 and its serial sibling, 10GBase-KR) targeted toward the use of 10GbE silicon in backplane applications (such as a blade design). It is specifically designed for environments where lower power is required and shorter distances (up to only 40 inches) are sufficient.

 

10GBase-SR (aka: IEEE 802.3ae):

This specification is for 10GbE with optical cabling over short ranges (SR = Short Range) with multi-mode fiber. Depending on the kind of fiber, SR can mean anything between 26 and 82 meters on older fiber (50 and 62.5 µm grades). On the latest fiber technology, SR can reach distances of 300 meters. To physically support the cable connection, any network silicon or adapter that supports 10GBase-SR needs a 10GbE MAC connected to an optics module designed for multi-mode fiber. (Optics modules are a whole ‘nother post!) Typical reach figures are listed below.
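As a rough guide, the commonly quoted 802.3ae reach figures break down by the fiber's modal bandwidth rating (check your cable plant's markings):

- 62.5 µm FDDI-grade fiber (160 MHz·km): ~26 m
- 62.5 µm OM1 (200 MHz·km): ~33 m
- 50 µm (400 MHz·km): ~66 m
- 50 µm OM2 (500 MHz·km): ~82 m
- 50 µm OM3, laser-optimized (2000 MHz·km): up to 300 m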

 

10GBase-SR is often the standard of choice inside datacenters where fiber is already deployed and widely used.

 

10GBase-LR (aka: IEEE 802.3ae, Clause 49):

LR is very similar to the SR specification except that it is for Long Range connections over single-mode fiber. Long Range in this spec is defined as 10 km, but distances beyond that (as much as 25 km) can often be achieved.

 

10GBase-LR is used sparingly, really only deployed where ultra-long distances are absolutely required.

 

10GBase-LRM (aka: IEEE 802.3aq):

LRM stands for Long Reach Multimode and allows distances of up to 220 meters on older-standard (50 and 62.5 µm) multi-mode fiber.

 

10GBase-LRM is targeted for those customers who have older fiber already in place but need extra reach for their network.

 

10GBase-CX4 (aka: IEEE 802.3ak):

This standard of 10GbE connection uses the CX4 connector/cabling that is used in InfiniBand™* networks. CX4 is a lower-power standard that can be supported without a standalone PHY or optics module (the signals can be routed directly from a CX4-capable 10GbE MAC to the CX4 connector). Due to the physical specification for CX4-based 10 Gigabit, it provides lower latency than comparable 10GBase-T copper PHY solutions. With passive (copper) CX4 cables, the nominal distance you can expect between your 10GbE links is roughly 10-15 m. There are also amplified 'active' (but still copper) cables with nominal distances up near 30 m.

 

Below is an image of a standard CX4 based socket that would be on a 10GBase-CX4 NIC:

[Image: CX4 socket as found on a 10GBase-CX4 NIC]

 

There are also what are referred to as ‘active optical' cables for CX4, which actually have an optics module in the termination of the cable, while the cable body is fiber. This kind of active design increases cable reach and improves flexibility (fiber is thinner than copper pairs) but also increases cost. These active cables can increase reach up to 100 m.

 

 

Intel has recently released our own series of active optical CX4 cables.

 

 

For short distances (such as inside a rack in a datacenter), CX4 offers one of the lowest-cost ways to deploy 10GbE from switch to server. Because of its design, CX4 also achieves very low latencies.

Time for the big review:

1)     Ben Hacker has a great blog

2)     Intel® 10 Gigabit products support a wide variety of 10 Gigabit link types.

3)     Thanks for using Intel networking products!
[Image: two photos of an 82540EP test board; on the left, the card iced over during the cold run; on the right, the card during the high-temperature run]

A picture is worth a thousand words, and in a blog that's a great time saver!  These are unaltered (except for size) images taken during our industrial-temperature test run for the 82540EP. The chip is marked as an EM for test reasons. The 82540EP is just like the 82551IT and the 82574IT in that it supports temperatures of +85/-40 degrees Celsius. The image on the left shows the card at -85 degrees Celsius. The ice is from the humidity. The ice would melt off near the edges, pool on the motherboard, and finally cause the motherboard to short out. The right part of the image shows the card running at 166 degrees Celsius; it was transmitting data until the solder melted. If you look closely at the white lines around the top right and bottom right corners, you can see that the part is actually shifting clockwise and down as the balls of solder melt. The capacitor at C39 has actually melted off, and the force of the heater has blown it off the board. The silver trail is the capacitor's solder. The orange line is the temperature probe.

This is just some of the testing we do to make sure that our products hold up.

     Now for a couple of statements that our lawyers want to make sure you’re aware of. First, just because we test to +166/-85 doesn't mean you should try to use the part there. Second, if you put the parts into an environment hotter than the supported temperature, you will void your warranty, not to mention your product, and Intel isn't responsible for any damage done.

 

 

Hopefully I'll have more cool pictures for you in the future.

 

 

Review time!

1)  Intel offers industrial temperature parts

2)  Testing until failure often leads to cool pictures

3)  Thanks for your interest in Intel networking products.

dougb

Routing Differential LAN traces

Posted by dougb Sep 15, 2009


 

I work with a TME (technical marketing engineer) in LAD who is tasked with reviewing designs from our customers.  Most of the time they follow the guidance we give them in our datasheets and reference designs, completing the schematic with little or no customization. Where he does most of his work is in the layout reviews, where the schematic is "put to fab" and traces and components are given physical locations on a PCB layout.  As many companies have shrunk in size, they have chosen to outsource engineering resources around board layout.  The design engineer at the company is responsible for finalizing the schematics and passing them on to a 3rd party that interprets the schematics and the guidelines that have been given for the design, typically a physical definition of what the board is supposed to look like.  Is it a typical ATX motherboard? Is it an ATCA mezzanine card?  Or something completely custom to meet a very specific need?  The layout person plays Dr. Frankenstein and brings the Franken-schematic to life!

In the hundreds of reviews that he does for our customers, he tries to find things in their designs that will help them save time and money.  They save time by having him make sure that routes are optimal and follow the recommendations in our layout checklists.  They save money because the red flags he catches and fixes for them can save a board spin, which translates into time and money: all things customers like to have.   The biggest problem he sees when 3rd parties do layouts for our customers is the big fat AUTO-ROUTE button (similar to the red EASY button in the office-supply-store commercials, where hitting the red button makes it EASY to buy stuff).  The AUTO-ROUTE button does most of the work for the layout person, provided all the interfaces and pins on the chip have been defined accurately.  What that means is doing a good job of defining the type of I/O used on every pin of the chip... Is a pin a digital I/O? Is it open drain? Is it SERDES? Is it providing power? Each of these signal types has to be cared for differently when you actually "put metal" between the pins of the chips. As he often says to colleagues doing a first layout review, "the schematic will lie to you."

When first opening a board file, he immediately looks at how the MDI differential pairs are routed. These are found on all of our 10/100 and gigabit silicon and are used to connect the PHY to the magnetics (transformer) on the board. They are often the ONLY analog signaling on the ENTIRE board. Typically they are designed to drive only minimal distances across the board to the XFMR, and since each pair carries complementary signals, it needs to be routed symmetrically so that any board parasitics affect both signals equally, not just one...  Actually, you want to avoid all adverse effects of board parasitics: vias, broadside coupling, etc.

These differential signals (2 pairs with 10/100 and 4 pairs with gigabit) need to be routed with care.  The best rule-of-thumb items are:

1.) Route symmetrically as best as you can

2.) Avoid using vias for routing these traces: route a pair on the same board layer

3.) Follow the inter-pair (greater than 5x the height of the board dielectric) and intra-pair (<10 mil) spacing guidance given in the layout checklist

4.) For gigabit designs, to achieve #3 easily, route two pairs on one layer and the remaining two pairs on another layer

5.) Route on layers that are isolated by good grounds: don't route on a layer that is adjacent to another signal layer <- this avoids broadside coupling

6.) Differential pairs need to be length matched to within 50 mils... They don't need to be exactly the same length

7.) Target 100 ohm differential impedance

Follow these simple layout rules and you will save time and money... (A quick scripted sanity check for rules 6 and 7 is sketched below.)
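Once your CAD tool can export trace lengths, rules 6 and 7 are easy to sanity-check in a few lines. Here's a minimal sketch; the net names, lengths, and stackup numbers are made up for illustration, and the impedance formulas are the usual IPC-2141-style approximations (a first pass, not a substitute for a field solver):

```python
import math

# --- Rule 6: intra-pair length match within 50 mils -----------------
# Lengths would come from your CAD tool's trace-length report;
# the net names here are hypothetical.
MAX_MISMATCH_MILS = 50.0
pairs = {"MDI0": ("MDI0_P", "MDI0_N"), "MDI1": ("MDI1_P", "MDI1_N")}
lengths_mils = {
    "MDI0_P": 1482.0, "MDI0_N": 1490.5,   # 8.5 mils apart -> OK
    "MDI1_P": 2011.0, "MDI1_N": 2073.2,   # 62.2 mils apart -> FAIL
}
for name, (p, n) in pairs.items():
    delta = abs(lengths_mils[p] - lengths_mils[n])
    print(f"{name}: mismatch {delta:.1f} mils ->",
          "OK" if delta <= MAX_MISMATCH_MILS else "FAIL")

# --- Rule 7: rough 100-ohm differential impedance estimate ----------
def microstrip_z0(h_mils, w_mils, t_mils, er):
    """Single-ended microstrip impedance (ohms), IPC-2141 style."""
    return 87.0 / math.sqrt(er + 1.41) * math.log(
        5.98 * h_mils / (0.8 * w_mils + t_mils))

def microstrip_zdiff(z0, s_mils, h_mils):
    """Edge-coupled differential impedance (ohms) approximation."""
    return 2.0 * z0 * (1.0 - 0.48 * math.exp(-0.96 * s_mils / h_mils))

z0 = microstrip_z0(h_mils=5.0, w_mils=7.0, t_mils=1.4, er=4.2)  # FR-4 guess
print(f"Zdiff ~= {microstrip_zdiff(z0, s_mils=8.0, h_mils=5.0):.0f} ohms")
```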

Support:

A good definition of differential signaling: http://en.wikipedia.org/wiki/Differential_signaling

Dr. Howard Johnson has a fantastic book, "High-Speed Digital Design," that can be summed up in a few words: "Model everything."

dougb

Getting ready to get ready!!!

Posted by dougb Sep 15, 2009

Intel has a great policy of allowing an eight-week sabbatical every seven years.  Just before he left on his sabbatical, my hardware counterpart sent me this note.  It is a quick note that can help you save time on your design.  When he comes back, he'll be joining me on the blog, but until then, the words of Jeff Hockert!

 

Design reviews are important. Plain and simple.

A last check by a content expert can save you time when you fab the board, save you money because you don't have to spin the board, and generally helps the process of a successful design with Intel move forward.  Let me share a little secret with you <lower voice>: you can be an Intel Ethernet expert... Just use the checklists and app notes that we provide on Developer and IBL/CDI.  These checklists have been validated by hundreds of designs and improve your chances of a successful Ethernet experience. We tell you how to use the interfaces you need and help you terminate the ones you don't.  The checklists are used by our own design engineers for our network interface cards and reflect most of the "secrets" of a good design. You can find most of these schematic and layout checklists under the part number in question on Developer and in IBL/CDI (if you know what those are, you probably have access; if you don't, well, you won't). Use them as a last check, and send them to Intel TMEs when you have completed them as part of our review for you.

 

Yes, we will do design reviews for you at no additional cost to you. Think of it as having an extra design engineer on your staff!

dougb

To buy Four?

Posted by dougb Sep 10, 2009

Intel has been making quad-port adapters for a long time.  Sometimes, however, things catch up with us, and some of our products end up between specifications, especially with cutting-edge work.  The rapid deployment of the PCI Express® Base Specification Revision 2.0 into the market has made some once-easy choices not so easy.  In the adapter market, Intel has had a stable of four-port adapters that customers could pick from.  The current adapter, the Intel® Gigabit ET Quad Port Server Adapter, is from the post-2.0 era.  The other adapters should stick to the 1.0-only slots.

The ET adapter also has Intel® Virtualization Technology for Connectivity (Intel® VT-c), a feature that marries well with the higher port density that a four-port adapter brings to the table.  Older adapters are still great; just make sure they are not in a 2.0 slot. (On Linux, a quick way to check what a slot negotiated is sketched below.)
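Here's a small sketch that reads the negotiated link speed out of sysfs. The current_link_speed/current_link_width attributes are exposed by reasonably recent Linux kernels (older ones may lack them, hence the existence check); 2.5 GT/s per lane is PCI Express 1.x signaling, while 5 GT/s means the slot came up at 2.0 rates:

```python
from pathlib import Path

# Walk the PCI devices and report negotiated link speed/width
# wherever the kernel exposes them.
for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    speed = dev / "current_link_speed"
    width = dev / "current_link_width"
    if speed.exists() and width.exists():
        print(f"{dev.name}: {speed.read_text().strip()}, "
              f"x{width.read_text().strip()}")
```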

Big ending!

1.  The Intel Gigabit ET Quad port adapter is a perfect match for PCI Express 2.0 based servers.

2.  Thanks for using Intel networking products.
