
With monolithic kernels, life without modules can be difficult.  In particular, getting support for new hardware means using the latest out-of-tree drivers or a bleeding edge kernel.org kernel.  But no more!  To get support for new hardware with an older kernel, just update the existing kernel driver in place.  The kernel Makefile and the standalone driver Makefile aren't exactly compatible, but upgrading from one to the other is straightforward if you know what you're doing.  By the end of this post, you will.

 

First, locate the kernel source, usually somewhere like /usr/src/linux.  Digging further, find the driver source buried in there, somewhere like drivers/net/e1000e.  This directory should hold the driver files and the Makefile.  Make a backup of the files, in case we need them later.  Since we are going to be updating to the latest driver, we can go ahead and delete everything but the Makefile (and the backups).  Then find the release driver tarball you want to use, extract it, and move its files into the directory, so all the new *.c and *.h files sit where the old *.c and *.h files were.
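Those steps can be sketched in shell.  This is a rehearsal in a scratch directory so the mechanics are visible without touching a real tree; in practice KSRC would be the actual driver directory (e.g. /usr/src/linux/drivers/net/e1000e) and the new sources would come from the extracted release tarball:

```shell
# Rehearse the backup-and-replace steps in a scratch directory.
# In real life, KSRC is the driver dir in your kernel tree,
# e.g. /usr/src/linux/drivers/net/e1000e.
KSRC=$(mktemp -d)/e1000e
mkdir -p "$KSRC"
touch "$KSRC"/82571.c "$KSRC"/hw.h "$KSRC"/Makefile   # stand-ins for the old driver

mkdir -p "$KSRC.bak"
cp "$KSRC"/* "$KSRC.bak"/        # back everything up, just in case
rm -f "$KSRC"/*.c "$KSRC"/*.h    # delete the old sources; the Makefile stays

# Copy in the sources from the extracted release tarball (stand-ins here).
# Really: cp /path/to/extracted-release/src/*.[ch] "$KSRC"/
touch "$KSRC"/e1000_82571.c "$KSRC"/e1000.h
```

The end state is exactly what the post describes: the old Makefile, the new sources, and a backup directory next to it.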

 

Example:  The files in the kernel source would be 82571.c, defines.h, e1000.h, es2lan.c, ethtool.c, hw.h, ich8lan.c, lib.c, netdev.c, param.c and phy.c.

 

Once it's updated, it should look like this:

e1000.h, e1000_80003es2lan.h, e1000_82571.h, e1000_defines.h, e1000_ich8lan.h, e1000_mac.h, e1000_manage.h, e1000_nvm.h, e1000_phy.h, e1000_regs.h, hw.h, kcompat.h, e1000_80003es2lan.c, e1000_82571.c, e1000_ich8lan.c, e1000_mac.c, e1000_manage.c, e1000_nvm.c, e1000_phy.c, ethtool.c, kcompat.c, kcompat_ethtool.c, netdev.c, param.c

 

 

Now the driver directory has the new driver files and the old driver Makefile.  The problem is that the Makefile still builds the old file names, not the new ones.  The fix is to update the kernel driver Makefile.

Here is the relevant line from an example kernel driver Makefile:

 

e1000e-objs := 82571.o ich8lan.o es2lan.o \
                lib.o phy.o param.o ethtool.o netdev.o

 

Looking at the latest driver, every *.c file needs a matching *.o in the objects list, EXCEPT kcompat_ethtool.c.  That one needs to be left out.

Here is what the modified line looks like:

 

e1000e-objs := netdev.o ethtool.o param.o e1000_82571.o e1000_ich8lan.o e1000_80003es2lan.o \
               e1000_mac.o e1000_nvm.o e1000_phy.o e1000_manage.o kcompat.o
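Typing that object list by hand is error-prone.  A small shell sketch can generate it from whatever *.c files are present, minus the kcompat_ethtool.c exclusion (demonstrated here against a stand-in directory; point dir at the real driver directory in practice):

```shell
# Generate the e1000e-objs line from the *.c files in the driver dir,
# skipping kcompat_ethtool.c, which must not be compiled on its own.
# (Stand-in files here; use the real driver dir in practice.)
dir=$(mktemp -d)
touch "$dir"/netdev.c "$dir"/ethtool.c "$dir"/kcompat.c "$dir"/kcompat_ethtool.c
objs=$(cd "$dir" && ls *.c | grep -v '^kcompat_ethtool\.c$' | sed 's/\.c$/.o/' | tr '\n' ' ')
echo "e1000e-objs := $objs"
```

Paste the printed line into the Makefile in place of the old e1000e-objs assignment.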

 

Now you just make the kernel and enjoy the new driver!


Time for the big review:

1. Update the kernel Makefile to use the new files.

2. Use the released driver files.

3. Thanks for using Intel networking products.

dougb

Cabling Wars: A New Hope

Posted by dougb Oct 15, 2009

     Looking up and down the ecosystem that your product operates in can often show things that one would normally just take for granted.  I attended a BICSI* conference last year with our partners Fluke*, ADC* and Extreme Networks*.  Our show at the convention was to highlight how to design an end-to-end solution for 10 Gigabit copper.  The show itself was a big hit, but the real data for me came while manning the booth.  Times are always tight, so it was "one riot, one ranger" time for Intel networks, and I was the only one there.  I had to answer all the questions.  I waited to be bombarded with operating system questions, buffer sizes, VLAN tag offloads, all sorts of questions.  I got just a handful, and the big question on everyone's mind was "Do you have that in fiber?"

     I was blown away.  Up in adapter land, we support all sorts of cable types, Fiber, Copper, and CX4 on our 10G products.  Normally the question we get is "Do you have that in Copper?"  The BICSI crowd isn't your normal crowd.  Being cabling professionals, they worry about fire hazards and consider any active cabling, like copper, a latent fire hazard.  To the cabling guys, it was a touchy subject.  Fiber was more expensive to install and harder to work with (try splicing a fiber cable), but it was lighter, didn't have crosstalk, covered longer distances and wasn't a hazard.  Copper is currently cheaper and easier to work with in terms of splicing and custom lengths, but it was heavy, had noise problems (both giving and receiving) and was at risk of becoming obsolete.  The CAT3 to CAT5 transition had meant a complete re-wire of a building, something few businesses could afford.  To move to CAT6e just a few years later would be unimaginable.

     Another big concern expressed to me was signal integrity.  At one point an attendee walked up and started to talk to me about crosstalk issues and how, in his experience, a crimp in the copper cable could drop link.  I wasn't buying it, having never seen this.  Finally having enough of my perceived bravado, he reached behind the demo and bent the cable from my server back on itself, touching the two lengths together.  He peered around the front and looked at the performance monitoring software I had running.  Not a blip.  He watched for a moment, then released the cable.  Again nothing.  He was finally convinced (it wasn't bravado), and he walked away amazed that things had progressed so fast from the "early days" of 10 Gigabit.  At this point one of my friends at ADC came up and told me about the Cabling Wars.  BICSI was split into two camps, the fiber camp and the copper camp.  Both sides had good points, but it was almost a rivalry, more like one between college football teams than networking professionals.  They took it that seriously.

     The good news for developers and end users is that Intel networks (as you can tell from the links) isn't taking sides in the Cabling Wars.  Betting on all sides, we have options to cover however you or your customers design the network.  And since our MAC products are the ones that support the multiple interfaces, it is just one driver that supports the listed types.  So the driver you qualify for your copper implementation is the same driver as for your fiber implementation on the same card.  Some interfaces are available later than others, like 10G copper coming out after our fiber offerings, but once the later driver comes out, it will support both.

Big finish:

1.  Intel has both Fiber and Copper (and CX4) offerings for most product lines

2.  Intel drivers will support both interfaces when both are offered at the same time

3.  Thanks for using Intel networking products

dougb

Back to the Front (Bus)

Posted by dougb Oct 7, 2009

The current generation of quad port cards are more than meets the eye.  They have a bridge (or switch) chip which creates a secondary bus, and that can make for trouble when debugging for performance.  The driver can only report what the MAC reports, and the MAC always runs at maximum speed since it is on a private bus.  But the card, and therefore the connector chip (bridge/switch), might be in a slot that doesn't yield the maximum speed.  While it is called a switch in PCI-E land and a bridge in PCI-X land, I'll call it a switch from here on out to keep things a little clearer.

 

     Here is what it can look like in block diagram art (Host is below the Green Arrow):

[Block diagram: BusSpeed.jpg]

     The two red arrows are the backside bus, and they will always run at the highest performance allowed by the connection.  The light green arrow is the front side bus, and it is at the mercy of the slot.  Because the switch is in pass-through mode, it's hard to see that it is running at the wrong speed.  Here's what you can do.

     When physically inserting the card, make sure the slot's physical size doesn't differ from its electrical abilities.  A ton of motherboards today will have just one physical connector type (typically x8) and vary the electrical width (x1, x4, x8) depending on the board.  Consult the documentation or inspect the motherboard; most boards these days are clearly marked.
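You can also ask the card itself what got negotiated.  A sketch follows (the 03:00.0 bus address is hypothetical; find your adapter's address with plain lspci), shown here against a captured sample line so the extraction itself is visible:

```shell
# Pull the link width out of `lspci -vv` output.
# LnkCap is what the device supports; LnkSta is what the slot negotiated.
link_width() { grep -o 'Width x[0-9]*' | head -n 1; }

# On real hardware (hypothetical bus address, run as root):
#   lspci -vv -s 03:00.0 | grep 'LnkSta:' | link_width
# Against a captured sample line:
sample='LnkSta: Speed 2.5GT/s, Width x4, TrErr- Train- SlotClk+'
echo "$sample" | link_width   # an x8-capable card in an electrically x4 slot
```

If LnkSta reports a narrower width than LnkCap, the switch's front side is the bottleneck, no matter what the driver reports for the MAC.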

     This Back side / Front side problem gets worse if you're still using our older PCI-X adapter cards.  It's easier to exceed the front side capabilities when each backside channel is running at 133 MHz.  PCI Express will typically have less of a problem with this since it's just plain old faster, and really enables some great networking speeds.  But that's a whole 'nother post.
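Some back-of-the-envelope arithmetic (theoretical peaks, ignoring protocol overhead) shows why PCI-X gets tight on a quad port card:

```shell
# PCI-X front side bus: 64 bits wide x 133 MHz, in Mbit/s.
echo $((64 * 133))       # 8512 Mbit/s, roughly 8.5 Gb/s shared by the whole card
# Back side: four gigabit ports at full duplex can together generate up to
echo $((4 * 2 * 1000))   # 8000 Mbit/s, uncomfortably close to the front side peak
```

With real bus overhead subtracted, the two numbers cross, which is exactly the "exceed the front side capabilities" case described above.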

 

 

Let’s go out with the big finish:

1)     Be aware of Front side / Back side speed and protocol differences and maximize for both

2)     PCI Express has more flexibility Front side / Back side than PCI-X

3)     Thanks for using Intel networking products.

 
