
It has recently come to my attention that at least one major server manufacturer has switched from packaging the Intel® Gigabit VT Quad Port Server Adapter, which uses the Intel® 82575 Gigabit Ethernet Controller, to the Intel® Gigabit ET Quad Port Server Adapter, which uses the Intel® 82576 Gigabit Ethernet Controller.

 

The problem that some are encountering is that VMware* ESX 4.0 does not currently have inbox support for the ET Quad-Port (82576) adapters.  The older VT Quad-Port (82575) adapters have inbox support from VMware, as do the Intel® Gigabit EF Dual-Port (82576) Adapters.

 

You can download the VMware ESX/ESXi 4.0 Driver CD for the Intel® 82575 and 82576 Gigabit Ethernet Controllers from the VMware driver download page.  The specific driver can be found here.  Note that while the second link is valid today, it may change when the next update to the driver is published; if that happens, use the first link and search for ‘82576’ in the list of products.

 

The installation instructions can be found on the right side of the Driver CD download page, under the ‘Related Resources’ section, in Release Notes.
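If you just want the short version, installing a driver CD offline bundle on ESX 4.0 classic usually comes down to a single esxupdate call from the service console.  Treat the lines below as a hedged sketch rather than a substitute for the release notes: the bundle filename is a placeholder and will differ for your download, and ESXi hosts use vihostupdate from the vSphere CLI instead.

# ESX 4.0 service console; the bundle name below is a placeholder
esxupdate --bundle=/tmp/igb-offline_bundle.zip update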

 

 

 

 

 

 

 

 

 


Linux Questions:  Part 1

Posted by dougb Feb 19, 2010

We’ve gotten a couple of really good questions about our Linux* drivers so I figured I would share them and the answers. 

 

1. I have found the following CFLAGS for x86_64 architecture in the Makefile of the driver.

 

-mcmodel=kernel -mno-red-zone

 

Are these options mandatory for 64 bit architectures, or can I remove them when I directly use the kernel build system (copy the sources to drivers/net/igb)?

 

This article from the GCC site explains them both in Linux terms.  But I'll try to put them into English.  -mcmodel=kernel tells the compiler to generate code for the kernel code model, which is what you want for code that will run inside the kernel, so you do need that one.  -mno-red-zone builds the driver without using the red zone, which is a scratch pad of sorts just below the stack pointer.  The kernel turns it off because an interrupt can land on the kernel stack at any moment and stomp on anything the compiler stashed there.  Again, I would leave this one alone.  The driver isn't designed to use the scratch pad, but you never know when the compiler would decide to use it.

 

Both these options are only *used by* our Makefile when building for the 2.4 kernel.  The 2.6 kernel already contains these flags and uses a system called Kbuild, which we hook into so the out-of-tree driver gets built just like any other kernel module.  Basically, all our Makefile ends up doing is

 

make -C /path/to/linux/source M=/full/path/to/driver CF=$(CFLAGS_EXTRA)

 

and the driver is automagically built.
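If you would rather point Kbuild at a kernel tree yourself, the call ends up looking something like the lines below.  This is a sketch, not our official build procedure; it assumes the build tree for the running kernel is installed in the usual place and that you run it from the directory holding the driver sources.

# build the module against the running kernel's build tree, then install it
make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
sudo make -C /lib/modules/$(uname -r)/build M=$(pwd) modules_install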

 

 

2. The README says that Intel supports the driver only as a loadable module. Does this mean that we do not get support when we statically link the driver or does this mean we are not allowed to link it to the kernel?

 

As for the readme, I think that is describing the difference between having the source configured for inclusion in a distribution versus a standalone driver.  The tarball is in "stand-alone" mode and would need some modification to be built into the kernel as if it were a stock kernel driver.  If you remember this blog entry, it has some details on that.

You can use make install with the driver, and that will have it loaded when the O/S starts, but if you want to build it into the kernel you need to be careful.  That blog entry shows some tips on how to do it.  You can also look at kernel.org for our driver and see what it looks like when it's in the kernel.  The readme is just warning that the driver tarball isn't designed for inclusion in the kernel, and if you do so there may be trouble.  But I think there are enough researchable materials for you to do it (kernel.org, my blog page).

 

One other option is to build the driver as a module and include it in the initrd by using some of the extra options to mkinitrd to force the driver to be included.  This has the added benefit of not requiring “coding” to get the driver dropped into the kernel directory structure.
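For example, on a Red Hat-style system the rebuild might look like the line below.  This is a hedged sketch: the option names are specific to the classic mkinitrd script, and dracut- or Debian-based systems use different tools and switches.

# rebuild the initrd for the running kernel, forcing the igb module to be included
mkinitrd --with=igb -f /boot/initrd-$(uname -r).img $(uname -r)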

 

Big ending!

1)    You can reply to this blog posting if you have a question for us to answer next time

2)    Thanks for using wired Intel® Ethernet!

A little while ago we launched the Windows* embedded operating systems driver support package on the EDC website.  Once it was in the field, an issue was discovered that would stop the e1y driver from getting DHCP traffic on CE6* under some conditions.  Given the nature of the defect, we elected to field a new version of the webpack with an updated driver.  We also updated the entry readme, the one in the zip file, to add the 1501 device ID that was missing before.  The Driver_Selection_Guide.txt in the webpack is still missing it, however.  We didn't want to add any further delays, and our risk management policies meant we could only change the driver inside the webpack.

It's the same link location as in the original announcement, so if you've downloaded it after Feb 3rd, 2010, you've got the new version.  Another way to check is the installer date: the new webpack installer is dated in January, where the old one has a November date.  Digging even further, in case you've already installed it and deleted the webpack, the changed files are in the \PRO1000\WINCE6.0\PCIe folder: e1y51ce6.dll and e1yce6.rel.  These will be dated January 19th, 2010.  All other driver files will be the same.  (There is a build tracking file, verfile.tic, that changes on all our builds, but that isn't a driver file, so it doesn't count and should be ignored.)

Sorry we didn’t catch this defect before launch, and it only impacts the e1y driver. 

Review time!

1)      Here is the updated webpack (It’s the same link location as the old one)

2)      Only the e1y driver is affected, and only ones dated before 2010

3)      Thanks for using wired Intel® Ethernet.

"A complex system that works is invariably found to have evolved from a simple system that worked."  — John Gall

 

     Network boot is a collection of simple technologies that have come together to form a complex beast.  In order to use network boot effectively, you need to understand the various fairly simple parts that make up the completed boot.  By looking at each piece of the puzzle in detail, the bigger picture becomes easier to understand.  The system boot process is reviewed first.  The BIOS control process then covers how the BIOS determines whether a device is bootable or not.  This is followed by a quick look at what allows a network interface card (NIC) to bring its own loadable code along with it.

 

The Boot Process
The PC isn’t like a VCR or a television.  It takes a long string of events to get even the video display up and running.  While this won’t be a complete blow-by-blow coverage of the system boot, we’ll try to get a good overview in before the bell sounds.

 

     1.       The system comes out of either a complete power off, as in unplugged from the wall, or a soft off, where the system just looks like it's off.  The system still has power in a soft off, so be careful putting cards (like Network Interface Cards) in and out of the system.
     2.       The processor is held in reset until the power has flowed evenly for a short period of time.  This is to protect the processor from poor power waveforms.  This step isn't noticeable.  The power supply sends a Power Good signal and the processor is taken out of reset.
     3.       The processor starts executing instructions at segment 0FFFFh offset 0.  Sometimes this is expressed as F000:FFF0h.  It starts here by convention.  This address is 16 bytes from the top of the ROM memory.  It contains a 'jump' instruction that commands the processor to start its execution somewhere else.  Since the BIOS can run from E000:0000 to the end of F000:0000, this jump can go anywhere in this range.
     4.       The system is initialized one sub-system at a time.  Before the video is initialized, the system will report errors via the speaker.  These beep codes can be found in most major hardware books.
     5.       The video option ROM is loaded into memory and executed.  The video card provider branding information is usually the first thing to be displayed.
     6.       The BIOS determines if this is a 'cold' or a 'warm' boot.  This is determined by the value in the word at 0000:0472h in system memory.  If this word is 1234h, it is a warm boot (there's a quick way to peek at this flag in the sketch after this list).  Not all sub-systems are initialized and/or tested on a warm reboot.  Memory, for example, is typically not re-initialized on a warm reboot.  Most laptops won't request a lockout code on a warm reboot.
     7.       The system does a power-on self test (POST) on the video and memory subsystems while displaying branding information for the motherboard, BIOS, etc.  Some motherboard vendors now display a logo in place of the initialization screen.  This can usually be disabled in a BIOS setup menu if you need to watch for error messages.
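If you're curious, you can peek at that warm-boot flag from a running Linux system.  The lines below are a rough sketch; they assume root access and a kernel that still allows reads of low memory through /dev/mem, which hardened kernels may not.

# read the two-byte reset flag at physical address 0472h
# 34 12 (the word 1234h stored little-endian) means the last boot was warm
dd if=/dev/mem bs=1 skip=$((0x472)) count=2 2>/dev/null | hexdump -C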

 

PCI Scan
Now the system is in a good state and we are ready to move from the BIOS init phase to the PCI scan phase.  At this point nothing in the system has resources.  The system uses the PCI configuration methods to figure out what resources are needed.  This is a complex process and we'll spare you the details.  What's important from a boot perspective?

 

Option ROM Start
     1.       The option ROM area is scanned.  The memory area from C800:0000h to F000:0000h is checked at every 2-kilobyte boundary for the option ROM signature, AA55h.  This is the key to the option ROM system.  Every block that starts with AA55h is parsed as an option ROM and its code is executed based on the header table.  (There's a quick way to peek at a card's option ROM in the sketch after this list.)
     2.       The option ROM determines if it is a bootable device.  This might be a SCSI device or, in our case, a network bootable device.  The option ROM installs any code it might need to execute, and tells the BIOS in a return code whether or not it is actually bootable.

     3.       If the system is BIOS Boot Specification compliant, the BIOS can determine the order in which to call the bootable devices looking for a valid boot.  On other systems, Interrupt 19h or Interrupt 18h is called.  It is up to the option ROM software installed in step 2 to make sure that these interrupt calls get routed to it.

     4.       Once the BIOS makes a call to any of the bootable devices, the system is considered booted.

It's a lot of steps just to get to the point where the operating system starts, but given the power of today's machines, it's usually less than 10 seconds from start to finish.  The amount of memory, the sub-systems present, the hard disk configuration, and the number of option ROMs to be called all affect the time it takes to boot.  A single SCSI device can almost double the time it takes to boot a system.  RAID devices will also slow things down.
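If you want to look at one of these option ROM headers yourself, a modern Linux kernel exposes a card's expansion ROM through sysfs.  The lines below are a sketch; the PCI address is made up, you need root, and not every card leaves its ROM readable.

# enable the ROM, dump the first two bytes (expect 55 aa, the AA55h signature), then disable it
echo 1 > /sys/bus/pci/devices/0000:03:00.0/rom
dd if=/sys/bus/pci/devices/0000:03:00.0/rom bs=1 count=2 2>/dev/null | hexdump -C
echo 0 > /sys/bus/pci/devices/0000:03:00.0/rom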

 

BIOS Boot Specification

     Also known as BBS, this is a set of APIs that most modern systems use to let expansion ROMs change the boot order.  This is both good and bad.  First of all, it lets the user move the option ROM calling order explicitly, something that couldn't be done in the legacy system (which is coming up next).  This means you could select your network boot to go first, then a floppy drive, then the local hard disk, or invert it as needed.  With older legacy stuff, where you ended up in the chain was your spot.  But that flexibility comes with a cost.  In most BBS implementations, all option ROMs must register with the BIOS, which means the BIOS must call all of them before the BIOS setup can be entered.  This slows things down when trying to get into the BIOS setup screens to make changes.  So if it seems like a long time since you hit F12 or DEL or F2 to get into your BIOS, it's all the option ROMs that you're waiting for.

 

Legacy Interrupt System Start Points

 

     Interrupts 18h and 19h are the older method of starting the boot process beyond the POST.  They are legacy methods, since replaced by BBS.  Interrupt 19h is the bootstrap call that loads the boot sector.  Interrupt 18h is the start point for the ROM BASIC interpreter that used to be built into systems.  Any casual research on the web into 18h and 19h will yield mostly information on virus technologies.  Interrupt 19h is commonly intercepted by virus boot loaders, but it is still a legal interrupt to call.  In the legacy system, the BIOS calls the interrupts blindly, without regard for what happens after the call.  This is what makes them so attractive to the virus creators.  In the network boot environment, the interrupts are chained.

 

[Figure: BootListInsertion.JPG shows the interrupt boot chain before and after the network boot device inserts itself.]

 

The first part of the diagram shows the boot path of the interrupt before the insertion of the network boot device.  During the initialization phase, the boot technology inserts itself into the boot chain.  The second part of the diagram highlights what this looks like once insertion is complete.  Where the network boot gets inserted is up to the boot agent.  Any other device inserted later may move the network boot device back.

 

Now that the picture is set, next time we'll talk PXE.

 

In review:

     1) Booting from the network can provide lots of value

     2)  BBS is a must-have for modern systems

     3)  Thanks for using Intel® Ethernet

 

(Note 2/8/2010 - Updated to fix a typo or two.)
