
Network Boot Protocols

Posted by dougb May 27, 2010


As mentioned last time, now we’re talking PXE.  Well, that and all the other Network Boot Protocols.

Several protocols can be used to boot a system over the network.  The most common is the Preboot Execution Environment (PXE).  Remote Program Load (RPL) is older, but still common.  SAN boot and EFI are emerging technologies.  Each will be covered below, with EFI and its relatives last, since they are close descendants of PXE.



PXE is based on several existing IPv4 protocols, including BOOTP/DHCP and TFTP.  The development of the specification was initiated by Intel Corporation's Intel Architecture Labs, along with a number of third-party contributors.  Called IAL for short, this lab is a think tank for new technologies.  The PXE specification has gone through several revisions since its inception.  The earliest widely deployed version was 0.99, originally developed to run in conjunction with the LANDesk product.  The current version is revision 2.1.  Since the PXE group inside IAL has moved on to EFI development, this will most likely be the last revision of the specification, at least for 16-bit IPv4 environments.  Work is currently underway to extend the DHCPv6 protocol to support remote boot protocols such as PXE, but this is expected to target UEFI and similar environments rather than legacy BIOS implementations.
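
To make the DHCP tie-in concrete, here is a minimal C sketch of how a PXE 2.1 client tags its DHCPDISCOVER so boot servers can recognize it.  The dhcp_add_option() helper is hypothetical, but option 60 (the "PXEClient:..." vendor class identifier) and option 93 (client system architecture, per RFC 4578) are the standard PXE identifiers.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Append one DHCP option (code, length, data) to an options buffer. */
static size_t dhcp_add_option(uint8_t *buf, size_t off,
                              uint8_t code, const void *data, uint8_t len)
{
    buf[off++] = code;
    buf[off++] = len;
    memcpy(buf + off, data, len);
    return off + len;
}

int main(void)
{
    uint8_t options[312];
    size_t off = 0;

    /* Option 60: "PXEClient:Arch:xxxxx:UNDI:yyyzzz" marks a PXE client.
     * Arch 00000 is legacy x86 BIOS; UNDI 002001 encodes API version 2.1. */
    const char *vci = "PXEClient:Arch:00000:UNDI:002001";
    off = dhcp_add_option(options, off, 60, vci, (uint8_t)strlen(vci));

    /* Option 93 (RFC 4578): client system architecture, 0 = x86 BIOS. */
    const uint8_t arch[2] = { 0x00, 0x00 };
    off = dhcp_add_option(options, off, 93, arch, sizeof arch);

    printf("built %zu bytes of DHCP options\n", off);
    return 0;
}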

At its core, a PXE image has two parts.  The base code handles the TCP/IP, DHCP and other protocols, while the Universal Network Driver Interface (UNDI) does the hardware work.
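
The actual entry structures are the PXENV+ and !PXE tables defined in the specification.  Purely as a conceptual sketch (these are not the spec's structures), the split looks roughly like this in C:

#include <stddef.h>

/* UNDI: the hardware-specific slice -- raw frame I/O only. */
struct undi_driver {
    int (*open)(void);
    int (*transmit)(const void *frame, size_t len);
    int (*receive)(void *frame, size_t maxlen);
    int (*close)(void);
};

/* Base code: hardware-independent protocol services layered on UNDI. */
struct pxe_base_code {
    const struct undi_driver *undi;  /* all MAC access is delegated here */
    int (*dhcp_discover)(void);      /* DHCP/BOOTP address and server discovery */
    int (*tftp_read_file)(const char *name, void *buf, size_t maxlen);
};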


Initializing Intel(R) Boot Agent Version FE v4.2.03
PXE 2.1 Build 083 (WfM 2.0)


A typical boot agent running PXE shows a sign-on message like the one above.  This sign-on displays everything you need to know about the PXE version: this particular agent is from the last IAL-released build (083), and it implements the most current specification (2.1).

Once the boot agent is running, a fuller copyright message comes up, along with the TCP/IP information from the DHCP transactions and the Media Access Control (MAC) address of the hardware PXE is running on.


Intel(R) Boot Agent Version FE v4.2.03
Copyright (C) 1997-2010, Intel Corporation



IBM* originally developed Remote Program Load (RPL) for its Token Ring adapters. Novell* later added NetWare* support for it and extended it to support Ethernet adapters as well.


The Intel RPL option ROM code is based on the DOS Open Data-Link Interface (DOSODI) specification.  RPL is simpler than the PXE specification but is also less flexible; for example, it does not support routing boot requests between networks.  It does have a smaller transaction profile on the network, using just four frames before starting the download.  The first frame is a directed packet to a multicast address monitored by any server that speaks the RPL format.  The bootstrap code loaded from a NetWare server must use the DOSODI interface to talk to the driver and is typically limited to booting from NetWare servers.  Some very old versions of Windows server software also supported the original IBM-style RPL, but it was difficult to configure and still had the limitations of the RPL protocol itself.  RPL is pretty much done now, but it's still worth talking about.
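
As a rough sketch of that exchange (frame names follow the IBM RPL documentation; field layouts omitted here):

/* The RPL handshake described above, as an annotated enum. */
enum rpl_frame_type {
    RPL_FIND,               /* client -> RPL multicast address          */
    RPL_FOUND,              /* server -> client, directed reply         */
    RPL_SEND_FILE_REQUEST,  /* client asks for the bootstrap program    */
    RPL_FILE_DATA_RESPONSE  /* server frames carrying the download      */
};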



The Extensible Firmware Interface (EFI) is a newer technology initiative from the same IAL that created PXE.  EFI is a BIOS replacement technology that boots the system directly into protected mode.  This allows memory mapping of devices as well as full use of the 32-bit address space.  The legacy PXE/RPL environment runs the CPU in what's known as real mode, which limits the address space to 1MB (as in the original IBM PC) and requires I/O mapping of the LAN hardware.  The main goal of EFI is to make the system preboot environment as similar to the operating system as possible.  Since the base code part of the boot agent is hardware-independent, it was decided to include the base code in the EFI network protocol stack.  This way only the truly variable part, the network interface hardware, needs to bring its own driver.  That hardware interface is conceptually similar to the 16-bit UNDI driver, and is also referred to as a UNDI driver in the UEFI nomenclature.
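
In UEFI, the base code's services sit behind protocols in the firmware's protocol database rather than behind 16-bit entry points.  As a minimal sketch using the standard EDK II headers (error handling trimmed for brevity), a UEFI application reaches the network like this:

#include <Uefi.h>
#include <Library/UefiBootServicesTableLib.h>
#include <Protocol/SimpleNetwork.h>

EFI_STATUS EFIAPI UefiMain(IN EFI_HANDLE ImageHandle,
                           IN EFI_SYSTEM_TABLE *SystemTable)
{
    EFI_SIMPLE_NETWORK_PROTOCOL *Snp;
    EFI_STATUS Status;

    /* The protocol database replaces the old 16-bit PXENV+ entry points. */
    Status = gBS->LocateProtocol(&gEfiSimpleNetworkProtocolGuid,
                                 NULL, (VOID **)&Snp);
    if (EFI_ERROR(Status))
        return Status;

    /* Bring the interface up; Transmit()/Receive() then move frames. */
    Status = Snp->Start(Snp);
    if (!EFI_ERROR(Status))
        Status = Snp->Initialize(Snp, 0, 0);

    return Status;
}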



iSCSI Boot allows the system to boot from a remote, network-attached iSCSI disk.  It mimics a normal SCSI option ROM, but instead of using a local device, it uses information stored on board to connect to the network-attached storage.  This on-board information must be configured before use, so unlike PXE, which can operate almost straight out of the box, iSCSI Boot needs some extra configuration before it can be used.
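
The kind of information that has to be pre-configured looks roughly like the struct below.  This layout is purely illustrative (the real on-board NVM format is product-specific), but the fields are the usual iSCSI boot parameters:

#include <stdint.h>

struct iscsi_boot_config {
    char     initiator_iqn[224];  /* e.g. "iqn.2010-05.com.example:host1" */
    uint32_t initiator_ip;        /* static IPv4 address, or 0 to use DHCP */
    char     target_iqn[224];     /* iSCSI name of the remote disk */
    uint32_t target_ip;           /* portal address of the storage target */
    uint16_t target_port;         /* usually TCP port 3260 */
    uint16_t boot_lun;            /* LUN presented as the boot disk */
};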

This doesn't even cover some of the other implementations, such as EtherBoot.  Maybe we'll save those for another post.


Time for the review.

1)    Intel has created and currently supports a variety of network boot technologies

2)    PXE, iSCSI and EFI are all currently supported

3)    Thanks for using wired Intel® Ethernet products


Intel® 82580 Frequently Asked Questions

This is designed to be a quick overview of the most common 82580 questions we get.  It is not designed to replace a reading of the datasheet and spec update.  In fact, the datasheet and spec update are required reading, and if they conflict with this page, the datasheet and spec update are right and this page is wrong.  If you like this type of FAQ, leave us a comment and maybe we will do more of them.  Also, we may not update this page in the future, so I can't say this enough: check the datasheet and spec update!



1. Does the 82580 support EEPROMless designs?

No. The 82580 requires an EEPROM for proper operation.



2. What power supplies does the 82580 require?

3.3V, 1.8V and 1.0V.



3. Which power rails are sourced from main power?

All 82580 power should be derived from AUX power.



4. Which ports are available on the dual port SKU?

Ports 0 and 1 are available on the dual port SKU.



5. Are the MAC Addresses still automatically calculated like on the 82576?

No. Each MAC address must be programmed individually.



6. Which Ethernet interfaces does the Intel device driver support for the 82580?

As of this release, the following Ethernet interfaces are supported:

• Windows NDIS* - SerDes, Fiber and Copper. SGMII is not currently supported.

• Linux* - SerDes, Fiber, Copper and SGMII are supported.



7. Does the 82580 support Pre-boot?

Yes. It supports iSCSI, PXE and UEFI.



8. Which Ethernet interfaces does the Pre-boot environment support?

As of this release, the following Ethernet interfaces are supported:

• iSCSI* - SerDes, Fiber and Copper are supported in Windows and Linux. SGMII is not currently supported.

• PXE* - SerDes, Fiber, Copper and SGMII are all supported in Windows and Linux.

• UEFI* - SerDes, Fiber, Copper and SGMII are all supported in Windows and Linux.



9. What Device IDs are supported by the 82580?



10. Why do I see four devices with the same device ID?

Each port of the 82580 is considered a unique device and has its own Device ID. An 82580 with all 4 ports configured to the same interface type will show 4 Device IDs all with the same value.
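
You can see this from any PCI enumeration tool.  As a small illustrative sketch using the pciutils library (0x8086 is Intel's PCI vendor ID, and PCI base class 0x02 is a network controller); build with -lpci:

#include <stdio.h>
#include <pci/pci.h>

int main(void)
{
    struct pci_access *pacc = pci_alloc();
    struct pci_dev *dev;

    pci_init(pacc);
    pci_scan_bus(pacc);

    /* On a quad-port 82580 this prints four functions, all reporting
     * the same vendor/device ID pair. */
    for (dev = pacc->devices; dev; dev = dev->next) {
        pci_fill_info(dev, PCI_FILL_IDENT | PCI_FILL_CLASS);
        if (dev->vendor_id == 0x8086 && (dev->device_class >> 8) == 0x02)
            printf("%02x:%02x.%d  %04x:%04x\n",
                   dev->bus, dev->dev, dev->func,
                   dev->vendor_id, dev->device_id);
    }

    pci_cleanup(pacc);
    return 0;
}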



11. What’s the Device ID of the 82580 if each port is a different Ethernet type?

Device IDs are defined per port, not per silicon chip. This means there is no mixed-port-type device ID; just one device ID per port. Each port can be configured for one of the five Ethernet interface types (Copper, Fiber, Kx, Bx or SGMII).



12. Are different device drivers needed for each Ethernet Interface type?

No. The 82580 device driver supports (or will support) each of the Ethernet Interface types.



13. Does the Intel device driver support the 1588 protocol standard?

Though the 82580 supports the IEEE 1588 protocol standard, the device driver does not. Because each application is unique, customers need to develop their own custom driver to support this feature.
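
As a rough sketch of what such a custom driver would do, it could read the controller's running 1588 system-time counter from memory-mapped register space. The SYSTIML/SYSTIMH register names come from the datasheet and the open-source igb driver, but treat the offsets below as assumptions to verify against the datasheet:

#include <stdint.h>

#define SYSTIML 0x0B600  /* system time, low 32 bits (offset: assumption)  */
#define SYSTIMH 0x0B604  /* system time, high 32 bits (offset: assumption) */

/* Read one 32-bit register from the mapped BAR0 space. */
static inline uint32_t rd32(volatile uint8_t *bar0, uint32_t reg)
{
    return *(volatile uint32_t *)(bar0 + reg);
}

/* Read low before high, as the open-source igb driver does, to get a
 * coherent 64-bit snapshot of the running 1588 clock. */
uint64_t read_systim(volatile uint8_t *bar0)
{
    uint64_t lo = rd32(bar0, SYSTIML);
    uint64_t hi = rd32(bar0, SYSTIMH);
    return (hi << 32) | lo;
}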



14. Can I/O addressing be disabled in the 82580?

Yes. The 82580 has a 'disable I/O mode' feature for disabling the allocation of I/O port resources, for use in systems and environments (such as Windows and UEFI) where this feature is either not desirable or not supported. Legacy environment components (such as DOS, PXE and iSCSI Boot) that previously required I/O port access can now either use I/O mode if available or an alternate mechanism if I/O mode is disabled.



15. Can I monitor the 82580 temperature through the TSENSP/Z pins?

Yes and No. The TSENSP/Z pins can be used to measure temperature, but should only be used in the lab for measurement/characterization purposes. These pins should not be used in actual product systems to monitor and react to temperature changes. Reference the Thermal Diode section of the Thermal Management chapter of the Intel® 82580 Quad/Dual Gigabit Ethernet Controller Datasheet for details.



16. What are the SMBus slave addresses?

For the dev_starter EEPROM images that support SMBus manageability, the SMBus addresses are defined as:



17. How long can the Ethernet MDI trace lengths be?

In general, 82580 Ethernet trace lengths can be up to 8 inches, but the limit depends on the actual design and layout. Reference the Intel Long MDI Traces Design and Layout Guide for details. An NDA is required to access this document.


18. Why do I see valid MAC addresses in the dev_starter EEPROM images?

Default MAC addresses do exist in the 82580 dev_starter images so that users have working MAC addresses for design testing and validation. These MAC addresses should be overwritten with real MAC address values for production units. The addresses are:


19. How do I interpret the chip markings on my 82580?

Reference the Marking Diagram section in the Intel® 82580 Quad/Dual GbE Controller Spec Update document for a description of chip markings.



20. Does the 82580 have any ESD suppression on the MDI lines?

Yes. ESD suppression is built into the 82580.



21. I’m not seeing the ‘MDI flip chip’ option that was available on the 82576. Does it exist on the 82580?

No. The 82580 does not support the MDI flip chip feature. This means that port 0 cannot be swapped with port 3, and port 1 cannot be swapped with port 2.



22. The 82580 reference schematic doesn't show a 1.8V bias on the system-side magnetics center tap. Is this still required?

No. The 1.8V bias is not required for the 82580. The 82580’s output buffers are internally biased with a voltage mode driver.


23. How do I get the Linux driver?

The Linux driver for the 82580 can be downloaded from the SourceForge web site.



24. What do I do with unused pins?

Unused pins can be left unconnected, except for the manageability pins of the SMBus and NC-SI bus. These pins must be pulled up. Reference the Intel® 82580 Quad/Dual GbE Controller Checklists document for details.



25. Can other devices be connected to the 82580 SMBus?

For best performance, each 82580 should have its own dedicated SMBus link to the SMBus master device.



26. Can the Quad port dev_starter EEPROM images be used on a Dual-Port device or dual port configured Quad port device?

Yes. For a Quad-port device that’s configured as only a Dual-port, it is also recommended that bits [11:10] in Words 0x0E (port 2) and 0x120 (port 3) be set to 11b to actually disable ports 2 and 3, respectively.
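
A sketch of that edit on an EEPROM image loaded into memory, using the word offsets given above (and remember: the image checksum must be recomputed per the datasheet before programming):

#include <stdint.h>

/* Set bits [11:10] of a port's initialization control word to 11b. */
static void disable_port(uint16_t *eeprom_words, unsigned word_offset)
{
    eeprom_words[word_offset] |= (uint16_t)(3u << 10);
}

void disable_ports_2_and_3(uint16_t *eeprom_words)
{
    disable_port(eeprom_words, 0x0E);   /* port 2, per this answer */
    disable_port(eeprom_words, 0x120);  /* port 3, per this answer */
    /* Recompute the image checksum per the datasheet before flashing. */
}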



27. Can the latest EEPROM images be used on A0 silicon?

No. EEPROM images are not interchangeable between A0 and A1 silicon: A0 silicon requires A0 images, and A1 silicon requires A1 images.


A0 images can be found in CDI/IDL in:

o Intel® 82580 Gigabit Ethernet Controller SVK – Silicon Sample Kit - 14 Aug-2009; Doc ID #428529


A1 images can be found in CDI/IDL in:

o Intel® 82580 Gigabit Ethernet Controller – EEPROM Images – Rev 3.19 02-Feb-2010; Doc ID #441658


Contact your Intel representative for sample EEPROM images for the 82580.  An NDA is required to access this information.

A New Driver Release for Reported Connection Issues

You may have noticed community postings about desktops and laptops with slow connections after waking up, no connection after a driver upgrade, and similar connection issues. Well, we heard you, and we now have a new driver release to fix those problems.


Download version 15.2 for driver updates that I expect will work for most of you. The version 15.2 release focuses on the problems you reported in the community.


If the driver update does not work for you, we look forward to hearing from you. If you need help with a connection issue, include plenty of details when you post. And do not forget to check with the manufacturer of your computer for BIOS or other updates that might affect your Ethernet connection.


Thank you for using Intel® Ethernet and for participating in the Intel communities.


Mark H

IEEE Conformance Test Automation

At Intel we have some rather elaborate IEEE test automation capabilities. Every NIC (network interface card) and every LOM (LAN on motherboard) manufactured by Intel is tested for conformance against the IEEE 802.3 spec multiple times through our development cycle.  In addition to testing our own products, we can test customer designs that use Intel PHYs.




Over the years we have developed a client/server automation software package called "Network Product Platform Validation," or NPPV for short. This software works with a suite of different types of test instruments.  We use custom-designed test boards with relays on them to set up the different loads and instrumentation required for all of the different IEEE tests. These boards have integrated 100-ohm test loads for all four pairs, 50-to-100-ohm balun test fixtures, differential breakout fixtures, a common-mode output voltage test fixture, and an alien crosstalk test fixture. Below is a picture of the transmit (TX) board used to run the IEEE transmit testing.








We also use a receive (RX) board connected to cables of different lengths for the IEEE Bit Error Rate (BER) tests at 10Mb/100Mb/1000Mb, as well as for Alien Crosstalk and Common-Mode Noise Rejection testing.  To generate the traffic for the BER tests we usually use an Intel® 82540 NIC, one of the cleanest client network cards we have designed, to provide the highest-quality input signal for the test. Below is a picture of what our BER setup looks like. Each connector has a cable of a different length, and those can be concatenated to create the different lengths in our BER testing.  The automation software controls the switches on the board to make up the different lengths.
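
Conceptually, the relay control is just picking a combination of fixed cable segments whose lengths sum to the length a test calls for.  A toy C sketch of the idea (the segment lengths here are made up, not our real fixture's):

#include <stdio.h>

#define NSEG 4

int main(void)
{
    const int seg_m[NSEG] = { 5, 10, 20, 50 };  /* hypothetical segments */
    const int target = 65;                      /* desired test length   */

    /* Try every relay on/off combination (2^NSEG of them). */
    for (unsigned mask = 0; mask < (1u << NSEG); mask++) {
        int total = 0;
        for (int i = 0; i < NSEG; i++)
            if (mask & (1u << i))
                total += seg_m[i];
        if (total == target) {
            printf("close relays:");
            for (int i = 0; i < NSEG; i++)
                if (mask & (1u << i))
                    printf(" %dm", seg_m[i]);
            printf("\n");
        }
    }
    return 0;
}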








Before any of these boards are used, they go through a rigorous characterization to ensure that each onboard fixture and BER cable length is consistent with our other boards, and that they match the test results from a manual test fixture.




Besides these custom boards, our test stations use a variety of digitizing scopes with up to 8 GHz of bandwidth, network analyzers, high-bandwidth differential probes (less than 1 pF of capacitance and more than 1 GHz of bandwidth), a waveform generator, and a function generator.  A switch system controls the relays on the test boards to set up each test. We also have a VNA calibration load fixture that is used to calibrate the network analyzer before any return loss testing.  This is done before every return loss test because air temperature and humidity changes can affect the results.




The automation software is impressive as well. It is capable of resetting and configuring each instrument before the tests are run. Once a test is started, it works through each 10Mbit, 100Mbit, and Gigabit test, collecting and storing data on our secure server. The automation software can intelligently go through each test, tuning the trigger levels and other aspects of the scopes. For example, as the common-mode output voltage test for gigabit runs, it will physically change switches on our custom test board to route the output of Pair A through the common-mode output voltage fixture. It then sets the scope to the correct horizontal/vertical scale, vertical range, trigger type, trigger level, and trigger mode. It first steps the cursor in the positive direction just until the trigger is lost, then toggles the trigger back and forth to find the ideal point just before it can't trigger anymore. At that point the display persistence is changed, the common-mode noise is sampled, and the graph is generated.  We change the colors to make it look nice and pretty for the customer. :-)  This is repeated for the negative triggering and for pairs B through D.  In addition to our "pretty" graphs, we put the positive and negative noise levels in a table so the values can be easily viewed. Below is a snapshot from our final report showing the results of the common-mode output voltage test on an Intel customer reference board. In this particular example our PHY passed with lots of margin, which goes to show it is a quality PHY in a good design that follows our layout requirements and uses good-quality magnetics.
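
That trigger hunt is worth sketching in code.  Here is a self-contained C sketch where scope_set_trigger_level() and scope_is_triggering() stand in for hypothetical instrument-control wrappers (a real station would send SCPI commands to the scope):

#include <stdbool.h>
#include <stdio.h>

/* Simulated instrument wrappers so the sketch runs on its own. */
static double g_level;
static void scope_set_trigger_level(double volts) { g_level = volts; }
static bool scope_is_triggering(void)
{
    return g_level < 0.35;  /* pretend the signal peaks at 350 mV */
}

/* Step the trigger level up until triggering is lost, then back off to
 * the last level that still triggered -- the hunt described above. */
static double find_max_trigger_level(double start, double step)
{
    double level = start;
    scope_set_trigger_level(level);
    while (scope_is_triggering()) {
        level += step;
        scope_set_trigger_level(level);
    }
    level -= step;
    scope_set_trigger_level(level);
    return level;
}

int main(void)
{
    printf("max trigger level: %.3f V\n", find_max_trigger_level(0.0, 0.01));
    return 0;
}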








Once all of the tests are run, including the Bit Error Rate tests, the software builds a fairly comprehensive Excel spreadsheet (see example above) that has all of the data captured for 10Mbit, 100Mbit, and Gigabit. This is the final report that we can give to customers for analysis. This format makes it very easy to identify problems and see the results. It is also easy to compare results from different designs or different fab revisions to see how IEEE conformance is affected.  Over the years at Intel we have built up a lot of knowledge about a wide variety of design decisions and how they impact IEEE conformance.  This knowledge continues to evolve in the design guides and schematic/layout checklists that we provide to board design engineers to enable better LAN design from the beginning.




Our test stations are fairly complex, but they save a lot of time over manual testing and allow us to run thousands of different designs and board revisions through IEEE testing. We have this test capability in several geographies.  The majority of customer testing for ODMs (Original Design Manufacturers) is done in Taiwan, where many of them are located, to provide onsite support. Because a lot of Intel's design and architectural work is done in Israel, the initial IEEE testing and tuning of the LAN PHYs is done there. Israel might test a customer platform if it requires some kind of specialized NVM tuning to pass a particular test, but that is rare, because most IEEE failures can be fixed by focusing on the analog front end with better board designs or better magnetics.  Oregon, in the United States, has the most IEEE test stations and is where the automation software was developed.  A lot of NICs and customer platforms go through these stations.  The Oregon lab will soon be upgraded with new scopes that have the bandwidth to handle future 10 Gigabit testing.   That testing is being developed in Oregon and will eventually roll out to each worldwide lab.




If you have a board design with an Intel PHY for which you would like to see IEEE conformance results, contact your local Intel FAE to find out how you can get connected.


Shared code: Old versus New

Posted by dougb May 7, 2010

Regular users of our Linux* and FreeBSD* offerings probably noticed a shift in our design a while back.  In ye olde days, all the shared code was contained in one file.  We were cramming a ton of code into one file, and it was getting unsustainable.  The code was built around a ton of 'if' statements, and as the amount of supported hardware grew, so did the size of the 'if' and 'switch' statements.  We elected to break up the statements by segmenting the code into different files by hardware type.  To glue it all together we moved to function tables.  Some people don't like function tables, but we felt it was the best way to solve the problem.  The Linux kernel folks have some issues with the approach, so we sometimes have to do some work to compress the redirection tables.
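
For illustration, here is the shape of that change in miniature.  The names below are made up rather than lifted from the actual sources, but the idea is the same: each hardware family fills in a table once, and the common code only ever calls through the table.

#include <stdio.h>

/* Per-family operations table: each hardware file provides its own
 * implementations instead of adding branches to one giant switch. */
struct mac_ops {
    int (*init_hw)(void);
    int (*setup_link)(void);
};

static int init_hw_82575(void)    { puts("82575 init"); return 0; }
static int setup_link_82575(void) { puts("82575 link"); return 0; }

static const struct mac_ops ops_82575 = {
    .init_hw    = init_hw_82575,
    .setup_link = setup_link_82575,
};

/* Generic code dispatches through the table, never on device type;
 * the first failing step short-circuits the bring-up. */
static int bring_up(const struct mac_ops *ops)
{
    return ops->init_hw() || ops->setup_link();
}

int main(void) { return bring_up(&ops_82575); }

Adding a new controller then means adding a file and a table, not touching every switch statement in the tree.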

This also allows a smaller driver to be built if you're interested in that.  Right now you can do it using the #ifdef labels that are built into the driver.

Another common question about our shared code is the difference between the FreeBSD and Linux shared code.  The difference is mostly in the names.  Because the driver is called em or igb on FreeBSD and e1000, e1000e, or igb on Linux, the function names need to reflect the name of the base driver.  We have scripts that change the names to match the driver names.  Otherwise they are the same.  We have to keep ownership of the IP in the shared code, so we don't always accept input to it.  But that's a whole 'nother post.

Some drivers do remove those #ifdefs, so if you want the unfiltered shared code, please contact your Intel sales team, since some of the code may require a source license agreement (SLA).  Only a trained Intel representative can set up this SLA.  If you're interested in having us publish the near-raw code without the SLA, I'll need to hear it in the comments!  If you don't want to leave a comment, please contact your Intel field sales team and request it.

Big Review:

1)      The shared code principle lets us maximize quality and shorten development time, all while keeping the code readable.

2)      If you have a custom O/S, you can request access to the raw shared code from Intel to help accelerate your driver work.

3)      Thanks for using wired Intel® Ethernet
