dougb

IO Util Tool

Posted by dougb Jul 30, 2010

In order to support pre-boot and legacy technologies, our PCI Express MAC families support IO Map mode access to the controller.  IO Map mode uses OUT and IN instructions to move data, a holdover from the old ISA days.  While it works really well for real mode software, it can be very limiting for large install bases.  The IO address space is only 64 KB, and with some technologies taking large chunks of it, it is easy to run out of space.  When a BIOS runs out of IO space, what happens next is always messy and often fatal.  While this “back door” is needed for PXE, NDIS2 and iSCSI boot, we have a method within our non-volatile storage to turn it off.  We’ve had this ability for a long time, but this is the first time we’ve shipped an end user tool for turning IO mode off.

IOUtil lets you configure whether or not the adapter will request IO map mode access to the device.  It is located in the \tools\apps\IOUtil folder on the release media, or in the 15.3 or later Administrative Tools for Intel® Network Adapters webpack.

This utility is designed for system integrators. Only use it if your system has a high port density or you know you don’t need IO mapped access to the card.  Disabling IO access will prevent some pre-boot and DOS driver technologies from working.

So don’t turn it off just because you can.  Once you get more than a couple of quad port adapters, I would consider it.  If you have a bunch of other cards and a bunch of Intel® Ethernet adapters in the system and it won’t POST, remove a couple of the cards so the system boots, and then try IOUtil.

Review!

1)      If you don’t need IO mapped access to the HW, turn it off using IOUtil

2)     Be careful if you do: some things won’t work

3)     Thanks for using Intel® Ethernet

Part 3 of our “probably-going-to-regret-calling-it-this” trilogy of “how-to” videos for Fibre Channel over Ethernet (FCoE) is by far the most in-depth one.  We do it all in fewer than three minutes.  No fancy GUIs here, we’ve got work to do!  Linux* segments all the layers of the FCoE onion a bit more clearly than Windows* does, so there is more to configure.  Mike is mute this time, letting the screen do the talking.  You can install the FCoE package using your favorite method: yum, zypper, signal flags or hand compiling.  After that, use tools like dcbtool, fcoeadm, lsscsi and old favorites like ethtool to get it up and running.  Once it’s all attached, use fdisk and mkfs to bring it all online.  A condensed recap of the commands is sketched below.
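For reference, here is a rough sketch of the sequence the video walks through, assuming the open-fcoe tools are already installed.  The interface name (eth3) and disk (/dev/sdb) are placeholders; yours will differ.

    # enable DCB, priority flow control, and the FCoE application TLV on the port
    dcbtool sc eth3 dcb on
    dcbtool sc eth3 pfc e:1
    dcbtool sc eth3 app:fcoe e:1

    # confirm the link is up
    ethtool eth3

    # create the FCoE instance and check the fabric login
    fcoeadm -c eth3
    fcoeadm -i

    # the remote LUNs show up as ordinary SCSI disks
    lsscsi

    # partition, format and mount as usual
    fdisk /dev/sdb
    mkfs.ext3 /dev/sdb1
    mount /dev/sdb1 /mnt/fcoe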

1)      SLES takes more steps to configure than other OSes, but this video will get you going.

2)      While some people are saying FCoE is hard, Intel is doing its best to make it easy

3)      Thanks for using wired Intel® Ethernet.

The Intel(R) Ethernet Flash Firmware Utility (BootUtil) is a DOS/EFI utility that can be used to program the PCI option ROM on the flash memory of supported Intel PCI and PCI Express-based network adapters, and to update configurations.

BootUtil replaces existing utilities and provides the functionality of the older IBAUTIL, ISCSIUTL, LANUTIL, and FLAUTIL. BootUtil supports all the adapters supported by the previous utilities.

NOTE: Updating the adapter's flash memory using BootUtil will erase any existing firmware image from the flash memory.

 

Intel provides the following flash firmware in FLB file format for programming to the flash memory:

- Intel(R) Boot Agent as PXE Option ROM for legacy BIOS
- Intel(R) iSCSI Remote Boot as iSCSI Option ROM for legacy BIOS
- Network Connectivity, UEFI network driver

OEMs may provide custom flash firmware images for OEM network adapters; please refer to the instructions given by the OEM. BootUtil allows the user to flash supported firmware to the adapter from the included master FLB file. This option ROM includes the PXE, iSCSI and UEFI drivers, and the image is programmed to the flash memory in one operation. BootUtil will also build the required combo images for supported adapters and program those images to the flash as well. Since both discrete and combo images are supported, note that the -BOOTENABLE command works ONLY on combo images.

The master FLB file (BOOTIMG.FLB) is the new container for all of the Intel(R) boot Option ROMs. It replaces the separate FLB files for iSCSI, PXE, and EFI; BootUtil still supports the older FLB files to maintain backwards compatibility with the previous utilities.

Run without command line options, BootUtil displays a list of all supported Intel network ports in the system. BootUtil also allows the user to enable or disable the flash memory on specific ports, using the -FLASHENABLE or -FLASHDISABLE options, in order to control access to the firmware from the system.

BootUtil allows the user to individually set the iSCSI and PXE boot configurations with -NIC=xx -[OPTION]=[VALUE] options. The -I option is iSCSI specific and will not work for PXE configurations.

NOTES: No configuration settings are supported for the UEFI driver. Functionality from the previous utility, IBAUTIL, is preserved in BootUtil.


 

BootUtil is located on the software installation CD in the \APPS\BootUtil directory. Check the Intel Customer Support website for the latest information and component updates.

 

The syntax for issuing BootUtil command line options is:

 

BOOTUTIL -[OPTION] or -[OPTION]=[VALUE]

 

The readme that ships with the utility lists all the parameters it supports. BootUtil must be run with the computer booted to DOS only. A reboot is required after executing BootUtil for the updated settings to take effect.

 

 

BootUtil accepts one executable option, plus its associated non-executable options, per invocation. If conflicting executable options (such as -FLASHENABLE and -UPDATE used together) are supplied, BootUtil exits with an error. A full listing of the error codes is in the readme.

 

The following examples show how to enter some typical BootUtil command lines:

Example 1:
To enable the flash firmware on the first network adapter so that the system is capable of executing the flash firmware:
    BootUtil -NIC=1 -FLASHENABLE

Example 2:
To disable the flash firmware on all the network adapters:
    BootUtil -ALL -FD

Example 3:
To display the BootUtil FLB flash firmware types and versions:
    BootUtil -IMAGEVERSION

Example 4:
To update all ports of a supported NIC with PXE:
    1. BootUtil -UP=PXE -ALL (assumes the input file is BOOTIMG.FLB)
    2. BootUtil -UP=PXE -ALL -FILE=BOOTIMG.FLB (explicit user-specified file)

Example 5:
To update a combo image on a supported adapter (e.g., PXE+iSCSI):
    BootUtil -UP=COMBO -NIC=2 -FILE=BOOTIMG.FLB

The above command will succeed if the PXE+iSCSI combination is supported on NIC #2. If it is not, an error is displayed to the user.

NOTE: The -UP and -UPDATE commands are equivalent and interchangeable.

Example 6:
To enable the PXE firmware on the third network port in the system:
    BootUtil -BOOTENABLE=PXE -NIC=3

NOTE: This command will work only if PXE is part of a combo Option ROM and not a discrete Option ROM.

Example 7:
To disable the firmware on the second network port in the system:
    BootUtil -NIC=2 -BOOTENABLE=DISABLED

Example 8:
To get help descriptions:
    BootUtil -?

Example 9:
To enable DHCP for the iSCSI initiator on all the network ports in the system:
    BootUtil -INITIATORDHCP=ENABLE -ALL

Example 10:
To load the iSCSI boot configurations from a text script file to the first network port:
    BootUtil -I=CONFIG.TXT -NIC=1
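Putting several of these together, a typical flash update session might look like the sketch below.  This uses only the options documented above; the NIC number is just an example and will vary by system.

    BootUtil                            (no options: list all supported Intel network ports)
    BootUtil -IMAGEVERSION              (check the firmware types and versions in the FLB)
    BootUtil -UP=PXE -ALL               (flash the PXE image to every supported port)
    BootUtil -BOOTENABLE=PXE -NIC=1     (enable PXE boot on port 1; combo images only)

Remember to reboot afterwards so the updated settings take effect.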

 

 

Big review!

 

1)      BootUtil is THE tool for updating and configuring your NIC option ROM.

2)      The readme is full of all the parameters, error codes and general goodness.

3)      Thanks for using Intel® Ethernet.

Our first video showed how to get Fibre Channel over Ethernet (FCoE) installed and basically configured on Windows* Server 2008 R2.  What if you wanted to boot that server from FCoE?  After our articles on PXE, this one is pretty easy.  Again, Mike walks through the configuration process and shows this isn’t just foilware, but working technology.  Once CTRL+D is pressed, the magic begins.  Mike is the quiet type, so there will be times when he lets the screen do the talking.  Rest assured that the audio is working correctly.  Mike then wraps up with “BootUtil,” the super configuration tool for our preboot technologies.  BootUtil is covered in more detail here.

Next up in our video “how-to” cavalcade is a visit to the penguins! Show that review!

1)      Our FCoE solution spans from boot to O/S present.

2)      While some people are saying FCoE is hard, Intel is doing its best to make it easy.

3)      Thanks for using wired Intel® Ethernet.

One of the problems with new technologies is getting them installed correctly.  Fibre Channel over Ethernet (FCoE) is a compound of two separate technologies.  (Technically it’s three, since Data Center Bridging must be used to get the Ethernet loss rates down to the levels a storage network requires.  That’s a whole ‘nother post.)  So it’s not just an Ethernet driver that has to be installed; it requires a storage driver, a protocol driver and an Ethernet driver.  Our installer puts them all together.  This video is our man Mike doing a walkthrough of the installation, showing how to use the Intel® PROSet tool to see how FCoE and DCB can be configured.  Mike finishes it off by bringing the FCoE volumes online using Disk Manager and then using the new storage.  Before you try FCoE on Server 2008 R2, take a quick view of this show.  It runs just a touch over five minutes.

Big review!

1)      Intel provides a walkthrough of steps to get your FCoE running on Server 2008 R2

2)      While some people are saying FCoE is hard, Intel is doing its best to make it easy

3)      Thanks for using wired Intel® Ethernet.

According to studies, the average physical server in a datacenter running a virtualization solution uses eight GbE ports:

 

- Four for VM Ethernet Traffic
    - Two for Traffic
    - Two for Fault Tolerance (failover)
- Two for Live Migration
    - One for the Live Migration
    - One for Fault Tolerance (failover)
- One for Backup
- One for Management

 

That is a lot of ports going from a single server to a top-of-rack switch.  Now imagine a fully populated rack.  Let’s say there are 20 servers: that’s 160 physical cables.

 

 

Then stop and think for a moment about those eight GbE ports on the single server.  That’s 8 Gbps of potential bandwidth.  However, even if you are fully using all of their capabilities (say two Gbps going to the VMs while actively doing Live Migration, backup and management all at the same time), the three ports allocated for fault tolerance will still sit unused.

 

 

So now you have eight ports, all with their associated cabling costs and the electrical cost of keeping them running, yet at best you are only actually using five of them at a time while still paying for all eight.  Kind of makes you stop and think.

 

 

Now imagine, if you will, replacing those eight GbE connections with just two 10 GbE connections.  By going to only two ports at 10 Gbps, you not only reduce the number of cables by 75%, you also make better use of network bandwidth, because both ports can carry traffic at all times instead of sitting on stand-by for fault tolerance.

 

 

Consider having the VM traffic on one of the 10 GbE ports, with the other port configured for Live Migration, backup and management.  Then simply configure each port to be fault tolerant for the other, so that in the case of a failure all traffic fails over to a single port, which at 10 Gbps is still faster than all eight of the one Gbps ports used in a traditional environment.

 

 

So back to the 20 servers in a rack: you can go from 160 cables when using GbE to 40 when using 10 GbE.  Have I piqued your curiosity?  If so, we have a white paper that may interest you very much.  The whitepaper explains the benefits of moving from multiple one GbE ports to two 10 GbE ports in a VMware* vSphere 4* environment.  (We are working on another paper for the Hyper-V* environment as well.)

 

 

This whitepaper has been widely accepted throughout the industry, and has been re-branded and re-purposed by many other companies.

 

 

The original Intel® whitepaper is located here: http://download.intel.com/support/network/sb/10gbe_vsphere_wp_final.pdf

 

VMware Communities Link

http://communities.vmware.com/docs/DOC-12168

 

Dell* co-branded white paper

http://www.intelethernet-dell.com/pdf/11102_NTL-DEll_10GbE_vSphere_WP_FINAL.PDF

 

Dell Power Solutions magazine article

http://www.dell.com/downloads/global/power/ps2q10-20100419-intel.pdf

 

VMware released a co-branded version of the Virtualization best practices white paper – Link

http://download.intel.com/support/network/sb/intelvmware10gbevsphere.pdf

 

Fujitsu* co-branded white paper

https://globalsp.ts.fujitsu.com/dmsp/docs/intel_10gbe_best_practices_fujitsu_wp.pdf

 

 

We hope you find it of interest and encourage your feedback.

Intel is pleased to announce that the Intel® Ethernet X520 Server Adapters now support FCoE (Fibre Channel over Ethernet).  Specifically, Intel has made available an FCoE initiator for Windows* Server 2008 and 2003.  You can find this initiator on both the CD that ships with the product and Intel’s support site as “Intel® Network Connections Software Release 15.4.”  This release provides a WHQL-certified Windows Server 2003 and Server 2008 stack that is optimized for the Intel X520 product family.  Linux*-based FCoE initiators from distributions including Red Hat*, Novell*, and Oracle* are expected later this year.  The FCoE initiator is going through the qualification process at various storage target vendors.


Storage certification status is as follows (warning!  HTML tables!):

    Vendor                 Certification Completion
    Microsoft* WHQL        Now
    Cisco* Nexus* 5000     Target Q3 2010
    NetApp*                Target Q3 2010
    EMC Elabs*             Target Q3 2010

 


Current validated switch list:

    Type           Vendor       Model
    FCoE Switch    Cisco        Nexus 5020
                                Nexus 5010
                   Brocade*     Brocade 8000
    FC Switch      Cisco*       MDS 9216A
                                MDS 9124
                                MDS 9222i
                   Brocade*     Brocade 4800

FCoE encapsulates Fibre Channel* frames over standard Ethernet networks, enabling Fibre Channel to take advantage of 10 GbE networks while preserving its native protocol.  The X520 server adapters offer FCoE hardware acceleration to provide performance comparable to FC HBAs.  The X520 family of adapters supports Data Center Bridging (DCB), which allows customers to configure traffic classes and priorities to deliver a lossless Ethernet fabric for FCoE traffic.  By incorporating the FCoE initiator, an Intel Ethernet X520 Server Adapter reduces TCO by eliminating redundant fabrics and saving the cost of expensive FC HBAs and FC switch ports.  The X520 Server Adapter family now supports all of your communication needs, including LAN, NAS, iSCSI, and FCoE.

Customers seeking direct support with the FCoE initiator can contact: fcoesupport@intel.com

The product is expected to begin shipping with the FCoE initiator on June 30, 2010.
