

Here is a video outlining what VMDq is and why it adds value to your investment.  I’m posting about it rather than Patrick because, since he made the videos, he can’t very well go on and on about how interesting they are.  I don’t know what’s more impressive: the performance gains that VMDq provides, or the fact that this video is a screen capture of a set of slides!  Check out the last part of the animation; it shows the power of VMDq in an easy-to-understand way.


Coming soon are his follow-up video and an SR-IOV explainer!

Greetings from the Microsoft Management Summit (MMS) 2010!  I am a ‘booth monkey’ this week in the Intel Booth demonstrating iSCSI performance on the latest and greatest Intel® processor and Intel® 10 Gigabit Ethernet controller.


A few months ago Ben published a blog about achieving 1 million IOPS (I/O Operations Per Second) over iSCSI using standard components.  Earlier this week Doug published a follow-up blog explaining how this was achieved.


I won’t go into all the details listed in those blogs; I will just quickly note that it was done with the iSCSI initiator running on the Intel® Xeon® 5500 Processor series and the Intel® X520-DA 10 Gigabit direct-attach Ethernet adapter.


Here’s a photo of the slide we are showing in the booth at MMS:




We have an iSCSI demo running in the Intel booth.  However, we are not showing the 1 million IOPS discussed in Ben and Doug’s blogs.  Rather, we are showing nearly 1.3 million IOPS!



How was this ginormous number achieved, you may be wondering?  Well, the hardware setup is exactly the way Doug explained it in his blog, with one exception: we placed the latest and greatest Intel® processors in the server, replacing the Intel® Xeon® 5500 Processor series with the Intel® Xeon® 5600 Processor series.


Yup, that is all we did: we replaced last year’s Intel® processor with this year’s Intel® processor.  This simple change resulted in a nearly 30% increase in IOPS.
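The gain is simple arithmetic; here is a quick sketch, using the round numbers from the two demos (1 million IOPS on the Xeon 5500, nearly 1.3 million on the Xeon 5600):

```python
# Relative IOPS gain from swapping the Xeon 5500 for the Xeon 5600.
# Both figures are the approximate demo results quoted in the post.
old_iops = 1_000_000   # last year's demo (Xeon 5500 series)
new_iops = 1_300_000   # this year's MMS demo (Xeon 5600 series)

gain_pct = (new_iops - old_iops) / old_iops * 100
print(f"IOPS gain: {gain_pct:.0f}%")  # prints "IOPS gain: 30%"
```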


We think this is pretty cool, and worth blogging about!



1 Million IOPS Article Explained!

Posted by dougb Apr 20, 2010

Earlier, Ben posted in the Server Room about how we demonstrated that Intel® Ethernet products can be used to generate 1 million I/O Operations Per Second (IOPS).  That article left a lot of people asking, "How did you do that?" and today I'll talk a little bit about how we did it.  Your mileage may vary, as they like to say, but this will enable you to get your own test bed up and running at or near the 1 million IOPS mark.  I used this ingredients list to make our demo at NAB, which did just shy of 1 million IOPS for a week on the show floor.


First you'll need the right ingredients.  Like any recipe, you can make substitutions, but that may change things and not give you the same experience.  There are two sides of the equation: initiators and targets.  We used a single initiator, the system under test (SUT).  The faster-the-better rule applies here.  We used an Intel® Xeon® 5500 series processor platform with the fastest RAM configuration available.  The RAM needs to be as fast as possible, so watch how you install it.  Populating each memory channel in the system with a single stick of RAM keeps the speed at maximum, so we just used 12 GB of 1333 MHz RAM.  Make sure you use the x64 version of the O/S so you can actually use all of that RAM.  So on the initiator you will need a super-fast CPU, super-fast memory, and a super-fast LAN.  We used the Intel® X520-DA 10 Gigabit direct-attach adapter.
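The one-stick-per-channel arithmetic above works out like this.  Note the assumptions: a dual-socket board with three memory channels per socket (the Xeon 5500 series layout), and 2 GB DIMMs, the DIMM size is inferred from the 12 GB total, not stated in the post:

```python
# Why 12 GB: one DIMM per channel keeps DDR3 at its rated 1333 MHz.
# Assumptions: dual-socket Xeon 5500 platform, three memory channels
# per socket; 2 GB DIMM size is inferred, not stated in the post.
sockets = 2
channels_per_socket = 3   # Xeon 5500 series integrated memory controller
dimm_size_gb = 2          # one stick per channel, no extra slots populated

total_channels = sockets * channels_per_socket
total_ram_gb = total_channels * dimm_size_gb
print(f"{total_channels} channels x {dimm_size_gb} GB = {total_ram_gb} GB")
```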


For the second part of the equation, we needed a seriously fast target.  So we built one.  We used the StarWind* iSCSI product to build a modestly sized RAM drive array on 10 very fast machines.  Nearly as fast as the initiator, each machine also featured the X520-DA adapter.  Again, fast RAM, with enough for the RAM drive, but nothing excessive.  On each machine we made 5 RAM drives, for a total of 50 RAM drives.

Back to the initiator: we mapped all 50 drives into the Disk Manager and made them active.  Now we are ready for the test run.  We used the Iometer benchmarking tool, a free product that Intel open sourced a while back.  We used various Access Specifications, but the 512B I/O size (the smallest possible I/O size in Iometer) gives you the maximum possible IOPS.  The maximum possible IOPS at 512B when running 10Gb/s is 2.44 million.  The math is really easy: IOPS = bandwidth / I/O size.  We used 2 instances of dynamo, each with 25 workers.  Each worker was assigned one of the RAM drives to conduct its I/O.  We made sure RSS was on, with the maximum number of queues supported, to balance the work across all those cores.  The more cores you have, the more IOPS you can do before you saturate the link.  If you add more 10G cards, you should get more IOPS, up to the limits of the infrastructure.  QPI is fast, but even it has limits.
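That 2.44 million ceiling falls straight out of the formula.  A quick check, simplified by treating the full 10 Gb/s line rate as payload and ignoring Ethernet/IP/TCP/iSCSI header overhead (which is part of why real results come in lower):

```python
# Theoretical maximum IOPS for a 10 Gigabit link at a 512-byte I/O size.
# Simplification: the full line rate is treated as payload; protocol
# overhead is ignored.
line_rate_bps = 10 * 10**9    # 10 Gb/s
io_size_bytes = 512           # smallest I/O size Iometer offers

bandwidth_bytes_per_s = line_rate_bps / 8
max_iops = bandwidth_bytes_per_s / io_size_bytes
print(f"Theoretical max: {max_iops / 1e6:.2f} million IOPS")
# prints "Theoretical max: 2.44 million IOPS"
```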


Then we just sat back and collected the data.  The numbers would fluctuate a bit; that's part of the nature of the Windows* O/S.  As it does garbage collection, general housekeeping, and even driver statistics gathering, it costs some CPU cycles, which in turn costs some IOPS.  But we would peak above the 1 million milestone.


This might seem like a "downhill with a tail wind" type of performance measurement, but it is really just like the top speed of a high-performance sedan.  The speedometer might go to 160, but in average use the car might rarely see higher than 55.  This is the same case: under lab conditions, Intel® Ethernet products generate eye-popping numbers, and they do just as well in the real world.

Imagine what type of performance they can give your network.


Documentation Updates for April

Posted by dougb Apr 16, 2010

Here in the Wired Intel® Ethernet division we create or update around 100 documents and technical collateral pieces a year.  We realize that finding and using these documents can be hard, so in a new feature of the blog, we'll be talking about the latest updates and why you would want to get the new copy.  Just subscribe to the RSS feed and it will tell you every time we create a new document or update an old one.  We will also include some quick notes on changes and updates to help you find them.  Some documents are located inside Intel's information security website, so if a link doesn't work for you, ask your Intel field representative, Extended Sales Force team member, or your local distributor for help getting to that sensitive information.


As our first time out, here is the list of the public updates.

As part of our commitment to quality, Intel publicly discloses specification updates to its Wired Ethernet products.  Here are the updates for the month.

Intel® 82566 Gigabit Platform LAN Connect Specification Update, 2.3

Intel® 82563EB/82564EB Gigabit Platform LAN Connect Specification Update, 2.8

Intel® 82573 Family Gigabit Ethernet Controllers Specification Update, 2.9

Intel® 82574 Family Gigabit Ethernet Controller Specification Update, 3.1

Intel® 82576 Gigabit Ethernet Controller Specification Update, 2.7

Intel® 82583V Gigabit Ethernet Controller Specification Update, 2.4



We also make improvements to our datasheets.  If you are using either of these products you should update to these new levels.

Intel® 82576 Gigabit Ethernet Controller Datasheet, v2.47

Intel® 82580 Quad/Dual Gigabit Ethernet LAN Controller Datasheet, 2.4

You can access these documents and others at our Ethernet controller entry page.

Big review!

1)     Always check for updated documentation before starting a support case

2)     Wired Intel® Ethernet produces a bunch of document updates

3)     Thanks for using Wired Intel® Ethernet products.


Meet the Blogger at NAB

Posted by dougb Apr 6, 2010

I’ll be at the National Association of Broadcasters (NAB) convention in Las Vegas, starting April 12th and running to April 15th.  I’ll be doing booth duty most days of the show with the demo I created.  Come see the Intel booth, #N1323 in the North Hall.  There will be 21 demonstrations of various Intel technologies, including my demo of Unified Networking.  It features 10 Gigabit Intel® Ethernet for High Performance Storage Networking, with Fibre Channel, Fibre Channel over Ethernet, and iSCSI all running at once!  Mention the blog and leave your business card to win a prize!

See you in Vegas!

An update to the PXE software that is integrated into the BIOS of some new computers with built-in Intel® Gigabit Network Connections is causing PXE boot to fail with an error: 


PXE-E74 bad or missing pxe menu and or prompt information.


The issue appears when the boot menu sent from the PXE server is long enough to cause the information to be split into two option fields. In this case the client does not accept the information from both options, and you see the error.


Intel has made a fix available to manufacturers to integrate into a future BIOS update. While you are waiting for the next BIOS update, here is a workaround that will allow you to use PXE boot.



As a workaround you can configure your PXE server to shorten the length of the boot menu.
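As an illustration only (the post does not say which PXE server is in use), here is what a shortened boot menu might look like on a dnsmasq-based PXE server.  The `pxe-prompt` and `pxe-service` directives are dnsmasq's; the menu entries and boot file name are hypothetical:

```
# dnsmasq.conf fragment (hypothetical example) — keep the PXE boot menu
# short so the menu and prompt information fit in a single option field.
pxe-prompt="Press F8 for menu", 10
pxe-service=x86PC, "Local boot", 0
pxe-service=x86PC, "Install", pxelinux
```

Trimming menu entries, or shortening their text, reduces the size of the boot menu information the server sends, which is what this workaround relies on.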


If you have an add-on Ethernet Adapter

If you upgraded the Intel® Boot Agent on a plug-in adapter, an updated flash update utility (IBAUtil) containing the fix is available.

Do not attempt to use IBAUtil on a built-in network connection. IBAUtil can only update PXE on a plug-in adapter. If you have a built-in network connection, you will have to use the workaround until a fix is provided by your computer manufacturer.


Download IBAUtil for Intel® Network Adapters.
