This is a bit puzzling. Have you tried swapping the adapter with one of the adapters that you know is working in one of the other servers? If you have not already tried a swap, that would quickly tell you whether the adapter is defective. If it is the adapter, you can contact Intel Customer Support for an RMA. If it is not the adapter, then we will need to look for other causes. If the server is a different model or configured differently, there might be a clue in what is different.
Also, the PCIe slot can make a difference. If the adapter is good, do you have a different slot you can try the adapter in?
If you have Intel(R) PROSet installed, you can look at the Link Speed tab under Identify Adapter to check the PCIe slot width negotiation and confirm whether the adapter is really running at x8 instead of x4. (Screenshots below.)
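As an aside, if you ever need to check the negotiated link width without PROSet (for example from a Linux live environment on the same hardware), something like the following should work — the bus address 03:00.0 here is just a placeholder for whatever address your adapter shows up at:

```shell
# Find the PCI bus address of the Ethernet adapter
lspci | grep -i ethernet

# Dump link capability (LnkCap) and negotiated link status (LnkSta)
# for that device. "Width x8" in LnkSta means the slot actually
# negotiated x8; if LnkCap says x8 but LnkSta says x4, the slot or
# riser is limiting the link.
sudo lspci -vv -s 03:00.0 | grep -E 'LnkCap|LnkSta'
```

This is just an alternative cross-check; PROSet's Link Speed tab reports the same negotiated width.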
I can't seem to post pictures yet, but we changed to a new card, changed the links to 5m cables, and verified that the card is in an x8 slot. We also enabled Intel I/OAT, since it was not turned on in the BIOS, installed the updated drivers, and rebooted. Same result, so we switched back to the 10m cables.
Thanks for sending those photos to me in email. Can you tell us more about the type of traffic on the ports? For example, is the storage traffic using iSCSI, NFS, or both? I thought I might be able to find a case study where someone did something similar that we could work from, in case there are configuration tweaks that might help. Most of the case studies use a newer OS and are often on newer platforms, but we might still find something helpful.
Also, what tuning have you done on the connection? Is everything using the default settings right now?
One of my colleagues thought this hotfix might help: http://support.microsoft.com/kb/972071. My understanding is that it changes the priority of the process that handles the packets. Let us know if it helps.