Has anyone successfully used the Intel X520 DA2 10GE adapter to a Cisco Nexus switch using the Cisco SFP+ 7m active cables? The Cisco part number of the cable is SFP-H10GB-ACU7M.
We have around 80 of these NICs, two per server, in HP DL380 G7 servers running VMware ESXi 4.1 (Releasebuild-582267). While the link comes up OK, on approximately 6 of the servers ESXi logs "vmnicx: NIC Link is Down" immediately followed by "vmnicx: NIC Link is Up 10Gbps". In the worst case we're seeing these messages every 60 seconds or so on a server, but we don't see any corresponding link down/up on the Cisco switch.
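For anyone trying to quantify how often this is happening across vmnics, here's a rough sketch that counts the "Link is Down" events per NIC in a saved log excerpt. The log lines below are illustrative only (the timestamp format is approximate), not copied from our servers:

```python
import re

# Illustrative vmkernel log excerpt; real ESXi log formatting will differ somewhat
log = """\
vmkernel: 0:01:00:00.000 vmnic2: NIC Link is Down
vmkernel: 0:01:00:01.000 vmnic2: NIC Link is Up 10Gbps
vmkernel: 0:01:01:02.000 vmnic2: NIC Link is Down
vmkernel: 0:01:01:03.000 vmnic2: NIC Link is Up 10Gbps
"""

# Count link-down events per vmnic
flaps = {}
for line in log.splitlines():
    m = re.search(r"(vmnic\d+): NIC Link is Down", line)
    if m:
        flaps[m.group(1)] = flaps.get(m.group(1), 0) + 1

print(flaps)  # {'vmnic2': 2}
```

Pointing the same loop at a real log file instead of the sample string gives a quick per-NIC flap count to compare across servers.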
I've been looking at the quoted specifications for both the Intel NIC and the Cisco switch and have found the following.
In the Intel note "Which SFP+ modules, SFP modules, and cables can I use with the X520 Series?" there's a question "What are the SFP+ direct attach copper cable requirements for the Intel Ethernet Server Adapter X520 series" with the answer stating "Any SFP+ passive or active limiting direct attach copper cable that complies with the SFF-8431 v4.1 and SFF-8472 v10.4 specifications".
When I look at the Cisco web site for the specifications their cables are built to, their Cisco 10GBASE SFP+ Modules Data Sheet states the standards supported are SFP+ MSA SFF-8431 (Optical Modules, Active Optical Cables, and Passive Twinax cables) and SFP+ MSA SFF-8461 (Active Twinax cables). Additionally, their Twinax Cables Certification Matrix for Cisco Nexus 2000 and Nexus 5000 Series Switches shows the only supported twinax cables over 5m are Cisco's own.
Does this mean that the Intel NIC and the Cisco switches do not support a common standard for active SFP+ cables, or have I misunderstood the documentation?
Thanks in advance.
I am rapidly coming to the conclusion that there is no standard that anyone is following regarding SFP+ connectors.
I have an HP switch, and HP tells me they will only support their own SFP+ direct attach cables. I cannot get the Intel NIC to recognize that it is connected; it shows as "Down" no matter what I do. I've also been told the latest driver will solve the problem, but that hasn't worked either (for Linux or VMware).
Maybe the X520-DA2's requirements make it incompatible with anything but an Intel cable, which most likely will not work with my switch (that and/or it is just an expensive piece of junk).
Our adapters work with HP cables (and many other brands too). You do not have to use Intel cables. I am not sure what is happening in your case. Are you getting messages in the logs about the driver not initializing because of an unsupported module? You would see a message like that if the driver were rejecting the cable.
I've got answers from Cisco on the standards they support and they state:
We offer passive cables in lengths of 1, 3 and 5 meters, and active cables in lengths of 7 and 10 meters.
As per Cisco engineering specs, passives are compliant to SFF 8431, 8472, they have the 03h identifier for SFP/SFP+ coded in the EEPROM. Actives are compliant to the same plus SFF 8461 which specifies requirements for DC blocking capacitors.
The actives are compliant to ALL: 8431, 8472 and 8461. 8461 is a specific requirement for actives (DC blocking capacitors), on top of the 8431 and 8472 requirements.
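If you want to check what a given cable actually reports, the identifier and cable-technology fields Cisco mentions live in the module EEPROM, which can be dumped on Linux with `ethtool -m`. Here's a minimal decoding sketch, assuming the SFF-8472 A0h layout (byte 0 is the identifier, 03h meaning SFP/SFP+; byte 8 carries the passive/active cable bits); the sample dump below is hypothetical:

```python
def decode_sfp(eeprom: bytes) -> dict:
    """Decode a few SFF-8472 A0h fields from a raw SFP+ EEPROM dump."""
    tech = eeprom[8]
    return {
        "identifier": eeprom[0],             # 0x03 means SFP/SFP+
        "passive_cable": bool(tech & 0x04),  # byte 8, bit 2
        "active_cable": bool(tech & 0x08),   # byte 8, bit 3
    }

# Hypothetical dump of an active twinax cable: byte 0 = 0x03, byte 8 = 0x08
sample = bytes([0x03] + [0x00] * 7 + [0x08] + [0x00] * 3)
info = decode_sfp(sample)
print(info)  # {'identifier': 3, 'passive_cable': False, 'active_cable': True}
```

Comparing what the Cisco cable reports in these bytes against what the ixgbe driver expects might show whether the NIC is reading the cable as active at all.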
So we seem to be good from the point of view of both vendors following the same standards, but we still have an issue.
I can take a cable from a link that's working OK and move it to the link that's not, and the fault stays with the NIC/switch port combination. So we deduce it's a bad switch port or NIC. So I plug the possibly bad NIC into the switch port that doesn't have any issues, and the problem's gone. Based on this we decide it must be a bad switch port. So I then plug a NIC that's not seen any problems into the possibly bad switch port.... and no problem.
In short, it looks like some combinations of switch port, cable and NIC work OK, and others don't. As a result we're at the point where we have to find what we think is a good combination using trial and error. Hardly the way we should be doing things.
The other thing that's weird is that only the NIC sees the link drop. I'm told that the signaling is end-to-end, and if that's the case, I'd expect both ends to see a state change.
All in all, very frustrating, with very little progress.
I am sorry to hear that getting the right set of cables, NICs and switch ports to work together has been so much trouble. I am not familiar with the Cisco SFP-H10GB-ACU7M cables. However, I do know that the drivers have had updates to address cable compatibility issues. The drivers might make a difference in the link going down and up. I will also check with my contacts to see if I can find out anything specific to those cables.
I can't say for sure that the link drop messages you are getting will be resolved by upgrading the driver, but I do know that bug fixes newer than your driver helped address some direct attach cable issues. I would highly recommend giving the new driver a try on at least one of the servers where you are seeing the link up and down messages. You can get the latest Linux drivers from SourceForge or from the Intel Download Center.
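Before testing, it's worth confirming the driver you're running really is older than the one with the fixes (on Linux, `ethtool -i` on the interface reports the loaded driver version). A trivial sketch for comparing dotted version strings; both version numbers here are made up for illustration:

```python
def ver_tuple(v: str) -> tuple:
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(x) for x in v.split("."))

installed = "2.0.62.0"   # hypothetical version reported by ethtool -i
candidate = "3.4.24.0"   # hypothetical latest download
print(ver_tuple(candidate) > ver_tuple(installed))  # True
```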
For finding the latest VMware driver check out the directions at:
Let me know if the new driver fixes the issue for you.