Some more information:
We are using Dynamic LACP mode. I know the switch side is correct because I can create multiple VLANs within the new team (which disables VMDQ) and it works without issues. Intel doesn't recommend this because once you bind Hyper-V to that adapter, PROSet doesn't allow any more VLANs to be created. We need to be able to grow and configure VLANs via the VLAN ID within the VM.
Try this. Disable VMQ on the adapters. Create the VLAN only on the Hyper-V switch or on the individual VM. Do not configure any VLAN on the NIC team. As I recall, someone else had working communications with this type of setup, but communications stopped when they enabled VMQ.
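On Server 2012 and later these steps can be scripted with the standard NetAdapter and Hyper-V PowerShell modules; on the Server 2008 R2 hosts discussed in this thread, VMQ is toggled in the adapter's Advanced properties instead. A minimal sketch, assuming hypothetical adapter and VM names ("NIC1", "VM1") and a placeholder VLAN ID:

```powershell
# Disable VMQ on the physical adapter (Server 2012+ cmdlet; on 2008 R2,
# use Device Manager -> adapter -> Advanced -> Virtual Machine Queues).
Disable-NetAdapterVmq -Name "NIC1"

# Tag the VLAN on the VM's virtual NIC, not on the NIC team:
Set-VMNetworkAdapterVlan -VMName "VM1" -Access -VlanId 10

# Verify the resulting settings:
Get-NetAdapterVmq -Name "NIC1"
Get-VMNetworkAdapterVlan -VMName "VM1"
```

The adapter name, VM name, and VLAN ID above are assumptions for illustration; substitute your own.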
Thank you for your response. That did fix it. I set up 3 hosts without using VMDQ. For giggles, I set one of them up with VMDQ and the VMs couldn't communicate. It is a shame that I cannot take advantage of VMDQ. I have already submitted a case and I hope this bug will be taken care of in the future. Nevertheless, this fixes my immediate needs. Thanks again for your help.
I am glad that you were able to use the adapters with your virtual machines. The issue with losing communications when enabling virtual machine queues is under investigation by Intel. A future fix should allow you to enable the feature on your adapters. I plan to post a further response whenever a fix is released.
To pick up this old thread: is there any new information about the issue?
We have done some further investigation with this software setup:
We have a six-node Hyper-V cluster on Server 2008 R2 SP1 with Intel X520 and Intel Quad Port ET mezzanine adapters.
It seems to be partially fixed. We use the latest boot ROM and driver with the Intel Quad Port ET adapter. Everything works fine with VMQ, except that we can't use VLANs in combination with Hyper-V legacy network adapters. No problem with synthetic adapters. Cross-checking the scenario with our previously used Broadcom adapters, VMQ works fine.
We tested without teaming and with the static aggregation, LACP, and VMLB modes. The result is the same in all cases: the combination of VLAN, VMDQ, and a legacy network adapter doesn't work. Disabling VMQ fixes the issue.
The only option we see at the moment to get VMQ working is to use no VLAN and set the switch port to access mode.
Will this be fixed in the near future? With a growing number of VMs per host, disabling VMQ is a real performance hit.
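For reference, putting the host-facing port into access mode on a PowerConnect 6348 looks roughly like the following (a sketch; the interface identifier and VLAN ID are placeholders, not taken from this thread):

```
configure
interface ethernet 1/g1
switchport mode access
switchport access vlan 10
exit
```

This leaves the port untagged in a single VLAN, so no VLAN tagging is needed on the host side.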
Another side question: is there any recommendation from Intel's side as to which teaming mode is preferable in combination with Hyper-V, assuming all modes are technically available? We use Dell M610 blades with PowerConnect 6348 blade switches.
Hi, I have just spent weeks trying to fix an issue with my two three-node clusters where servers could not communicate between nodes. I have had cases open with Microsoft and Dell (I use Dell M610s), to no avail.
I found this post tonight and it seems to have resolved my issue. I was using a single NIC (no team) with the Hyper-V profile and it did not work. As per this post, I have turned off VMQ on all nodes and communications are working again. I am using driver version 15.5.2.
Is there a fix for this coming soon? I could really use VMQ.
Your problem description didn't provide much information.
There are much newer drivers available that include various fixes, some for VMQ. I would try the new drivers available here: