Same/similar problem here.
We do a full uninstall and then reinstall the driver version we want to try. Driver e1q62x64.sys dated Dec 2009 seems to work with VMDq enabled, after some trouble getting it installed; drivers dated July or Sep 2010 won't work with VMDq enabled at all. Over time, either version seems to cause issues in some of the VMs: mostly the file server will slowly lose connectivity, files are left open, and connections get progressively slower until the server becomes unreachable. It can take anywhere from a few hours to weeks or months between a restart and the server exhibiting this behavior. Even if we disable VMDq, something is wrong that eventually surfaces (not to mention what appears to be a performance problem).
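For anyone who wants to rule VMDq out from the command line rather than through the PROSet GUI, the driver's standardized NDIS `*VMQ` registry keyword can be set directly. This is only a sketch: the `0007` instance number is a placeholder that varies per machine (read it from the adapter's numbered subkey first), and you should verify the keyword exists for your driver version before relying on it.

```shell
:: Find which numbered subkey belongs to the Intel port by checking the
:: DriverDesc value under each one.
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}" /s /f "DriverDesc"

:: Set the standardized *VMQ keyword to 0 to disable VMDq/VMQ on that port.
:: The "0007" instance below is only an example - substitute your own.
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}\0007" /v "*VMQ" /t REG_SZ /d "0" /f

:: Disable/re-enable the adapter (or reboot) so the driver re-reads the keyword.
```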
We first get a BSOD when we assign the team to the Hyper-V switch. If we then manually uncheck the Microsoft Virtual Switch protocol on the NIC, we can associate the team NIC with the Hyper-V switch, and we're usually able to get VMs to come online without causing the host to BSOD.
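If others want to script that unbinding step instead of clicking through the NIC's property page, Microsoft's nvspbind utility can toggle the virtual switch protocol binding from the command line. A sketch only: the `vms_pp` protocol name and the flags are from memory of the tool's help output (verify with `nvspbind /?`), and "Team0" is a placeholder for your team NIC's name:

```shell
:: List current bindings so you can confirm the team NIC's name and the
:: virtual switch protocol's binding state.
nvspbind.exe /o

:: Unbind the Microsoft Virtual Network Switch Protocol (vms_pp) from the
:: team NIC before creating the Hyper-V virtual switch on it.
nvspbind.exe /d "Team0" vms_pp
```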
We have seen that Windows 2008 and 7 VMs can get corrupted by this problem. So if you're seeing it, make sure you have a backup and/or use a fresh VM for each attempt (otherwise the BSOD will recur even after you have fixed the underlying problem).
We have tried installing Intel driver sets 15.5, 15.6, 15.7, and 16.1 (none of which seemed to work). We have also tried the Dell-provided packages Intel_LAN_12.1.0_W2K3_8_64_A02 (which contains an e1q62x64.sys dated Dec 2009) and Intel_LAN_12.5.2_W2K3_8_64_A00 (which contains an e1q62x64.sys dated Sep 2010).
We're sporadically getting event 28 from VMSMP, which reads:
Port '...' was prevented from using MAC address '...' because it is pinned to port '...'.
Obviously, we don't have any duplicate MAC addresses. Event 28 messages were more frequent when using Intel_LAN_12.1.0_W2K3_8_64_A02 (Intel_LAN_12.5.2_W2K3_8_64_A00 seemed to help).
We have teamed all 4 ports in a single team, and also in 2 teams with 2 ports each (manually assigning VMs to either team). It seems to make no difference in the BSOD behavior or in the degraded connectivity.
We were hoping that SP1 would help, but given your experience, it seems it won't.
The server is a Dell T710 with dual Xeon X5650 CPUs, two dual-port Intel ET network cards, and 24 GB RAM, running Windows 2008 R2.
I am experiencing the exact same issue as the user above. Is there an issue with the base driver as you mentioned? Any idea when they'll release a revision to it?
Is this an issue with VMLB being enabled in tandem with VMQ?
I've purchased a total of 4 dual-port ETs to do failover clustering and need to get this resolved so I can go forward.
We have the same problem with BSODs when NIC teaming (VMLB) and VMQ are enabled.
Is there any solution yet?
We use an Intel Modular Server with the 5520VI Compute Module, firmware 6.7, and the BIOS setting "Maximize/minimum Memory below 4GB" set to maximize.
Microsoft Hyper-V Server with Failover Clustering is installed.
Great timing. Maybe you're psychic. The web pack with the bug fixes (version 16.4) just went live today.
http://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&DwnldID=18725 will get you the files you need to update the drivers and software.