Hi, thanks for coming to the forum.
VMQ has nothing to do with PXE, so this is an interesting case.
Are you booting the physical server, or a virtual machine via PXE? What happens if you power off the server and then power it on, trying to do a PXE boot?
I am booting a Hyper-V VM, and of course I am using the emulated NIC, which supports PXE. I use PXE booting and WDS to install / spin up servers. I wish I could power off the physical host, but it is in production.
What I can do is test with a pre-production host; I am currently testing NIC teaming on that box. Interestingly, I have NIC teaming enabled with two Intel NIC ports and VMQ enabled, yet PXE booting works. However, from what I have read, the current Intel NIC drivers prevent VMQ with NIC teaming because that combination is not supported by Microsoft's hypervisor.
I was able to have VMQ enabled and PXE boot after a reboot occurred. Interesting.
Is there any way to reset the NICs without performing a reboot?
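One possibility (untested in this exact scenario) is to bounce the adapter from an elevated command prompt instead of rebooting the whole host. The interface name below is a placeholder; check the output of the first command for the actual names on your box:

```
:: List the interface names, then disable/re-enable the one bound to the virtual switch
netsh interface show interface
netsh interface set interface name="Local Area Connection 2" admin=disabled
netsh interface set interface name="Local Area Connection 2" admin=enabled
```

Whether this fully resets the driver's VMQ state the way a reboot does is another question, but it is quick to try.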
Thar be gremlins about!
I'm surprised rebooting had any effect - I thought you were trying to do a PXE boot and just use the standard Windows advice of **** a reboot :-)
I've made our virtualization team aware of this issue, and we will see what they say.
I tried rebooting the production Microsoft Hyper-V Server with VMQ enabled on all the NICs. The only difference with this NIC port is that it is a trunk port, and Hyper-V sets the VLAN for the virtual NIC. However, I still got the same problem after rebooting: PXE-E11: ARP timeout.
The only other thing that was different was that I had an issue with one of the NIC ports on the test VM host. I had to use Microsoft's nvspbind utility to correct a binding issue. After I unbound the NIC from the Hyper-V virtual switch, rebound the NIC, and rebooted, the PXE boot worked with VMQ enabled.
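For reference, the unbind/rebind was along these lines. The switches and the vms_pp protocol name are from memory and the adapter name is a placeholder, so confirm against nvspbind's built-in help on your build:

```
:: Show current protocol bindings for all NICs
nvspbind.exe /n
:: Unbind the Hyper-V virtual switch protocol from the NIC, then rebind it
nvspbind.exe /u "Local Area Connection 3" vms_pp
nvspbind.exe /b "Local Area Connection 3" vms_pp
```

A reboot after the rebind is still needed for the change to take full effect.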
After trying the same procedure on the production box, it was still not successful. The only way I could get PXE boot to work on the production Hyper-V host was to disable the VMQ setting. Otherwise I get the PXE-E11: ARP timeout.
I am hopeful that your virtualization team will be able to find something. Hopefully it's just a driver update!
My next step is to enable a different NIC port on the production Hyper-V host and not use trunking and VM VLAN tagging. I will do that when I get back into my office.
Again thanks for your help and attention to this annoying issue.
Could it be an issue with the emulated vs. synthetic NIC? In Microsoft System Center Virtual Machine Manager, I cannot enable virtual network optimizations with the emulated NIC. My question is: how does the emulated NIC function on an Intel NIC with VMQ enabled? The emulated NIC would not know anything about VMDq, so how would it know which queue to respond to? Could that be the reason why PXE booting is not working?
Any update to this?
Our new virtualization dude has added this to his list of interesting things to try. He's very busy trying to ramp up, but he promised he would look at it.
Our validation team was unable to reproduce the issue you reported. We used a Cisco C210 M2 server (with dual Intel 82576 LOMs) and installed the Intel Network Software Release 16.3 drivers on Server 2008 R2 x64 SP1. Teamed both ports (in AFT and VMLB configurations) and bound a Hyper-V virtual switch to the teamed adapters. Those ports are connected to a network with PXE boot (via WDS). Created a blank VM and set it up for network installation. Successfully PXE booted and got into the WinPE interface. Can you please provide details regarding your setup?
Looking forward to helping you resolve this issue.
I was using a Dell T610 with two Intel dual-port 82576 NICs. One port from each NIC was teamed to a Cisco 2960 switch, with VMQ turned on for each NIC port. Created a Hyper-V VM using the teamed NIC with VMQ enabled on both NIC ports.
Any updates on this?
I just started over and the same thing is occurring again:
repeated PXE-E11: ARP timeout
4 x 82576 VMQ-enabled team (802.3ad mode) to a Cisco 2960G switch in LACP mode, 802.1Q trunk port.
I am setting the VLAN within the Virtual Machine settings in Hyper-V Manager.
After letting all the PXE-E11: ARP timeout messages (x10) complete, the following appears:
CLIENT IP: 192.168.12.119 MASK 255.255.255.0 DHCP IP: 192.168.12.1
PXE-E55: ProxyDHCP service did not reply to the request on port 4011.
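PXE-E55 means the client got a DHCP lease (hence the CLIENT IP line above) but nothing answered the boot-server query on UDP port 4011. A quick sanity check on the WDS server itself, using only standard Windows tools, would be something like:

```
:: Confirm something is listening on port 4011 on the WDS server
netstat -an | findstr ":4011"
:: Confirm the WDS service is running (service name is WDSServer)
sc query WDSServer
```

If the server side checks out, that points back at the path between the VM and the WDS server (the team/trunk/VMQ combination) rather than at WDS.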
After more troubleshooting....
Instead of using 802.1Q trunk port mode on the EtherChannel port on the Cisco 2960 switch, I set it to a specific VLAN. I left all the other settings the same, with VMQ enabled on each Intel NIC in the team (x4). I did not set a VLAN in Hyper-V Manager, and the test VM booted via PXE without any problems.
So, please test the following configuration to reproduce the problem:
Cisco 2960 Switch
4 physical ports configured in 802.1Q trunk port mode, ALL VLANs available, VLAN 1 as the native VLAN
Create an EtherChannel using those 4 ports with LACP. Set the EtherChannel port to 802.1Q trunk port mode, ALL VLANs available, VLAN 1 as the native VLAN
On a Microsoft Hyper-V Server 2008 host, install the Intel 16.4 drivers.
Use PROSetCL.exe to create an 802.3ad team
Use Hyper-V Manager to create an external Virtual Network connected to the Intel Teamed NIC. Do not connect it to the HOST for management purposes.
Use Hyper-V Manager to create a test VM. Of course, use the emulated NIC driver, enable VLAN tagging, and set it to the same VLAN as the network where the WDS server is (NOT VLAN 1).
Start the VM and see if you get the same results.
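For what it's worth, the switch side of the repro steps above would look roughly like this in IOS (port and channel-group numbers are examples; adjust to your switch):

```
! 4 physical ports as an LACP bundle, 802.1Q trunk, native VLAN 1
interface range GigabitEthernet0/1 - 4
 switchport mode trunk
 switchport trunk native vlan 1
 channel-group 1 mode active
!
interface Port-channel1
 switchport mode trunk
 switchport trunk native vlan 1
```

On the 2960 there is no need for a `switchport trunk encapsulation` line, since it only supports 802.1Q. The working configuration differs only in that the port-channel is set to `switchport mode access` on a specific VLAN instead of trunking.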
I read in the recently released notes for the 16.5 drivers that VMQ and VLANs are not supported together. I am assuming that when I use teamed NICs with VMQ enabled, I cannot use ANS / PROSetCL.exe to assign a VLAN to the team (which I don't).
Please let me know your results. If you have any other configuration questions please let me know.
Our test team has been able to reproduce the issue you reported. Thanks for providing such helpful details.
We have opened a defect on this issue, and it will be addressed in a future driver release.
When I have more information, I'll pass it along.
We are seeing the same issue with Server Core 2008 R2 (SP1) / Intel Gigabit ET quad-port mezzanine cards running driver version 16.5.
Two ports are placed into a trunk, and the trunk is then presented through to Hyper-V.
Disabling *VMQ with PROSetCL on both adapters resolves the issue for now - but obviously prevents the use of VMQ within Hyper-V.
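For anyone else hitting this, the workaround was along these lines. The exact Adapter_SetSetting argument form and value strings are from memory, so verify them against PROSetCL's built-in help and the Adapter_Enumerate output on your install:

```
:: Enumerate adapters to find the index of each teamed port
PROSetCL.exe Adapter_Enumerate
:: Disable the VMQ advanced setting on each adapter by index
:: (setting/value names as I recall them - confirm locally)
PROSetCL.exe Adapter_SetSetting 1 "*VMQ" "Disabled"
PROSetCL.exe Adapter_SetSetting 2 "*VMQ" "Disabled"
```

This trades away VMQ entirely, so it is a stopgap until the driver fix lands.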
Is there a defect link I can use to track progress on this particular issue?