Can I have an answer here please, Intel? These cards were not cheap, and we purchased them specifically for their VMQ capabilities. Being Intel cards, reliability is one thing I would expect; however, I am disappointed thus far with both the customer service and the reliability.
We are seeing constant issues here with Virtual Machine Queues. The four adapters in the team are configured correctly. RSS is disabled and VMQ is enabled. As this is a Switch Independent team in Dynamic mode, Sum of Queues mode is in operation, therefore we have been careful not to overlap processors for each team member.
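For context, the team is built with the in-box teaming and the per-port VMQ/RSS state is set from PowerShell, roughly along these lines (the adapter names here are placeholders rather than our exact interface names):

# Switch Independent teaming with the Dynamic load balancing algorithm,
# which puts the team into Sum-of-Queues mode for VMQ.
New-NetLbfoTeam -Name "VMSwitchTeam" `
    -TeamMembers "NIC1-Port1","NIC1-Port2","NIC2-Port1","NIC2-Port2" `
    -TeamingMode SwitchIndependent `
    -LoadBalancingAlgorithm Dynamic

# RSS disabled and VMQ enabled on every team member.
Disable-NetAdapterRss -Name "NIC1-Port1","NIC1-Port2","NIC2-Port1","NIC2-Port2"
Enable-NetAdapterVmq  -Name "NIC1-Port1","NIC1-Port2","NIC2-Port1","NIC2-Port2"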
Even with the textbook configuration and the latest drivers and firmware, we still see problems with Virtual Machine Queues when they are enabled on the physical NICs and the Microsoft Multiplex Team Adapter. Packet loss, high latency or a complete loss of network connectivity for VMs is the order of the day. This mainly happens when a VM is migrated from host to host. In addition, during a Live Migration we see a longer-than-usual loss of network connectivity. With VMQ turned off, we usually lose about a single ping during a Live Migration; with VMQ enabled, we see a good 5 or 6 lost ping requests before connectivity is restored. Clearly this isn't acceptable.
Hi MattJH, have you tried using Intel's VMLB teaming mode on your setup? Kindly refer to the site below for details.
Hope this helps.
That's not really a valid answer if I'm completely honest with you.
My questions were:
A) What is Intel's stance on mixing RSS and VMQ modes on different ports of the same physical adapter?
B) Is splitting a VMQ-enabled NIC team across ports from separate physical NICs supported by Intel when using the native Windows NIC Teaming in Server 2012 R2?
I guess now I should be asking whether or not these adapters are even compatible at all with Windows NIC Teaming in 2012 R2?
Hi MattJH, I'm currently checking on your two questions related to VMQ and RSS. In the meantime, I would like to confirm that I have captured your issue and setup correctly:
Packet loss and high latency on VMs, such as 5-6 lost ping requests during Live Migration.
Three I350-T4 adapters installed in one system, which is configured as one of the Hyper-V hosts. Two of the three I350-T4 adapters are used for native Windows Server 2012 R2 teaming (Switch Independent / Dynamic distribution), giving a total of 4 ports in one team: 2 ports belong to I350-T4 #1 and 2 ports belong to I350-T4 #2.
VMQ enabled with processor core allocations. The remaining ports from I350-T4 #1 and #2 were configured for iSCSI MPIO access to the SAN, with RSS enabled.
The problem goes away when VMQ is turned off.
Please share the output of the following
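Presumably these were the standard VMQ and RSS inspection cmdlets, given the outputs quoted in the reply below:

Get-NetAdapterVmq
Get-NetAdapterRss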
Yes, that's correct. With VMQ enabled in conjunction with the native Windows Server 2012 R2 NIC Teaming, we see intermittent packet loss and lengthy timeouts when Live Migrating VMs. With VMQ disabled, things seem to behave as expected.
I can also confirm that out of the two quad-port I350 adapters, two ports from each physical adapter are in a Switch Independent / Dynamic team (Sum-of-Queues mode) for the core VM Switch (4 interfaces), whilst the remaining 4 ports across the two physical adapters are iSCSI MPIO interfaces to the SAN. RSS is enabled on the iSCSI ports and VMQ is enabled on the VM Switch Team ports.
We are also allocating a single CPU core to each of the VMQ-enabled VM Switch Team interfaces (separate cores for each interface, with no overlap). Hyper-Threading is enabled on the CPUs; however, the HT logical execution units are not being allocated to VMQ or RSS, only the actual cores.
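To illustrate the allocation, here is a rough sketch of the assignments (the adapter names and processor numbers are examples only; with Hyper-Threading enabled, the even-numbered logical processors are the ones that correspond to physical cores, hence the step of 2):

# One dedicated physical core per VM Switch team member, no overlap.
Set-NetAdapterVmq -Name "NIC1-Port1" -BaseProcessorNumber 2 -MaxProcessors 1
Set-NetAdapterVmq -Name "NIC1-Port2" -BaseProcessorNumber 4 -MaxProcessors 1
Set-NetAdapterVmq -Name "NIC2-Port1" -BaseProcessorNumber 6 -MaxProcessors 1
Set-NetAdapterVmq -Name "NIC2-Port2" -BaseProcessorNumber 8 -MaxProcessors 1

# The remaining ports carry iSCSI MPIO traffic with RSS enabled,
# again pinned to their own cores.
Set-NetAdapterRss -Name "NIC1-iSCSI1" -Enabled $true -BaseProcessorNumber 10 -MaxProcessors 1
Set-NetAdapterRss -Name "NIC1-iSCSI2" -Enabled $true -BaseProcessorNumber 12 -MaxProcessors 1
Set-NetAdapterRss -Name "NIC2-iSCSI1" -Enabled $true -BaseProcessorNumber 14 -MaxProcessors 1
Set-NetAdapterRss -Name "NIC2-iSCSI2" -Enabled $true -BaseProcessorNumber 16 -MaxProcessors 1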
Here are the Get-NetAdapterRSS and Get-NetAdapterVMQ outputs as requested:
Can you help me with the core allocation process? I have 4 PCIe Gen 2.0 slots, and each slot has 2x 10G Flexi ports.
I am using the E5-2680 v2 (Ivy Bridge-EP), which has 10 cores at 2.8 GHz.
I am familiar with RSS, but a little more description would be helpful.
I also want to mention that we are using Linux.
Can I implement VMQ on this platform in order to enhance performance? I am not that familiar with VMQ.
Please help me with this situation ASAP.