Hi Robert, thanks for posting to the forum.
So you want to run iSCSI directly from your VM, interesting. It's been a while since I looked at that, but last time I read about it, it was discouraged for a number of reasons. Technically feasible, however.
As for Jumbo Frames on the I350, should work fine on a VF. I've reached out to my local experts and will get back to you with any results I find.
Just "discovered" something. If I just configure Jumbo Frames from within Windows "Network Connections" for the "Microsoft Hyper-V Network Adapter"(s), SR-IOV deactivates. If I then go into Device Manager and also set Jumbo Frames on the "Intel(R) i350 Virtual Function" adapters, SR-IOV reactivates.
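For anyone following along, here's roughly what that two-sided setting looks like from PowerShell on Server 2012. This is just a sketch; the adapter names, VM name, and value strings are examples from my setup (they vary by driver), so check yours first:

```powershell
# Inside the guest: list the adapters to find the synthetic vNIC and the VF miniport.
Get-NetAdapter | Format-Table Name, InterfaceDescription

# The "Intel(R) I350 Virtual Function" adapter exposes the *JumboPacket keyword directly:
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" `
    -RegistryKeyword "*JumboPacket" -RegistryValue 9014

# The "Microsoft Hyper-V Network Adapter" has a Jumbo Packet property too;
# setting only this one is what appeared to knock SR-IOV out for me:
Set-NetAdapterAdvancedProperty -Name "Ethernet" `
    -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"

# On the host: confirm the VM's vNIC still has IOV weight and a healthy status.
Get-VMNetworkAdapter -VMName "MyVM" | Format-List Name, IovWeight, Status
```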
I'm hoping that SR-IOV and the other offload functions of the adapters and Win 2012 will eliminate the huge performance penalty of configuring iSCSI initiators within a VM.
I've used VHD/VHDX files for data disks within VMs, but I can't expand them on a live VM that's part of a Cluster Shared Volume. As for pass-through disks, all those offline disks (with no names) in Server Manager get confusing.
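For reference, the resize I'm talking about looks something like the sketch below. On Server 2012 the VHDX can't be attached to a running VM while you grow it (online resize didn't arrive until 2012 R2), which is exactly the limitation that bites on a CSV. Paths and the VM name are examples:

```powershell
# Grow a data VHDX on Server 2012: the VM must be off, or the disk detached.
Stop-VM -Name "MyVM"

Resize-VHD -Path "C:\ClusterStorage\Volume1\MyVM\data.vhdx" -SizeBytes 200GB

Start-VM -Name "MyVM"
# Then extend the volume inside the guest (Disk Management or Resize-Partition).
```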
At any rate, thanks for looking into this. It seems to be working now. If you have any other suggestions for increasing performance / decreasing overhead of iSCSI initiators within a VM, please let me know.
Great, glad you found a solution. Configuring Ethernet 'goodies' under Hyper-V can be interesting. The many ways to enable and configure features (such as Jumbo Frames) do not always perform the same underlying task.
I suspect (it's been a few years since I've played with Hyper-V) that configuring via "Network Connections" does more with the internal virtual switching infrastructure of Hyper-V, while Device Manager goes in and twiddles the goodies within the Intel I350 directly.
Feel free to report back your findings on performance of iSCSI!