I checked with someone who works with performance testing. This is the feedback I received:
You should check CPU load. This looks like an RSS issue since your ntttcp command is only targeting a single core. You probably need to pin your threads to different CPUs for your test.
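For example, the single-core mapping could be spread across several cores by repeating the -m triplet (legacy NTttcp accepts multiple session,processor,IP mappings; the core numbers below are illustrative, adjust them to your machine):

ntttcps -m 1,0,192.168.10.1 -m 1,1,192.168.10.1 -m 1,2,192.168.10.1 -m 1,3,192.168.10.1 -l 1048576 -n 100000 -w -a 16
ntttcpr -m 1,0,192.168.10.1 -m 1,1,192.168.10.1 -m 1,2,192.168.10.1 -m 1,3,192.168.10.1 -rb 2097152 -n 100000 -w -a 16 -fr

This sketch assumes the same base parameters as the original single-core test; with RSS working, the four receive threads should land on different cores.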
I hope this helps with your testing.
I will try to change these test settings and see what happens. But I have another question (this should be put into production later). Two iSCSI servers, all Intel hardware: the latest Intel server platform R2308GZ4GS9, Intel H/W RAID controller, SSDs, and Intel 10 GbE NICs, connected directly (bypassing the switch to be sure it's not a switch problem).
Using the same test config:
ntttcps -m 1,0,192.168.10.1 -l 1048576 -n 100000 -w -a 16
ntttcpr -m 1,0,192.168.10.1 -l 1048576 -rb 2097152 -n 100000 -w -a 16 -fr
I can easily load the network up to 10 Gbps in both directions. That's fine. But when I start using these connections for iSCSI traffic, I cannot load the network beyond 5 Gbps, and the average latency is unacceptable.
I just read an article about Intel and Microsoft achieving 1,000,000 IOPS and wire speed for iSCSI. Sounds good, but I cannot get it. Local storage (RAID 10 SSDs) works excellently: more than 2 Gb/s read and 2 Gb/s write speed, with latency under 5 ms.
An interesting thing: when I use a RAM drive or an image file as the iSCSI target, the network performance is the same. A RAM drive should be able to load the network up to 10 Gbps, but nothing: only 5 Gbps and high latency.
Do you have any suggestions? Since I'm using only Intel hardware, everything should be compatible with everything else.
I tried Windows 2012 as well as Windows 2008 R2, jumbo frames, the latest drivers, and iSCSI registry tweaks, but I was not able to get acceptable performance.
Another interesting thing: when I use the built-in 1 GbE network connections as iSCSI paths, I can load those connections up to 1 Gbps. It seems 1 GbE works better than 10 GbE ;-) Are there any special BIOS settings? I'm currently using the max performance profile in the BIOS.
The short answer is that I've been able to get line-rate bidirectional traffic (10 Gb Tx / 10 Gb Rx simultaneously) with an iSCSI server with SSDs on Windows Server 2008 R2. Since you're not achieving line rate, my guess is that you are I/O bound. RSS should be on by default on a Windows server; Mark is right that you should check, since you are using Windows 7. Go to the advanced properties of the NIC in Device Manager to check.

The longer answer is the setup of the environment. Since you have the MS iSCSI Target set up, go to Server Manager, right-click the Microsoft iSCSI Software Target, and select Properties. Then set the 10 Gb adapter to be the only one used for iSCSI.

Typically I wouldn't use NTttcp to test storage. I would use IOmeter, which can be downloaded from http://www.iometer.org/. Here is a link to a paper that can help you set up IOmeter: http://www.intelcloudbuilders.com/docs/icb_ra_cloud_computing_unified_storage_NetApp.pdf

On your attached target, format it on the client side rather than using the raw volume. Then set up IOmeter to use a larger block size, maybe 16K or larger, to achieve line rate. Good luck with your testing.

Craig
I would look at updating the Win7 machine to a server OS; I've not tested client OSes with 10 Gb adapters. I would also try running NTttcp without all the switches (-l, -rb, -n, -fr), and make sure that RSS is enabled. By the way, how many cores do your machines have?
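One quick way to check the global RSS setting from an elevated command prompt (on Vista/2008 and later) is netsh; per-adapter RSS still has to be verified in the NIC's advanced properties in Device Manager:

netsh int tcp show global

In the output, the "Receive-Side Scaling State" line should read "enabled". If it is disabled globally, it can be turned back on with "netsh int tcp set global rss=enabled".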