For some time now, we have been trying to figure out what causes the difference in iSCSI speed we see between Windows Server 2008 R2 on one side and Red Hat and Debian on the other.
The server is a dual-core Intel Core 2 at 2.4 GHz with 4 GB of memory and an x8 PCIe slot, fitted with an Intel X520-D NIC and connected to a Qsan P600Q-D316 10GbE iSCSI RAID.
We tested the setup with two different Intel X520-D cards and different twinaxial SFP+ cables from Intel. Qsan support was helpful and ran some tests, as did the distributor we bought the storage from.
From that we can be 99% sure the hardware is not faulty.
What we did and checked:
For each test we installed Windows Server 2008 R2, Red Hat or Debian on a local server disk and connected to the storage point to point, so no switch is involved. The server's iSCSI port and the storage's iSCSI port are in the same VLAN.
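For reference, one way to double-check such a point-to-point link on the Linux side can be sketched like this. The interface name and addresses are placeholders, and MTU 9000 assumes jumbo frames are enabled on the storage side as well; this is an illustration, not our exact configuration:

```shell
# Point-to-point 10GbE link bring-up sketch (eth2 and the addresses
# are placeholders, not our actual setup).
ip link set eth2 mtu 9000        # jumbo frames, only if the storage side matches
ip addr add 192.168.100.1/24 dev eth2
ip link set eth2 up
# Verify that jumbo frames really pass without fragmentation
# (8972 = 9000 MTU minus 20 bytes IP header and 8 bytes ICMP header):
ping -c 3 -M do -s 8972 192.168.100.2
```

A mismatched MTU between host and storage is a classic cause of poor iSCSI throughput, so this is worth ruling out early.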
On Windows we installed the most recent driver from Intel.
On Red Hat we used the default driver first and later the most recent one: v3.9.15-k and 3.18.7.
On Debian we only tested the default one, version 3.6.7-k.
Then we ran different tests like IOmeter, IOzone, dd to the raw device and dd to the filesystem.
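To make the dd part concrete, the two variants we mean look roughly like this. `/dev/sdX` stands in for the iSCSI LUN and the sizes are illustrative; the filesystem test below writes to /tmp so it can run anywhere:

```shell
#!/bin/sh
# Sketch of the two dd tests: through the filesystem and against the
# raw block device.

# Filesystem write test: 64 MiB, with fdatasync so the final flush is
# included in the reported throughput instead of hiding in the page cache.
dd if=/dev/zero of=/tmp/dd_testfile bs=1M count=64 conv=fdatasync

# Raw sequential read, bypassing the page cache (needs the real device,
# so it is commented out here; /dev/sdX is a placeholder for the LUN):
# dd if=/dev/sdX of=/dev/null bs=1M count=4096 iflag=direct
```

The distinction matters because the raw-device numbers isolate the iSCSI path, while the filesystem numbers add caching and filesystem overhead on top.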
The conclusion after a couple of days and a lot of research in readmes, presentations and tuning guides from Red Hat, Intel, NASA, CERN etc. is:
That is more or less what could be expected from the storage and the RAID level we use, so that is the target.
But the big question still is:
Why are we (is Linux) still so far behind? As mentioned, we use the same hardware, and all tests were run one after another on the same server.
We applied and checked tuning tips from:
We and the storage manufacturer still don't have any clue what the problem might be. Maybe the iSCSI implementation in Linux really is that much worse? But judging from the Intel Cloud Builders material, better numbers should be possible.
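On the Linux side, the knobs the tuning guides usually point at for open-iscsi live in /etc/iscsi/iscsid.conf. A sketch of what experimenting with them could look like, with values that are purely illustrative and not a recommendation from our tests:

```shell
# Illustrative open-iscsi throughput settings in /etc/iscsi/iscsid.conf
# (example values, not measured recommendations):
node.session.cmds_max = 1024
node.session.queue_depth = 128
node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144

# Settings only take effect after re-logging the session, e.g.:
# iscsiadm -m node -T <target-iqn> -p <portal> --logout
# iscsiadm -m node -T <target-iqn> -p <portal> --login
```

If Windows is ahead mainly on queued I/O, the session's command and queue depths are one of the first places where the two initiators can differ.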
Any hint, help or suggestion regarding this setup is welcome.
Regards, Götz