I assume that you are talking about networked storage. Is that correct?
1 GB/s = 8 Gbps, so you will need 10GbE cards. We have seen well over 10 Gbps over iSCSI in our lab testing on a similar platform with reasonably low CPU utilization. NFS should do almost as well, but I don't have any of my own data to back that up. It should be easy to do this over the storage network if your disk array can keep up.
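To make the conversion explicit, here is the back-of-envelope arithmetic (decimal units, 1 GB = 10^9 bytes, as is conventional for link rates):

```python
# 1 GB/s of payload expressed in bits per second.
target_bytes_per_sec = 1 * 10**9        # 1 GB/s
target_bits_per_sec = target_bytes_per_sec * 8

print(target_bits_per_sec / 10**9)      # 8.0 Gbps of payload
# A 10GbE link leaves roughly 20% headroom for protocol overhead (TCP/IP, iSCSI).
```

So a single 10GbE port covers the 8 Gbps of payload with some margin, but not a huge one once overhead is included.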
For a single 10GbE card on PCIe gen 1 you will need a dedicated x8 slot, or your performance will be limited by the narrower PCIe connection.
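The x8 requirement falls out of the PCIe gen 1 per-lane rate (2.5 GT/s with 8b/10b encoding, so 80% of the raw rate is usable data):

```python
# PCIe gen 1 usable bandwidth per lane, per direction.
GT_PER_SEC = 2.5                # raw signaling rate per lane (gen 1)
ENCODING = 8 / 10               # 8b/10b encoding: 8 data bits per 10 line bits
lane_MBps = GT_PER_SEC * ENCODING * 1000 / 8

print(lane_MBps)                # 250.0 MB/s per lane
print(lane_MBps * 8)            # 2000.0 MB/s at x8 -- comfortable for 10GbE (~1250 MB/s)
print(lane_MBps * 4)            # 1000.0 MB/s at x4 -- below 10GbE line rate
```

At x4 the slot itself becomes the bottleneck before the NIC does, which is why the card needs the full x8 link.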
If you are talking about directly attached storage in the server then please repost.
Thanks for the quick reply - good to know about the low CPU utilization. The server comes with a PCIe SAS controller which has 8 connections, which should easily be enough to get the required data rate (assuming I buy enough disks!). I am also looking at a Fusion IODrive, a 640GB solid-state drive on the PCIe bus, as a possible storage option.
Presumably the IO Hubs aren't a problem as they have to deal with PCIe and QPI - do you agree?
The IO hub won't be a problem. You will be disk limited (or limited by the SAS controller itself). The Fusion IODrive will give you higher IOPS and throughput. SSDs may also be an option.
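To put rough numbers on "disk limited", a minimal sizing sketch - the ~100 MB/s sequential rate per SAS disk is an illustrative assumption, not a figure from this thread, so substitute your drive's actual spec:

```python
import math

# How many spindles to sustain 1 GB/s of sequential throughput?
target_MBps = 1000              # 1 GB/s target
per_disk_MBps = 100             # ASSUMED sequential rate per SAS disk

disks_needed = math.ceil(target_MBps / per_disk_MBps)
print(disks_needed)             # 10 disks, before any RAID overhead
```

RAID parity writes, seeks from concurrent streams, and inner-track slowdown all push the real number higher, so treat this as a floor.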
Make sure your SAS controller and/or the Fusion IODrive is plugged into a wide enough PCIe slot. A x8 slot that connects directly to the IOH would be best. The cards will usually still work if they are connected at x4, x2 or x1, but they won't attain peak performance. Beware: some x8 physical slots are electrically connected to the chipset at smaller widths. Verify via the system board block diagram and/or software tools that the cards are connected with a wide enough PCIe bus. For high-performance IO cards, avoid any PCIe slots that are connected downstream of a PCIe multiplexer device, including the legacy ICH.
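On Linux, `lspci -vv` is one such software tool: it reports "LnkCap" (the width the card supports) and "LnkSta" (the width actually negotiated). A small sketch of pulling the lane count out of that output - the sample line below is illustrative, not from a real system; on your server you would feed in the `lspci -vv` output for the SAS/10GbE/IODrive device:

```python
import re

# Example of an lspci "LnkSta" line (illustrative sample, not real output
# from this system).
sample = "LnkSta: Speed 2.5GT/s, Width x4, TrErr- Train- SlotClk+"

def link_width(lspci_line: str) -> int:
    """Extract the lane count from an lspci LnkSta/LnkCap line."""
    m = re.search(r"Width x(\d+)", lspci_line)
    return int(m.group(1)) if m else 0

width = link_width(sample)
print(width)    # 4 -- this card trained at x4, so it is not getting full x8 bandwidth
```

If LnkSta shows a smaller width than LnkCap, the card trained down - usually because the physical x8 slot is only wired at x4 (or narrower) electrically.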
Let us know if you have any more questions.