1 Reply Latest reply on Jul 1, 2011 2:41 PM by Dan_O

    Intel RAID RS2BL040 CacheCade SSD: poor performance and problems




      Recently I purchased a new Intel server (SR1612UR platform) with an Intel RAID RS2BL040 controller. I have installed a 40 GB SSD cache drive (called CacheCade).

      The server has two Xeon 5620 CPUs, 32 GB RAM, and six 1 TB SAS 6 Gb/s drives configured in RAID 10 (two virtual drives of 1.3 TB each).

      The BIOS and RAID controller firmware are up to date.

      I am running ESXi 4.1 U1 with all the latest patches.

      I have installed three VMs: Windows 7 x86, Windows 7 x64, and Debian 6 x64, all with the VMware drivers installed inside.

      For benchmarking I used Iometer on Windows and stress on Linux.
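      For reference, the Linux test was a simple disk-write load generated with stress; the worker count, file size, and duration below are illustrative, not the exact values from every run:

      ```shell
      # Spawn 2 workers that repeatedly write() and unlink() temp files,
      # 4 GB per worker, for 5 minutes (values are illustrative)
      stress --hdd 2 --hdd-bytes 4G --timeout 300s
      ```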

      The results are very strange and bad. As soon as I start the test on all three VMs, one machine runs at maximum performance, about 600 commands/s according to esxtop, while the other two are very slow, even dropping to 0 at some points (this test used a 20 GB test file). We tried the same test with smaller test files and different Iometer settings, and the behavior is the same: one machine runs at full performance while the rest are barely running.


      We also tested with two Windows VMs running just a torrent client on the same files: one VM has sustained performance while the other has almost no access to the disk.

      When the load is not very high, both VMs start out at the same performance, but within 2 to 10 minutes (depending on the disk load) one VM's performance degrades to the point of unusability.


      As soon as I stop the fast VM, one of the other two takes over the resources, crippling the remaining VM.

      If I run the same tests without the SSD cache drive, the load is distributed evenly across all VMs and system performance is very good.


      Does anyone have any idea why the server behaves like this? Is there a special setting that needs to be made on the RAID controller?
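      In case it helps with diagnosis: the RS2BL040 is an LSI 2108-based controller, so (assuming MegaCLI works against it, and the binary name/path may differ per install) the current virtual-drive cache settings can be dumped like this:

      ```shell
      # Show cache policy (write-back/write-through, read-ahead, direct/cached I/O)
      # for all virtual drives on all adapters
      MegaCli -LDGetProp -Cache -LAll -aAll

      # Show full virtual-drive configuration details
      MegaCli -LDInfo -LAll -aAll
      ```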