
    RAID 5 performance declines dramatically after adding drives to array

    jhyland9

      I have a question regarding Intel Rapid Storage Technology (IRST) performance.

      At first I created a RAID 5 array with three 3 TB drives and waited a day for it to fully initialize. When it completed, performance was very slow (about 15 MB/s), so I disabled write-cache buffer flushing and enabled the write-back cache, which brought it up to about 120 MB/s. I was happy with that.

      However, I then added two more 3 TB disks to the array and waited another day for it to fully initialize. I made sure the cache settings were still set for performance, yet I'm now getting only about 10 MB/s. If I re-enable write-cache buffer flushing and turn off write-back caching, speeds climb to around 50 MB/s, which is about where they were before I expanded the array. In other words, on the expanded array the performance-oriented cache settings actually seem to slow things down, which makes no sense to me. The new array is RAID 5 with five 3 TB drives, giving 11,178 GB of usable space (one drive's worth of capacity goes to parity; quick math below). Note that the array is nearly empty at this point, with about 270 GB used.

      The only change I made was adding the two disks and letting the array fully initialize. After that, everything went slow.
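
      For anyone checking the capacity figure: RAID 5 keeps one drive's worth of parity, so usable space is (number of drives - 1) x drive size. Quick sanity check (plain Python; the nominal 3 TB here is 3 x 10^12 bytes, and real "3 TB" drives run slightly over that, which accounts for the 11,178 GB Windows actually reports):

      ```python
      # RAID 5 usable capacity: one drive's worth of space holds parity,
      # so usable space is (drives - 1) * drive_size.
      drives = 5
      drive_bytes = 3 * 10**12                   # nominal 3 TB, decimal bytes
      usable_bytes = (drives - 1) * drive_bytes
      print(f"{usable_bytes / 2**30:,.0f} GiB")  # ~11,176 GiB (Windows labels this "GB")
      ```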
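
      For reference, here is the kind of quick test I've been using to get the MB/s numbers above (a rough sketch, sequential writes only; R:\testfile.bin is just a stand-in for a file on the array):

      ```python
      import os
      import time

      # Sequential write test: write 1 GiB in 1 MiB chunks and time it.
      PATH = r"R:\testfile.bin"   # placeholder; point at the RAID volume
      CHUNK = 1024 * 1024         # 1 MiB per write
      TOTAL = 1024 * CHUNK        # 1 GiB total
      buf = os.urandom(CHUNK)     # incompressible data

      start = time.perf_counter()
      with open(PATH, "wb") as f:
          for _ in range(TOTAL // CHUNK):
              f.write(buf)
          f.flush()
          os.fsync(f.fileno())    # force the data onto the array
      elapsed = time.perf_counter() - start

      print(f"{TOTAL / elapsed / 1e6:.1f} MB/s")
      os.remove(PATH)
      ```

      Writing a full 1 GiB of incompressible data and calling fsync at the end keeps the OS file cache from flattering the result.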


      Anyone have any ideas why this might be happening?