
    SRCSASBB8I, RAID 1 keeps going into a degraded state


      We have 2 live servers that, after a few days in the field, are having issues with their RAID 1 arrays dropping a drive and going into a degraded state.


      The servers are configured as follows:

      S3420GPLC motherboards

      SC5650UP chassis/power supplies

      SRCSASBB8I controllers w/AXXRSBBU6 battery modules

      AXX6DRV3GR Hot Swap Backplanes

      Western Digital WD3000HLFS 300GB 10K RPM SATA Hard Drives (6 per server)


      The latest firmware and drivers are loaded on the motherboard, RAID controller, and backplane.



      The servers are each configured with 2 drives in a RAID 1 array and 4 drives in a RAID 5 array.  Cache mode is set to Write Back with BBU.
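
      For reference, here is roughly how the virtual drive settings and BBU state can be checked from the OS; this assumes the LSI/Intel command-line utility (MegaCli, or Intel's equivalent CmdTool2) is installed, and the exact executable name may differ by version:

          # List virtual drives, including RAID level, state, and current cache policy
          MegaCli -LDInfo -Lall -aAll

          # Check battery backup unit status (a failed BBU can change the effective cache mode)
          MegaCli -AdpBbuCmd -GetBbuStatus -aAll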



      Two days ago slot 1 on the backplane was marked as failed.  The drive was replaced, the array rebuilt, and the server has been fine since.



      Yesterday slot 1 on the backplane was marked as failed.  A replacement drive was not available, so the array was rebuilt using this same drive.  This morning slot 0 on the backplane was marked as failed.  A replacement drive is still not available, so the array is again rebuilding using the same drive.
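
      In case it helps with diagnosis, the controller's own record of these failure and rebuild events can be pulled with the same command-line utility (again assuming MegaCli/CmdTool2; the log file name below is just an example):

          # Dump the full controller event log to a text file for review
          MegaCli -AdpEventLog -GetEvents -f raidevents.log -aALL

          # List physical drives with their media error, other error, and predictive failure counts
          MegaCli -PDList -aALL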


      I'm finding it very hard to believe all these drives are failing so quickly after the servers were installed.  I also find it odd that drives are failing on both servers, but always in the RAID 1 arrays.
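
      One thing I plan to check is whether the drives themselves actually report problems, or whether the controller is simply dropping them.  A rough sketch, assuming a Linux host with smartmontools installed, that /dev/sda is the controller device, and that the numbers after "megaraid," match the physical drive device IDs on your system:

          # Query SMART data for the physical drive with device ID 0 behind the controller
          smartctl -a -d megaraid,0 /dev/sda

          # Repeat for device ID 1 (the other RAID 1 member)
          smartctl -a -d megaraid,1 /dev/sda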


      Does anyone have any advice on what might be causing this?  I'm very concerned about data loss or having to restore from tape, which would be devastating to us given the downtime involved.


      Thanks in advance.