1 Reply Latest reply on Jan 3, 2011 4:13 PM by Dan_O

    ESRT2 RAID 1 Degraded - new drive shows as failed


      Apologies if this is a redundant post.

      I have a customer's server, an Intel S5000PSL, using ESRT2 with a two-drive RAID 1 configuration.  No hot spare.


      One of the drives was failing, so I removed it.


      The server now boots normally.  As expected, both the ESRT2 BIOS utility and the RAID Web Console show the RAID 1 logical drive with status "Degraded."

      I obtained a replacement drive, shut the server down, and went into the ESRT2 BIOS utility.  The new drive is detected, but its status shows as FAILED.

      The RAID will not rebuild to a drive in FAILED status.


      Drive diagnostics run on another system show the new drive is fine.


      How do I convince the RAID controller that the new drive is in fact present?


      Trying to force it online is not helping.
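One thing I have considered (a guess on my part, not confirmed for ESRT2): the replacement drive may carry stale RAID metadata from a previous array, which LSI-based software RAID stores in DDF format near the end of the disk, so the controller reports it as FAILED rather than blank. A common workaround is to attach the drive to another machine and zero the first and last megabyte so the controller sees a clean disk. The sketch below practices the `dd` invocations on a throwaway image file (`disk.img`) standing in for the real device; the device path and 1 MiB sizes are assumptions, and pointing this at the wrong `/dev/sdX` destroys data.

```shell
# Stand-in disk image; on real hardware this would be /dev/sdX
# (triple-check the device letter before running anything like this).
DISK=disk.img
dd if=/dev/urandom of="$DISK" bs=1M count=8 2>/dev/null  # fake leftover data

SIZE=$(stat -c %s "$DISK")                               # size in bytes

# Zero the first 1 MiB (partition table / boot-sector area)...
dd if=/dev/zero of="$DISK" bs=1M count=1 conv=notrunc 2>/dev/null

# ...and the last 1 MiB, where DDF-style RAID metadata typically lives.
dd if=/dev/zero of="$DISK" bs=1M count=1 conv=notrunc \
   seek=$(( SIZE / 1048576 - 1 )) 2>/dev/null
```

`conv=notrunc` keeps `dd` from truncating the target after the first write, which matters for the image file here and is harmless on a block device.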

      I have taken a Clonezilla image of the good drive in addition to regular backups, so the data is safe.

      Any suggestions for getting the array to rebuild the mirror?


      Other specifics:

      Microsoft Windows Server 2003 SBS 32bit

      Seagate 1TB Drives

      Both the failed drive and the remaining drive have Seagate SD15 drive firmware.

      I also need to update the firmware on the remaining drive, since SD15 has known issues per Seagate, but I want to rebuild the array first and then take the server down to do the drive firmware update separately.

      Thanks for any help.