RAID 10 is a stripe across two or more RAID 1 mirrors. It tolerates a single drive failure in each span; however, the array goes offline if both drives in the same span fail.
In the image below, you can lose one drive in each RAID 1 mirror and still boot your operating system. If you lose both drives from the same RAID 1 mirror, you won't be able to boot your operating system.
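The tolerance rule above can be sketched in a few lines of Python. The span layout and drive names here are illustrative assumptions, not anything the controller actually exposes:

```python
# Sketch: a RAID 10 array stays online only if every RAID 1 span
# still has at least one healthy drive. Drive names are made up.

def raid10_online(spans, failed):
    """Return True if each mirrored span has a surviving drive."""
    return all(any(d not in failed for d in span) for span in spans)

# Four drives arranged as two mirrored spans, striped together.
spans = [("disk0", "disk1"), ("disk2", "disk3")]

print(raid10_online(spans, {"disk0"}))           # one drive lost in one span -> True
print(raid10_online(spans, {"disk0", "disk2"}))  # one drive lost in each span -> True
print(raid10_online(spans, {"disk0", "disk1"}))  # both drives of one span -> False
```

This mirrors the failure sequence described below: losing one drive per span is survivable, but losing both drives of the same span takes the array offline.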
You can always find useful information about our RAID solutions here.
I got it working again. It is an S5000PSL using the onboard ESRT2 RAID.
The sequence of events: one HDD in the RAID 10 array failed, and the array auto-rebuilt.
A week later a different HDD in the array failed.
The latter HDD was replaced, and an auto-rebuild of the array occurred.
During the rebuild, the HDD that had failed a week before (and was in the same span) failed again. I'm not sure where in the process it failed, but there was no log entry reporting that the array rebuild had finished and was in an optimal state.
The RAID array status was then Offline, as there were two offline drives.
I tested both suspect hard drives using Western Digital diagnostic software and found that one drive had too many bad sectors. The other drive tested OK, so I put it back in alongside the replacement drive I had installed for the faulty one.
The RAID array status was still Offline.
As a last-ditch effort, I forced both offline drives 'online' within the ESRT2 BIOS.
The RAID 10 status then showed Optimal, which was strange.
I rebooted and the RAID 10 status showed Degraded. I was able to boot into Windows safe mode.
I could not run the RAID Web Console because I was in safe mode.
I shut down, rebooted into normal mode, and let the array reach the Optimal state again before replacing the other suspect drive.
The RAID 10 array then rebuilt again and all is good.
Just posting these notes in case someone else comes across this issue.
Moral of the story: replace a failed drive in the array as soon as it is detected as bad or offline, before a second one fails.