After numerous erroneously reported drive failures and countless RAID rebuilds using Matrix Storage Manager (MSM) 8.9, I decided to try Rapid Storage Technology (RST) 9.5, which was listed as a possible fix for the problems many users have been reporting. Unfortunately, while shaking down my new RST RAID10 build, the system reported another failed drive. Note that I am using WD RE drives intended for RAID applications - no TLER issues here. Disappointed to find that this issue still exists in the newest RST drivers, I shut the system down, intending to deal with it later. Now the BIOS shows 4 non-member disks with no RAID array upon boot. Could the RST issue have completely destroyed the array?
I have built RAID systems repeatedly for years and never had an issue. This mess is almost enough to make me throw the brand-new machine out the window. Anyone have any idea what will truly fix this problem?
ASUS P7P55D EVO
Win 7 Pro 64
4 x WD5002ABYS in RAID10 (well, they were before)
Now the BIOS shows 4 nonmember disks with no RAID array upon boot. Could the RST issue completely destroy the array?
I don't think so: none of the RAID drivers touch track 0 of the active RAID partition, which is where the partition table and the RAID metadata are stored.
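If you want to verify that claim for yourself before touching anything, here is a minimal sketch that checks whether sector 0 of a drive still ends with the MBR boot signature (0x55AA). It assumes a Linux live environment with raw read access to the disk; the device path is an example, not something from this thread.

```python
# Check whether sector 0 of a raw device still carries a valid MBR boot
# signature (bytes 510-511 == 0x55 0xAA). If the signature is intact, the
# partition table area was not wiped, even if the RAID metadata is confused.

def has_mbr_signature(sector0: bytes) -> bool:
    """Return True if a 512-byte sector ends with the 0x55AA boot signature."""
    return len(sector0) >= 512 and sector0[510:512] == b"\x55\xaa"

if __name__ == "__main__":
    import sys
    # Example device path; adjust for your system (root/admin access required).
    device = sys.argv[1] if len(sys.argv) > 1 else "/dev/sda"
    with open(device, "rb") as f:
        sector0 = f.read(512)
    print("MBR signature present:", has_mbr_signature(sector0))
```

A missing signature would suggest real on-disk damage; a present one points back at the driver/metadata layer, consistent with the advice above.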
As a first step, enter the BIOS and make sure that the SATA controller your HDDs are connected to is set to "RAID" mode.
After saving this setting, reboot and check whether your RAID is now detected by the BIOS and shown as "Healthy". If not, run the RAID BIOS utility and see what is wrong with the array (both/one/no HDDs shown as members of the RAID). Either way, you should be able to repair the RAID array (unless you have a hardware issue).
Thanks for the input. Unfortunately, the BIOS was still set to RAID, and the RAID utility shows all 4 drives as "Off-line Members" with RAID Volumes listed as "None defined." It appears I am unable to fix the array: the utility offers no option to reactivate the off-line drives, and I can neither create a new volume nor recover or delete the old one. I reset the BIOS to IDE and tested each of the drives with the WD diagnostic tool... nothing wrong, as expected given all my prior experience with the 8.9 MSM issues. What hardware issue would cause the entire array to drop off-line?
I was hoping that someone had solved this situation. Neither RST nor MSM appears capable of matching the performance of my old Storage Matrix Console 7.5. I hate to think this issue is simply being ignored by Intel.
I am having a similar issue. Two drives in a mirror suddenly showed up as off-line members, and the operating system won't boot anymore.
I'm thinking I'd like to break the mirror and just go with single drives as this RAID configuration is REDUCING my system reliability, not increasing it. But I don't want to do anything that would destroy my data.
I was finally able to boot by unplugging one of the mirrored drives. Magically the system recognized a RAID array once again, albeit with a degraded (offline) drive.
I have decided to do without RAID altogether, as this Intel implementation clearly reduces the reliability of the system instead of increasing it.
I am not going to install a new driver. What I need to figure out now is how to set the system back to non-RAID (i.e. SLED) without losing my data.
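Since a RAID1 mirror keeps both members block-for-block identical in the data area, one sanity check before breaking the mirror is to compare the two members directly. The sketch below is an illustration, not a recovery procedure: it compares two raw devices (or image files) chunk by chunk and reports the first differing offset. The `limit` parameter exists because the RAID metadata region of the disks can legitimately differ, so you would compare only the data region; which region that is depends on your setup.

```python
# Compare two binary streams (e.g. the two members of a RAID1 mirror) chunk
# by chunk. Returns the byte offset of the first difference, or None if the
# compared region is identical. `limit` caps how many bytes are compared,
# e.g. to skip trailing RAID metadata.

def first_difference(a, b, chunk_size=1 << 20, limit=None):
    """Return offset of first differing byte, or None if identical."""
    offset = 0
    while limit is None or offset < limit:
        size = chunk_size if limit is None else min(chunk_size, limit - offset)
        ca, cb = a.read(size), b.read(size)
        if ca != cb:
            shorter = min(len(ca), len(cb))
            for i in range(shorter):
                if ca[i] != cb[i]:
                    return offset + i
            return offset + shorter  # one stream ended before the other
        if not ca:
            return None  # both streams exhausted, contents identical
        offset += len(ca)
    return None
```

If the data regions match, restoring a full backup to one member as a single drive (as described above) is a reasonable path; if they diverge, the mirror was already inconsistent and only the backup should be trusted.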
I think that RST 9.6.0 will work well for you if you want to give it a try. If you are currently using 9.5.0, I would encourage upgrading to 9.6.0 anyway due to the issues introduced with the 9.5.0 release. Those issues have been cleaned up and 9.6.0 is a much better driver.
Is your OS installed on your RAID10 volume? If so, it will be difficult to just set the drives back to non-RAID without losing everything.
Yes, the OS is on the RAID volume. Regardless of what I do, I'm going to have to do a full system backup before trying anything. Once I have a good, full system backup, I'll probably take my second drive - the one that is currently unplugged - and restore the OS to it as a single drive instead of RAID.
In hindsight I should have purchased a good backup solution in the first place instead of going with RAID. I really don't care if the newest driver has fixed this issue or not - Intel is not getting a second shot at screwing up my data. I thought I was buying RAID that worked at the hardware level, not the driver level. I don't trust RAID solutions that rely on software drivers to work. Never have, never will.
I'm also having a similar problem! I have a RAID 5 setup. Two days ago, Intel Rapid Storage Technology decided to give me an "unexpected error has occurred... try restarting or reinstalling" message (you can see the error at the bottom of the screenshot link).
The drive continued to work for the day, then suddenly, while trying to access a folder on the RAID, I got an "I/O error - can't access" or something along those lines.
So I restarted the computer, and when Windows booted up, the RAID drive wasn't visible. In Disk Management, it just tells me to Initialize, which I refuse to do because it's a 9 TB RAID! (I do have backups though - I'm not a complete moron.)
Is this as simple as reinstalling the Intel RST software and RAID drivers? One opinion is that the driver is buggered and Windows no longer understands how to read the RAID 5, so it assumes I should just Initialize it and wipe it clean...
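One point worth knowing before clicking Initialize: a blank disk and a disk Windows merely fails to interpret look the same in Disk Management. A volume that size must use GPT, and a GPT header lives at LBA 1 and begins with the ASCII signature "EFI PART". The sketch below (a diagnostic illustration; device path and 512-byte sector size are assumptions, and 4Kn drives use 4096) checks whether that header is still present, which would indicate a driver problem rather than lost data.

```python
# Check whether a disk still has a GPT header at LBA 1. Disk Management
# offering to "Initialize" does not prove the disk is blank; if the
# "EFI PART" signature is still there, the partitioning survived.

GPT_SIGNATURE = b"EFI PART"

def has_gpt_header(lba1: bytes) -> bool:
    """Return True if the given sector (LBA 1) starts with a GPT signature."""
    return lba1.startswith(GPT_SIGNATURE)

if __name__ == "__main__":
    import sys
    # Example device path; on Windows a raw path like \\.\PhysicalDrive1
    # (run as administrator) would be used instead.
    device = sys.argv[1] if len(sys.argv) > 1 else "/dev/sdb"
    sector_size = 512  # assumption; 4Kn drives use 4096
    with open(device, "rb") as f:
        f.seek(sector_size)        # skip the protective MBR at LBA 0
        lba1 = f.read(sector_size)
    print("GPT header present:", has_gpt_header(lba1))
```

If the header is present, reinstalling the RST driver (rather than initializing) is the direction to investigate; if it is gone, the backups are the safe path.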
I usually work on Macs, so I'm a bit behind on what I should do here...
Similar issues here. One drive in a RAID1 pair "failed", then worked for days after a rebuild, then "failed" again. I started another rebuild, but a 2 TB drive takes a while, so I let it run for a bit, switched off, and came back the next day. Now it thinks neither disk is a member?
There are some useful forum posts out there on this. It appears to be a common problem with Intel RAID controllers.