A computer running Win7 64-bit off a partition on a 2-drive RAID-0 array built with Intel Rapid Storage Technology BSODed during video playback. Everything had worked fine for approximately 6 months. On the first two restarts, the RAID driver did not detect any hard drives connected to the system. On the next restart the RAID driver detected both hard drives and recognized an array. Windows booted but BSODed again upon opening Firefox and attempting to restore lost tabs. On the next and several subsequent restarts the RAID driver detects both hard drives but recognizes only one as a member of an array, and labels the other 'Error Occurred (0)'. Windows boots but Intel RST reports the array failed and the 2nd drive inaccessible. However, files are fully accessible and the computer seems to have full functionality.
Before examining the issue further, I connected an identical pair of drives, used Intel RST to construct a RAID-0 array, then cloned the old volume to it using a bootable Acronis Disk Director CD. The clone completed without errors. With the old pair of drives disconnected, the RAID driver detects the new pair fine, Windows boots, and Intel RST reports no errors. The computer seems fully functional.
So what kind of error can cause a drive to simultaneously be accessible and inaccessible, cause BSODs but have no problem reading its whole 1 TB worth of contents, be alternately detectable and not detectable, and even affect whether another drive in the system is detected?
I have the same problem. I can do a ddrescue out of the faulty drive into another drive, but when I add the new drive to the RAID it says that the drive doesn't belong to the RAID. How can I "fool" the RAID into accepting the new drive? If these are bad sectors, is there any way to force the RAID to mount the volume in order to copy the data out of it, even if we get some errors?
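For reference, the ddrescue step mentioned above can be sketched like this. This is a minimal sketch assuming GNU ddrescue; `/dev/sdX` (failing source) and `/dev/sdY` (healthy target) are placeholder device names, so substitute your own, and double-check the direction before running.

```shell
# Sketch only: clone a failing drive with GNU ddrescue.
# /dev/sdX = failing source, /dev/sdY = target (PLACEHOLDERS -- verify yours!)
# rescue.map is a mapfile that records progress so the copy can be resumed.

# First pass: copy everything readable, skip the slow scraping phase (-n).
ddrescue -f -n /dev/sdX /dev/sdY rescue.map

# Second pass: go back to the bad areas and retry each up to 3 times (-r3).
ddrescue -f -r3 /dev/sdX /dev/sdY rescue.map
```

The mapfile is the important part: it means an interrupted rescue picks up where it left off instead of re-reading (and further stressing) the failing drive.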
I also have this problem. I have 4 OCZ Agility 3 60GB SSDs in a RAID 0 and the 4th one reports as "Error Occurred (0)" and the array is marked as FAILED. This array is the boot volume and the only storage volume on the system, yet the system still boots and functions. I guess I should be glad, but what the heck has happened? I'm guessing that while an error occurred, the SSD handled it in some non-fatal way and allowed the system to continue functioning, yet the RST driver thinks that the malfunction was fatal and expects the array to be dead. How can I inform RST that everything is okay and reset the status to NORMAL? Also, it would be nice to get some additional info on what actually happened on the 4th drive.
I have just had the same problem, which brought me to this forum for the first time. So hi everybody!
A balloon tip popped up from my notification area saying that my volume had failed and that I should run CHKDSK. No BSODs whatsoever.
Then I restarted my computer and CHKDSK ran automatically without finding any errors.
I'm using a Corsair 60GB SSD as cache for a 500GB ordinary HDD. The whole system is about 4 months old, and right now it seems to be fully functional; it also seems to me that acceleration is working fine, but the Intel RST software keeps reporting a failed disk.
Upon booting the computer, the screen information right after the mobo splash screen doesn't give me any warnings; it says what it has always said: it lists the SSD as the cache disk and the others as non-RAID volumes.
Is this some kind of bug, or should I be really concerned? Should I deactivate acceleration and mark the volume as normal?
Here is a screenshot of the Intel RST software:
Thank you very much.