I am facing issues with two RAID1 arrays created with the Intel Rapid Storage Technology (IRST) software on HP Z820 workstations (Windows 7 Pro 64-bit SP1).
In one case the problem started after a power failure; in the other it started after a Windows blue screen of death.
Case 1: The customer called me after a power failure. When they tried to turn the workstation back on, the IRST RAID status message shown during boot said that the first member disk was degraded and that the second disk had a "disk error (0)".
The OS would not boot from either disk unless I moved them from the SATA controller to the SAS controller (they are SATA hard disks).
At first I cloned the OS from another workstation (same model) onto a new hard disk, copied the folders I needed from one of the two failed disks onto it, reinstalled all the required programs, and added a second new hard disk. I then created a new RAID1 volume on the workstation that had been affected by the power failure. Just after the mirroring finished, the system crashed again and left me back at the starting point.
To verify that this was not a hardware problem, I created a new RAID1 volume on another workstation (same model) using those two disks (I had removed them from the old RAID1 volume first); this one did not crash.
In the end I concluded that there was a problem with the SATA controller.
Case 2: The customer had to power off the workstation manually after a Windows blue screen of death. At boot, the same status as in case 1 showed up.
So we set up another workstation (HP Z420) with its own OS and added the member disk of the old RAID1 that was supposedly only degraded (it held a database we needed). We then tried to create a new RAID1 between that disk and the other member of the old array. Once mirroring reached 100%, the IRST software reported a disk error for the first disk (the one that at the beginning of this story had been flagged as merely degraded!) and, a few minutes later, the same message for the other member disk.
My questions are:
What does this "disk error (0)" message mean?
Why, if I clone the OS onto a new hard disk and set up a new RAID1 using two fresh disks on a new workstation, does the same error show up?
Why can I not boot the operating system unless I either: 1) move the hard disk cable from the SATA controller to the SAS controller, OR 2) set the hard disk as a non-member of the RAID and leave it on the SATA controller?
Could a fault on the controller also affect both member disks of the RAID1 volume?
Is it possible that a disk can no longer be used in a new RAID1 volume after such an error has occurred? The disk works properly when it is not a member of a RAID volume!