I had a RAID 5 array built from four 1TB Seagate ST31000520AS disks, plus a RAID 1 array built from two 120GB disks.
The server runs Windows Server 2008 R2 with three Hyper-V virtual machines.
A disk failed in the RAID 5 set and the virtual machines became corrupted. I thought RAID 5 was supposed to survive a single disk failure. Can anyone explain why the corruption happened, and what I can do to stop it happening again?
My second question is about the failed disk I removed. Once it is out of the server I can use and format it fine through a SATA-to-USB converter, but if I plug it into another computer's eSATA port, the Intel Rapid Storage control centre detects it as part of the old RAID 5 array and I can't even initialise it; it reports an I/O error. In Device Manager it shows up under the name of the old RAID volume, with the 1.8TB capacity of that old volume.
Can this RAID configuration (the metadata) be removed from the disk? At the moment I can't use SeaTools to test the drive.
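One thing I was thinking of trying, based on what I've read (so treat this as an untested sketch, and the disk number is just an example): connect the drive over the USB converter, where Windows sees it as a plain disk, and wipe it with diskpart. My understanding is that the Intel RAID metadata lives near the end of the disk, so a plain `clean` (which only clears the partition structures at the start) may not be enough, whereas `clean all` zeroes every sector.

```
diskpart
REM list all disks and note the number of the old RAID member
list disk
REM select the failed 1TB drive (disk 2 here is an assumption; double-check
REM the number, as clean all is destructive and irreversible)
select disk 2
REM clean all zeroes the entire disk, which should also remove any RAID
REM metadata stored at the end of the drive
clean all
```

If that works, I'd hope the eSATA port and SeaTools would then see it as an ordinary unconfigured disk.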