2 Replies Latest reply on Nov 5, 2009 9:39 AM by praaphorst

    ICH10R - RAID 5 - Reboot during migration forced a recover?


      Configured a RAID 5 of 3x1 TB drives on a Windows Server 2008 machine set up as a file server. I copied a total of 1 TB of data onto the RAID. Then I added a 4th 1 TB drive and modified the RAID in the Storage Manager. The Storage Manager then started a migration and showed a tray icon alert (I estimated the migration would take 10 hrs).


      The server needed to move to a new location, so I shut it down (relying on the advanced algorithms allowing for an interruption of the migration). On power-up the volume was invalid and the first 3 drives were marked as failed. The context menu allowed me to mark them as normal, which I did, and then "Recover" became available in the volume context menu. I picked Recover and it is now recovering for another 34 hrs ... sigh ...


      I have a couple of questions:


      1. Can I expect to get all my data back after the 34-hr recovery?

      2. Why didn't the Storage Manager just continue the migration?

      3. What can I improve in my procedure to prevent this from happening again?