3 Replies Latest reply on Apr 15, 2014 2:09 AM by Muiz

    RAID1 failed: one hard disk broke and was replaced with a new one, but the system still cannot boot

    Muiz

      Dear all,

      I have a server running RAID1 (two 2 TB hard disks). Yesterday the first disk failed, so I replaced it with a new one, but now the system cannot boot. What can I do?

      1. Can I recover the data from disk 2?

      2. Can I add another disk, install Windows on it, and use Intel® Rapid Recover Technology to recover the RAID1?

      3. Or can I do it with a Linux live CD?
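
      For question 3, the usual live-CD approach looks roughly like the following. This is only a sketch: the device names (sdb as the surviving member, md126/md127 as the auto-created arrays) are taken from the diagnostics further down, and /dev/md/imsm0 is an arbitrary name I chose for the container. Double-check every device name on your own system before running anything.

      ```shell
      # Sketch only -- device names taken from the diagnostics below;
      # verify them with lsblk / mdadm -E before running anything.

      # Stop the half-assembled, read-only arrays
      mdadm --stop /dev/md126
      mdadm --stop /dev/md127

      # Re-assemble the IMSM container from the surviving disk
      mdadm --assemble /dev/md/imsm0 -e imsm /dev/sdb

      # Start the member array inside the container (degraded is OK)
      mdadm -I /dev/md/imsm0

      # If the volume comes up but stays inactive, force it to run
      mdadm --run /dev/md126

      # Mount read-only and copy the data somewhere safe before any rebuild
      mount -o ro /dev/md126 /mnt    # or /dev/md126p1 if the volume is partitioned
      ```

      Copying the data off first is the safest order of operations; only attempt a rebuild afterwards.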

       

      motherboard: ASUS P8Z77-LV

      Intel ROM: 11.0.0.1339

      RAID1 with two 2 TB hard disks; the replacement is also 2 TB.

       

      I just booted a Linux live CD to gather some information:

      root@sysresccd /root % cat /proc/mdstat

      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]

      md126 : active (read-only) raid1 sdb[0](S)

            1953511424 blocks super external:/md127/0 [2/0] [__]

       

      md127 : inactive sdb[0](S)

            2257 blocks super external:imsm

       

      unused devices: <none>

       

      root@sysresccd /root % mdadm -D /dev/md/Volume0_0

      /dev/md/Volume0_0:

            Container : /dev/md/imsm_1, member 0

           Raid Level : raid1

           Array Size : 1953511424 (1863.01 GiB 2000.40 GB)

        Used Dev Size : 1953511556 (1863.01 GiB 2000.40 GB)

         Raid Devices : 2

        Total Devices : 1

       

                State : clean, FAILED

      Active Devices : 0

      Working Devices : 1

      Failed Devices : 0

        Spare Devices : 1

       

          Number   Major   Minor   RaidDevice State

             0       0        0        0      removed

             1       0        0        1      removed

       

             0       8       16        -      spare   /dev/sdb

       

      root@sysresccd /root % ls /dev/sd*

      /dev/sda  /dev/sdb  /dev/sdc  /dev/sdc4

      sda is the new disk, sdb is the good second RAID1 disk, and sdc is the live CD.

       

      root@sysresccd /etc % ls -al /dev/md/Volume0_0

      lrwxrwxrwx 1 root root 8 Apr 12 08:12 /dev/md/Volume0_0 -> ../md126

       

      root@sysresccd /etc % ls /dev/md12*

      /dev/md126  /dev/md127

       

      root@sysresccd /etc % mdadm -E /dev/sda

      /dev/sda:

         MBR Magic : aa55

      root@sysresccd /etc % mdadm -E /dev/sdb

      /dev/sdb:

                Magic : Intel Raid ISM Cfg Sig.

              Version : 1.1.00

          Orig Family : 63409191

               Family : 63409191

           Generation : 003fb046

           Attributes : All supported

                 UUID : de83aad6:8d1c0812:72c02043:5459fa5f

             Checksum : 3e86fb32 correct

          MPB Sectors : 1

                Disks : 2

         RAID Devices : 1

       

        Disk00 Serial : W2F0F0V8

                State : active

                   Id : 00000001

          Usable Size : 3907023112 (1863.01 GiB 2000.40 GB)

       

      [Volume0]:

                 UUID : 82f41dd6:43842b55:555cc5e7:8042a1b1

           RAID Level : 1

              Members : 2

                Slots : [__]

          Failed disk : 0

            This Slot : 0 (out-of-sync)

           Array Size : 3907022848 (1863.01 GiB 2000.40 GB)

         Per Dev Size : 3907023112 (1863.01 GiB 2000.40 GB)

        Sector Offset : 0

          Num Stripes : 15261808

           Chunk Size : 64 KiB

             Reserved : 0

        Migrate State : idle

            Map State : failed

          Dirty State : clean

       

        Disk01 Serial : W2F0F1NM:0

                State : active

                   Id : ffffffff

          Usable Size : 3907023112 (1863.01 GiB 2000.40 GB)
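
      Once the data is safely copied off, the degraded mirror can usually be rebuilt by adding the replacement disk to the IMSM container (md127 in the listing above), not to the volume itself; mdmon then starts the rebuild automatically. A sketch, assuming sda really is the blank replacement disk (mdadm -E showed only an MBR signature on it):

      ```shell
      # DESTRUCTIVE to sda -- make sure sda is the blank replacement disk.

      # Clear any stale RAID metadata on the replacement disk (harmless if none)
      mdadm --zero-superblock /dev/sda 2>/dev/null || true

      # Add the new disk to the IMSM container; rebuild starts automatically
      mdadm --add /dev/md127 /dev/sda

      # Watch the resync progress
      cat /proc/mdstat
      ```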

