2 Replies Latest reply on Jan 27, 2016 1:06 AM by JF00bar

    Raid Data Setup in Linux (Also supported in Windows)

    derek99

      I have an Asus Z170 motherboard with two 3TB data disks in RAID 1, configured using the BIOS setting "RAID", so Intel RAID in effect.  The IRST application in Windows shows the array and I can read and write to it.  But in Linux (Ubuntu 15.10) the file manager shows two separate 3TB disks, and if I write to either one of them the file doesn't survive - that is, I can't see it in Windows.  I can read the files but not write to the array.  Somehow the array does not seem to be properly set up in Linux.  As this deals with RAID from the Intel chipset I thought I might ask the question here: how do I set up RAID in Linux on the Z170, without destroying the existing RAID array that Windows sees?

       

      If I examine my 2 drives with "mdadm --examine" I get:

       

      >sudo mdadm --examine /dev/sdc

       

      /dev/sdc:

                Magic : Intel Raid ISM Cfg Sig.

              Version : 1.3.00

          Orig Family : b8dc9faf

               Family : b8dc9faf

           Generation : 000028f5

           Attributes : All supported

                 UUID : 0e6a3741:f6666efc:3b3f793a:1b7fa58a

             Checksum : ff687ac8 correct

          MPB Sectors : 1

                Disks : 2

         RAID Devices : 1

       

        Disk00 Serial : WD-WCC4N5VX7AFE

                State : active

                   Id : 00000004

          Usable Size : 5860528392 (2794.52 GiB 3000.59 GB)

       

      [WD3TB_Raid]:

                 UUID : e1fb2dad:506bcbfb:18aee43c:7e59c40f

           RAID Level : 1

              Members : 2

                Slots : [UU]

          Failed disk : none

            This Slot : 0

           Array Size : 5860528128 (2794.52 GiB 3000.59 GB)

         Per Dev Size : 5860528392 (2794.52 GiB 3000.59 GB)

        Sector Offset : 0

          Num Stripes : 22892688

           Chunk Size : 64 KiB

             Reserved : 0

        Migrate State : idle

            Map State : normal

          Dirty State : clean

       

        Disk01 Serial : WD-WCC4N1HND0JN

                State : active

                   Id : 00000005

          Usable Size : 5860528392 (2794.52 GiB 3000.59 GB)

       

      and for the other drive:

      sudo mdadm --examine /dev/sdd

       

      /dev/sdd:

                Magic : Intel Raid ISM Cfg Sig.

              Version : 1.3.00

          Orig Family : b8dc9faf

               Family : b8dc9faf

           Generation : 000028f5

           Attributes : All supported

                 UUID : 0e6a3741:f6666efc:3b3f793a:1b7fa58a

             Checksum : ff687ac8 correct

          MPB Sectors : 1

                Disks : 2

         RAID Devices : 1

       

        Disk01 Serial : WD-WCC4N1HND0JN

                State : active

                   Id : 00000005

          Usable Size : 5860528392 (2794.52 GiB 3000.59 GB)

       

      [WD3TB_Raid]:

                 UUID : e1fb2dad:506bcbfb:18aee43c:7e59c40f

           RAID Level : 1

              Members : 2

                Slots : [UU]

          Failed disk : none

            This Slot : 1

           Array Size : 5860528128 (2794.52 GiB 3000.59 GB)

         Per Dev Size : 5860528392 (2794.52 GiB 3000.59 GB)

        Sector Offset : 0

          Num Stripes : 22892688

           Chunk Size : 64 KiB

             Reserved : 0

        Migrate State : idle

            Map State : normal

          Dirty State : clean

       

        Disk00 Serial : WD-WCC4N5VX7AFE

                State : active

                   Id : 00000004

          Usable Size : 5860528392 (2794.52 GiB 3000.59 GB)

       

       

       

      Also, the contents of the file /etc/mdadm/mdadm.conf are as follows:

       

      # mdadm.conf

      #

      # Please refer to mdadm.conf(5) for information about this file.

      #

       

      # by default (built-in), scan all partitions (/proc/partitions) and all

      # containers for MD superblocks. alternatively, specify devices to scan, using

      # wildcards if desired.

      #DEVICE partitions containers

       

      # auto-create devices with Debian standard permissions

      CREATE owner=root group=disk mode=0660 auto=yes

       

      # automatically tag new arrays as belonging to the local system

      HOMEHOST <system>

       

      # instruct the monitoring daemon where to send mail alerts

      MAILADDR root

       

      # definitions of existing MD arrays

      ARRAY metadata=imsm UUID=0e6a3741:f6666efc:3b3f793a:1b7fa58a

      ARRAY /dev/md/WD3TB_Raid container=0e6a3741:f6666efc:3b3f793a:1b7fa58a member=0 UUID=e1fb2dad:506bcbfb:18aee43c:7e59c40f

       

      # This file was auto-generated on Sat, 23 Jan 2016 11:24:34 -0500

      # by mkconf $Id$

       

      -------------------

       

      I'm not sure how to interpret this data and what to do to get the array functioning in Linux without losing its functionality in Windows.

       

      It looks like I have an imsm container in existence with UUID 0e6a3741:f6666efc:3b3f793a:1b7fa58a and there is one member identified as UUID=e1fb2dad:506bcbfb:18aee43c:7e59c40f.  I'm not sure why there are not 2 members in the array, each with a different UUID.  Perhaps that is my problem, but mdadm --examine on both /dev/sdc and /dev/sdd yielded the same UUID.  Slightly confused.

       

      Any suggestions on what I should do to get this working in Linux without borking it in Windows? Some mdadm assemble or create commands, perhaps? Thanks.

       

      Derek

        • 1. Re: Raid Data Setup in Linux (Also supported in Windows)
          derek99

          Here is the answer to all of that.

          1) It would appear that Intel RST in the BIOS presents the operating system with a UUID for an Intel RAID container and a UUID for the array inside that container.

              Issuing the command sudo mdadm -Ebsc partitions | tee raid_array_ids.txt will put into the file raid_array_ids.txt the UUIDs of the container and

              of the array in the container, written as ARRAY lines (an example is shown just below).
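
           For example, on this system that file should contain essentially the two ARRAY lines already visible in the mdadm.conf listing above: one for the imsm container and one for the named volume inside it (the volume name will differ on other systems):

           ARRAY metadata=imsm UUID=0e6a3741:f6666efc:3b3f793a:1b7fa58a

           ARRAY /dev/md/WD3TB_Raid container=0e6a3741:f6666efc:3b3f793a:1b7fa58a member=0 UUID=e1fb2dad:506bcbfb:18aee43c:7e59c40f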

           

          2) Those elements above need to be assembled into a device (e.g. /dev/md126) that can be mounted in the filesystem.

              That step is not done automatically when Linux boots, so a few steps are needed to assemble the array and to automate the process.

              Note: one could create the RAID array with mdadm, but since it already exists from the Windows definition, an mdadm --create would kill all data on the array.

              The mdadm --assemble commands do not!

 

               First, the name and identities of the container and the array must be put into the /etc/mdadm/mdadm.conf file.

               The base file shown above was auto-generated by the Debian/Ubuntu mkconf script (/usr/share/mdadm/mkconf); the ARRAY lines from step 1 just need to be present in it, and running sudo update-initramfs -u afterwards makes sure the initramfs also sees the updated mdadm.conf (a sketch follows below).
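
           As a rough sketch (assuming mdadm is installed and mdadm.conf does not already contain these ARRAY lines), that update could be:

           sudo mdadm -Ebsc partitions | sudo tee -a /etc/mdadm/mdadm.conf   # append the container and array ARRAY lines

           sudo update-initramfs -u                                          # rebuild the initramfs so it sees the new mdadm.conf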

           

          3)  Then the command sudo mdadm --assemble --scan will read that mdadm.conf file and create the necessary RAID device (e.g. /dev/md126),

                which can then be mounted. The RAID device should now be available to the filesystem (define a mount point and mount it at this point).

               To see what you have, try cat /proc/mdstat, lsblk, or mdadm --detail /dev/md126 (md126 is the usual name for the assembled array device).

           

          At this point you are done and the data RAID array should be usable; the whole assemble-and-mount sequence is sketched just below.
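
           Once mdadm.conf is in place (step 2 above), the assemble-and-mount part is roughly the following; the mount point /mnt/WD3TB_Raid is just an example, and md126 is whatever device name mdadm actually reports:

           sudo mdadm --assemble --scan            # build /dev/md126 from the definitions in mdadm.conf

           cat /proc/mdstat                        # confirm the array came up

           sudo mkdir -p /mnt/WD3TB_Raid

           sudo mount /dev/md126 /mnt/WD3TB_Raid   # or /dev/md126p1 if the array carries a partition table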

           

          To automate this process I used two files:

          1) a script called raidd.sh, made executable with chmod 755 raidd.sh.

          The contents of that file were:

          #!/bin/sh

          #Caution /etc/mdadm/mdadm.conf must exist with the ARRAYS & CONTAINERS defined already

          #        the definitions in mdadm.conf determine what will be raided

          #Note to create the essential identities of the container and array within the container

          #     type sudo mdadm -Ebsc partitions > containerarray_id.txt

          #     copy the identities in the above text file into mdadm.conf

          #

          #Note must move raidd.sh to / and then sudo chmod 755 raidd.sh one time

          #Note to execute manually: go to root, then sudo ./raidd.sh

          mdadm --assemble --scan

          # Now add some results of that assemble to a log file and see it on screen

          cat /proc/mdstat | tee /raidd.log

          lsblk | tee -a /raidd.log
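
           Per the notes in the script, installing and testing it by hand would look something like this (/ is simply where the script and log were kept here):

           sudo cp raidd.sh /raidd.sh    # the service below expects the script at /raidd.sh

           sudo chmod 755 /raidd.sh      # one time, to make it executable

           sudo /raidd.sh                # run it manually once and check /raidd.log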

           

          2) Then, since Ubuntu 15.10 uses systemd, a service was created in /etc/systemd/system.

          That text file was called raidd.service and contained the following:

          [Unit]

          Description=Assemble Raid from /etc/mdadm/mdadm.conf

          DefaultDependencies=no

          Before=network.target

           

          [Service]

          Type=oneshot

          RemainAfterExit=no

          ExecStart=/raidd.sh

           

          [Install]

          WantedBy=multi-user.target

           

          3) issue the command

          sudo systemctl enable raidd.service

           

          Reboot, and the results should be seen in /raidd.log,

          or you can issue the cat /proc/mdstat command to see what RAID devices have been defined (a quick check is sketched below).
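
           For example, after a reboot a quick check could be (raidd.service, /raidd.log and md126 are the names used above):

           systemctl status raidd.service    # did the one-shot assemble service run?

           cat /raidd.log                    # the mdstat and lsblk output captured by the script

           sudo mdadm --detail /dev/md126    # details of the assembled array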

           

          Comments, corrections to the above are welcomed.

           

          Derek

          • 2. Re: Raid Data Setup in Linux (Also supported in Windows)
            JF00bar

            Better to use Btrfs or ZFS instead of fake/BIOS RAID.  Also, you are supposed to set the controller to AHCI if you are doing that in Linux.  Otherwise it's an overly complicated PITA, as you've demonstrated.

             

            In short, for Linux + other OSes I use an AMCC/3ware/LSI/Avago RAID card so I don't have to deal with the fakeraid drama across OSes.

             

            http://www.avagotech.com/products/server-storage/raid-controllers/

             

            You can find them on eBay fairly cheap.  Intel's RAID, even on the server side, is nothing short of horrific.  I don't even bother, but I'm glad you found a solution that works, for now.