6 Replies Latest reply on Jun 17, 2013 9:09 AM by kbystrak

    Raid 1 (Mirror) - Removed 1 working disk, but remaining disk shows status "incompatible" and unable to boot to OS. What's the point of Raid 1 Mirror?


      I have an Intel Server S3420GPV Motherboard.


      It has been setup and running Raid 1 (Mirror) since May.


      All has been fine until recently, when Disk 1 (port 0) was reported as "degraded".

      I marked the status as "OK" and let it rebuild itself.


      Weeks later, the same disk was degraded again. This time I knew there must be issues with the disk, even though it's less than 6 months old.


      I went out to purchase a new disk. According to the reference guide,



      I should be able to remove the degraded disk, replace it with a new disk, and the system would start up as normal.

      Then all I needed to do was click "Rebuild".


      However, when I removed the degraded disk in the RAID 1 volume and replaced it with a new disk,

      the RAID BIOS reported RAID 1 Disk 2 as "incompatible" on boot. The new disk was reported as "Non-RAID HD".

      I was unable to boot into the RAID volume.



      I am NOT able to boot into Windows unless I have both of the disks in the RAID 1 volume connected.


      Why is it reporting "incompatible" when it should be able to boot with only 1 of the 2 disks in the volume, since it's a MIRROR?


      This sounds really "silly": anybody with 1 failed disk in a RAID 1 setup would find the remaining "working" disk reported as "incompatible", leaving them unable to boot!


      Anybody with the same problems?

        • 1. Re: Raid 1 - Missing disk, but remaining disk status is "incompatible"

          You are right - you should be able to boot just fine with only 1 HD installed.


          I have seen problems when I add a replacement hard drive, that has been in a system before.  If there is any meta-data in the master boot record on the new hard drive, it confuses the RAID.  Usually a low-level format of the new hard drive fixes the system.
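          The "low-level format" advice above can be approximated from any Linux live environment by zeroing both ends of the replacement drive, since Intel chipset RAID (IMSM) metadata is stored near the end of the disk, while the MBR/partition table sits at the start. This is a hedged sketch run against a file-backed stand-in; on a real drive the device name (e.g. /dev/sdX) is a placeholder you must verify with lsblk first, and the dd commands are destructive.

```shell
# Demo on a file-backed stand-in; on a real drive set DISK to the actual
# replacement device (e.g. /dev/sdX -- placeholder, verify with lsblk first).
DISK=disk.img
dd if=/dev/urandom of="$DISK" bs=1M count=64 2>/dev/null   # fake "previously used" drive

# Zero the first few MiB: clears the MBR / partition table mentioned above.
dd if=/dev/zero of="$DISK" bs=1M count=4 conv=notrunc 2>/dev/null

# Intel chipset RAID (IMSM) metadata lives near the END of the disk, so zero
# the last few MiB as well. On a real device, get the size with:
#   blockdev --getsize64 /dev/sdX
SIZE=$(stat -c %s "$DISK")
dd if=/dev/zero of="$DISK" bs=1M seek=$(( SIZE / 1048576 - 4 )) count=4 conv=notrunc 2>/dev/null
```

          After wiping both ends, the RAID option ROM should see the drive as a plain "Non-RAID Disk" with no stale metadata to confuse it.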


          Did you make sure to get the same manufacturer, model, and firmware revision of hard drive, to replace the old one?

          • 2. Re: Raid 1 - Missing disk, but remaining disk status is "incompatible"

            Both disks are the same brand, same build, same capacity, same model, and it works fine.


            When I remove either 1 of the disks set up in RAID 1 (Mirror) mode, the remaining disk shows up as "incompatible". [Which is just SILLY and STUPID, as it doesn't protect me against single disk failure.]


            I had no way of booting with the remaining working disk.


            What I did instead was insert a new disk, same brand, same capacity, but different model (the same model was not in stock), and set it as a "Spare-disk".


            I then did a "Verify Volume Data" on the RAID 1 disks, hoping that the faulty disk would fail and then rebuild onto the spare disk.


            Sure enough, it failed and it automatically rebuilt on the spare disk.


            Now that it has been rebuilt, all is fine and running.


            The question now I have is, will the new RAID 1 pair report "incompatible" when I remove 1 of them?


            [I will have to TEST this scenario out, as a 1-disk failure is very possible, and being unable to boot with the remaining one would defeat the purpose of building a RAID 1 volume in the first place.]

            • 3. Re: Raid 1 - Missing disk, but remaining disk status is "incompatible"

              Today I decided to "emulate" a hard disk failure.

              I shut down my server.


              I unplugged the SATA cable to one of the 2 disks in the RAID 1 volume.

              When it booted up, the RAID BIOS reported the remaining disk as "incompatible". I was unable to boot.


              It will only boot when both the disks in the Raid 1 Volume are connected.


              Has anyone had the same situation?


              My only concern is that when one of the disks really dies, I will be unable to boot, as the BIOS will show "incompatible".


              Please help



              • 4. Re: Raid 1 - Missing disk, but remaining disk status is "incompatible"

                Nobody had similar issues?


                I still don't know how I should proceed to rectify the issue.


                Current setup will NOT protect me from a single disk failure (unable to power/boot).


                But it would be ok on a degraded disk.

                • 5. Re: Raid 1 - Missing disk, but remaining disk status is "incompatible"

                  I have the same problem.

                  Nobody had similar issues?


                  • 6. Re: Raid 1 (Mirror) - Removed 1 working disk, but remaining disk shows status is "incompatible" and unable to boot to OS. Whats the point of Raid 1 Mirror?

                    I realize this reply is two years late, but I thought I should post it because this page is the top hit on Google when you search "intel raid incompatible disk".


                    I had a similar problem to yours last Friday.  I am not very familiar with Intel RAID controllers other than using them for my own PC at home.  I spend all of my server time working with real array controllers from HP and IBM, and crappy ones from Dell.  This was the first time I had to deal with a failed array on an Intel ICHxR chipset.


                    The server had an Intel-branded board and 4 WD SATA disks.  They were divided into 2 mirrored arrays.  The mirror containing the OS volume was not working.  One of the drives had failed catastrophically and was not even being reported as present by the CMOS.  When the server posted and the Intel array firmware came up, it did not show the first mirrored array at all.  It listed only one array, which it said had a status of "Normal".  It then listed the three remaining disks.  The first disk was listed as "Incompatible" in red and the other two were listed as "Member Disk(0)" in green.  In fact, the array it was listing as RAID ID 0 used to be the 2nd mirrored array and used to be RAID ID 1.  It renumbered the arrays and never showed the failed array.  The server wouldn't boot because instead of showing up as degraded, the failed mirror did not show up at all.


                    I should note that this situation is very common in the server world, and that every other array controller I have ever worked with, even the horrible ones produced by Dell, would have been fine and would have booted to the OS.  Even the documentation I found from Intel seemed to indicate that the array should show up as degraded and should still work.  Only after searching the web did I find that this is a common situation for Intel array controllers when one disk is missing from a mirror.


                    It is strongly recommended that in a mirrored array both disks be of the same capacity and rotational speed.  It is best if they are the same model and F/W version as well.  Since I did not have another disk to match the one good disk, replacing both disks was necessary.


                    Here is what I did to fix the situation:


                    1. I removed the failed disk and the good disk from the server.

                    2. I cloned the good disk to another disk using disk cloning software on another PC (I used Ghost 4 Linux).  I did this because I always make a backup copy just in case.

                    3. I installed two new hard drives into the server and used the CTRL+I BIOS setup to create a new mirrored array from the two new drives I just installed.

                    4. I powered down the server and unplugged the SATA cable from one of the two new disks I just installed.  Let's call that disk B from now on.  Disk A (the other new disk) is still connected.

                    5. I turned the server back on and, sure enough, the array I just made was not showing up in the Intel CMOS at all, and disk A was showing as incompatible.  I booted to a MiniTool Partition Wizard boot CD to see if it would recognize disk A, and it did see it as a single stand-alone drive.

                    6.  I connected the original good array disk that contained the OS to a USB drive adapter (I used a Rosewill RX-DU300) and rebooted to Ghost 4 Linux.

                    7. I cloned the good source disk to the new disk A.

                    8. I shut down the server and unplugged the USB disk adapter.  I also plugged disk B back in.

                    9. I turned the server back on and entered the CTRL+I Intel CMOS setup again.  This time both arrays showed up during the CMOS post, all 4 disks were showing in green and said "Member Disk", and both arrays had status "Normal".  However, I knew that the new mirror was not good because only disk A contained any data.

                    10.  I chose to revert disks back to non-RAID and selected disk B as the one I wanted to reset.

                    11.  The status of the new array went from normal to degraded.  Immediately I was presented with the option to rebuild the array using the (now free, non-RAID) disk B which I had just taken out of the array.

                    12.  The status of the array went to rebuilding or recovering, I don't remember the wording exactly.  I exited the CTRL+I setup and the server rebooted.

                    13.  After the server reset, the OS began to boot.  It took about 35 minutes for it to get to the CTRL+ALT+DELETE prompt.  It took a lot of patience to wait that long and not just assume it was hung.  However, during a rebuild operation I/O performance is degraded significantly, so I knew it would take a long time to boot.

                    14.  I logged into the OS (Server 2008) and allowed it to fully start.  It found new hardware (the new array), said it was finished installing the new hardware, and needed to reboot.  I said OK to the reboot.

                    15.  The server reset and then tried to boot the OS.  It failed at an error screen that said it could not find winlogon and there may be a problem with missing files.  This situation is common when installing a new OS volume on a Windows OS.  What happens is the new volume boots fine the first time, and then Windows assigns that volume a new partition ID that is a GUID.  When Windows boots the second time, the boot record is wrong and does not reference the new GUID assigned to the partition.

                    16.  I rebooted the server to the Windows Server 2008 DVD, clicked on the repair option, and then went into the recovery console (a DOS-style prompt).

                    17.  I input the command to rebuild the boot configuration data (BCD): "bootrec /rebuildbcd" and chose to accept the installation of Windows that it found.

                    18.  I rebooted the server again and this time it booted correctly.  I logged in and shut it down.  I repeated the process again to make sure it was not a fluke.  It worked correctly and continues to work correctly.
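                    For anyone without a Ghost boot CD, the clone in steps 2 and 7 can also be done with plain dd from any Linux live environment. This is a hedged sketch run against file-backed stand-ins; on real hardware SRC and DST would be whole-disk devices (placeholder names like /dev/sdc for the source on the USB adapter), which you must verify with lsblk before running anything.

```shell
# File-backed stand-ins for the demo; on real hardware SRC/DST would be whole
# disk devices (placeholder names -- verify with lsblk before running dd).
SRC=src.img      # stand-in for the good original OS disk
DST=dst.img      # stand-in for new disk A

dd if=/dev/urandom of="$SRC" bs=1M count=32 2>/dev/null    # pretend source data

# Sector-for-sector copy. conv=noerror,sync keeps going past unreadable
# blocks (padding them with zeros) -- useful when the source drive is failing.
dd if="$SRC" of="$DST" bs=4M conv=noerror,sync 2>/dev/null
sync
```

                    After a clone like this, Windows may still need the "bootrec /rebuildbcd" repair from the install DVD described in step 17, since the copy lands on a disk the boot records have never seen.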


                    The Intel Matrix Storage software in Windows indicated that the volume was rebuilding by flashing a tray icon.  During this time the server performance was significantly reduced.


                    The original disks were 250GB and the new disks were 500GB.  The whole operation from beginning to end took about 6 hours, of which I spent around 4 hours actively working on it.  That time and the loss of business for the day for my customer far exceeded the cost of a real array controller.  The lesson learned here is that the Intel ICH RAID chipsets are OK for non-mission-critical situations or home use, but have no place in a mission-critical role like the only server for a busy business.  I would also like to mention that I looked exhaustively for official documentation from Intel on what the "Incompatible" disk status means and what to do in this situation, and I could not find anything.


                    I hope this helps someone else. - Karl

                    1 of 1 people found this helpful