Thank you for joining the Intel communities.
You can do a RAID migration from RAID 10 to RAID 5, but not from RAID 10 to RAID 1. A RAID 1 array uses only 2 hard drives, while RAID 10 requires a minimum of 4, so the RAID 10 volume will not fit into a RAID 1 array.
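The drive-count arithmetic behind this can be sketched in a few lines. This is just the standard textbook capacity math for each RAID level, not Intel's migration tool, and the 75 GB drive size below is a made-up figure chosen so a 4-drive RAID 10 comes out to the 150 GB volume discussed in this thread.

```python
# Illustrative sketch: why a 4-drive RAID 10 volume can't fit into RAID 1.
# These are the generic capacity formulas, not anything Intel-specific.

def usable_capacity(level: str, drives: int, size_gb: int) -> int:
    """Usable capacity in GB for common RAID levels (equal-size drives)."""
    if level == "1":
        if drives != 2:
            raise ValueError("RAID 1 uses exactly 2 drives")
        return size_gb                      # one drive's worth, mirrored
    if level == "5":
        if drives < 3:
            raise ValueError("RAID 5 needs at least 3 drives")
        return (drives - 1) * size_gb       # one drive's worth lost to parity
    if level == "10":
        if drives < 4 or drives % 2:
            raise ValueError("RAID 10 needs an even number of drives, >= 4")
        return (drives // 2) * size_gb      # half the drives hold mirror copies
    raise ValueError(f"unsupported level {level!r}")

# Four hypothetical 75 GB drives in RAID 10 give 150 GB usable...
print(usable_capacity("10", 4, 75))   # 150
# ...but RAID 1 on the same size drives tops out at one drive's capacity:
print(usable_capacity("1", 2, 75))    # 75
```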
You can see more information here:
Thank you for your response. So, my end goal is to increase the size of my existing 4-HDD RAID 10. The volume is 150 GB and it is out of space. I know that if I had a newer RAID controller (newer than ICH9R), there would be an option to grow the volume, but that is not the case with my controller.
Considering your response, I think that the ONLY solution for me is to reload the system: take it all back down to bare metal, use new larger-capacity HDDs, and reconfigure it from scratch. Would you concur?
I'd replace the controller while I'm at it. Being on software RAID, which is what Intel's barely-supported chipset RAID is, is fine if you're doing AHCI with ZFS. But for anything else, it's a death march towards a cliff, unless it's an open-source OS that handles the software RAID itself and uses btrfs, LVM, or another expandable filesystem.
In any case, you might consider dropping 200 bucks on a RAID controller, unless you think you'll hate the guy working on it in a few years.
JF00bar - I thought that the controller was hardware; it shows up in Device Manager. By the way, what are ZFS, btrfs, and LVM? I don't think I can use another file system for a Windows Server OS.
I get your point about old hardware. I am working with the client now trying to get him to pull the trigger for a new server.
Thanks for your response.
You can see some information about ZFS, btrfs at:
And you can see some information about LVM at:
NOTE: These links are being offered for your convenience and should not be viewed as an endorsement by Intel of the content, products, or services offered there.
Areca, Intel, or Avago (formerly LSI, which was formerly 3ware, and before that AMCC) make hardware RAID cards. You don't need to drop coin on a new server if you aren't pegging the CPU/RAM with your current setup. The "RAID" setup that you see on motherboards or $30 add-on cards is a software RAID configuration, meaning your driver and CPU are doing the RAID, not the card.
Since you are on Windows, I'd recommend just dropping coin for an 8-port internal card. It's an extra layer of protection: OS-independent BIOS, configuration, battery, cache, CRC checking, byte verification, parity checking, etc. Unless you go really high end, like an IBM mainframe, where it's all software RAID, but with redundant paths and redundant backup configurations (and also not running Windows).
But for Windows, I would never run a server without hardware RAID. You might as well fill out your pink slip ahead of time; it will bend you over as soon as a driver pukes and you are left with no data. Back on the original topic, as far as converting a RAID 10 into a RAID 1: you can't get there just by pulling drives. RAID 10 is a stripe across mirrored pairs, so if you pluck one drive out of each pair (say the 2nd and 4th), what's left is the two striped halves, which is a RAID 0 with no redundancy, not a RAID 1. A true RAID 1 means recreating the array and restoring from backup.
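A toy model makes the layout concrete. This is a sketch of the conventional stripe-of-mirrors arrangement (drives 0+1 mirrored, drives 2+3 mirrored), not Intel's actual on-disk metadata format, and it shows what survives if you pull one drive from each mirrored pair:

```python
# Toy model of a 4-drive RAID 10: a stripe across two mirrored pairs.
# Pair (0,1) mirrors even blocks, pair (2,3) mirrors odd blocks.
# Illustrative only; real controllers work at the block/chunk level.

def raid10_write(data_blocks):
    """Distribute blocks across drives 0-3 as a stripe of mirrors."""
    drives = {0: [], 1: [], 2: [], 3: []}
    for i, block in enumerate(data_blocks):
        if i % 2 == 0:          # even-numbered blocks -> first mirrored pair
            drives[0].append(block)
            drives[1].append(block)
        else:                   # odd-numbered blocks -> second mirrored pair
            drives[2].append(block)
            drives[3].append(block)
    return drives

drives = raid10_write(["A", "B", "C", "D"])
# Pull drives 1 and 3 (one copy from each mirror):
survivors = {k: v for k, v in drives.items() if k in (0, 2)}
print(survivors)   # {0: ['A', 'C'], 2: ['B', 'D']}
# Each surviving drive holds only half the blocks: that's a stripe
# (RAID 0), not a mirror. Lose either survivor and the data is gone.
```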
As far as capacity expansion on that software RAID chipset, the answer is NO. That's another reason hardware RAID is used, by the way: it supports online capacity expansion.
Here's the list of Intel software RAID chipsets and how none of them do capacity expansion.
So what you can do with your current setup is a full image backup with Macrium Reflect or another product (as you said). Blow away the RAID, swap the drives, recreate the RAID, deploy the image, and have the backup software extend the partition (it usually does that automatically now). And you're done, for now.
In short, unless it's a mainframe or you can run a RAID-aware filesystem (which you only get on Solaris/AIX/Linux/etc.), you need a hardware RAID controller if you don't want your data to die.
Make sure you have not just one backup, but multiple, and test the image on a single drive of equal or greater size before you destroy the RAID configuration. Sysadmins are backwoods survivalist right-wing extremists: they follow the rule of 3s, because you can't program your way out of a hardware failure. I wish you the best. You can do it.
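One cheap sanity check before you wipe anything is to checksum the image file and its offsite copy and confirm they match. A minimal sketch, assuming the backup is an ordinary file on disk; the file names below are made up for the example:

```python
# Minimal sketch: verify a backup image copy is intact *before* you
# destroy the array. File names here are hypothetical examples.

import hashlib

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    """Stream a file through SHA-256 in 1 MiB chunks (huge images won't fit in RAM)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Hypothetical usage -- compare the original image against the copy:
# original = sha256_of("server_backup.mrimg")
# copy     = sha256_of("E:/offsite/server_backup.mrimg")
# assert original == copy, "backup copy is corrupt -- do NOT wipe the RAID"
```

This only proves the copy matches the original file; actually restoring the image to a spare drive, as suggested above, is still the real test.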
And looking at your other posts, it looks like you got the 4 drive software raid to work. So, congrats.