6 Replies Latest reply on Sep 2, 2015 8:26 PM by krasoft

    Intel RST: Unable to change cache mode for RAID 5 array

    ShaRose

      This is sort of an odd issue: I actually had it partially working at one point, but it seems that disabling the cache as a test disabled it for good, even after reboots, driver updates, and recreating the arrays. Let's start with my specifications...

       

      ASUS Maximus VII Hero BIOS 2012 (RAID mode)

      Intel Core i7-4790K @ 4 GHz (slight OC to 4.32 GHz)

      16GB RAM

       

      Windows 8.1 Pro

      Intel RST RAID Drivers 13.1.0.1058 (from ASUS website), and later

      Intel RST RAID Drivers 13.6.0.1002 (from Intel download center)

       

      Boot disk: Samsung SSD 850 PRO 256GB

      The drives for the intended array:

      2x Toshiba DT01ACA300 (3TB)

      3x Seagate ST3000DM001 (3TB)

       

       

      Well, I wanted to set up a RAID 5 array with those 5 drives as a storage array. The problem is, when I first created the array I just went with the default 128k stripe, which resulted in abysmal write speeds of 25 megabytes per second at most; that tends to be an issue when restoring a 7.6 TB backup. So I messed around with the cache settings: I went through each one, but nothing changed the write speed at all, even turning the cache off completely. I then looked around and found that the default 128k stripe for RAID 5 has known terrible write performance (which raises the question of why it's the default). Anyway, I deleted the volume, started again with a 64k stripe (seemingly the best for performance), and found I could no longer turn the cache back on. Uh oh.
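      As a rough aside (my own back-of-the-envelope numbers, not anything from Intel's documentation): the reason stripe size matters so much here is the full-stripe size. With RAID 5, one strip per stripe holds parity, so any write smaller than a full stripe forces the controller to read back old data and parity before it can write new parity, which is exactly the kind of overhead a write-back cache is supposed to hide.

```python
# Back-of-the-envelope sketch (assumed numbers for this 5-drive array):
# a full RAID 5 stripe carries (drives - 1) data strips plus one parity strip,
# so writes smaller than the full-stripe size pay a read-modify-write penalty.
drives = 5
for strip_kib in (128, 64):
    full_stripe_kib = (drives - 1) * strip_kib
    print(f"{strip_kib}K stripe -> full-stripe write = {full_stripe_kib} KiB")

# Prints:
# 128K stripe -> full-stripe write = 512 KiB
# 64K stripe -> full-stripe write = 256 KiB
```

      So with the 128k default, anything under half a megabyte per write eats the parity penalty; the smaller stripe at least shrinks that window.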

       

      To try to fix it, I rebooted, which didn't change anything. Note that the array is still initializing, but it's comically slow even by initialization standards: approximately 1% every 30 minutes, or roughly 50 hours for the whole thing, which I'm not waiting for. Also note that with the cache disabled I'm still getting that 25 MB/s write speed, which I shouldn't be getting even WITH the initialization running. I tried rebooting into the RAID settings screen during boot: no option for it there either.

       

      Let's try updating the drivers, since I haven't touched them since installing Windows (I wasn't using the RAID functionality until now). Good, a few versions have passed. Install the update, restart, create a new RAID 5 array... and I still can't enable the cache. Lovely.

       

      After messing around, I found that if I start with a RAID 0 array, I can change it into a RAID 5 array: awesome! I made a RAID 0 array (with 4 of the 5 drives, 64k stripe), and look: I can change the cache type! I set the cache mode to write-back, which is apparently the fastest, and ran a small test: copy a 25 gigabyte file onto the array and then back off again to measure the write and read speeds, with the other end being the SSD. Write: 475-485 MB/s. Read: 470-485 MB/s, dropping over time. Fair enough.
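      In case anyone wants to reproduce the copy test without shuffling files around by hand, here's a rough sketch of the kind of sequential write/read check I'm doing. The drive letter and sizes are just placeholders, and the read number only means anything if the test file is bigger than RAM (otherwise Windows serves it straight from the file cache).

```python
# Rough sequential-throughput sketch (placeholder path and size, not a real benchmark tool).
import os
import time

TEST_FILE = r"E:\bench.tmp"     # some path on the RAID volume (placeholder)
SIZE_BYTES = 25 * 1024**3       # ~25 GB, like the file-copy test above
CHUNK = 64 * 1024**2            # 64 MiB per write/read call

def bench_write():
    buf = os.urandom(CHUNK)
    written = 0
    start = time.time()
    with open(TEST_FILE, "wb") as f:
        while written < SIZE_BYTES:
            f.write(buf)
            written += len(buf)
        f.flush()
        os.fsync(f.fileno())    # make sure the data actually reaches the array
    return written / (time.time() - start)

def bench_read():
    read = 0
    start = time.time()
    with open(TEST_FILE, "rb") as f:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            read += len(chunk)
    return read / (time.time() - start)

if __name__ == "__main__":
    print(f"write: {bench_write() / 1024**2:.0f} MB/s")
    print(f"read:  {bench_read() / 1024**2:.0f} MB/s")
    os.remove(TEST_FILE)
```

      It's nothing rigorous, just enough to compare cache modes on the same array.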

       

      Now, delete the NTFS volume and convert to RAID 5, adding that last disk. Once again the default stripe is 128k, so I changed it to 64k to match the stripe from before. It starts migrating data, but uh oh: I still can't change the cache type. It still says it's using write-back, though, so let's try benchmarking it! Create a new NTFS volume and run the same tests again. Write: 100-200 MB/s. It bounces around, but at least it's better than 25 MB/s. Well, let's continue. Read: 20-35 MB/s. I didn't even wait the 20 or so minutes the copy would have taken to finish. I'd consider changing the cache mode, but that's still disabled. It might be this bad because the array is 'migrating', but migration is even slower than initializing: about 1% every 2-3 hours, which works out to well over a week.

       

      I tried re-creating the array and, without touching it (no partition or anything!), waited for it to initialize. That took 2-3 days. Maybe it'll let me enable the cache now? Nope. Is performance any better? Nope.

       

      I give up. Sorry about the style of this post, but as you can tell I've spent a good week and a half debugging this, so I'm starting to get rather annoyed. Here's what it looks like, by the way.

       

      Converting from RAID 0 to RAID 5:

      [screenshot: 2mz0emu.png]

      Before I show the created RAID 5 array, I should note that the option to Enable Write-Back Cache is disabled during creation.

      [screenshot: 2m7dvu9.png]

      And now the Manage tab for the created RAID 5 array.

      [screenshot: 2ibylbp.png]

      I really can't figure out what it is I'm missing here.

        • 1. Re: Intel RST: Unable to change cache mode for RAID 5 array
          joe_intel

          You may want to verify how the write-back cache is set up in Windows* Device Manager.

          [screenshot: Write-cache.JPG]

          I see there are newer BIOS versions for your system; you may ask ASUS* if those versions contain a RAID option ROM update.

          • 2. Re: Intel RST: Unable to change cache mode for RAID 5 array
            ShaRose

            Thanks for the response. Unfortunately, I did verify that Windows write-cache buffer flushing was turned off (both boxes checked). I also asked ASUS Support if there were any RAID option ROM updates in subsequent BIOS updates, but they were unable to verify that information. I tried flashing anyway, but sadly it didn't seem to resolve the problem (at least not when going from RAID 0 to RAID 5). I ran a benchmark, and while write stayed the same (100-200 MB/s), read was somehow even SLOWER than before: now a piddly 7-8 MB/s. I'll have to create another incremental image and take care of a few other housecleaning tasks before I destroy and recreate the array as a RAID 5 from scratch; I decided to run on a RAID 0 with 4 of the 5 drives while I waited for a response. I'll post when I have those results ready; given the read speed I mentioned, the backup might take a while even though it only has to read a few GB of changes. And as before, a few pictures.

             

            [screenshot: fvcyfp.png]

            [screenshot: 29pywwz.png]

            [screenshot: 2uh76go.png]

            I'll post again once I can verify whether a new RAID 5 array still has the issue.

            • 3. Re: Intel RST: Unable to change cache mode for RAID 5 array
              ShaRose

              And sadly, it seems it still won't let me turn the cache on.

               

              [screenshot: 2gsgx0o.png]

              • 4. Re: Intel RST: Unable to change cache mode for RAID 5 array
                ShaRose

                I found something interesting while trying different things. I had uninstalled RST, rebooted, and reinstalled it to see if that helped (it didn't), but along the way I tried creating a RAID 5 array with only 4 of the disks. To my amazement, it did let me change the cache settings! Then I added the last disk, which disabled them again. I also tried a 3-disk array and then adding a disk to it (making 4 disks), but that disabled the cache settings too. I also noticed that, as long as the array doesn't use all 5 disks, RST doesn't require initialization at creation.

                 

                So I created a 4-disk array, set the cache to write-back, and ran a performance test: write was around 480 MB/s and read around 450 MB/s. Nice. Then I hit Initialize, and sure enough it disabled the cache options. That helps narrow things down at least. The remaining problem is that the cache options never come back after an initialize.

                • 5. Re: Intel RST: Unable to change cache mode for RAID 5 array
                  krasoft

                  I faced the same issue. The only difference is that I had 4 HDDs of 2 TB each in my RAID 5, with the write-back cache enabled in RST and good write performance. But after I added 2 more HDDs to the RAID 5, RST disabled the write-back cache and forced an initialization of the new array; you can clearly see this RST behavior on the "Advanced" tab while choosing the HDDs during RAID 5 array creation.

                   

                  Sure enough, after 3 days of initializing and impatient waiting I tested the new array and found that its write performance had dropped from 150-350 MB/s to 5-15 MB/s.

                   

                  A week of trial-and-error dances with various versions of RST did not bring any positive results.

                   

                  Then I decided to try the latest available version of the previous Intel driver I used before: the Intel Matrix Storage Manager (IATA) 8.9.0.1023 (Intel® Download Center), released back in 2009. I replaced the latest RST with the latest available IATA, recreated the RAID 5 array with a 64K stripe and a 64K NTFS cluster size, enabled the write-back cache, and after 3 days of initialization I got my write speed back to 150-350 MB/s! Yes, I lost auto-validation and email notifications, but it is better to have a fast RAID 5 with occasional manual array validation.

                   

                  Well, this old IATA driver reports my 2 new HDDs as having a 3072-byte physical sector size instead of 4096 bytes; I guess such larger physical sector sizes weren't anticipated back then. At least I'm back on track!

                   

                  I know this is only a workaround while I wait for Intel to fix their RST driver (IMHO, I don't believe they will ever fix this in RST if they haven't done so in 5 years).

                   

                  For reference, in IATA I enabled the "Hard Drive Data Cache" on Array_0000 and the "Volume Write-Back Cache" on the RAID 5 volume. I also checked "Turn off Windows write-cache buffer flushing on the device" in the drive's Policies settings.

                   

                  Hope this helps others get their 5- or 6-drive RAID 5 arrays working fast with Intel chipsets.

                  • 6. Re: Intel RST: Unable to change cache mode for RAID 5 array
                    krasoft

                    Finally, Intel Rapid Storage Technology 14.6.0.1029 allows the write-back cache with 6 drives!

                     

                    I was waiting for this release because Intel Matrix Storage Manager 8.9.0.1023 did not work correctly with the LPM feature, even after I disabled it in the registry and in the power management settings.