11 Replies · Latest reply on Mar 24, 2011 5:04 PM

    Slow RAID5 Write on ICH10R

    tawney

      Hi All,

       

      I have a GA-EP45-DS3P motherboard ( http://www.gigabyte.com/products/product-page.aspx?pid=2841#sp ) with an ICH10R RAID controller. This RAID controller runs a RAID 5 array of 4x 1TB 7200 RPM SATA HDDs with the default stripe size.

       

      The issue I have is that it seems unusually slow. I understand that there is clearly a limitation here, as the RAID controller has no built-in cache, memory, processor or BBU - but I would have thought it would manage at least the write speed of a single SATA disk (or close to it).

       

      The read speeds are fine ~100MB/sec

      The write speed is ~5 to 20MB/sec

       

      I am running Windows 7 x64 and I have tried both the Intel 8.9 and 9.6 Driver/Software Package.


      I have done a BIOS update of the motherboard to the latest stable revision. Is there a firmware update for the ICH10R that I can perform? Could anybody provide some advice on how to resolve the seemingly slow write speed?

       

      Thanks in advance

       

      Tawney

        • 1. Re: Slow RAID5 Write on ICH10R
          PeterUK

          How are you testing the speed? Use CrystalDiskMark.

          http://crystalmark.info/software/CrystalDiskMark/index-e.html

           

           Also try enabling write-back cache in RST, and in Device Manager > Disk drives > your array > Properties > Policies tab, check “Turn off Windows write-cache buffer flushing on the device”.

          • 2. Re: Slow RAID5 Write on ICH10R
            tawney

            Hi PeterUK,

             

             Hmmm. I will try that. I did enable Write Back Cache in the Intel software, but I didn't get a chance to test it before a disk failed. In fact, I think the disk only failed after I enabled Write Back Cache.

             

             Anyway, I then did a lot of research, and it seems some people reported that the 8.9 Windows 7 x64 driver had a bug where it would randomly drop drives for no reason. So I upgraded to 9.6 and rebuilt the array.

             

             Then it was still slow, so I enabled Write Back Cache again and it dropped a drive...

             

             So I am going to buy another disk this weekend and try it; I just wanted to know if anybody had some advice... meanwhile I have bought a 6-port PCI RAID adapter with 64 MB of cache...

             

             Oh well, I guess when I get the new disk I will try your advice, see if it yields any results, and perhaps help somebody else out in the future.

             

            Thanks

             

            Tawney

            • 3. Re: Slow RAID5 Write on ICH10R

               I've had the exact same problem with RAID 5 on this controller, except that my read speeds were also pretty slow. My writes were about 5 MB/s and reads were about 14-20 MB/s. I have six 7200 RPM Hitachi Travelstars, but these are not the Enhanced Availability drives, which may work better in a RAID situation.

               

              I had to switch these drives to RAID 1 and a RAID 10 set. They seem to work a lot better now. The write speeds are above 40 MB/s and the reads are now above 100 MB/s for the RAID 10 set.

              • 4. Re: Slow RAID5 Write on ICH10R
                mechbob

                 Part of your problem is that the drives you are talking about are not RAID-specific. WD makes what they call RE-3 and RE-4 drives, which are made to be used in RAID. They also have dual processors and 32 MB of cache.

                • 5. Re: Slow RAID5 Write on ICH10R

                   That's only partially true. RE, ES and other enterprise drives have low TLER/CCTL settings, which are useful for a *true* hardware RAID controller. Software RAID controllers are usually more tolerant of the deep recovery cycles that occur on consumer versions of HDDs. Consumer drives are, hardware-wise, the same drive as the RAID Edition versions; the differences are in the firmware. A WD Black drive has the same "dual processor and 32 MB cache" as an RE drive. Up until recently, you could take a WD drive, use the (now infamous) wdtler utility, and change the recovery cycle timeout to a low value acceptable to RAID controllers, and it would work quite well in any RAID environment. WD saw their RE sales getting cannibalized (of course, why would anyone *want* to pay double for a change in firmware?) and altered the firmware on more recently manufactured drives to make the wdtler utility useless. There has been some headway in getting other consumer drives to accept a more suitable CCTL/TLER setting, but the change doesn't survive a power cycle, so it requires a custom boot setup to alter the CCTL/TLER setting prior to OS boot.

                   

                   While a TLER issue may be causing you to drop drives from the array, that is not, however, the cause of your slow writes.

                   

                  Pulled from: http://www.linuxquestions.org/questions/linux-software-2/raid-5-with-even-number-of-drives-gives-bad-write-performance-why-840866/

                  neonsignal wrote:

                   

                   The performance on writes to RAID 5 is bottlenecked by the write to the parity.

                   How it behaves with different numbers of drives will depend on the write block size. With large blocks (equivalent to a multiple of a stripe across all the drives), you will hit all the drives equally with each write. But if the stripe doesn't divide into the block size, then for each write some of the drives will be written less than others, which will decrease performance.

                   So for large write blocks, drive numbers such as 3, 5, 9, 17, etc (2^n+1) will work better than others, because the block size will be a multiple of the sum across the stripe.

                   For small write blocks (eg writing to only a single drive + parity), the number of drives will not matter. This will not improve best case performance for sequential writes (since the parity is still the bottleneck), but will help for random write access patterns (where the parity disk will often differ between writes).
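
                   To make that concrete, here is a rough back-of-the-envelope sketch (mine, not from neonsignal, and it assumes a 128 KB strip size purely for illustration) of how many data drives a single write touches and whether it lands on whole stripes:

# Rough illustration of the quote above: a write only avoids the RAID 5
# read-modify-write penalty when it covers whole stripes. The strip size
# below is an assumption for the example, not a measured ICH10R value.
STRIP_KB = 128

def write_footprint(block_kb, total_drives):
    """Return (data drives touched, whether the write is full-stripe aligned)."""
    data_drives = total_drives - 1            # one strip per stripe holds parity
    full_stripe_kb = data_drives * STRIP_KB   # data carried by one full stripe
    full_stripes, leftover_kb = divmod(block_kb, full_stripe_kb)
    touched = data_drives if full_stripes else 0
    if leftover_kb:
        touched = max(touched, -(-leftover_kb // STRIP_KB))  # ceiling division
    return touched, leftover_kb == 0

for drives in (3, 4, 5):                      # 2^n+1 drive counts align, 4 does not
    touched, aligned = write_footprint(1024, drives)   # a 1 MB write
    print(f"{drives} drives: {touched} data drive(s) touched, full-stripe aligned: {aligned}")

                   With 4 drives the 1 MB write always leaves a partial stripe, which is exactly the case that forces the extra parity read-modify-write.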

                   

                   Now, the reason I'm here: I have a 5x 1TB ICH10R array. I've experimented with every available combination of stripe size & NTFS cluster size (literally - I benched it 20 times). My writes at best are an abysmal ~31 MB/s. Reads with the right combo are as high as 500 MB/s. I've always had write-back cache enabled, and cannot find a cause for this problem. You could dismiss it as just poor ICH10R performance, but I've seen ICH10R benches using a similar setup to mine getting 80 MB/s writes.

                  • 6. Re: Slow RAID5 Write on ICH10R
                    PeterUK

                     Now, the reason I'm here: I have a 5x 1TB ICH10R array. I've experimented with every available combination of stripe size & NTFS cluster size (literally - I benched it 20 times). My writes at best are an abysmal ~31 MB/s. Reads with the right combo are as high as 500 MB/s. I've always had write-back cache enabled, and cannot find a cause for this problem. You could dismiss it as just poor ICH10R performance, but I've seen ICH10R benches using a similar setup to mine getting 80 MB/s writes.

                     It would help if you posted which benchmarking tool you used.

                     

                     Are you using RST? And have you initialized the volume? You only need to initialize once, and you can do it with data on the array (yes, I have done it a number of times and know it's safe): open RST, click your RAID name > Advanced, and there you can see whether it is initialized or not; clicking Verify will start it, because you can't run Verify unless you have initialized the volume. For you, the initialization could take days!

                    http://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&ProdId=2101&DwnldID=18859&ProductFamily=Chipsets&ProductLine=Chipset+Software&ProductProduct=Intel%c2%ae+Rapid+Storage+Technology+(Intel%c2%ae+RST)&lang=eng

                     

                     Could one of your drives be bad? If you're not using the array you could run a full write check on them.

                     

                     Running CrystalDiskMark with a 500 MB test file, this is how my writes compare, should you run CrystalDiskMark yourself.

                     

                     3 x 320GB Seagate 7200.10 in RAID 5, 64KB strip size, on a P55

                     

                    -----------------------------------------------------------------------

                    CrystalDiskMark 3.0 x64 (C) 2007-2010 hiyohiyo

                    Crystal Dew World : http://crystalmark.info/

                    -----------------------------------------------------------------------

                    * MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]

                     

                    Sequential Read : 144.831 MB/s

                    Sequential Write : 104.274 MB/s

                    Random Read 512KB : 36.700 MB/s

                    Random Write 512KB : 8.218 MB/s

                    Random Read 4KB (QD=1) : 0.545 MB/s [ 133.0 IOPS]

                    Random Write 4KB (QD=1) : 0.643 MB/s [ 156.9 IOPS]

                    Random Read 4KB (QD=32) : 2.113 MB/s [ 515.8 IOPS]

                    Random Write 4KB (QD=32) : 0.930 MB/s [ 227.2 IOPS]

                    Test : 500 MB [D: 44.5% (265.4/596.2 GB)] (x1)

                     

                     And this is with about 50% of the space used during testing; the more you fill your array on HDDs, the slower it gets.

                    • 7. Re: Slow RAID5 Write on ICH10R
                      DarkKnight

                       I was using ATTO to bench, then TeraCopy with an 8 GB ISO to test real-world performance. Both correlated closely with each other, and the best I could achieve was a 50 MB/s write speed.

                       

                       I am using RST to set up the array, and I waited for the initialization to finish before formatting and benching every time. I only set up the array at 1% of the total available capacity for testing purposes, roughly ~45 GB, so the initialization is quick. When I found the combo that worked best, I rebuilt the array at the full size, waited the 16 hours, then benched again - same result, 50 MB/s writes.

                       

                       Yesterday, I broke the array and benched each drive individually. There was some variation in the results on repeat tests of the same drive, but all were above 110 MB/s, and all benched at 145 MB/s at least once. Frankly, if I could get my array to write at 80-100 MB/s, I'd be happy; that's as fast as I can serve it data anyway. Today I set up a RAID 0 array and benched it: 500 MB/s writes, 600 MB/s reads! I hardly think that's indicative of drive or cable problems, but I won't use RAID 0 for an array this size - it doesn't suit my needs.

                       

                       There are no problems with the drives themselves that I can find in my testing. So far, the only common factor in the poor performance is the IRST driver.

                      • 8. Re: Slow RAID5 Write on ICH10R
                        mechbob

                         Do your drives in this RAID 5 have any partitions?

                        • 9. Re: Slow RAID5 Write on ICH10R
                          PeterUK

                          Try the following setting:

                           

                           In Device Manager > Disk drives > your array > Properties > Policies tab, check "Turn off Windows write-cache buffer flushing on the device".

                           

                           If you want, you can also try RST 10.0.0.1046, found here - it's what I'm using now:

                          http://www.station-drivers.com/page/intel%20raid.htm

                          • 10. Re: Slow RAID5 Write on ICH10R
                            mechbob

                             Have you tried running the Paragon partition alignment tool?

                            • 11. Re: Slow RAID5 Write on ICH10R

                              I thought I would reply with an update (or some sort of feedback).

                               

                               As an IT engineer, at work we deal with high-end RAID controllers with memory, cache, BBUs and the works, using SAS, SATA or SCSI, normally on a pretty decent backplane. There, RAID 5 (or even 6) generally has its merits and speed isn't a concern. Being able to use write-back caching (thanks to the BBU) is a good boost on top of that.

                               

                               On my home media computer from the original post, the ICH10R really has no cache, memory or BBU, and generally does write-through. That said, in a RAID 5 scenario SOMETHING has to calculate parity. It is my belief, based on some more recent technical experience, that the calculation of parity (on write) is one of the largest limiting factors, because the RAID controller itself doesn't calculate parity - something has to, and in practice the host's memory and processor do it. (To see this, run any sort of CPU benchmarking/monitoring software, or even Task Manager: during extremely large writes the processor is heavily utilised, and NOT by any visible process and NOT by the kernel - the RAID driver is using the CPU for the calculations.)
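
                               For anyone curious what that CPU work actually is: RAID 5 parity is just a byte-wise XOR across the data strips of each stripe. A minimal sketch (mine, purely illustrative - the strip size and drive count are assumptions, and real drivers use vectorised XOR rather than a Python loop):

# RAID 5 parity for one stripe: XOR the data strips together. On the ICH10R
# there is no RAID processor, so this work lands on the host CPU.
import os

STRIP_BYTES = 128 * 1024      # assumed strip size, for illustration only
DATA_DRIVES = 3               # 4-drive RAID 5 = 3 data strips + 1 parity strip

def parity_strip(data_strips):
    """Byte-wise XOR of all data strips; this is the per-stripe write cost."""
    parity = bytearray(len(data_strips[0]))
    for strip in data_strips:
        for i, byte in enumerate(strip):
            parity[i] ^= byte
    return bytes(parity)

stripe = [os.urandom(STRIP_BYTES) for _ in range(DATA_DRIVES)]
print(len(parity_strip(stripe)), "parity bytes computed in software for one stripe")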

                               

                               So, because RAID 1 doesn't calculate parity (it just mirrors the information, which is much less computationally intensive than parity calculations), I decided to go with RAID 10. This allowed me to achieve all my goals, which were:

                               

                               a) Protect against disk failure (primary concern)

                               b) Keep write speed as high as possible on a cheap solution (fast large writes are essential on a media computer)

                               c) Use the array to increase usable disk space and cater for additional growth.

                               

                               So in my experience, unless you have the right hardware to SUPPORT the onboard RAID controller in calculating parity, you simply have to live with RAID 5's slow writes, or go with RAID 1 or RAID 10 (or RAID 0 if you don't care about disk failure and just want speed). The rough comparison below shows why RAID 10 was the sensible middle ground for me.
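
                               Here is some back-of-the-envelope arithmetic (illustrative only, not a benchmark of this controller) for the 4x 1TB case:

# Usable capacity and small-write penalty (physical disk I/Os per logical
# write) for the layouts discussed above, with 4 x 1 TB drives.
DRIVES, SIZE_TB = 4, 1.0

layouts = {
    "RAID 0":  (DRIVES * SIZE_TB,       1),  # no redundancy
    "RAID 10": (DRIVES / 2 * SIZE_TB,   2),  # write both halves of each mirror
    "RAID 5":  ((DRIVES - 1) * SIZE_TB, 4),  # read data, read parity, write data, write parity
}

for name, (capacity_tb, write_penalty) in layouts.items():
    print(f"{name:8s} {capacity_tb:.1f} TB usable, {write_penalty} disk I/O(s) per small write")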

                               

                               The ICH10R is a cost-effective way to deliver RAID in a home environment. I would never implement it at work, for a customer, or in any environment where the solution has to meet a business requirement.