There is no 2TB limit on your H57 chipset: if this is a storage (non-bootable) array, you can use GPT.
Your initialization is not done yet, and yes, it's normal for a big array to take hours if not days. Having write-back cache enabled from the start may speed that up.
Once it's done, enable write-back cache in Rapid Storage Technology (RST) and see how it performs.
It took 4 days to initialize, and the write-back cache is switched on now.
The result is better now but still not as expected. Read performance is fine, but write performance drops to 19 MB/s depending on the transfer size. This did not happen when I used the single disks without the array configuration, so I still believe something is wrong with the RST driver.
Or is there anything else that must be tuned?
These 2.5" Toshiba disks only spin at 5400 RPM, which doesn't help.
There is a checkbox to turn off Windows write-cache buffer flushing under Device Manager > Disk drives > [your array name] > Properties > Policies that may improve things.
What stripe size and NTFS cluster size (allocation unit size) did you use? You can try reformatting with an 8K (8192-byte) cluster size and see if that helps, or try another stripe size.
Note: if you used MBR rather than GPT (in Disk Management), you cannot add another drive to the array to grow it beyond 2TB.
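For anyone wondering where the 2TB figure comes from: it falls directly out of MBR's 32-bit sector addressing. A quick back-of-the-envelope check (illustrative Python, nothing chipset-specific):

```python
# MBR partition entries store the start LBA and sector count as 32-bit values.
# With classic 512-byte sectors, that caps an MBR volume at roughly 2 TB.
SECTOR_SIZE = 512        # bytes per sector on non-Advanced-Format drives
MAX_SECTORS = 2 ** 32    # largest count a 32-bit LBA field can hold

max_bytes = MAX_SECTORS * SECTOR_SIZE
print(max_bytes)                 # 2199023255552
print(max_bytes / 10 ** 12)      # ~2.2 decimal TB, the familiar "2TB limit"
```

GPT uses 64-bit LBAs, which is why the limit disappears there.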
I can second that the write performance of Intel RST software RAID5 seems really slow.
I have an ASUS P67 board with 16GB of memory and an i7 2600K, plus six 2TB Hitachi 7K3000 drives in a RAID5 array, running Win7 64-bit and the latest RST drivers.
Installation was a breeze compared to some other implementations, and the RST interface works really well; however, the performance was a disappointment.
The write cache is enabled in RST for my volume. I am seeing read speeds of around 350-400 MB/s, but write speed is showing as 60 MB/s?!
That is incredibly low, even less than the rated write speed of a single drive?!
In my application I am dependent on both write and read performance.
Is Intel throttling this software solution? Why is it this slow? I have previously had hardware XOR-accelerated RAID cards, and they were a lot faster. A modern CPU should easily handle the parity calculations, especially compared to a five-year-old RAID card with 256MB of memory on it.
I feel disappointed and cheated at the moment. What is it that is slowing down the RAID?
Maybe someone at Intel could give some hints and pointers to improve performance? Maybe a setting to increase the write cache?
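To put the "a modern CPU can handle parity" point on some footing: RAID5 parity is a plain XOR across the stripe's data blocks, and even unoptimized Python gets through it quickly on one core. This is purely an illustration of the arithmetic, not the RST driver's actual code path:

```python
# Illustrative only: RAID5 parity P = D0 xor D1 (xor D2 ...) per stripe.
import time

CHUNK = 1 << 20                      # 1 MiB per member, one stripe chunk
a = bytes(CHUNK)                     # data block of zeros
b = bytes(range(256)) * (CHUNK // 256)

def xor_parity(x: bytes, y: bytes) -> bytes:
    # XOR two equal-length blocks via Python big integers.
    return (int.from_bytes(x, "little") ^
            int.from_bytes(y, "little")).to_bytes(CHUNK, "little")

start = time.perf_counter()
for _ in range(100):                 # 100 MiB worth of parity
    p = xor_parity(a, b)
elapsed = time.perf_counter() - start
print(f"{100 / elapsed:.0f} MiB/s of parity on one core, in pure Python")
```

A driver doing this with SSE in C would be orders of magnitude faster still, which is why low CPU usage during slow writes suggests the bottleneck is elsewhere (caching or I/O scheduling, not parity math).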
I want to chime in too. I have a nice stable build with an ASUS P67 Sabertooth (B3), an Intel 2600K with 8GB RAM, a Crucial C300 main drive, and three Hitachi 5K3000 drives in RAID5 at default settings. All hard drives are on the Intel controller with SATA III cables.

In IOmeter I am seeing horrible results for sequential 256K writes: 7.4 MB/s writes and 205 MB/s reads (about what I would expect). These results were with the Intel Rapid Storage Technology driver software v10.1.0.1008. I didn't save the results with the older CD drivers, but they were more on the order of 110 MB/s read and 60 MB/s write, radically different before the update/reboot. Something is definitely not correct with the write speeds. My array is a bulk media store, and boy does it take forever to move files to it! I am using 661GB of 3.63TB and growing every day!

I do want to say that the Crucial C300 is cooking, though: 329 MB/s read and 143 MB/s write on FW006. With a GTX560 and CUDA's help, a 2-hour DVD H.264 transcode finishes in 6 minutes! I plan to test the drives further, but I have no disk/array errors. Is this in any way related to the recalled chipsets? Are the drivers really conservative now?
This is probably not a particularly useful reply since I'm not using RAID5 on my system, but I wanted to point out that I've always avoided RAID5 on Intel controllers, specifically due to dismal write performance. "FakeRAID" such as this is OK for striping and mirroring, but you really still need dedicated cards for parity-based RAID (whether that makes sense or not).
On my current system I'm running 4 x 2TB consumer 7200RPM drives as RAID 10 with a small RAID 0 portion at the beginning as a scratch disk, on an ICH10R.
I have yet to see any official response to anything on these forums, especially regarding Rapid Storage Technology.
A lot of the documentation on the Rapid Storage Technology site is outdated and ambiguous. I am used to installing and setting up RAID systems and recently installed a RAID5 system with six disks on a brand-new P67 motherboard. Installation and setup were very easy, and actually better than with many other RAID competitors (3ware, LSI, Adaptec, etc.). However, you cannot specify the amount of write cache or its policy; you can only turn it on or off. The more professional systems have a lot more settings for optimizing performance, but I am fine with Intel keeping it a bit simpler.
The website says only 4 drives are supported for RAID10 (the reason I went for RAID5), but some users have gotten it to work with 6 drives. I wonder whether that is supported, and whether it is possible to migrate from a 6-drive RAID5 to a 6-drive RAID10? I know the website does not state that, but that information is generally pretty outdated.
My biggest gripe by far concerning Intel Rapid Storage Technology is the abysmal write performance on RAID5!
I am aware of the alignment issues that can affect performance and hence use 512-byte-sector drives (2TB Hitachi 7K3000). Write cache is enabled, etc.
With this setup I am currently getting around 450-490 MB/s read speed, which is great; however, my write speed is around 30-40 MB/s?!
I have 16GB of RAM and a 2600K running at 4.5 GHz, and I would assume that this could perform the parity calculations needed for writing?
When performing write tests the CPU never goes above 10% so it is not lack of computing power.
This is about a third of the write speed of a single hard drive?! Searching this forum and looking at some other tests on the internet, other users seem to be getting between 5 and 80 MB/s write speed (depending on the number and type of drives).
This is totally subpar and not at all acceptable. How can Intel boast about all the advantages of Rapid Storage Technology with this being the reality of RAID5 performance?!
Nowhere on the site are there any performance figures quoted, the reason for this is quite obvious.
Also, apparently the RAID10 implementation is inferior too (due to not optimizing how reads and writes are distributed across the striped drives within each set).
Most people respond to these posts by saying you should get a hardware RAID card. I have done that several times before, but was curious after reading all the marketing material on the website here. If Intel cannot deliver acceptable RAID5 write performance, it should state that on the website so people do not have to waste time installing this crap and being disappointed. This really drags down my impression of Intel as a serious actor in the storage business. Be honest and clear, Intel; people will always appreciate that. Look, for instance, at what OCZ is doing with their SSD drives. My company will buy OCZ SSDs from now on. State on the website that RAID5 write performance will be really low, and that if write performance is needed, Intel Rapid Storage is not suitable.
Read speed seems to be on par with what I see from my high-end controllers, so if you are mostly constrained by read speed, Rapid Storage is great.
Glad to hear the big reviewers are looking into this. I spent a ton of hard earned money on this build and have little time to get to the bottom of this myself.
This RAID5 array was my biggest concern with this build, but I was expecting more problems with consumer HDDs dropping out of the array; I wasn't expecting severe performance problems. If it weren't for the well-performing SSD main drive, this build would be nearly useless for video editing/transcoding. Anyone seeing any correlation with display drivers? Some other people were seeing problems after adding drivers for both on-die and external video cards. This is a bit subjective, but with the Windows drag-and-drop dialog I'm seeing some excellent burst speeds on transfers under 1GB; if I copy one or several GB+ files, the transfer rates appear to drop off severely. I know any second-gen i-Series should be able to compute parity with ease. Is there some cache assumption being made by the drivers that is crippling our arrays? Intel? Is there a human or cyborg looking into this for us somewhere?
I am seeing the same issue as well. I had a 3-disk (Samsung Spinpoint F4) RAID5 on an H67 chipset, and the write speed was about 80-90 Mbps, which is ridiculous for RAID5. I know I do not have an $800 RAID card, but my Core i3 2100 CPU should easily be able to handle parity calculations. When transferring large files, CPU usage is a measly 0-1%, so it is not the CPU's fault that it is so slow. I also added another disk (same make/model) to the array, and Intel RST is in the "Migrating Data" stage. It is going at a rate of about 0.1% every 2-3 hours, which I do not understand either. Again, my CPU is running at 0% during this. Why can't the Intel driver realize you are not doing anything else on the computer and use more CPU cycles to add the drive to the array? Hopefully this is a simple driver issue and can be resolved soon.
Will Intel release a statement and let us know they are at least looking into this? If I had known this was going to be a problem from the beginning, I would have just bought a HighPoint RAID card.
The saga continues... So I upgraded my BIOS last night and made the unfortunate mistake of letting Windows boot before turning RAID back on in the BIOS. I lost two weeks' worth of work and several months' worth of digital pictures after two of the three disks showed up as non-member disks. However, this afforded me the ability to play with my RAID5 array a bit.
I started playing with stripe and allocation sizes and have some interesting news. I decided to use a 128K stripe and a 32K allocation unit size, and found a huge increase in write performance and a noticeable increase in read performance. I now get reads of over 260 MB/s and writes of 205 MB/s. That is a huge increase from 7-9 MB/s writes. Would this suggest severe alignment issues with the default 64K stripe and Windows' default allocation unit size (4K, right?)?
Very interesting results! There are some sites that have tested different RAID stripe sizes, but for the most part the conclusion seems to be that it does not make as big a difference as you are seeing. Yours is about a 20x increase?! I have read that one of the problems with RAID5 implementations is that the controller needs the data adjacent to the data being written in order to calculate parity; if the controller (software or hardware) is not smart, it can incur multiple read requests for every write and hence slow down write speed.
What program are you using for testing it?
I have a 4K allocation unit and a stripe size of 128K. Are your drives Advanced Format drives (4096-byte sector size)?
Hmm, I wonder what happens if you set the allocation size in Windows equal to the stripe size?
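The read-requests-per-write point above can be made concrete with a toy cost model (my own simplification for illustration, not Intel's algorithm): a write that covers a whole stripe needs no reads at all, while a partial-stripe write forces a read-modify-write cycle to recompute parity.

```python
# Toy RAID5 write-cost model (illustrative only, not the RST implementation).
# Full-stripe writes: parity computed from the new data alone, zero reads.
# Partial-stripe writes: read old data + old parity, write new data + parity.

def raid5_ops(write_kb: int, stripe_kb: int, data_disks: int) -> dict:
    full_stripe_kb = stripe_kb * data_disks      # data capacity of one stripe
    if write_kb % full_stripe_kb == 0:
        stripes = write_kb // full_stripe_kb
        return {"reads": 0, "writes": stripes * (data_disks + 1)}
    # Simplest partial-stripe case: one chunk's read-modify-write
    return {"reads": 2, "writes": 2}

# 3-disk RAID5 (2 data + 1 parity), 128K stripe: a 256K write fills one stripe
print(raid5_ops(256, 128, 2))   # {'reads': 0, 'writes': 3}
# A 4K write (the NTFS default cluster) triggers read-modify-write
print(raid5_ops(4, 128, 2))     # {'reads': 2, 'writes': 2}
```

If the filesystem's clusters (and the driver's flushes) line up with full stripes, the expensive path is mostly avoided, which would be consistent with the 20x jump reported above.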
I used IOmeter to test my array. I used the following settings:
Maximum Disk Size = 2048000 sectors
Transfer Request Size = 256 Kilobytes
Percent Random/Sequential Distribution = 100% Sequential
Percent Read/Write Distribution = 100% Write
I can post a CSV of the results if you want.
I didn't happen to test equal stripe and allocation sizes; after getting the performance I was expecting, I copied my data back to the array. The disks I'm using are Hitachi 5K3000s, which still have 512-byte sectors. Before my data catastrophe I was using the default 64K stripe and 4K allocation unit. I've normally not had any problems with the default settings, but in this case it made all the difference. My array is media storage (pictures, MP3s, H.264 DVD rips), so the larger block sizes are probably useful anyhow. Anyone else have the ability/desire to test equal stripe and allocation sizes?
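For anyone without IOmeter who wants to sanity-check their own array, a crude sequential-write timer in Python follows roughly the same pattern as the settings above (256K requests, 100% sequential writes). The path and total size here are placeholders, and this goes through the filesystem rather than raw sectors, so treat the numbers as a rough comparison only:

```python
import os
import time
import tempfile

REQUEST_SIZE = 256 * 1024          # 256 KB transfers, matching the IOmeter run
TOTAL_BYTES = 64 * 1024 * 1024     # small total for a quick check; scale up as needed

def sequential_write_mbs(path: str) -> float:
    """Time an unbuffered sequential write and return MB/s."""
    buf = os.urandom(REQUEST_SIZE)
    start = time.perf_counter()
    with open(path, "wb", buffering=0) as f:
        for _ in range(TOTAL_BYTES // REQUEST_SIZE):
            f.write(buf)
        os.fsync(f.fileno())       # force data to disk, not just the page cache
    return (TOTAL_BYTES / (1024 * 1024)) / (time.perf_counter() - start)

# Point the path at the array you want to test; a temp dir is used here.
with tempfile.TemporaryDirectory() as d:
    print(f"{sequential_write_mbs(os.path.join(d, 'bench.bin')):.1f} MB/s")
```

Without the `os.fsync`, Windows' cache would absorb a 64MB test entirely and report wildly optimistic figures.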
I'll chime in as well with my input. I'll preface my post with a quick bulleted summary before delving into details:
- Small Windows storage server, based on IRST RAID5.
- Optimizing RAID volume parameters can improve write throughput about 10x, but only for relatively small (<0.5GB) writes.
- Even with optimized RAID volume parameters, sustained throughput for large (>0.5GB) sequential writes is horrible (can get as low as 4.5MB/s), with poor multi-I/O performance.
A few months ago I set up a small server based on Win2008R2 x64 and a Gigabyte H57M-USB3 motherboard with a Core i3 @ 2.93GHz and 2GB RAM. To keep energy consumption and costs low, I decided to use Intel's RST with RAID5. Obviously the system's CPU ought to be sufficient for parity calculations, especially since, as a storage server, the system has no other computation-intensive tasks.
Storage is 3 HDDs, each a 2TB Seagate ST32000542AS (not AF, i.e. 512-byte sectors), set up as 2 RAID5 volumes: a small 100GB volume for boot, with the rest of the space allocated to a data storage volume.
When I was initially setting up the system, I tested its performance with HD Tune Pro 4.6. I noticed write performance was bad, so I tried various combinations of the RAID volume settings I can control, which are limited to stripe size and write cache. Enabling write cache had the most perceivable effect (about 6x), but stripe size optimization also brought a respectable improvement (up to around 54%). My conclusion was that the RAID volume should be set up with a 64KB or 128KB stripe size and have write cache enabled.
So after finding the optimal settings for the array, I went ahead, finalized the installation, and started using the system for storage as I intended. However, I was frustrated to find that while write performance was reasonable for relatively small transfers, big transfers (i.e. over 0.5GB) were horrible.
I do get an initial throughput of about 50 MB/s, but after several seconds it starts dropping steadily, and it can get as low as 4.5 MB/s for really big files (of several GBs).
Even worse than this abysmal throughput: when such a transfer is taking place, the server is pretty much unusable for any other purpose. Simultaneous access by other processes or users is slow and unresponsive while a big file is being written. You can't even play an MP3 from the server while another user is writing a big file.
I used Windows' Resource Monitor (perfmon) to monitor the system during such transfers. CPU usage is always very low (less than 10%, with perfmon itself using 6-7%). The "Disk Activity" tab reveals something interesting: the "disk response time" column shows that the response time for the file being written starts at 3 seconds, then rises steadily and can get as high as 13 seconds. Such a figure for a local HDD system is, I feel, an indication of a serious problem.
Some (probably) insignificant information: the system was originally installed with the RTM release (no service pack), and I recently upgraded to SP1, with no perceivable performance difference (neither positive nor negative).
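The steady decline can also be charted without perfmon: a simple sequential writer that logs throughput per interval will show the falling curve if the problem is present. This is an illustrative sketch; the path, file size, and interval are all placeholders:

```python
import os
import time
import tempfile

CHUNK = 1 << 20        # write in 1 MiB chunks
INTERVAL = 0.5         # report throughput every half second

def write_with_progress(path: str, total_mib: int) -> list:
    """Write total_mib MiB sequentially; return MB/s measured per interval."""
    buf = os.urandom(CHUNK)
    rates, window_bytes = [], 0
    window_start = time.perf_counter()
    with open(path, "wb", buffering=0) as f:
        for _ in range(total_mib):
            f.write(buf)
            window_bytes += CHUNK
            now = time.perf_counter()
            if now - window_start >= INTERVAL:
                rates.append(window_bytes / (1 << 20) / (now - window_start))
                window_start, window_bytes = now, 0
        os.fsync(f.fileno())
    return rates       # a steadily falling tail matches the behavior described

with tempfile.TemporaryDirectory() as d:
    for i, r in enumerate(write_with_progress(os.path.join(d, "big.bin"), 64)):
        print(f"interval {i}: {r:.1f} MB/s")
```

Running this against the array with a multi-GB total should make the 50 MB/s to 4.5 MB/s slide visible interval by interval, which is easier to attach to a bug report than a subjective description.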