I asked this in graphics and chipsets about a week ago but no one answered it. I noticed more RAID-related questions in this topic, so I am reposting here instead of bumping the old one.
Intel RAID Controller: Intel(R) ICH8R/ICH9R/ICH10R/DO/PCH SATA RAID Controller on an Asus P6T mobo running Windows 7 RC1. I have updated the motherboard BIOS to the latest firmware.
I have two volumes defined: one for system disks and one for storage. The system volume is 2 x 640 GB WD drives in RAID1. The storage volume is 4 x 1 TB Seagate Barracuda drives in RAID10. All drives were brand new as of two weeks ago.
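For anyone keeping score, the usable capacity of those two volumes works out as follows. This is just a back-of-the-envelope sketch (it ignores formatting overhead and GB/GiB rounding); the drive sizes and RAID levels are the ones from my setup above:

```python
def usable_capacity_gb(drive_gb, count, level):
    """Rough usable capacity for common simple RAID levels."""
    if level == "RAID1":   # full mirror: capacity of a single drive
        return drive_gb
    if level == "RAID10":  # striped mirrors: half of the total capacity
        return drive_gb * count // 2
    if level == "RAID5":   # one drive's worth of capacity goes to parity
        return drive_gb * (count - 1)
    raise ValueError("unsupported RAID level: " + level)

print(usable_capacity_gb(640, 2, "RAID1"))    # system volume:  640 GB usable
print(usable_capacity_gb(1000, 4, "RAID10"))  # storage volume: 2000 GB usable
```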
The Seagate diagnostic software came as an ISO file used to create a bootable CD with the software on it.
Drive Test Results
OK People, What's The Scoop?
Why would the Intel Matrix Storage Manager continuously and ERRONEOUSLY tell me my drives are failing or failed, bring my system's performance to its knees, and be an absolute pain in the _ss? The drives are good.
Can anyone tell me how to fix this, other than returning the mobo or buying a real RAID card (which I am doing: a 3ware 9550SXU-8LP, but I still want to know what is wrong with my mobo)? BTW, now I know why Linux fanboys refer to cards like the 3ware I am buying as 'real' RAID (it has an actual RISC controller chip and its own 128 MB of onboard memory) and to this Intel RAID as FAKERAID. This experience almost has me joining the Linux fanboy club, as I am very angry about it.
Any help is appreciated.
I have had the same problem.
I had Intel Matrix Storage Manager (IMSM) 6.2 and did not have administrator privileges.
I downloaded IMSM 8.x and a warning came up. I opened IMSM and was then able to double-click the icon in the left pane to see the two drives in the array, with one marked as failing.
I right-clicked the failing drive and chose 'Mark as normal'.
I'm hoping this was a software issue and not a failing drive, but I backed up just in case.
I too just experienced a 'degraded' disk failure in a 3-disk RAID 5 configuration (Western Digital RE3 WD2502ABYS-01B7A0 250 GB drives) which included the boot volume. It worked fine for about 4 or 5 days, and then this.
Intel RAID Controller: Intel(R) ICH8R/ICH9R/ICH10R/DO/PCH SATA RAID Controller on a DX58SO. I too was using 64-bit Windows 7 RC1. The DX58SO (AA# E30149-503) motherboard BIOS is the latest version, 4014. Processor: i7-920, D0 stepping.
I shut down and removed the failing drive, now I'm awaiting an RMA replacement to test further.
Although Intel has released a handful of beta drivers for evaluation with Windows 7 RC1, regretfully their latest BIOS release, version 4014, was not one of them. I suspect the ICH10R Matrix Manager and W7 RC1 may be stepping on each other's toes. Once the replacement disk arrives, I may need to limit my RAID affairs until Intel releases a bona fide Windows 7 BIOS for this chipset.
Since you have an Asus mobo, this may not be the case for you. Anyway, let's hope a few others chime in on this!
The version of Matrix Manager I have (8.9) does not support marking a drive as normal. I have purchased a 3ware RAID card with an onboard RISC chip, so it doesn't use the machine's CPU to manage the arrays. Here's hoping ALL the hardware vendors have their W7 driver factories operating at full capacity.
Care to make a wager that your drive is perfectly fine? Just kidding, but I wouldn't be surprised. Like I said to the other fellow, I'm getting an industrial-strength 8-port 3ware RAID card (9550SXU). There are some very good prices for these on eBay, and 3ware is clearing out one model of 12-port SATA-compatible RAID card for ridiculously cheap. You can split the ports to support up to 24 drives.
Thanks for the info. BTW, the arrays have been pretty stable for the last 4 or 5 days. I think it might be messing with my head.
I think I might have a solution, but if so, it is kind of a crappy one.
On the two arrays, I set "Hard Drive Data Cache Enabled" to 'No'.
I already had "Enable Volume Write-Back Cache" set to 'No' for the volumes. Ever since then, I've not had a problem. I don't like it, though, as it removes some of the performance perks of RAID. Then again, since this controller doesn't come with a battery backup, disabling the caches is arguably the sensible setting anyway. It has only been about 5 days without an issue, so I won't say it is 'fixed', but considering this was happening almost daily, it is looking pretty good.
I have a 3ware card coming that has its own onboard RISC processor, 128 MB of DDR2 RAM (expandable), AND battery backup. ;-) The only problem is that now I have to buy one for my other tower PC; I can see it is feeling jealous.
I too just experienced a 'Degraded Volume' disk failure in a 4-disk RAID 10 configuration (Seagate ST3320620AS drives) which included the boot volume. It worked fine for the past few months on the previous RAID driver. Sequence of events as follows: I loaded an updated driver version three days ago. The first drive was marked as failed two days ago. That drive was replaced and the array rebuilt. A second drive failed today.

I attempted to plug the first failed drive into an open port on an Intel DG965WH motherboard in order to test it. The RAID hardware did not recognize the drive, much to my surprise. I plugged a backup drive into the other open port to see what would happen. Much to my surprise again, that drive was not recognized by the hardware either. This is interesting, as this backup drive has been plugged in previously whenever I needed to reload or transfer files for backup purposes, and I never had any recognition issues with it being powered up and plugged in on an occasional basis.

My suspicion at this point is either the new driver itself or possibly the driver load, which occurred without any warning of a problem. I'm looking forward to finally sorting this situation out, hopefully without losing anything on the drive, and reverting back to the previous driver version. If anyone has any ideas on why the hardware isn't recognizing a single "non-RAID" drive plugged into the motherboard, I'd be interested.
What are you looking for, Mark? I am not sure what you are asking, so I am giving a long answer aimed at a novice-level computer user just beginning to use RAID on a home system. If I am shooting way too low because I misunderstood your question, then I apologize.
Usually these are made up of hardware (a chip, et al.) added to a desktop's main board (motherboard). You enable the functionality via the BIOS. Look for something along the lines of a menu item alluding to your storage drives. You might see a setting referring to your drives with a value such as IDE or AHCI, which can be changed to RAID. After you save your BIOS changes, you will see a RAID configuration screen just after booting and before the OS starts. Don't do this unless you are installing a new system, or you have researched your system and know you can add enough NEW drives to create a RAID volume AND have it work with the existing drives on which your system resides. You will need to do your homework to fill in the details; good ol' Google will help a lot.
It might be an issue with error handling in SATA hard drives. There's a good article on Wikipedia about it:
Time-Limited Error Recovery
"Modern hard drives feature an ability to recover from some read/write errors by internally remapping sectors and other forms of self test and recovery. The process for this can sometimes take several seconds or (under heavy usage) minutes, during which time the drive is unresponsive. RAID controllers are designed to recognize a drive which does not respond within a few seconds, and mark it as unreliable, indicating that it should be withdrawn from use and the array rebuilt from parity data. This is a long process, degrades performance, and if a second drive should fail under the resulting additional workload, it can be catastrophic.
If the drive itself is inherently reliable but has some bad sectors, then TLER and similar features prevent a disk from being unnecessarily marked as 'failed' by limiting the time spent on correcting detected errors before advising the array controller of a failed operation. The array controller can then handle the data recovery for the limited amount involved, rather than marking the entire drive as faulty."
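In other words, the failure mode is a race between the drive's internal error recovery and the controller's drop timeout. A rough sketch of that interaction (the 8-second controller timeout and 7-second TLER cap here are illustrative assumptions, not values from any Intel or drive spec sheet):

```python
# Hypothetical timing model of a RAID controller's drop decision.
# CONTROLLER_TIMEOUT_S and the TLER cap are illustrative assumptions.
CONTROLLER_TIMEOUT_S = 8.0  # controller drops a drive unresponsive this long

def controller_verdict(recovery_time_s, tler_cap_s=None):
    """What the controller concludes about a drive that stalls for
    recovery_time_s while internally retrying a bad sector."""
    if tler_cap_s is not None:
        # TLER: drive gives up early and reports an error instead of stalling
        if min(recovery_time_s, tler_cap_s) < CONTROLLER_TIMEOUT_S:
            return "sector error reported; controller rebuilds just that data"
    if recovery_time_s >= CONTROLLER_TIMEOUT_S:
        return "drive marked failed; whole array degraded"
    return "recovery finished in time; no action"

# A desktop drive can retry for far longer than the controller's patience:
print(controller_verdict(30.0))                  # whole array degraded
print(controller_verdict(30.0, tler_cap_s=7.0))  # only a sector error
```

Enterprise drives like the WD RE3 mentioned above ship with TLER enabled, which is exactly why they are marketed for RAID use; desktop drives generally do not.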
This might be the root of your problem.
I have an Intel DG965WH motherboard with Windows Vista SP1. The computer is fairly clean in terms of applications (the latest version of Norton Internet Security, MS Office, Winamp, and an FTP tool) and has been rock solid for the last year. It is generally rebooted once a month after Microsoft releases patches; otherwise it is always on and connected to an APC UPS backup power supply.

I updated the Intel Matrix Manager from 8.7 to 8.9 less than a week ago. The computer has since frozen twice, requiring a hard reboot, and today one of the 3 hard drives was listed as failed in the RAID 5 array. I also received a few errors in the Windows Event log along the lines of "A request to write to the file succeeded, but took an abnormally long time to be serviced by the OS." I went into the Intel Matrix Manager, right-clicked on the failed hard drive, set it to normal, and the RAID array rebuilt successfully.

I found other postings scattered across various blogs and support forums regarding failed drives being reported by the Intel Matrix Manager after upgrading to 8.8 or 8.9. Users also mentioned that they tested the reportedly failed hard drives with the manufacturer's hard drive test utility and the drives were okay.

I found a copy of the Intel Matrix Manager 8.7 on Intel's website and downloaded and installed it. I'll try to get back here in a week or so to let you know if I'm back on a stable platform. If anyone's interested in reverting to 8.7, Google STOR_allOS_8.7.0.1007_PV.exe. It appears Intel has removed most of the 8.7 downloads from their site; however, I did manage to find it listed under the downloads section for one of the Intel boards.
I've reverted back to 8.7 and am running stable, as it was before the update. I sorted out the spare drive situation (it was a power cable issue), and so far the array appears to be running without any problems. I see that I'm not the only one with issues after updating to 8.9, including array rebuilds. Using Seagate's SeaTools, I did not find any problems with the drives that were reported as failed. Interesting, to say the least.
It looks like Intel tweaked a timing setting in Matrix Storage Manager 8.8/8.9, and it occasionally causes the software to drop a drive out of the array. Nice... I haven't seen this issue myself, but the Intel RAID setups that I support are running 8.6 or earlier. Here are the URLs for downloading 8.7 from Intel:
Intel Matrix Storage Manager 8.7.0.1007 - executable
32-bit floppy configuration utility
64-bit floppy configuration utility