Documentation isn't exactly the most thorough...
So is SSD caching limited to a single drive/RAID volume at a time, or does it work across multiple disks/volumes by enabling a separate cache for each?
And is that 64 GB cache size limit just an arbitrary cap in the current software, or a hard limitation?
In my 18 years of computer experience, just about everything keeps bloating to consume all available space, so I could see a higher limit being useful in the future, and the ability to cache only one disk/volume isn't very flexible.
My desktop PC, which I use for everything from media use to digicam photo editing/processing and gaming, is a rather old Q9550 E0 @ 3.7 GHz/1.2 V (I wonder why it had an unnecessarily high 1.25 V stock voltage?), and I'm planning to upgrade the main components during the summer when Haswell becomes available.
Wanting both fault tolerance and more performance, I went at the time for a 3ware 9650SE hardware RAID controller with four 640 GB WD Blacks (WD6401AALS) in RAID10 as the main drive. That has so far carried me past the SSD hype; SSDs are still insanely expensive per GB, even though HDD prices themselves still lag well behind the pre-Thailand-floods price curve.
Of course, no mechanical HDD or RAID of them can approach SSDs in overall performance, so a pure-HDD storage approach just isn't tempting anymore.
And while the damping-mat-lined Lian Li PC-A71B does a good enough job of muffling HDD noise that I had to build a small comparator-based circuit to drive the HDD activity LED from all sources (motherboard and the 3ware's two connectors), fewer HDDs always means fewer noise sources and less heat produced.
So, considering that read performance would be a lot more important than fast non-sequential writes, a simple RAID1 with SSD caching would be the best compromise in all respects.
SSD caching should also help if this seriously **** poor RAID1 performance (there's no excuse for it being slower than a single disk) wasn't some compatibility problem but a sign of a purposely crippled RAID1 implementation.
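For anyone wanting to sanity-check that "RAID1 slower than a single disk" observation, here's a minimal shell sketch using plain dd. The device names `/dev/sda` (single member disk) and `/dev/md0` (RAID1 array) are assumptions; substitute whatever nodes your setup actually exposes, and note that reading raw devices typically needs root.

```shell
#!/bin/sh
# Rough sequential-read comparison: a sketch, not a proper benchmark.
# NOTE: /dev/sda and /dev/md0 below are assumed names -- replace them
# with your actual single-disk and RAID-array device nodes.

read_speed() {
    dev="$1"
    start=$(date +%s)
    # Read 256 MiB sequentially and discard it.
    dd if="$dev" of=/dev/null bs=1M count=256 2>/dev/null
    end=$(date +%s)
    secs=$((end - start))
    [ "$secs" -eq 0 ] && secs=1   # avoid division by zero on very fast reads
    echo "$dev: roughly $((256 / secs)) MB/s"
}

# Only probe devices that actually exist and are readable.
[ -r /dev/sda ] && read_speed /dev/sda
[ -r /dev/md0 ] && read_speed /dev/md0
```

A second run will mostly hit the page cache and inflate the numbers, so for anything beyond a quick smoke test a real benchmark tool such as fio with `--direct=1` gives far more trustworthy figures.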
At least LSI's hardware RAID controllers have a flexible SSD caching system, but those would mean a lot of cost for a simple RAID1 and probably some 5 to 10 W more power consumed/heat produced by the card (plus the problem of non-luxury-priced CPUs/mobos being crippled in the number of PCIe lanes available for both x16 and x8 cards).