I don't see any performance numbers comparing the two. But if all your hard drives are already in use, it makes sense not to dump three more servers on top of them.
Is hard drive access your current bottleneck?
Thanks for your reply. I am having the same problem finding comparison info (disk I/O performance...). We are adding servers to the IMS plus a VTrak array, E-series, and disk contention has been a problem in the past, but I think the external direct-attached storage will solve the problem. I don't know if anyone has tried the Promise J310sD in place of the E310sD... that would be interesting info.
We have a few fully populated modular servers, and from the testing we have done, disk performance is going to depend a lot on how you have pooled your disks and, of course, on the disk I/O intensity of the applications you run on top of them.
There is great benefit in putting all your drives in one big pool and building RAID10 VDs for the systems that need the disk I/O, but this can have a negative impact if you have multiple disk-I/O-heavy systems. In that case it might be better to have smaller pools, which give you predictable disk access but at capped performance. (We went with one big pool.)
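To make the tradeoff concrete, here's a rough back-of-the-envelope model (my own illustration, not anything from Intel's sizing docs; the per-spindle IOPS and drive counts are assumptions):

```python
# Rough, illustrative IOPS model -- all numbers are assumptions, not benchmarks.
SPINDLE_IOPS = 150          # assumed random IOPS per SAS drive
TOTAL_DRIVES = 12           # assumed drives in the chassis
HEAVY_SYSTEMS = 3           # systems with disk-I/O-heavy workloads

# One big pool: every VD sees all the spindles, but heavy systems contend.
big_pool_iops = SPINDLE_IOPS * TOTAL_DRIVES
per_system_under_contention = big_pool_iops / HEAVY_SYSTEMS

# Small dedicated pools: each heavy system gets its own slice of the drives.
drives_per_small_pool = TOTAL_DRIVES // HEAVY_SYSTEMS
small_pool_iops = SPINDLE_IOPS * drives_per_small_pool

print(f"Big pool, idle neighbours: up to {big_pool_iops} IOPS for one system")
print(f"Big pool, all {HEAVY_SYSTEMS} busy:     ~{per_system_under_contention:.0f} IOPS each")
print(f"Dedicated {drives_per_small_pool}-drive pool:   ~{small_pool_iops} IOPS, but predictable")
```

The point being: the big pool wins whenever the heavy systems don't peak at the same time (one system can burst across all the spindles), but if they all hammer the disks at once you end up at roughly the same per-system number as the small pools anyway, just less predictably.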
We've also seen cases where the OS's access to the disk is the actual bottleneck, i.e. splitting the applications off to another compute module improves the disk I/O performance of the OS. So it looks like the controller in the modular server can handle quite a bit of load.
As for the Promise VTrak: I've just got my first one in the lab and am busy testing it for a client. The best advice there is to load up your systems and do lots of Iometer testing. Test local and external storage to compare, and run simultaneous tests to simulate more than one system with heavy I/O.
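If you also want a quick scriptable check to run alongside Iometer, something like this works; a minimal sketch, assuming the two mount points below (they're hypothetical, substitute your own), and note it only measures buffered sequential writes with an fsync at the end, so it won't replace a proper random-I/O run:

```python
# Quick-and-dirty sequential write test: hits local and external storage at
# the same time to simulate several systems with heavy I/O. Sanity check
# only -- use Iometer for the real numbers.
import os
import time
from multiprocessing import Process

CHUNK = 1 * 1024 * 1024          # 1 MiB per write
TOTAL = 256 * 1024 * 1024        # 256 MiB per worker
PATHS = ["/mnt/local/test.bin",  # hypothetical mount point: internal storage
         "/mnt/vtrak/test.bin"]  # hypothetical mount point: the VTrak

def write_test(path: str) -> None:
    """Write TOTAL bytes in CHUNK-sized blocks, fsync, and report MB/s."""
    buf = os.urandom(CHUNK)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(TOTAL // CHUNK):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())     # make sure the data actually hit the disks
    elapsed = time.time() - start
    print(f"{path}: {TOTAL / elapsed / 1024 / 1024:.1f} MB/s")
    os.remove(path)

if __name__ == "__main__":
    # Run both targets simultaneously to create contention on purpose.
    workers = [Process(target=write_test, args=(p,)) for p in PATHS]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```

Run it once per target on its own, then together; if the combined numbers drop well below the solo numbers, you're seeing the same contention your production systems will.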