We now have a version of RCKMPI on the SCC. If you are interested in MPI on the SCC, please reply to this thread, and we can make it available for download through our public svn. This is new, and user input would be very helpful to us. If the interest in MPI is strong, I'll make a subcommunity specifically for discussing it.
The issue of MPI on SCC comes up from time to time.
rckmpb (which will be described in detail in a soon-to-be-published paper in ACM OS Review) implements the data link layer of the TCP/IP stack. The result is that anything that uses TCP/IP will just work on the SCC. That includes MPI.
So ... take your MPI of choice and rebuild it from source. It will just work. Open MPI and MPICH just work if you configure and build them for the SCC.
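To make the "just works" claim concrete, here is a minimal sketch: a plain MPI hello-world in C that uses nothing SCC-specific, so it should run unchanged once your chosen MPI library has been built for the SCC cores and the rckmpb TCP/IP interface is up. The file name and the launch details below it are my assumptions, not anything prescribed by rckmpb.

    /* hello_mpi.c - minimal MPI test program (sketch, nothing SCC-specific) */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of ranks */
        MPI_Get_processor_name(name, &len);    /* hostname of the core we run on */
        printf("Hello from rank %d of %d on %s\n", rank, size, name);
        MPI_Finalize();
        return 0;
    }

Compile it with the mpicc produced by your SCC build of MPICH or Open MPI, and launch it with that build's mpiexec/mpirun plus a hostfile listing the cores' hostnames or IP addresses (which depend on how your rckmpb interfaces are set up). Since the program only goes through the standard MPI API, with TCP/IP underneath, there is nothing SCC-specific to change.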
Now if I had lots of time, I would do this and put it in a public place where anyone could grab it. I really would like to do that ... but I just don't have time. Maybe I will in a few weeks when my current batch of travel is done, but I can't make any promises. But seriously ... anyone interested in MPI could do this on their own.
Completely orthogonal to the issue of building MPI is the question of how rckmpb is configured. I don't know whether changing the rckmpb configuration is documented. Maybe Ted could comment on that?
And if it's not documented, I will go ahead and make my notes on rckmpb configuration available somewhere (I'm sure Ted can tell me where to put them).
Indeed, MPI has been possible on the SCC for a long time over TCP/IP. I have tested both MPICH2 and Open MPI, and I can confirm that both work flawlessly.
However, Ted is referring to an MPI implementation that is customized for the SCC, and does away with the TCP/IP overhead.
I'm looking forward to seeing MPI-2 compliance validated to the extent possible with the MPICH2 test suite. No MPI implementation is fully compliant with the MPI-2 standard: every implementation has bugs, and no test suite can verify full compliance with the complete standard.