MVAPICH (MPI-1 over OpenFabrics/Gen2, OpenFabrics/Gen2-UD, uDAPL, InfiniPath, VAPI and TCP/IP)
MVAPICH is an MPI-1 implementation based on MPICH and MVICH, and is pronounced "em-vah-pich". The latest release is MVAPICH 1.2 (which includes MPICH 1.2.7). It is available under BSD licensing.
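Because MVAPICH exposes the standard MPI-1 API, the usual MPI program structure applies regardless of the underlying transport. The following minimal sketch is a generic MPI-1 "hello world" (illustrative code, not taken from the MVAPICH distribution):

    /* hello_mpi.c - minimal MPI-1 example (illustrative sketch, not from
     * the MVAPICH distribution). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);                 /* initialize the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank        */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes  */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                         /* shut down the MPI runtime  */
        return 0;
    }

With MVAPICH, such a program would typically be compiled with the mpicc wrapper and launched with the mpirun_rsh launcher shipped in the distribution.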
MVAPICH 1.2 supports the following six underlying transport interfaces (all of them are transparent to MPI application code; see the sketch after this list):
- High-performance support with scalability for the OpenFabrics/Gen2 interface, developed by OpenFabrics, to work with InfiniBand and other RDMA interconnects.
- (NEW) High-performance support with scalability for the OpenFabrics/Gen2-RDMAoE interface, developed by OpenFabrics.
- High-performance support with scalability (for clusters with multi-thousand cores) for the OpenFabrics/Gen2-Hybrid interface, developed by OpenFabrics, to work with InfiniBand.
- Shared-memory-only channel. This interface is useful for running MPI jobs on multi-processor systems without using any high-performance network: for example, multi-core servers, desktops, and laptops, as well as clusters with serial nodes.
- The InfiniPath interface for InfiniPath adapters from QLogic.
- The standard TCP/IP interface (provided by MPICH) to work with a range of networks. This interface can also be used with the IPoIB support of InfiniBand. However, it will not deliver performance or scalability comparable to the lower-level (OpenFabrics/Gen2 or OpenFabrics/Gen2-Hybrid) support.
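Since the transport is selected when MVAPICH is built, the same application source runs unchanged over any of these interfaces. As a minimal illustration (generic MPI-1 code, not from the MVAPICH distribution), the point-to-point exchange below works identically over Gen2, InfiniPath, or TCP/IP:

    /* pingpong.c - illustrative MPI-1 point-to-point sketch; the MPI calls
     * are standard, so the same source works over any MVAPICH transport.
     * Run with at least two processes. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, value;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);  /* to rank 1 */
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("rank 1 received %d from rank 0\n", value);
        }

        MPI_Finalize();
        return 0;
    }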
MVAPICH 1.2 supports many features for high performance, scalability, portability, and fault tolerance. It also supports a wide range of platforms (architectures, operating systems, compilers, and InfiniBand adapters). A complete list of features and supported platforms is available on the MVAPICH project website.
The complete MVAPICH 1.2 package is available through the public, anonymous MVAPICH SVN repository. It is also distributed with the OFED stack.
Successive versions with additional features (such as optimized algorithms with collective offload) will be available soon.
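As background for the collective-offload work mentioned above, the sketch below (generic MPI-1 code, not MVAPICH-specific) shows the kind of collective operation, here MPI_Allreduce, whose internal algorithms such releases optimize:

    /* allreduce.c - illustrative MPI-1 collective sketch. Each rank
     * contributes its rank number; MPI_Allreduce sums the contributions
     * and delivers the result on every rank. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, sum;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Global sum across all ranks; the reduction algorithm underneath
         * is where implementations like MVAPICH apply their optimizations. */
        MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        printf("rank %d sees global sum %d\n", rank, sum);

        MPI_Finalize();
        return 0;
    }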