MVAPICH2 2.2 Features and Supported Platforms

New features and enhancements compared to the MVAPICH2 2.1 release are marked as (NEW).

MVAPICH2 (MPI-3 over InfiniBand) is an MPI-3 implementation based on the MPICH ADI3 layer. MVAPICH2 2.2 is available as a single integrated package (with MPICH 3.1.4). The current release supports the following ten underlying transport interfaces:

MVAPICH2 2.2 is compliant with the MPI-3 standard. In addition, it provides support and optimizations for NVIDIA GPUs, multi-threading, and fault tolerance (checkpoint-restart and job-pause-migration-resume). New features compared to 2.1 are indicated as (NEW). The complete set of features of MVAPICH2 2.2 is:

MVAPICH2-X 2.2 Features

MVAPICH2-X provides a unified high-performance runtime that supports both MPI and PGAS programming models on InfiniBand clusters. It enables developers to port parts of large MPI applications that are suited to the PGAS programming model, minimizing the development overhead that has been a major deterrent to porting MPI applications to PGAS models. The unified runtime also delivers superior performance compared to using separate MPI, UPC, UPC++, OpenSHMEM, and CAF libraries, by optimizing the use of network and memory resources. MVAPICH2-X supports pure MPI programs, MPI+OpenMP programs, pure UPC, pure OpenSHMEM, and pure CAF programs, as well as hybrid MPI(+OpenMP) + PGAS programs. It supports UPC, UPC++, OpenSHMEM, and CAF as PGAS models. High-level features of MVAPICH2-X are listed below. New features compared to MVAPICH2-X 2.1 are indicated as (NEW).

MVAPICH2-GDR 2.2RC1 Features

Features for supporting GPU-GPU communication on clusters with NVIDIA GPUs.

MVAPICH2-GDR 2.2RC1 derives from MVAPICH2 2.2RC1, an MPI-3 implementation based on the MPICH ADI3 layer. All the features available with the OFA-IB-CH3 channel of MVAPICH2 2.2RC1 are available with this release. MVAPICH2-GDR 2.2RC1 offers additional features that take advantage of GPUDirect RDMA technology for intra- and inter-node communication between NVIDIA GPUs on clusters with Mellanox InfiniBand adapters. Further, it provides efficient support for Non-Blocking Collectives (NBC) from GPU buffers by combining GPUDirect RDMA and Core-Direct features, and it supports the CUDA managed memory feature. New features compared to MVAPICH2-GDR 2.1 are indicated as (NEW).
The list of features for supporting MPI communication from NVIDIA GPU device memory is provided below.

MVAPICH2-MIC 2.0 Features

MVAPICH2-Virt 2.2rc1 Features

MVAPICH2-Virt 2.2rc1 derives from MVAPICH2 2.2rc1 and incorporates designs that take advantage of high-performance networking technologies with SR-IOV, as well as other virtualization technologies such as Inter-VM Shared Memory (IVSHMEM), IPC-enabled Inter-Container Shared Memory (IPC-SHM), Cross Memory Attach (CMA), and OpenStack. For SR-IOV-enabled InfiniBand virtual-machine environments and InfiniBand-based container environments, MVAPICH2-Virt incurs very little overhead compared to MVAPICH2 running over InfiniBand in native mode, and it delivers high performance and scalability to MPI applications running inside both virtual machines and containers on SR-IOV-enabled InfiniBand clusters. MVAPICH2-Virt also inherits all the features for communication on HPC clusters that are available in the MVAPICH2 software stack. New features compared to MVAPICH2-Virt 2.1 are indicated as (NEW).
The list of features for supporting MPI communication in virtualized environments is provided below.

MVAPICH2-EA 2.1 Features