MVAPICH2 2.2a Features and Supported Platforms

New features and enhancements compared to the MVAPICH2 2.1 release are marked as (NEW).

MVAPICH2 (MPI-3 over InfiniBand) is an MPI-3 implementation based on the MPICH ADI3 layer. MVAPICH2 2.2a is available as a single integrated package (with MPICH-3.1.4). The current release supports the following ten underlying transport interfaces:

MVAPICH2 2.2a is compliant with the MPI-3 standard. In addition, MVAPICH2 2.2a provides support and optimizations for NVIDIA GPUs, multi-threading, and fault tolerance (checkpoint-restart and job-pause-migration-resume). The complete set of features of MVAPICH2 2.2a is:
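As a minimal illustration of the MPI-3 compliance claim above, the sketch below uses MPI_Iallreduce, one of the non-blocking collectives introduced in MPI-3. It is generic MPI-3 code that builds with any compliant implementation; nothing in it is specific to MVAPICH2.

    /* Minimal MPI-3 example: non-blocking allreduce. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, local, global;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        local = rank;

        /* Start the reduction, overlap independent work, then complete it. */
        MPI_Iallreduce(&local, &global, 1, MPI_INT, MPI_SUM,
                       MPI_COMM_WORLD, &req);
        /* ... independent computation could run here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);

        if (rank == 0)
            printf("sum of ranks = %d (expected %d)\n",
                   global, size * (size - 1) / 2);

        MPI_Finalize();
        return 0;
    }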

MVAPICH2-X 2.2a Features

MVAPICH2-X provides a unified high-performance runtime that supports both MPI and PGAS programming models on InfiniBand clusters. It enables developers to port those parts of large MPI applications that are suited to the PGAS programming model, minimizing the development overhead that has long deterred porting MPI applications to PGAS models. The unified runtime also delivers superior performance compared to using separate MPI, UPC, OpenSHMEM, and CAF libraries, by optimizing the use of network and memory resources. MVAPICH2-X supports pure MPI programs, MPI+OpenMP programs, pure UPC, pure OpenSHMEM, and pure CAF programs, as well as hybrid MPI(+OpenMP) + PGAS programs; the supported PGAS models are UPC, OpenSHMEM, and CAF. High-level features of MVAPICH2-X are listed below. New features compared to MVAPICH2-X 2.1 are indicated as (NEW).
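To sketch what the unified runtime enables, the hypothetical hybrid program below mixes OpenSHMEM one-sided operations with an MPI collective in a single executable. It uses the OpenSHMEM 1.0-era API (start_pes, shmalloc); the initialization and finalization pattern shown is an assumption, and the MVAPICH2-X user guide is the authority on how hybrid programs must be built and launched.

    /* Hybrid MPI + OpenSHMEM sketch for a unified runtime such as
     * MVAPICH2-X. Illustrative only: the init/finalize pattern is an
     * assumption; see the MVAPICH2-X user guide for the exact rules. */
    #include <mpi.h>
    #include <shmem.h>
    #include <stdio.h>

    int main(void)
    {
        start_pes(0);                 /* initialize OpenSHMEM (1.0-era API) */

        int mpi_up = 0;               /* initialize MPI only if the shmem   */
        MPI_Initialized(&mpi_up);     /* startup has not already done so    */
        if (!mpi_up)
            MPI_Init(NULL, NULL);

        int me   = _my_pe();
        int npes = _num_pes();

        /* Symmetric heap allocation: one int, addressable from every PE. */
        int *flag = (int *) shmalloc(sizeof(int));
        *flag = 0;

        /* PGAS-style one-sided put: PE 0 writes into PE 1's memory. */
        if (me == 0 && npes > 1) {
            int one = 1;
            shmem_int_put(flag, &one, 1, 1);
        }
        shmem_barrier_all();

        /* MPI collective over the same set of processes. */
        int sum = 0;
        MPI_Allreduce(flag, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
        if (me == 0)
            printf("PEs holding the flag: %d\n", sum);

        shfree(flag);
        return 0;                     /* finalization left to the runtime
                                         in this sketch */
    }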

MVAPICH2-GDR 2.1rc2 Features

Features for supporting GPU-GPU communication on clusters with NVIDIA GPUs.

MVAPICH2-GDR 2.1rc2 derives from MVAPICH2 2.1rc2, an MPI-3 implementation based on the MPICH ADI3 layer. All features available with the OFA-IB-CH3 channel of MVAPICH2 2.1rc2 are also available in this release. In addition, MVAPICH2-GDR 2.1rc2 offers features that take advantage of GPUDirect RDMA technology for intra- and inter-node communication between NVIDIA GPUs on clusters with Mellanox InfiniBand adapters. New features compared to MVAPICH2-GDR 2.1a are indicated as (NEW).
The list of features for supporting MPI communication from NVIDIA GPU device memory is provided below, followed by a short usage sketch.
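The sketch below illustrates the usage model these features support: CUDA-aware MPI, where pointers returned by cudaMalloc are passed directly to MPI calls and the library moves the data between GPUs (via GPUDirect RDMA where available). The code is generic CUDA-aware MPI rather than MVAPICH2-GDR-specific; consult the user guide for the runtime settings (such as MV2_USE_CUDA=1) required to enable GPU buffer support.

    /* CUDA-aware MPI sketch: device buffers passed directly to MPI calls.
     * With MVAPICH2-GDR the transfer can use GPUDirect RDMA; the same
     * code works with any CUDA-aware MPI. Error checking is elided. */
    #include <mpi.h>
    #include <cuda_runtime.h>
    #include <stdio.h>

    #define N (1 << 20)

    int main(int argc, char **argv)
    {
        int rank;
        float *dbuf;                  /* GPU device memory, not host memory */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        cudaMalloc((void **) &dbuf, N * sizeof(float));

        if (rank == 0) {
            cudaMemset(dbuf, 0, N * sizeof(float));
            /* The device pointer goes straight into MPI_Send: no explicit
             * staging through a host buffer is required. */
            MPI_Send(dbuf, N, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(dbuf, N, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d floats into GPU memory\n", N);
        }

        cudaFree(dbuf);
        MPI_Finalize();
        return 0;
    }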

MVAPICH2-MIC 2.0 Features

MVAPICH2-Virt 2.1 Features

MVAPICH2-Virt 2.1 derives from MVAPICH2 2.1 and incorporates designs that take advantage of new features and mechanisms of high-performance networking technologies with SR-IOV, as well as other virtualization technologies such as Inter-VM Shared Memory (IVSHMEM). In an InfiniBand SR-IOV-based virtualized environment, MVAPICH2-Virt incurs very little overhead compared with MVAPICH2 running over InfiniBand in native mode, and it delivers the best performance and scalability to MPI applications running over SR-IOV-enabled InfiniBand clusters. MVAPICH2-Virt also inherits all the features for communication on HPC clusters that are available in the MVAPICH2 software stack. New features compared to MVAPICH2-Virt 2.1rc2 are indicated as (NEW).
The list of features for supporting MPI communication in virtualized environments is provided below.

MVAPICH2-EA 2.1 Features