MVAPICH2 2.3.7 Features and Supported Platforms

New features and enhancements compared to the MVAPICH2 2.2 release are marked as (NEW).

MVAPICH2 (MPI-3.1 over OpenFabrics-IB, Omni-Path, OpenFabrics-iWARP, PSM, and TCP/IP) is an MPI-3.1 implementation based on the MPICH ADI3 layer. MVAPICH2 2.3.7 is available as a single integrated package (with MPICH-3.2.1). The current release supports the following ten underlying transport interfaces:

MVAPICH2 2.3.7 is compliant with the MPI-3.1 standard. In addition, MVAPICH2 2.3.7 provides support and optimizations for NVIDIA GPUs, multi-threading, and fault tolerance (Checkpoint-restart, Job-pause-migration-resume). New features compared to 2.2 are indicated as (NEW). The complete set of features of MVAPICH2 2.3.7 is:


MVAPICH2-X 2.3 Features

MVAPICH2-X provides a unified high-performance runtime that supports both MPI and PGAS programming models on InfiniBand clusters. It enables developers to port the parts of large MPI applications that are suited to the PGAS programming model, minimizing the development overhead that has been a major deterrent to porting MPI applications to PGAS models. The unified runtime also delivers superior performance compared to using separate MPI, UPC, UPC++, OpenSHMEM, and CAF libraries, by optimizing the use of network and memory resources. MVAPICH2-X supports pure MPI programs, MPI+OpenMP programs, pure UPC, pure OpenSHMEM, and pure CAF programs, as well as hybrid MPI(+OpenMP) + PGAS programs. MVAPICH2-X supports UPC, UPC++, OpenSHMEM, and CAF as PGAS models. High-level features of MVAPICH2-X are listed below. New features compared to MVAPICH2-X 2.2 are indicated as (NEW).
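
Before the feature list, here is a rough, hypothetical illustration of the hybrid model that the unified runtime enables: a single C program that mixes two-sided MPI operations with one-sided OpenSHMEM operations on the symmetric heap. It is only a sketch, not code from the MVAPICH2-X documentation; the initialization/finalization ordering shown here and the build/launch wrappers (e.g., mpicc vs. oshcc, mpirun_rsh) are assumptions that can vary by installation, so consult the MVAPICH2-X user guide for the supported procedure.

    /* Hypothetical hybrid MPI + OpenSHMEM sketch for a unified runtime
     * such as MVAPICH2-X.  Initialization order and build wrappers are
     * assumptions; check the user guide for the supported procedure. */
    #include <stdio.h>
    #include <mpi.h>
    #include <shmem.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        shmem_init();                     /* same process, both models */

        int rank, nranks;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        int pe   = shmem_my_pe();         /* PGAS view of this process */
        int npes = shmem_n_pes();

        /* Symmetric-heap allocation, addressable by one-sided puts. */
        int *sym = (int *) shmem_malloc(sizeof(int));
        *sym = -1;
        shmem_barrier_all();

        /* One-sided OpenSHMEM put of this PE's id to the next PE ... */
        int value = pe;
        shmem_int_put(sym, &value, 1, (pe + 1) % npes);
        shmem_barrier_all();

        /* ... combined with a two-sided MPI collective in the same program. */
        int sum = 0;
        MPI_Allreduce(sym, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("ranks=%d pes=%d sum of received ids=%d\n",
                   nranks, npes, sum);

        shmem_free(sym);
        shmem_finalize();
        MPI_Finalize();
        return 0;
    }

Because both models run on one runtime, the PGAS calls and the MPI calls in a program like this share the same network and memory resources rather than duplicating connections and buffers.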

MVAPICH2-GDR 2.3.7 Features

Features for supporting GPU-GPU communication on clusters with NVIDIA and AMD GPUs.

MVAPICH2-GDR 2.3.7 derives from MVAPICH2 2.3.7, which is an MPI-3.1 implementation based on the MPICH ADI3 layer. All the features available with the OFA-IB-CH3 channel of MVAPICH2 2.3.7 are available in this release, which also incorporates designs that take advantage of GPUDirect RDMA (GDR) technology for inter-node data movement on NVIDIA GPU clusters with Mellanox InfiniBand interconnects. MVAPICH2-GDR 2.3.7 also adds support for AMD GPUs via the Radeon Open Compute (ROCm) software stack and exploits ROCmRDMA technology for direct communication between AMD GPUs and Mellanox InfiniBand adapters. It also provides support for OpenPOWER and NVLink, efficient intra-node CUDA-aware unified memory communication, and support for RDMA_CM, RoCE-V1, and RoCE-V2. Further, MVAPICH2-GDR 2.3.7 provides optimized large-message collectives (broadcast, reduce, and allreduce) for emerging Deep Learning frameworks like TensorFlow and PyTorch on the NVIDIA DGX-2, the ABCI system @AIST, and POWER9 systems like Sierra and Lassen @LLNL and Summit @ORNL. It also enhances the performance of dense collectives, e.g., Alltoall and Allgather, on multi-GPU systems like Lassen @LLNL and Summit @ORNL. Further, it provides efficient support for Non-Blocking Collectives (NBC) from GPU buffers by combining GPUDirect RDMA and Core-Direct features. It also supports the CUDA managed memory feature and optimizes large-message collectives targeting Deep Learning frameworks. New features compared to MVAPICH2-GDR 2.2 are indicated as (NEW).
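
As a minimal, hypothetical sketch of the CUDA-aware communication model described above (not code from the MVAPICH2-GDR distribution), the program below hands GPU device pointers directly to MPI_Send/MPI_Recv and lets the library move the data, e.g., through GPUDirect RDMA for inter-node transfers or pipelined staging where appropriate. It assumes a CUDA-aware MPI build and the CUDA runtime; runtime settings (commonly MV2_USE_CUDA=1) and the analogous path for ROCm/HIP buffers on AMD GPUs are installation-dependent, so check the user guide.

    /* Hypothetical CUDA-aware MPI sketch: device pointers passed directly
     * to MPI calls on a CUDA-aware build such as MVAPICH2-GDR.  Error
     * checking omitted; run with at least two ranks. */
    #include <stdio.h>
    #include <mpi.h>
    #include <cuda_runtime.h>

    #define N (1 << 20)                   /* 1M ints per message */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int *d_buf;                       /* GPU device memory */
        cudaMalloc((void **) &d_buf, N * sizeof(int));

        if (rank == 0) {
            cudaMemset(d_buf, 1, N * sizeof(int));
            /* Device buffer given straight to MPI_Send; the library
             * performs the GPU-to-network data movement. */
            MPI_Send(d_buf, N, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(d_buf, N, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            int first;
            cudaMemcpy(&first, d_buf, sizeof(int), cudaMemcpyDeviceToHost);
            printf("rank 1 received first element 0x%08x into GPU memory\n",
                   first);
        }

        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }

A typical build compiles the CUDA portions with nvcc and links through the MPI compiler wrapper (for example mpicc), but the exact build and launch procedure depends on the site installation.
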
The list of features for supporting MPI communication from NVIDIA and AMD GPU device memory is provided below.

MVAPICH2-J Features

MVAPICH2-J 2.3.7 provides Java bindings to the MVAPICH2 MPI library and the other variants that we offer. The Java MPI library, MVAPICH2-J, relies on the Java Native Interface (JNI) to keep the Java MPI implementation lean and easy to develop and maintain. The Java bindings inherit all of the communication features that our native family of MPI libraries supports.
The features of MVAPICH2-J are listed below.

MVAPICH2-MIC 2.0 Features

MVAPICH2-Virt 2.2 Features

MVAPICH2-Virt 2.2 derives from MVAPICH2 and incorporates designs that take advantage of the new features and mechanisms of high-performance networking technologies with SR-IOV, as well as other virtualization technologies such as Inter-VM Shared Memory (IVSHMEM), IPC-enabled Inter-Container Shared Memory (IPC-SHM), Cross Memory Attach (CMA), and OpenStack. For SR-IOV-enabled InfiniBand virtual machine environments and InfiniBand-based Docker/Singularity container environments, MVAPICH2-Virt has very little overhead compared to MVAPICH2 running over InfiniBand in native mode. MVAPICH2-Virt can deliver the best performance and scalability to MPI applications running inside both virtual machines and Docker/Singularity containers over SR-IOV-enabled InfiniBand clusters. MVAPICH2-Virt also inherits all the features for communication on HPC clusters that are available in the MVAPICH2 software stack. New features compared to MVAPICH2-Virt 2.2rc1 are indicated as (NEW).
The list of features for supporting MPI communication in virtualized environments is provided below.

MVAPICH2-EA 2.1 Features