The number of organizations using the MVAPICH2 libraries has crossed 3,100 in 89 countries!! The MVAPICH team would like to thank all of these organizations and their users!!


The 8th Annual MVAPICH User Group (MUG) Meeting was held virtually on August 24-26, 2020, with more than 350 attendees. Videos and slides of the presentations are available here.


MVAPICH Delivers Sub-minute (22 sec) Job Startup for 229,376 processes!! (Details)


MVAPICH@ISC 2020


MVAPICH@OFA'20



Welcome to the home page of the MVAPICH project, led by the Network-Based Computing Laboratory (NBCL) at The Ohio State University. The MVAPICH2 software, based on the MPI 3.1 standard, delivers the best performance, scalability, and fault tolerance for high-end computing systems and servers using InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE networking technologies. This software is being used by more than 3,100 organizations in 89 countries worldwide to extract the potential of these emerging networking technologies for modern systems. As of Sep '20, more than 862,000 downloads have taken place from this project's site. This software is also being distributed by many vendors as part of their software distributions.

The MVAPICH2 software family is ABI compatible with the version of MPICH it is based on. Please refer to our download page for more details.
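
For illustration only, the sketch below shows why ABI compatibility matters in practice: a plain MPI program such as this one, built against an ABI-compatible MPICH-based library, can in principle be run against MVAPICH2 (or vice versa) by adjusting the runtime library path rather than recompiling. The program uses only standard MPI calls; MPI_Get_library_version reports which library the binary is actually running against. This example is not part of the MVAPICH2 documentation; consult the userguide for the exact supported combinations.

    /* hello_mpi.c -- illustrative only: a plain MPI program with no
     * MVAPICH2-specific calls.  Because the MVAPICH2 family is ABI
     * compatible with its MPICH base, a binary like this built against
     * one of the two libraries can, in principle, run against the other
     * by adjusting the runtime library path instead of recompiling. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char version[MPI_MAX_LIBRARY_VERSION_STRING];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Reports which MPI library the binary is actually running
         * against (MPICH, MVAPICH2, ...). */
        MPI_Get_library_version(version, &len);

        if (rank == 0)
            printf("%d ranks running on: %s\n", size, version);

        MPI_Finalize();
        return 0;
    }

Such a program would typically be compiled with mpicc and launched with mpirun or mpirun_rsh; see the userguide for the launcher options supported by each release.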

The MVAPICH2 software is powering several supercomputers in the TOP500 list. Examples (from the June '20 ranking) include:

  • 4th, 10,649,600 cores (Sunway TaihuLight) at the National Supercomputing Center in Wuxi, China
  • 8th, 448,448 cores (Frontera) at TACC
  • 12th, 391,680 cores (ABCI) in Japan
  • 18th, 570,020 cores (Nurion) in South Korea
  • 19th, 556,104 cores (Oakforest-PACS) in Japan
  • 22nd, 367,024 cores (Stampede2) at TACC
  • 40th, 241,108 cores (Pleiades) at NASA

The MVAPICH group provides several software libraries as listed below.

High-Performance Parallel Programming Libraries

  • MVAPICH2: Support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE
  • MVAPICH2-Azure: Optimized support for the Microsoft Azure platform with InfiniBand
  • MVAPICH2-X: Advanced MPI features/support (UMR, ODP, DC, Core-Direct, SHArP, XPMEM), OSU INAM (InfiniBand Network Analysis and Monitoring), PGAS (OpenSHMEM, UPC, UPC++, and CAF), and MPI+PGAS programming models with a unified communication runtime
  • MVAPICH2-X-Azure: Optimized support of MVAPICH2-X for the Microsoft Azure platform with InfiniBand
  • MVAPICH2-X-AWS: Advanced MPI features (SRD and XPMEM) with support for the Amazon Elastic Fabric Adapter (EFA)
  • MVAPICH2-GDR: Optimized MPI for clusters with NVIDIA GPUs and for GPU-enabled Deep Learning applications (an illustrative sketch follows this list)
  • MVAPICH2-Virt: Hypervisor- and container-based (Docker and Singularity) HPC cloud with MPI and IB (SR-IOV)
  • MVAPICH2-EA: Energy-aware and high-performance MPI
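
To make the MVAPICH2-GDR entry above concrete, the sketch below illustrates the general idea behind a CUDA-aware MPI library: device pointers returned by cudaMalloc are passed directly to MPI calls, and the GPU-aware library handles the data movement. This is an illustrative example written against the standard MPI and CUDA runtime APIs only, not code from the MVAPICH2-GDR distribution; buffer size, device selection, and any build or runtime settings should be taken from the MVAPICH2-GDR userguide.

    /* gpu_sendrecv.c -- illustrative sketch of CUDA-aware MPI: device
     * pointers from cudaMalloc are handed directly to MPI_Send/MPI_Recv
     * and the GPU-aware MPI library performs the transfer.  This is not
     * code from the MVAPICH2-GDR distribution. */
    #include <mpi.h>
    #include <cuda_runtime.h>
    #include <stdio.h>

    #define N (1 << 20)                /* 1 Mi floats, arbitrary size */

    int main(int argc, char **argv)
    {
        int rank;
        float *d_buf = NULL;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        cudaSetDevice(0);              /* assumes one GPU per rank */
        cudaMalloc((void **)&d_buf, N * sizeof(float));
        cudaMemset(d_buf, 0, N * sizeof(float));

        if (rank == 0) {
            /* A device pointer is passed directly to MPI_Send. */
            MPI_Send(d_buf, N, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(d_buf, N, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("Rank 1 received %d floats into GPU memory\n", N);
        }

        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }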

Microbenchmarks

  • OMB: Microbenchmark suite to evaluate MPI and PGAS (OpenSHMEM, UPC, and UPC++) libraries for CPUs and GPUs
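
As a rough illustration of what a point-to-point latency benchmark in OMB (such as osu_latency) measures, here is a simplified two-rank ping-pong sketch. The real OMB benchmarks add warm-up phases, a range of message sizes, and reporting options that this sketch omits, so OMB itself should be used for any actual measurement.

    /* ping_pong.c -- simplified two-rank ping-pong latency sketch in the
     * spirit of osu_latency.  The real OMB benchmark adds warm-up
     * iterations, a range of message sizes, and careful reporting. */
    #include <mpi.h>
    #include <stdio.h>

    #define ITERS    1000
    #define MSG_SIZE 8                  /* bytes per message, arbitrary */

    int main(int argc, char **argv)
    {
        int rank, size, i;
        char buf[MSG_SIZE] = {0};

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size != 2) {
            if (rank == 0)
                fprintf(stderr, "Run with exactly 2 ranks\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        MPI_Barrier(MPI_COMM_WORLD);
        double start = MPI_Wtime();

        for (i = 0; i < ITERS; i++) {
            if (rank == 0) {
                MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else {
                MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }

        if (rank == 0)
            printf("Average one-way latency: %.2f us\n",
                   (MPI_Wtime() - start) * 1e6 / (2.0 * ITERS));

        MPI_Finalize();
        return 0;
    }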

Tools

  • OSU INAM: Network monitoring, profiling, and analysis for clusters, with MPI and scheduler integration
  • OEMT: Utility to measure the energy consumption of MPI applications

This project is supported by funding from the U.S. National Science Foundation, the U.S. DOE Office of Science, the U.S. Department of Defense, the Ohio Board of Regents, the Ohio Department of Development, arm, Cisco Systems, Cray, Intel, Linux Networx, Mellanox, Microsoft, NVIDIA, Pattern Computer, QLogic, and Sun Microsystems; and by equipment donations from Advanced Clustering, AMD, Appro, arm, Broadcom, Chelsio, Dell, Fulcrum, Fujitsu, Intel, Mellanox, Microway, NetEffect, Pattern Computer, QLogic, and Sun. Other technology partners include TotalView.

Announcements


OSU INAM 0.9.6 (OSU InfiniBand Network Analysis and Monitoring Tool) is available, with support to collect and visualize MPI_T-based performance data in varying scenarios, the ability to gather and display Lustre I/O for MPI jobs, an emulation mode that allows users to test the OSU INAM tool in a sandbox environment without an actual deployment, email notifications to alert users when user-defined events occur, the ability to select PBS/SLURM job schedulers at runtime, support for the ARM and OpenPOWER architectures, and features that work in conjunction with MVAPICH2-X 2.3. Click here for more details!

MVAPICH2-X 2.3 GA is available, with optimized support for large-message MPI_Allreduce and MPI_Reduce, improved communication performance using the DC transport, optimized point-to-point and collective communication support for the AWS EFA adapter and SRD transport protocol, multiple MPI_T PVARs and CVARs, support for hybrid MPI+OpenSHMEM, optimized communication performance for AMD (EPYC), ARM, Intel, and OpenPOWER platforms, and support for INAM 0.9.6. [more]

MVAPICH2-GDR 2.3.4 GA (based on MVAPICH2 2.3.4 GA) is available, with enhanced MPI_Allreduce performance on DGX-2 and POWER9 systems, reduced CUDA interception overhead for non-CUDA symbols, enhanced performance for point-to-point and collective operations on Frontera's RTX nodes, a new runtime variable MV2_SUPPORT_DL for supporting DL frameworks, compilation and runtime methods for checking CUDA support, and testing with Horovod, common DL frameworks (TensorFlow, PyTorch, and MXNet), and PyTorch Distributed. [more]

OMB 5.6.3 is available, with support for benchmarking applications that use the 'fork' system call (osu_latency_mp) and multiple bug fixes. [more]

MVAPICH2 2.3.4 GA (based on MPICH v3.2.1) is available, with improved performance for small-message collective operations, improved performance for data transfers from/to non-contiguous buffers used by user-defined datatypes, support for MPI_REAL16-based reduction operations in Fortran programs, a custom API to identify whether MVAPICH2 has built-in CUDA support, support to intercept aligned_alloc in ptmalloc, support to enable fork safety, multiple MPI_T PVARs and CVARs for point-to-point and collective operations, enhanced point-to-point and collective tuning for AMD EPYC Rome, Frontera@TACC, Longhorn@TACC, Mayer@Sandia, Pitzer@OSC, Catalyst@EPCC, Summit@ORNL, Lassen@LLNL, and Sierra@LLNL, support for the AMD Optimizing C/C++ (AOCC) compiler v2.1.0 and the GCC compiler v10.1.0, an update to hwloc v2.2.0, and multiple bug fixes. [more]

MVAPICH2-X-Azure 2.3.rc3 (based on MVAPICH2-X 2.3rc3) is available, with support for advanced MPI features (Direct Connect, Cooperative Protocol) and XPMEM, targeted for Azure HB, HB2, and HC virtual machine instances with InfiniBand and integrated HPC images. [more]

MVAPICH2-Azure 2.3.3 (based on MVAPICH2 2.3.3) is available, with enhanced tuning for point-to-point and collective operations, targeted for Azure HB, HB2, and HC virtual machine instances with InfiniBand. [more]

MVAPICH2-X-AWS 2.3 (based on MVAPICH2-X 2.3) is available, with support for the Amazon EFA adapter's Scalable Reliable Datagram (SRD) transport protocol, XPMEM-based intra-node communication for point-to-point and collective operations, and enhanced tuning for point-to-point and collective operations, targeted for AWS instances with the Amazon Linux 2 AMI and EFA support.