The 9th Annual MVAPICH User Group (MUG) conference was held virtually on August 23-25, 2021 with more than 240 registrations. Slides and videos of the presentations are available.


The number of downloads has crossed 1.42 million!! The number of organizations using MVAPICH2 libraries has crossed 3,200 in 89 countries!! The MVAPICH team would like to express thanks to all these organizations and their users!!


MVAPICH Delivers Sub-minute (22 sec) Job Startup for 229,376 processes!! (Details)


MVAPICH@PEARC '21


MVAPICH@ISC '21



Welcome to the home page of the MVAPICH project, led by the Network-Based Computing Laboratory (NBCL) at The Ohio State University. The MVAPICH2 software, based on the MPI 3.1 standard, delivers the best performance, scalability, and fault tolerance for high-end computing systems and servers using InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE networking technologies. This software is being used by more than 3,200 organizations in 89 countries worldwide to extract the potential of these emerging networking technologies for modern systems. As of Sep '21, more than 1,485,000 downloads have taken place from this project's site. This software is also being distributed by many vendors as part of their software distributions.

The MVAPICH2 software family is ABI compatible with the version of MPICH it is based on. Please refer to our download page for more details.
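Because of this ABI compatibility, an application or binding built against a matching MPICH release can typically run against the MVAPICH2 libraries without recompilation. A minimal sketch, assuming mpi4py is installed and was built against either MPICH or MVAPICH2, that reports which library is actually loaded at run time:

    # Minimal sketch (assumes mpi4py built against MPICH or MVAPICH2):
    # report which ABI-compatible MPI library is loaded at run time.
    from mpi4py import MPI

    if MPI.COMM_WORLD.Get_rank() == 0:
        # Prints e.g. an MVAPICH2 or MPICH version string
        print(MPI.Get_library_version())

Launch the script with the MPI library's own process launcher (e.g., mpirun or mpirun_rsh for MVAPICH2) to confirm which library is picked up from the library search path.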

The MVAPICH2 software is powering several supercomputers in the TOP500 list. Examples (from the Jun'21 ranking) include:

  • 4th, 10,649,600 cores (Sunway TaihuLight) at the National Supercomputing Center in Wuxi, China
  • 10th, 448,448 cores (Frontera) at TACC
  • 20th, 288,288 cores (Lassen) at LLNL
  • 31st, 570,020 cores (Nurion) in South Korea
  • 32nd, 556,104 cores (Oakforest-PACS) in Japan
  • 36th, 367,024 cores (Stampede2) at TACC

The MVAPICH group provides several software libraries as listed below.

High-Performance Parallel Programming Libraries

  • MVAPICH2: Support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE
  • MVAPICH2-Azure: Optimized support for the Microsoft Azure platform with InfiniBand
  • MVAPICH2-X: Advanced MPI features/support (UMR, ODP, DC, Core-Direct, SHArP, XPMEM), OSU INAM (InfiniBand Network Monitoring and Analysis), PGAS (OpenSHMEM, UPC, UPC++, and CAF), and MPI+PGAS programming models with a unified communication runtime
  • MVAPICH2-X-AWS: Advanced MPI features (SRD and XPMEM) with support for the Amazon Elastic Fabric Adapter (EFA)
  • MVAPICH2-GDR: Optimized MPI for clusters with NVIDIA GPUs, AMD GPUs, and for GPU-enabled deep learning applications (see the sketch following this list)
  • MVAPICH2-Virt: Hypervisor- and container-based (Docker and Singularity) HPC cloud with MPI and IB (SR-IOV)
  • MVAPICH2-EA: Energy-aware and high-performance MPI
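For illustration, the following is a minimal sketch of the GPU-resident point-to-point exchange that a CUDA-aware library such as MVAPICH2-GDR enables. It assumes mpi4py built against MVAPICH2-GDR and CuPy for device arrays; buffer names and sizes are illustrative only, and the runtime parameters that enable CUDA support (e.g., MV2_USE_CUDA=1) are described in the MVAPICH2-GDR user guide.

    # Sketch: GPU-to-GPU point-to-point exchange with a CUDA-aware MPI such as
    # MVAPICH2-GDR. Assumes mpi4py built against MVAPICH2-GDR and CuPy installed.
    # Run with exactly two ranks, each bound to a GPU.
    from mpi4py import MPI
    import cupy as cp

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    peer = 1 - rank                                      # assumes exactly two ranks

    sendbuf = cp.full(1 << 20, rank, dtype=cp.float32)   # data resident on the GPU
    recvbuf = cp.empty_like(sendbuf)

    # With a CUDA-aware MPI, device buffers can be passed directly to MPI calls;
    # no explicit staging copies to host memory are required.
    comm.Sendrecv(sendbuf, dest=peer, recvbuf=recvbuf, source=peer)
    cp.cuda.runtime.deviceSynchronize()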

Microbenchmarks

  • OMB: Microbenchmark suite to evaluate MPI and PGAS (OpenSHMEM, UPC, and UPC++) libraries for CPUs and GPUs (an illustrative sketch follows below)
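As an illustration of what these benchmarks measure, here is a minimal two-rank ping-pong latency sketch in mpi4py. It is not the OMB implementation, which is written in C and sweeps a range of message sizes.

    # Sketch: a two-rank ping-pong latency measurement in the spirit of OMB's
    # osu_latency (illustrative only; not the OMB code). Run with exactly 2 ranks.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    peer = 1 - rank

    msg = np.zeros(8, dtype=np.uint8)   # single 8-byte message; OMB sweeps many sizes
    iters = 10000

    comm.Barrier()
    t0 = MPI.Wtime()
    for _ in range(iters):
        if rank == 0:
            comm.Send(msg, dest=peer)
            comm.Recv(msg, source=peer)
        else:
            comm.Recv(msg, source=peer)
            comm.Send(msg, dest=peer)
    t1 = MPI.Wtime()

    if rank == 0:
        # One-way latency is half the average round-trip time, in microseconds.
        print((t1 - t0) / (2 * iters) * 1e6, "us")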

Tools

  • OSU INAM: Network monitoring, profiling, and analysis for clusters with MPI and scheduler integration
  • OEMT: Utility to measure the energy consumption of MPI applications

This project is supported by funding from the U.S. National Science Foundation, U.S. DOE Office of Science, U.S. Department of Defense, Ohio Board of Regents, Ohio Department of Development, arm, Cisco Systems, Cray, Intel, Linux Networx, Mellanox, Microsoft, NVIDIA, Pattern Computer, QLogic, ROCKPORT, and Sun Microsystems; and by equipment donations from Advanced Clustering, AMD, Appro, arm, Broadcom, Chelsio, Dell, Fulcrum, Fujitsu, Intel, Mellanox, Microway, NetEffect, Pattern Computer, QLogic, ROCKPORT, and Sun. Other technology partners include TotalView.

Announcements


MVAPICH2-GDR 2.3.6 GA (based on MVAPICH2 2.3.6 GA) is available, with support for on-the-fly compression of point-to-point GPU-GPU communication for NVIDIA GPUs; hybrid communication protocols using NCCL-based, CUDA-based, and IB verbs-based primitives for several collective operations; full support for NVIDIA DGX, NVIDIA DGX-2 V-100, and NVIDIA DGX-2 A-100 systems; optimized support for HPC, deep learning, machine learning, and data science workloads; and multiple bug fixes. [more]

OMB 5.8 is available, with support for NCCL point-to-point and collective benchmarks, data validation support for some collective benchmarks, and multiple bug fixes. [more]

Partnership and contribution to the NSF-awarded $20M AI Institute on Intelligent CyberInfrastructure (ICICLE). Details.

MVAPICH2 2.3.6 GA (based on MPICH v3.2.1) is available, with SHARP support for MPI_Reduce and MPI_Bcast, improved performance for MPI_Allreduce and MPI_Barrier, optimized and tuned support with SHARP v2.1, enhanced performance for the UD-hybrid code, enhanced performance for shared-memory collectives, enhanced job-startup performance for the Flux job launcher, architecture detection and enhanced point-to-point tuning for the Oracle BM.HPC2 cloud shape, support for GCC 11 and Intel IFX compilers, enhanced point-to-point and collective tuning for AMD EPYC Rome, Frontera@TACC, Expanse@SDSC, Ookami@StonyBrook, and bb5@EPFL, an update to hwloc v2.4.2, and multiple bug fixes. [more]

MPI4Dask 0.2 (based on Dask Distributed 2021.01.0) is available, with support for MPI-based communication in Dask for clusters of CPUs and GPUs. It is built on top of mpi4py over the MVAPICH2, MVAPICH2-X, and MVAPICH2-GDR libraries, starts execution of Dask programs using Dask-MPI, and is compliant with user-level Dask APIs and packages. [more]
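As a sketch of the Dask-MPI launch pattern referenced above (assuming mpi4py is built against an MVAPICH2-family library; any MPI4Dask-specific settings are described in its user guide):

    # Sketch: starting a Dask Distributed program under MPI with Dask-MPI.
    # Launch as an MPI job with at least 3 ranks: rank 0 becomes the scheduler,
    # rank 1 runs this client code, and the remaining ranks become workers.
    from dask_mpi import initialize
    from dask.distributed import Client
    import dask.array as da

    initialize()          # bootstrap scheduler and workers from the MPI ranks
    client = Client()     # connect the client to the MPI-hosted scheduler

    x = da.random.random((10000, 10000), chunks=(1000, 1000))
    print(x.mean().compute())   # work is distributed over the MPI-launched workers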

MPI4cuML 0.1 (based on cuML 0.15) is available, with support for C++ and Python APIs. It is built on top of mpi4py over the MVAPICH2-GDR library and provides handles for Python cuML applications (KMeans, PCA, tSVD, RF, and linear models) to use the MVAPICH2-GDR backend. [more]

MVAPICH2-X-AWS 2.3 (based on MVAPICH2-X 2.3 GA) is available, with support for the Amazon EFA adapter's Scalable Reliable Datagram (SRD) transport protocol, XPMEM-based intra-node communication with run-time detection, improved inter-node latency and bandwidth for large messages, improved collectives, and support for the currently available basic OS types on AWS EC2, including Amazon Linux 1/2, CentOS 6/7, and Ubuntu 16.04/18.04. [more]

OSU INAM 0.9.6, the OSU InfiniBand Network Analysis and Monitoring (INAM) tool, is available, with support to collect and visualize MPI_T-based performance data in varying scenarios, the ability to gather and display Lustre I/O for MPI jobs, an emulation mode that lets users try the OSU INAM tool in a sandbox environment without an actual deployment, email notifications to alert users when user-defined events occur, the ability to select PBS/SLURM job schedulers at runtime, support for ARM and OpenPOWER architectures, and features in conjunction with MVAPICH2-X 2.3. Click here for more details!

MVAPICH2-X 2.3 GA is available, with optimized support for large-message MPI_Allreduce and MPI_Reduce, improved communication performance using the DC transport, optimized point-to-point and collective communication support for the AWS EFA adapter and SRD transport protocol, availability of multiple MPI_T PVARs and CVARs, support for hybrid MPI+OpenSHMEM, optimized communication performance for AMD (EPYC), ARM, Intel, and OpenPOWER platforms, and support for INAM 0.9.6. [more]