The number of downloads has crossed 1.2 million!! The number of organizations using MVAPICH2 libraries has crossed 3,125 in 89 countries!! The MVAPICH team would like to express thanks to all these organizations and their users!!
The 8th Annual MVAPICH User Group (MUG) Meeting was held virtually on August 24-26, 2020, with more than 350 attendees. Videos and slides of the presentations are available here.
MVAPICH Delivers Sub-minute (22 sec) Job Startup for 229,376 processes!! (Details)
Welcome to the home page of the MVAPICH project, led by the Network-Based Computing Laboratory (NBCL) of The Ohio State University. The MVAPICH2 software, based on the MPI 3.1 standard, delivers the best performance, scalability, and fault tolerance for high-end computing systems and servers using InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE networking technologies. This software is being used by more than 3,125 organizations in 89 countries worldwide to extract the potential of these emerging networking technologies for modern systems. As of Feb '21, more than 1,256,000 downloads have taken place from this project's site. This software is also being distributed by many vendors as part of their software distributions.
The MVAPICH2 software family is ABI compatible with the version of MPICH it is based on. Please refer to our download page for more details.
The MVAPICH2 software is powering several supercomputers in the TOP500 list. Examples (from the Nov'20 ranking) include:
- 4th, 10,649,600 cores (Sunway TaihuLight) at the National Supercomputing Center in Wuxi, China
- 9th, 448,448 cores (Frontera) at TACC
- 14th, 391,680 cores (ABCI) in Japan
- 21st, 570,020 cores (Nurion) in South Korea
- 22nd, 556,104 cores (Oakforest-PACS) in Japan
- 25th, 367,024 cores (Stampede2) at TACC
- 46th, 241,108 cores (Pleiades) at NASA
The MVAPICH group provides several software libraries as listed below.
High-Performance Parallel Programming Libraries
| Library | Description |
| --- | --- |
| MVAPICH2 | Support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE |
| MVAPICH2-Azure | Optimized support for the Microsoft Azure platform with InfiniBand |
| MVAPICH2-X | Advanced MPI features/support (UMR, ODP, DC, Core-Direct, SHArP, XPMEM), OSU INAM (InfiniBand Network Monitoring and Analysis), PGAS (OpenSHMEM, UPC, UPC++, and CAF), and MPI+PGAS programming models with unified communication runtime |
| MVAPICH2-X-Azure | Optimized support of MVAPICH2-X for the Microsoft Azure platform with InfiniBand |
| MVAPICH2-X-AWS | Advanced MPI features (SRD and XPMEM) with support for Amazon Elastic Fabric Adapter (EFA) |
| MVAPICH2-GDR | Optimized MPI for clusters with NVIDIA GPUs, AMD GPUs, and for GPU-enabled Deep Learning applications |
| MVAPICH2-Virt | Hypervisor- and container-based (Docker and Singularity) HPC cloud with MPI & IB (SR-IOV) |
| MVAPICH2-EA | Energy-aware and high-performance MPI |
| OMB | Microbenchmark suite to evaluate MPI and PGAS (OpenSHMEM, UPC, and UPC++) libraries for CPUs and GPUs |
| OSU INAM | Network monitoring, profiling, and analysis for clusters with MPI and scheduler integration |
| OEMT | Utility to measure the energy consumption of MPI applications |
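As a quick, hedged illustration of how an application sits on top of these libraries, the sketch below uses mpi4py (the same Python MPI layer mentioned for MPI4cuML and MPI4Dask below) to time a simple ping-pong exchange, similar in spirit to what OMB's osu_latency measures. It assumes mpi4py was built against an MVAPICH2-family library and that the job is launched with that library's mpirun; the script is illustrative and not part of any MVAPICH release.

```python
# Minimal mpi4py ping-pong sketch (illustrative only, not part of OMB).
# Assumed launch command: mpirun -np 2 python pingpong.py
from mpi4py import MPI
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

msg = bytearray(1 << 20)   # 1 MB payload
iters = 100

comm.Barrier()
start = time.perf_counter()
for _ in range(iters):
    if rank == 0:
        comm.Send(msg, dest=1, tag=0)
        comm.Recv(msg, source=1, tag=0)
    elif rank == 1:
        comm.Recv(msg, source=0, tag=0)
        comm.Send(msg, dest=0, tag=0)
elapsed = time.perf_counter() - start

if rank == 0:
    print(f"avg round-trip time: {elapsed / iters * 1e6:.1f} us")
```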
This project is supported by funding from the U.S. National Science Foundation, the U.S. DOE Office of Science, the U.S. Department of Defense, the Ohio Board of Regents, the Ohio Department of Development, arm, Cisco Systems, Cray, Intel, Linux Networx, Mellanox, Microsoft, NVIDIA, Pattern Computer, QLogic, ROCKPORT, and Sun Microsystems; and by equipment donations from Advanced Clustering, AMD, Appro, arm, Broadcom, Chelsio, Dell, Fulcrum, Fujitsu, Intel, Mellanox, Microway, NetEffect, Pattern Computer, QLogic, ROCKPORT, and Sun. Other technology partners include TotalView.
(NEW) MPI4cuML 0.1 (based on cuML 0.15) is available, with support for C++ and Python APIs, built on top of mpi4py over the MVAPICH2-GDR library, and handles that let Python cuML applications (KMeans, PCA, tSVD, RF, and linear models) use the MVAPICH2-GDR backend. [more]
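For context, a rough sketch of a distributed Python cuML job is shown below. It uses cuML's standard Dask KMeans API and assumes a Dask scheduler and workers have already been started over MPI (for example with Dask-MPI) and that MPI4cuML routes the underlying communication through MVAPICH2-GDR; the exact bootstrap and handle-creation steps are described in the MPI4cuML userguide, so the names here are illustrative rather than prescriptive.

```python
# Illustrative distributed cuML KMeans sketch (assumes a running Dask cluster
# whose scheduler address is recorded in scheduler.json).
from dask.distributed import Client
import dask.array as da
import cupy as cp
from cuml.dask.cluster import KMeans

client = Client(scheduler_file="scheduler.json")

# Synthetic GPU-backed data partitioned across workers.
X = da.random.random((1_000_000, 16), chunks=(250_000, 16)).map_blocks(cp.asarray)

km = KMeans(n_clusters=8)
km.fit(X)
print(km.cluster_centers_)
```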
(NEW) Spack support for MVAPICH2-GDR 2.3.5 and MVAPICH2-X 2.3 GA is available. [more]
MPI4Dask 0.1 (based on Dask Distributed 2021.01.0) is available, with support for MPI-based communication in Dask for clusters of GPUs. It is built on top of mpi4py over the MVAPICH2-GDR library, starts execution of Dask programs using Dask-MPI, and is compliant with user-level Dask APIs and packages. [more]
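A minimal sketch of launching a Dask program with Dask-MPI is shown below; it uses only the standard Dask-MPI and Dask APIs, and the assumption is that the underlying mpi4py is built against MVAPICH2-GDR so that MPI4Dask's MPI-based communication path is exercised.

```python
# Minimal Dask-over-MPI sketch. Assumed launch: mpirun -np 4 python dask_job.py
# With dask_mpi, rank 0 becomes the scheduler, rank 1 runs this client code,
# and the remaining ranks become workers.
from dask_mpi import initialize
from dask.distributed import Client
import dask.array as da

initialize()
client = Client()

x = da.random.random((20_000, 20_000), chunks=(2_000, 2_000))
result = (x @ x.T).mean().compute()
print("mean of x @ x.T:", result)
```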
MVAPICH2-GDR 2.3.5 GA (based on MVAPICH2 2.3.5 GA) is available, with support for AMD GPUs via the Radeon Open Compute (ROCm) platform; support for ROCm PeerDirect, ROCm IPC, and unified-memory-based device-to-device communication for AMD GPUs; enhanced designs for GPU-aware MPI_Alltoall and GPU-aware MPI_Allgather; enhanced MPI derived datatype processing via kernel fusion; architecture-specific flags to improve the performance of CUDA operations; support for the Apache MXNet Deep Learning framework; testing with the PyTorch and DeepSpeed frameworks for distributed Deep Learning; and multiple bug fixes. [more]
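To show what a GPU-aware collective looks like from user code, here is a hedged mpi4py + CuPy sketch: recent mpi4py releases can pass CuPy arrays directly to MPI calls, and a CUDA-aware library such as MVAPICH2-GDR can then operate on the device buffers. This is an illustration under those assumptions, not an MVAPICH2-GDR-specific API.

```python
# GPU-aware MPI_Allreduce sketch. Assumed launch on a GPU node:
#   mpirun -np 4 python gpu_allreduce.py
from mpi4py import MPI
import cupy as cp

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank contributes a GPU-resident vector.
send = cp.full(1024, rank, dtype=cp.float32)
recv = cp.empty_like(send)

# Make sure the device buffers are ready before handing them to MPI.
cp.cuda.Stream.null.synchronize()

# With a CUDA-aware MPI library, the reduction reads and writes the device
# buffers directly; no explicit host staging appears in user code.
comm.Allreduce(send, recv, op=MPI.SUM)

if rank == 0:
    print(recv[:4])  # each element equals 0 + 1 + ... + (size - 1)
```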
OMB 5.7 is available, with support for evaluating the performance of various primitives on AMD GPU devices via ROCm, along with multiple bug fixes. [more]
MVAPICH2 2.3.5 GA (based on MPICH v3.2.1) is available, with improved performance for MPI_Allreduce and MPI_Barrier, optimized and tuned support for SHARP v2.1, process-placement-aware HCA selection, support to use RoCE/Ethernet and InfiniBand HCAs simultaneously, support to select hwloc v1 or v2 at configure time, support for the Broadcom NetXtreme RoCE HCA, support for the Fujitsu A64FX ARM processor, generalized code for GPU support, enhanced point-to-point and collective tuning for AMD EPYC Rome, Frontera@TACC, Expanse@SDSC, Ookami@StonyBrook, and bb5@EPFL, an update to hwloc v2.3.0, and multiple bug fixes. [more]
MVAPICH2-X-AWS 2.3 (based on MVAPICH2-X 2.3 GA) is available, with support for the Amazon EFA adapter's Scalable Reliable Datagram (SRD) transport protocol, XPMEM-based intra-node communication with run-time detection, improved inter-node latency and bandwidth performance for large messages, improved collectives, and support for the currently available basic OS types on AWS EC2, including Amazon Linux 1/2, CentOS 6/7, and Ubuntu 16.04/18.04. [more]
OSU InfiniBand Network Analysis and Monitoring (INAM) Tool 0.9.6 is available, with support to collect and visualize MPI_T-based performance data in varying scenarios, the ability to gather and display Lustre I/O for MPI jobs, an emulation mode that lets users test the OSU INAM tool in a sandbox environment without actual deployment, email notifications to alert users when user-defined events occur, the ability to select PBS/SLURM job schedulers at runtime, support for ARM and OpenPOWER architectures, and features in conjunction with MVAPICH2-X 2.3. Click here for more details!
MVAPICH2-X 2.3 GA is available, with optimized support for large-message MPI_Allreduce and MPI_Reduce, improved communication performance using the DC transport, optimized point-to-point and collective communication support for the AWS EFA adapter and SRD transport protocol, availability of multiple MPI_T PVARs and CVARs, support for hybrid MPI+OpenSHMEM, optimized communication performance for AMD (EPYC), ARM, Intel, and OpenPOWER platforms, and support for INAM 0.9.6. [more]
MVAPICH2-X-Azure 2.3rc3 (based on MVAPICH2-X 2.3rc3) is available, with support for advanced MPI features (Direct Connect, Cooperative Protocol) and XPMEM, targeted at Azure HB, HBv2, and HC virtual machine instances with InfiniBand and integrated HPC images. [more]
MVAPICH2-Azure 2.3.3 (based on MVAPICH2 2.3.3) is available, with enhanced tuning for point-to-point and collective operations, targeted at Azure HB, HBv2, and HC virtual machine instances with InfiniBand. [more]