The number of organizations using MVAPICH2 libraries has crossed 3,000 in 89 countries!! The MVAPICH team would like to express thanks to all these organizations and their users!!


7th Annual MVAPICH User Group (MUG) Meeting will take place on August 19-21, 2019 in Columbus, Ohio, USA. Click here for details.


MVAPICH Delivers Sub-minute (22 sec) Job Startup for 229,376 processes!! (Details)


MVAPICH@ISC'19


MVAPICH@OFA'19



Welcome to the home page of the MVAPICH project, led by the Network-Based Computing Laboratory (NBCL) of The Ohio State University. The MVAPICH2 software, based on the MPI 3.1 standard, delivers the best performance, scalability, and fault tolerance for high-end computing systems and servers using InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE networking technologies. This software is being used by more than 3,000 organizations in 89 countries worldwide to extract the potential of these emerging networking technologies for modern systems. As of Jun '19, more than 550,000 downloads have taken place from this project's site. This software is also being distributed by many vendors as part of their software distributions.
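For readers new to MPI, the snippet below is a minimal sketch of an MPI 3.1 program that prints the library version string reported at runtime (an MVAPICH2 build reports its own version here). The file name and the launch command mentioned in the comments are illustrative assumptions, not project-specific requirements.

    /* hello_mvapich.c -- minimal MPI 3.1 sketch; compile with the MPI
     * compiler wrapper (e.g. "mpicc hello_mvapich.c -o hello_mvapich")
     * and launch with your cluster's MPI launcher. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char version[MPI_MAX_LIBRARY_VERSION_STRING];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* MPI_Get_library_version (MPI-3) identifies the underlying MPI
         * library, e.g. an MVAPICH2 version string on MVAPICH2 builds. */
        MPI_Get_library_version(version, &len);
        if (rank == 0)
            printf("Running on %d ranks with: %s\n", size, version);

        MPI_Finalize();
        return 0;
    }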

The MVAPICH2 software family is ABI compatible with the version of MPICH it is based on. Please refer to our download page for more details.

The MVAPICH2 software is powering several supercomputers in the TOP500 list. Examples (from the June'19 ranking) include:

  • 3rd, 10,649,600 cores (Sunway TaihuLight) at the National Supercomputing Center in Wuxi, China
  • 8th, 391,680 cores (ABCI) in Japan
  • 15th, 570,020 cores (Nurion) in South Korea
  • 16th, 556,104 cores (Oakforest-PACS) in Japan
  • 19th, 367,024 cores (Stampede2) at TACC
  • 31st, 241,108 cores (Pleiades) at NASA
  • 81st, 76,032 cores (Tsubame 2.5) at Tokyo Institute of Technology

The MVAPICH group provides several software libraries as listed below.

High-Performance Parallel Programming Libraries

  • MVAPICH2: Support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE
  • MVAPICH2-X: Advanced MPI features/support (UMR, ODP, DC, Core-Direct, SHArP, XPMEM), OSU INAM (InfiniBand Network Monitoring and Analysis), PGAS (OpenSHMEM, UPC, UPC++, and CAF), and MPI+PGAS programming models with unified communication runtime
  • MVAPICH2-GDR: Optimized MPI for clusters with NVIDIA GPUs and for GPU-enabled Deep Learning applications (see the CUDA-aware sketch after this list)
  • MVAPICH2-Virt: Hypervisor- and container-based (Docker and Singularity) HPC cloud with MPI and InfiniBand (SR-IOV)
  • MVAPICH2-EA: Energy-aware and high-performance MPI
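As a rough illustration of what a CUDA-aware MPI library such as MVAPICH2-GDR enables, the sketch below hands a GPU (device) buffer directly to MPI calls instead of staging it through host memory first. The buffer size and the use of cudaMalloc/cudaMemset here are illustrative assumptions, not MVAPICH2-GDR-specific API.

    /* cuda_aware_pingpong.c -- minimal sketch of CUDA-aware MPI usage:
     * device buffers are passed directly to MPI_Send/MPI_Recv, letting a
     * CUDA-aware library move the data (e.g. via GPUDirect RDMA where
     * available). Run with at least two ranks. Illustrative only. */
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        const int count = 1 << 20;   /* 1M floats, arbitrary size */
        int rank;
        float *d_buf;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        cudaMalloc((void **)&d_buf, count * sizeof(float));
        cudaMemset(d_buf, 0, count * sizeof(float));

        /* With a CUDA-aware MPI, the device pointer can be used directly. */
        if (rank == 0)
            MPI_Send(d_buf, count, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(d_buf, count, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);

        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }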

Microbenchmarks

  • OMB: Microbenchmark suite to evaluate MPI and PGAS (OpenSHMEM, UPC, and UPC++) libraries for CPUs and GPUs
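To give a sense of the kind of measurement these benchmarks perform, the following is a simplified two-rank ping-pong latency loop in the spirit of osu_latency. It is not the actual OMB code; the message size and iteration count are arbitrary assumptions.

    /* pingpong_latency.c -- simplified ping-pong latency measurement in
     * the spirit of osu_latency (not the actual OMB implementation).
     * Run with exactly two ranks. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        const int size = 8;        /* message size in bytes, arbitrary */
        const int iters = 10000;   /* iteration count, arbitrary */
        int rank;
        char *buf;
        double start, end;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        buf = (char *)calloc(size, 1);

        MPI_Barrier(MPI_COMM_WORLD);
        start = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        end = MPI_Wtime();

        /* One-way latency is half the round-trip time per iteration. */
        if (rank == 0)
            printf("%d-byte latency: %.2f us\n", size,
                   (end - start) * 1e6 / (2.0 * iters));

        free(buf);
        MPI_Finalize();
        return 0;
    }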

Tools

  • OSU INAM: Network monitoring, profiling, and analysis for clusters with MPI and scheduler integration
  • OEMT: Utility to measure the energy consumption of MPI applications

This project is supported by funding from the U.S. National Science Foundation, U.S. DOE Office of Science, U.S. Department of Defense, Ohio Board of Regents, Ohio Department of Development, Cisco Systems, Cray, Intel, Linux Networx, Mellanox, Microsoft, NVIDIA, Pattern Computer, QLogic, and Sun Microsystems; and by equipment donations from Advanced Clustering, AMD, Appro, Chelsio, Dell, Fulcrum, Fujitsu, Intel, Mellanox, Microway, NetEffect, Pattern Computer, QLogic, and Sun. Other technology partners include TotalView.

Announcements


(NEW) The MVAPICH team is now on Twitter! Follow us for up-to-date information on our events and tutorials! #MVAPICH.

MVAPICH2-X 2.3rc2 with support for advanced cooperative (COOP) rendezvous protocols for improved performance of large message communication; RGET, RPUT, and COOP protocols for CMA and XPMEM; load-balanced and dynamic rendezvous protocol selection; XPMEM-based MPI collective operations (Broadcast, Gather, Scatter, and Allgather); extended XPMEM-based MPI collective operations (Reduce and All-Reduce) for the PSM-CH3 and PSM2-CH3 interfaces; improved connection establishment for the DC transport; improved Alltoallv algorithm for small messages; hybrid MPI+OpenSHMEM; optimized support for ARM, OpenPOWER, and Intel platforms; and support for INAM 0.9.4 is available. [more]

OMB 5.6.1 with a fix for an issue with latency computation in osu_latency_mt benchmark is available. [more]

MVAPICH2-GDR 2.3.1 GA (based on MVAPICH2 2.3.1 GA) with support for enhanced intra-node and inter-node point-to-point performance for DGX-2 and IBM POWER8 and IBM POWER9 systems, enhanced Allreduce performance for DGX-2 and IBM POWER8/POWER9 systems for deep learning jobs, enhanced small message performance for CUDA-aware MPI_Put and MPI_Get, support for PGI 18.10, flexible support for running TensorFlow (Horovod) jobs, and addition of new and simplified runtime variables is available. [more]

MVAPICH2 2.3.1, based on MPICH v3.2.1, with support for AMD EPYC systems, support for the JSM and Flux resource managers, performance optimizations for IBM POWER9 and ARM systems, support for DDN Infinite Memory Engine (IME) in ROMIO, optimized performance of the MPI_Wait operation, and an update to hwloc 1.11.11 is available. [more]

OSU InfiniBand Network Analysis and Monitoring (INAM) Tool 0.9.4 with support to enhance fabric discovery using optimized OpenMP-based multi-threaded designs, ability to gather InfiniBand performance counters at sub-second granularity for very large clusters, redesigned database layout to reduce database size, OpenMP-based multi-threaded designs to handle database purge, read, and insert operations simultaneously, and features in conjunction with MVAPICH2-X 2.3rc1 is available. [more]

Upcoming Tutorials: InfiniBand, Omni-Path, and High-Speed Ethernet (HSE) at SC '18. Past tutorials: InfiniBand, Omni-Path, and High-Speed Ethernet (HSE) at ISC '18, HPC Meets Cloud at ICDCS '18, and MVAPICH2 and MPI-T at PEARC18.

MVAPICH2-Virt 2.2 GA (based on MVAPICH2 2.2 GA) targeting virtual machine-based and container-based (Docker and Singularity) HPC cloud computing environments with InfiniBand, SR-IOV, and OpenStack is available. [more]

MVAPICH2-EA (Energy-Aware) 2.1 with energy-efficient support for IB, RoCE, and iWARP, user-defined energy-performance trade-off levels, and compatibility with OEMT is available. [more]

OSU Energy Management Tool (OEMT) 0.8 to measure the energy consumption of MPI applications is available. [more]