MVAPICH powers Sunway TaihuLight, the #1 supercomputer in the world!

MVAPICH Delivers Sub-minute (22 sec) Job Startup for 229,376 processes! (Details)

MVAPICH@OFA'18

MVAPICH@GTC'18

MVAPICH@OpenPOWER'18

Welcome to the home page of the MVAPICH project, led by the Network-Based Computing Laboratory (NBCL) of The Ohio State University. The MVAPICH2 software, based on the MPI 3.1 standard, delivers the best performance, scalability, and fault tolerance for high-end computing systems and servers using InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE networking technologies. This software is being used by more than 2,875 organizations in 86 countries worldwide to extract the potential of these emerging networking technologies for modern systems. As of Apr '18, more than 465,000 downloads have taken place from this project's site. This software is also distributed by many vendors as part of their software distributions.

The MVAPICH2 software family is ABI compatible with the version of MPICH it is based on. Please refer to our download page for more details.
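
As a rough illustration of what this ABI compatibility means in practice, the sketch below is an ordinary MPI C program: a binary like this, built against one MPICH-ABI-compatible library, can typically be pointed at another (for example by adjusting the library search path) without recompiling, and MPI_Get_library_version reports which library was actually loaded at run time. The file name and the build/run commands in the comments are generic examples, not part of the MVAPICH distribution.

    /* abi_check.c -- minimal MPI C program (illustrative sketch only).
     * Build with an MPICH-ABI-compatible wrapper:  mpicc abi_check.c -o abi_check
     * Run with:                                    mpirun -np 2 ./abi_check
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        char version[MPI_MAX_LIBRARY_VERSION_STRING];
        int rank, size, len;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {
            /* Reports which ABI-compatible MPI library this binary is running on. */
            MPI_Get_library_version(version, &len);
            printf("Running %d ranks on: %s\n", size, version);
        }

        MPI_Finalize();
        return 0;
    }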

The MVAPICH2 software is powering several supercomputers in the TOP500 list. Examples (from the Nov'17 ranking) include:

  • 1st, 10,649,600-core (Sunway TaihuLight) at National Supercomputing Center in Wuxi, China
  • 9th, 556,104-core (Oakforest-PACS) in Japan
  • 12th, 368,928-core (Stampede2) at TACC
  • 17th, 241,108-core (Pleiades) at NASA
  • 48th, 76,032-core (Tsubame 2.5) at Tokyo Institute of Technology

The MVAPICH group provides several software libraries as listed below.

High-Performance Parallel Programming Libraries

  • MVAPICH2: Support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE
  • MVAPICH2-X: Advanced MPI features, OSU INAM, PGAS (OpenSHMEM, UPC, UPC++, and CAF), and MPI+PGAS programming models with unified communication runtime
  • MVAPICH2-GDR: Optimized MPI for clusters with NVIDIA GPUs
  • MVAPICH2-Virt: High-performance and scalable MPI for hypervisor- and container-based HPC cloud
  • MVAPICH2-EA: Energy-aware and high-performance MPI
  • MVAPICH2-MIC: Optimized MPI for clusters with Intel KNC

Microbenchmarks

  • OMB: Microbenchmark suite to evaluate MPI and PGAS (OpenSHMEM, UPC, and UPC++) libraries for CPUs and GPUs
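
To give a concrete sense of what such microbenchmarks measure, the sketch below is a minimal ping-pong latency test in plain MPI C, in the spirit of (but much simpler than) the osu_latency test shipped with OMB; the message size, iteration count, and output format are illustrative choices, not OMB's actual parameters.

    /* pingpong.c -- minimal ping-pong latency sketch (illustrative; not OMB code).
     * Run with two ranks, e.g.:  mpirun -np 2 ./pingpong
     */
    #include <mpi.h>
    #include <stdio.h>

    #define MSG_SIZE   8        /* bytes per message; illustrative choice */
    #define ITERATIONS 10000    /* timed round trips; illustrative choice */

    int main(int argc, char **argv)
    {
        char buf[MSG_SIZE];
        double start, end;
        int rank, i;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        start = MPI_Wtime();
        for (i = 0; i < ITERATIONS; i++) {
            if (rank == 0) {
                MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        end = MPI_Wtime();

        if (rank == 0)
            /* One-way latency: half of the average round-trip time, in microseconds. */
            printf("%d-byte latency: %.2f us\n", MSG_SIZE,
                   (end - start) * 1e6 / (2.0 * ITERATIONS));

        MPI_Finalize();
        return 0;
    }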

Tools

  • OSU INAM: Network monitoring, profiling, and analysis for clusters with MPI and scheduler integration
  • OEMT: Utility to measure the energy consumption of MPI applications

This project is supported by funding from the U.S. National Science Foundation, the U.S. DOE Office of Science, the U.S. Department of Defense, the Ohio Board of Regents, the Ohio Department of Development, Cisco Systems, Cray, Intel, Linux Networx, Mellanox, Microsoft, NVIDIA, QLogic, and Sun Microsystems; and by equipment donations from Advanced Clustering, AMD, Appro, Chelsio, Dell, Fulcrum, Fujitsu, Intel, Mellanox, Microway, NetEffect, QLogic, and Sun. Other technology partners include TotalView.

Announcements


(NEW) The 6th Annual MVAPICH User Group (MUG) Meeting took place on August 6-8, 2018, in Columbus, Ohio, USA. Click here for details.

OSU InfiniBand Network Analysis and Monitoring (INAM) Tool 0.9.3 is available, with support for querying end nodes from INAMD via a command-line option, a web page that displays the size of the database in real time, enhanced interaction between the web application and the SLURM job launcher for increased portability, a web application updated to Java v1.8 and Spring Boot v1.5.9, and features that work in conjunction with MVAPICH2-X 2.3b. [more]

MVAPICH2 2.3rc1 is available, with enhanced performance for Allreduce, Reduce_scatter_block, Allgather, and Allgatherv; enhanced support for MPI_T PVARs and CVARs; improved job startup time; automatic detection of the IP address of IB/RoCE interfaces; enhanced HCA detection; automatic detection and use of the maximum MTU supported by the HCA; logic to detect heterogeneous CPU/HFI configurations; enhanced intra-node and inter-node tuning for PSM-CH3 and PSM2-CH3; enhanced HFI selection logic; enhanced tuning and architecture detection; and new 'SPREAD', 'BUNCH', and 'SCATTER' binding options. [more]
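
As a hedged illustration of the MPI_T control variables (CVARs) mentioned above, the sketch below uses only the standard MPI 3.1 tool-information interface to list whatever CVARs the linked MPI library exposes; the buffer sizes are arbitrary illustrative choices, and the variables actually reported depend on the library and build, so this is not MVAPICH2-specific code.

    /* list_cvars.c -- list MPI_T control variables (illustrative sketch).
     * Best run with a single rank, e.g.:  mpirun -np 1 ./list_cvars
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, num_cvars, i;

        MPI_Init(&argc, &argv);
        /* The MPI_T tool interface has its own init/finalize pair. */
        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);

        MPI_T_cvar_get_num(&num_cvars);
        printf("%d control variables exposed by this MPI library\n", num_cvars);

        for (i = 0; i < num_cvars; i++) {
            char name[256], desc[1024];
            int name_len = sizeof(name), desc_len = sizeof(desc);
            int verbosity, bind, scope;
            MPI_Datatype datatype;
            MPI_T_enum enumtype;

            /* Names and descriptions are truncated to the supplied buffer lengths. */
            MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &datatype,
                                &enumtype, desc, &desc_len, &bind, &scope);
            printf("  %s: %s\n", name, desc);
        }

        MPI_T_finalize();
        MPI_Finalize();
        return 0;
    }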

MVAPICH2-GDR 2.3a (based on MVAPICH2 2.2) is available, with support for CUDA 9.0, the Volta V100 GPU, OpenPOWER with NVLink, multi-stream IPC communication, designs that leverage CMA for enhanced host-based communication, IB-MCAST based designs for GPU broadcast and streaming, and optimized collectives for deep learning frameworks on various platforms. [more]
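
As a hedged sketch of the CUDA-aware communication style that MVAPICH2-GDR targets, the example below passes GPU device pointers directly to standard MPI calls; it assumes an MPI library built with CUDA support, uses only standard MPI and CUDA runtime calls, and the buffer size is an illustrative choice.

    /* gpu_sendrecv.c -- send directly from GPU memory with a CUDA-aware MPI
     * (illustrative sketch; assumes the MPI library was built with CUDA support).
     * Run with two ranks, each with access to a GPU.
     */
    #include <mpi.h>
    #include <cuda_runtime.h>
    #include <stdio.h>

    #define N (1 << 20)   /* number of floats; illustrative choice */

    int main(int argc, char **argv)
    {
        float *d_buf;
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Allocate the communication buffer in GPU device memory. */
        cudaMalloc((void **)&d_buf, N * sizeof(float));

        if (rank == 0) {
            cudaMemset(d_buf, 0, N * sizeof(float));
            /* A CUDA-aware MPI accepts the device pointer directly; no explicit
             * staging copy through host memory is required in application code. */
            MPI_Send(d_buf, N, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(d_buf, N, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d floats into device memory\n", N);
        }

        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }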

MVAPICH2-X 2.3b with support for OpenSHMEM 1.3, non-blocking remote memory access routines, and hybrid MPI+OpenSHMEM; MPI support for DPML and contention-aware kernel-assisted collectives, optimized support for ARM, OpenPOWER and Intel Skylake, and support for INAM 0.9.2 is available. [more]
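
As a hedged illustration of the OpenSHMEM 1.3 non-blocking remote memory access routines noted above, the sketch below issues a non-blocking put and completes it with shmem_quiet; it uses only standard OpenSHMEM 1.3 calls rather than anything MVAPICH2-X-specific, and the array size and wrapper/launcher names in the comments are illustrative choices.

    /* nbi_put.c -- OpenSHMEM 1.3 non-blocking put sketch (illustrative only).
     * Build and run with an OpenSHMEM implementation, e.g.:
     *   oshcc nbi_put.c -o nbi_put  &&  oshrun -np 2 ./nbi_put
     */
    #include <shmem.h>
    #include <stdio.h>

    #define N 16   /* elements; illustrative choice */

    int dest[N];   /* statically allocated arrays are symmetric across PEs */

    int main(void)
    {
        int src[N], i, me, npes;

        shmem_init();
        me = shmem_my_pe();
        npes = shmem_n_pes();

        for (i = 0; i < N; i++)
            src[i] = me * 100 + i;

        /* Issue a non-blocking put to the next PE; it may complete asynchronously. */
        shmem_int_put_nbi(dest, src, N, (me + 1) % npes);

        /* shmem_quiet completes all outstanding non-blocking operations issued
         * by this PE; the barrier then makes every PE's data safe to read. */
        shmem_quiet();
        shmem_barrier_all();

        printf("PE %d received dest[0] = %d\n", me, dest[0]);

        shmem_finalize();
        return 0;
    }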

OMB 5.4.1 with support for OpenSHMEM non-blocking benchmarks, ability to specify min and max for pt-to-pt and one-sided benchmarks, unification of utility functions, enhanced error handling, help messages, and run-time parameters and bug fixes is available. [more]

Upcoming tutorials: InfiniBand, Omni-Path, and High-Speed Ethernet (HSE) at SC '17 and HPC Meets Cloud at UCC17. Past tutorials: MVAPICH2 and MPI-T at PEARC17 and InfiniBand, Omni-Path, and High-Speed Ethernet (HSE) at ISC '17.

Members of the MVAPICH team win the Hans Meuer Best Paper Award@ISC'17 (Details)

NVIDIA Philanthropy Helps Power Supercomputing Software (Details)

MVAPICH2-Virt 2.2 GA (based on MVAPICH2 2.2 GA), targeting virtual machine-based and container-based (Docker and Singularity) HPC cloud computing environments with InfiniBand, SR-IOV, and OpenStack, is available. [more]

MVAPICH2-EA (Energy-Aware) 2.1 with energy-efficient support for IB, RoCE, and iWARP, user-defined energy-performance trade-off levels, and compatibility with OEMT is available. [more]

OSU Energy Management Tool (OEMT) 0.8 to measure the energy consumption of MPI applications is available. [more]

MVAPICH2-MIC 2.0 (based on MVAPICH2 2.0.1) with optimized pt-to-pt and collective support for native, symmetric and offload modes on clusters with Intel MICs (Xeon Phis) is available. [more]