Welcome to the home page of the MVAPICH project, led by the Network-Based Computing Laboratory (NBCL) at The Ohio State University. The MVAPICH2 software, based on the MPI 3.1 standard, delivers the best performance, scalability, and fault tolerance for high-end computing systems and servers using InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE networking technologies. This software is being used by more than 2,675 organizations in 83 countries worldwide to extract the potential of these emerging networking technologies for modern systems. As of Sep '16, more than 391,000 downloads have taken place from this project's site. This software is also distributed by many vendors as part of their software distributions.
The MVAPICH2 software is powering several supercomputers in the TOP500 list. Examples (from the June '16 ranking) include:
- 12th, 462,462-core (Stampede) at TACC
- 15th, 185,344-core (Pleiades) at NASA
- 31st, 74,520-core (Tsubame 2.5) at Tokyo Institute of Technology
The MVAPICH group provides several software libraries and tools, as listed below.
High-Performance Parallel Programming Libraries and Tools
| Library/Tool | Description |
| --- | --- |
| MVAPICH2 | Support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE |
| MVAPICH2-X | Advanced MPI features, OSU INAM, PGAS (OpenSHMEM, UPC, UPC++, and CAF), and MPI+PGAS programming models with a unified communication runtime |
| MVAPICH2-GDR | Optimized MPI for clusters with NVIDIA GPUs |
| MVAPICH2-Virt | High-performance and scalable MPI for hypervisor- and container-based HPC clouds |
| MVAPICH2-EA | Energy-aware and high-performance MPI |
| MVAPICH2-MIC | Optimized MPI for clusters with Intel KNC |
| OMB | Microbenchmark suite to evaluate MPI and PGAS (OpenSHMEM, UPC, and UPC++) libraries for CPUs and GPUs |
| OSU INAM | Network monitoring, profiling, and analysis for clusters, with MPI and scheduler integration |
| OEMT | Utility to measure the energy consumption of MPI applications |
This project is supported by funding from the U.S. National Science Foundation, the U.S. DOE Office of Science, the Ohio Board of Regents, the Ohio Department of Development, Cisco Systems, Cray, Intel, Linux Networx, Mellanox, NVIDIA, QLogic, and Sun Microsystems; and by equipment donations from Advanced Clustering, AMD, Appro, Chelsio, Dell, Fulcrum, Fujitsu, Intel, Mellanox, Microway, NetEffect, QLogic, and Sun. Other technology partners include TotalView.
(NEW) MVAPICH2 2.2 GA is available, with support for Intel KNL, OpenPOWER, EDR, Omni-Path, and RoCE v2; enhanced MPI_Comm_split performance; support for multiple MPI initializations; graceful fallback to shared memory if LiMIC2 or CMA fails; and optimization and tuning for many systems, including EDR-based clusters. [more]
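For context, MPI_Comm_split (one of the routines tuned in this release) partitions an existing communicator into disjoint sub-communicators. The sketch below is a minimal, illustrative C example, not code from the MVAPICH2 distribution; the even/odd coloring is arbitrary.

```c
/* Minimal MPI_Comm_split sketch: split MPI_COMM_WORLD into even- and
 * odd-rank sub-communicators.  The coloring here is illustrative only. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int world_rank, sub_rank;
    MPI_Comm sub_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Ranks with the same color land in the same sub-communicator;
     * the key (world_rank) keeps the original rank ordering. */
    MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &sub_comm);
    MPI_Comm_rank(sub_comm, &sub_rank);

    printf("world rank %d -> sub rank %d\n", world_rank, sub_rank);

    MPI_Comm_free(&sub_comm);
    MPI_Finalize();
    return 0;
}
```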
(NEW) MVAPICH2-X 2.2 GA is available, with Intel KNL and OpenPOWER support for MPI, PGAS (OpenSHMEM, UPC, UPC++, and CAF), and hybrid MPI+PGAS programming models; MPI support for On-Demand Paging (ODP) and Unified Memory Registration (UMR); and support for OSU INAM. [more]
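As a point of reference for the PGAS side, the following is a minimal OpenSHMEM sketch using only the standard OpenSHMEM API (nothing MVAPICH2-X-specific): each PE performs a one-sided put of its rank into a symmetric variable on its right neighbor.

```c
/* Minimal OpenSHMEM sketch: one-sided put to a neighboring PE.
 * Uses only standard OpenSHMEM calls; illustrative only. */
#include <shmem.h>
#include <stdio.h>

static int from_left;   /* symmetric variable: exists on every PE */

int main(void)
{
    shmem_init();
    int me    = shmem_my_pe();
    int npes  = shmem_n_pes();
    int right = (me + 1) % npes;

    /* Deposit 'me' into 'from_left' on the right neighbor, one-sided.
     * The destination must be symmetric; the source may be local. */
    shmem_int_put(&from_left, &me, 1, right);
    shmem_barrier_all();

    printf("PE %d received %d from its left neighbor\n", me, from_left);

    shmem_finalize();
    return 0;
}
```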
(NEW) OMB 5.3.2 is available, adding the ability to specify very large message sizes (>2 GB) for collective benchmarks, along with bug fixes. [more]
MVAPICH2-Virt 2.2rc1 (based on MVAPICH2 2.2rc1) is available, targeting virtual-machine-based and container-based HPC cloud computing environments with InfiniBand, SR-IOV, and OpenStack. [more]
The 4th annual MVAPICH User Group (MUG) meeting took place on August 15-17, 2016 in Columbus, Ohio, USA. Click here for presentation slides and videos.
Upcoming tutorials: MVAPICH2 optimization and tuning at XSEDE '16, MPI+PGAS at IEEE Cluster '16, and IB and HSE at SC '16. Past tutorials: IB and HSE at ISC '16, and MPI+PGAS at ICS '16 and PPoPP '16.
MVAPICH2-GDR 2.2rc1 (based on MVAPICH2 2.2rc1) is available, with support for high-performance non-blocking send operations, enhanced intra-node CUDA Managed-memory-aware communication, a GPU-based tuning framework for Bcast and Gather, and support for RDMA-CM communication. [more]
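The key usage pattern with a CUDA-aware MPI library such as MVAPICH2-GDR is that GPU device pointers can be passed directly to MPI calls (with CUDA support enabled at run time, e.g. MV2_USE_CUDA=1). The sketch below is a hedged illustration of that pattern, assuming two ranks each with access to a GPU; the buffer size and tag are arbitrary.

```c
/* CUDA-aware MPI sketch: pass a GPU device buffer directly to MPI.
 * Assumes two ranks, each with access to a GPU; size and tag are
 * illustrative.  Error checking omitted for brevity. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    const int n = 1 << 20;       /* 1M doubles */
    int rank;
    double *d_buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    cudaMalloc((void **)&d_buf, n * sizeof(double));
    cudaMemset(d_buf, 0, n * sizeof(double));

    /* With a CUDA-aware MPI library, device pointers are valid
     * send/receive buffers; no explicit host staging is needed. */
    if (rank == 0)
        MPI_Send(d_buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(d_buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```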
OSU InfiniBand Network Analysis and Monitoring (INAM) tool 0.9.1 is available, with support for finding routes starting from or ending at a selected node, the ability to view link utilization for a user-specified link, faster load times, use of an internal graph-rendering library, and features that work in conjunction with MVAPICH2-X 2.2rc1. [more]
MVAPICH2-EA (Energy-Aware) 2.1 is available, with energy-efficient support for IB, RoCE, and iWARP, user-defined energy-performance trade-off levels, and compatibility with OEMT. [more]
OSU Energy Management Tool (OEMT) 0.8, a utility for measuring the energy consumption of MPI applications, is available. [more]
MVAPICH2-MIC 2.0 (based on MVAPICH2 2.0.1) is available, with optimized point-to-point and collective support for native, symmetric, and offload modes on clusters with Intel MICs (Xeon Phi coprocessors). [more]