The number of downloads from the MVAPICH site has crossed 0.3 million (300,000). The MVAPICH team would like to thank all its users and organizations!!

MVAPICH2 drives the 10th-ranked, 5+ Petaflop TACC Stampede system with 519,640 cores, InfiniBand FDR and Intel MIC [more]

Welcome to the web page of the "MPI over InfiniBand, 10GigE/iWARP and RDMA over Converged Ethernet (RoCE)" project, led by the Network-Based Computing Laboratory (NBCL) of The Ohio State University. The MVAPICH2 software, supporting the MPI 3.0 standard, delivers the best performance, scalability and fault tolerance for high-end computing systems and servers using InfiniBand, 10GigE/iWARP and RoCE networking technologies. This software is used by organizations in 76 countries world-wide (Current Users) to extract the potential of these emerging networking technologies for modern systems, and more than 300,000 downloads have taken place from this project's site. It is also distributed by many InfiniBand, 10GigE/iWARP and RoCE vendors in their software distributions.

The MVAPICH2-X software package provides support for hybrid MPI+PGAS (UPC and OpenSHMEM) programming models with a unified communication runtime for emerging exascale systems. The MVAPICH2-GDR package provides support for clusters with NVIDIA GPUs supporting the GPUDirect RDMA feature. The MVAPICH2-Virt package provides support for high-performance and scalable MPI in cloud computing environments with InfiniBand and SR-IOV. The MVAPICH2-MIC package provides support for clusters with Intel MIC coprocessors.

MVAPICH2 software is powering several supercomputers in the TOP500 list; a minimal sketch of the MPI 3.0 interface these systems run appears after the list below. Examples (from the Nov '15 ranking) include:

  • 10th, 519,640-core (Stampede) at TACC
  • 13th, 185,344-core (Pleiades) at NASA
  • 25th, 76,032-core (Tsubame 2.5) at Tokyo Institute of Technology
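
As a small illustration of the MPI 3.0 interface these stacks implement, the sketch below prints the rank count and the library version string via MPI_Get_library_version (an MPI 3.0 call). The file name and build line are illustrative, and this is not code from the MVAPICH distribution.

    /* hello_version.c - minimal MPI 3.0 sketch: report the rank count and the
     * MPI library version string.  Build with the wrapper compiler shipped by
     * the MPI library, e.g.:  mpicc hello_version.c -o hello_version       */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char version[MPI_MAX_LIBRARY_VERSION_STRING];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* MPI_Get_library_version was introduced in the MPI 3.0 standard */
        MPI_Get_library_version(version, &len);

        if (rank == 0)
            printf("running %d ranks on: %s\n", size, version);

        MPI_Finalize();
        return 0;
    }

On an MVAPICH2 installation the version string identifies the MVAPICH2 release; the program can be launched with mpiexec or MVAPICH2's mpirun_rsh.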

This project is supported by funding from the U.S. National Science Foundation, the U.S. DOE Office of Science, the Ohio Board of Regents, ODOD, Cisco Systems, Cray, Intel, Linux Networx, Mellanox, NVIDIA, QLogic, and Sun Microsystems; and by equipment donations from Advanced Clustering, AMD, Appro, Chelsio, Dell, Fulcrum, Fujitsu, Intel, Mellanox, Microway, NetEffect, QLogic and Sun. Other technology partners include TotalView Technologies.


(NEW) Supercomputing '15 presentations have been linked. View them here.

(NEW) MVAPICH2 2.2b (based on MPICH 3.1.4) with enhanced performance for small messages, enhanced startup performance with SLURM (support for PMIX_Iallgather and PMIX_Ifence), affinity support with asynchronous progress threads, support for enhanced MPI_T performance variables, and improved startup performance for the PSM channel is available. [more]
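
The MPI_T performance variables mentioned above are exposed through the standard MPI 3.0 tools interface, so any tool or application can enumerate them. The sketch below is illustrative and not MVAPICH-specific.

    /* pvar_list.c - minimal MPI_T sketch: count and name the performance
     * variables (PVARs) the MPI library exposes. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, num_pvars;

        MPI_Init(&argc, &argv);
        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);

        MPI_T_pvar_get_num(&num_pvars);
        printf("library exposes %d performance variables\n", num_pvars);

        for (int i = 0; i < num_pvars; i++) {
            char name[256], desc[256];
            int name_len = sizeof(name), desc_len = sizeof(desc);
            int verbosity, var_class, bind, readonly, continuous, atomic;
            MPI_Datatype dtype;
            MPI_T_enum enumtype;

            MPI_T_pvar_get_info(i, name, &name_len, &verbosity, &var_class,
                                &dtype, &enumtype, desc, &desc_len,
                                &bind, &readonly, &continuous, &atomic);
            printf("  [%d] %s\n", i, name);
        }

        MPI_T_finalize();
        MPI_Finalize();
        return 0;
    }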

(NEW) MVAPICH2-X 2.2b providing support for advanced MPI features (User Mode Memory Registration (UMR) and Core-Direct support for non-blocking collective v-variants), hybrid MPI+PGAS (UPC, OpenSHMEM and CAF) programming models, and support for INAM 0.8.5 is available. [more]
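
For reference, OpenSHMEM codes built against MVAPICH2-X's unified runtime use the standard OpenSHMEM API. The sketch below assumes an OpenSHMEM 1.2-style library (older installations use start_pes()/_my_pe() instead); the ring exchange itself is illustrative.

    /* ring_put.c - minimal OpenSHMEM sketch: every PE writes its id into a
     * symmetric variable on its right neighbour with a one-sided put. */
    #include <shmem.h>
    #include <stdio.h>

    int dest;   /* global variable: symmetric, addressable on every PE */

    int main(void)
    {
        shmem_init();
        int me   = shmem_my_pe();
        int npes = shmem_n_pes();
        int src  = me;

        /* one-sided put into the right neighbour's copy of 'dest' */
        shmem_int_put(&dest, &src, 1, (me + 1) % npes);
        shmem_barrier_all();

        printf("PE %d of %d received %d\n", me, npes, dest);

        shmem_finalize();
        return 0;
    }

Such programs are typically built with an oshcc-style wrapper compiler and launched like an MPI job.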

(NEW) OSU InfiniBand Network Analysis and Monitoring (INAM) Tool 0.8.5 with improved network load time, capability to profile and report MPI-level inter-node communication buffer usage (RC and UD), memory utilization, and capability to display information of currently running MVAPICH2-X jobs is available. [more]

(NEW) OMB 5.1 with support for v-variants of non-blocking collectives as well as ialltoallw, and support for benchmarking GPU-aware non-blocking collectives where overlap can be computed using either CPU or GPU kernels, is available. [more]
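
The pattern these non-blocking collective benchmarks time looks roughly like the sketch below: start the collective, perform independent work, then wait. This is not the OMB source; the message size and the spin-wait stand-in for computation are illustrative assumptions.

    /* overlap_sketch.c - time an MPI_Ialltoall overlapped with dummy work. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* stand-in for application computation: spin for roughly 'usec' microseconds */
    static void dummy_compute(double usec)
    {
        double start = MPI_Wtime();
        while ((MPI_Wtime() - start) * 1e6 < usec)
            ;
    }

    int main(int argc, char **argv)
    {
        int rank, size, count = 1024;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int *sbuf = malloc((size_t)count * size * sizeof(int));
        int *rbuf = malloc((size_t)count * size * sizeof(int));
        for (int i = 0; i < count * size; i++)
            sbuf[i] = rank;

        double t0 = MPI_Wtime();
        MPI_Ialltoall(sbuf, count, MPI_INT, rbuf, count, MPI_INT,
                      MPI_COMM_WORLD, &req);
        dummy_compute(100.0);             /* work overlapped with communication */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("ialltoall + 100 us of compute took %.2f us\n",
                   (t1 - t0) * 1e6);

        free(sbuf);
        free(rbuf);
        MPI_Finalize();
        return 0;
    }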

(NEW) MVAPICH2-GDR 2.2a (based on MVAPICH2 2.2a) with support for non-blocking collectives from device buffers while exploiting Core-Direct and GPUDirect RDMA, optimized IPC thresholds for multi-GPU nodes, and support on GPU clusters with regular OFED (without GPUDirect RDMA) is available. [more]
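
CUDA-aware MPI, which MVAPICH2-GDR provides, lets applications hand device pointers directly to MPI calls. The sketch below assumes two ranks, each with a GPU; the buffer size and the simple send/receive exchange are illustrative.

    /* gpu_exchange.c - minimal CUDA-aware MPI sketch: device buffers are
     * passed straight to MPI without explicit cudaMemcpy staging; a
     * GPUDirect-RDMA-capable stack can move the data directly. */
    #include <mpi.h>
    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank;
        const int n = 1 << 20;          /* 1M floats */
        float *d_buf;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        cudaMalloc((void **)&d_buf, n * sizeof(float));
        cudaMemset(d_buf, 0, n * sizeof(float));

        /* the MPI calls receive a device pointer, not a host copy */
        if (rank == 0)
            MPI_Send(d_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(d_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);

        if (rank < 2)
            printf("rank %d exchanged %d floats from device memory\n", rank, n);

        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }

With MVAPICH2-GDR such jobs are typically launched with the MV2_USE_CUDA=1 runtime parameter set so the library treats the buffers as GPU memory.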

Upcoming Tutorial: IB and HSE at SC '15. Sneak preview here.

MVAPICH2-Virt 2.1 GA (based on MVAPICH2 2.1 GA) with support for efficient MPI communication over SR-IOV enabled InfiniBand network, integration with OpenStack, high-performance and locality-aware MPI communication with IVSHMEM, automatic communication channel selection among SR-IOV, IVSHMEM and CMA/LiMIC2 is available. [more]

MVAPICH2-EA (Energy-Aware) 2.1 with energy-efficient support for IB, RoCE and iWARP, user-defined energy-performance trade-off levels, and compatibility with OEMT is available. [more]

OSU Energy Management Tool (OEMT) 0.8 to measure the energy consumption of MPI applications is available. [more]

MVAPICH2-MIC 2.0 (based on MVAPICH2 2.0.1) with optimized pt-to-pt and collective support for native, symmetric and offload modes on clusters with Intel MICs (Xeon Phis) is available. [more]