The number of organizations using the MVAPICH2 libraries has crossed 2,600, and downloads from the MVAPICH site have crossed 379,000. The MVAPICH team would like to thank all of its users and organizations!


MVAPICH2 drives the 12th-ranked 5+ Petaflop TACC Stampede system, with 519,640 cores, InfiniBand FDR, and Intel MIC [more]


MVAPICH @ OpenStack Summit

MVAPICH @ GPU Technology Conference 2016

MVAPICH @ Open Fabrics Workshop 2016

Multiple talks presented by the MVAPICH Team.

Welcome to the home page of the MVAPICH project, led by the Network-Based Computing Laboratory (NBCL) of The Ohio State University. The MVAPICH2 software, supporting the MPI 3.1 standard, delivers the best performance, scalability, and fault tolerance for high-end computing systems and servers using InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE networking technologies. This software is being used by more than 2,600 organizations in 81 countries worldwide to extract the potential of these emerging networking technologies for modern systems. As of June '16, more than 379,000 downloads have taken place from this project's site. This software is also being distributed by many vendors as part of their software distributions.
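
As a quick illustration of the kind of application code these libraries run, here is a minimal MPI program. It uses only standard MPI 3.1 calls and can be built with the mpicc wrapper that an MVAPICH2 installation typically provides; the exact compile and launch commands depend on your installation, so treat the comments below as an example rather than a prescription.

    #include <mpi.h>
    #include <stdio.h>

    /* Minimal MPI program: each process reports its rank, and rank 0 prints
     * the library version string.
     * Typical build: mpicc hello_mpi.c -o hello_mpi
     * Typical run:   mpirun_rsh -np 4 -hostfile hosts ./hello_mpi  (or mpiexec/mpirun)
     */
    int main(int argc, char **argv)
    {
        int rank, size, len;
        char version[MPI_MAX_LIBRARY_VERSION_STRING];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {
            MPI_Get_library_version(version, &len);
            printf("MPI library: %s\n", version);
        }
        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }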

The MVAPICH2 software is powering several supercomputers in the TOP500 list. Examples (from the June '16 ranking) include:

  • 12th, 462,462-core (Stampede) at TACC
  • 15th, 185,344-core (Pleiades) at NASA
  • 31st, 74,520-core (Tsubame 2.5) at Tokyo Institute of Technology

The MVAPICH group provides several software packages as listed below.

High-Performance Parallel Programming Libraries

MVAPICH2: Basic support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE
MVAPICH2-X: Advanced support for hybrid MPI+PGAS (UPC, UPC++, CAF, and OpenSHMEM) programming models with a unified communication runtime (see the sketch after this list)
MVAPICH2-GDR: Support for clusters with NVIDIA GPUs featuring GPUDirect RDMA
MVAPICH2-Virt: Support for high-performance and scalable MPI in cloud computing environments with InfiniBand and SR-IOV
MVAPICH2-EA: Support for high-performance and energy-aware MPI
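
As a rough illustration of the hybrid MPI+PGAS model that MVAPICH2-X targets, the sketch below mixes MPI and OpenSHMEM calls in one program. It is only a sketch, written under the assumption that both interfaces can be initialized and used side by side (as the unified runtime is intended to allow); consult the MVAPICH2-X user guide for the supported initialization order, compiler wrappers, and build flags.

    #include <stdio.h>
    #include <mpi.h>
    #include <shmem.h>

    /* Hybrid MPI + OpenSHMEM sketch (hypothetical example, not from the user
     * guide): OpenSHMEM performs a one-sided put into a symmetric variable on
     * a neighboring PE, then MPI performs a collective reduction.
     */
    long counter = 0;   /* global variable => symmetric, remotely accessible */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        shmem_init();   /* assumed to coexist with MPI_Init under the unified runtime */

        int  pe   = shmem_my_pe();
        int  npes = shmem_n_pes();
        long mine = (long) pe;

        /* One-sided PGAS-style communication: put my PE id into my right neighbor. */
        shmem_long_put(&counter, &mine, 1, (pe + 1) % npes);
        shmem_barrier_all();

        /* MPI-style collective on the same set of processes. */
        long sum = 0;
        MPI_Reduce(&counter, &sum, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
        if (pe == 0)
            printf("sum of received PE ids = %ld\n", sum);

        shmem_finalize();
        MPI_Finalize();
        return 0;
    }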

Microbenchmarks

OMB: Micro-benchmarks to gauge the performance of MPI and PGAS (OpenSHMEM, UPC, UPC++) implementations on CPUs and GPUs
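
For reference, the kind of measurement these micro-benchmarks perform can be sketched in a few lines of MPI. The loop below is a simplified ping-pong in the spirit of osu_latency; it is not the actual OMB code, and a real benchmark would add warm-up iterations and sweep over message sizes.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Simplified ping-pong latency sketch (not the actual OMB source):
     * ranks 0 and 1 bounce a message back and forth and report the
     * average one-way time. Run with at least two processes.
     */
    int main(int argc, char **argv)
    {
        const int iters = 1000, size = 8;   /* message size in bytes (illustrative) */
        char *buf;
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        buf = malloc(size);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("avg one-way latency: %.2f us\n",
                   (t1 - t0) * 1e6 / (2.0 * iters));

        free(buf);
        MPI_Finalize();
        return 0;
    }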

Tools

OSU INAM: Provides network monitoring for modern HPC systems
OEMT: Utility to measure the energy consumed by MPI applications


This project is supported by funding from the U.S. National Science Foundation, the U.S. DOE Office of Science, the Ohio Board of Regents, the Ohio Department of Development, Cisco Systems, Cray, Intel, Linux Networx, Mellanox, NVIDIA, QLogic, and Sun Microsystems; and by equipment donations from Advanced Clustering, AMD, Appro, Chelsio, Dell, Fulcrum, Fujitsu, Intel, Mellanox, Microway, NetEffect, QLogic, and Sun. Other technology partners include TotalView.

Announcements


The 4th Annual MVAPICH User Group (MUG) Meeting will take place on August 15-17, 2016 in Columbus, Ohio, USA. Click here for more details.

Upcoming tutorials: IB and HSE at ISC '16 and SC '16; MPI+PGAS at IEEE Cluster '16; MVAPICH2 optimization and tuning at XSEDE '16. A past tutorial on MPI+PGAS was presented at ICS '16.

MVAPICH2-GDR 2.2rc1 (based on MVAPICH2 2.2rc1) with support for high-performance non-blocking send operations, enhanced intranode CUDA managed-memory-aware communication, a GPU-based tuning framework for Bcast and Gather, and support for RDMA-CM communication is available. [more]
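
The managed-memory support mentioned above can be used roughly as follows: with a CUDA-aware MPI library such as MVAPICH2-GDR, a buffer allocated with cudaMallocManaged can be handed directly to MPI calls, without an explicit staging copy to host memory. This is only an illustrative sketch; the exact environment variables, build flags, and supported datatypes are described in the MVAPICH2-GDR user guide.

    #include <mpi.h>
    #include <cuda_runtime.h>
    #include <stdio.h>

    /* Illustrative CUDA-aware MPI sketch: a CUDA managed buffer is passed
     * directly to non-blocking MPI send/receive calls.
     */
    int main(int argc, char **argv)
    {
        const int n = 1 << 20;   /* one million ints (illustrative size) */
        int rank;
        int *buf;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        cudaMallocManaged((void **) &buf, n * sizeof(int), cudaMemAttachGlobal);

        if (rank == 0) {
            for (int i = 0; i < n; i++) buf[i] = i;   /* host writes to managed memory */
            MPI_Request req;
            MPI_Isend(buf, n, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);  /* managed pointer passed directly */
            MPI_Wait(&req, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, n, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received, last element = %d\n", buf[n - 1]);
        }

        cudaFree(buf);
        MPI_Finalize();
        return 0;
    }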

OSU InfiniBand Network Analysis and Monitoring (INAM) Tool 0.9.1, with support for finding routes starting from or ending on a selected node, the ability to view link utilization for a user-specified link, faster load times, an internal graph rendering library, and features that work in conjunction with MVAPICH2-X 2.2rc1, is available. [more]

MVAPICH2 2.2rc1 with support for OpenPower, Omni-Path (PSM2), and RoCEv2; enhanced startup performance and reduced memory footprint with SLURM (together with an updated SLURM patch); affinity enabled by default for the TrueScale (PSM) and Omni-Path (PSM2) channels; architecture detection for the PSC Bridges system; optimization and tuning for the TACC Chameleon system; and hwloc version 1.11.2 is available. [more]

MVAPICH2-X 2.2rc1, introducing support for UPC++ and MPI+UPC++; MPI support for OpenPower, Omni-Path, and RoCEv2; UPC, OpenSHMEM, and CAF support for OpenPower and RoCEv2; hybrid MPI+UPC and MPI+OpenSHMEM support for OpenPower; and support for INAM 0.9, is available. [more]

OMB 5.3 with support for UPC++ benchmarks and a fix for OpenSHMEM benchmarks on OpenPower machines is available. [more]

MVAPICH2-Virt 2.1 GA (based on MVAPICH2 2.1 GA) with support for efficient MPI communication over SR-IOV enabled InfiniBand networks, integration with OpenStack, high-performance and locality-aware MPI communication with IVSHMEM, and automatic communication channel selection among SR-IOV, IVSHMEM, and CMA/LiMIC2 is available. [more]

MVAPICH2-EA (Energy-Aware) 2.1 with energy-efficient support for IB, RoCE, and iWARP, user-defined energy-performance trade-off levels, and compatibility with OEMT is available. [more]

OSU Energy Management Tool (OEMT) 0.8 to measure the energy consumption of MPI applications is available. [more]

MVAPICH2-MIC 2.0 (based on MVAPICH2 2.0.1) with optimized pt-to-pt and collective support for native, symmetric and offload modes on clusters with Intel MICs (Xeon Phis) is available. [more]