The MVAPICH team members participated in multiple events at SC'19!!
The OSU booth (2094) featured leading speakers from academia and industry!!
Click here to view slides of the presentations!!


The number of organizations using the MVAPICH2 libraries has crossed 3,075 in 89 countries!! The MVAPICH team thanks all of these organizations and their users!!


The 7th Annual MVAPICH User Group (MUG) Meeting took place August 19-21, 2019, in Columbus, Ohio, USA. Click here for details.


MVAPICH Delivers Sub-minute (22 sec) Job Startup for 229,376 processes!! (Details)


MVAPICH@SC 2019



Welcome to the home page of the MVAPICH project, led by the Network-Based Computing Laboratory (NBCL) of The Ohio State University. The MVAPICH2 software, based on the MPI 3.1 standard, delivers the best performance, scalability, and fault tolerance for high-end computing systems and servers using InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE networking technologies. This software is used by more than 3,075 organizations in 89 countries worldwide to extract the potential of these emerging networking technologies for modern systems. As of Apr '20, more than 719,000 downloads have taken place from this project's site. The software is also distributed by many vendors as part of their software distributions.

The MVAPICH2 software family is ABI compatible with the version of MPICH it is based on. Please refer to our download page for more details.
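
As a quick orientation for new users, below is a minimal sketch of an MPI program; the file name, process count, and hostfile used in the build and run commands afterwards are illustrative rather than prescribed by the library.

    /* hello_mvapich.c - minimal MPI program (file name is illustrative) */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);                  /* start the MPI runtime  */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank    */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of ranks  */
        MPI_Get_processor_name(name, &len);      /* host this rank runs on */

        printf("Hello from rank %d of %d on %s\n", rank, size, name);

        MPI_Finalize();
        return 0;
    }

A typical build-and-run sequence would be mpicc hello_mvapich.c -o hello followed by mpirun_rsh -np 4 -hostfile hosts ./hello (or mpiexec -n 4 ./hello). Because each MVAPICH2 release is ABI compatible with the MPICH version it is based on, a binary built against one can generally run against the other without recompilation.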

The MVAPICH2 software is powering several supercomputers in the TOP500 list. Examples (from the November '19 ranking) include:

  • 3rd, 10,649,600 cores (Sunway TaihuLight) at the National Supercomputing Center in Wuxi, China
  • 5th, 448,448 cores (Frontera) at TACC
  • 8th, 391,680 cores (ABCI) in Japan
  • 14th, 570,020 cores (Nurion) in South Korea
  • 15th, 556,104 cores (Oakforest-PACS) in Japan
  • 18th, 367,024 cores (Stampede2) at TACC
  • 32nd, 241,108 cores (Pleiades) at NASA
  • 90th, 76,032 cores (Tsubame 2.5) at Tokyo Institute of Technology

The MVAPICH group provides several software libraries as listed below.

High-Performance Parallel Programming Libraries

MVAPICH2: Support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE
MVAPICH2-Azure: Optimized support for the Microsoft Azure platform with InfiniBand
MVAPICH2-X: Advanced MPI features/support (UMR, ODP, DC, Core-Direct, SHArP, XPMEM), OSU INAM (InfiniBand Network Monitoring and Analysis), PGAS (OpenSHMEM, UPC, UPC++, and CAF), and MPI+PGAS programming models with unified communication runtime
MVAPICH2-X-AWS: Advanced MPI features (SRD and XPMEM) with support for the Amazon Elastic Fabric Adapter (EFA)
MVAPICH2-GDR: Optimized MPI for clusters with NVIDIA GPUs and for GPU-enabled deep learning applications (see the sketch after this list)
MVAPICH2-Virt: Hypervisor- and container-based (Docker and Singularity) HPC cloud with MPI and InfiniBand (SR-IOV)
MVAPICH2-EA: Energy-aware and high-performance MPI
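
To illustrate what the CUDA-aware support in MVAPICH2-GDR means in practice, the sketch below passes GPU device pointers directly to MPI calls. It is a simplified, generic example rather than MVAPICH2-specific code; the two-rank layout, buffer size, and the MV2_USE_CUDA=1 runtime setting mentioned in the comments are assumptions for illustration.

    /* gpu_pingpong.c - illustrative CUDA-aware MPI transfer on device buffers */
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        int rank;
        const int n = 1 << 20;            /* 1M floats; size is illustrative */
        float *d_buf;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        cudaSetDevice(0);                 /* simplistic device selection */
        cudaMalloc((void **)&d_buf, n * sizeof(float));

        /* With a CUDA-aware MPI (e.g. MVAPICH2-GDR run with MV2_USE_CUDA=1),
           the device pointer can be handed to MPI directly; no explicit
           cudaMemcpy staging through host memory is required. */
        if (rank == 0)
            MPI_Send(d_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(d_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }

Run with two ranks, e.g. MV2_USE_CUDA=1 mpirun_rsh -np 2 -hostfile hosts ./gpu_pingpong; without a CUDA-aware MPI, the same transfer would require staging the data through host memory with cudaMemcpy before and after each MPI call.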

Microbenchmarks

OMB: Microbenchmark suite to evaluate MPI and PGAS (OpenSHMEM, UPC, and UPC++) libraries for CPUs and GPUs
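
To give a sense of what a benchmark such as osu_latency measures, the sketch below is a simplified ping-pong latency loop; it is written in the spirit of OMB rather than being the benchmark itself, and the message size and iteration count are arbitrary.

    /* pingpong.c - simplified latency measurement in the spirit of osu_latency */
    #include <mpi.h>
    #include <stdio.h>

    #define ITER 1000                     /* iteration count is arbitrary */
    #define SIZE 8                        /* message size in bytes        */

    int main(int argc, char **argv)
    {
        int rank;
        char buf[SIZE];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < ITER; i++) {
            if (rank == 0) {              /* rank 0: send, then wait for echo */
                MPI_Send(buf, SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {       /* rank 1: receive, then echo back  */
                MPI_Recv(buf, SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)                    /* one-way latency = round trip / 2 */
            printf("avg latency: %.2f us\n", (t1 - t0) * 1e6 / (2.0 * ITER));

        MPI_Finalize();
        return 0;
    }

OMB itself adds warm-up iterations, a sweep over message sizes, and host/device buffer options on top of this basic pattern.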

Tools

OSU INAM: Network monitoring, profiling, and analysis for clusters with MPI and scheduler integration
OEMT: Utility to measure the energy consumption of MPI applications

This project is supported by funding from the U.S. National Science Foundation, the U.S. DOE Office of Science, the U.S. Department of Defense, the Ohio Board of Regents, the Ohio Department of Development, Cisco Systems, Cray, Intel, Linux Networx, Mellanox, Microsoft, NVIDIA, Pattern Computer, QLogic, and Sun Microsystems; and by equipment donations from Advanced Clustering, AMD, Appro, Chelsio, Dell, Fulcrum, Fujitsu, Intel, Mellanox, Microway, NetEffect, Pattern Computer, QLogic, and Sun. Other technology partners include TotalView.

Announcements


MVAPICH2-X 2.3rc3 with support for improved communication performance using the DC transport, an improved allgatherv algorithm for small messages, optimized XPMEM-based MPI collective operations for various platforms, support for the AWS EFA adapter and SRD transport protocol, support for hybrid MPI+OpenSHMEM, optimized support for ARM, OpenPOWER, and Intel platforms, and support for INAM 0.9.5 is available. [more]

MVAPICH2 2.3.3 GA (based on MPICH v3.2.1) with enhanced performance for intra-node collective operations, support for the PMIx protocol with the SLURM and JSM process managers, support for RDMA_CM-based multicast group creation, enhanced point-to-point and collective tunings for Fulhame@EPCC, Catalyst@ARM, Mayer@Sandia, and Frontera@TACC, an updated default cache line size of 64 bytes on x86_64 platforms, enhanced spread mapping for an even distribution of ranks, support for multiple MPI_T PVARs and CVARs for point-to-point and collective operations, support for sub-communicator-level MPI_T PVARs, architecture detection support for the Marvell QDR RoCE HCA, the runtime parameter 'MV2_SUPPRESS_HCA_WARNINGS' to suppress HCA warnings, and multiple bug fixes is available. [more]
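
As a brief illustration of the MPI_T tools interface mentioned above, the sketch below enumerates the control variables (CVARs) exposed by the MPI library. It uses only standard MPI_T calls, so it is not MVAPICH2-specific; which variables it reports depends on the particular library build.

    /* list_cvars.c - enumerate the MPI_T control variables the library exposes */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, num, i;

        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);  /* init the tools interface */
        MPI_Init(&argc, &argv);

        MPI_T_cvar_get_num(&num);
        for (i = 0; i < num; i++) {
            char name[256], desc[1024];
            int name_len = sizeof(name), desc_len = sizeof(desc);
            int verbosity, bind, scope;
            MPI_Datatype dtype;
            MPI_T_enum enumtype;

            MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &dtype,
                                &enumtype, desc, &desc_len, &bind, &scope);
            printf("CVAR %d: %s\n", i, name);
        }

        MPI_Finalize();
        MPI_T_finalize();
        return 0;
    }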

MVAPICH2-GDR 2.3.3 GA (based on MVAPICH2 2.3.3 GA) with support for GDRCopy v2.0, enhanced datatype support for CUDA kernel-based Allreduce, enhanced inter-node point-to-point performance for CUDA managed buffers on POWER9 systems, enhanced CUDA-Aware MPI_Allreduce on NVLink-enabled GPU systems, enhanced CUDA-Aware MPI_Pack and MPI_Unpack, and enhanced point-to-point tuning for POWER9 systems is available. [more]

OSU INAM 0.9.5 (OSU InfiniBand Network Analysis and Monitoring Tool) with support for the PBS job launcher, enhanced fabric discovery using optimized OpenMP-based multi-threaded designs, the ability to gather InfiniBand performance counters at sub-second granularity for very large clusters, a redesigned database layout to reduce database size, OpenMP-based multi-threaded designs to handle database purge, read, and insert operations simultaneously, and features in conjunction with MVAPICH2-X 2.3rc2 is available. Click here for more details!

MVAPICH2-Azure 2.3.2 (based on MVAPICH2 2.3.2) with enhanced tuning for point-to-point and collective operations, targeted at Azure HB and HC virtual machine instances with InfiniBand, and with flexibility for 'one-click' deployment is available.

MVAPICH2-X-AWS 2.3 (based on MVAPICH2-X 2.3) with support for the Amazon EFA adapter's Scalable Reliable Datagram (SRD) transport protocol, support for XPMEM-based intra-node communication for point-to-point and collective operations, and enhanced tuning for point-to-point and collective operations, targeted at AWS instances with the Amazon Linux 2 AMI and EFA support, is available.

OMB 5.6.2 with support for GPU-Aware multi-threaded point-to-point operations (osu_latency_mt) and multiple bug fixes is available. [more]

The MVAPICH team is now on Twitter! Follow us for up-to-date information on our events and tutorials! #MVAPICH.