The number of downloads from the MVAPICH site has crossed 0.5 million!! The MVAPICH team would like to thank all its users and organizations!!


The MVAPICH team is a proud partner of the new NSF-funded Frontera supercomputer!! (Details)


MVAPICH Delivers Sub-minute (22 sec) Job Startup for 229,376 processes!! (Details)


MVAPICH@IXPUG'18


MVAPICH@ISC 2018



Welcome to the home page of the MVAPICH project, led by the Network-Based Computing Laboratory (NBCL) of The Ohio State University. The MVAPICH2 software, based on the MPI 3.1 standard, delivers the best performance, scalability, and fault tolerance for high-end computing systems and servers using InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE networking technologies. This software is being used by more than 2,950 organizations in 86 countries worldwide to extract the potential of these emerging networking technologies for modern systems. As of Oct '18, more than 500,000 downloads have taken place from this project's site. This software is also being distributed by many vendors as part of their software distributions.

The MVAPICH2 software family is ABI compatible with the version of MPICH it is based on. Please refer to our download page for more details.
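As a quick orientation, the following is a minimal MPI 3.1 program of the kind these libraries run; because of the ABI compatibility noted above, a binary built with mpicc against one MPICH-derived library can generally be run against the other's shared runtime without rebuilding. The build and launch commands in the comment assume a standard installation and are purely illustrative.

    /* Minimal MPI 3.1 example; illustrative build and run commands:
     *   mpicc hello.c -o hello
     *   mpirun -np 4 ./hello
     * MPI_Get_library_version() reports which MPI library the binary is
     * actually running against, which is handy when exercising ABI
     * compatibility between MVAPICH2 and MPICH.
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char version[MPI_MAX_LIBRARY_VERSION_STRING];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_library_version(version, &len);

        if (rank == 0)
            printf("Running on %d processes with: %s\n", size, version);

        MPI_Finalize();
        return 0;
    }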

The MVAPICH2 software is powering several supercomputers in the TOP500 list. Examples (from the June '18 ranking) include:

  • 2nd, 10,649,600 cores (Sunway TaihuLight) at National Supercomputing Center in Wuxi, China
  • 12th, 556,104 cores (Oakforest-PACS) in Japan
  • 15th, 367,024 cores (Stampede2) at TACC
  • 24th, 241,108 cores (Pleiades) at NASA
  • 62nd, 76,032 cores (Tsubame 2.5) at Tokyo Institute of Technology

The MVAPICH group provides several software libraries as listed below.

High-Performance Parallel Programming Libraries

MVAPICH2: Support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE
MVAPICH2-X: Advanced MPI features/support (UMR, ODP, DC, Core-Direct, SHArP, XPMEM), OSU INAM (InfiniBand Network Monitoring and Analysis), PGAS (OpenSHMEM, UPC, UPC++, and CAF), and MPI+PGAS programming models with unified communication runtime
MVAPICH2-GDR: Optimized MPI for clusters with NVIDIA GPUs and for GPU-enabled Deep Learning applications (see the CUDA-aware sketch after this list)
MVAPICH2-Virt: Hypervisor- and container-based (Docker and Singularity) HPC cloud with MPI and InfiniBand (SR-IOV)
MVAPICH2-EA: Energy-aware and high-performance MPI
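
MVAPICH2-GDR's central feature is CUDA-aware MPI: device buffers are passed directly to MPI calls and the library handles the GPU data movement. The fragment below is a minimal sketch of that usage pattern, assuming a CUDA-capable build (with MVAPICH2-GDR this is typically controlled at run time via MV2_USE_CUDA=1); the message size is illustrative and error checking is omitted.

    /* Minimal CUDA-aware MPI sketch: the device pointer d_buf is handed
     * directly to MPI_Send/MPI_Recv; no explicit cudaMemcpy staging to a
     * host buffer is needed when the MPI library is CUDA-aware.
     * Run with two processes, e.g.: mpirun -np 2 ./gpu_sendrecv
     */
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        const int N = 1 << 20;   /* 1M floats; illustrative message size */
        float *d_buf;
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        cudaMalloc((void **)&d_buf, N * sizeof(float));

        if (rank == 0)
            MPI_Send(d_buf, N, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(d_buf, N, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);

        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }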

Microbenchmarks

OMB: Microbenchmark suite to evaluate MPI and PGAS (OpenSHMEM, UPC, and UPC++) libraries for CPUs and GPUs
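
For orientation, the sketch below (not part of OMB itself) reproduces the basic ping-pong pattern that a point-to-point latency benchmark such as osu_latency times: rank 0 sends a small message to rank 1, which echoes it back, and half of the averaged round-trip time is reported as the one-way latency. Message size and iteration count are illustrative.

    /* Ping-pong latency sketch; run with exactly two processes. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        const int ITERS = 1000;
        char buf[8] = {0};
        int rank, i;
        double start, end;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        start = MPI_Wtime();
        for (i = 0; i < ITERS; i++) {
            if (rank == 0) {
                MPI_Send(buf, 8, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, 8, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, 8, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, 8, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        end = MPI_Wtime();

        if (rank == 0)   /* average one-way latency in microseconds */
            printf("%.2f us\n", (end - start) * 1e6 / (2.0 * ITERS));

        MPI_Finalize();
        return 0;
    }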

Tools

OSU INAM: Network monitoring, profiling, and analysis for clusters with MPI and scheduler integration
OEMT: Utility to measure the energy consumption of MPI applications

This project is supported by funding from the U.S. National Science Foundation, the U.S. DOE Office of Science, the U.S. Department of Defense, the Ohio Board of Regents, the Ohio Department of Development, Cisco Systems, Cray, Intel, Linux Networx, Mellanox, Microsoft, NVIDIA, Pattern Computer, QLogic, and Sun Microsystems, and by equipment donations from Advanced Clustering, AMD, Appro, Chelsio, Dell, Fulcrum, Fujitsu, Intel, Mellanox, Microway, NetEffect, Pattern Computer, QLogic, and Sun. Other technology partners include TotalView.

Announcements


MVAPICH2-GDR 2.3rc1 (based on MVAPICH2 2.3) with support for CUDA 9.2, Volta GPUs, OpenPOWER with NVLink, IBM XLC and PGI compilers with CUDA kernel features, enhanced small message performance, optimized collectives with host buffers for many platforms, SHARP support for allreduce collectives, and optimized large-message collectives (reduce, broadcast, and allreduce) for deep learning workloads on various platforms is available. [more]

OMB 5.4.4 with several bug fixes is available. [more]

MVAPICH2-X 2.3rc1 with support for XPMEM-based point-to-point and MPI collective operations (Reduce and All-Reduce), an enhanced asynchronous progress design for non-blocking point-to-point and collectives, hybrid MPI+OpenSHMEM, optimized support for ARM, OpenPOWER, and Intel Skylake, and support for INAM 0.9.3 is available. [more]

Upcoming tutorials: InfiniBand, Omni-Path, and High-Speed Ethernet (HSE) at SC '18. Past tutorials: InfiniBand, Omni-Path, and High-Speed Ethernet (HSE) at ISC '18, HPC Meets Cloud at ICDCS '18, and MVAPICH2 and MPI-T at PEARC '18.

MVAPICH2 2.3, based on MPICH v3.2.1, with enhanced small message performance, improved performance for host-based transfers with CUDA, enhanced collective performance, optimized support for IBM POWER9, ARM ThunderX, Intel Skylake, and Intel KNL systems, improved MPI process-to-core mapping, enhanced MPI initialization performance, and support for running MPI jobs in Singularity is available; it has also been tested with CLANG v5.0.0. [more]

The 6th Annual MVAPICH User Group (MUG) Meeting took place on August 6-8, 2018 in Columbus, Ohio, USA. Click here for details.

OSU InfiniBand Network Analysis and Monitoring (INAM) Tool 0.9.3 is available. It adds a command-line option for INAMD to query end nodes, a web page that displays the database size in real time, enhanced interaction between the web application and the SLURM job launcher for increased portability, a web application updated to Java v1.8 and Spring Boot v1.5.9, and features that work in conjunction with MVAPICH2-X 2.3b. [more]

MVAPICH2-Virt 2.2 GA (based on MVAPICH2 2.2 GA), targeting virtual machine-based and container-based (Docker and Singularity) HPC cloud computing environments with InfiniBand, SR-IOV, and OpenStack, is available. [more]

MVAPICH2-EA (Energy-Aware) 2.1 with energy-efficient support for IB, RoCE, and iWARP, user-defined energy-performance trade-off levels, and compatibility with OEMT is available. [more]

OSU Energy Management Tool (OEMT) 0.8 to measure the energy consumption of MPI applications is available. [more]