MVAPICH/MVAPICH2 Project
Ohio State University



MVAPICH: MPI over InfiniBand, 10GigE/iWARP and RoCE
The number of downloads from this site has crossed 200,000. The MVAPICH team would like to thank all its users!!

Slides from recent presentations: GTC'14, IBUG'14, and HPCAC-Lugano'14.

MVAPICH2 drives the 7th-ranked, 5+ Petaflop TACC Stampede system with 519,640 cores, InfiniBand FDR and Intel MIC [more]

Welcome to the web page of the "MPI over InfiniBand, 10GigE/iWARP and RDMA over Converged Ethernet (RoCE)" project, led by the Network-Based Computing Laboratory (NBCL) of the Ohio State University. The MVAPICH2 software, supporting the MPI 3.0 standard, delivers the best performance, scalability and fault tolerance for high-end computing systems and servers using InfiniBand, 10GigE/iWARP and RoCE networking technologies. This software is being used by more than 2,150 organizations worldwide in 72 countries (Current Users) to extract the potential of these emerging networking technologies for modern systems. As of April '14, more than 208,000 downloads have taken place from this project's site. This software is also being distributed by many InfiniBand, 10GigE/iWARP and RoCE vendors in their software distributions. The MVAPICH2-X software package provides support for hybrid MPI+PGAS (UPC and OpenSHMEM) programming models with a unified communication runtime for emerging exascale systems (a minimal hybrid sketch appears after the list below). MVAPICH2 software is powering several supercomputers in the TOP500 list. Examples (from the November '13 ranking) include:
  • 7th, 519,640-core (Stampede) at TACC
  • 11th, 74,358-core (Tsubame 2.5) at Tokyo Institute of Technology
  • 16th, 96,192-core (Pleiades) at NASA
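
To make the hybrid MPI+PGAS model mentioned above more concrete, here is a minimal sketch that mixes an MPI collective with a one-sided OpenSHMEM atomic in the same program. It uses only standard MPI and OpenSHMEM 1.0 calls; the initialization order shown and the assumption that MPI ranks and OpenSHMEM PEs refer to the same processes are illustrative, and should be checked against the MVAPICH2-X user guide rather than read as the prescribed usage.

    /* Minimal hybrid MPI + OpenSHMEM sketch (illustrative only).
     * Assumes a unified runtime, such as MVAPICH2-X, where MPI ranks and
     * OpenSHMEM PEs map to the same set of processes. */
    #include <stdio.h>
    #include <mpi.h>
    #include <shmem.h>

    int counter = 0;        /* global => symmetric, usable as an OpenSHMEM target */

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);          /* MPI side of the hybrid program */
        start_pes(0);                    /* OpenSHMEM 1.0 initialization */

        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* One-sided OpenSHMEM atomic: each PE increments the counter on its
         * right neighbor, then all PEs synchronize. */
        shmem_int_inc(&counter, (shmem_my_pe() + 1) % shmem_n_pes());
        shmem_barrier_all();

        /* MPI collective over the same processes: sum all counters on rank 0. */
        int total = 0;
        MPI_Reduce(&counter, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum of counters = %d (expected %d)\n", total, size);

        MPI_Finalize();
        return 0;
    }

Under MVAPICH2-X, a program of this kind would typically be built with the provided compiler wrappers and launched like any other MPI job.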

This project is supported by funding from the U.S. National Science Foundation, U.S. DOE Office of Science, Ohio Board of Regents, ODOD, Cray, Cisco Systems, Intel, Linux Networx, Mellanox, NVIDIA, QLogic, and Sun Microsystems; and by equipment donations from Advanced Clustering, AMD, Appro, Chelsio, Dell, Fulcrum, Fujitsu, Intel, Mellanox, Microway, NetEffect, QLogic and Sun. Other technology partners include TotalView Technologies. More details can be found here.

Announcements [more]
  • (NEW) MVAPICH2 2.0rc1 (based on MPICH 3.1) with support for MPI-3 RMA, the MPI-T interface, checkpointing with the Hydra process manager, tuned communication on Ivy Bridge, and improved job-startup time (a minimal MPI-3 RMA sketch follows this list). [more]

  • (NEW) MVAPICH2-X 2.0rc1 providing support for hybrid MPI+PGAS (UPC and OpenSHMEM) programming models for exascale systems, with optimized UPC collectives and optimized OpenSHMEM intra-node performance. [more]

  • (NEW) OMB 4.3 provides MPI-3 RMA benchmarks and UPC collective benchmarks [more]

  • MVAPICH2-GDR 2.0b (based on MVAPICH2 2.0b) with support for GPUDirect RDMA (GDR) for NVIDIA GPUs. [more]
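
As a concrete pointer to what the MPI-3 RMA support in the 2.0 series covers, the sketch below allocates a window with MPI_Win_allocate (one of the MPI-3 additions) and has every rank atomically add 1 to a counter on rank 0 inside a fence epoch. It is a minimal illustration using only standard MPI-3 calls, not MVAPICH2-specific example code; with MVAPICH2 it would simply be compiled with mpicc and run under the usual launcher.

    /* Minimal MPI-3 RMA sketch: every rank adds 1 to a counter exposed by
     * rank 0 through an RMA window. Standard MPI-3 calls only; nothing
     * here is specific to MVAPICH2. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int *base;
        MPI_Win win;
        /* MPI-3 addition: allocate window memory and window object together. */
        MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                         MPI_COMM_WORLD, &base, &win);
        *base = 0;

        int one = 1;
        MPI_Win_fence(0, win);                      /* open an RMA epoch */
        MPI_Accumulate(&one, 1, MPI_INT,            /* atomic += 1 ...   */
                       0, 0, 1, MPI_INT, MPI_SUM,   /* ... on rank 0, displacement 0 */
                       win);
        MPI_Win_fence(0, win);                      /* close the epoch; updates visible */

        if (rank == 0)
            printf("counter = %d (expected %d)\n", *base, size);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }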

Publications [more]
  • Papers at Recent and Upcoming Conferences (WSSSPE '13, SC '13, OpenSHMEM '13, PGAS '13, ICPP '13, Cluster '13, EuroMPI '13, HotI '13, XSCALE '13, HPDC '13, ISC '13, ICS '13, IPDPS '13, CCGrid '13, SC '12, PGAS '12, and EuroMPI '12) [more]
  • "A Scalable and Portable Approach to Accelerate Hybrid HPL on Heterogeneous CPU-GPU Clusters", Cluster '13, BEST Student Paper Award
Presentations