The 2nd Annual MVAPICH User Group (MUG) Meeting was held on August 25-27, 2014 in Columbus, Ohio, USA. Presentation slides are available.
MVAPICH2 drives the 7th-ranked, 5+ Petaflop TACC Stampede system with 519,640 cores, InfiniBand FDR and Intel MIC. [more]

Welcome to this web page for the "MPI over InfiniBand, 10GigE/iWARP and RDMA over Converged Ethernet (RoCE)" project, led by the Network-Based Computing Laboratory (NBCL) of The Ohio State University. The MVAPICH2 software, supporting the MPI 3.0 standard, delivers high performance, scalability and fault tolerance for high-end computing systems and servers using InfiniBand, 10GigE/iWARP and RoCE networking technologies. This software is being used by organizations world-wide in 73 countries (Current Users) to extract the potential of these emerging networking technologies for modern systems, and a large number of downloads have taken place from this project's site. The software is also distributed by many InfiniBand, 10GigE/iWARP and RoCE vendors in their software distributions. The MVAPICH2-X software package provides support for hybrid MPI+PGAS (UPC and OpenSHMEM) programming models with a unified communication runtime for emerging exascale systems (a minimal hybrid sketch follows the list below). MVAPICH2 software is powering several supercomputers in the TOP500 list. Examples (from the June '14 ranking) include:

  • 7th, 519,640-core (Stampede) at TACC
  • 13th, 74,358-core (Tsubame 2.5) at Tokyo Institute of Technology
  • 23rd, 96,192-core (Pleiades) at NASA

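As a quick illustration of the hybrid MPI+PGAS model mentioned above, the following is a minimal sketch of a program that mixes MPI and OpenSHMEM calls over a single set of processes, in the style a unified runtime such as MVAPICH2-X's is meant to support. The calls follow the OpenSHMEM 1.0 interface (start_pes, shmalloc, shmem_long_inc); the example and its initialization order are illustrative assumptions, not taken from the MVAPICH2-X documentation.

    /* Minimal hybrid MPI+OpenSHMEM sketch (illustrative; assumes an
     * OpenSHMEM 1.0 interface and a runtime, such as MVAPICH2-X's,
     * that lets MPI and OpenSHMEM coexist in one program). */
    #include <stdio.h>
    #include <mpi.h>
    #include <shmem.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);      /* MPI side of the hybrid program */
        start_pes(0);                /* OpenSHMEM 1.0-style initialization */

        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Symmetric-heap allocation, visible to one-sided OpenSHMEM ops */
        long *counter = (long *) shmalloc(sizeof(long));
        *counter = 0;
        shmem_barrier_all();

        /* Every PE atomically increments the counter on PE 0 (PGAS style) */
        shmem_long_inc(counter, 0);
        shmem_barrier_all();

        if (rank == 0)
            printf("counter on PE 0 = %ld (expected %d)\n", *counter, size);

        /* MPI collective over the same processes (message-passing style) */
        MPI_Barrier(MPI_COMM_WORLD);

        shfree(counter);
        MPI_Finalize();
        return 0;
    }

The point of the unified runtime is that both models share one set of connections and resources, rather than each programming model bringing up its own communication stack.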

This project is supported by funding from the U.S. National Science Foundation, the U.S. DOE Office of Science, the Ohio Board of Regents, ODOD, Cray, Cisco Systems, Intel, Linux Networx, Mellanox, NVIDIA, QLogic, and Sun Microsystems; and by equipment donations from Advanced Clustering, AMD, Appro, Chelsio, Dell, Fulcrum, Fujitsu, Intel, Mellanox, Microway, NetEffect, QLogic and Sun. Our other technology partner is TotalView Technologies.

Announcements


(NEW) MVAPICH2 2.1a (based on MPICH 3.1.2) released, with support for PMI-2-based startup with SLURM, improved startup in UD-Hybrid mode, added MPI-T PVARs, CUDA support for MPI_Scan and MPI_Exscan, optimized collectives for PSM, and improved 2-level communicator creation. [more]
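As a rough illustration of what CUDA support for MPI_Scan means at the application level, the sketch below passes device-resident buffers directly to the collective. The buffer contents, datatype and one-GPU-per-process assumption are illustrative and not taken from the release notes.

    /* Illustrative sketch: MPI_Scan over GPU device buffers with a
     * CUDA-aware MPI library. Rank r should receive r + 1 in d_out. */
    #include <stdio.h>
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        int rank;
        int *d_in, *d_out;            /* device-resident buffers */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        cudaSetDevice(0);             /* assumes one GPU per process */
        cudaMalloc((void **)&d_in,  sizeof(int));
        cudaMalloc((void **)&d_out, sizeof(int));

        int one = 1;
        cudaMemcpy(d_in, &one, sizeof(int), cudaMemcpyHostToDevice);

        /* Inclusive prefix sum computed directly on GPU buffers */
        MPI_Scan(d_in, d_out, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        int result;
        cudaMemcpy(&result, d_out, sizeof(int), cudaMemcpyDeviceToHost);
        printf("rank %d: scan result = %d\n", rank, result);

        cudaFree(d_in);
        cudaFree(d_out);
        MPI_Finalize();
        return 0;
    }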

(NEW) MVAPICH2-X 2.1a providing support for hybrid MPI+PGAS (UPC and OpenSHMEM) programming models for exascale systems. [more]

Vote for MVAPICH! The MVAPICH software stack has been selected as a candidate for the 2014 HPCWire Readers' Choice Awards under category 13 (Best HPC software product or technology). Interested in voting for it? Click here to vote; voting closes on Oct 3, 2014.

MUG'14 presentation PDFs are now linked. Click here for more details.

MVAPICH2-GDR 2.0 (based on MVAPICH2 2.0) with optimized small message transfer, MPI-3 RMA GPU-GPU communication and atomic operations, optimized collectives and efficient datatype support using GPU Direct RDMA (GDR) for NVIDIA GPUs. [more]
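To show what GPU-to-GPU communication with a CUDA-aware MPI such as MVAPICH2-GDR looks like from the application, here is a minimal point-to-point sketch that hands GPU device pointers directly to MPI_Send/MPI_Recv. The payload size and device selection are assumptions for illustration, and whether GPU Direct RDMA is actually used underneath depends on the hardware and library configuration.

    /* Illustrative sketch: device pointers passed straight to MPI
     * point-to-point calls; requires at least two ranks. */
    #include <stdio.h>
    #include <mpi.h>
    #include <cuda_runtime.h>

    #define N (1 << 20)   /* 1 MiB payload, chosen arbitrarily */

    int main(int argc, char **argv)
    {
        int rank;
        char *d_buf;      /* GPU memory, handed directly to MPI */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        cudaSetDevice(0);             /* assumes one GPU per process */
        cudaMalloc((void **)&d_buf, N);
        cudaMemset(d_buf, rank, N);

        if (rank == 0) {
            /* The CUDA-aware library moves the data between GPUs,
             * using GPU Direct RDMA when the platform supports it. */
            MPI_Send(d_buf, N, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(d_buf, N, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d bytes into GPU memory\n", N);
        }

        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }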