3rd Annual MVAPICH User Group (MUG) Meeting to take place on August 19-21, 2015 in Columbus, Ohio, USA. Click here for an Overview of the Confirmed Speakers.


The number of downloads from the MVAPICH site has crossed a quarter million (250,000). The MVAPICH team would like to thank all its users and organizations!!


MVAPICH2 drives the 8th-ranked 5+ Petaflop TACC Stampede system with 519,640 cores, InfiniBand FDR and Intel MIC [more]


Welcome to the home page of the "MPI over InfiniBand, 10GigE/iWARP and RDMA over Converged Ethernet (RoCE)" project, led by the Network-Based Computing Laboratory (NBCL) of The Ohio State University. The MVAPICH2 software, supporting the MPI 3.0 standard, delivers the best performance, scalability and fault tolerance for high-end computing systems and servers using InfiniBand, 10GigE/iWARP and RoCE networking technologies. This software is being used by organizations in 75 countries world-wide (Current Users) to extract the potential of these emerging networking technologies for modern systems, and more than 250,000 downloads have taken place from this project's site. This software is also being distributed by many InfiniBand, 10GigE/iWARP and RoCE vendors in their software distributions.

The MVAPICH2-X software package provides support for hybrid MPI+PGAS (UPC and OpenSHMEM) programming models with a unified communication runtime for emerging exascale systems. The MVAPICH2-GDR package provides support for clusters with NVIDIA GPUs supporting the GPUDirect RDMA feature. The MVAPICH2-Virt package provides support for high-performance and scalable MPI in cloud computing environments with InfiniBand and SR-IOV. The MVAPICH2-MIC package provides support for clusters with Intel MIC coprocessors.

MVAPICH2 software is powering several supercomputers in the TOP 500 list. Examples (from the July '15 ranking) include:

  • 8th, 519,640-core (Stampede) at TACC
  • 11th, 185,344-core (Pleiades) at NASA
  • 22nd, 76,032-core (Tsubame 2.5) at Tokyo Institute of Technology
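
Because MVAPICH2 is a standard-conforming MPI library, existing MPI programs run over InfiniBand, 10GigE/iWARP and RoCE without source changes. The following minimal sketch (the file name hello_mvapich.c is illustrative) can be built with the mpicc wrapper shipped with MVAPICH2:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        /* Initialize the MPI runtime; MVAPICH2 selects the underlying
           transport (InfiniBand, iWARP, RoCE, shared memory) here. */
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }

Compile with "mpicc hello_mvapich.c -o hello" and launch, for example, with "mpirun_rsh -np 2 host1 host2 ./hello" using the mpirun_rsh launcher bundled with MVAPICH2 (host names here are placeholders).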


This project is supported by funding from the U.S. National Science Foundation, the U.S. DOE Office of Science, the Ohio Board of Regents, ODOD, Cisco Systems, Cray, Intel, Linux Networx, Mellanox, NVIDIA, QLogic, and Sun Microsystems; and by equipment donations from Advanced Clustering, AMD, Appro, Chelsio, Dell, Fulcrum, Fujitsu, Intel, Mellanox, Microway, NetEffect, QLogic and Sun. Other technology partners include TotalView Technologies.

Announcements


MVAPICH2-Virt 2.1rc2 (based on MVAPICH2 2.1rc2) with support for efficient MPI communication over SR-IOV enabled InfiniBand networks, high-performance and locality-aware MPI communication with IVSHMEM, IVSHMEM device auto-detection in virtual machines, automatic communication channel selection among SR-IOV, IVSHMEM and CMA/LiMIC2, and easy configuration through runtime parameters is available. [more]

MVAPICH2-GDR 2.1rc2 (based on MVAPICH2 2.1rc2) with CUDA 7.0 compatibility, CUDA-aware support for the MPI_Rsend and MPI_Irsend primitives, added parallel intra-node communication channels, optimized H-H, H-D, D-H, and intra-node D-D communication along with tuning for point-to-point and collective operations, and updated sm_20 kernel optimization for datatype processing is available. [more]
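
The CUDA-aware path in MVAPICH2-GDR lets applications pass GPU device pointers directly to MPI calls instead of staging data through host memory. A minimal sketch of this usage pattern (buffer size and peer ranks are illustrative; CUDA support is enabled via the MV2_USE_CUDA runtime parameter, which GDR builds typically turn on by default):

    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        int rank;
        const int N = 1 << 20;
        float *d_buf;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Allocate the communication buffer directly on the GPU. */
        cudaMalloc((void **)&d_buf, N * sizeof(float));

        if (rank == 0) {
            /* Device pointer handed straight to MPI; the CUDA-aware
               runtime moves the data, using GPUDirect RDMA when available. */
            MPI_Send(d_buf, N, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(d_buf, N, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }

        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }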

Upcoming Tutorials: IB and HSE at ISC '15 and SC '15; and Optimization and Tuning of applications using MVAPICH2 and MVAPICH2-X at XSEDE '15.

MVAPICH2 2.1 GA (based on MPICH 3.1.4) with EDR support, enhanced startup performance, CR support with DMTCP, large-message RMA, optimized collectives for 4K processes, MPI-T PVARs support, and enhancements for the PSM interface is available. [more]
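
The MPI-T performance variables (PVARs) mentioned above are exposed through the standard MPI 3.0 tools information interface, so generic MPI_T code can enumerate whatever counters a given MVAPICH2 build provides. A minimal sketch that lists the available PVARs (the names and counts printed depend on the library build):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, num_pvars, i;

        /* The MPI_T tools interface is initialized separately from MPI_Init
           and exposes the performance variables the library implements. */
        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);

        MPI_T_pvar_get_num(&num_pvars);
        printf("This MPI library exposes %d performance variables\n", num_pvars);

        for (i = 0; i < num_pvars; i++) {
            char name[256], desc[256];
            int name_len = sizeof(name), desc_len = sizeof(desc);
            int verbosity, var_class, bind, readonly, continuous, atomic;
            MPI_Datatype datatype;
            MPI_T_enum enumtype;

            MPI_T_pvar_get_info(i, name, &name_len, &verbosity, &var_class,
                                &datatype, &enumtype, desc, &desc_len,
                                &bind, &readonly, &continuous, &atomic);
            printf("  [%d] %s : %s\n", i, name, desc);
        }

        MPI_T_finalize();
        return 0;
    }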

MVAPICH2-X 2.1 GA, providing support for hybrid MPI+PGAS (UPC, OpenSHMEM and CAF) programming models, support for UH CAF 3.0.9 with efficient point-to-point read/write and CO_REDUCE and CO_BROADCAST collectives, support for OpenSHMEM 1.0h, improved job startup and memory footprint with OpenSHMEM, and support for UPC 2.20.0, is available. [more]
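
To illustrate the hybrid MPI+PGAS style that MVAPICH2-X targets, here is a rough sketch mixing OpenSHMEM one-sided puts with an MPI collective in one program. It is written against the OpenSHMEM 1.0 API named in the announcement; the initialization sequence shown (MPI_Init followed by start_pes) is an assumption, and the MVAPICH2-X user guide should be consulted for the exact rules of the unified runtime.

    #include <mpi.h>
    #include <shmem.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        /* Assumption: the unified MVAPICH2-X runtime allows a program to
           initialize and mix both models; see the user guide for the
           required initialization order. */
        MPI_Init(&argc, &argv);
        start_pes(0);

        int me = _my_pe();
        int npes = _num_pes();

        /* Symmetric heap allocation (OpenSHMEM one-sided data). */
        int *counter = (int *) shmalloc(sizeof(int));
        *counter = me;
        shmem_barrier_all();

        /* One-sided put: write this PE's id into the next PE's counter. */
        shmem_int_put(counter, &me, 1, (me + 1) % npes);
        shmem_barrier_all();

        /* MPI collective over the same set of processes. */
        int sum = 0;
        MPI_Allreduce(counter, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        if (me == 0)
            printf("sum of received ids across %d PEs = %d\n", npes, sum);

        shfree(counter);
        MPI_Finalize();
        return 0;
    }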

MVAPICH2-MIC 2.0 (based on MVAPICH2 2.0.1) with optimized point-to-point and collective support for native, symmetric and offload modes on clusters with Intel MICs (Xeon Phis) is available. [more]