MVAPICH/MVAPICH2 Project
Ohio State University




Overview of the Project

InfiniBand Architecture, iWARP, and RDMA over Converged Ethernet (RoCE) are emerging as open interconnect and protocol standards for HPC clusters, data centers, and file/storage systems. This page focuses on research and development of high-performance, scalable MPI designs for InfiniBand, iWARP, and RoCE. It also includes information on the distribution of the popular MVAPICH2 software package, mailing lists, the OSU MPI, UPC, and OpenSHMEM benchmarks, performance results, and publications.

Information on our research and projects on designing other high-end systems (data centers, clustered storage and file systems) and interconnects can be obtained here.

Currently, we provide the following two MPI packages (a minimal usage sketch follows the list):

  • MVAPICH2 2.0rc1 (MPI-3 over OFA-IB-CH3, OFA-IB-Nemesis, OFA-iWARP-CH3, OFA-RoCE-CH3, Shared Memory-CH3, PSM-CH3 (QLogic InfiniPath), TCP/IP-CH3, TCP/IP-Nemesis and Shared Memory-Nemesis)
  • MVAPICH2-X 2.0rc1 (MPI+PGAS (UPC and OpenSHMEM) over OFA-IB-CH3)
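
To illustrate basic usage, the sketch below shows a minimal MPI-3 program. It is only an example, not part of the MVAPICH2 distribution itself; it assumes the mpicc compiler wrapper and the mpirun_rsh launcher that ship with MVAPICH2, and the file and host names used are hypothetical.

    /* hello_mvapich.c -- a minimal MPI-3 program (sketch).
     * It can be built with the mpicc wrapper provided by MVAPICH2 and
     * launched over any of the supported channels (OFA-IB-CH3,
     * OFA-RoCE-CH3, PSM-CH3, TCP/IP-Nemesis, ...). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }

A typical build-and-run sequence, using hypothetical host names node01 through node04, would look roughly like:

    mpicc hello_mvapich.c -o hello_mvapich
    mpirun_rsh -np 4 node01 node02 node03 node04 ./hello_mvapich
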

Please refer to the menubar for more details on the above packages, download information, OSU benchmarks, sample performance numbers, publications, current users, mailing lists and FAQs.

Mailing Lists and Additional Information

There are two mailing lists associated with this project. One is for project-related announcements; the other is for discussion of all issues (user installation/build problems, performance problems, features, contribution of patches and modules, and general questions) related to the different interfaces (OFA-IB, OFA-Hybrid, OFA-iWARP, OFA-RoCE, and PSM) of MVAPICH2 and MVAPICH2-X. If you are experiencing any problems installing or using the above packages, please refer to the FAQs. If the problem is not solved, please post a note to the mvapich-discuss mailing list.