MVAPICH2 2.3.7 Quick Start Guide
MVAPICH Team
Network-Based Computing Laboratory
Department of Computer Science and Engineering
The Ohio State University
http://mvapich.cse.ohio-state.edu
Copyright (c) 2001-2022
Network-Based Computing Laboratory,
headed by Dr. D. K. Panda.
All rights reserved.
Last revised: March 2, 2022
This Quick Start contains the necessary information for MVAPICH2 users to download, install, and use MVAPICH2 2.3.7. Please refer to our User Guide for a comprehensive list of all features and instructions on how to use them.
MVAPICH2 (pronounced as “em-vah-pich 2”) is open-source MPI software that exploits the novel features and mechanisms of high-performance networking technologies (InfiniBand, iWARP, RDMA over Converged Enhanced Ethernet (RoCE v1 and v2), Slingshot 10, and Rockport Networks) to deliver the best performance and scalability to MPI applications. The software has been developed by the Network-Based Computing Laboratory (NBCL), headed by Prof. Dhabaleswar K. (DK) Panda, since 2001.
More details on the MVAPICH2 software, the users list, mailing lists, sample performance numbers on a wide range of platforms and interconnects, the set of OSU benchmarks, and related publications can be obtained from our website.
The MVAPICH2 2.3.7 source code package includes MPICH 3.2.1. All the required files are present in a single tarball.
Download the most recent distribution tarball from http://mvapich.cse.ohio-state.edu/download/mvapich/mv2/mvapich2-2.3.7.tar.gz
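For example, the tarball can be fetched and unpacked from the command line (wget is assumed here; any download tool will do):

    $ wget http://mvapich.cse.ohio-state.edu/download/mvapich/mv2/mvapich2-2.3.7.tar.gz
    $ tar xzf mvapich2-2.3.7.tar.gz
    $ cd mvapich2-2.3.7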
If you’re using a Mellanox InfiniBand, RoCE, iWARP, Slingshot, or Rockport Networks network adapter, you can use the default configuration, as sketched below.
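A minimal build sequence with the default configuration might look like the following (the installation prefix /usr/local/mvapich2 is only an example; adjust it for your system):

    $ ./configure --prefix=/usr/local/mvapich2
    $ make -j4
    $ make install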
If you’re using an Intel TrueScale InfiniBand adapter or an Intel Omni-Path adapter, you should select the PSM channel at configure time, as sketched below.
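A sketch of a PSM-channel build, assuming the PSM headers and libraries are installed in their default locations (again, the prefix is only an example):

    $ ./configure --with-device=ch3:psm --prefix=/usr/local/mvapich2
    $ make -j4
    $ make install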
MVAPICH2 supports many other configure and runtime options that may be useful for advanced users. Please refer to our User Guide for complete details.
In this section we show how to build and run a “hello world” program that uses MPI.
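As a sketch, assuming the MVAPICH2 bin directory is on your PATH, a source file hello.c containing a standard MPI hello world (MPI_Init, MPI_Comm_rank, MPI_Finalize), and a hostfile named hosts in the current directory, compilation and launch with mpirun_rsh could look like:

    $ mpicc -o hello hello.c
    $ mpirun_rsh -np 4 -hostfile hosts ./hello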
Hostfile Format
The mpirun_rsh hostfile format allows users to specify hostnames, one per line, optionally with a multiplier and an HCA specification. The multiplier saves typing by letting you specify a blocked distribution of MPI ranks with a single line per hostname. The HCA specification allows you to force an MPI rank to use a particular HCA.
The optional components are delimited by a ‘:’. Comments and empty lines are also allowed. Comments start with ‘#’ and continue to the next newline.
The following demonstrates the distribution of MPI ranks when using different hostfiles:
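As an illustration (the hostnames and the HCA name mlx5_0 below are placeholders), a hostfile such as:

    # four ranks on host1 using HCA mlx5_0, two ranks on host2
    host1:4:mlx5_0
    host2:2

launched with mpirun_rsh -np 6 would, under the blocked distribution described above, place ranks 0-3 on host1 and ranks 4-5 on host2.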
Environment Variables
Environment variables are specified using the ‘NAME=VALUE’ syntax, placed directly before the command name.
For example, to pass an environment variable named FOO with the value BAR:
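Reusing the hypothetical hello binary and hosts hostfile from the earlier sketch:

    $ mpirun_rsh -np 4 -hostfile hosts FOO=BAR ./hello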
MVAPICH2 supports other launchers and resource managers such as Hydra, SLURM, and PBS. Please refer to our User Guide for complete details.
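As one example, a job could also be started with the Hydra launcher’s mpiexec (using the same hypothetical hosts file and hello binary as above):

    $ mpiexec -f hosts -n 4 ./hello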
Please see the MVAPICH2 User Guide and our website, http://mvapich.cse.ohio-state.edu, for more information.