MVAPICH2 1.2 User Guide
Last revised: March 21, 2009
InfiniBand and 10GbE/iWARP are emerging as high-performance interconnects delivering low latency and high bandwidth. They are also getting widespread acceptance due to their open standards.
MVAPICH (pronounced as “em-vah-pich”) is an open-source MPI software to exploit the novel features and mechanisms of InfiniBand, iWARP and other RDMA-enabled interconnects and deliver best performance and scalability to MPI applications. This software is developed in the Network-Based Computing Laboratory (NBCL), headed by Prof. Dhabaleswar K. (DK) Panda.
Currently, there are two versions of this MPI: MVAPICH with MPI-1 semantics and MVAPICH2 with MPI-2 semantics. This open-source MPI software project started in 2001 and a first high-performance implementation was demonstrated at Supercomputing ’02 conference. After that, this software has been steadily gaining acceptance in the HPC and InfiniBand community. As of February 2, 2009, more than 840 organizations (National Labs, Universities and Industry) world-wide have downloaded this software from OSU’s web site directly. In addition, many InfiniBand and iWARP vendors, server vendors, and systems integrators have been incorporating MVAPICH/MVAPICH2 into their software stacks and distributing it. Several InfiniBand systems using MVAPICH/MVAPICH2 have obtained positions in the TOP 500 ranking. MVAPICH and MVAPICH2 are also available with the Open Fabrics Enterprise Distribution (OFED) stack. Both MVAPICH and MVAPICH2 distributions are available under BSD licensing.
More details on MVAPICH/MVAPICH2 software, users list, mailing lists, sample performance numbers on a wide range of platforms and interconnects, a set of OSU benchmarks, related publications, and other InfiniBand- and iWARP-related projects (parallel file systems, storage, data centers) can be obtained from the following URL:
This document contains necessary information for MVAPICH2 users to download, install, test, use, tune and troubleshoot MVAPICH2 1.2. As we receive feedback from users and fix bugs, we release new tarballs and continuously update this document. Thus, we strongly encourage you to refer to our web page for updates.
This guide is designed to take the user through all the steps involved in configuring, installing, running and tuning MPI applications over InfiniBand using MVAPICH2 1.2.
In Section 3 we describe all the features in MVAPICH2 1.2. As you read through this section, please note our new features (highlighted as NEW) in the 1.2 series. Some of these features are designed to optimize specific types of MPI applications and achieve greater scalability. Section 4 describes in detail the configuration and installation steps. This section enables the user to identify specific compilation flags which can be used to turn some of the features on or off. Basic usage of MVAPICH2 is explained in Section 5. Section 6 provides instructions for running MVAPICH2 with some of the advanced features. Section 8 describes the usage of the OSU Benchmarks. If you have any problems using MVAPICH2, please check Section 9 where we list some of the common problems people face. In Section 10 we suggest some tuning techniques for multi-thousand node clusters using some of our new features. Finally, in Section 11 we list all important run-time parameters, their default values and a brief description of what each parameter stands for.
MVAPICH2 (MPI-2 over InfiniBand) is an MPI-2 implementation based on MPICH2 ADI3 layer. It also supports all MPI-1 functionalities. MVAPICH2 1.2 is available as a single integrated package (with MPICH2 1.0.7).
The current release supports the following four underlying transport interfaces:
Please note that support for the VAPI interface has been dropped from MVAPICH2 1.2 because the OpenFabrics interface has become more popular. MVAPICH2 users still using the VAPI interface are strongly requested to migrate to the OpenFabrics-IB interface.
MVAPICH2-1.2 delivers the same level of performance as MVAPICH 1.0.1, the latest release package of MVAPICH supporting the MPI-1 standard. In addition, MVAPICH2 1.2 provides support and optimizations for other MPI-2 features, multi-threading and fault-tolerance (checkpoint-restart). The complete set of features of MVAPICH2 1.2 is:
This uDAPL support is generic and can work with other networks that provide uDAPL interface. Please note that the stability and performance of MVAPICH2 with uDAPL depends on the stability and performance of the uDAPL library used. Starting from version 1.2, MVAPICH2 supports both uDAPL v2 and v1.2 on Linux.
The MVAPICH2 1.2 package and the project also include the following provisions:
The MVAPICH2 installation process is designed to enable the most widely utilized features on the target build OS by default. Supported operating systems include Linux and Solaris. The default interface is OpenFabrics IB/iWARP on Linux and uDAPL on Solaris. uDAPL and TCP/IP devices can also be explicitly selected on Linux. The installation section provides generic instructions for building from a tarball or from our latest sources. Please see the subsection for the device you are targeting for specific configuration instructions.
The MVAPICH2 1.2 source code package includes MPICH2 1.0.7. All the required files are present as a single tarball. Download the most recent distribution tarball from:
Unpack the tarball and use the standard GNU procedure to compile:
$ tar xzf mvapich2-1.2.tar.gz
$ cd mvapich2-1.2
$ ./configure --prefix=<path-to-install>
$ make
$ make install
These instructions assume you have already installed subversion.
The MVAPICH2 SVN repository is available at:
Please keep in mind the following guidelines before deciding which version to check out:
$ svn co https://mvapich.cse.ohio-state.edu/svn/mpi/mvapich2/branches/1.2
$ svn co https://mvapich.cse.ohio-state.edu/svn/mpi/mvapich2/trunk mvapich2
$ svn co https://mvapich.cse.ohio-state.edu/svn/mpi/mvapich2/tags/1.2
The mvapich2 directory under your present working directory contains a working copy of the MVAPICH2 source code. Now that you have obtained a copy of the source code, you need to update the files in the source tree:
$ cd mvapich2
$ ./maint/updatefiles
This script will generate all of the source and configuration files you need to build MVAPICH2. If the command ”autoconf” on your machine does not run autoconf 2.59 or later, but you do have a new enough autoconf available, then you can specify the correct one with the AUTOCONF environment variable (the AUTOHEADER environment variable is similar). Once you’ve prepared the working copy by running maint/updatefiles, just follow the usual configuration and build procedure:
$ ./configure --prefix=<path-to-install>
$ make
$ make install
With this release of MVAPICH2, the mpirun_rsh/mpispawn framework from the MVAPICH distribution is now provided as an alternative to mpd/mpiexec. By default both process managers are installed.
The mpirun_rsh/mpispawn framework launches jobs on demand in a manner more scalable than mpd/mpiexec. Using mpirun_rsh also alleviates the need to start daemons in advance on nodes used for MPI jobs.
There is now a configuration option that can be used to allow mpicc and the other MPI compiler commands to automatically link MPI programs to SLURM's PMI library.
--with-slurm=<path to slurm installation>
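For example, the option can be combined with the usual device selection on the configure line (the installation path /usr below is illustrative; use your site's SLURM prefix):

$ ./configure --with-rdma=gen2 --with-slurm=/usr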
OpenFabrics IB/iWARP is the default interface on Linux. It can be explicitly selected by configuring with:
$ ./configure --with-rdma=gen2
Configuration Options for OpenFabrics IB/iWARP
The uDAPL interface is the default on Solaris. It can be explicitly selected on both Solaris and Linux by configuring with:
$ ./configure --with-rdma=udapl
Configuration options for uDAPL
The use of TCP/IP requires the explicit selection of a TCP/IP enabled channel. The recommended channel is ch3:sock and it can be selected by configuring with:
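A typical invocation, assuming the standard MPICH2 device-selection flag, is:

$ ./configure --with-device=ch3:sock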
Additional instructions for configuring with TCP/IP can be found in the MPICH2 documentation available at:
MVAPICH2 provides a variety of MPI compilers to support applications written in different programming languages. Please use mpicc, mpif77, mpiCC, or mpif90 to compile applications. The correct compiler should be selected depending upon the programming language of your MPI application.
These compilers are available in the MVAPICH2_HOME/bin directory. If you specified a different installation directory through $PREFIX, all the above compilers will be present in the $PREFIX/bin directory instead.
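For example, to compile a C application (the file name cpi.c is illustrative):

$ mpicc -o cpi cpi.c

Fortran applications can be compiled in the same way with mpif77 or mpif90.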
Examples of running programs using mpirun_rsh:
$ mpirun_rsh -np 4 n0 n1 n2 n3 ./cpi
This command launches cpi on nodes n0, n1, n2 and n3, one process per node. By default ssh is used.
$ mpirun_rsh -rsh -np 4 n0 n1 n2 n3 ./cpi
This command launches cpi on nodes n0, n1, n2 and n3, one process per node, using rsh instead of ssh.
$ mpirun_rsh -np 4 -hostfile hosts ./cpi
A list of target nodes must be provided in the file hosts, one per line. MPI ranks are assigned in the order the hosts are listed in the hosts file or in the order they are passed to mpirun_rsh. That is, if the nodes are listed as n0 n1 n0 n1, then n0 will have two processes, rank 0 and rank 2, whereas n1 will have rank 1 and 3. This rank distribution is known as "cyclic". If the nodes are listed as n0 n0 n1 n1, then n0 will have ranks 0 and 1, whereas n1 will have ranks 2 and 3. This rank distribution is known as "block".
Many parameters of the MPI library can be configured at run-time using environmental variables. In order to pass any environment variable to the application, simply put the variable names and values just before the executable name, like in the following example:
$ mpirun_rsh -np 4 -hostfile hosts ENV1=value ENV2=value ./cpi
Note that the environmental variables should be put immediately before the executable.
Alternatively, you may also place environmental variables in your shell environment (e.g. .bashrc). These will be automatically picked up when the application starts executing.
Note that there are many different parameters which could be used to improve the performance of applications depending upon their requirements from the MPI library. For a discussion on how to identify such parameters, see Section 10.
Other options of mpirun_rsh can be obtained using
$ mpirun_rsh --help
Note that mpirun_rsh is sensitive to the ordering of the command-line options.
SLURM is an open-source resource manager designed by Lawrence Livermore National Laboratory.
SLURM software package and its related documents can be downloaded from:
Once SLURM is installed and the daemons are started, applications compiled with MVAPICH2 can be launched by SLURM, e.g.
$ srun -n2 --mpi=none ./a.out
The use of SLURM enables many good features such as explicit CPU and memory binding. For example, if you have two processes and want to bind the first process to CPU 0 and Memory 0, and the second process to CPU 4 and Memory 1, then it can be achieved by:
$ srun --cpu_bind=v,map_cpu:0,4 --mem_bind=v,map_mem:0,1 -n2 --mpi=none ./a.out
For more information about SLURM and its features please visit SLURM website.
Prerequisites: ssh should be enabled between the front-end nodes and the compute nodes.
Please follow these steps to setup MPD:
$ export MPD_BIN=$MVAPICH2_HOME/bin
$ export PATH=$MVAPICH2_HOME/bin:$PATH
$MVAPICH2_HOME is the installation path of your MVAPICH2, as specified by $PREFIX when you configure MVAPICH2.
Specify the hostnames of the compute nodes in a file. For example, if you have a hostfile containing (the hostnames below are illustrative, one compute node per line):
n0
n1
n2
n3
then one process per node will be started on each one of these compute nodes.
$ mpdboot -n 4 -f hostfile
Note: mpdboot also looks for a default hostfile named mpd.hosts. If you have created the hostfile as mpd.hosts, you can omit the "-f hostfile" option.
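To verify that the daemons have started, you can run the MPD trace utility:

$ mpdtrace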
This should list all the nodes specified in the hostfile, not necessarily in the order specified in the hostfile.
So far, we have described setting up the environment, which is independent of the underlying device used by MVAPICH2. In the next sections, we present details specific to the different devices.
To start multiple processes, mpiexec can be used in the following fashion:
$ mpiexec -n 4 ./cpi
Four processes will be started on the compute nodes n0, n1, n2 and n3. mpiexec can also be run with several options. “$ mpiexec --help” lists all the possible options. A useful option is to specify a machinefile which holds the process mapping to machines. It can also be used to specify the number of processes to be run on each host. The machinefile option can be used with mpiexec as follows:
$ mpiexec -machinefile mf -n 4 ./cpi
where the machine file "mf" contains the process-to-machine mapping. For example, if you want to run all 4 processes on n0, then "mf" contains the following lines:
$ cat mf
n0
n0
n0
n0
Environmental variables can be set with mpiexec as follows:
$ mpiexec -n 4 -env ENV1 value1 -env ENV2 value2 ./cpi
Note that the environmental variables should be put immediately before the executable file. The mpiexec command also propagates exported variables in its runtime environment to all processes by default. Exporting a variable before running mpiexec has the same effect as explicitly passing its value with the -env command line option. The command above could be done in the following manner when using a Bourne shell derivative:
$ export ENV1=value1
$ export ENV2=value2
$ mpiexec -n 4 ./cpi
In MVAPICH2, Gen2-iWARP support is enabled with the use of the run-time environment variable "MV2_USE_IWARP_MODE".
In addition to this flag, all the systems to be used need a one-time setup to enable RDMA CM usage: the local IP address to be used by RDMA CM must be placed in the file /etc/mv2.conf on each node, and the file must have appropriate read permissions (see the RDMA CM troubleshooting entries for details).
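A minimal sketch of this setup is shown below; the address 10.1.1.1 is illustrative and should be replaced by the IP address assigned to the iWARP/InfiniBand interface on that node (run as root):

$ echo 10.1.1.1 > /etc/mv2.conf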
Programs can be executed as follows:
$ mpiexec -n 4 -env MV2_USE_IWARP_MODE 1 -env ENV1 value1 prog
The iWARP device also provides TotalView debugging and shared library support. Please refer to Sections 5.2.9 and 5.2.10 for shared library and TotalView support, respectively.
MVAPICH2 can be configured with the uDAPL device, as described in Section 4.5. To compile MPI applications, please refer to Section 5.1. In order to run MPI applications with uDAPL support, please specify the environmental variable MV2_DAPL_PROVIDER. As an example,
$ mpiexec -n 4 -env MV2_DAPL_PROVIDER OpenIB-cma ./cpi
$ export MV2_DAPL_PROVIDER=OpenIB-cma
$ mpiexec -n 4 ./cpi
Please check the /etc/dat.conf file on Linux or /etc/dat/dat.conf on Solaris to find all the available uDAPL service providers. The default value for the uDAPL provider will be chosen, if no environment variable is provided at runtime. If you are using OpenFabrics software stack on Linux, the default DAPL provider is OpenIB-cma for DAPL-1.2, and ofa-v2-ib0 for DAPL-2.0. If you are using Solaris, the default DAPL provider is ibd0.
The uDAPL device also provides TotalView debugging and shared library support. Please refer to Sections 5.2.9 and 5.2.10 for shared library and TotalView support, respectively.
If you would like to run an MPI job using IPoIB but your IB card is not the default interface for IP traffic, you have two options. For both options, assume that each compute node has an IPoIB interface configured (the examples below use addresses in the 192.168.1.x range).
The MPI Job Uses IPoIB: In this scenario, you will start up mpd like normal. However, you will need to create a machine file for mpiexec that tells mpiexec to use a particular interface.
$ cat - > $(MPD_HOSTFILE)
compute1
$ mpdboot -n 4 -f $(MPD_HOSTFILE)
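The machine file (the $(MACHINE_FILE) passed to mpiexec below) then carries an ifhn= entry for each host. The hostnames and addresses here are illustrative, and the entry format assumes the mpd machinefile ifhn= convention:

compute1 ifhn=192.168.1.1
compute2 ifhn=192.168.1.2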
The ifhn portion tells mpiexec to use the interface associated with that IP address for each machine. You can now run your MPI application using IPoIB similar to the following.
$ mpiexec -n $(NUM_PROCESS) -f $(MACHINE_FILE) $(MPI_APPLICATION)
Both MPD And the MPI Job Use IPoIB: In this scenario you will start up mpd in a modified fashion. However, you will not need to create a machine file for mpiexec. Your hostsfile for mpdboot must contain the IP addresses, or hostnames mapped to these addresses, of each machine's IPoIB interface. The only exception is that you do not list the IP address or hostname of the local machine. This is instead specified on the command line of the mpdboot command using the --ifhn option. Example:
For example, if the local machine's IPoIB address is 192.168.1.1 and the remaining nodes' IPoIB addresses are 192.168.1.2 through 192.168.1.4 (addresses here are illustrative):
$ cat - > $(MPD_HOSTFILE)
192.168.1.2
192.168.1.3
192.168.1.4
$ mpdboot -n 4 -f $HOSTSFILE --ifhn=192.168.1.1
The --ifhn option tells mpdboot to use the interface corresponding to that IP address to create the mpd ring and run MPI jobs. You can now run your MPI application using IPoIB similar to the following.
$ mpiexec -n $(NUM_PROCESS) $(MPI_APPLICATION)
Note: For both options, you can replace the IPoIB addresses with aliases.
MVAPICH2 contains optimized Lustre ADIO support for the OpenFabrics/Gen2 device. The Lustre directory should be mounted on all nodes on which MVAPICH2 processes will be running. Compile MVAPICH2 with ADIO support for Lustre as described in Section 4. If your Lustre mount is /mnt/datafs on nodes n0 and n1, on node n0, you can compile and run your program as follows:
$ mpicc -o perf romio/test/perf.c
$ mpirun_rsh -np 2 n0 n1 <path to perf>/perf -fname /mnt/datafs/testfile
If you have enabled support for multiple file systems, append the prefix ”lustre:” to the name of the file. For example:
$ mpicc -o perf romio/test/perf.c
$ mpirun_rsh -np 2 n0 n1 ./perf -fname lustre:/mnt/datafs/testfile
MVAPICH2 provides shared library support. This feature allows you to build your application on top of the MPI shared library. If you choose this option, you will still be able to compile applications with static libraries; however, when shared library support is enabled, your applications will be built on top of the shared libraries by default. The following commands provide some examples of how to build and run your application with shared library support.
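As an illustration (the application name is a placeholder, and the shared libraries are assumed to be installed under $MVAPICH2_HOME/lib):

$ mpicc -o cpi cpi.c
$ mpirun_rsh -np 2 n0 n1 LD_LIBRARY_PATH=$MVAPICH2_HOME/lib ./cpi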
MVAPICH2 provides TotalView support. The following commands provide an example of how to build and run your application with TotalView support. Note: running TotalView requires a correct setup in your environment. If you encounter any problem with your setup, please check with your system administrator for help.
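A typical sequence is sketched below; it assumes TotalView's usual launcher invocation (totalview <starter> -a <starter arguments>), and the application is compiled with -g so that debugging symbols are available:

$ mpicc -g -o cpi cpi.c
$ totalview mpirun_rsh -a -np 2 n0 n1 ./cpi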
In this section, we present the usage instructions for advanced features provided by MVAPICH2.
MVAPICH2 has integrated multi-rail support. Run-time variables are used to specify the control parameters of the multi-rail support: the number of adapters with MV2_NUM_HCAS (section 11.26), the number of ports per adapter with MV2_NUM_PORTS (section 11.27), and the number of queue pairs per port with MV2_NUM_QP_PER_PORT (section 11.28). These variables default to 1 if you do not specify them.
Large messages are striped across all HCAs. The threshold for striping is MV2_VBUF_TOTAL_SIZE × MV2_NUM_PORTS × MV2_NUM_QP_PER_PORT × MV2_NUM_HCAS.
MVAPICH2 also gives the flexibility to balance short message traffic over multiple HCAs in a multi-rail configuration. The run-time variable MV2_SM_SCHEDULING can be used to choose between the various load balancing options available. It can be set to USE_FIRST (default) or ROUND_ROBIN. In the USE_FIRST scheme, the HCA in slot 0 is always used to transmit the short messages. If ROUND_ROBIN is chosen, messages are sent across all HCAs alternately.
Following is an example to run multi-rail support with two adapters, using one port per adapter and one queue pair per port:
$ mpirun_rsh -np 2 n0 n1 MV2_NUM_HCAS=2 MV2_NUM_PORTS=1 MV2_NUM_QP_PER_PORT=1 prog
$ mpiexec -n 2 -env MV2_NUM_HCAS 2 -env MV2_NUM_PORTS 1 -env MV2_NUM_QP_PER_PORT 1 prog
Note that you don’t need to specify MV2_NUM_PORTS and MV2_NUM_QP_PER_PORT since they default to 1, so you can type:
$ mpirun_rsh -np 2 n0 n1 MV2_NUM_HCAS=2 prog
$ mpirun_rsh -np 2 n0 n1 MV2_NUM_HCAS=2 MV2_SM_SCHEDULING=ROUND_ROBIN prog
$ mpiexec -n 2 -env MV2_NUM_HCAS 2 prog
In MVAPICH2-1.2, run-time variables are used to switch various optimization schemes on and off. Following is a list of optimization schemes and their controlling environmental variables; for a full list please refer to Section 11:
MVAPICH2 provides system-level checkpoint/restart functionality for the OpenFabrics Gen2-IB interface. Three methods are provided to invoke checkpointing: Manual, Automated and Application-Initiated Synchronous Checkpointing. In order to utilize the checkpoint/restart functionality, there are a couple of steps that need to be followed.
Users are strongly encouraged to read the administrator's guide of BLCR, and to test BLCR on the target platform, before using the checkpointing feature of MVAPICH2.
Now, your system is set up to use the Checkpoint/Restart features of MVAPICH2. Several MVAPICH2 parameters need to be set to control the configuration and usage of this feature.
In order to provide maximum flexibility to end users who wish to use the checkpoint/restart features of MVAPICH2, we’ve provided three different methods which can be used to take the checkpoints during the execution of the MPI application. These methods are described as follows:
With application-initiated synchronous checkpointing, the MPI application itself requests a checkpoint from within its own code at a point where it is safe to do so.
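A minimal sketch is shown below; it assumes the MVAPICH2_Sync_Checkpoint() call provided by MVAPICH2's checkpoint/restart support (check the mpi.h shipped with your installation):

#include <mpi.h>

/* Illustrative sketch: each process reaches this point and requests a
   synchronous checkpoint (assumes MVAPICH2_Sync_Checkpoint() is available). */
int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    /* ... application computation ... */
    MVAPICH2_Sync_Checkpoint();
    /* ... more computation ... */
    MPI_Finalize();
    return 0;
}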
To restart a job from a checkpoint, users need to issue another BLCR command, "cr_restart", with the checkpoint file name of the MPI job console as the parameter, usually context.<pid>. The checkpoint file name of the MPI job console can be specified when issuing the checkpoint; see "cr_checkpoint --help" for more information. Please note that the names of the checkpoint files of the MPI processes will be assigned according to the environment variable MV2_CKPT_FILE ($MV2_CKPT_FILE.<number of checkpoint>.<process rank>).
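For example, if the console process had PID 12345 (the PID here is illustrative), the job can be restarted with:

$ cr_restart context.12345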
Please refer to Section 9.6 for troubleshooting with Checkpoint/Restart.
In MVAPICH2, to use RDMA CM the run-time variable MV2_USE_RDMA_CM needs to be set, as described in Section 11.
In addition to this flag, all the systems to be used need the same one-time setup described for the iWARP interface: the local IP address to be used by RDMA CM must be placed in /etc/mv2.conf with appropriate read permissions.
Programs can be executed as follows:
$ mpirun_rsh -np 2 n0 n1 MV2_USE_RDMA_CM=1 prog
$ mpiexec -n 2 -env MV2_USE_RDMA_CM 1 prog
In MVAPICH2, support for shared memory based collectives has been enabled for MPI applications running over OpenFabrics Gen2-IB, Gen2-iWARP and uDAPL stack. Currently, this support is available for the following collective operations:
Optionally, these features can be turned off at runtime by using the following parameters:
Please refer to Section 11 for further details.
MVAPICH2 supports hot-spot and congestion avoidance using the InfiniBand multi-pathing mechanism. This support is available for MPI applications using the OpenFabrics stack and InfiniBand adapters.
To enable this functionality, the run-time variable MV2_USE_HSAM (Section 11.52) can be set, as shown in the following example:
$ mpirun_rsh -np 2 n0 n1 MV2_USE_HSAM=1 ./cpi
$ mpiexec -n 2 -env MV2_USE_HSAM 1 ./cpi
This functionality automatically defines the number of paths for hot-spot avoidance. Alternatively, the maximum number of paths to be used between a pair of processes can be defined by using a run-time variable MV2_NUM_QP_PER_PORT (Section 11.28).
We expect this functionality to show benefits in the presence of at least partially non-overlapping paths in the network. OpenSM, the subnet manager distributed with OpenFabrics, supports the LMC mechanism, which can be used to create multiple paths:
$ opensm -l4
will start the subnet manager with an LMC value of four, creating sixteen paths between every pair of nodes.
MVAPICH2 supports network fault recovery by using the InfiniBand Automatic Path Migration (APM) mechanism. This support is available for MPI applications using the OpenFabrics stack and InfiniBand adapters.
To enable this functionality, the run-time variable MV2_USE_APM (section 11.48) can be set, as shown in the following example:
$ mpirun_rsh -np 2 n0 n1 MV2_USE_APM=1 ./cpi
$ mpiexec -n 2 -env MV2_USE_APM 1 ./cpi
MVAPICH2 also supports testing Automatic Path Migration in the subnet in the absence of network faults. This can be controlled by using a run-time variable MV2_USE_APM_TEST (section 11.49). This should be combined with MV2_USE_APM as follows:
$ mpirun_rsh -np 2 n0 n1 MV2_USE_APM=1 MV2_USE_APM_TEST=1 ./cpi
$ mpiexec -n 2 -env MV2_USE_APM 1 -env MV2_USE_APM_TEST 1 ./cpi
MVAPICH2 supports user defined CPU mapping through Portable Linux Processor Affinity (PLPA) library (http://www.open-mpi.org/projects/plpa/). The feature is especially useful on multi-core systems, where performance may be different if processes are mapped to different cores. The mapping can be specified by setting the environment variable MV2_CPU_MAPPING.
For example, if you want to run 4 processes per node and utilize cores 0, 1, 4, 5 on each node, you can specify:
$ mpirun_rsh -np 64 -hostfile hosts MV2_CPU_MAPPING=0:1:4:5 ./a.out
$ mpiexec -n 64 -env MV2_CPU_MAPPING 0:1:4:5 ./a.out
In this way, process 0 on each node will be mapped to core 0, process 1 will be mapped to core 1, process 2 will be mapped to core 4, and process 3 will be mapped to core 5. The mapping for each process is separated by a single ":".
PLPA supports more flexible notations when specifying core mapping. More details can be found at:
The mpiname application is provided with MVAPICH2 to assist with determining the MPI library version and related information. The usage of mpiname is as follows:
Print MPI library information. With no OPTION, the output is the same as -v.
-a print all information
-c print compilers
-d print device
-h display this help and exit
-n print the MPI name
-o print configuration options
-r print release date
-v print library version
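For example, to print all of the above information at once:

$ mpiname -a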
If you have arrived at this point, you have successfully installed MVAPICH2. Congratulations!! In the mvapich2-1.2/osu_benchmarks directory, we provide these basic performance tests:
The benchmarks are also periodically updated. The latest copy of the benchmarks can be downloaded from http://mvapich.cse.ohio-state.edu/benchmarks/. Sample performance numbers for these benchmarks on representative platforms with InfiniBand and iWARP adapters are also included on our projects’ web page. You are welcome to compare your performance numbers with our numbers. If you see any big discrepancy, please let us know by sending an email to email@example.com.
Based on our experience and feedback we have received from our users, here we include some of the problems a user may experience and the steps to resolve them. If you are experiencing any other problem, please feel free to contact us by sending an email to firstname.lastname@example.org.
MVAPICH2 can be used over four underlying transport interfaces, namely OpenFabrics (Gen2), OpenFabrics (Gen2-iWARP), uDAPL and TCP/IP. Based on the underlying library being utilized, the troubleshooting steps may differ. However, some of the troubleshooting hints are common to all underlying libraries. Thus, in this section, we have divided the troubleshooting tips into five parts: general troubleshooting, and troubleshooting over each of the four transport interfaces.
This is a problem which typically occurs due to the presence of multiple installations of MVAPICH2 on the same set of nodes. The problem is due to the presence of an mpi.h other than the one used for compiling and executing the program. This problem can be resolved by making sure that the mpi.h from the other installation is not included.
fork() and system() are supported for the OpenFabrics device as long as the kernel being used is Linux 2.6.16 or newer. Additionally, the version of OFED used should be 1.2 or higher. The environment variable IBV_FORK_SAFE=1 must also be set to enable fork support.
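For example, the variable can be passed on the mpirun_rsh command line in the usual way:

$ mpirun_rsh -np 2 n0 n1 IBV_FORK_SAFE=1 ./a.out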
There is a known bug with the PathScale compiler (before version 2.5) when building MVAPICH2. This problem will be solved in the next major release of the PathScale compiler. To work around this bug, use the "-LNO:simd=0" C compiler option. This can be set in the build script similarly to:
export CC="pathcc -LNO:simd=0"
Please note the use of double quotes. If you are building shared libraries and are using the PathScale compiler (version below 2.5), then you should add "-g" to your CFLAGS, in order to get around a compiler bug.
mvapich2-1.2 introduces a new startup mechanism with much improved scalability on large scale clusters. The new mechanism uses mpirun_rsh to launch the MPI processes. Refer to Section 5.2.1 for details on running applications using mpirun_rsh.
We recommend using mpirun_rsh, especially on large clusters. If you want to use the traditional startup through the mpd daemon, however, you can do so by configuring MVAPICH2 with the --with-pm=mpd option. Please refer to Section 4 for details.
MVAPICH2 uses CPU affinity to achieve better performance for single-threaded programs. For multi-threaded programs, e.g. MPI+OpenMP, it may schedule all the threads of a process to run on the same CPU. CPU affinity should be disabled in this case to solve the problem, i.e. set -env MV2_ENABLE_AFFINITY 0.
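For example, with either process manager:

$ mpiexec -n 4 -env MV2_ENABLE_AFFINITY 0 ./a.out
$ mpirun_rsh -np 4 -hostfile hosts MV2_ENABLE_AFFINITY=0 ./a.out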
If you are using ADIO support for Lustre, please make sure that:
– Lustre is set up correctly, and you are able to create, read from and write to files in the Lustre-mounted directory.
– The Lustre directory is mounted on all nodes on which MVAPICH2 processes with ADIO support for Lustre are running.
– The path to the file is correctly specified.
– The permissions for the file or directory are correctly specified.
If you are using ADIO support for Lustre, recent Lustre releases require an additional mount option to have correct file locks. Please include the following option with your Lustre mount command: "-o localflock".
$ mount -o localflock -t lustre xxxx@o2ib:/datafs /mnt/datafs
MPI programs built with gfortran might not appear to run correctly due to the default output buffering used by gfortran. If it seems there is an issue with program output, the GFORTRAN_UNBUFFERED_ALL variable can be set to "y" and exported into the environment before using the mpiexec command to launch the program, as done in the bash shell example below:
$ export GFORTRAN_UNBUFFERED_ALL=y
Or, if using mpirun_rsh, export the environment variable as in the example:
$ mpirun_rsh -np 2 n1 n2 GFORTRAN_UNBUFFERED_ALL=y ./a.out
Yes, as long as you compile MVAPICH2 and your programs on one of the systems, either AMD or Intel, and run the same binary across the systems. MVAPICH2 has platform specific parameters for performance optimizations and it may not work if you compile MVAPICH2 and your programs on different systems and try to run the binaries together.
This is a known limitation of the current MVAPICH2 version. As a workaround, you can disable shared memory collectives to make it work, i.e. set the environment variable MV2_USE_SHMEM_COLL=0.
If you get this error, please set your .mpd.conf and .mpdpasswd files.
This failure may be an indication that there is a problem with your cluster configuration. If you are confident in the correctness of your cluster configuration, then you can tune the timeout with MV2_MPD_RECVTIMEOUT_MULTIPLIER.
If mpirun_rsh fails with this error message, it was unable to locate a necessary utility. This can be fixed by ensuring that all MVAPICH2 executables are in the PATH on all nodes.
If PATHs cannot be setup as mentioned, then invoke mpirun_rsh with a path prefix. For example:
/path/to/mpirun_rsh -np 2 node1 node2 ./mpi_proc
../../path/to/mpirun_rsh -np 2 node1 node2 ./mpi_proc
Ensure that the MVAPICH2 job launcher mpirun_rsh is compiled with debug symbols. Details are available in Section 5.2.10.
The above error indicates that the InfiniBand adapter is not ready for communication. Make sure that the drivers are loaded. The following command gives the path at which the driver libraries are installed:
% locate libibverbs
In order to check the status of the IB link, one of the following commands can be used:
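For example, with the standard OpenFabrics utilities installed (the exact set of tools shipped depends on your OFED installation), either of the following shows the port state:

$ ibv_devinfo
$ ibstat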
If this happens, it means that your Gen2 installation is old and needs to be updated. As a workaround, add the -DGEN2_OLD_DEVICE_LIST_VERB macro to CFLAGS and rebuild MVAPICH2-gen2.
A possible reason could be the inability to pin (register) the required amount of memory. Make sure the following steps are taken.
– Add the following line to /etc/security/limits.conf:
* soft memlock phys_mem_in_KB
– Add the following line to the sshd init script and restart sshd:
ulimit -l phys_mem_in_KB
With some distros, we have found that adding the ulimit -l line to the sshd init script is no longer necessary. For instance, the following /etc/security/limits.conf entries work for our RHEL5 systems.
* soft memlock unlimited
* hard memlock unlimited
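You can verify the limit in effect for a login session (including a remote ssh session) with:

$ ulimit -l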
HSAM functionality uses the multi-pathing mechanism with LMC functionality. However, some versions of OpenFabrics drivers (including OpenFabrics Enterprise Distribution (OFED) 1.1) using the Up*/Down* routing engine do not configure the routes correctly with the LMC mechanism. We strongly suggest upgrading to OFED 1.2, which supports the Up*/Down* routing engine and the LMC mechanism correctly.
MVAPICH2 provides network fault tolerance with Automatic Path Migration (APM). However, APM is supported only with OFED 1.2 onwards. With OFED 1.1 and prior versions of OpenFabrics drivers, APM functionality is not completely supported. Please refer to Section 11.48 and Section 11.49.
If you configure MVAPICH2 with RDMA_CM and see this error, you need to verify that you have set up the local IP address to be used by RDMA_CM in the file /etc/mv2.conf. Further, you need to make sure that this file has the appropriate read permissions. Please follow Section 6.4 for more details on this.
If you get this error, please verify that the IP address specified in /etc/mv2.conf matches the IP address of the device you plan to use RDMA_CM with.
If you see this error, you need to check whether the specified network is working or not.
If you configure MVAPICH2 with RDMA_CM and see this error, you need to verify that you have set up the local IP address to be used by RDMA_CM in the file /etc/mv2.conf. Further, you need to make sure that this file has the appropriate read permissions. Please follow Section 5.2.5 for more details on this.
If you get this error, please verify that the IP address specified in /etc/mv2.conf matches the IP address of the device you plan to use RDMA_CM with.
If you see this error, you need to check whether the specified network is working or not.
To enable Fortran support, you need to install the IBM compiler (a 60-day free trial version is available from IBM).
Once you unpack the tarball, you can customize and use make.mvapich2.vapi to compile and install the package or manually configure, compile and install the package.
If you configure MVAPICH2 with uDAPL and see this error, you need to check whether you have specified the correct uDAPL service provider (Section 5.2.6). If you have specified the uDAPL provider but still see this error, you need to check whether the specified network is working or not. If you are using OpenFabrics software stack on Linux, the default DAPL provider is OpenIB-cma for DAPL-1.2, and ofa-v2-ib0 for DAPL-2.0. If you are using Solaris, the default DAPL provider is ibd0.
If you configure MVAPICH2 with uDAPL and see this error, you need to reduce the value of the environmental variables RDMA_DEFAULT_MAX_SEND_WQE and/or RDMA_DEFAULT_MAX_RECV_WQE depending on the underlying network.
If you get the error "error while loading shared libraries: libdat.so", the location of the DAT shared library is not in your library search path. You need to find the correct path of libdat.so and export LD_LIBRARY_PATH to this location. For example:
$ mpirun_rsh -np 2 n1 n2 LD_LIBRARY_PATH=/path/to/libdat.so ./a.out
$ export LD_LIBRARY_PATH=/path/to/libdat.so:$LD_LIBRARY_PATH
$ mpiexec -n 2 ./a.out
If you get this error, please set your .mpd.conf and .mpdpasswd files as mentioned in Section 5.2.4.
We recommend that uDAPL IB consumers needing large scale-out use the socket CM provider (libdaplscm.so) in lieu of rdma_cm (libdaplcma.so). iWARP users can continue using the uDAPL rdma_cm provider. For a detailed discussion of this issue please refer to:
Please make sure the following things for a successful restart:
The following things can cause a restart to fail:
The FAQ regarding Berkeley Lab Checkpoint/Restart (BLCR) can be found at http://upc-bugs.lbl.gov/blcr/doc/html/FAQ.html, and the user guide for BLCR can be found at:
If you encounter any problem with the Checkpoint/Restart support, please feel free to contact us at email@example.com.
MVAPICH2 provides many different parameters for tuning performance for a wide variety of platforms and applications. These parameters can be either compile-time parameters or run-time parameters. Please refer to Section 11 for a complete description of all these parameters. In this section we classify these parameters depending on what you are tuning for and provide guidelines on how to use them.
MVAPICH2 1.2 has a new, scalable job launcher, mpirun_rsh, which uses a tree based mechanism to spawn processes. The degree of this tree is determined dynamically to keep the depth low. For large clusters, it might be beneficial to further flatten the tree by specifying a higher degree. The degree can be overridden with the environment variable MV2_MT_DEGREE (section 11.24).
The following parameters affect memory requirements for each QP.
MV2_DEFAULT_MAX_SEND_WQE and MV2_DEFAULT_MAX_RECV_WQE control the maximum number of send and receive WQEs per QP, and MV2_MAX_INLINE_SIZE controls the maximum inline size. Reducing the values of these parameters leads to less memory consumption. They are especially important for large scale clusters with a large number of connections and multiple rails.
These parameters are run-time adjustable. Please refer to Sections 11.12 and 11.21 for details.
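For example, to reduce the per-QP memory footprint at run time (the value 16 is illustrative; it matches the default used for jobs with more than 256 processes):

$ mpirun_rsh -np 2 n0 n1 MV2_DEFAULT_MAX_SEND_WQE=16 MV2_DEFAULT_MAX_RECV_WQE=16 ./a.out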
The following parameters are important in tuning the memory requirements for adaptive rdma fast path feature.
MV2_RDMA_VBUF_POOL_SIZE is the size of a fixed pool of vbufs. These vbufs can be shared among all the different connections depending on the communication needs of each connection.
On the other hand, the product of MV2_VBUF_TOTAL_SIZE and MV2_NUM_RDMA_BUFFER generally is a measure of the amount of memory registered for eager message passing. These buffers are not shared across connections.
In MVAPICH2-1.2, MV2_VBUF_TOTAL_SIZE is adjustable by environmental variables. Please refer to Section 11.68 for details.
The main environmental parameters controlling the behavior of the Shared Receive Queue design are:
MV2_SRQ_SIZE is the maximum size of the Shared Receive Queue. You may increase this to a value of 1000 if the application requires a very large number of processors (4K and beyond).
MV2_SRQ_LIMIT defines the low watermark for the flow control handler. This can be reduced if your aim is to reduce the number of interrupts.
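For example (the job size is illustrative; the value 1000 follows the suggestion above):

$ mpirun_rsh -np 4096 -hostfile hosts MV2_SRQ_SIZE=1000 ./a.out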
MVAPICH2 uses a shared memory communication channel to achieve high-performance message passing among processes that are on the same physical node. The two main parameters used for tuning shared memory performance for small messages are SMPI_LENGTH_QUEUE (Section 11.70) and SMP_EAGERSIZE (Section 11.69). The two main parameters used for tuning shared memory performance for large messages are SMP_SEND_BUF_SIZE (Section 11.72) and SMP_NUM_SEND_BUFFER (Section 11.71).
SMPI_LENGTH_QUEUE is the size of the shared memory buffer which is used to store outstanding small and control messages. SMP_EAGERSIZE defines the switch point from the Eager protocol to the Rendezvous protocol.
Messages larger than SMP_EAGERSIZE are packetized and sent out in a pipelined manner.
SMP_SEND_BUF_SIZE is the packet size, i.e. the send buffer size. SMP_NUM_SEND_BUFFER is the number of send buffers.
MVAPICH2 uses on-demand connection management to reduce the memory usage of the MPI library. There are 4 parameters to tune the connection manager: MV2_ON_DEMAND_THRESHOLD (Section 11.30), MV2_CM_RECV_BUFFERS (Section 11.7), MV2_CM_TIMEOUT (Section 11.9), and MV2_CM_SPIN_COUNT (Section 11.8). The first one applies to the Gen2-IB, Gen2-iWARP and uDAPL devices; the other three only apply to the Gen2 device.
MV2_ON_DEMAND_THRESHOLD defines threshold for enabling on-demand connection management scheme. When the size of the job is larger than the threshold value, on-demand connection management will be used.
MV2_CM_RECV_BUFFERS defines the number of buffers used by the connection manager to establish new connections. These buffers are quite small and are shared for all connections, so this value may be increased to 8192 for large clusters to avoid retries in case of packet drops.
MV2_CM_TIMEOUT is the timeout value associated with connection management messages via UD channel. Decreasing this value may lead to faster retries but at the cost of generating duplicate messages.
MV2_CM_SPIN_COUNT is the number of times the connection manager polls for new control messages from the UD channel for each interrupt. This may be increased to reduce the interrupt overhead when many control messages arrive from the UD channel at the same time.
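For example, for a large job (the values below are illustrative; 8192 follows the suggestion above):

$ mpirun_rsh -np 1024 -hostfile hosts MV2_ON_DEMAND_THRESHOLD=64 MV2_CM_RECV_BUFFERS=8192 ./a.out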
MVAPICH2 uses shared memory to get the best performance for many collective operations: MPI_Allreduce, MPI_Reduce, MPI_Barrier, MPI_Bcast.
The important parameters for tuning these collectives are as follows. For MPI_Allreduce, the optimized shared memory algorithm is used for messages up to the MV2_SHMEM_ALLREDUCE_MSG threshold (Section 11.39).
Similarly, for MPI_Reduce the corresponding threshold is MV2_SHMEM_REDUCE_MSG (Section 11.44), and for MPI_Bcast the threshold can be set using MV2_SHMEM_BCAST_MSG (Section 11.41).
This parameter specifies the path and the base filename for checkpoint files of MPI processes. The checkpoint files will be named $MV2_CKPT_FILE.<number of checkpoint>.<process rank>; for example, /tmp/ckpt.1.0 is the checkpoint file for process 0's first checkpoint. To checkpoint on network-based file systems, users just need to specify the path to them, such as /mnt/pvfs2/my_ckpt_file.
This parameter can be used to enable automatic checkpointing. To let MPI job console automatically take checkpoints, this value needs to be set to the desired checkpointing interval. A zero will disable automatic checkpointing. Using automatic checkpointing, the checkpoint file for the MPI job console will be named as $MV2_CKPT_FILE.<number of checkpoint>.auto. Users need to use this file for restart.
This parameter is used to limit the number of checkpoints saved on file system to save the file system space. When set to a positive value N, only the last N checkpoints will be saved.
This parameter specifies the ports of the socket connections used to pass checkpointing control messages between the MPD manager and the MPI processes. Users need to have a set of unused ports starting with $MV2_CKPT_MPD_BASE_PORT on the compute nodes. The port used will be $MV2_CKPT_MPD_BASE_PORT + <process rank> for each MPI process.
This parameter specifies the port of the socket connection used for passing checkpointing control messages on the MPI job console node. Users need to have an unused port set as $MV2_CKPT_MPIEXEC_PORT on the console node.
When this parameter is set to any value, the checkpoints will not be required to sync to disk. It can reduce the checkpointing delay in many cases. But if users are using local file system, or any parallel file system with local cache, to store the checkpoints, it is recommended not to set this parameter because otherwise the checkpoint files will be cached in local memory and will likely be lost upon failure.
This defines the number of buffers used by the connection manager to establish new connections. These buffers are quite small and are shared for all connections, so this value may be increased to 8192 for large clusters to avoid retries in case of packet drops.
This is the number of times the connection manager polls for new control messages from the UD channel for each interrupt. This may be increased to reduce the interrupt overhead when many control messages arrive from the UD channel at the same time.
This is the timeout value associated with connection management messages via UD channel. Decreasing this value may lead to faster retries but at the cost of generating duplicate messages.
This allows users to specify process to CPU (core) mapping. The detailed usage of this parameter is described in Section 6.8. This parameter will not take effect if MV2_ENABLE_AFFINITY is set to 0. MV2_CPU_MAPPING is currently not supported on Solaris.
This is to specify the underlying uDAPL library that the user would like to use if MVAPICH2 is built with uDAPL.
This specifies the maximum number of send WQEs on each QP. Please note that for Gen2 and Gen2-iWARP, the default value of this parameter will be 16 if the number of processes is larger than 256 for better memory scalability.
This specifies the maximum number of receive WQEs on each QP (maximum number of receives that can be posted on a single QP).
The internal MTU size. For Gen2, this parameter should be a string instead of an integer. Valid values are: IBV_MTU_256, IBV_MTU_512, IBV_MTU_1024, IBV_MTU_2048, IBV_MTU_4096.
Select the partition to be used for the job.
Enable CPU affinity by setting MV2_ENABLE_AFFINITY to 1 or disable it by setting MV2_ENABLE_AFFINITY to 0. MV2_ENABLE_AFFINITY is currently not supported on Solaris.
This defines the threshold beyond which the MPI_Get implementation is based on direct one sided RDMA operations.
This specifies the switch point between eager and rendezvous protocol in MVAPICH2. For better performance, the value of MV2_IBA_EAGER_THRESHOLD should be set the same as MV2_VBUF_TOTAL_SIZE.
This specifies the HCA to be used for performing network operations.
This defines the initial number of pre-posted receive buffers for each connection. If communication happens on a particular connection, the number of buffers for that connection will be increased.
This defines the maximum inline size for data transfer. Please note that the default value of this parameter will be 0 when the number of processes is larger than 256 to improve memory usage scalability.
The multiplier to be added to the MPD mpiexec timeout for each process in a job.
The number of seconds after which mpirun_rsh aborts job launch. Note that unlike most other parameters described in this section, this is an environment variable that has to be set in the runtime environment (e.g. through export in the bash shell).
The degree of the hierarchical tree used by mpirun_rsh. By default mpirun_rsh uses a value that tries to keep the depth of the tree to 4. Note that unlike most other parameters described in this section, this is an environment variable that has to be set in the runtime environment (e.g. through export in the bash shell).
This defines the total number of buffers that can be stored in the registration cache. It has no effect if MV2_USE_LAZY_MEM_UNREGISTER is not set. A larger value will lead to less frequent lazy de-registration.
This parameter indicates the number of InfiniBand adapters to be used for communication on an end node.
This parameter indicates the number of ports per InfiniBand adapter to be used for communication on an end node.
This parameter indicates the number of queue pairs per port to be used for communication on an end node. This is useful in the presence of multiple send/recv engines available per port for data transfer.
The number of RDMA buffers used for the RDMA fast path. This fast path is used to reduce latency and overhead of small data and control messages. This value will be ineffective if MV2_USE_RDMA_FAST_PATH is not set.
This defines threshold for enabling on-demand connection management scheme. When the size of the job is larger than the threshold value, on-demand connection management will be used.
This defines the number of buffers pre-posted for each connection to handle send/receive operations.
This defines the threshold beyond which the MPI_Put implementation is based on direct one sided RDMA operations.
This parameter specifies the ARP timeout to be used by the RDMA CM module.
This parameter specifies the upper limit of the port range to be used by the RDMA CM module when choosing the port on which it listens for connections.
This parameter specifies the lower limit of the port range to be used by the RDMA CM module when choosing the port on which it listens for connections.
The value of this variable can be set to choose between different rendezvous protocols: RPUT (default, RDMA-write based), RGET (RDMA-read based), and R3 (send/recv based).
The value of this variable controls what message sizes go over the R3 rendezvous protocol. Messages above this message size use MV2_RNDV_PROTOCOL.
The value of this variable controls what message sizes go over the R3 rendezvous protocol when the registration cache is disabled (MV2_USE_LAZY_MEM_UNREGISTER=0). Messages above this message size use MV2_RNDV_PROTOCOL.
The shmem allreduce is used for messages less than this threshold.
The number of leader processes that will take part in the shmem broadcast operation. Must be greater than the number of nodes in the job.
The shmem bcast is used for messages less than this threshold.
This parameter can be used to select the max buffer size of message for shared memory collectives.
This parameter can be used to select the number of communicators using shared memory collectives.
The shmem reduce is used for messages less than this threshold.
This is the low watermark limit for the Shared Receive Queue. If the number of available work entries on the SRQ drops below this limit, the flow control will be activated.
This is the maximum number of work requests allowed on the Shared Receive Queue.
This parameter is used for recovery from network faults using Automatic Path Migration. This functionality is beneficial in the presence of multiple paths in the network, which can be enabled by using the LMC mechanism.
This parameter is used for testing the Automatic Path Migration functionality. It periodically moves the alternate path as the primary path of communication and re-loads another alternate path.
Setting this parameter enables mvapich2 to use blocking mode progress. MPI applications do not take up any CPU when they are waiting for incoming messages.
Setting this parameter enables message coalescing to increase small message throughput.
This parameter is used for utilizing hot-spot avoidance with InfiniBand clusters. To leverage this functionality, the subnet should be configured with lmc greater than zero. Please refer to section 6.6 for detailed information.
This parameter enables the library to run in iWARP mode. The library has to be built using the flag -DRDMA_CM for using this feature.
Setting this parameter enables mvapich2 to use memory registration cache.
This parameter enables the use of RDMA CM for establishing the connections. The library has to be built using the flag -DRDMA_CM for using this feature.
Setting this parameter enables mvapich2 to use adaptive rdma fast path features for Gen2 interface and static rdma fast path features for uDAPL interface.
Setting this parameter allows mvapich2 to use the optimized one-sided implementation based on direct RDMA operations.
Setting this parameter enables mvapich2 to use ring based startup.
Use shared memory for intra-node communication.
This parameter can be used to turn off shared memory based MPI_Allreduce for Gen2 over IBA by setting this to 0.
This parameter can be used to turn off shared memory based MPI_Barrier for Gen2 over IBA by setting this to 0.
This parameter can be used to turn off shared memory based MPI_Bcast for Gen2 over IBA by setting this to 0.
Use shared memory for collective communication. Set this to 0 for disabling shared memory collectives.
This parameter can be used to turn off shared memory based MPI_Reduce for Gen2 over IBA by setting this to 0.
Setting this parameter enables mvapich2 to use shared receive queue.
The number of vbufs in the initial pool. This pool is shared among all the connections.
The number of vbufs allocated each time the global pool of vbufs runs out. These are also shared among all the connections.
The size of each vbuf, the basic communication buffer of MVAPICH2. For better performance, the value of MV2_IBA_EAGER_THRESHOLD should be set the same as MV2_VBUF_TOTAL_SIZE.
This parameter defines the switch point from Eager protocol to Rendezvous protocol for intra-node communication. Note that this variable should be set in KBytes.
This parameter defines the size of shared buffer between every two processes on the same node for transferring messages smaller than or equal to SMP_EAGERSIZE. Note that this variable should be set in KBytes.
This parameter defines the number of internal send buffers for sending intra-node messages larger than SMP_EAGERSIZE.
This parameter defines the packet size when sending intra-node messages larger than SMP_EAGERSIZE.