1. Overview

The MVAPICH2-J library provides experimental Java bindings to the MVAPICH2 family of libraries. The implementation of MVAPICH2-J is inspired by MPJ Express and mpiJava, and it follows the Open MPI Java bindings API. The software currently supports communicating data held in basic Java datatypes as well as in direct ByteBuffers from the Java New I/O (NIO) package. MVAPICH2-J does not support communication to/from GPU device memory. As a Java wrapper library to MVAPICH2, the library supports a variety of high-speed interconnects including InfiniBand, Internet Wide-area RDMA Protocol (iWARP), RDMA over Converged Ethernet (RoCE), Intel's Performance Scaled Messaging (PSM), Omni-Path, etc. In the current experimental phase, the following MPI-like operations are supported:

  • Blocking/non-blocking point-to-point functions for Java arrays (basic datatypes) and Java NIO direct ByteBuffers.

  • Blocking collective functions for Java arrays (basic datatypes) and Java NIO direct ByteBuffers.

  • Blocking strided collective functions.

More details on the API for communicating direct ByteBuffers or Java arrays with point-to-point or collective MPI operations can be found in the MVAPICH2-J Java docs.
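The following sketch illustrates the flavor of the API by exchanging a direct ByteBuffer between two ranks with non-blocking point-to-point calls. It is a minimal sketch that assumes the Open MPI-style Java API that MVAPICH2-J follows (MPI.newByteBuffer, iSend/iRecv, and Request.waitFor); consult the MVAPICH2-J Java docs for the exact signatures.

import java.nio.ByteBuffer;
import mpi.*;

public class ByteBufferExample {
  public static void main(String[] args) throws Exception {
    MPI.Init(args);
    int rank = MPI.COMM_WORLD.getRank();

    // Direct ByteBuffers are allocated through the MPI class so that the
    // native library can access the memory without an extra copy.
    ByteBuffer buf = MPI.newByteBuffer(4);

    if (rank == 0) {
      buf.putInt(0, 42);  // payload: a single int (4 bytes)
      Request req = MPI.COMM_WORLD.iSend(buf, 4, MPI.BYTE, 1, 99);
      req.waitFor();      // block until the send completes
    } else if (rank == 1) {
      Request req = MPI.COMM_WORLD.iRecv(buf, 4, MPI.BYTE, 0, 99);
      req.waitFor();      // block until the receive completes
      System.out.println("Rank 1 received " + buf.getInt(0));
    }

    MPI.Finalize();
  }
}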

2. Prerequisites for Running MVAPICH2-J

2A. Install Java Development Kit (JDK)

A copy of the JDK can be obtained from OpenJDK or Oracle JDK. You will need to set the JAVA_HOME environment variable, since it is used when building the C source files. The following commands set the JAVA_HOME environment variable and confirm the installation of the JDK.

$ export JAVA_HOME=path/to/java-install-dir
$ java -version
$ javac -version

2B. Install MVAPICH2

Download and install the latest version (2.3.7) of the MVAPICH2 library. The MPILIB environment variable must point to the MVAPICH2 installation directory; it is needed later when compiling the C source code for the Java MPI library. The detailed user guide for the MVAPICH2 library is available on the MVAPICH website (http://mvapich.cse.ohio-state.edu).

$ export MPILIB=path/to/MPI-install-dir

3. Downloading the MVAPICH2-J Library

3A. Download

Download the software using the following commands:

$ wget http://mvapich.cse.ohio-state.edu/download/mvapich/mv2j/mvapich2-j-2.3.7.tar.gz
$ tar -xzvf mvapich2-j-2.3.7.tar.gz

Extracting the tarball creates a folder named mvapich2-j-2.3.7.

3B. Set the MV2J_HOME Variable

The MVAPICH2-J software requires the MV2J_HOME environment variable to point to the root directory of the extracted MVAPICH2-J folder.

$ export MV2J_HOME=/path/to/root/directory/of/MVAPICH2-J

4. Compiling MVAPICH2-J

The MVAPICH2-J software consists of Java and C source files that need to be compiled separately. The Java source files are compiled using Apache Ant.

4A. Install Apache Ant

The Apache Ant software can be installed from the Apache Ant website (https://ant.apache.org). The installation can be confirmed using the command:

$ ant -version

4B. Compiling Java Source Files

The Java source files are compiled using Apache Ant. These files can be compiled as follows.

$ cd $MV2J_HOME/src/java
$ ant

4C. Compiling Native C Source Files

The C source files are compiled using a Makefile. The Makefile assumes that the MPILIB environment variable points to the MVAPICH2 installation directory. A successful compilation of the C source files creates the libmvapich2-j.so file and copies it to the $MV2J_HOME/lib folder. The C source files can be compiled using the following commands:

$ cd $MV2J_HOME/src/c
$ make
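
If both builds succeed, the $MV2J_HOME/lib folder should contain the Java archive and the native library; the listing below is illustrative:

$ ls $MV2J_HOME/lib
libmvapich2-j.so  mvapich2-j.jar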

5. HelloWorld with MVAPICH2-J

This section provides an overview of writing, compiling, and executing a HelloWorld program with the MVAPICH2-J software.

5A. Write the HelloWorld Program

Write the simple HelloWorld program shown below. You can also find this file in the $MV2J_HOME/examples folder.

import mpi.*;

public class HelloWorld {
  public static void main(String[] args) throws Exception {
    MPI.Init(args);

    int me = MPI.COMM_WORLD.getRank();
    int ntasks = MPI.COMM_WORLD.getSize();
    String host = MPI.getProcessorName();

    if (me == 0) {
      System.out.println("Java Bindings for the MVAPICH2 Library");
    }

    MPI.COMM_WORLD.barrier();
    System.out.println("Process " + me + " of " + ntasks + " on " + host);
    MPI.COMM_WORLD.barrier();

    MPI.Finalize();
  }
}

5B. Compile the HelloWorld Program

Compile the HelloWorld code as follows:

$ javac -cp $MV2J_HOME/lib/mvapich2-j.jar:. HelloWorld.java

5C. Write the Hosts File

Create a hosts file that contains the names of the hosts used to execute the HelloWorld program in parallel. This guide assumes that the name of this file is hosts.

$ cat hosts 
<host1>
<host2>

5D. Run the HelloWorld Program with mpirun_rsh

The HelloWorld program can be executed using mpirun_rsh as follows. Recall that the MPILIB environment variable points to the MVAPICH2 installation directory.

$ $MPILIB/bin/mpirun_rsh -np 2 -hostfile hosts java -cp $MV2J_HOME/lib/mvapich2-j.jar:. -Djava.library.path=$MV2J_HOME/lib HelloWorld
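
With two hosts in the hosts file, the run should produce output along the following lines (the per-process lines may appear in any order):

Java Bindings for the MVAPICH2 Library
Process 0 of 2 on <host1>
Process 1 of 2 on <host2>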

6. Calculating Pi with Point-to-Point and Collective Communication

This section illustrates writing a program that computes Pi using MVAPICH2-J with point-to-point and collective communication functions.

6A. Pi Example using Point-to-Point Functions

6A-i.
Write the Pi program PiExamplePt2pt.java using blocking send and receive primitives. The program approximates Pi by midpoint-rule integration of 4/(1 + x^2) over [0, 1], distributing the loop iterations cyclically across the ranks; rank 0 then collects the partial sums from the other ranks with blocking receives. The Pi source files can be found in the $MV2J_HOME/examples folder of the software.

import mpi.*;

public class PiExamplePt2pt {
  public PiExamplePt2pt() {}

  public PiExamplePt2pt(String[] args) throws Exception {
    int intervals[] = new int[1];
    int rank, tasks, i = 0;
    final double PI25DT = 3.141592653589793238462643;
    double mypi[] = new double[1];
    double pi[] = new double[1];
    double h, sum, x;

    MPI.Init(args);
    rank = MPI.COMM_WORLD.getRank();
    tasks = MPI.COMM_WORLD.getSize();

    if (args.length == 0) {
      intervals[0] = 10;  // default number of intervals when no argument is given
    } else {
      intervals[0] = Integer.parseInt(args[0]);
    }

    h = 1.0 / (double) intervals[0];
    sum = 0.0;
    for (i = rank + 1; i <= intervals[0]; i += tasks) {
      x = h * ((double)i - 0.5);
      sum += 4.0 / (1.0 + x*x);
    }
    mypi[0] = h * sum;

    if (rank != 0) {
      // Each non-root rank sends its partial sum to rank 0 (message tag = rank).
      MPI.COMM_WORLD.send(mypi, 1, MPI.DOUBLE, 0, rank);
    }

    if (rank == 0) {
      // Rank 0 accumulates the partial sums from all other ranks.
      double finalPi = mypi[0];
      for (i = 1; i <= tasks - 1; i++) {
        MPI.COMM_WORLD.recv(pi, 1, MPI.DOUBLE, i, i);
        finalPi += pi[0];
      }

      System.out.println();
      System.out.printf("pi is approximately %.16f, Error is %.16f\n", finalPi,
              Math.abs(finalPi - PI25DT));
    }
    MPI.Finalize();
  }

  public static void main (String[] args) {
    try {
      PiExamplePt2pt c = new PiExamplePt2pt(args);
    } catch (Exception e) {
      e.printStackTrace();
    }
  }
}

6A-ii.
Create a host file that contains the names of the compute nodes used for parallel execution.

$ cat hosts 
<hostname1>
<hostname2>

6A-iii.
Compile the code:

$ javac -cp $MV2J_HOME/lib/mvapich2-j.jar:. PiExamplePt2pt.java

6A-iv.
Run the parallel code:

$ $MPILIB/bin/mpirun_rsh -np 4 -hostfile hosts java -cp $MV2J_HOME/lib/mvapich2-j.jar:. -Djava.library.path=$MV2J_HOME/lib PiExamplePt2pt 100

6B. Pi Example using Collectives

6B-i.
Write the Pi program PiExampleCCL.java using collective primitives. The integration is identical to the point-to-point version, but the partial sums are combined with a single reduce operation instead of explicit sends and receives. The Pi source files can be found in the $MV2J_HOME/examples folder of the software.

import mpi.*;

public class PiExampleCCL {
  public PiExampleCCL() {}

  public PiExampleCCL(String[] args) throws Exception {
    int intervals[] = new int[1];
    int rank, tasks, i = 0;
    final double PI25DT = 3.141592653589793238462643;
    double mypi[] = new double[1];
    double pi[] = new double[1];
    double h, sum, x;

    MPI.Init(args);
    rank = MPI.COMM_WORLD.getRank();
    tasks = MPI.COMM_WORLD.getSize();

    // Default to 10 intervals when no argument is given, matching the
    // point-to-point example.
    if (args.length == 0) {
      intervals[0] = 10;
    } else {
      intervals[0] = Integer.parseInt(args[0]);
    }

    h = 1.0 / (double) intervals[0];
    sum = 0.0;
    for (i = rank + 1; i <= intervals[0]; i += tasks) {
      x = h * ((double)i - 0.5);
      sum += 4.0 / (1.0 + x*x);
    }
    mypi[0] = h * sum;

    // Combine the partial sums from all ranks into pi[0] on rank 0.
    MPI.COMM_WORLD.reduce(mypi, pi, 1, MPI.DOUBLE, MPI.SUM, 0);

    if (rank == 0) {
      System.out.println();
      System.out.printf("pi is approximately %.16f, Error is %.16f\n", pi[0],
              Math.abs(pi[0] - PI25DT));
    }
    MPI.Finalize();
  }

  public static void main (String[] args) {
    try {
      PiExampleCCL c = new PiExampleCCL(args);
    } catch (Exception e) {
      e.printStackTrace();
    }
  }
}
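
If every rank, rather than only rank 0, needs the final value, the reduce call above could be replaced with an allReduce. The line below is a minimal sketch assuming the Open MPI-style allReduce signature that the MVAPICH2-J API follows:

    // Every rank receives the summed result in pi[0]; no root argument is needed.
    MPI.COMM_WORLD.allReduce(mypi, pi, 1, MPI.DOUBLE, MPI.SUM);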

6B-ii.
Create a host file that contains the names of the compute nodes used for parallel execution.

$ cat hosts 
<hostname1>
<hostname2>

6B-iii.
Compile the code:

$ javac -cp $MV2J_HOME/lib/mvapich2-j.jar:. PiExampleCCL.java

6B-iv.
Run the parallel code:

$ $MPILIB/bin/mpirun_rsh -np 4 -hostfile hosts java -cp $MV2J_HOME/lib/mvapich2-j.jar:. -Djava.library.path=$MV2J_HOME/lib PiExampleCCL 100

7. Configuration Parameters for MVAPICH2 Library

The performance of the MVAPICH2-J software largely depends on the configuration of the underlying native MVAPICH2 library. The MVAPICH2 library exposes a wide range of configurable parameters, described in its user guide, that allow choosing among various point-to-point and collective protocols, algorithms, and process mapping policies. It is also possible to use different batch schedulers, including Slurm.
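
As an illustration, MVAPICH2 runtime parameters can be passed as environment variables on the mpirun_rsh command line. The example below sets the CPU binding policy for the HelloWorld run from Section 5; MV2_CPU_BINDING_POLICY is one such MVAPICH2 parameter, but consult the MVAPICH2 user guide for the parameters and values that apply to your installation:

$ $MPILIB/bin/mpirun_rsh -np 2 -hostfile hosts MV2_CPU_BINDING_POLICY=scatter java -cp $MV2J_HOME/lib/mvapich2-j.jar:. -Djava.library.path=$MV2J_HOME/lib HelloWorld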

8. Contact and Support

For support, please contact us via the MVAPICH2 mailing lists or the help page.