University of Texas, Austin wins the overall SC'14 Student Cluster Competition Championship using MVAPICH2. Congratulations to the University of Texas, Austin team!
Multiple events presented by the MVAPICH Team.
MVAPICH2 drives the 7th-ranked Stampede system at TACC.
Welcome to the web page of the "MPI over InfiniBand, 10GigE/iWARP and RDMA over Converged Ethernet (RoCE)" project, led by the Network-Based Computing Laboratory (NBCL) of The Ohio State University. The MVAPICH2 software, supporting the MPI 3.0 standard, delivers the best performance, scalability and fault tolerance for high-end computing systems and servers using InfiniBand, 10GigE/iWARP and RoCE networking technologies. This software is being used by organizations in 74 countries world-wide (Current Users) to extract the potential of these emerging networking technologies for modern systems, and a large number of downloads have taken place from this project's site. The software is also distributed by many InfiniBand, 10GigE/iWARP and RoCE vendors in their software distributions. The MVAPICH2-X software package provides support for hybrid MPI+PGAS (UPC and OpenSHMEM) programming models with a unified communication runtime for emerging exascale systems. MVAPICH2 software is powering several supercomputers in the TOP500 list. Examples (from the November '14 ranking) include:
- 7th, 519,640-core (Stampede) at TACC
- 11th, 160,768-core (Pleiades) at NASA
- 15th, 76,032-core (Tsubame 2.5) at Tokyo Institute of Technology
This project is supported by funding from the U.S. National Science Foundation, the U.S. DOE Office of Science, the Ohio Board of Regents, ODOD, Cray, Cisco Systems, Intel, Linux Networx, Mellanox, NVIDIA, QLogic, and Sun Microsystems; and by equipment donations from Advanced Clustering, AMD, Appro, Chelsio, Dell, Fulcrum, Fujitsu, Intel, Mellanox, Microway, NetEffect, QLogic and Sun. Other technology partners include TotalView Technologies.
(NEW) MVAPICH2 2.1rc1 (based on MPICH 3.1.3) with enhanced communication performance for small and medium messages, reduced memory footprint, and optimization and tuning for the Haswell architecture. [more]
(NEW) MVAPICH2-X 2.1rc1 providing support for hybrid MPI+PGAS (UPC and OpenSHMEM) programming models, support for OpenSHMEM 1.0h, on-demand connection establishment, improved job start-up and memory footprint for OpenSHMEM, and support for UPC 2.20.0. [more]
(NEW) MVAPICH2-MIC 2.0 (based on MVAPICH2 2.0.1) with optimized point-to-point and collective support for native, symmetric and offload modes on clusters with Intel MICs (Xeon Phis) is available. [more]
MVAPICH2 2.0.1 is available with minor feature enhancements and bug-fixes. [more]
MVAPICH2-X 2.0.1 is available with minor bug-fixes. [more]