- 7th, 519,640-core (Stampede) at TACC
- 11th, 74,358-core (Tsubame 2.5) at Tokyo Institute of Technology
- 16th, 96,192-core (Pleiades) at NASA
This project is supported by funding from the U.S. National Science Foundation, U.S. DOE Office of Science, Ohio Board of Regents, ODOD, Cray, Cisco Systems, Intel, Linux Networx, Mellanox, NVIDIA, QLogic, and Sun Microsystems; and by equipment donations from Advanced Clustering, AMD, Appro, Chelsio, Dell, Fulcrum, Fujitsu, Intel, Mellanox, Microway, NetEffect, QLogic, and Sun. Other technology partners include TotalView Technologies. More details can be found here.
- (NEW) MVAPICH2 2.0rc1 (based on MPICH 3.1) with support for MPI-3 RMA, checkpointing with the Hydra process manager, tuned communication on Ivy Bridge, and improved job-startup time (see the MPI-3 RMA sketch below).
- (NEW) MVAPICH2-X 2.0rc1 providing support for hybrid MPI+PGAS (UPC and OpenSHMEM) programming models for exascale systems, UPC with optimized collectives, and OpenSHMEM with optimized intra-node performance (see the OpenSHMEM sketch below). [more]
- (NEW) OMB 4.3 provides MPI-3 RMA benchmarks and UPC collective benchmarks.
- MVAPICH2-GDR 2.0b (based on MVAPICH2 2.0b) with support for GPU Direct RDMA (GDR) for NVIDIA GPUs (see the CUDA-aware MPI sketch below).
- Papers at Recent and Upcoming Conferences (WSSSPE '13, SC '13, OpenSHMEM '13, PGAS '13, ICPP '13, Cluster '13, EuroMPI '13, HotI '13, XSCALE '13, HPDC '13, ISC '13, ICS '13, IPDPS '13, CCGrid '13, SC '12, PGAS '12, and EuroMPI '12) [more]
- "A Scalable and Portable Approach
to Accelerate Hybrid HPL on
Heterogeneous CPU-GPU Clusters", Cluster '13, BEST Student Paper Award
- (NEW) Upcoming Tutorials on InfiniBand and High-speed Ethernet at ISC '14, PGAS and Hybrid MPI+PGAS at ICS '14, and Big Data with RDMA-Hadoop at CCGrid '14 and ISCA '14.
- Past tutorials at PPoPP '14.
- (NEW) Recent Keynote Talks on MPI, MPI+PGAS, Big Data (Hadoop), and Memcached at the HPC Advisory Council Lugano Conference 2014 and the HPC Advisory Council Stanford Conference 2014.
- OpenFabrics/IBUG 2014 Presentations [more]
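To give a concrete feel for the MPI-3 RMA support announced in the MVAPICH2 2.0rc1 item above, here is a minimal, hedged sketch of one-sided communication using MPI_Win_allocate, MPI_Put, and fence synchronization. It is generic MPI-3 code, not MVAPICH2-specific; the rank count, buffer size, and fence-based epoch pattern are illustrative choices.

```c
/* rma_put.c -- minimal MPI-3 RMA sketch (illustrative, not MVAPICH2-specific).
 * Each rank exposes one integer in a window; rank 0 puts a value into rank 1.
 * Build and run (example): mpicc rma_put.c -o rma_put && mpirun -np 2 ./rma_put
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, *win_buf;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Allocate window memory and create the RMA window in one MPI-3 call. */
    MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &win_buf, &win);
    *win_buf = -1;

    MPI_Win_fence(0, win);              /* open an access/exposure epoch */
    if (rank == 0) {
        int value = 42;
        /* One-sided put: write 'value' into rank 1's window at displacement 0. */
        MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
    }
    MPI_Win_fence(0, win);              /* complete the epoch */

    if (rank == 1)
        printf("rank 1 received %d via MPI_Put\n", *win_buf);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```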
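The MVAPICH2-X item above mentions the OpenSHMEM PGAS model. The sketch below is plain OpenSHMEM 1.0-style code (start_pes, a symmetric variable, a one-sided put); it only illustrates the programming model, and hybrid MPI+OpenSHMEM initialization and launch details should follow the MVAPICH2-X user guide rather than this sketch.

```c
/* shmem_put.c -- minimal OpenSHMEM sketch (illustrative only).
 * PE 0 writes into a symmetric variable on PE 1 with a one-sided put.
 */
#include <shmem.h>
#include <stdio.h>

int dest = -1;                      /* symmetric: exists on every PE */

int main(void)
{
    start_pes(0);                   /* OpenSHMEM 1.0-style initialization */
    int me   = _my_pe();
    int npes = _num_pes();

    int src = 42;
    if (me == 0 && npes > 1)
        shmem_int_put(&dest, &src, 1, 1);   /* put one int into PE 1's 'dest' */

    shmem_barrier_all();            /* ensure the put is globally complete */

    if (me == 1)
        printf("PE 1: dest = %d\n", dest);
    return 0;
}
```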
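For the MVAPICH2-GDR item above, the practical effect of GPU Direct RDMA support is that GPU device pointers can be passed directly to MPI calls. The following is a hedged sketch of that usage pattern under assumed settings (CUDA support in MVAPICH2 is typically enabled at run time with MV2_USE_CUDA=1); the message size and two-rank ping pattern are illustrative.

```c
/* gdr_send.c -- minimal CUDA-aware MPI sketch (illustrative only).
 * Device buffers are passed directly to MPI_Send/MPI_Recv; with a CUDA-aware
 * MPI such as MVAPICH2-GDR (typically run with MV2_USE_CUDA=1), the library
 * moves the data without explicit host staging copies.
 */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

#define N 1024

int main(int argc, char **argv)
{
    int rank;
    float *d_buf;                   /* GPU device buffer */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    cudaMalloc((void **)&d_buf, N * sizeof(float));
    cudaMemset(d_buf, 0, N * sizeof(float));

    if (rank == 0) {
        /* Send the device buffer directly -- no cudaMemcpy to the host first. */
        MPI_Send(d_buf, N, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, N, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d floats into GPU memory\n", N);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```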