- 7th, 462,462-core (Stampede) at TACC
- 11th, 74,358-core (Tsubame 2.5) at Tokyo Institute of Technology
- 16th, 96,192-core (Pleiades) at NASA
This project is supported by funding from the U.S. National Science Foundation, U.S. DOE Office of Science, Ohio Board of Regents, ODOD, Cisco Systems, Intel, Linux Networx, Mellanox, NVIDIA, QLogic, and Sun Microsystems; and by equipment donations from Advanced Clustering, AMD, Appro, Chelsio, Dell, Fulcrum, Fujitsu, Intel, Mellanox, Microway, NetEffect, QLogic, and Sun. Other technology partners include TotalView Technologies. More details can be found here.
- (NEW) MVAPICH2-GDR 2.0b
(based on MVAPICH2 2.0b) with support for GPU Direct RDMA (GDR)
for NVIDIA GPUs.
- (NEW) MVAPICH2 2.0b
(based on MPICH 3.1b1) with support for MPI-3 RMA,
checkpointing with the Hydra process manager, tuned communication on Ivy Bridge,
and improved job-startup time.
- (NEW) MVAPICH2-X 2.0b providing support for hybrid MPI+PGAS (UPC and OpenSHMEM) programming models for exascale systems, OpenSHMEM with optimized collectives, and UPC with GNU UPC translator. [more]
- (NEW) OMB 4.2
provides GPU support for MPI collectives and OpenSHMEM with fcollect.
- Papers at Recent and Upcoming Conferences
(WSSSPE '13, SC '13, OpenSHMEM '13, PGAS '13, ICPP '13, Cluster '13, EuroMPI '13, HotI '13, XSCALE '13, HPDC '13, ISC '13, ICS '13, IPDPS '13, CCGrid'13, SC '12, PGAS '12, and EuroMPI '12)[more]
- "A Scalable and Portable Approach
to Accelerate Hybrid HPL on
Heterogeneous CPU-GPU Clusters", Cluster '13, Best Student Paper Award
- (NEW) Upcoming Tutorials on PGAS and Hybrid MPI+PGAS at PPoPP '14, and Big Data with RDMA-Hadoop at HPCA '14 and ASPLOS '14.
- Past tutorials at HotI '13.
- (NEW) Recent Keynote Talks on MPI, MPI+PGAS, BigData (Hadoop) and Memcached at
HPC Advisory Council China Conference 2013 and
HPC Advisory Council Switzerland Conference 2013.
- OpenFabrics Monterey 2013 Presentations [more]