MVAPICH/MVAPICH2 Project
Ohio State University




Performance of Hardware Multicast-aware Collectives (05/06/13)

  • Experimental Testbed: Each node of our testbed has eight cores (2.53 GHz dual quad-core) and 12 GB of main memory. The CPUs are based on the Intel Westmere architecture and run in 64-bit mode. The nodes support PCI Express Gen2 x16 interfaces and are equipped with Mellanox ConnectX-2 QDR HCAs. The nodes are connected through a 36-port Mellanox QDR InfiniBand switch. The operating system used was Red Hat Enterprise Linux Server release 5.4 (Tikanga).
  • The following graphs demonstrate the improvement obtained in the latency of the MPI_Bcast operation by using InfiniBand hardware UD-Multicast (a minimal measurement sketch follows this list).
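
A broadcast-latency measurement of this kind can be sketched as below. This is a minimal illustration, not the OSU micro-benchmark used to produce the graphs: the message size, iteration counts, file name, and output format are assumptions made here for clarity.

/* bcast_latency.c - minimal MPI_Bcast latency sketch (illustrative only,
 * not the official OSU micro-benchmark). */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_SIZE   4096   /* bytes broadcast per iteration (assumption) */
#define SKIP       100    /* warm-up iterations excluded from timing    */
#define ITERATIONS 1000   /* timed iterations (assumption)              */

int main(int argc, char **argv)
{
    int rank, size, i;
    char *buf;
    double t_start = 0.0, t_end, latency, max_latency;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    buf = (char *)malloc(MSG_SIZE);

    for (i = 0; i < SKIP + ITERATIONS; i++) {
        if (i == SKIP) {
            /* Start timing only after the warm-up phase. */
            MPI_Barrier(MPI_COMM_WORLD);
            t_start = MPI_Wtime();
        }
        /* Rank 0 broadcasts to all ranks; with hardware multicast enabled
         * in MVAPICH2 (MV2_USE_MCAST=1, an assumed launch setting), the
         * library can map this onto InfiniBand UD-Multicast. */
        MPI_Bcast(buf, MSG_SIZE, MPI_CHAR, 0, MPI_COMM_WORLD);
    }
    t_end = MPI_Wtime();

    /* Average per-broadcast latency in microseconds on this rank,
     * then the maximum across all ranks. */
    latency = (t_end - t_start) * 1e6 / ITERATIONS;
    MPI_Reduce(&latency, &max_latency, 1, MPI_DOUBLE, MPI_MAX, 0,
               MPI_COMM_WORLD);

    if (rank == 0)
        printf("%d-byte MPI_Bcast on %d ranks: %.2f usec\n",
               MSG_SIZE, size, max_latency);

    free(buf);
    MPI_Finalize();
    return 0;
}

Assuming MVAPICH2's mpicc wrapper and mpirun_rsh launcher, a hypothetical run with hardware multicast enabled might look like:

  mpicc -o bcast_latency bcast_latency.c
  mpirun_rsh -np 64 -hostfile hosts MV2_USE_MCAST=1 ./bcast_latency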