
  • JAC and Factor_IX are two sample benchmarks included with Amber (one is memory intensive, the other I/O intensive, though I forget which is which); a minimal run sketch follows this list.
  • “1g6r” is a benchmark from Surjit Dixit that, in his opinion, should scale very well.
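
Here is a minimal sketch of running one of these benchmarks serially, assuming Amber 9's stock benchmark tree under $AMBERHOME/benchmarks; the directory and input file names are assumptions, not taken from this page:

  # Hypothetical serial run of the JAC benchmark; the path and input
  # names assume Amber 9's standard benchmark layout and may differ.
  cd $AMBERHOME/benchmarks/jac
  $AMBERHOME/exe/sander -O -i mdin -p prmtop -c inpcrd -o jac.out

The parallel runs in the tables below presumably use the same inputs, launched via mpirun with Sander.MPI or pmemd.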

Swallowtail

Amber                  What        Host  N   JAC    Factor_IX
9+openmpi-1.2+intel-9  Sander.MPI  C-00  02  2m42s  4m54s
9+openmpi-1.2+intel-9  Sander.MPI  C-00  04  1m27s  2m33s
9+openmpi-1.2+intel-9  Sander.MPI  C-00  08  1m00s  1m57s

Amber                  What   Host  N   1g6r
9+openmpi-1.2+intel-9  pmemd  C-00  02  2m55s
9+openmpi-1.2+intel-9  pmemd  C-00  04  1m32s
9+openmpi-1.2+intel-9  pmemd  C-00  08  1m03s

Greentail

Above are some older benchmark runs over Swallowtail's InfiniBand switch.

Below are new benchmarks with IPoIB. “ptile” is the number of job slots to allocate per node: with 16 slots requested and ptile=1, sixteen individual hosts are selected and all MPI traffic between hosts goes over the Voltaire InfiniBand switch.
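
For illustration, a job script along the following lines would request that layout. This is a sketch assuming the LSF scheduler (bsub with a span[ptile=...] resource string); the job name, output file, and Amber input names are assumptions, not taken from this page:

  #!/bin/bash
  # Hypothetical LSF submission: 16 MPI tasks at one slot per host
  # (ptile=1), so all Sander.MPI traffic between ranks crosses the
  # InfiniBand fabric.
  #BSUB -n 16
  #BSUB -R "span[ptile=1]"
  #BSUB -J jac-ptile1
  #BSUB -o jac.%J.out
  mpirun -np 16 $AMBERHOME/exe/sander.MPI -O -i mdin -p prmtop -c inpcrd -o jac.out

With ptile=8 the same 16 slots are instead packed onto two hosts, eight per host, which is the other 16-way configuration shown below.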

Amber                  What        Host  N   JAC    Factor_IX  ptile=
9+openmpi-1.2+intel-9  Sander.MPI  n1    02  2m36s  4m14s
9+openmpi-1.2+intel-9  Sander.MPI  n1    04  1m23s  2m14s
9+openmpi-1.2+intel-9  Sander.MPI  n1    08  0m47s  1m20s
9+openmpi-1.2+intel-9  Sander.MPI  -     16  1m04s  2m17s      8
9+openmpi-1.2+intel-9  Sander.MPI  -     16  0m42s  2m07s      1

Amber                  What   Host  N   1g6r   ptile=
9+openmpi-1.2+intel-9  pmemd  n1    02  2m40s
9+openmpi-1.2+intel-9  pmemd  n1    04  1m23s
9+openmpi-1.2+intel-9  pmemd  n1    08  0m47s
9+openmpi-1.2+intel-9  pmemd  -     16  0m43s  8
9+openmpi-1.2+intel-9  pmemd  -     16  0m39s  1

A “-” under Host means the 16 slots spanned more than one host.
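
As a rough way to read these numbers: converting the wallclock strings to seconds shows, for example, that JAC goes from 2m36s on 2 cores to 0m42s on 16 hosts with ptile=1, roughly a 3.7x speedup on 8x the cores. The helper below is just an illustration of that arithmetic, not part of the benchmark setup:

  # Hypothetical helper: convert an "XmYs" wallclock string to seconds,
  # then compare the 2-core and 16-way (ptile=1) JAC times from above.
  to_sec() { echo "$1" | sed 's/m/ /; s/s//' | awk '{print $1 * 60 + $2}'; }
  echo "scale=2; $(to_sec 2m36s) / $(to_sec 0m42s)" | bc   # prints 3.71

Also visible in the tables: for JAC, sixteen ranks spread one per host (0m42s) beat sixteen ranks packed eight per host (1m04s).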
