GTX vs P100

Comparing these two GPUs yields the data below. These are not formal “benchmark suites,” so your mileage may vary, but the results give us comparative information for decision making on our 2018 GPU Expansion Project. The GTX data comes from the 2018 GPU Expansion page.

Credits: This work was made possible, in part, through HPC time donated by Microway, Inc. We gratefully acknowledge Microway for providing access to their GPU-accelerated compute cluster.

Amber

Amber16 continues to run best when a single MPI process drives the GPU version (pmemd.cuda.MPI), and one cannot complain about the utilization rates. A single P100 delivers 11.94 ns/day (see below), so running one simulation per GPU a dual P100 server delivers about 24 ns/day and a quad P100 server close to 48 ns/day in aggregate. Our quad GTX1080 server delivers 48.96 ns/day (4.5x faster than the K20).

mpirun -x LD_LIBRARY_PATH -np 1 -H localhost pmemd.cuda.MPI \
 -O -o mdout.0 -inf mdinfo.1K10 -x mdcrd.1K10 -r restrt.1K10.0 -ref inpcrd

gpu=1 mpi=1: 11.94 ns/day
with any mpi>1, performance goes down...
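As a minimal sketch of how the aggregate dual/quad P100 numbers above arise, one can launch an independent single-rank simulation per GPU and let them run concurrently. The input file names (mdin, prmtop, inpcrd) are placeholders, not the files used in the runs above:

 # Sketch only: one independent single-GPU Amber run per P100; placeholder input files.
 for i in 0 1 2 3; do
   # pin each single-rank run to one GPU; Open MPI exports the variable via -x
   CUDA_VISIBLE_DEVICES=$i \
   mpirun -x LD_LIBRARY_PATH -x CUDA_VISIBLE_DEVICES -np 1 -H localhost pmemd.cuda.MPI \
     -O -i mdin -p prmtop -c inpcrd -ref inpcrd \
     -o mdout.$i -inf mdinfo.$i -x mdcrd.$i -r restrt.$i &
 done
 wait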

[heme@login1 amber]$ ssh node6 ~/p100-info
index, name, temp.gpu, mem.used [MiB], mem.free [MiB], util.gpu [%], util.mem [%]
0, Tesla P100-PCIE-16GB, 71, 327 MiB, 15953 MiB, 100 %, 0 %
1, Tesla P100-PCIE-16GB, 49, 327 MiB, 15953 MiB, 100 %, 0 %
2, Tesla P100-PCIE-16GB, 44, 327 MiB, 15953 MiB, 100 %, 0 %
3, Tesla P100-PCIE-16GB, 43, 327 MiB, 15953 MiB, 100 %, 0 %
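The ~/p100-info helper itself is not shown on this page; a wrapper along these lines around nvidia-smi (an assumption, not the actual script) produces comparable per-GPU temperature, memory, and utilization output:

 #!/bin/bash
 # Sketch only: the real ~/p100-info script is not reproduced on this page.
 # Report index, name, temperature, memory, and utilization for every GPU in CSV form.
 nvidia-smi --query-gpu=index,name,temperature.gpu,memory.used,memory.free,utilization.gpu,utilization.memory \
            --format=csv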


