cluster:175 [2018/09/22 12:43] hmeij07 [Amber]
**[[cluster:

==== GTX vs P100 & K20 ====

Comparing these GPUs yields the following data. These are not "

Credits: This work was made possible, in part, through HPC time donated by Microway, Inc. We gratefully acknowledge Microway for providing access to their GPU-accelerated compute cluster.
gpu=1 mpi=1 11.94 ns/day
any mpi>
[heme@login1 amber]$ ssh node6 ~/p100-info
==== Lammps ====
We cannot complain about GPU utilization in this example either.

On our GTX server the best performance was at a 16:4 cpu:gpu ratio, 932,493 tau/day (11x faster than our K20). However, scaling the job to a 4:2 cpu:gpu ratio yields 819,207 tau/day, which means a quad-GPU server can deliver about 1.6 million tau/day.

A single P100 beat this easily, coming in at 2.6 million tau/day. Spreading the problem over more GPUs did raise overall performance to 3.3 million tau/day. However, four cpu:gpu 1:1 jobs would achieve slightly over 10 million tau/day, almost 10x faster than the GTX server.
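The aggregate figures above follow from simple multiplication; a quick sanity check (job counts per server and the rounded per-job rates are taken from the text):

```shell
# Two 4:2 cpu:gpu Lammps jobs on a quad-GPU GTX server, 819,207 tau/day each
echo $(( 2 * 819207 ))     # about 1.6 million tau/day

# Four cpu:gpu 1:1 jobs on the P100 server, assuming ~2.6 million tau/day each
echo $(( 4 * 2600000 ))    # slightly over 10 million tau/day
```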
<code>
2, Tesla P100-PCIE-16GB,
3, Tesla P100-PCIE-16GB,
</code>
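For reference, a per-GPU utilization table like the one above can be captured with an nvidia-smi CSV query along these lines (a sketch; it assumes a reasonably recent NVIDIA driver install):

```shell
# Snapshot per-GPU temperature, memory, and utilization in CSV form
nvidia-smi --query-gpu=index,name,temperature.gpu,memory.used,memory.free,utilization.gpu,utilization.memory \
           --format=csv

# To watch utilization while a job runs, repeat every 5 seconds:
# nvidia-smi --query-gpu=index,utilization.gpu --format=csv -l 5
```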
==== Gromacs ====

Gromacs has shown vastly improved performance between versions. v5 delivered about 20 ns/day per K20 server and 350 ns/day on the GTX server; v2018 delivered 75 ns/day per K20 server and 900 ns/day on the GTX server, roughly a 3x improvement.

On the P100 I could not invoke the multidir option of Gromacs (I have run it on the GTX server, which is odd). GPU utilization drops as more and more GPUs are deployed.
+ | |||
+ | < | ||
+ | |||
mpirun -np 25 --oversubscribe -x LD_LIBRARY_PATH -H \
localhost,
localhost,
localhost,
localhost,
gmx_mpi mdrun -gpu_id 0123 -ntmpi 0 \
-s topol.tpr -ntomp 4 -npme 1 -nsteps 20000 -pin on -nb gpu

# this does not run
#gmx_mpi mdrun -multidir 01 02 03 04 -gpu_id 0123 -ntmpi 0 -nt 0 \
# -s topol.tpr -ntomp 4 -npme 1 -maxh 0.5 -pin on -nb gpu
+ | |||
+ | | ||
+ | gpu=4 mpi=25 ntomp=4 -npme 1 | ||
+ | Performance: | ||
+ | gpu=3 (same) | ||
+ | Performance: | ||
+ | gpu=2 (same) | ||
+ | Performance: | ||
+ | gpu=1 (same) | ||
+ | Performance: | ||
+ | |||
+ | index, name, temp.gpu, mem.used [MiB], mem.free [MiB], util.gpu [%], util.mem [%] | ||
+ | 0, Tesla P100-PCIE-16GB, | ||
+ | |||
+ | </ | ||
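Since -multidir would not run here, one hypothetical workaround is to launch independent single-GPU mdrun jobs, one per directory. A sketch, untested on this cluster; the directory names 01..04 and the mdrun flags mirror the commands above, and -pinoffset staggers the core pinning so the four jobs do not land on the same cores:

```shell
# One single-GPU Gromacs run per directory; assumes dirs 01..04 each hold a topol.tpr
for i in 0 1 2 3; do
  d=$(printf "%02d" $((i + 1)))      # gpu 0 -> dir 01, gpu 1 -> dir 02, ...
  ( cd "$d" && gmx_mpi mdrun -gpu_id $i -s topol.tpr \
      -ntomp 4 -nsteps 20000 -nb gpu \
      -pin on -pinoffset $((i * 4)) ) &
done
wait    # block until all four background runs finish
```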
\\