cluster:182 [2019/08/12 17:50] hmeij07 [Gromacs]
cluster:182 [2019/12/13 13:33] (current) hmeij07
**[[cluster:
==== P100 vs RTX 6000 & T4 ====
| DPFP | 5.21| 18.35|
| SXFP | 11.82|
As in the last round of testing, in SPFP precision mode it is best to run four individual jobs, one per GPU (mpi=1, gpu=1). Best performance is the P100 at 47.64 ns/day vs the RTX 6000 at 39.69 ns/day per node. The T4 runs about a third as fast and really falters in DPFP precision mode, but in the experimental SXFP precision mode the T4 makes up the performance.
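A minimal sketch of that one-job-per-GPU launch pattern. The ''pmemd.cuda'' binary is Amber's GPU engine; the input file names (mdin, prmtop, inpcrd) are illustrative assumptions, and the loop only prints the launch lines so they can be reviewed before putting them in a job script.

```shell
#!/bin/bash
# Build one single-GPU Amber launch line per device (mpi=1, gpu=1).
# CUDA_VISIBLE_DEVICES pins each job to its own GPU so the four
# jobs never share a device.
cmds=()
for gpu in 0 1 2 3; do
  cmds+=("CUDA_VISIBLE_DEVICES=$gpu pmemd.cuda -O -i mdin -p prmtop -c inpcrd -o mdout.gpu$gpu")
done
# Printed for review; in a job script each line would be launched
# in the background with '&' followed by a final 'wait'.
printf '%s\n' "${cmds[@]}"
```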
| SD | 981, |
| SS | 1, |
Forgot to run the 4 GPU P100 scenarios, doh.
As with Amber, it is best to run one job per GPU to achieve maximum node performance. Depending on the problem set, performance can be boosted by requesting more MPI threads. In previous tests the P100 in double_double precision mode achieved 2.7 million tau/day, so these results are surprising. The RTX 6000 does a decent job of keeping up with the P100.
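The same per-GPU pattern with extra MPI ranks can be sketched for LAMMPS. The ''-sf gpu'' and ''-pk gpu'' switches are LAMMPS's GPU-package suffix and package options; the binary name ''lmp_mpi'' and input ''in.lj'' are assumptions, and the lines are only printed here.

```shell
#!/bin/bash
# One LAMMPS job per GPU, each job given several MPI ranks that
# share its single device (mpi=4, gpu=1 per job).
cmds=()
for gpu in 0 1 2 3; do
  cmds+=("CUDA_VISIBLE_DEVICES=$gpu mpirun -np 4 lmp_mpi -sf gpu -pk gpu 1 -in in.lj")
done
printf '%s\n' "${cmds[@]}"
```

Raising ''-np'' per job is where the "more MPI threads" boost comes from, while ''-pk gpu 1'' keeps each job on its one visible device.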
==== Gromacs ====
Gromacs was built on each of the nodes locally, letting it select the optimal CPU (AVX, SSE) and GPU accelerators.
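For the whole-node (gpu=4) runs below, a Gromacs 2018 style launch could look like the sketch here: ''-gpu_id'' takes a string of device ids, and ''-ntmpi''/''-ntomp'' set the thread-MPI rank and OpenMP thread counts. The ''-deffnm md'' file naming is an assumption; the command is printed rather than executed.

```shell
#!/bin/bash
# Four thread-MPI ranks mapped across all four GPUs on one node
# (Gromacs 2018 accepts -gpu_id as a digit string like 0123).
cmd="gmx mdrun -ntmpi 4 -ntomp 4 -gpu_id 0123 -deffnm md"
echo "$cmd"
```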
| Mixed | | | | | 733| gpu=4, 01-16 |
The T4 is the P100's equal in mixed precision performance. Add the wattage factor and you have a favorite. And GPU utilization was outstanding.
[heme@login1 gromacs-2018]$ ssh node9 ./
id,
0, Tesla T4, 66, 866 MiB, 14213 MiB, 98 %, 9 %\\
1, Tesla T4, 67, 866 MiB, 14213 MiB, 98 %, 9 %\\
2, Tesla T4, 66, 866 MiB, 14213 MiB, 99 %, 9 %\\
3, Tesla T4, 64, 866 MiB, 14213 MiB, 97 %, 9 %\\
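The helper script path above is truncated, so its exact contents are unknown; one plausible ''nvidia-smi'' query that yields columns like that listing (index, name, temperature, memory, GPU and memory utilization) is sketched below. The command is only printed, since running it requires a GPU node.

```shell
#!/bin/bash
# All field names are documented nvidia-smi --query-gpu properties.
fields="index,name,temperature.gpu,memory.used,memory.free,utilization.gpu,utilization.memory"
echo "nvidia-smi --query-gpu=$fields --format=csv,noheader"
```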
==== Scripts ====
</
\\
**[[cluster: