[[http://
First though... mixed precision calculations are on the rise, driven by Deep Learning.

==== the DPP ====

The Double Precision Problem.

[[https://
[[https://

"Every GPU with SM 1.3 (Tesla/

The explanation was found [[https://

</code>
==== Amber ====

The RTX compute node had only one GPU; the other nodes each had 4 GPUs. In each run the number of MPI threads requested equaled the number of GPUs involved. A sample script is at the bottom of the page.

Precision modes for GPU calculations (a sketch of the matching binaries follows the list):

  * [DPFP] - Double Precision Forces, 64-bit Fixed Point Accumulation.
  * [SPXP] - Single Precision Forces, Mixed Precision (integer) Accumulation.
  * [SPFP] - Single Precision Forces, 64-bit Fixed Point Accumulation. (Default)

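A minimal sketch of how a precision model is usually picked at run time, assuming a standard Amber 16 CUDA build where each model gets its own binary (the install path and input file names are placeholders):

<code>
#!/bin/bash
# Sketch: the precision model is chosen by picking the matching pmemd binary.
# In a stock Amber 16 GPU build, pmemd.cuda points at the SPFP (default) binary.
AMBERBIN=$HOME/amber16/bin   # assumed install location

$AMBERBIN/pmemd.cuda_SPFP -O -i mdin -p prmtop -c inpcrd   # single precision forces, fixed point accumulation (default)
$AMBERBIN/pmemd.cuda_DPFP -O -i mdin -p prmtop -c inpcrd   # all forces in double precision
$AMBERBIN/pmemd.cuda_SPXP -O -i mdin -p prmtop -c inpcrd   # experimental mixed/integer accumulation
</code>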
^ ns/day ^
| DPFP | 5.21| 18.35|
| SPXP | 11.82|
| SPFP | 11.91|

As in the last round of testing, in SPFP precision mode it is best to run four individual jobs, one per GPU (mpi=1, gpu=1). Best performance is the P100 at 47.64 vs the RTX at 39.69 ns/day per node. The T4 runs at roughly 1/3 of that pace and really falters in DPFP precision mode, but in the experimental SPXP precision mode the T4 makes up ground.
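A minimal sketch of that one-job-per-GPU pattern (mpi=1, gpu=1), reusing the ''pmemd.cuda.MPI'' style of the sample script at the bottom of the page; the binary path, input files, and job names are placeholders:

<code>
#!/bin/bash
# Sketch: submit four independent single-GPU Amber jobs to fill one 4-GPU node.
# --gres=gpu:1 hands each job its own GPU; Slurm sets CUDA_VISIBLE_DEVICES per job.
for i in 1 2 3 4; do
  sbatch --nodes=1 --ntasks-per-node=1 --gres=gpu:1 --job-name="amber-gpu-$i" \
    --wrap="mpirun -np 1 ~/amber16/bin/pmemd.cuda.MPI -O -i mdin -p prmtop -c inpcrd -o mdout.$i -r restrt.$i -x mdcrd.$i"
done
</code>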
Can't complain about utilization rates.\\
Amber mpi=4 gpu=4\\

  [heme@login1 amber16]$ ssh node7 ./gpu-info
  id,
  0, Tesla P100-PCIE-16GB,
  1, Tesla P100-PCIE-16GB,
  2, Tesla P100-PCIE-16GB,
  3, Tesla P100-PCIE-16GB,
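The ''gpu-info'' helper is a local script; roughly the same listing can be pulled straight from the driver with ''nvidia-smi'' in CSV query mode (the field selection here is just an example):

<code>
# Sketch: per-GPU id, name, temperature, memory and utilization, one CSV line per GPU.
nvidia-smi --query-gpu=index,name,temperature.gpu,memory.used,memory.total,utilization.gpu,utilization.memory \
           --format=csv,noheader
</code>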
==== Lammps ====

Precision for GPU calculations (the matching build settings are sketched below the list):

  * [DD] -D_DOUBLE_DOUBLE
  * [SD] -D_SINGLE_DOUBLE
  * [SS] -D_SINGLE_SINGLE
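A minimal sketch of where these flags are typically set when building the LAMMPS GPU package with the traditional make-based build (the Makefile name and paths are assumptions for illustration):

<code>
# Sketch: pick the GPU package precision before compiling the gpu library.
cd ~/lammps-5Jun19/lib/gpu

# edit the chosen Makefile (e.g. Makefile.linux) and set exactly one of:
#   CUDA_PRECISION = -D_DOUBLE_DOUBLE   # [DD]
#   CUDA_PRECISION = -D_SINGLE_DOUBLE   # [SD]
#   CUDA_PRECISION = -D_SINGLE_SINGLE   # [SS]
make -f Makefile.linux

# then rebuild LAMMPS itself with the gpu package enabled
cd ../../src
make yes-gpu
make mpi
</code>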
^ tau/day ^
| DD | 856,
| SD | 981,
| SS | 1,

As with Amber, it is best to run one job per GPU to achieve maximum node performance. Depending on the problem set, performance can be boosted by requesting more MPI threads (see the sketch below). In previous tests the P100 in double_double precision mode achieved 2.7 million tau/day, so these results are surprising. The RTX 6000 does a decent job of keeping up with the P100.
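A minimal sketch of varying the MPI rank count against a single GPU at run time, assuming a GPU-enabled ''lmp_mpi'' binary and the ''in.colloid'' input used elsewhere on this page (the binary path and output name are placeholders):

<code>
#!/bin/bash
# Sketch: 4 MPI ranks sharing one GPU through the LAMMPS gpu package.
# -sf gpu applies the gpu suffix to supported styles, -pk gpu 1 says "use 1 GPU".
mpirun -np 4 ~/lammps-5Jun19/src/lmp_mpi -sf gpu -pk gpu 1 \
  -in in.colloid > colloid-4:1
</code>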
But the T4 shines in this application. The mixed and single precision modes compete well given the T4's price and power consumption.

==== Gromacs ====

Gromacs was built locally on each of the nodes, letting it select the optimal CPU (AVX, SSE) and GPU accelerators. The ''
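A minimal sketch of that kind of node-local build for a 2019-series GROMACS, letting cmake auto-detect the SIMD level and CUDA; the version number, FFTW option, and install prefix are assumptions:

<code>
# Sketch: per-node GROMACS build with GPU support; SIMD (AVX/SSE) is auto-detected.
tar xf gromacs-2019.3.tar.gz && cd gromacs-2019.3
mkdir build && cd build
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=ON \
         -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-2019.3/$(hostname)
make -j 8 && make install
</code>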
^ ns/day ^
| Mixed | | | 254| | | gpu=1 |
| Mixed | | | 254| | | gpu=1 |
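A minimal sketch of the kind of single-GPU (gpu=1) run behind these numbers, assuming a standard ''gmx mdrun''; the GMXRC path, thread counts, and ''md.tpr'' input are placeholders:

<code>
#!/bin/bash
# Sketch: one thread-MPI rank, several OpenMP threads, nonbonded work on GPU 0.
source /usr/local/gromacs/bin/GMXRC
gmx mdrun -ntmpi 1 -ntomp 8 -nb gpu -gpu_id 0 -s md.tpr -deffnm md
</code>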
==== Scripts ====

All three software applications were compiled in the default environment with CUDA 10.1.

Currently Loaded Modules:\\
1) GCCcore/
2) zlib/
3) binutils/

Follow\\
https://

  * Amber

<code>
#!/bin/bash

#SBATCH --nodes=1
#SBATCH --nodelist=node7
#SBATCH --job-name="
#SBATCH --ntasks-per-node=1
#SBATCH --gres=gpu:
#SBATCH --exclusive

# NSTEP = 40000
rm -f restrt.1K10
mpirun --oversubscribe -x LD_LIBRARY_PATH -np 1 \
-H localhost \
~/
-inf mdinfo.1K10 -x mdcrd.1K10 -r restrt.1K10 -ref inpcrd

</code>
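Submitting and monitoring the script is plain Slurm; the file name here is illustrative:

<code>
sbatch run.amber     # queue the Amber job script above
squeue -u $USER      # confirm it landed on node7
</code>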
  * Lammps

<code>
#!/bin/bash

#SBATCH --nodes=1
#SBATCH --nodelist=node5
#SBATCH --job-name="
#SBATCH --gres=gpu:
#SBATCH --ntasks-per-node=1
#SBATCH --exclusive

# RTX
mpirun --oversubscribe -x LD_LIBRARY_PATH -np 1 \
-H localhost \
~/
-in in.colloid > rtx-1:1

[heme@login1 lammps-5Jun19]$ squeue
JOBID PARTITION
 2239 normal

[heme@login1 lammps-5Jun19]$ ssh node5 ./gpu-info
id,
0, Quadro RTX 6000, 50, 186 MiB, 24004 MiB, 51 %, 0 %
</code>
\\
**[[cluster: