[[http://www.microway.com|Microway]]

First, though: mixed-precision calculations are on the rise, driven by Deep Learning. Obviously the researcher needs to evaluate whether veering away from double-precision calculations is scientifically sound. [[https://www.hpcwire.com/2019/08/05/llnl-purdue-researchers-harness-gpu-mixed-precision-for-accuracy-performance-tradeoff/|GPUmixer: harness gpu mixed precision]]

==== the DPP ====
The Double Precision Problem.

[[https://www.microway.com/knowledge-center-articles/comparison-of-nvidia-geforce-gpus-and-nvidia-tesla-gpus/|Comparison of Nvidia GeForce GPUs and Nvidia Tesla GPUs]]

[[https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units|List of Nvidia Graphics Processing Units]]

"Every GPU with SM 1.3 (Tesla/GTX2xx) or better has hardware double-precision support. Starting with the Fermi architecture, Quadro and Tesla variants have better double-precision support than consumer GeForce models." So I'm utterly confused by this outcome. The P100 is best at double precision (FP64), the RTX 6000 is modest, and the T4 actually has no specs regarding FP64; Nvidia does not publish any FP64 figures for the T4 and certain RTX models. But running a colloid example in Lammps compiled for these GPUs with DOUBLE_DOUBLE, all three models obtain the same result in 500,000 loops.

The explanation was found in [[https://www.microway.com/hpc-tech-tips/nvidia-turing-tesla-t4-hpc-performance-benchmarks/|T4 benchmarks fp64 and fp32]]. The T4 can do double precision if needed, but its strength is mixed and single precision.

</code>

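The size of that gap follows from the hardware throughput ratios. A rough sketch below — the 1/2 and 1/32 FP64:FP32 ratios are vendor figures for Pascal (P100) and Turing (T4, Quadro RTX) parts, stated here as an assumption rather than taken from this page:

```shell
# Assumed vendor ratios: P100 (Pascal) runs FP64 at 1/2 of its FP32 rate,
# while Turing parts (T4, Quadro RTX) run FP64 at 1/32 of FP32.
p100_fp64_divisor=2
turing_fp64_divisor=32
# Relative FP64 handicap of a Turing card versus a P100, clock-for-clock:
echo "Turing FP64 penalty vs P100: $(( turing_fp64_divisor / p100_fp64_divisor ))x"
# prints: Turing FP64 penalty vs P100: 16x
```

That order-of-magnitude handicap is consistent with the DPFP rows in the Amber table below, where the T4 and RTX fall far behind the P100.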
==== Amber ====

The RTX compute node had only one GPU; the other nodes had four GPUs each. In each run the number of MPI tasks requested equaled the number of GPUs involved. A sample script is at the bottom of the page.

  * [DPFP] - Double Precision Forces, 64-bit Fixed Point Accumulation.
  * [SPXP] - Single Precision Forces, Mixed Precision [integer] Accumulation.
  * [SPFP] - Single Precision Forces, 64-bit Fixed Point Accumulation. (Default)

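In Amber 16 the precision model is selected by choosing the matching binary, named pmemd.cuda_<MODE>.MPI. A minimal helper sketch — the ''pick_pmemd'' function name is hypothetical, only the binary naming convention is Amber's:

```shell
# Hypothetical helper: map an Amber precision mode to the matching
# pmemd binary (Amber 16 names them pmemd.cuda_<MODE>.MPI).
pick_pmemd () {
    local mode=${1:-SPFP}   # DPFP, SPXP, or SPFP (the default)
    echo "$HOME/amber16/bin/pmemd.cuda_${mode}.MPI"
}

pick_pmemd DPFP   # prints the double-precision binary path
```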
^ ns/day ^ P100[1] ^ P100[4] ^ RTX[1] ^ T4[1] ^ T4[4] ^ Notes ^
| DPFP | 5.21| 18.35| 0.75| 0.35| 1.29| |
| SPXP | 11.82| 37.44| 17.05| 7.01| 18.91| |
| SPFP | 11.91| 40.98| 9.92| 4.35| 16.22| |

As in the last round of testing, in SPFP precision mode it is best to run four individual jobs, one per GPU (mpi=1, gpu=1). Best performance is the P100 at 47.64 vs the RTX at 39.69 ns/day per node. The T4 runs about 1/3 as fast and really falters in DPFP precision mode. But in the (experimental) SPXP precision mode the T4 makes up some of that performance.

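A sketch of the four-independent-jobs approach, pinning each run to a single device with CUDA_VISIBLE_DEVICES. Only the command strings are built here; the output file names like run-gpu0.out are placeholders, not from the page:

```shell
# Build one single-GPU pmemd command per device; in a real job script you
# would execute each command in the background and finish with `wait`.
cmds=()
for gpu in 0 1 2 3; do
    cmds+=("CUDA_VISIBLE_DEVICES=$gpu $HOME/amber16/bin/pmemd.cuda_SPFP -O -o run-gpu${gpu}.out")
done
printf '%s\n' "${cmds[@]}"
```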
Can't complain about utilization rates.\\
Amber mpi=4 gpu=4\\

[heme@login1 amber16]$ ssh node7 ./gpu-info\\
id,name,temp.gpu,mem.used,mem.free,util.gpu,util.mem\\
0, Tesla P100-PCIE-16GB, 79, 1052 MiB, 15228 MiB, 87 %, 1 %\\
1, Tesla P100-PCIE-16GB, 79, 1052 MiB, 15228 MiB, 95 %, 0 %\\
2, Tesla P100-PCIE-16GB, 79, 1052 MiB, 15228 MiB, 87 %, 0 %\\
3, Tesla P100-PCIE-16GB, 78, 1052 MiB, 15228 MiB, 94 %, 0 %\\

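The site-local gpu-info script itself is not shown on this page, but its columns line up with nvidia-smi's CSV query mode, so an equivalent one-liner is likely just the following (an assumption, not the actual script):

```shell
# Equivalent nvidia-smi query (assumption: mirrors the local gpu-info script).
QUERY="index,name,temperature.gpu,memory.used,memory.free,utilization.gpu,utilization.memory"
GPU_INFO_CMD="nvidia-smi --query-gpu=$QUERY --format=csv"
echo "$GPU_INFO_CMD"
# On the cluster, run it on a GPU node, e.g.: ssh node7 "$GPU_INFO_CMD"
```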
==== Lammps ====

^ ns/day ^ P100[1] ^ P100[4] ^ RTX[1] ^ T4[1] ^ T4[4] ^ Notes ^
| DPFP | | | | | | |
| SPXP | | | | | | |
| SPFP | | | | | | |

==== Scripts ====

All three software applications were compiled within the default environment with CUDA 10.1.

Currently Loaded Modules:\\
  1) GCCcore/8.2.0    4) GCC/8.2.0-2.31.1   7) XZ/5.2.4           10) hwloc/1.11.11   13) FFTW/3.3.8\\
  2) zlib/1.2.11      5) CUDA/10.1.105      8) libxml2/2.9.8      11) OpenMPI/3.1.3   14) ScaLAPACK/2.0.2-OpenBLAS-0.3.5\\
  3) binutils/2.31.1  6) numactl/2.0.12     9) libpciaccess/0.14  12) OpenBLAS/0.3.5  15) fosscuda/2019a\\

Follow the build instructions at\\
https://dokuwiki.wesleyan.edu/doku.php?id=cluster:161\\

  * Amber

<code>

#!/bin/bash

#SBATCH --nodes=1
#SBATCH --nodelist=node7
#SBATCH --job-name="P100 dd"
#SBATCH --ntasks-per-node=1
#SBATCH --gres=gpu:1
#SBATCH --exclusive

# NSTEP = 40000
rm -f restrt.1K10
mpirun --oversubscribe -x LD_LIBRARY_PATH -np 1 \
       -H localhost \
       ~/amber16/bin/pmemd.cuda_DPFP.MPI -O -o p100-dd-1-1 \
       -inf mdinfo.1K10 -x mdcrd.1K10 -r restrt.1K10 -ref inpcrd

</code>

\\
**[[cluster:0|Back]]**