\\
**[[cluster:0|Back]]**

==== P100 vs RTX 6000 & T4 ====

The specifications of these GPU models are detailed on this page: [[cluster:181|2019 GPU Models]]

This page mimics the work done in 2018 on this page: [[cluster:175|P100 vs GTX & K20]]

Credits: This work was made possible, in part, through HPC time donated by Microway, Inc. We gratefully acknowledge Microway for providing access to their GPU-accelerated compute cluster. [[http://www.microway.com|Microway]]

First though... mixed precision calculations are on the rise, driven by Deep Learning. Obviously the researcher needs to evaluate whether veering away from double precision calculations is scientifically sound. [[https://www.hpcwire.com/2019/08/05/llnl-purdue-researchers-harness-gpu-mixed-precision-for-accuracy-performance-tradeoff/|GPUmixer: harness gpu mixed precision]]

==== The DPP ====

The Double Precision Problem.

[[https://www.microway.com/knowledge-center-articles/comparison-of-nvidia-geforce-gpus-and-nvidia-tesla-gpus/|Comparison of Nvidia GeForce GPUs and Nvidia Tesla GPUs]]

[[https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units|List of Nvidia Graphics Processing Units]]

"Every GPU with SM 1.3 (Tesla/GTX2xx) or better has hardware double-precision support. Starting with the Fermi architecture, Quadro and Tesla variants have better double-precision support than consumer GeForce models."

So I'm utterly confused by this outcome: the P100 is best at double precision (FP64), the RTX 6000 is modest, and the T4 has no FP64 specs at all (Nvidia does not publish FP64 figures for the T4 and certain RTX models). Yet running a colloid example in Lammps compiled for these GPUs with DOUBLE_DOUBLE, all three models obtain the same result after 500,000 loops. The explanation was found here: [[https://www.microway.com/hpc-tech-tips/nvidia-turing-tesla-t4-hpc-performance-benchmarks/|T4 benchmarks fp64 and fp32]]. The T4 can do double precision if needed, but its strength is mixed and single precision.

<code>
p100-dd-1-1:Device 0: Tesla P100-PCIE-16GB, 56 CUs, 16/16 GB, 1.3 GHZ (Double Precision)
p100-dd-1-1: 500000 1.9935932 0.097293139 2.0905319 1.0497421 22963.374
p100-dd-1-1:Performance: 855254.719 tau/day, 1979.756 timesteps/s
rtx-dd-1-1:Device 0: Quadro RTX 6000, 72 CUs, 23/24 GB, 2.1 GHZ (Double Precision)
rtx-dd-1-1: 500000 1.9935932 0.097293139 2.0905319 1.0497421 22963.374
rtx-dd-1-1:Performance: 600048.822 tau/day, 1389.002 timesteps/s
t4-dd-1-1:Device 0: Tesla T4, 40 CUs, 15/15 GB, 1.6 GHZ (Double Precision)
t4-dd-1-1: 500000 1.9935932 0.097293139 2.0905319 1.0497421 22963.374
t4-dd-1-1:Performance: 518164.721 tau/day, 1199.455 timesteps/s
</code>

==== Amber ====

The RTX compute node had only one GPU; the other nodes had 4 GPUs. In each run the number of MPI threads requested equaled the number of GPUs involved. A sample script is at the bottom of the page.

  * [DPFP] - Double Precision Forces, 64-bit Fixed Point Accumulation.
  * [SPXP] - Single Precision Forces, Mixed Precision [integer] Accumulation. (Experimental)
  * [SPFP] - Single Precision Forces, 64-bit Fixed Point Accumulation. (Default)

^ ns/day ^ P100[1] ^ P100[4] ^ RTX[1] ^ T4[1] ^ T4[4] ^ Notes ^
| DPFP |  5.21| 18.35|  0.75| 0.35|  1.29| |
| SPXP | 11.82| 37.44| 17.05| 7.01| 18.91| |
| SPFP | 11.91| 40.98|  9.92| 4.35| 16.22| |

Like the last round of testing, in SPFP precision mode it is best to run four individual jobs, one per GPU (mpi=1, gpu=1). Best performance is the P100 at 47.64 vs the RTX at 39.69 ns/day per node (four times the single-GPU SPFP rate). The T4 runs about 1/3 as fast and really falters in DPFP precision mode. But in SPXP (experimental) precision mode the T4 makes up ground in performance.
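For reference, here is a minimal sketch of such a one-job-per-GPU layout. This is illustrative only, not the exact script used for these benchmarks: the binary, input and output names simply mirror the Amber script at the bottom of this page, and the ''CUDA_VISIBLE_DEVICES'' loop is an assumption about how the four instances could be pinned.

<code>
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --gres=gpu:4
#SBATCH --exclusive

# Four independent single-GPU pmemd runs (mpi=1, gpu=1 each),
# one instance pinned to each device via CUDA_VISIBLE_DEVICES.
for gpu in 0 1 2 3; do
  export CUDA_VISIBLE_DEVICES=$gpu
  mpirun --oversubscribe -x LD_LIBRARY_PATH -x CUDA_VISIBLE_DEVICES -np 1 -H localhost \
    ~/amber16/bin/pmemd.cuda_SPFP.MPI -O -o spfp-gpu$gpu \
    -inf mdinfo.$gpu -x mdcrd.$gpu -r restrt.$gpu -ref inpcrd &
done
wait
</code>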
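The ''gpu-info'' command run on the compute nodes throughout this page is a small local helper, not a standard tool. Presumably it is just a thin wrapper around ''nvidia-smi'' along these lines (an assumption; the shortened column headers suggest the real script also post-processes the output):

<code>
#!/bin/bash
# gpu-info (sketch): one-line CSV snapshot per GPU of temperature, memory and utilization
nvidia-smi --query-gpu=index,name,temperature.gpu,memory.used,memory.free,utilization.gpu,utilization.memory \
           --format=csv
</code>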
Can't complain about utilization rates.

Amber mpi=4 gpu=4\\
[heme@login1 amber16]$ ssh node7 ./gpu-info\\
id,name,temp.gpu,mem.used,mem.free,util.gpu,util.mem\\
0, Tesla P100-PCIE-16GB, 79, 1052 MiB, 15228 MiB, 87 %, 1 %\\
1, Tesla P100-PCIE-16GB, 79, 1052 MiB, 15228 MiB, 95 %, 0 %\\
2, Tesla P100-PCIE-16GB, 79, 1052 MiB, 15228 MiB, 87 %, 0 %\\
3, Tesla P100-PCIE-16GB, 78, 1052 MiB, 15228 MiB, 94 %, 0 %\\

==== Lammps ====

Precision for GPU calculations:

  * [DD] -D_DOUBLE_DOUBLE  # Double precision for all calculations
  * [SD] -D_SINGLE_DOUBLE  # Accumulation of forces, etc. in double
  * [SS] -D_SINGLE_SINGLE  # Single precision for all calculations

^ tau/day ^ P100[1] ^ P100[4] ^ RTX[1] ^ T4[1] ^ T4[4] ^ Notes ^
| DD |   856,669| ?|   600,048|   518,164| 1,098,621| |
| SD |   981,897| ?|   916,225|   881,247| 2,294,344| |
| SS | 1,050,796| ?| 1,035,041| 1,021,477| 2,541,435| |

Forgot to run the 4-GPU P100 scenarios, d'oh. As with Amber, it is best to run one job per GPU to achieve maximum node performance. Depending on the problem set, performance can be boosted by requesting more MPI threads. In previous tests the P100 in double_double precision mode achieved 2.7 million tau/day, so these results are surprising. The RTX 6000 does a decent job of keeping up with the P100. But the T4 shines in this application: the mixed and single precision modes compete well given the T4's price and wattage consumption.

==== Gromacs ====

Gromacs was built locally on each of the nodes, letting it select the optimal CPU (AVX, SSE) and GPU accelerators. "GROMACS simulations are normally run in “mixed” floating-point precision, which is suited for the use of single precision in FFTW." The ''cmake'' flag ''-DGMX_BUILD_OWN_FFTW=ON'' yields a mixed precision compilation, which is recommended. We then ran the multidir options 01-04 on a single GPU, and 01-08 and 01-16 on all 4 GPUs when possible.

^ ns/day ^ P100[1] ^ P100[4] ^ RTX[1] ^ T4[1] ^ T4[4] ^ Notes ^
| Mixed | | | 254| | | gpu=1, 01-04 |
| Mixed | | 551| | | 546| gpu=4, 01-04 |
| Mixed | | | | | 650| gpu=4, 01-08 |
| Mixed | | | | | 733| gpu=4, 01-16 |

The T4 is the P100's equal in mixed precision performance. Add the wattage factor and you have a favorite. And GPU utilization was outstanding.

[heme@login1 gromacs-2018]$ ssh node9 ./gpu-info\\
id,name,temp.gpu,mem.used,mem.free,util.gpu,util.mem\\
0, Tesla T4, 66, 866 MiB, 14213 MiB, 98 %, 9 %\\
1, Tesla T4, 67, 866 MiB, 14213 MiB, 98 %, 9 %\\
2, Tesla T4, 66, 866 MiB, 14213 MiB, 99 %, 9 %\\
3, Tesla T4, 64, 866 MiB, 14213 MiB, 97 %, 9 %\\
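For completeness, the local GROMACS 2018 build described above can be reproduced roughly as follows. This is a sketch only: the source path and ''make'' job count are assumptions, the flags are the standard GROMACS 2018 options for a CUDA + MPI build (mixed precision is the default, so no extra precision flag is needed), and the install prefix matches the paths used in the Gromacs script below.

<code>
# Minimal GROMACS 2018 build sketch: own FFTW, CUDA and MPI enabled,
# with the fosscuda/2019a module environment loaded (see module list below).
cd gromacs-2018        # unpacked source tree (kept separate from the install prefix)
mkdir -p build && cd build
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=ON -DGMX_MPI=ON \
         -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-2018
make -j 8 && make install
</code>

With ''-DGMX_MPI=ON'' the installed binary is ''gmx_mpi'', matching the ''mpirun ... gmx_mpi mdrun'' call in the Gromacs script below.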
==== Scripts ====

All 3 software applications were compiled within the default environment and CUDA 10.1.

Currently Loaded Modules:\\
 1) GCCcore/8.2.0   4) GCC/8.2.0-2.31.1   7) XZ/5.2.4           10) hwloc/1.11.11   13) FFTW/3.3.8\\
 2) zlib/1.2.11     5) CUDA/10.1.105      8) libxml2/2.9.8      11) OpenMPI/3.1.3   14) ScaLAPACK/2.0.2-OpenBLAS-0.3.5\\
 3) binutils/2.31.1  6) numactl/2.0.12    9) libpciaccess/0.14  12) OpenBLAS/0.3.5  15) fosscuda/2019a\\

Follow\\
https://dokuwiki.wesleyan.edu/doku.php?id=cluster:161\\

  * Amber

<code>
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --nodelist=node7
#SBATCH --job-name="P100 dd"
#SBATCH --ntasks-per-node=1
#SBATCH --gres=gpu:1
#SBATCH --exclusive

# NSTEP = 40000
rm -f restrt.1K10

mpirun --oversubscribe -x LD_LIBRARY_PATH -np 1 \
 -H localhost \
 ~/amber16/bin/pmemd.cuda_DPFP.MPI -O -o p100-dd-1-1 \
 -inf mdinfo.1K10 -x mdcrd.1K10 -r restrt.1K10 -ref inpcrd
</code>

  * Lammps

<code>
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --nodelist=node5
#SBATCH --job-name="RTX dd"
#SBATCH --gres=gpu:1
#SBATCH --ntasks-per-node=1
#SBATCH --exclusive

# RTX
mpirun --oversubscribe -x LD_LIBRARY_PATH -np 1 \
 -H localhost \
 ~/lammps-5Jun19/lmp_mpi_double_double -suffix gpu -pk gpu 1 \
 -in in.colloid > rtx-1:1
</code>

<code>
[heme@login1 lammps-5Jun19]$ squeue
 JOBID PARTITION  NAME  USER ST  TIME NODES NODELIST(REASON)
  2239    normal RTX dd heme  R  3:17     1 node5

[heme@login1 lammps-5Jun19]$ ssh node5 ./gpu-info
id,name,temp.gpu,mem.used,mem.free,util.gpu,util.mem
0, Quadro RTX 6000, 50, 186 MiB, 24004 MiB, 51 %, 0 %
</code>

  * Gromacs

<code>
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --nodelist=node9
#SBATCH --job-name="T4 dd"
#SBATCH --ntasks-per-node=32
#SBATCH --gres=gpu:4
#SBATCH --exclusive

export PATH=$HOME/gromacs-2018/bin:$PATH
export LD_LIBRARY_PATH=$HOME/gromacs-2018/lib:$LD_LIBRARY_PATH
. $HOME/gromacs-2018/bin/GMXRC.bash

rm -f gpu/??/c* gpu/??/e* gpu/??/s* gpu/??/traj* gpu/??/#* gpu/??/m*
cd gpu

# T4
#export CUDA_VISIBLE_DEVICES=0123
mpirun -np 8 gmx_mpi mdrun -maxh 1 -gpu_id 0123 \
 -nsteps 1000000 -multidir 05 06 07 08 \
 -ntmpi 0 -npme 0 -s topol.tpr -ntomp 0 -pin on -nb gpu
</code>

\\
**[[cluster:0|Back]]**