**[[cluster:0|Back]]**


==== P100 vs RTX 6000 & T4 ====

|  DPFP  |  5.21|  18.35|  0.75|  0.35|  1.29|
|  SXFP  |  11.82|  37.44|  17.05|  7.01|  18.91|
|  SPFP  |  11.91|  40.98|  9.92|  4.35|  16.22|
  
As in the last round of testing, in SPFP precision mode it is best to run four individual jobs, one per GPU (mpi=1, gpu=1). Best performance is the P100 at 47.64 vs the RTX at 39.69 ns/day per node. The T4 runs about a third as fast and really falters in DPFP precision mode, but in the (experimental) SXFP precision mode the T4 makes up ground in performance.
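
The actual submit scripts are under Scripts below; as a minimal sketch, one of those four single-GPU jobs could look like this (the ''pmemd.cuda_SPFP'' invocation and the input file names are assumptions, not copied from the benchmark scripts):

<code>

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --gres=gpu:1
#SBATCH --job-name="amber spfp"

# one of four identical jobs, each pinned to a single GPU (mpi=1, gpu=1)
# mdin/prmtop/inpcrd are placeholder input names
pmemd.cuda_SPFP -O -i mdin -p prmtop -c inpcrd -o mdout -x mdcrd -r restrt

</code>
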
==== Lammps ====
  
Precision for GPU calculations (a build sketch follows the list):

  * [DD] -D_DOUBLE_DOUBLE  # Double precision for all calculations
  * [SD] -D_SINGLE_DOUBLE  # Accumulation of forces, etc. in double
  * [SS] -D_SINGLE_SINGLE  # Single precision for all calculations

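These flags select the precision of the GPU library at build time and yield the per-precision binaries used in the Scripts section (e.g. ''lmp_mpi_double_double''). A minimal sketch of such a build with the traditional make chain (the sed edit and the paths are assumptions about this setup):

<code>

# lammps-5Jun19: pick the GPU library precision, then build the MPI binary
cd ~/lammps-5Jun19/lib/gpu
sed -i 's/^CUDA_PRECISION = .*/CUDA_PRECISION = -D_DOUBLE_DOUBLE/' Makefile.linux
make -f Makefile.linux            # builds libgpu.a with the chosen precision

cd ../../src
make yes-gpu                      # include the GPU package
make mpi                          # produces lmp_mpi; rename per precision
mv lmp_mpi ~/lammps-5Jun19/lmp_mpi_double_double

</code>
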
^  tau/day  ^  P100[1]  ^  P100[4]  ^  RTX[1]  ^  T4[1]  ^  T4[4]  ^  Notes  ^
|  DD  |  856,669|  ?|  600,048|  518,164|  1,098,621|  |
|  SD  |  981,897|  ?|  916,225|  881,247|  2,294,344|  |
|  SS  |  1,050,796|  ?|  1,035,041|  1,021,477|  2,541,435|  |

We forgot to run the 4-GPU P100 scenarios, hence the question marks.

As with Amber, it is best to run one job per GPU to achieve maximum node performance. Depending on the problem set, performance can be boosted by requesting more MPI ranks per GPU (see the sketch below). In previous tests the P100 in double_double precision mode achieved 2.7 million tau/day, so these results are surprising. The RTX 6000 does a decent job of keeping up with the P100.

But the T4 shines in this application. Its mixed and single precision modes compete well given the T4's price and wattage consumption.
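
As noted above, attaching more MPI ranks to a GPU is just a change to the launch line; a sketch in the same style as the Scripts section (the ''lmp_mpi_single_double'' binary name and the rank count are assumptions):

<code>

# four MPI ranks sharing one GPU (the optimal rank count is problem-dependent)
mpirun --oversubscribe -x LD_LIBRARY_PATH -np 4 \
~/lammps-5Jun19/lmp_mpi_single_double -suffix gpu -pk gpu 1 \
-in in.colloid > sd-4:1

</code>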

==== Gromacs ====

Gromacs was built locally on each of the nodes, letting it select the optimal CPU (AVX, SSE) and GPU accelerators. "GROMACS simulations are normally run in “mixed” floating-point precision, which is suited for the use of single precision in FFTW." The ''cmake'' flag ''-DGMX_BUILD_OWN_FFTW=ON'' yields a mixed precision compilation, which is recommended. We then ran multidir options 01-04 on a single GPU, and 01-08 and 01-16 on all 4 GPUs when possible.
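
A minimal sketch of such a per-node build (only ''-DGMX_BUILD_OWN_FFTW=ON'' comes from the text; the MPI/GPU flags and the install prefix matching the scripts below are assumptions):

<code>

# gromacs-2018, built locally on each node; mixed precision is the default
mkdir build && cd build
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_MPI=ON -DGMX_GPU=ON \
      -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-2018
make -j 8 && make install

</code>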
  
  
^  ns/day  ^  P100[1]  ^  P100[4]  ^  RTX[1]  ^  T4[1]  ^  T4[4]  ^  Notes  ^
|  Mixed  |  |  254|  |  |  |  gpu=1, 01-04  |
|  Mixed  |  551|  |  |  546|  |  gpu=4, 01-04  |
|  Mixed  |  |  |  |  |  650|  gpu=4, 01-08  |
|  Mixed  |  |  |  |  |  733|  gpu=4, 01-16  |

The T4 is the P100's equal in mixed precision performance. Add the wattage factor and you have a favorite. GPU utilization was outstanding:

[heme@login1 gromacs-2018]$ ssh node9 ./gpu-info\\
id,name,temp.gpu,mem.used,mem.free,util.gpu,util.mem\\
0, Tesla T4, 66, 866 MiB, 14213 MiB, 98 %, 9 %\\
1, Tesla T4, 67, 866 MiB, 14213 MiB, 98 %, 9 %\\
2, Tesla T4, 66, 866 MiB, 14213 MiB, 99 %, 9 %\\
3, Tesla T4, 64, 866 MiB, 14213 MiB, 97 %, 9 %\\
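
The ''gpu-info'' helper is presumably a thin wrapper around ''nvidia-smi''; a sketch that prints the same columns:

<code>

#!/bin/bash
# gpu-info sketch (the site script may differ): one CSV row per GPU
echo "id,name,temp.gpu,mem.used,mem.free,util.gpu,util.mem"
nvidia-smi --format=csv,noheader \
  --query-gpu=index,name,temperature.gpu,memory.used,memory.free,utilization.gpu,utilization.memory

</code>
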
==== Scripts ====

All three software applications were compiled within the default environment and CUDA 10.1.

Currently Loaded Modules:\\
  1) GCCcore/8.2.0     4) GCC/8.2.0-2.31.1   7) XZ/5.2.4           10) hwloc/1.11.11   13) FFTW/3.3.8\\
  2) zlib/1.2.11       5) CUDA/10.1.105      8) libxml2/2.9.8      11) OpenMPI/3.1.3   14) ScaLAPACK/2.0.2-OpenBLAS-0.3.5\\
  3) binutils/2.31.1   6) numactl/2.0.12     9) libpciaccess/0.14  12) OpenBLAS/0.3.5  15) fosscuda/2019a\\

Follow\\
https://dokuwiki.wesleyan.edu/doku.php?id=cluster:161\\
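
The list above is the EasyBuild ''fosscuda/2019a'' toolchain; loading that one module should pull in the rest (a sketch):

<code>

module purge
module load fosscuda/2019a   # GCC 8.2, CUDA 10.1, OpenMPI 3.1.3, OpenBLAS, FFTW, ScaLAPACK
module list

</code>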
  
  * Amber
  
</code>

  * Lammps

<code>

#!/bin/bash

#SBATCH --nodes=1
#SBATCH --nodelist=node5
#SBATCH --job-name="RTX dd"
#SBATCH --gres=gpu:1
#SBATCH --ntasks-per-node=1
#SBATCH --exclusive

# RTX
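# one MPI rank driving one GPU with the double-double precision binary; log written to rtx-1:1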
mpirun --oversubscribe -x LD_LIBRARY_PATH -np 1 \
-H localhost \
~/lammps-5Jun19/lmp_mpi_double_double -suffix gpu -pk gpu 1 \
-in in.colloid > rtx-1:1

[heme@login1 lammps-5Jun19]$ squeue
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
              2239    normal   RTX dd     heme  R       3:17      1 node5

[heme@login1 lammps-5Jun19]$ ssh node5 ./gpu-info
id,name,temp.gpu,mem.used,mem.free,util.gpu,util.mem
0, Quadro RTX 6000, 50, 186 MiB, 24004 MiB, 51 %, 0 %

</code>

  * Gromacs

<code>

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --nodelist=node9
#SBATCH --job-name="T4 dd"
#SBATCH --ntasks-per-node=32
#SBATCH --gres=gpu:4
#SBATCH --exclusive

export PATH=$HOME/gromacs-2018/bin:$PATH
export LD_LIBRARY_PATH=$HOME/gromacs-2018/lib:$LD_LIBRARY_PATH
. $HOME/gromacs-2018/bin/GMXRC.bash
rm -f gpu/??/c* gpu/??/e* gpu/??/s* gpu/??/traj* gpu/??/#* gpu/??/m*
cd gpu

# T4
#export CUDA_VISIBLE_DEVICES=0123

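# 8 MPI ranks over 4 multidir runs (05-08) = 2 ranks per simulation, mapped onto the 4 GPUs via -gpu_id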
mpirun -np 8 gmx_mpi mdrun -maxh 1 -gpu_id 0123 \
-nsteps 1000000 -multidir 05 06 07 08 \
-ntmpi 0 -npme 0 -s topol.tpr -ntomp 0 -pin on -nb gpu

</code>

  
  
\\
**[[cluster:0|Back]]**