==== Amber ====
  
^  ns/day  ^  P100[1]  ^  P100[4]  ^  RTX[1]  ^  T4[1]  ^  T4[4]  ^  Notes  ^
|  DPFP  |  5.21|  18.35|  0.75|  0.35|  1.29|  |
|  SXFP  |  11.82|  37.44|  17.05|  7.01|  18.91|  |
|  SFFP  |  11.91|  40.98|  9.92|  4.35|  16.22|  |
  
As in the last round of testing, in SFFP precision mode it is best to run four individual jobs, one per GPU (mpi=1, gpu=1). Best per-node performance is the P100 at 47.64 vs the RTX at 39.69 ns/day. The T4 runs about one third as fast and really falters in DPFP precision mode, but in the experimental SXFP precision mode the T4 makes up for it in performance.

Can't complain about utilization rates.\\
Amber mpi=4 gpu=4\\

[heme@login1 amber16]$ ssh node7 ./gpu-info\\
id,name,temp.gpu,mem.used,mem.free,util.gpu,util.mem\\
0, Tesla P100-PCIE-16GB, 79, 1052 MiB, 15228 MiB, 87 %, 1 %\\
1, Tesla P100-PCIE-16GB, 79, 1052 MiB, 15228 MiB, 95 %, 0 %\\
2, Tesla P100-PCIE-16GB, 79, 1052 MiB, 15228 MiB, 87 %, 0 %\\
3, Tesla P100-PCIE-16GB, 78, 1052 MiB, 15228 MiB, 94 %, 0 %\\

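For reference, a minimal sketch of one of those four single-GPU jobs (mpi=1, gpu=1). The node name, job name and the mdin/prmtop/inpcrd input names are placeholders; pmemd.cuda is the Amber GPU engine.

<code>

#!/bin/bash

#SBATCH --nodes=1
#SBATCH --nodelist=node7
#SBATCH --job-name="P100 1:1"
#SBATCH --gres=gpu:1
#SBATCH --ntasks-per-node=1

# one job per GPU: a single pmemd.cuda process, no MPI
# mdin, prmtop and inpcrd are placeholder input names
pmemd.cuda -O -i mdin -p prmtop -c inpcrd -o mdout.p100-1:1

</code>
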
==== Lammps ====

Precision for GPU calculations (see the build sketch after this list):

  * [DD] -D_DOUBLE_DOUBLE  # Double precision for all calculations
  * [SD] -D_SINGLE_DOUBLE  # Accumulation of forces, etc. in double
  * [SS] -D_SINGLE_SINGLE  # Single precision for all calculations

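A sketch of how one of the three executables could be produced with the classic GPU-package make procedure; the Makefile name is an assumption, and the binary naming follows the lmp_mpi_double_double executable used in the Scripts section below.

<code>
# build the GPU library with the desired precision flag (lib/gpu)
cd ~/lammps-5Jun19/lib/gpu
# edit Makefile.linux so that:  CUDA_PRECISION = -D_DOUBLE_DOUBLE
make -f Makefile.linux

# then build the MPI executable and name it per precision mode
cd ../../src
make yes-gpu
make mpi
mv lmp_mpi lmp_mpi_double_double
</code>
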
^  tau/day  ^  P100[1]  ^  P100[4]  ^  RTX[1]  ^  T4[1]  ^  T4[4]  ^  Notes  ^
|  DD  |  856,669|  ?|  600,048|  518,164|  1,098,621|  |
|  SD  |  981,897|  ?|  916,225|  881,247|  2,294,344|  |
|  SS  |  1,050,796|  ?|  1,035,041|  1,021,477|  2,541,435|  |

As with Amber, it is best to run one job per GPU to achieve maximum node performance. Depending on the problem set, performance can be boosted by requesting more MPI threads (see the sketch below). In previous tests the P100 in double_double precision mode achieved 2.7 million tau/day, so these results are surprising. The RTX 6000 does a decent job of keeping up with the P100.

But the T4 shines in this application. In the mixed and single precision modes it competes well given the T4's price and wattage consumption.

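For example, a variant of the mpirun line from the Scripts section below that puts four MPI ranks on a single GPU; the rank count of 4 is illustrative and would need tuning per problem set.

<code>
# 4 MPI ranks sharing 1 GPU instead of mpi=1, gpu=1
mpirun --oversubscribe -x LD_LIBRARY_PATH -np 4 \
-H localhost \
~/lammps-5Jun19/lmp_mpi_single_double -suffix gpu -pk gpu 1 \
-in in.colloid > rtx-4:1
</code>
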
==== Gromacs ====

^  tau/day  ^  P100[1]  ^  P100[4]  ^  RTX[1]  ^  T4[1]  ^  T4[4]  ^  Notes  ^
|  Mixed  |  981,897|  ?|  916,225|  881,247|  2,294,344|  |
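
A minimal single-GPU Gromacs submission sketch, assuming the default mixed-precision build; bench.tpr, the job name and the thread counts are placeholders.

<code>

#!/bin/bash

#SBATCH --nodes=1
#SBATCH --job-name="T4 mixed"
#SBATCH --gres=gpu:1
#SBATCH --ntasks-per-node=1

# one thread-MPI rank offloading nonbonded work to the GPU
# bench.tpr is a placeholder input; tune -ntomp to the node's cores
gmx mdrun -ntmpi 1 -ntomp 8 -nb gpu -s bench.tpr -deffnm t4-mixed

</code>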
  
==== Scripts ====

All three software applications were compiled within the default environment with CUDA 10.1.

Currently Loaded Modules:\\
  1) GCCcore/8.2.0     4) GCC/8.2.0-2.31.1   7) XZ/5.2.4           10) hwloc/1.11.11   13) FFTW/3.3.8\\
  2) zlib/1.2.11       5) CUDA/10.1.105      8) libxml2/2.9.8      11) OpenMPI/3.1.3   14) ScaLAPACK/2.0.2-OpenBLAS-0.3.5\\
  3) binutils/2.31.1   6) numactl/2.0.12     9) libpciaccess/0.14  12) OpenBLAS/0.3.5  15) fosscuda/2019a\\
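
Presumably this stack can be restored in one step, assuming fosscuda behaves as an EasyBuild toolchain bundle that pulls in the modules listed above:

<code>
# a sketch: loading the bundle should bring in the GCC, CUDA, OpenMPI,
# OpenBLAS, FFTW and ScaLAPACK modules listed above
module load fosscuda/2019a
</code>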

Follow\\
https://dokuwiki.wesleyan.edu/doku.php?id=cluster:161\\
  
  * Amber

<code>
...
</code>
  
  * Lammps
  
<code>

#!/bin/bash

#SBATCH --nodes=1
#SBATCH --nodelist=node5
#SBATCH --job-name="RTX dd"
#SBATCH --gres=gpu:1
#SBATCH --ntasks-per-node=1
#SBATCH --exclusive

# RTX
mpirun --oversubscribe -x LD_LIBRARY_PATH -np 1 \
-H localhost \
~/lammps-5Jun19/lmp_mpi_double_double -suffix gpu -pk gpu 1 \
-in in.colloid > rtx-1:1

[heme@login1 lammps-5Jun19]$ squeue
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
              2239    normal   RTX dd     heme  R       3:17      1 node5

[heme@login1 lammps-5Jun19]$ ssh node5 ./gpu-info
id,name,temp.gpu,mem.used,mem.free,util.gpu,util.mem
0, Quadro RTX 6000, 50, 186 MiB, 24004 MiB, 51 %, 0 %

</code>
\\
**[[cluster:0|Back]]**