3, Tesla P100-PCIE-16GB,
==== Lammps ====

Precision for GPU calculations, selected when the GPU library is compiled (a build sketch follows the list):

  * [DD] -D_DOUBLE_DOUBLE
  * [SD] -D_SINGLE_DOUBLE
  * [SS] -D_SINGLE_SINGLE
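The precision flag is baked into the GPU library at compile time, so each mode needs its own build. A minimal sketch of one such build, assuming the traditional make workflow of lammps-5Jun19 (paths and the Makefile.linux CUDA settings are assumptions):

<code>
# sketch: rebuild the lammps GPU library in mixed precision [SD]
cd lammps-5Jun19/lib/gpu
make -f Makefile.linux clean
make -f Makefile.linux CUDA_PRECISION=-D_SINGLE_DOUBLE
# then rebuild the binary with the GPU package enabled
cd ../../src
make yes-gpu
make mpi
</code>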
^ tau/day ^  ^
| DD | 856, |
| SD | 981, |
| SS | 1, |
+ | |||
+ | Forgot run run the 4 GPU P100 scenarios, deh. | ||
+ | |||
+ | As with Amber, it is best to run one job per GPU to achieve max node performance. Depending on problem set performance can be boosted by requesting more MPI threads. In previous tests the P100 in double_double precision mode achieved 2.7 million tau/day, so these results are surprising. The RTX 6000 does a decent job of keeping up with the P100. | ||
+ | |||
+ | But the T4 shines in this application. The mixed or single precision modes compete well given the T4's price and wattage consumption. | ||
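One way to put that into practice is simply submitting one single-GPU job per device; a minimal sketch, where ''run.slurm'' is a hypothetical single-GPU variant of the Lammps script in the Scripts section below:

<code>
#!/bin/bash
# sketch: four single-GPU jobs; slurm hands each job its own device
for i in 1 2 3 4; do
  sbatch --gres=gpu:1 --job-name="colloid-$i" run.slurm
done
</code>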
+ | |||
+ | ==== Gromacs ==== | ||
+ | |||
+ | Gromacs was build on each of the nodes locally letting it select the optimal CPU (AVX, SSE) and GPU accelerators. The '' | ||
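A minimal sketch of such a per-node build, assuming gromacs-2019 sources and the Cuda 10.1 toolkit (version and paths are assumptions); ''-DGMX_SIMD=AUTO'' lets cmake pick the best instruction set the node supports:

<code>
# sketch: build gromacs locally, auto-detecting CPU SIMD and CUDA
cd gromacs-2019 && mkdir -p build && cd build
cmake .. \
  -DGMX_MPI=ON \
  -DGMX_GPU=ON \
  -DGMX_SIMD=AUTO \
  -DGMX_BUILD_OWN_FFTW=ON \
  -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-2019
make -j 8 && make install
</code>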
+ | |||
+ | |||
^ ns/day ^  ^  ^  ^  ^  ^  ^
| Mixed |  |  | 254 |  |  | gpu=1, 01-04 |
| Mixed |  | 551 |  |  | 546 | gpu=4, 01-04 |
| Mixed |  |  |  |  | 650 | gpu=4, 01-08 |
| Mixed |  |  |  |  | 733 | gpu=4, 01-16 |
+ | |||
+ | The T4 is P100's equal in mixed precision performance. Add the wattage factor and you have a favorite. | ||
==== Scripts ====
All 3 software applications were compiled within the default environment plus Cuda 10.1.
Currently Loaded Modules:\\
1) GCCcore/\\
2) zlib/\\
3) binutils/
Follow\\
https://
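A sketch of what that build environment looks like once loaded; the module versions are truncated above, so the names here are placeholders:

<code>
# sketch: default toolchain modules plus the Cuda 10.1 toolkit
module load GCCcore zlib binutils
export PATH=/usr/local/cuda-10.1/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-10.1/lib64:$LD_LIBRARY_PATH
nvcc --version   # should report release 10.1
</code>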
  * Amber
</code>
  * Lammps

<code>
#!/bin/bash
# one lammps run on the RTX 6000 node (node5), one MPI rank per GPU

#SBATCH --nodes=1
#SBATCH --nodelist=node5
#SBATCH --job-name="
#SBATCH --gres=gpu:
#SBATCH --ntasks-per-node=1
#SBATCH --exclusive

# RTX
mpirun --oversubscribe -x LD_LIBRARY_PATH -np 1 \
-H localhost \
~/ \
-in in.colloid > rtx-1:1
[heme@login1 lammps-5Jun19]$ squeue
  JOBID PARTITION
   2239    normal

[heme@login1 lammps-5Jun19]$ ssh node5 ./gpu-info
id,
0, Quadro RTX 6000, 50, 186 MiB, 24004 MiB, 51 %, 0 %
</code>
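The ''gpu-info'' helper used above is a local wrapper; a minimal sketch of an equivalent, assuming stock ''nvidia-smi'' (the query fields mirror the columns in the output):

<code>
#!/bin/bash
# sketch: id, name, temperature, memory used/total, gpu/mem utilization
nvidia-smi --format=csv,noheader \
  --query-gpu=index,name,temperature.gpu,memory.used,memory.total,utilization.gpu,utilization.memory
</code>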
  * Gromacs

<code>
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --nodelist=node9
#SBATCH --job-name="
#SBATCH --ntasks-per-node=32
#SBATCH --gres=gpu:
#SBATCH --exclusive

export PATH=$HOME/
export LD_LIBRARY_PATH=$HOME/
. $HOME/
# clean out old outputs from each run dir
rm -f gpu/??/c* gpu/??/e* gpu/??/s* gpu/??/
cd gpu

# T4
#export CUDA_VISIBLE_DEVICES=0,1,2,3

# 8 MPI ranks over 4 simultaneous runs (dirs 05-08) on 4 GPUs
mpirun -np 8 gmx_mpi mdrun -maxh 1 -gpu_id 0123 \
-nsteps 1000000 -multidir 05 06 07 08 \
-ntmpi 0 -npme 0 -s topol.tpr -ntomp 0 -pin on -nb gpu
</code>
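The ''-multidir'' runs just need identical inputs in each directory; a minimal sketch of staging them, assuming ''topol.tpr'' sits in the ''gpu'' directory (the layout is an assumption):

<code>
#!/bin/bash
# sketch: create run dirs 01..16, each with its own copy of the input
cd gpu
for d in $(seq -w 1 16); do
  mkdir -p $d
  cp topol.tpr $d/
done
</code>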
+ | |||
\\ | \\ | ||
**[[cluster: | **[[cluster: |