cluster:182 [2019/08/12 16:26] hmeij07 [Lammps]
cluster:182 [2019/08/12 17:07] hmeij07 [Lammps]
==== Lammps ====
Precision for GPU calculations

  * [DD] -D_DOUBLE_DOUBLE
  * [SD] -D_SINGLE_DOUBLE
  * [SS] -D_SINGLE_SINGLE
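In stock LAMMPS these flags are selected at build time via the ''CUDA_PRECISION'' variable in ''lib/gpu/Makefile.linux'' before the GPU library is compiled. A small sketch mapping the shorthand used above to the compile flag; the helper function name is hypothetical:

```shell
# Map the precision shorthand to the GPU-library compile flag
# (set as CUDA_PRECISION in lib/gpu/Makefile.linux before building).
precision_flag() {
  case "$1" in
    DD) echo "-D_DOUBLE_DOUBLE" ;;   # all math in double precision
    SD) echo "-D_SINGLE_DOUBLE" ;;   # mixed: single-precision compute, double accumulation
    SS) echo "-D_SINGLE_SINGLE" ;;   # all math in single precision
    *)  return 1 ;;                  # unknown shorthand
  esac
}

precision_flag SD
```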
As with Amber, it is best to run one job per GPU to achieve maximum node performance. Depending on the problem set, performance can be boosted by requesting more MPI threads. In previous tests the P100 in double_double precision mode achieved 2.7 million tau/day, so these results are surprising. The RTX 6000 does a decent job of keeping up with the P100.
+ | |||
+ | But the T4 shines in this application. The mixed or single precision modes compete well given the T4's price and wattage consumption. | ||
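The tau/day figures come from the ''Performance:'' line LAMMPS prints at the end of a run. A minimal sketch of pulling that number out of a log file (the sample log line below is fabricated for illustration):

```shell
# Extract the tau/day figure from a LAMMPS log.
# For LJ units LAMMPS prints a line like: "Performance: N tau/day, M timesteps/s"
extract_tau_per_day() {
  awk '/^Performance:/ {print $2}' "$1"
}

# Fabricated sample log line for illustration:
printf 'Performance: 2700000.123 tau/day, 31250 timesteps/s\n' > /tmp/sample.log
extract_tau_per_day /tmp/sample.log
```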
==== Scripts ====
+ | |||
+ | All 3 software applications were compiled within default environment and Cuda 10.1 | ||
+ | |||
+ | Currently Loaded Modules:\\ | ||
+ | 1) GCCcore/ | ||
+ | 2) zlib/ | ||
+ | 3) binutils/ | ||
+ | |||
+ | Follow\\ | ||
+ | https:// | ||
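For context, a minimal sketch of building LAMMPS with the GPU package from a source tree (the directory layout and makefile name are the stock LAMMPS ones; CUDA paths and compute architecture will need adjusting for the nodes above, and the URL above has the authoritative recipe):

```shell
# Assumed source tree: lammps-5Jun19 built against CUDA 10.1.
cd lammps-5Jun19/lib/gpu
# Edit Makefile.linux first: set CUDA_HOME, CUDA_ARCH, and the
# precision, e.g.  CUDA_PRECISION = -D_SINGLE_DOUBLE
make -f Makefile.linux
cd ../../src
make yes-gpu          # enable the GPU package
make mpi              # produces the lmp_mpi binary
```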
  * Amber
</code>
  * Lammps
<code>

#!/bin/bash

#SBATCH --nodes=1
#SBATCH --nodelist=node5
#SBATCH --job-name="
#SBATCH --gres=gpu:
#SBATCH --ntasks-per-node=1
#SBATCH --exclusive

# RTX
mpirun --oversubscribe -x LD_LIBRARY_PATH -np 1 \
-H localhost \
~/
-in in.colloid > rtx-1:1

[heme@login1 lammps-5Jun19]$ squeue
JOBID PARTITION
2239 normal

[heme@login1 lammps-5Jun19]$ ssh node5 ./gpu-info
id,
0, Quadro RTX 6000, 50, 186 MiB, 24004 MiB, 51 %, 0 %

</code>
\\
**[[cluster: