Same tests (12 CPU cores) using lj/cut, eam, lj/expand, and morse: AU.reduced

  * CPU only: 6 min 1 s
  * 1 GPU: 1 min 1 s (a 5-6 times speedup)
  * 2 GPUs: 1 min 0 s (never saw the 2nd GPU used; problem set too small?)
Same tests (12 CPU cores) using a restart file and using gayberne: GB

  * CPU only: 1 hour 5 min
  * 1 GPU: 3 min 33 s (an 18-19 times speedup)
  * 2 GPUs: 2 min (see below)
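As a quick sanity check on the quoted speedup factors, the ratios of the wall-clock times above work out as follows (a minimal sketch; the times are taken straight from the results on this page):

```python
# Check the quoted speedup factors from the wall-clock times above.
def speedup(cpu_seconds, gpu_seconds):
    """Ratio of CPU-only time to GPU-accelerated time."""
    return cpu_seconds / gpu_seconds

# AU.reduced: CPU only 6 min 1 s vs 1 GPU 1 min 1 s
au = speedup(6 * 60 + 1, 1 * 60 + 1)    # ~5.9, i.e. the "5-6 times" speedup

# GB (gayberne): CPU only 1 h 5 min vs 1 GPU 3 min 33 s
gb = speedup(65 * 60, 3 * 60 + 33)      # ~18.3, i.e. the "18-19 times" speedup

print(round(au, 1), round(gb, 1))       # prints: 5.9 18.3
```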
Vendor: “There are currently two systems available, each with two 8-core Xeon E5-2670 processors, 32GB memory, 120GB SSD and two Tesla K20 GPUs. The hostnames are master and node2. You will see that a GPU-accelerated version of LAMMPS with MPI support is installed in /usr/local/LAMMPS.”
Actually, it turns out there are 32 cores on the node, so I suspect four CPUs.
First, we expose the GPUs to LAMMPS in our input file (so running with a value of -1 ignores the GPUs).
  # Enable GPUs if variable is set.
  if "(${GPUIDX} >= 0)" then &
    "suffix gpu" &
    "newton off" &
    "package gpu force 0 ${GPUIDX} 1.0"
Then we invoke the LAMMPS executable with MPI.
  NODES=1    # number of nodes [>=1]
  GPUIDX=0   # GPU indices range over [0,1]; this is the upper bound.
             # Set GPUIDX=0 for 1 GPU/node or GPUIDX=1 for 2 GPUs/node.
  CORES=12   # cores per node (i.e. 2 CPUs with 6 cores each = 12 cores per node)
  which mpirun
  echo "*** GPU run with one MPI process per core ***"
  date
  mpirun -np $((NODES*CORES)) -bycore ./lmp_ex1 -c off -var GPUIDX $GPUIDX \
      -in film.inp -l film_1_gpu_1_node.log
  date
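For the CPU-only baseline, the same input runs with GPUIDX=-1, which makes the "if" test in the input file fail so the gpu package is never enabled. A minimal sketch of that invocation (the log filename film_cpu_only.log is made up here; the command is only echoed, since lmp_ex1 and mpirun live on the benchmark machine):

```shell
#!/bin/sh
# CPU-only baseline: a negative GPU index skips the gpu package entirely.
NODES=1
CORES=12
GPUIDX=-1                 # negative index => input file ignores the GPUs
NP=$((NODES*CORES))       # total MPI ranks: one per core
CMD="mpirun -np $NP -bycore ./lmp_ex1 -c off -var GPUIDX $GPUIDX -in film.inp -l film_cpu_only.log"
# Print the command rather than executing it on this machine.
echo "$CMD"
```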
During the 2-GPU gayberne test, gpu-info showed both GPUs in use:
  node2$ gpu-info
  ====================================================
  Device  Model        Temperature   Utilization
  ====================================================
  0       Tesla K20m   36 C          96 %
  1       Tesla K20m   34 C          92 %
  ====================================================