**[[cluster:
===== Lammps GPU Testing (EC) =====
+ | |||
+ | * 32 cores E2660 | ||
+ | * 4 K20 GPU | ||
+ | * workstation | ||
+ | * MPICH2 flavor | ||
+ | |||
+ | |||

Same tests (12 CPU cores) using lj/cut, eam, lj/expand, and morse: **AU.reduced**

  * CPU only 6 mins 1 sec
  * 1 GPU 1 min 1 sec (a 5-6 times speedup)
  * 2 GPUs 1 min 0 secs (never saw the 2nd GPU used; problem set too small?)
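
One way to confirm whether the second K20 ever engages is to watch utilization while the job runs; ''nvidia-smi'' ships with the NVIDIA driver:

<code>
# refresh the GPU status display every 2 seconds during a run; if the
# second K20 sits near 0% utilization, the problem set really is too small
watch -n 2 nvidia-smi
</code>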
+ | |||
+ | Same tests (12 cpu cores) using a restart file and using gayberne: **GB** | ||
+ | |||
+ | CPU only 1 hour 5 mins | ||
+ | 1 GPU 5 mins and 15 secs (a 18-19 times peed up) | ||
+ | 2 GPUs 2 mins | ||
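
For the 2-GPU case the usual pattern is one MPI rank per GPU, which matches the ''(-np 1-4)'' note in the table below; a sketch with the same assumed names:

<code>
# hypothetical 2-rank launch so each MPI task can drive its own K20;
# binary and input/log file names are assumptions
mpirun -np 2 ./lmp_gpu -sf gpu -in in.gb -l gb_2gpu.log
</code>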
+ | |||
+ | Francis' | ||
+ | |||
^ 3d Lennard-Jones melt: for 10,000 steps with 32,000 atoms ^^^^^^
| CPU only | -np 1 | -np 6 | -np 12 | -np 24 | -np 36 |
| loop times |  |  |  |  |  |
| GPU only | 1xK20 | 2xK20 | 3xK20 | 4xK20 | (-np 1-4) |
| loop times |  |  |  |  |  |
^ 3d Lennard-Jones melt: for 100,000 steps with 32,000 atoms ^^^^^^
| GPU only | 1xK20 | 2xK20 | 3xK20 | 4xK20 | (-np 1-4) |
| loop times |  |  |  |  |  |
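
A sketch of the sweep that would fill in the CPU row; ''in.lj'' is the stock 3d Lennard-Jones melt input shipped in the LAMMPS ''bench'' directory, while the binary and log names are assumptions:

<code>
# run the LJ melt benchmark at each MPI rank count in the table
for np in 1 6 12 24 36; do
    mpirun -np $np ./lmp_gpu -in in.lj -l lj_np${np}.log
done
# LAMMPS ends every run with a "Loop time of ... on N procs" line
grep "Loop time" lj_np*.log
</code>
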
+ | |||
+ | |||
+ | |||
+ | |||
===== Lammps GPU Testing (MW) =====
Vendor: "There are currently two systems available, each with two 8-core Xeon E5-2670 processors, 32GB memory, 120GB SSD and two Tesla K20 GPUs. The hostnames are master and node2. | Vendor: "There are currently two systems available, each with two 8-core Xeon E5-2670 processors, 32GB memory, 120GB SSD and two Tesla K20 GPUs. The hostnames are master and node2. | ||

<code>
NODES=1
GPUIDX=0
# set GPUIDX=0 for 1 GPU/node or GPUIDX=1 for 2 GPU/node
# ...
-in film.inp -l film_1_gpu_1_node.log
date
</code>
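
The fragment above runs on a single host (''NODES=1''). To span both vendor machines, MPICH2's Hydra launcher accepts a host file; a sketch, with the rank count chosen to cover the 16 cores per host:

<code>
# hypothetical host file naming the two vendor test machines
cat > hosts.txt <<EOF
master
node2
EOF
# 32 ranks = 2 hosts x 16 cores; binary and file names are assumptions
mpirun -f hosts.txt -np 32 ./lmp_gpu -sf gpu -in film.inp -l film_2_nodes.log
</code>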
+ | |||
+ | Some tests using **lj/cut**, **eam**, **lj/ | ||
+ | |||
+ | * CPU only 4 mins 30 secs | ||
+ | * 1 GPU 0 mins 47 secs (a 5-6 times speed up) | ||
+ | * 2 GPUs 0 mins 46 secs (never saw 2nd GPU used, problem set too small?) | ||
+ | |||
+ | Some tests using a restart file and using **gayberne**, | ||
+ | |||
+ | * CPU only 1 hour 5 mins | ||
+ | * 1 GPU 3 mins and 33 secs (a 18-19 times peed up) | ||
+ | * 2 GPUs 2 mins (see below) | ||
+ | |||
+ | < | ||
+ | node2$ gpu-info | ||
+ | ==================================================== | ||
+ | Device | ||
+ | ==================================================== | ||
+ | 0 Tesla K20m 36 C 96 % | ||
+ | 1 Tesla K20m 34 C 92 % | ||
+ | ==================================================== | ||
</ | </ | ||
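
''gpu-info'' looks like a vendor-supplied wrapper; the stock driver tool ''nvidia-smi'' can report the same fields directly:

<code>
# stock NVIDIA equivalent of the gpu-info report above
nvidia-smi --query-gpu=index,name,temperature.gpu,utilization.gpu --format=csv
</code>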
\\
**[[cluster: