cluster:107 [2013/01/03 15:00] hmeij [ConfCall & Quote: AC]
cluster:107 [2013/01/16 15:20] hmeij [ConfCall & Quote: MW]
  * What MPI flavor is used most with regard to GPU computing?
  * Do you leverage the idle CPU capacity of the GPU HPC cluster? For example, with 16 GPUs and 64 CPU cores on a cluster, do you allow 48 standard jobs on the idle cores (assuming a maximum of 16 serial GPU jobs)?
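The core-allocation arithmetic behind the second question can be sketched as follows. The numbers (64 cores, 16 GPUs, one dedicated core per serial GPU job) come from the example above; the helper function name is ours, purely for illustration:

```python
# Sketch of the idle-core arithmetic from the example above:
# each serial GPU job pins one CPU core, and the remaining
# cores are available for standard (non-GPU) jobs.

def idle_core_slots(total_cores, total_gpus, cores_per_gpu_job=1):
    """Cores left over for standard jobs when every GPU runs a serial job."""
    return total_cores - total_gpus * cores_per_gpu_job

# 64 cores, 16 GPUs at 1 core each -> 48 slots for standard jobs
print(idle_core_slots(64, 16))  # 48
```

The same helper shows why the CPU-to-GPU ratio matters: at a 1-to-3 ratio (three cores feeding each GPU job), only 64 - 48 = 16 cores remain idle.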

Notes 04/01/2012 ConfCall

  * Applications drive the CPU-to-GPU ratio; most will be 1-to-1, and certainly no larger than 1-to-3
  * Users did not share GPUs but could obtain more than one, always on the same node
  * Experimental setup with 36 GB/node, dual 8-core chips
  * Nothing larger than that memory-wise, as the CPU and GPU HPC work environments were not mixed
  * No raw code development
  * Speed-ups were hard to quantify
  * PGI Accelerator was used because it is needed with any Fortran code (Note!)
  * Double precision was most important in scientific applications
  * MPI flavor was OpenMPI; others (including MVAPICH) showed no advantages
  * Book: Programming Massively Parallel Processors, Second Edition: A Hands-on Approach by David B. Kirk and Wen-mei W. Hwu (Dec 28, 2012)
    * Has examples of how to expose GPUs across nodes
==== ConfCall & Quote: AC ====
  * or get 5 years total warranty
  * Testing notes
    * Amber, LAMMPS, NAMD
    * CUDA v4 & v5
    * install/
    * use GNU ... with OpenMPI
    * make deviceQuery
\\
**[[cluster: