cluster:107 [2013/01/04 15:40] hmeij [Yale Qs]
cluster:107 [2013/01/16 14:57] hmeij [ConfCall & Quote: MW]
  * Experimental setup with 36 GB/node, dual 8-core chips
  * Nothing larger than that memory-wise, as CPU and GPU HPC work environments were not mixed
  * No raw code development
  * Speed-ups were hard to quantify
  * PGI Accelerator was used because
  * Double precision was most important in scientific applications
  * MPI flavor was OpenMPI, and others (including MVApich) showed no advantages
  * Book: Programming Massively Parallel Processors, Second Edition: A Hands-on Approach by David B. Kirk and Wen-mei W. Hwu (Dec 28, 2012)
    * Has examples of how to expose GPUs across nodes
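A common pattern for exposing GPUs across nodes, which the Kirk & Hwu examples build on, is to pin each node-local MPI rank to one GPU device. A minimal sketch of that mapping logic (the function name and round-robin policy are illustrative assumptions, not taken from these notes; a real job would pass the result to `cudaSetDevice` or the `CUDA_VISIBLE_DEVICES` environment variable):

```python
def gpu_for_rank(local_rank, gpus_per_node):
    """Round-robin assignment of a node-local MPI rank to a GPU device id.

    local_rank: rank index of this process on its node (0-based)
    gpus_per_node: number of GPU devices installed per node
    """
    return local_rank % gpus_per_node

# Example: on a 2-GPU node, node-local ranks 0..3 map to devices 0,1,0,1
devices = [gpu_for_rank(r, 2) for r in range(4)]
```

With one rank per GPU (the usual Amber/LAMMPS/NAMD launch style), `gpus_per_node` equals the ranks-per-node count and each rank gets a dedicated device.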
  * or get 5 years total warranty
  * Amber, LAMMPS, NAMD
  * cuda v4&5
  * install/
\\
**[[cluster: