cluster:116

  
In both cases you do not need to target any specific core; the operating system will handle that part of the scheduling.
==== NOTE ====

----

The instructions below are obsolete; these resources are now available via the scheduler.

Please read [[cluster:119|Submitting GPU Jobs]]

 --- //[[hmeij@wesleyan.edu|Meij, Henk]] 2013/08/21 10:46//

----
  
  
==== CPU-HPC ====
  
With hyperthreading, the 5 nodes provide 160 cores. We need to reserve 20 cores for the GPUs (one per GPU, 4 per node), and let's reserve another 20 cores for the OS (4 per node). That still leaves 120 cores for regular jobs like you are used to on greentail. These 120 cores (24 per node) will show up later as a new queue on greentail/swallowtail; one fit for jobs that need a lot of memory. On average, 256 gb per node minus 20 gb for the 4 GPUs minus 20 gb for the OS leaves 5.6 gb ''per core''.
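Spelled out, the core budget from the numbers above works out as follows:

<code>
5 nodes x 32 hyperthreaded cores/node         = 160 cores total
reserved for the GPUs (1 per GPU, 4 per node) =  20 cores
reserved for the OS (4 per node)              =  20 cores
left over for regular jobs                    = 120 cores  -> 120 / 5 nodes = 24 per node
</code>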
  
So since there is no scheduler, you need to set up your environment and execute your program yourself. Here is an example of a program that normally runs on the imw queue. If your program involves MPI you need to be a bit up to speed on what the lava wrapper actually does for you.
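A minimal sketch of such a manual MPI launch is shown below; the MPI installation path, node name, and program name are placeholders (only ''mpirun_rsh'', the ''-ssh''/''-hostfile''/''-np'' options, and the host file location are taken from the Amber example further down).

<code>
# hypothetical paths and names, for illustration only
# put your MPI installation on the PATH (the lava wrapper would normally do this for you)
[hmeij@sharptail ~]$ export PATH=/path/to/your/mpi/bin:$PATH

# the host file lists the node(s) the MPI tasks should run on
[hmeij@sharptail ~]$ cat ~/sharptail/hostfile
n1
n1

# launch directly with mpirun_rsh; -np sets the number of MPI tasks
[hmeij@sharptail ~]$ mpirun_rsh -ssh -hostfile ~/sharptail/hostfile -np 2 ./my_mpi_program &
</code>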
  
Note: ran out of time to get an example running, but it should follow the LAMMPS approach above pretty closely. The binary is in /cm/shared/apps/amber/amber12/bin/pmemd.cuda.MPI

Here is a quick Amber example:

<code>
[hmeij@sharptail nucleosome]$ export AMBER_HOME=/cm/shared/apps/amber/amber12

# find a GPU ID with gpu-info, then expose only that GPU to pmemd
[hmeij@sharptail nucleosome]$ export CUDA_VISIBLE_DEVICES=1

# you only need one cpu core
[hmeij@sharptail nucleosome]$ mpirun_rsh -ssh -hostfile ~/sharptail/hostfile -np 1 \
/cm/shared/apps/amber/amber12/bin/pmemd.cuda.MPI -O -o mdout.1K10 -inf mdinfo.1K10 -x mdcrd.1K10 -r restrt.1K10 -ref inpcrd &
</code>
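To check that the run actually uses the GPU you exposed, something like this works (''gpu-info'' is the same utility mentioned in the comment above; ''mdinfo.1K10'' is the info file this run writes):

<code>
# the chosen GPU should now show activity
[hmeij@sharptail nucleosome]$ gpu-info

# follow pmemd's progress via its info file
[hmeij@sharptail nucleosome]$ tail -f mdinfo.1K10
</code>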
  
  