cluster:116 [2013/08/05 15:44] hmeij → cluster:116 [2013/08/21 10:48] hmeij [Clashes]
In both cases you do not need to target any specific core; the operating system will handle that part of the scheduling.
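Since the kernel handles core placement on its own, you can confirm that a process is unpinned by reading its affinity mask. A minimal sketch, assuming a Linux host with util-linux's `taskset` installed (the fallback message is just for hosts without it):

```shell
# Check that the current shell is not pinned to a specific core.
# An unpinned process reports an affinity list covering every online
# core, confirming the kernel scheduler is free to place it anywhere.
affinity_out=$(command -v taskset >/dev/null 2>&1 && taskset -cp $$ || echo "taskset not available on this host")
echo "$affinity_out"
```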
+ | |||
+ | ==== NOTE ==== | ||
+ | |||
+ | |||
+ | ---- | ||
+ | |||
+ | Instructions below are obsolete, resources are now available via the scheduler. | ||
+ | |||
+ | Please read [[cluster: | ||
+ | |||
+ | --- // | ||
+ | |||
+ | ---- | ||
Note: I ran out of time to get an example running, but it should follow the LAMMPS approach above pretty closely.
+ | |||
+ | Here is quick Amber example | ||
+ | |||
+ | < | ||
+ | |||
+ | [hmeij@sharptail nucleosome]$ export AMBER_HOME=/ | ||
+ | |||
+ | # find a GPU ID with gpu-info then expose that GPU to pmemd | ||
+ | [hmeij@sharptail nucleosome]$ export CUDA_VISIBLE_DEVICES=1 | ||
+ | |||
+ | # you only need one cpu core | ||
+ | [hmeij@sharptail nucleosome]$ mpirun_rsh -ssh -hostfile ~/ | ||
+ | / | ||
+ | |||
+ | </ | ||
+ | |||