| \\ | \\ | ||
| **[[cluster: | **[[cluster: | ||
| + | |||
| + | Since deployment of sharptail the information below is out of date. /home is now the same across the entire HPCC and served out by sharptail. | ||
| + | |||
| + | --- // | ||
| ===== Sharptail Cluster ===== | ===== Sharptail Cluster ===== | ||
==== /sanscratch ====

Sharptail will provide the users (and scheduler) with another 5 TB scratch file system.

  * Please offload as much IO as possible from /home by staging your jobs in /sanscratch (see the sketch below)
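A rough sketch of that staging pattern: copy inputs from /home into a per-job directory on /sanscratch, run there, then copy results back. The directory naming, program name, and file names below are assumptions for illustration only, not a site convention.

<code>
#!/bin/bash
# minimal staging sketch (illustrative only): stage into /sanscratch, run, copy back
# the per-job directory naming below is an assumption, not a site convention
SCRATCH=/sanscratch/$USER/$$
mkdir -p $SCRATCH

# copy inputs out of /home so the heavy IO happens on /sanscratch
cp ~/myjob/input.dat $SCRATCH/
cd $SCRATCH

# run the program against the local copy (program name is hypothetical)
~/myjob/my_program input.dat > output.log 2>&1

# copy results back to /home and clean up
cp output.log ~/myjob/
cd ~ && rm -rf $SCRATCH
</code>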
In both cases you do not need to target any specific core; the operating system will handle that part of the scheduling.
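For example, a handful of independent runs can simply be started in the background and left to the operating system to spread over free cores; no taskset or numactl pinning is needed. The program and file names below are hypothetical.

<code>
# start four independent runs without any core pinning;
# the operating system places each process on a free core
# (program and input/output names are illustrative)
for i in 1 2 3 4; do
  ~/myjob/my_program input.$i > output.$i 2>&1 &
done
wait
</code>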

==== NOTE ====

----

The instructions below are obsolete; these resources are now available via the scheduler.

Please read [[cluster:

 --- //

----
==== CPU-HPC ====

With hyperthreading on, the 5 nodes provide 160 cores.

Since there is no scheduler, you need to set up your environment and execute your program yourself, as sketched below.
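A minimal sketch of running a job by hand, assuming you can ssh to a node directly; the PATH setting, thread count, and program name are illustrative only.

<code>
# log into a CPU node first, e.g.:  ssh sharptail
# (pick whichever node you have access to)

# set up the environment by hand, there is no scheduler to do it for you
# (paths below are illustrative; point them at your own software tree)
export PATH=$HOME/apps/bin:$PATH
export OMP_NUM_THREADS=8

# launch with nohup so the run survives logging out
cd ~/myjob
nohup my_program input.dat > output.log 2>&1 &
</code>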
Note: ran out of time to get an example running, but it should follow the LAMMPS approach above pretty closely.

Here is a quick Amber example:
<code>

[hmeij@sharptail nucleosome]$ export AMBER_HOME=/

# find a GPU ID with gpu-info then expose that GPU to pmemd
[hmeij@sharptail nucleosome]$ export CUDA_VISIBLE_DEVICES=1

# you only need one cpu core
[hmeij@sharptail nucleosome]$ mpirun_rsh -ssh -hostfile ~/
/

</code>
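Since the command above is truncated, here is a hedged sketch of what a complete single-GPU pmemd run along these lines might look like; the Amber install path, hostfile location, and input/output file names are assumptions, not the site's actual values.

<code>
# minimal sketch of a full single-GPU run; the Amber path, hostfile,
# and input/output names are illustrative only
# mpirun_rsh (MVAPICH2) takes environment variables inline, before the executable
# -np 1: one MPI rank is enough to drive one GPU
mpirun_rsh -ssh -np 1 -hostfile ~/hostfile \
  AMBER_HOME=/opt/amber12 CUDA_VISIBLE_DEVICES=1 \
  /opt/amber12/bin/pmemd.cuda.MPI -O \
  -i mdin -o mdout -p prmtop -c inpcrd -r restrt -x mdcrd
</code>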