===== Scratch Spaces =====
We have different scratch spaces available for temporary job files:

  * **/localscratch**
    * Local to each node, different sizes, roughly around 50-80 GB
    * Warning: on nodes n46-n59 there is no hard disk but a SataDOM (a USB device plugged directly into the system board, 16 GB in size, holding just the OS). Do not use /localscratch on those nodes.

  * **/sanscratch**
    * 55 TB file system mounted over IPoIB (NFS) or plain Ethernet
    * greentail52 is the file server
    * /...
    * /...

  * **/localscratch5tb**
    * 5 TB file system provided by local drives (3x2TB, Raid 0) on each node in the ''mw256fd'' queue
    * Nodes done: n38-n45, all done (10sep15)

  * **/...**
    * 2 TB file system on nodes in queue ''...''

  * **/...**
    * ~800 GB file system on nodes in queue ''...''

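To check which of these scratch areas a particular node actually mounts, and how full they are, something like the following works (a minimal sketch; ''df'' simply reports an error for any path a node does not have):

<code>
# report size and usage of the scratch file systems on this node
# (not every node mounts every one of these)
df -h /localscratch /sanscratch /localscratch5tb
</code>
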
48 TB of local scratch space will be made available in 6 TB chunks on the nodes in the queue ''...''.

**Note: Everybody is welcome to store content in ''/sanscratch'', but remember it is scratch space: keep a copy elsewhere of anything you cannot afford to lose.**

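If you do park files there for a while, a per-user directory keeps the top level tidy (a suggested convention, not something the scheduler requires; ''$USER'' is set by the shell):

<code>
# create (if needed) a personal area under the shared scratch
mkdir -p /sanscratch/$USER
# big_dataset.tar is a placeholder file name
cp big_dataset.tar /sanscratch/$USER/
</code>
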
You need to change your working directory to the location the scheduler has made for you, and save your output before the job terminates; the scheduler removes that working directory when the job ends. Here is the workflow...

<code>

#!/bin/bash
# submit like so: bsub < run.forked

# if writing large checkpoint files uncomment next lines
#ionice -c 2 -n 7 -p $$
#ionice -p $$

#BSUB -q mw256fd
#BSUB -o out
#BSUB -e err
#BSUB -J test

# job slots: match inside gaussian.com
#BSUB -n 4
# force all onto one host (shared code and data stack)
#BSUB -R "span[hosts=1]"

# unique job scratch dirs made by the scheduler for this job
MYSANSCRATCH=/sanscratch/$LSB_JOBID
MYLOCALSCRATCH=/localscratch/$LSB_JOBID
MYLOCALSCRATCH5TB=/localscratch5tb/$LSB_JOBID
export MYSANSCRATCH MYLOCALSCRATCH MYLOCALSCRATCH5TB

# cd to remote working directory
cd $MYLOCALSCRATCH5TB
pwd

# environment: point Gaussian scratch at the 5 TB local area
export GAUSS_SCRDIR="$MYLOCALSCRATCH5TB"

# adjust g09root to your site's Gaussian install location
export g09root="/share/apps/gaussian"
. $g09root/g09/bsd/g09.profile

# development version, if installed (adjust gdvroot likewise)
#export gdvroot="/share/apps/gaussian"
#. $gdvroot/gdv/bsd/gdv.profile

# stage input data to localscratch5tb (adjust path to your input file)
cp ~/gaussian/gaussian.com .
touch gaussian.log

# run plain vanilla
g09 < gaussian.com > gaussian.log

# run dev
#gdv < gaussian.com > gaussian.log

# save results back to homedir !!!
cp gaussian.log ~/gaussian/

</code>
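
Assuming the script above is saved as ''run.forked'' (the name used in its own comments), a typical submit-and-monitor sequence looks like this; ''bsub'', ''bjobs'' and ''bpeek'' are standard LSF commands:

<code>
bsub < run.forked    # submit; LSF replies with the job id, e.g. Job <12345>
bjobs                # list your pending and running jobs
bpeek 12345          # peek at the stdout of a running job (use your job id)
</code>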