cluster:103 (revision 2011/12/22 19:34, hmeij)
And so the job was dispatched to host ''

The ''
| + | |||
| + | ==== Submit 2 ==== | ||
| + | |||
| + | On the back end compute nodes, unless specified, the job runs inside your home directory. | ||
| + | |||
| + | In the SAS program we add the following lines | ||
| + | |||
| + | < | ||
| + | %let jobpid = %sysget(LSB_JOBID); | ||
| + | libname here "/ | ||
| + | </ | ||
| + | |||
| + | And change this line to use local disks for storage | ||
| + | |||
| + | < | ||
| + | data here.one; | ||
| + | </ | ||
| + | |||
| + | In the submission script we change the following | ||
| + | |||
| + | * new submission file with edits | ||
| + | * -n implies reserve job slots (cpu cores) for job (not necesssary, SAS jobs will always use only one) | ||
| + | * -R reserves memory, for example, reserve 200 MB of memory on target compute node | ||
| + | * scheduler creates unique dirs in scratch by JOBPID for you, so we'll stage the job there | ||
| + | * but now we must copy relevant files //to// scratch dir and results back //to// home dir | ||
| + | |||
| + | |||
| + | < | ||
| + | |||
| + | # | ||
| + | # submit via 'bsub < run' | ||
| + | |||
| + | #BSUB -q hp12 | ||
| + | #BSUB -J test | ||
| + | #BSUB -o stdout | ||
| + | #BSUB -e stderr | ||
| + | #BSUB -n 1 | ||
| + | #BSUB -R " | ||
| + | |||
| + | # unique job dir in scratch | ||
| + | export MYSANSCRATCH=/ | ||
| + | cd $MYSANSCRATCH | ||
| + | |||
| + | cp ~/ | ||
| + | time sas test | ||
| + | cp test.log test.lst ~/sas | ||
| + | |||
| + | </ | ||
| + | |||
| + | |||
| + | * you can monitor the progress of your jobs from greentail while it runs | ||
| + | |||
| + | < | ||
| + | [hmeij@greentail sas]$ ll / | ||
| + | total 16 | ||
| + | -rw-r--r-- 1 hmeij its 33 Dec 21 14:31 test.dat | ||
| + | -rw-r--r-- 1 hmeij its 2568 Dec 21 14:31 test.log | ||
| + | -rw-r--r-- 1 hmeij its 258 Dec 21 14:31 test.lst | ||
| + | -rw-r--r-- 1 hmeij its 140 Dec 21 14:31 test.sas | ||
| + | </ | ||
| + | |||
| + | ==== Best Practices ==== | ||
| + | |||
| + | * You may submit as many SAS jobs as you like, just leave enough resources available for others to also get work done | ||
| + | * Because SAS submission are serial, non-parallel jobs your -n flag is always 1 | ||
| + | * Reserve resources if you know what you need, especially memory | ||
| + | * Use /sanscratch for large data jobs with heavy read/write operations | ||
| + | * Queue ehwfd is preferentially for Gaussian users and stay off the stata and matlab queues please | ||
| + | * Write smart SAS code, for example, use data set indexes and PROC SQL (this can be your best friend) | ||
| + | * ... suggestions will be added to this page | ||
| \\ | \\ | ||
| **[[cluster: | **[[cluster: | ||
cluster/103.1324494086.txt.gz · Last modified: by hmeij
