We have several locations for scratch space: some are local to the nodes, others are mounted across the network. Here is the current setup as of August 2015.
48 TB of local scratch space will be made available in 6 TB chunks on the nodes in the queue mw256fd. That yields roughly 5 TB of usable local scratch space per node (RAID 0, ext4 filesystem), mounted at /localscratch5tb. Everybody may use it, but it has specifically been put in place for Gaussian jobs that generate massive RWF files (application scratch files).
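To see how much of that scratch area is free before launching a job that writes a large RWF file, you can query the mount point on the node itself (for example from inside an interactive or running job). A minimal sketch, assuming the mount point above and the per-job directory the scheduler creates:

# show free space on the 5 TB local scratch area (run on an mw256fd node)
df -h /localscratch5tb

# the scheduler creates (and later removes) a per-job directory underneath it
ls -ld /localscratch5tb/$LSB_JOBID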
You need to change your working directory to the per-job location the scheduler has created for you, and save your output before the job terminates, because the scheduler will remove that working directory when the job ends. Here is the workflow:
#!/bin/bash
# submit like so: bsub < run.forked

# if writing large checkpoint files uncomment next lines
#ionice -c 2 -n 7 -p $$
#ionice -p $$

#BSUB -q mw256fd
#BSUB -o out
#BSUB -e err
#BSUB -J test
# job slots: match inside gaussian.com
#BSUB -n 4
# force all onto one host (shared code and data stack)
#BSUB -R "span[hosts=1]"

# unique job scratch dirs
MYSANSCRATCH=/sanscratch/$LSB_JOBID
MYLOCALSCRATCH=/localscratch/$LSB_JOBID
MYLOCALSCRATCH5TB=/localscratch5tb/$LSB_JOBID
export MYSANSCRATCH MYLOCALSCRATCH MYLOCALSCRATCH5TB

# cd to remote working directory
cd $MYLOCALSCRATCH5TB
pwd

# environment
export GAUSS_SCRDIR="$MYLOCALSCRATCH5TB"
export g09root="/share/apps/gaussian/g09root"
. $g09root/g09/bsd/g09.profile
#export gdvroot="/share/apps/gaussian/gdvh11"
#. $gdvroot/gdv/bsd/gdv.profile

# stage input data to localscratch5tb
cp ~/jobs/forked/gaussian.com .
touch gaussian.log

# run plain vanilla
g09 < gaussian.com > gaussian.log
# run dev
#gdv < gaussian.com > gaussian.log

# save results back to homedir !!!
cp gaussian.log ~/jobs/forked/output.$LSB_JOBID
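A typical submit-and-check sequence, assuming the script above is saved as run.forked in ~/jobs/forked (the staging directory the script copies from and to); the job ID reported by bjobs is the value substituted for $LSB_JOBID:

# submit to the scheduler (LSF reads the #BSUB directives from the script)
bsub < run.forked

# monitor the job; the JOBID column corresponds to $LSB_JOBID
bjobs

# once the job finishes, the Gaussian log is back in the home directory
ls -l ~/jobs/forked/output.*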