Scratch Spaces

We provide several scratch spaces for jobs (more details to come):

  • /localscratch
    • local to each node, of different sizes
  • /sanscratch
    • two 5 TB file systems mounted via IPoIB using NFS
  • /localscratch5tb
    • 5 TB of local scratch per node, only on nodes in the mw256fd queue (see below)

48 TB of local scratch space will be available in 6 TB chunks on the nodes in the mw256fd queue. That yields about 5 TB of local scratch space per node using RAID 0 and an ext4 file system. Everybody may use this space, but it has specifically been put in place for Gaussian jobs that produce massive RWF files (application scratch files).
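
To see how much scratch space is free on a node, you can simply query the mount points with df; a minimal sketch (note that /localscratch5tb only exists on the mw256fd nodes, and /localscratch sizes vary per node):

# show free space on the scratch file systems (run on a compute node)
df -h /localscratch /sanscratch /localscratch5tb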

You need to change your working directory to the location the scheduler has made for you. Also, save your output before the job terminates; the scheduler will remove that working directory when the job ends. Here is the workflow…

#!/bin/bash
# submit like so: bsub < run.forked

# if writing large checkpoint files uncomment next lines
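# (ionice -c 2 -n 7 puts the process in the best-effort I/O class at the
#  lowest priority, limiting the impact of heavy checkpoint I/O on other jobs)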
#ionice -c 2 -n 7 -p $$
#ionice -p $$

#BSUB -q mw256fd
#BSUB -o out
#BSUB -e err
#BSUB -J test

# job slots: must match the processor count requested inside gaussian.com
#BSUB -n 4
# force all onto one host (shared code and data stack)
#BSUB -R "span[hosts=1]"

# unique job scratch dirs
MYSANSCRATCH=/sanscratch/$LSB_JOBID
MYLOCALSCRATCH=/localscratch/$LSB_JOBID
MYLOCALSCRATCH5TB=/localscratch5tb/$LSB_JOBID
export MYSANSCRATCH MYLOCALSCRATCH MYLOCALSCRATCH5TB

# cd to the scratch working directory the scheduler created for this job
cd $MYLOCALSCRATCH5TB
pwd

# environment
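# GAUSS_SCRDIR tells Gaussian where to write its scratch files (RWF, etc.)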
export GAUSS_SCRDIR="$MYLOCALSCRATCH5TB"

export g09root="/share/apps/gaussian/g09root"
. $g09root/g09/bsd/g09.profile

#export gdvroot="/share/apps/gaussian/gdvh11"
#. $gdvroot/gdv/bsd/gdv.profile

# stage input data to localscratch5tb
cp ~/jobs/forked/gaussian.com .
touch gaussian.log

# run plain vanilla
g09 < gaussian.com > gaussian.log

# run dev
#gdv < gaussian.com > gaussian.log

# save results back to homedir !!!
cp gaussian.log ~/jobs/forked/output.$LSB_JOBID
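
A typical way to submit and then keep an eye on this script, assuming it is saved as run.forked in ~/jobs/forked (the directory used above):

cd ~/jobs/forked
bsub < run.forked       # submit; bsub prints the job id
bjobs                   # check the job state (PEND/RUN/DONE)
bpeek <jobid>           # peek at the running job's stdout
ls -lh output.*         # results are copied back here when the job ends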

