Scratch Spaces

We have different scratch spaces available (more details to come):

  • /localscratch
    • local to each node, of different sizes
  • /sanscratch
    • two 5 TB file systems mounted via NFS over IPoIB
  • /localscratch5tb
    • 5 TB of local scratch space per node on the mw256fd nodes (see below)
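
A quick way to see which of these scratch areas a node provides, and how much free space each has, is to query the mount points directly. Only the paths listed above are assumed here; /localscratch5tb is only present on the mw256fd nodes.

# on a compute node, show size and free space of the scratch areas
df -h /localscratch /sanscratch /localscratch5tb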

48 TB of local scratch space will be available in 6 TB chunks on the nodes in the queue mw256fd. That yields 5 TB of local scratch space per node. Everybody may use it, but it has specifically been put in place for Gaussian jobs that generate massive RWF files (application scratch files).
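
A minimal sketch of pointing Gaussian at that large scratch area, assuming the per-job directory under /localscratch5tb must be created by the job itself (the MYLOCALSCRATCH5TB name is just an illustration, not a variable the scheduler sets):

# hypothetical: use the 5 TB local scratch for jobs with very large RWF files
MYLOCALSCRATCH5TB=/localscratch5tb/$LSB_JOBID
mkdir -p $MYLOCALSCRATCH5TB
export GAUSS_SCRDIR="$MYLOCALSCRATCH5TB"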

#!/bin/bash
# submit like so: bsub < run.forked

# scheduler directives (must appear before the first executable line)
#BSUB -q mw256fd
#BSUB -o out
#BSUB -e err
#BSUB -J test

# job slots: match inside gaussian.com
#BSUB -n 4
# force all onto one host (shared code and data stack)
#BSUB -R "span[hosts=1]"

# if writing large checkpoint files uncomment next lines
#ionice -c 2 -n 7 -p $$
#ionice -p $$

# clean up results of previous runs
rm -rf err* out* output.*

# unique job scratch dirs
MYSANSCRATCH=/sanscratch/$LSB_JOBID
MYLOCALSCRATCH=/localscratch/$LSB_JOBID
export MYSANSCRATCH MYLOCALSCRATCH

# cd to remote working dir
cd $MYSANSCRATCH
pwd

# environment
export GAUSS_SCRDIR="$MYSANSCRATCH"

export g09root="/share/apps/gaussian/g09root"
. $g09root/g09/bsd/g09.profile

#export gdvroot="/share/apps/gaussian/gdvh11"
#. $gdvroot/gdv/bsd/gdv.profile

# copy input data to fast disk
cp ~/jobs/forked/gaussian.com .
touch gaussian.log

# run plain vanilla
g09 < gaussian.com > gaussian.log

# run dev
#gdv < gaussian.com > gaussian.log

# save results back to homedir
cp gaussian.log ~/jobs/forked/output.$LSB_JOBID
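
Assuming the script above is saved as run.forked (the name used in its own submit comment), a typical workflow would be:

bsub < run.forked                  # submit the job to LSF
bjobs                              # check job status
cat ~/jobs/forked/output.<jobid>   # inspect the results once the job finishes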

