==== Summary ====
  
The purpose of this testing is to find out how fast the storage systems respond when directly attached to compute nodes, or when attached via ethernet (gigabit ethernet) or InfiniBand (SDR via queue imw or QDR via queue hp12).  When using InfiniBand interconnects we use IPoIB (IP traffic over InfiniBand, which theoretically might be 3-4 times faster than ethernet).
  
So, nothing beats directly attached storage of course (scenario: fastlocal.dell.out below), the disk arrays attached to the compute nodes in the ehwfd queue.  Each node is presented with 230 GB of dedicated disk space provided by seven 10K disks using RAID 0 (all drives read and write simultaneously).  The IOZone suite finished in an hour.
  
However, that queue may be a bottleneck (only 4 compute nodes in the ehwfd queue), or perhaps 230 GB is not enough (for 8 job slots).  So one alternative is to use MYSANSCRATCH in your submit scripts.  MYSANSCRATCH refers to a directory made for you by the scheduler at location /sanscratch/JOBPID, which is a RAID 5 filesystem of 5 TB provided by 5 disks spinning at 7.2K.  The IOZone suite was done in 2 hrs 45 mins (scenario: san.hp.out).
  
For an example of using MYSANSCRATCH, look at the bottom of this page.  You will have to stage your data in the directory provided and copy the results back to your home directory when finished.  The scheduler will remove the directory.
  
==== IOZone ====
  * [[http://greentail.wesleyan.edu:81/iozone/greentail/report_home|report_home]]
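
These reports were produced with the IOZone benchmark suite.  As a rough sketch only (not the exact invocation behind the report above), a single automatic-mode run against the SAN scratch area could look like this; the test directory, file size cap and output file names below are assumptions:

<code>
# sketch: IOZone automatic mode against the SAN scratch area
# (directory, size cap and output names are assumptions)
cd /sanscratch/iozone-test
iozone -a -g 4g -f ./iozone.tmp -R -b san.hp.xls > san.hp.out
</code>

The -a flag runs the full automatic test matrix, -g caps the maximum file size, and -R/-b write an Excel-style report alongside the plain text output.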
  
==== Sample ====

Using MYSANSCRATCH with Gaussian jobs (you can use any queue but hp12 will be the fastest):

<code>

#!/bin/bash
#BSUB -q hp12
#BSUB -o out
#BSUB -e err
#BSUB -J test
# job slots: change both lines, also inside gaussian.com
#BSUB -n 8
#BSUB -x
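# -x requests exclusive use of the node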

# unique job scratch dirs
MYSANSCRATCH=/sanscratch/$LSB_JOBID
MYLOCALSCRATCH=/localscratch/$LSB_JOBID
export MYSANSCRATCH MYLOCALSCRATCH

# cd to remote working dir
cd $MYSANSCRATCH
pwd

# environment
export GAUSS_SCRDIR="$MYSANSCRATCH"
export g09root="/share/apps/gaussian/g09root"
. $g09root/g09/bsd/g09.profile

# stage input data
rm -rf ~/gaussian/err ~/gaussian/out*
cp ~/gaussian/gaussian.com .

# run
time g09 < gaussian.com > gaussian.log

# save results
cp gaussian.log ~/gaussian/output.$LSB_JOBID
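# note: the scheduler removes the $MYSANSCRATCH directory when the job ends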
</code>
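
The script above would be submitted to the LSF scheduler with bsub, for example as follows (the file name run is just an illustration):

<code>
bsub < run
</code>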
  
\\
**[[cluster:0|Home]]**