And so the job was dispatched to host ''n10'' for execution.  Results are posted in my home directory; in fact, the entire job ran in my home directory while on the remote compute node.  I may not want to do that if I process or generate a lot of data, so we're going to add some statements to the script next.  Also, I may want to reserve some memory so the scheduler does not dispatch the job to hosts with insufficient memory available, or so that another job dispatched later does not cause memory conflicts with mine.
  
The ''hp12'' queue is the cluster greentail's default queue; each of its compute nodes has a 12 GB memory footprint.  Memory footprints of hosts in the other queues differ, so please consult [[http://petaltail.wesleyan.edu/cgi-bin/bqueues_web.cgi|http://petaltail.wesleyan.edu/cgi-bin/bqueues_web.cgi]] for information about the other queues (some of the data there is old).
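
If the LSF client tools are in your path on greentail (they should be, since ''bsub'' works), the standard LSF commands below give a quick, current view of the queues and of per-host memory; the exact columns shown depend on the LSF version, so treat this as a sketch.

<code>
# list all queues and their current job counts
bqueues

# detailed configuration of the hp12 queue (limits, hosts, etc.)
bqueues -l hp12

# per-host load indices; the "mem" column is memory currently available
lsload
</code>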

==== Submit 2 ====

On the back-end compute nodes, unless specified otherwise, a job runs inside your home directory.  That job then competes with all other activity inside /home.  The compute nodes offer two other areas where jobs can be staged: /localscratch and /sanscratch.  The former is a local filesystem on each node and should be used if file locking is essential (a variant using it is sketched at the end of this section).  The latter is a 5 TB filesystem served from greentail's disk array via IPoIB (that is, NFS traffic over the fast interconnect switches, so performance should be much better than over gigabit ethernet switches).  It is comprised of disks and spindles that are not impacted by what happens on /home.  So we're going to use /sanscratch.

  * new submission file with edits
  * -n reserves job slots (cpu cores) for the job (not necessary here, a SAS job will always use one)
  * -R reserves memory; for example, reserve 200 MB of memory on the target compute node
  * the scheduler creates a unique directory in scratch, named after the job ID, for you; we'll stage the job there

<code>

#!/bin/bash
# submit via 'bsub < run'

#BSUB -q hp12
#BSUB -J test
#BSUB -o stdout
#BSUB -e stderr
#BSUB -n 1
#BSUB -R "rusage[mem=200]"

# unique job dir in scratch
export MYSANSCRATCH=/sanscratch/$LSB_JOBID
cd $MYSANSCRATCH

# copy the input data and program over, run SAS, copy results back home
cp ~/sas/test.dat ~/sas/test.sas .
time sas test
cp test.log test.lst ~/sas

</code>
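
As the comment at the top of the script notes, the file (here called ''run'') is fed to ''bsub'' on greentail; ''bjobs'' then shows its state and the host it was dispatched to.  A minimal sketch:

<code>
# submit the job file to the scheduler
bsub < run

# check its state (PEND, RUN, DONE) and the execution host
bjobs

# the job id reported here is also the directory name under /sanscratch
</code>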

  * you can monitor the progress of your jobs from greentail while they run
<code>
[hmeij@greentail sas]$ ll /sanscratch/492667/
total 16
-rw-r--r-- 1 hmeij its   33 Dec 21 14:31 test.dat
-rw-r--r-- 1 hmeij its 2568 Dec 21 14:31 test.log
-rw-r--r-- 1 hmeij its  258 Dec 21 14:31 test.lst
-rw-r--r-- 1 hmeij its  140 Dec 21 14:31 test.sas
</code>
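
If file locking is the concern, the same script can be pointed at /localscratch instead, as noted above.  Whether the scheduler also pre-creates the per-job directory there is not stated on this page, so the sketch below creates it itself; the variable name ''MYLOCALSCRATCH'' is just picked for the example.  Since /localscratch lives on the compute node's own disk, results must be copied back to /home before the job ends.

<code>

#!/bin/bash
# submit via 'bsub < run'
# sketch: same job staged in /localscratch instead of /sanscratch

#BSUB -q hp12
#BSUB -J test
#BSUB -o stdout
#BSUB -e stderr
#BSUB -n 1
#BSUB -R "rusage[mem=200]"

# per-job dir on the node's local disk; create it in case the scheduler did not
export MYLOCALSCRATCH=/localscratch/$LSB_JOBID
mkdir -p $MYLOCALSCRATCH
cd $MYLOCALSCRATCH

cp ~/sas/test.dat ~/sas/test.sas .
time sas test

# /localscratch is not visible from greentail, so copy results back to /home
cp test.log test.lst ~/sas

</code>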

\\ \\
**[[cluster:0|Back]]**