cluster:134 [2014/08/19 15:46] → [2014/08/19 19:52] hmeij [High Throughput]
|100,000|8|03:20|  |  |
  
The N=16 is 8 one-core VMs plus swallowtail itself (8 cores), so we're scheduling bound. But 50,000 jobs per hour is grand! That would certainly exceed what we throw at it. And that's on old hardware.
  
  
</code>
  
After fixing that. Hmmm.
  
^NrJobs^N^hh:mm^
| 1,000|8|00:02|
|10,000|8|00:22|
|15,000|8|00:31|
|20,000|8|00:41|
  
  
Debug level is 3. Falling back to proctrack/pgid and setting the debug level to 1. Also setting SchedulerType=sched/builtin (removing the backfill).

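In slurm.conf terms, the changes described above would look roughly like this. This is a sketch, not the page's actual config: ProctrackType, SchedulerType, and SlurmctldDebug are real SLURM parameters, but exactly which debug knob was at level 3 is not stated on the page, so the debug line is an assumption.

```
ProctrackType=proctrack/pgid    # fall back to process-group tracking
SchedulerType=sched/builtin     # plain FIFO scheduling, no backfill
SlurmctldDebug=1                # assumption: the "debug level" being lowered from 3
```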
^NrJobs^N^hh:mm:ss^
| 1,000|8|00:00:36|
|10,000|8|00:06:05|
|25,000|8|00:??:00|
|50,000|8|00:??:00|
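As a rough sanity check (a throwaway helper, not from the original page), jobs-per-hour can be derived from the two complete rows in the table above:

```shell
# jobs/hour = NrJobs * 3600 / wall time in seconds (integer arithmetic)
jobs_per_hour() { echo $(( $1 * 3600 / $2 )); }

jobs_per_hour 1000 36     # 1,000 jobs in 00:00:36 -> 100000 jobs/hr
jobs_per_hour 10000 365   # 10,000 jobs in 00:06:05 -> 98630 jobs/hr
```

So throughput at these job counts is near the 100,000 jobs/hour mark, well above the earlier runs.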

(I also added a prolog/epilog script to my submit job script which creates /localscratch/$SLURM_JOB_ID, echoes the date into file foo, then cats foo to standard out, and finishes by removing the scratch dir.) These prolog/epilog actions need to be done by slurmd, but so far it errors for me.
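The scratch-dir bookkeeping described above could be sketched like this. This is a sketch, not the page's actual script: in production BASE would be /localscratch and JOB would come from slurmd as $SLURM_JOB_ID; the fallbacks to /tmp and "demo" are assumptions added so the logic runs unprivileged.

```shell
#!/bin/bash
# Sketch of the prolog/epilog actions described above.
BASE=${SCRATCH_BASE:-/tmp/localscratch}   # assumption: /tmp stand-in for /localscratch
JOB=${SLURM_JOB_ID:-demo}                 # assumption: fallback when slurmd isn't setting it

# prolog (would be run by slurmd via Prolog= in slurm.conf)
mkdir -p "$BASE/$JOB"                     # create per-job scratch dir
date > "$BASE/$JOB/foo"                   # echo the date into file foo

# job step: cat foo to standard out
cat "$BASE/$JOB/foo"

# epilog (would be run by slurmd via Epilog= in slurm.conf)
rm -rf "$BASE/$JOB"                       # remove the scratch dir
```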
  
  
\\
**[[cluster:0|Back]]**
cluster/134.txt · Last modified: 2014/08/22 13:05 by hmeij