From the test job script:

<code>
echo "$SLURMD_NODENAME JOB_PID=$SLURM_JOB_ID"
date
</code>

==== IO error ====

At around 32K jobs I ran into IO problems.
  
<code>
sbatch: error: Batch job submission failed: I/O error writing script/environment to file
</code>

This turns out to be an OS error from the ext3 file system: too many files and directories in one directory. ext3 caps a single directory at roughly 32,000 subdirectories, which lines up with jobs failing at around 32K.

The fix: switch "StateSaveLocation=/sanscratch_sharptail/tmp/slurmctld" to an XFS file system, which does not have this limit.
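
To verify which file system backs the state save directory before and after the move, a quick check (df -T is standard; the path is the one from slurm.conf above):

<code>
# report the file system type backing the slurmctld state directory
df -T /sanscratch_sharptail/tmp/slurmctld
# after the switch, the Type column should read xfs instead of ext3
</code>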


==== High Throughput ====

[[https://computing.llnl.gov/linux/slurm/high_throughput.html]]

Relevant settings:

  * MaxJobCount=120000
  * SlurmctldPort=6820-6825
  * SchedulerParameters=max_job_bf=100,interval=30 (NOTE: unset these; the page states the default values are fine. See the excerpt below.)

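Taken together, a sketch of the corresponding slurm.conf excerpt (throughput-related lines only; SchedulerParameters is commented out since the defaults are fine):

<code>
# slurm.conf excerpt: high-throughput tuning
MaxJobCount=120000                # allow far more jobs in the queue at once
SlurmctldPort=6820-6825           # spread incoming RPC traffic over six ports
#SchedulerParameters=max_job_bf=100,interval=30   # unset, defaults are fine
</code>
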
^NrJobs^N^hh:mm^N^hh:mm^
|50,000|8|00:56| | |
|75,000|8|01:53|16|01:57|
|100,000|8|03:20| | |

N=16 is eight one-core VMs plus swallowtail itself (8 cores). Doubling N barely changed the 75,000-job time, so we are scheduling bound. But 50,000 jobs per hour is grand! That certainly exceeds anything we would throw at it, and it's on old hardware.
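
That headline rate follows from the first row of the table:

<code>
50,000 jobs / 56 minutes * 60 = ~53,600 jobs per hour
</code>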

One warning did turn up along the way:

<code>
WARNING: We will use a much slower algorithm with proctrack/pgid, use
Proctracktype=proctrack/linuxproc or some other proctrack when using jobacct_gather/linux
</code>
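
The fix is a change of process-tracking plugin in slurm.conf; a minimal sketch (the JobAcctGather line is shown for context, assuming it is already set as the warning implies):

<code>
# track processes via the Linux proc table instead of process group IDs
ProctrackType=proctrack/linuxproc
# accounting plugin already in use, per the warning above
JobAcctGatherType=jobacct_gather/linux
</code>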

After fixing that:


^NrJobs^N^hh:mm^
|10,000|8|00:56|


Debug level is 3. Maybe go to 1.
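
That would be another one-line slurm.conf change (a sketch, assuming the numeric log levels where the default of 3 maps to info):

<code>
# quieter controller logging; less log traffic per job helps throughput
SlurmctldDebug=1
</code>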
  
\\
**[[cluster:0|Back]]**