<code>
^v1  ^v2  ^v3  ^v4  ^v5  ^v6  ^v7  ^v8 ^
|3138|3130|3149|3133|3108|3119|3110|3113|

# time to process queues of different sizes (I stage them with the --begin parameter)
# jobs do have to open the output files though, just some crude testing of slurm
# scheduling prowess

^NrJobs^1,000^10,000^25,000^
|mm:ss | 0:33|  6:32| 19:37|

# 20 mins for 25,000 jobs via sbatch
  
</code>
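The staging itself is just a submit loop; below is a minimal sketch of the idea, where "job.sh" and the ten-minute delay are my assumptions, not the exact script used for these numbers:

<code>
#!/bin/bash
# stage a batch of trivial jobs with --begin so they all sit queued
# before any can start; job.sh is a hypothetical do-nothing script,
# and -o makes each job open its own output file
for i in $(seq 1 1000); do
  sbatch --begin=now+10minutes -o out.$i job.sh
done
</code>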

<code>
      
PartitionName=test Nodes=v[1-8] Default=YES MaxTime=INFINITE State=UP

</code>
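To sanity-check the partition, something like ''sinfo'' works; the output below is illustrative of the expected shape, not captured from these nodes:

<code>
$ sinfo -p test
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
test*        up   infinite      8   idle v[1-8]
</code>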


==== 25K+ ====

At around 32K jobs I ran into I/O problems.
  
<code>

sbatch: error: Batch job submission failed: I/O error writing script/environment to file

</code>

This turned out to be an OS-level error from the ext3 file system: the maximum number of files and directories in one directory had been exceeded (ext3 allows only about 32,000 subdirectories per directory, which lines up with submissions failing at around 32K jobs).

Switching "StateSaveLocation=/sanscratch_sharptail/tmp/slurmctld" to an XFS file system, which does not have this limit.
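A rough sketch of the switch; the slurm user/group name and the need for a full slurmctld restart are my assumptions about this setup:

<code>
# confirm the target location is really on XFS
df -T /sanscratch_sharptail

# create the new state directory, owned by the slurm user
mkdir -p /sanscratch_sharptail/tmp/slurmctld
chown slurm:slurm /sanscratch_sharptail/tmp/slurmctld

# point StateSaveLocation at it in slurm.conf, then restart slurmctld;
# a plain "scontrol reconfigure" is likely not enough for this parameter
</code>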


==== High Throughput ====

[[https://computing.llnl.gov/linux/slurm/high_throughput.html]]

Relevant settings from that guide (shown in slurm.conf form below):

  * MaxJobCount=100000
  * SlurmctldPort=6820-6825
  * SchedulerParameters=max_job_bf=100,interval=30
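In slurm.conf these read as follows; the comments are my gloss on the guide's intent, not from the original page:

<code>
# allow many more queued jobs than the default
MaxJobCount=100000
# a range of ports lets slurmctld spread bursts of incoming RPCs
SlurmctldPort=6820-6825
# cap the jobs the backfill scheduler examines per cycle, run every 30s
SchedulerParameters=max_job_bf=100,interval=30
</code>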
\\
**[[cluster:0|Back]]**