cluster:134 [2014/08/14 15:10] hmeij [Slurm]
cluster:134 [2014/08/19 15:33] hmeij [High Throughput]
==== IO error ====

At around 32,000 jobs I ran into IO problems:

<code>
sbatch: error: Batch job submission failed: I/O error writing script/
</code>

This is an OS error from the ext3 file system: the maximum number of files and subdirectories per directory was exceeded. Ext3 allows only about 32,000 subdirectories in a single directory, which matches submissions starting to fail at around 32K jobs.

Switching "
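The limit is easy to see outside Slurm: each queued job gets its own script directory, so a spool directory on ext3 tops out near 32,000 entries. A minimal sketch (no Slurm needed; the count of 100 is just illustrative, since on ext3 the mkdir calls would start failing with "Too many links" near the cap):

<code bash>
# Create per-job style subdirectories in a temp dir and count them.
# On an ext3 spool directory the mkdir calls would begin to fail
# around entry ~32,000; we stop at 100 to keep the demo cheap.
d=$(mktemp -d)
for i in $(seq 1 100); do
  mkdir "$d/job_$i"
done
count=$(ls "$d" | wc -l)
echo "$count entries"
rm -rf "$d"
</code>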
+ | |||
+ | |||
==== High Throughput ====

[[https://

  * MaxJobCount=120000
  * SlurmctldPort=6820-6825
  * SchedulerParameters=max_job_bf=100,

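In slurm.conf these settings look like the following sketch (the comments are my gloss; the SchedulerParameters line is truncated in the notes above, so anything past max_job_bf=100 is left out rather than guessed):

<code>
# slurm.conf fragment for higher throughput (sketch)
MaxJobCount=120000              # allow a much deeper job table
SlurmctldPort=6820-6825         # a port range spreads incoming RPC load
SchedulerParameters=max_job_bf=100
</code>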
^NrJobs^N^hh:
|50,
|75,
|100,

N=16 is 8 one-core VMs plus swallowtail itself (8 cores), so we're scheduling bound. But 50,000 jobs per hour is grand! It certainly exceeds anything we would throw at it, and that's on old hardware.

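For scale, 50,000 jobs an hour works out to roughly 13-14 jobs per second sustained, which a quick shell calculation confirms:

<code bash>
# Convert the measured throughput to jobs per second.
jobs=50000
seconds=3600
rate=$((jobs / seconds))   # integer division: ~13 jobs/sec
echo "$jobs jobs / $seconds s = about $rate jobs/sec"
</code>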
<code>
WARNING: We will use a much slower algorithm with proctrack/
Proctracktype=proctrack/
</code>

After fixing that (I also added a prolog/

^NrJobs^N^hh:
|50,

Debug level is 3; maybe go to 1.
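Lowering the debug level is a one-line change in slurm.conf; a sketch, assuming the numeric levels of this Slurm vintage (3 = info, the default; 1 logs little besides fatal errors, if I read the scale right):

<code>
# slurm.conf (sketch): reduce logging overhead on the controller
SlurmctldDebug=1    # was 3 (info); cuts per-job log chatter
</code>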
\\
**[[cluster: