  * logs to files not mysql for now
  * change some settings in slurm.conf, particularly (see the excerpt after this list)
    * FirstJobId(1), MaxJobId(999999)
    * MaxJobCount=120000
    * MaxTasksPerNode=65533
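As a sketch, those settings would sit in slurm.conf like this (only the parameters listed above, with the values from this test; everything else left at its default):

<code>
# job id range and queue depth used for this throughput test
FirstJobId=1
MaxJobId=999999
MaxJobCount=120000
MaxTasksPerNode=65533
</code>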
Then I created a simple file to test Slurm:
echo " | echo " | ||
- | echo DONE | + | date |
</ | </ | ||
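Only the tail of that file survives in this revision; a minimal sbatch script along the same lines might look like the sketch below (the job name and output pattern are my own illustrative choices, not from the original):

<code>
#!/bin/bash
#SBATCH --job-name=test        # illustrative name
#SBATCH --output=test-%j.out   # each job opens its own output file

echo "hello from job $SLURM_JOB_ID on $(hostname)"
date
</code>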
^v1 ^v2 ^v3 ^v4 ^v5 ^v6 ^v7 ^v8 ^
|3138|3130|3149|3133|3108|3119|3110|3113|

# time to process queues of different sizes (I stage them with the --begin parameter)
# jobs do have to open the output files though, just some crude testing of slurm
# scheduling capabilities

^NrJobs^1,
|mm:ss | 0:33| 6:32| 19:37|

# 20 mins for 25,000 jobs via sbatch
</code>
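The --begin staging mentioned in the comments can be done with a plain submit loop; a sketch (the queue size and script name are placeholders):

<code>
#!/bin/bash
# stage a queue of jobs that all become eligible five minutes from now,
# so timing starts with the queue fully populated
for i in $(seq 1 25000); do
    sbatch --begin=now+300 test.sh
done
</code>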
Slurm is installed on a PE2950 with dual quad cores and 16 GB of memory. It is part of my high priority queue and allocated to Openlava (v2.2).

My slurm compute nodes (v1-v8) are created in a virtual KVM environment on another PE2950 (2.6 GHz, 16 GB RAM), dual quad core, with hyperthreading and virtualization turned on in the BIOS. Comments on how to build that KVM environment are here [[cluster:

From slurm.conf:

<code>

# COMPUTE NODES
NodeName=v[1-8] NodeAddr=192.168.150.[1-8] CPUs=1 RealMemory=1 \
Sockets=1 CoresPerSocket=1 ThreadsPerCore=1 State=UNKNOWN
NodeName=swallowtail NodeAddr=192.168.1.136 CPUs=8 RealMemory=16 \
Sockets=2 CoresPerSocket=4 ThreadsPerCore=1 State=UNKNOWN

PartitionName=test Nodes=v[1-8] Default=YES MaxTime=INFINITE State=UP

</code>
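Once slurmctld and the slurmd daemons are up, the node and partition definitions can be sanity-checked with the standard query tools:

<code>
# list the test partition and the state of v1-v8
sinfo -p test

# full detail for one VM node as slurmd registered it
scontrol show node v1
</code>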


==== IO error ====

At around 32K jobs I ran into IO problems.

<code>

sbatch: error: Batch job submission failed: I/O error writing script/

</code>

Oh, this is an OS error from the ext3 file system: the maximum number of files and directories has been exceeded. ext3 caps a single directory at roughly 32,000 subdirectories, which lines up with the failure at around 32K jobs.

Switching "
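A quick way to confirm you are at that wall is to count the entries under slurmctld's state directory (the path is whatever StateSaveLocation points to in slurm.conf; the one below is a placeholder):

<code>
# ext3 allows at most ~32,000 subdirectories per directory
find /var/spool/slurm.state -maxdepth 1 -type d | wc -l
</code>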


==== High Throughput ====

[[https://

  * MaxJobCount=120000
  * SlurmctldPort=6820-6825
  * SchedulerParameters=max_job_bf=100,
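In slurm.conf form the same settings would read as below (the SchedulerParameters line is truncated above, so only its first option is shown here):

<code>
# keep up to 120,000 jobs in the controller at once
MaxJobCount=120000
# let slurmctld listen on a range of ports to absorb bursts of RPCs
SlurmctldPort=6820-6825
# cap the number of jobs the backfill scheduler considers per cycle
SchedulerParameters=max_job_bf=100
</code>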

^NrJobs^N^hh:
|50,
|75,
|100,

The N=16 is the 8 one-core VMs plus swallowtail itself (8 cores), so we're scheduling bound. But 50,000 jobs per hour is grand! That certainly exceeds anything we would throw at it. And that's on old hardware.


<code>
WARNING: We will use a much slower algorithm with proctrack/
Proctracktype=proctrack/
</code>
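The warning concerns the process-tracking plugin. The truncated lines do not show which plugin ended up in slurm.conf, but proctrack/linuxproc is the usual non-pgid choice on plain Linux, so the change presumably looked like:

<code>
# assumption: switch away from the slow combination the warning complains about
ProctrackType=proctrack/linuxproc
</code>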

After fixing that. Hmmm.

^NrJobs^N^hh:
| 1,
|10,
|15,
|20,


Debug Level is 3 above. Falling back to proctrack/

^NrJobs^N^hh:
| 1,
|10,
|25,
|50,
|75,
|100,

Next I will add a prolog/
/
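The sentence above is cut off in this revision; as a sketch, a do-nothing prolog/epilog pair (the file path is illustrative) would isolate the overhead the hooks themselves add:

<code>
#!/bin/bash
# /usr/local/slurm/prolog.sh -- the matching epilog.sh is identical;
# the stub does no work, so any throughput change is pure hook overhead
exit 0
</code>

Such scripts are wired in with the Prolog= and Epilog= parameters in slurm.conf (SrunProlog/SrunEpilog are the per-srun variants mentioned in an earlier revision of this page).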

^NrJobs^N^hh:
| 1,
|10,
|25,
|50,
|75,
|100,
\\
**[[cluster: