cluster:134 [2014/08/14 18:33] hmeij
  * copied the munge.key from head node to all compute nodes
  * slurm installed from source code with
    * --prefix=/
  * launched the configurator web page and created a simple setup
  * created the openssl key and cert (see slurm web pages)
  * SRunEpilog/
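The build steps above could be sketched as follows. This is a hypothetical dry run (the commands are printed, not executed): the tarball version and install prefix are assumptions, since the actual --prefix value above is truncated.

<code bash>
# hypothetical sketch of a slurm source build; the version number and
# prefix are assumptions, printed as a dry run rather than executed
cat <<'EOF'
tar xjf slurm-14.03.tar.bz2
cd slurm-14.03
./configure --prefix=/usr/local/slurm --sysconfdir=/usr/local/slurm/etc
make
make install
EOF
</code>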
Then I created a simple file to test Slurm.

<code>
#!/bin/bash

#SBATCH --time=1:
#SBATCH --job-name="
#SBATCH --output="
#SBATCH --begin=10:

echo "
echo DONE
</code>
<code>
# I then submit it like so
for i in `seq 1 8`; do cat run | sed "

# and observe
sinfo
PARTITION AVAIL TIMELIMIT
test* up
test* up
test* up
# and when I raised the total number of jobs to 25,000 the distribution is
^v1 ^v2 ^v3 ^v4 ^v5 ^v6 ^v7 ^v8 ^
|3138|3130|3149|3133|3108|3119|3110|3113|
+ | |||
+ | # time to process queues of different sizes (I stage them with the --begin parameter) | ||
+ | # jobs do have to open the output files though, just some crude testing of slurm | ||
+ | # scheduling prowness | ||
+ | |||
+ | ^NrJobs^1, | ||
+ | |mm:ss | 0:33| 6:32| 19:37| | ||
+ | |||
+ | # 20 mins for 25,000 jobs via sbatch | ||
+ | |||
+ | </ | ||
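The submission loop above is truncated, so the following is a hypothetical reconstruction. The sed expression and file names are assumptions, and sbatch is echoed rather than executed so the loop can be inspected as a dry run; drop the echo to submit for real.

<code bash>
# create a stand-in "run" script, then clone and (dry-run) submit 8 copies,
# renaming the job per copy so each lands on its own output
printf '#!/bin/bash\n#SBATCH --job-name="test"\necho DONE\n' > run
for i in `seq 1 8`; do
  cat run | sed "s/test/test$i/g" > run.$i
  echo sbatch run.$i    # remove "echo" to actually submit
done
</code>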
+ | |||
+ | Slurm is installed on a PE2950 with dual quad cores and 16 GB of memory. It is part of my high priority queue and allocated to Openlava (v2.2). | ||
+ | |||
+ | My slurm compute nodes (v1-v8) are created in a virtual KVM environment on another PE2950 (2.6 Ghz, 16 GB ram) dual quad core with hyperthreading and virtualization turned on in the BIOS. Comments on how to build that KVM environment are here [[cluster: | ||
+ | |||
+ | From slurm.conf | ||
+ | |||
+ | < | ||
+ | |||
# COMPUTE NODES
NodeName=v[1-8] NodeAddr=192.168.150.[1-8] CPUs=1 RealMemory=1 \
Sockets=1 CoresPerSocket=1 ThreadsPerCore=1 State=UNKNOWN
NodeName=swallowtail NodeAddr=192.168.1.136 CPUs=8 RealMemory=16 \
Sockets=2 CoresPerSocket=4 ThreadsPerCore=1 State=UNKNOWN

PartitionName=test Nodes=v[1-8] Default=YES MaxTime=INFINITE State=UP

</code>
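One sanity check on these lines: CPUs should equal Sockets × CoresPerSocket × ThreadsPerCore. For swallowtail that is 2 × 4 × 1 = 8, matching CPUs=8; each v[1-8] node is 1 × 1 × 1 = 1 CPU.

<code bash>
# verify the swallowtail topology arithmetic from slurm.conf above
sockets=2; cores=4; threads=1
cpus=$((sockets * cores * threads))
echo "swallowtail CPUs = $cpus"
</code>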
+ | |||
+ | |||
+ | ==== 25K+ ==== | ||
+ | |||
At around 32K jobs I ran into I/O problems.

<code>

sbatch: error: Batch job submission failed: I/O error writing script/

</code>

[[https://

  * MaxJobCount=100000
  * SlurmctldPort=6820-6825
  * SchedulerParameters=max_job_bf=100,
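In slurm.conf those settings would look like the fragment below; the SchedulerParameters line above is truncated, so only its first value is reproduced here. After editing, a ''scontrol reconfigure'' (or a slurmctld restart) makes the controller pick the new limits up.

<code>
MaxJobCount=100000
SlurmctldPort=6820-6825
# remaining SchedulerParameters values are truncated in the notes above
SchedulerParameters=max_job_bf=100
</code>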
+ | |||
\\
**[[cluster: