The Simple Linux Utility for Resource Management (SLURM) is an open-source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters. The architecture is described at https://computing.llnl.gov/linux/slurm/quickstart.html.
I then created a simple batch script to test Slurm:
  #!/bin/bash
  #SBATCH --time=1:30:10
  #SBATCH --job-name="NUMBER"
  #SBATCH --output="tmp/outNUMBER"
  #SBATCH --begin=10:35:00

  echo "$SLURMD_NODENAME JOB_PID=$SLURM_JOB_ID"
  echo DONE
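Before submitting many copies, a single submission can be sanity-checked along these lines (a sketch; the script is saved as "run" to match the loop below, and the tmp/ output directory must already exist):

  sbatch run            # submit one copy (NUMBER left as a literal placeholder here)
  squeue -u $USER       # job sits in PD (pending) until the --begin time, then runs
  cat tmp/outNUMBER     # output file named by the #SBATCH --output directive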
I then submit it like so:

  for i in `seq 1 8`; do cat run | sed "s/NUMBER/$i/g" | sbatch; done

and observe:

  sinfo
  PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
  test*        up   infinite      2   comp v[2-3]
  test*        up   infinite      5  alloc v[1,4-5,7-8]
  test*        up   infinite      1   idle v6

When I raised the total number of jobs to 25,000, the distribution across the nodes was:

^ v1   ^ v2   ^ v3   ^ v4   ^ v5   ^ v6   ^ v7   ^ v8   ^
| 3138 | 3130 | 3149 | 3133 | 3108 | 3119 | 3110 | 3113 |

Time to process queues of different sizes (I stage them with the --begin parameter; the jobs do have to open their output files, so this is just some crude testing of Slurm's scheduling prowess):

^ NrJobs ^ 1,000 ^ 10,000 ^ 25,000 ^
| mm:ss  |  0:33 |   6:32 |  19:37 |

So roughly 20 minutes to work through 25,000 jobs via sbatch.
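The exact timing method isn't shown above; one rough way to drive and time such a staged run from the shell would be something like the following sketch (the 5-second polling interval and the drain-based stop condition are my assumptions, not the original measurement method):

  # stage 1,000 copies, all eligible at the same --begin time baked into the script
  for i in `seq 1 1000`; do cat run | sed "s/NUMBER/$i/g" | sbatch; done

  # poll until the test partition drains, then report the elapsed wall time
  start=$(date +%s)
  while [ "$(squeue -h -p test | wc -l)" -gt 0 ]; do sleep 5; done
  echo "queue drained after $(( $(date +%s) - start )) seconds"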
Slurm is installed on a PE2950 with dual quad-core CPUs and 16 GB of memory. That host is part of my high-priority queue and is allocated to OpenLava (v2.2).
My Slurm compute nodes (v1-v8) are created in a virtual KVM environment on another PE2950 (2.6 GHz, 16 GB RAM, dual quad-core) with hyperthreading and virtualization turned on in the BIOS. Comments on how to build that KVM environment are here: LXC Linux Containers. The nodes are single core with 1 GB of RAM each.
From slurm.conf
  # COMPUTE NODES
  NodeName=v[1-8] NodeAddr=192.168.150.[1-8] CPUs=1 RealMemory=1 \
     Sockets=1 CoresPerSocket=1 ThreadsPerCore=1 State=UNKNOWN
  NodeName=swallowtail NodeAddr=192.168.1.136 CPUs=8 RealMemory=16 \
     Sockets=2 CoresPerSocket=4 ThreadsPerCore=1 State=UNKNOWN
  PartitionName=test Nodes=v[1-8] Default=YES MaxTime=INFINITE State=UP
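After editing slurm.conf, the node and partition definitions can be checked with the standard Slurm client tools, for example (a sketch, assuming slurmctld and the slurmd daemons on v1-v8 are already running):

  scontrol reconfigure           # ask slurmctld to re-read slurm.conf
  sinfo -N -l                    # list every node with its state, CPUs and memory
  scontrol show node v1          # inspect one virtual node in detail
  scontrol show partition test   # confirm the default test partition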