The Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters. The architecture is described at https://computing.llnl.gov/linux/slurm/quickstart.html.
Then I created a simple job file to test Slurm:
```
#!/bin/bash
#SBATCH --time=1:30:10
#SBATCH --job-name="NUMBER"
#SBATCH --output="tmp/outNUMBER"
#SBATCH --begin=10:35:00

echo "$SLURMD_NODENAME JOB_PID=$SLURM_JOB_ID"
date
```
I then submit it like so:

```
for i in `seq 1 8`; do cat run | sed "s/NUMBER/$i/g" | sbatch; done
```

and observe:

```
sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
test*        up   infinite      2   comp v[2-3]
test*        up   infinite      5  alloc v[1,4-5,7-8]
test*        up   infinite      1   idle v6
```

When I raised the total number of jobs to 25,000, the distribution across nodes was:

v1 | v2 | v3 | v4 | v5 | v6 | v7 | v8 |
---|---|---|---|---|---|---|---|
3138 | 3130 | 3149 | 3133 | 3108 | 3119 | 3110 | 3113 |

Time to process queues of different sizes (I stage them with the --begin parameter; the jobs do have to open their output files, but otherwise this is just some crude testing of Slurm's scheduling capabilities):

NrJobs | 1,000 | 10,000 | 25,000 |
---|---|---|---|
mm:ss | 0:33 | 6:32 | 19:37 |

So about 20 minutes for 25,000 jobs via sbatch.
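The timing itself was crude. A rough sketch of how such a drain test can be driven, assuming the template file is named run and measuring from submission rather than from the --begin time (so not the exact method used above):

```
# submit N copies of the template, then poll until the test partition drains
N=25000
for i in $(seq 1 $N); do
  sed "s/NUMBER/$i/g" run | sbatch >/dev/null
done
start=$(date +%s)
while [ "$(squeue -h -p test | wc -l)" -gt 0 ]; do
  sleep 10
done
echo "drained $N jobs in $(( $(date +%s) - start )) seconds"
```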
Slurm itself is installed on a PE2950 with dual quad-core CPUs and 16 GB of memory; that host is also part of my high-priority queue allocated to Openlava (v2.2).
My Slurm compute nodes (v1-v8) are created in a virtual KVM environment on another PE2950 (2.6 GHz, 16 GB RAM), also dual quad-core, with hyperthreading and virtualization turned on in the BIOS. Comments on how to build that KVM environment are on the LXC Linux Containers page. The virtual nodes are single core with 1 GB of RAM each.
From slurm.conf
```
# COMPUTE NODES
NodeName=v[1-8] NodeAddr=192.168.150.[1-8] CPUs=1 RealMemory=1 \
  Sockets=1 CoresPerSocket=1 ThreadsPerCore=1 State=UNKNOWN
NodeName=swallowtail NodeAddr=192.168.1.136 CPUs=8 RealMemory=16 \
  Sockets=2 CoresPerSocket=4 ThreadsPerCore=1 State=UNKNOWN

PartitionName=test Nodes=v[1-8] Default=YES MaxTime=INFINITE State=UP
```
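A quick way to sanity-check that definition once slurmctld and the slurmd daemons are up (standard commands, nothing site-specific):

```
sinfo -p test                 # partition and node state summary
scontrol show node v1         # per-node detail (CPUs, RealMemory, State)
scontrol show partition test  # partition limits and membership
```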
At around 32K jobs I ran into I/O problems:

```
sbatch: error: Batch job submission failed: I/O error writing script/environment to file
```

This turns out to be an OS-level error from the ext3 file system: the maximum number of files and directories was exceeded. ext3 caps a single directory at roughly 32,000 subdirectories, which lines up with failing at around 32K jobs, since slurmctld keeps per-job script and environment files under StateSaveLocation.
So I switched "StateSaveLocation=/sanscratch_sharptail/tmp/slurmctld" to an XFS file system, which does not have that limit.
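A quick check before pointing slurmctld at the new location (the path is this site's; note that changing StateSaveLocation in slurm.conf typically requires a slurmctld restart rather than just "scontrol reconfigure"):

```
# confirm the target directory now sits on XFS
df -T /sanscratch_sharptail/tmp/slurmctld
```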
Next I followed the high-throughput guide at https://computing.llnl.gov/linux/slurm/high_throughput.html. Otherwise vanilla out of the box, here are the results with those changes:
NrJobs | N | hh:mm | N | hh:mm |
---|---|---|---|---|
50,000 | 8 | 00:56 | | |
75,000 | 8 | 01:53 | 16 | 01:57 |
100,000 | 8 | 03:20 | | |
The N=16 run uses the 8 single-core VMs plus swallowtail itself (8 cores), and the times barely change (01:53 vs 01:57 for 75,000 jobs), so we are scheduling bound rather than node bound. But better than 50,000 jobs per hour is grand! That certainly exceeds what we would throw at it, and this is on old hardware.
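For reference, the kind of slurm.conf knobs that guide points at looks roughly like this; these are illustrative values, not the exact settings used for the runs above:

```
MaxJobCount=200000          # default is far lower; needed to hold 100,000+ queued jobs
MinJobAge=10                # purge completed job records quickly
MessageTimeout=30           # extra slack for bursts of RPC traffic
SchedulerParameters=defer   # batch scheduling decisions instead of scheduling at each submission
```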
Then this warning showed up in the logs:

```
WARNING: We will use a much slower algorithm with proctrack/pgid, use
Proctracktype=proctrack/linuxproc or some other proctrack when using jobacct_gather/linux
```

After fixing that (switching to proctrack/linuxproc), throughput actually dropped. Hmmm.
NrJobs | N | hh:mm |
---|---|---|
1,000 | 8 | 00:02 |
10,000 | 8 | 00:22 |
15,000 | 8 | 00:31 |
20,000 | 8 | 00:41 |
The debug level was 3 for the runs above. So, falling back to proctrack/pgid while setting debug to level 1, and also setting SchedulerType=sched/builtin (removing backfill). Now this is throughput all right, with just 8 KVM nodes handling the jobs:
NrJobs | N | hh:mm:ss |
---|---|---|
1,000 | 8 | 00:00:34 |
10,000 | 8 | 00:05:57 |
25,000 | 8 | 00:15:07 |
50,000 | 8 | 00:29:55 |
75,000 | 8 | 00:44:15 |
100,000 | 8 | 00:58:16 |
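The settings described above map onto slurm.conf roughly like this; a sketch using the standard keyword names and assuming both the controller and slurmd debug levels are meant:

```
ProctrackType=proctrack/pgid   # back to pgid tracking
SlurmctldDebug=1               # was 3 for the earlier runs
SlurmdDebug=1
SchedulerType=sched/builtin    # plain FIFO, no backfill pass
```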
Next I add a prolog/epilog script to my submit job script, which creates /localscratch/$SLURM_JOB_ID, echoes the date into a file foo, cats foo to standard out, and finishes by removing the scratch dir. These prolog/epilog actions should really be done by slurmd, but so far that errors out for me. It does slow things down a bit. Same conditions as above.
```
#!/bin/bash
/share/apps/lsf/slurm_prolog.pl

#SBATCH --job-name="NUMBER"
#SBATCH --output="tmp/outNUMBER"
#SBATCH --begin=10:00:00

# unique job scratch dir
export MYLOCALSCRATCH=/localscratch/$SLURM_JOB_ID
cd $MYLOCALSCRATCH
pwd

echo "$SLURMD_NODENAME JOB_PID=$SLURM_JOB_ID" >> foo
date >> foo
cat foo

/share/apps/lsf/slurm_epilog.pl
```
NrJobs | N | hh:mm:ss |
---|---|---|
1,000 | 8 | 00:05:00 |
5,000 | 8 | 00:23:43 |
10,000 | 8 | 00:47:12 |
25,000 | 8 | 00:58:01 |
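For comparison, a minimal sketch of doing the same scratch-dir housekeeping from slurmd itself via Prolog/Epilog in slurm.conf (the approach that is still erroring for me above). The script locations are hypothetical, and SLURM_JOB_UID is assumed to be set in the prolog/epilog environment:

```
# slurm.conf (hypothetical script locations)
#   Prolog=/etc/slurm/prolog.sh
#   Epilog=/etc/slurm/epilog.sh

# /etc/slurm/prolog.sh -- runs as root on the node before the job starts
#!/bin/bash
mkdir -p /localscratch/$SLURM_JOB_ID
chown $SLURM_JOB_UID /localscratch/$SLURM_JOB_ID

# /etc/slurm/epilog.sh -- runs on the node after the job ends
#!/bin/bash
rm -rf /localscratch/$SLURM_JOB_ID
```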
With sbatch there is no need for a wrapper script; Slurm figures it all out:
```
#!/bin/bash
#/share/apps/lsf/slurm_prolog.pl

#SBATCH --job-name="MPI"
#SBATCH --ntasks=8
#SBATCH --begin=now

# unique job scratch dir
#export MYLOCALSCRATCH=/localscratch/$SLURM_JOB_ID
#cd $MYLOCALSCRATCH

echo "$SLURMD_NODENAME JOB_PID=$SLURM_JOB_ID"

rm -rf err out logfile mdout restrt mdinfo

export PATH=/share/apps/openmpi/1.2+intel-9/bin:$PATH
export LD_LIBRARY_PATH=/share/apps/openmpi/1.2+intel-9/lib:$LD_LIBRARY_PATH
which mpirun

mpirun /share/apps/amber/9+openmpi-1.2+intel-9/exe/pmemd -O \
  -i inp/mini.in -p 1g6r.cd.parm -c 1g6r.cd.randions.crd.1 \
  -ref 1g6r.cd.randions.crd.1

#/share/apps/lsf/slurm_epilog.pl
```
When submitted we see
```
JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
902246      test      MPI    hmeij  R       0:05      8 v[1-8]
```
Dumping the environment, we observe some key parameters:
```
SLURM_NODELIST=v[1-8]
SLURM_JOB_NAME=MPI
SLURMD_NODENAME=v1
SLURM_NNODES=8
SLURM_NTASKS=8
SLURM_TASKS_PER_NODE=1(x8)
SLURM_NPROCS=8
SLURM_CPUS_ON_NODE=1
SLURM_JOB_NODELIST=v[1-8]
SLURM_JOB_CPUS_PER_NODE=1(x8)
SLURM_JOB_NUM_NODES=8
```
And in the slurmjob.log file
```
JobId=902245 UserId=hmeij(8216) GroupId=its(623) \
  Name=MPI JobState=COMPLETED Partition=test TimeLimit=UNLIMITED \
  StartTime=2014-08-21T15:55:06 EndTime=2014-08-21T15:57:04 \
  NodeList=v[1-8] NodeCnt=8 ProcCnt=8 WorkDir=/home/hmeij/1g6r/cd
```
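That record comes from Slurm's job completion plugin; a minimal configuration sketch, where the log path is a guess for this site:

```
# slurm.conf job completion logging (path is hypothetical)
JobCompType=jobcomp/filetxt
JobCompLoc=/var/log/slurm/slurmjob.log
```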