
Slurm

The Simple Linux Utility for Resource Management (SLURM) is an open-source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters. The architecture is described at https://computing.llnl.gov/linux/slurm/quickstart.html.

  • Installation
    • begins with installing Munge
      • fairly straightforward: build RPMs from the tarball
      • installed on the head node and all compute nodes
      • copied munge.key from the head node to all compute nodes
    • Slurm installed from source (build sketch after this list) with
      • --prefix=/opt/slurm-14 --sysconfdir=/opt/slurm-14/etc
      • launched the configurator web page and generated a simple setup
        • created the OpenSSL key and certificate (see the Slurm web pages)
        • logging to files, not MySQL, for now
        • changed some settings in slurm.conf, particularly
          • FirstJobId, MaxJobId
          • MaxJobCount=100000
          • MaxTasksPerNode=65533
          • SrunEpilog/SrunProlog (create and remove work directories in /scratch/$SLURM_JOB_ID)
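
For reference, a rough sketch of those build steps (the tarball version numbers below are examples, not necessarily the ones used here):

# Munge: build RPMs straight from the tarball, install them on the head node
# and all compute nodes, then copy the same key everywhere (v1-v8 here)
rpmbuild -tb munge-0.5.11.tar.bz2
for n in v{1..8}; do scp /etc/munge/munge.key $n:/etc/munge/munge.key; done

# Slurm: build from source into /opt/slurm-14
tar xjf slurm-14.03.7.tar.bz2 && cd slurm-14.03.7
./configure --prefix=/opt/slurm-14 --sysconfdir=/opt/slurm-14/etc
make && make install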

Then I created a simple job script, saved as a file named run, to test Slurm

#!/bin/bash

#SBATCH --time=1:30:10
#SBATCH --job-name="NUMBER"
#SBATCH --output="tmp/outNUMBER"
#SBATCH --begin=10:35:00

echo "$SLURMD_NODENAME JOB_PID=$SLURM_JOB_ID"
date
# I then submit it like so
for i in `seq 1 8`; do cat run | sed "s/NUMBER/$i/g" | sbatch; done

# and observe
sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
test*        up   infinite      2   comp v[2-3]
test*        up   infinite      5  alloc v[1,4-5,7-8]
test*        up   infinite      1   idle v6

# and when I raised the total number of jobs to 25,000 the distribution is
^v1  ^v2  ^v3  ^v4  ^v5  ^v6  ^v7  ^v8  ^
|3138|3130|3149|3133|3108|3119|3110|3113|

# time to process queues of different sizes (I stage them with the --begin parameter);
# the jobs do have to open their output files, so this is just some crude testing of
# Slurm's scheduling prowess

^NrJobs^1,000^10,000^25,000^
|mm:ss | 0:33|  6:32| 19:37|

# 20 mins for 25,000 jobs via sbatch

Slurm is installed on a PE2950 with dual quad-core CPUs and 16 GB of memory. It is part of my high-priority queue and allocated to Openlava (v2.2).

My Slurm compute nodes (v1-v8) are created in a virtual KVM environment on another PE2950 (2.6 GHz, 16 GB RAM), dual quad-core with hyperthreading and virtualization turned on in the BIOS. Comments on how to build that KVM environment are here: LXC Linux Containers. The nodes are single-core with 1 GB of RAM each.

From slurm.conf

# COMPUTE NODES
NodeName=v[1-8] NodeAddr=192.168.150.[1-8] CPUs=1 RealMemory=1 \
  Sockets=1 CoresPerSocket=1 ThreadsPerCore=1 State=UNKNOWN
NodeName=swallowtail NodeAddr=192.168.1.136 CPUs=8 RealMemory=16 \
  Sockets=2 CoresPerSocket=4 ThreadsPerCore=1 State=UNKNOWN
  
PartitionName=test Nodes=v[1-8] Default=YES MaxTime=INFINITE State=UP
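
A quick sanity check that these definitions registered as expected (standard Slurm commands, nothing specific to this setup):

sinfo -N -l                   # one line per node: state, CPUs, memory
scontrol show node v1         # full record for a single compute node
scontrol show partition test  # the test partition definition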

I/O error

At around 32,000 jobs I ran into I/O problems.

sbatch: error: Batch job submission failed: I/O error writing script/environment to file

This turned out to be an OS error from the ext3 file system: the limit on the number of files and subdirectories per directory had been exceeded.

Switching StateSaveLocation=/sanscratch_sharptail/tmp/slurmctld to an XFS file system.
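
The change itself is just the StateSaveLocation line in slurm.conf pointing at the XFS-backed path, plus a controller restart (how slurmctld gets restarted here is an assumption; adjust to the local init setup):

# in slurm.conf
StateSaveLocation=/sanscratch_sharptail/tmp/slurmctld

# then restart the controller so it starts using the new directory
pkill slurmctld
/opt/slurm-14/sbin/slurmctld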

High Throughput

https://computing.llnl.gov/linux/slurm/high_throughput.html

  • MaxJobCount=120000
  • SlurmctldPort=6820-6825
  • SchedulerParameters=max_job_bf=100,interval=30 (NOTE: later unset these; the page states the default values are fine); see the slurm.conf sketch below
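
In slurm.conf these are plain key=value lines, and scontrol can confirm they took effect after a restart (a sketch; the SchedulerParameters line was later removed again per the note above):

MaxJobCount=120000
SlurmctldPort=6820-6825
SchedulerParameters=max_job_bf=100,interval=30

# verify after restarting/reconfiguring slurmctld
scontrol show config | grep -Ei 'MaxJobCount|SlurmctldPort|SchedulerParameters'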
^NrJobs  ^N ^hh:mm ^N  ^hh:mm ^
|50,000  |8 |00:56 |   |      |
|75,000  |8 |01:53 |16 |01:57 |
|100,000 |8 |03:20 |   |      |

N=16 is the 8 single-core VMs plus swallowtail itself (8 cores), so we're scheduling bound. But 50,000 jobs per hour is grand! It would certainly exceed what we throw at it, and that's on old hardware.

WARNING: We will use a much slower algorithm with proctrack/pgid, use 
Proctracktype=proctrack/linuxproc or some other proctrack when using jobacct_gather/linux
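
The fix is what the warning suggests: switch the process-tracking plugin in slurm.conf and restart the daemons.

# in slurm.conf
ProctrackType=proctrack/linuxproc
JobAcctGatherType=jobacct_gather/linux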

After fixing that I reran the test. I also added prolog/epilog steps to my submit script which create /localscratch/$SLURM_JOB_ID, echo the date into a file foo, then cat foo to standard out (a sketch follows below). These prolog/epilog actions really need to be done by slurmd itself, but so far that errors for me.
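
A minimal sketch of those per-job prolog/epilog steps inside the submit script (slurmd-driven Prolog/Epilog scripts would do the same work outside the job once they cooperate):

# "prolog": per-job scratch directory
WORKDIR=/localscratch/$SLURM_JOB_ID
mkdir -p "$WORKDIR"
cd "$WORKDIR"

# the test workload from this page
date > foo
cat foo

# "epilog": remove the per-job directory again
cd /
rm -rf "$WORKDIR"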

^NrJobs ^N ^hh:mm ^N ^hh:mm ^
|50,000 |8 |00:?? |  |      |

The debug level is 3. Maybe go down to 1.

