      * copied the munge.key from head node to all compute nodes
    * slurm installed from source code with
      * --prefix=/opt/slurm-14  --sysconfdir=/opt/slurm-14/etc
      * launched the configurator web page and set up a simple setup
        * created the openssl key and cert (see slurm web pages)
        * logs to files, not mysql, for now
        * changed some settings in slurm.conf, particularly (see the sketch after this list)
          * FirstJobId(1), MaxJobId(999999)
          * MaxJobCount=120000
          * MaxTasksPerNode=65533
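
Roughly, the build options and the slurm.conf settings above amount to the following. This is a sketch based on the values listed above, not a verbatim copy of my build or config.

<code>

# build slurm from source (sketch)
./configure --prefix=/opt/slurm-14 --sysconfdir=/opt/slurm-14/etc
make && make install

# slurm.conf settings mentioned above (sketch)
FirstJobId=1
MaxJobId=999999
MaxJobCount=120000
MaxTasksPerNode=65533

</code>
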
Then I created a simple file to test Slurm

<code>

echo "$SLURMD_NODENAME JOB_PID=$SLURM_JOB_ID"
date

</code>
  
<code>

# I then submit it like so
for i in `seq 1 8`; do cat run | sed "s/NUMBER/$i/g" | sbatch; done

# and observe
sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
test*        up   infinite      2   comp v[2-3]
test*        up   infinite      5  alloc v[1,4-5,7-8]
test*        up   infinite      1   idle v6

# and when I raised the total number of jobs to 25,000 the distribution is
^v1  ^v2  ^v3  ^v4  ^v5  ^v6  ^v7  ^v8 ^
|3138|3130|3149|3133|3108|3119|3110|3113|

# time to process queues of different sizes (I stage them with the --begin parameter)
# jobs do have to open the output files though, just some crude testing of slurm
# scheduling capabilities

^NrJobs^1,000^10,000^25,000^
|mm:ss | 0:33|  6:32| 19:37|

# 20 mins for 25,000 jobs via sbatch

</code>
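
For completeness, here is a hypothetical version of the run file. The #SBATCH header lines and the output path are my assumptions; NUMBER is simply the token that the sed in the submit loop replaces with the loop index, and the last two lines are the ones shown earlier.

<code>

#!/bin/bash
# hypothetical run script; NUMBER is replaced by the sed in the submit loop
#SBATCH --job-name=test_NUMBER
#SBATCH --output=test_NUMBER.out

echo "$SLURMD_NODENAME JOB_PID=$SLURM_JOB_ID"
date

</code>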

Slurm is installed on a PE2950 with dual quad cores and 16 GB of memory. It is part of my high priority queue and allocated to Openlava (v2.2).

My slurm compute nodes (v1-v8) are created in a virtual KVM environment on another PE2950 (2.6 GHz, 16 GB RAM, dual quad core) with hyperthreading and virtualization turned on in the BIOS. Comments on how to build that KVM environment are here: [[cluster:132|LXC Linux Containers]]. The virtual nodes are single core with 1 GB of RAM.

From slurm.conf:

<code>

# COMPUTE NODES
NodeName=v[1-8] NodeAddr=192.168.150.[1-8] CPUs=1 RealMemory=1 \
  Sockets=1 CoresPerSocket=1 ThreadsPerCore=1 State=UNKNOWN
NodeName=swallowtail NodeAddr=192.168.1.136 CPUs=8 RealMemory=16 \
  Sockets=2 CoresPerSocket=4 ThreadsPerCore=1 State=UNKNOWN

PartitionName=test Nodes=v[1-8] Default=YES MaxTime=INFINITE State=UP

</code>
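
With slurmctld and the slurmd daemons up, the node and partition definitions can be sanity-checked with standard slurm commands (illustrative; output omitted):

<code>

# list all nodes with their configured resources
sinfo -N -l

# show the full record for one of the virtual nodes
scontrol show node v1

</code>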


==== IO error ====

At around 32K jobs I ran into IO problems.

<code>

sbatch: error: Batch job submission failed: I/O error writing script/environment to file

</code>

Oh, this is an OS-level error from the ext3 file system: most likely its limit of roughly 32,000 subdirectories per directory was exceeded.

Switching "StateSaveLocation=/sanscratch_sharptail/tmp/slurmctld" to an XFS file system.
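
A quick way to confirm which file system backs the state save directory (illustrative commands; the path is the one from my slurm.conf):

<code>

# report the file system type backing StateSaveLocation
df -T /sanscratch_sharptail/tmp/slurmctld

# the slurm.conf entry, now pointing at an XFS-backed directory
StateSaveLocation=/sanscratch_sharptail/tmp/slurmctld

</code>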


==== High Throughput ====

[[https://computing.llnl.gov/linux/slurm/high_throughput.html]]

Settings I changed per that page (sketched after this list):

  * MaxJobCount=120000
  * SlurmctldPort=6820-6825
  * SchedulerParameters=max_job_bf=100,interval=30  (NOTE: I unset these again; the page states the default values are fine)
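
In slurm.conf those settings look roughly like this (a sketch; the SchedulerParameters line is the one I later removed again):

<code>

# high throughput tuning (sketch)
MaxJobCount=120000
SlurmctldPort=6820-6825
# later unset; per the high throughput page the defaults are fine
#SchedulerParameters=max_job_bf=100,interval=30

</code>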

^NrJobs^N^hh:mm^N^hh:mm^
|50,000|8|00:56|  |  |
|75,000|8|01:53|16|01:57|
|100,000|8|03:20|  |  |

The N=16 run uses the 8 single-core VMs plus swallowtail itself (8 more cores), yet the 75,000-job time barely changes (01:53 vs 01:57), so we are scheduling bound. But 50,000 jobs per hour is grand! That certainly exceeds anything we would throw at it, and this is on old hardware.


<code>
WARNING: We will use a much slower algorithm with proctrack/pgid, use
Proctracktype=proctrack/linuxproc or some other proctrack when using jobacct_gather/linux
</code>
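
What the warning asks for is, roughly, this pair of slurm.conf settings (a sketch of the change, not my full config):

<code>

# pair process tracking with the linux job accounting plugin
ProctrackType=proctrack/linuxproc
JobAcctGatherType=jobacct_gather/linux

</code>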

After fixing that... hmmm, throughput actually dropped (compare 20,000 jobs in 41 minutes below with 25,000 in under 20 minutes earlier):

^NrJobs^N^hh:mm^
| 1,000|8|00:02|
|10,000|8|00:22|
|15,000|8|00:31|
|20,000|8|00:41|


Debug level is at 3. Falling back to proctrack/pgid and setting debug to level 1.
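
The corresponding slurm.conf changes, roughly (a sketch; that the level in question is SlurmctldDebug is my assumption):

<code>

# revert to pgid process tracking and quiet the logging
ProctrackType=proctrack/pgid
SlurmctldDebug=1

</code>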

(I also added prolog/epilog steps to my job submit script which create /localscratch/$SLURM_JOB_ID, echo the date into a file foo, cat foo to standard out, and finish by removing the scratch directory.) These prolog/epilog actions really need to be done by slurmd, but so far that errors out for me.
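
Inside the submit script, that idea looks roughly like this (a sketch of the steps described above, not the slurmd prolog/epilog version that is still failing):

<code>

#!/bin/bash
# sketch: per-job scratch handling done inside the batch script itself
# MYSCRATCH is just an illustrative variable name
MYSCRATCH=/localscratch/$SLURM_JOB_ID

mkdir -p $MYSCRATCH
date > $MYSCRATCH/foo
cat $MYSCRATCH/foo
rm -rf $MYSCRATCH

</code>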
  
  
\\
**[[cluster:0|Back]]**