There is a techie page at this location **[[cluster:207|Slurm Techie Page]]** for those of you who are interested in the setup.
  
__This page is intended for users__ to get started with the Slurm scheduler. ''greentail52'' will be the slurm scheduler test "controller" with several cpu+gpu compute nodes configured. Any jobs submitted should be simple, quick-running jobs, like "sleep" or "hello world" jobs. The configured compute nodes are still managed by Openlava.
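For example, a minimal test job might look like the sketch below (the ''test'' partition and a 1024 MB memory request are used elsewhere on this page; adjust to taste):

<code>
#!/bin/bash
# hello.sh -- a minimal "hello world" test job (sketch)
#SBATCH --job-name=hello
#SBATCH --partition=test
#SBATCH -n 1
#SBATCH --mem=1024

echo "hello world from $(hostname)"
sleep 30
</code>

Submit it with ''sbatch hello.sh'' and check on it with ''squeue''.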
  
 ** Default Environment **
$ scontrol show node n78
NodeName=n78 Arch=x86_64 CoresPerSocket=8
   CPUAlloc=0 CPUTot=32 CPULoad=0.03
   AvailableFeatures=hasLocalscratch
   ActiveFeatures=hasLocalscratch
   Gres=gpu:geforce_gtx_1080_ti:4(S:0-1)
   NodeAddr=n78 NodeHostName=n78 Version=21.08.1
   OS=Linux 3.10.0-693.2.2.el7.x86_64 #1 SMP Tue Sep 12 22:26:13 UTC 2017
   RealMemory=128660 AllocMem=0 FreeMem=72987 Sockets=2 Boards=1
   MemSpecLimit=1024
   State=IDLE ThreadsPerCore=2 TmpDisk=0 Weight=1 Owner=N/A MCS_label=N/A
   Partitions=test,amber128
   BootTime=2021-03-28T20:35:53 SlurmdStartTime=2021-10-14T13:56:00
   LastBusyTime=2021-10-14T13:56:01
   CfgTRES=cpu=32,mem=128660M,billing=32
   AllocTRES=
   CapWatts=n/a
   CurrentWatts=0 AveWatts=0
   ExtSensorsJoules=n/s ExtSensorsWatts=0 ExtSensorsTemp=n/s
  
 # sorta like bhist -l
Same on the cpu only compute nodes. Features could be created for memory footprints (for example "hasMem64", "hasMem128", "hasMem192", "hasMem256", "hasMem32"). Then all the cpu only nodes can go into one queue and we can stick all cpu+gpu nodes in another queue. Or all of them in a single queue. We'll see, just testing.
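As a sketch, such features would then be targeted with ''--constraint'' (only ''hasLocalscratch'' exists so far; the hasMem* names are hypothetical until configured):

<code>
# request any node advertising the hasLocalscratch feature
$ sbatch --constraint=hasLocalscratch --mem=1024 sleep

# features can be combined, e.g.
$ sbatch --constraint="hasLocalscratch&hasMem128" --mem=1024 sleep
</code>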
  
On the cpu resource requests: you may request 1 or more nodes, 1 or more sockets per node, 1 or more cores (physical) per socket, or 1 or more threads (logical + physical) per core. Such a request can be fine grained or not; just request a whole node with ''--exclusive'' (test queue only) or share nodes (other queues, with ''--oversubscribe'').
  
//Note: this oversubscribing is not working yet. I can only get 4 simultaneous jobs running. Maybe there is a conflict with Openlava jobs. Should isolate a node and do further testing. After isolation (n37), 4 jobs with -n 4 exhaust the number of physical cores. Is that why the 5th job goes pending? Solved, see the Changes section.//
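As a sketch of what such requests can look like on the command line (''./my_program'' is a placeholder; partition names as used on this page):

<code>
# whole node, test queue only
$ srun --partition=test --exclusive -N 1 ./my_program

# shared node, fine grained: 1 socket, 4 cores per socket, 1 thread per core
$ srun --partition=mwgpu --oversubscribe -B 1:4:1 --mem=1024 ./my_program
</code>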
  
 ===== MPI =====
Slurm has a builtin MPI flavor; I suggest you do not rely on it. The documentation states that on major release upgrades the ''libslurm.so'' library is not backwards compatible and all software using it would need to be recompiled. There is a handy parallel job launcher which may be of use; it is called ''srun''.
  
For now, we'll rely on PATH/LD_LIBRARY_PATH settings to control the environment. This also implies your job should run under either Openlava or Slurm. With the new head node deployment we'll introduce ''modules'' to control the environment for newly installed software.
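For example, a job script might set those variables itself before calling the MPI launcher; the paths and program name below are purely illustrative, not actual locations on our systems:

<code>
#!/bin/bash
#SBATCH --job-name=mpi_env
#SBATCH --partition=mwgpu
#SBATCH -n 4
#SBATCH --mem=1024

# point PATH/LD_LIBRARY_PATH at the desired MPI flavor (illustrative path)
export PATH=/share/apps/openmpi/4.0.5/bin:$PATH
export LD_LIBRARY_PATH=/share/apps/openmpi/4.0.5/lib:$LD_LIBRARY_PATH

which mpirun
mpirun -np 4 ./my_mpi_program
</code>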
  
''srun'' commands can be embedded in a job submission script, but ''srun'' can also be run interactively. Like
 <code>
  
$ srun --partition=mwgpu -n 4 -B 1:4:1 --mem=1024 sleep 60 &
  
 </code>
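And as a sketch of the embedded form, ''srun'' inside an ''sbatch'' script simply launches its tasks within the allocation the script received:

<code>
#!/bin/bash
#SBATCH --partition=mwgpu
#SBATCH -n 4
#SBATCH --mem=1024

# srun inherits the 4-task allocation created by sbatch
srun hostname
srun sleep 60
</code>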
 ===== Feedback =====
  
If there are errors on this page, or misstatements, let me know. As we test and improve the setup to mimic a production environment, I will update the page (and mark those entries with a newer timestamp/signature).
  
 --- //[[hmeij@wesleyan.edu|Henk]] 2021/10/15 09:16//
  
===== Changes =====


** OverSubscribe **

Suggestion was made to set ''OverSubscribe=No'' for all partitions (thanks, Colin). We now observe with a simple sleep script that we can run 16 jobs simultaneously (with either ''-n'' or ''-B''). So that's 16 physical cores, each with a logical core (thread), for a total of 32 cpus on ''n37''.

''for i in `seq 1 17`; do sbatch sleep; done''

<code>
#!/bin/bash
#SBATCH --job-name=sleep
#SBATCH --partition=mwgpu
###SBATCH -n 1
#SBATCH -B 1:1:1
#SBATCH --mem=1024
sleep 60
</code>
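To watch the 16 jobs run (and the 17th go pending), something like this works:

<code>
$ squeue --partition=mwgpu --user=$USER
# sorta like bjobs under Openlava
</code>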

 --- //[[hmeij@wesleyan.edu|Henk]] 2021/10/15 15:18//

** GPU-CPU cores **

Noticed the following with the debug level on in ''slurmd.log''. No action taken.

<code>

# n37: old gpu model, bound to all physical cpu cores
GRES[gpu] Type:tesla_k20m Count:1 Cores(32):0-15  Links:-1,0,0,0 /dev/nvidia0
GRES[gpu] Type:tesla_k20m Count:1 Cores(32):0-15  Links:0,-1,0,0 /dev/nvidia1
GRES[gpu] Type:tesla_k20m Count:1 Cores(32):0-15  Links:0,0,-1,0 /dev/nvidia2
GRES[gpu] Type:tesla_k20m Count:1 Cores(32):0-15  Links:0,0,0,-1 /dev/nvidia3

# n78: somewhat dated gpu model, bound to top/bottom half of the physical cores (16)
GRES[gpu] Type:geforce_gtx_1080_ti Count:1 Cores(32):0-7   Links:-1,0,0,0 /dev/nvidia0
GRES[gpu] Type:geforce_gtx_1080_ti Count:1 Cores(32):0-7   Links:0,-1,0,0 /dev/nvidia1
GRES[gpu] Type:geforce_gtx_1080_ti Count:1 Cores(32):8-15  Links:0,0,-1,0 /dev/nvidia2
GRES[gpu] Type:geforce_gtx_1080_ti Count:1 Cores(32):8-15  Links:0,0,0,-1 /dev/nvidia3

# n79: more recent gpu model, same top/bottom binding pattern (24)
GRES[gpu] Type:geforce_rtx_2080_s Count:1 Cores(48):0-11   Links:-1,0,0,0 /dev/nvidia0
GRES[gpu] Type:geforce_rtx_2080_s Count:1 Cores(48):0-11   Links:0,-1,0,0 /dev/nvidia1
GRES[gpu] Type:geforce_rtx_2080_s Count:1 Cores(48):12-23  Links:0,0,-1,0 /dev/nvidia2
GRES[gpu] Type:geforce_rtx_2080_s Count:1 Cores(48):12-23  Links:0,0,0,-1 /dev/nvidia3

</code>
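If memory serves, this GRES-to-core mapping can also be printed directly on a compute node (root needed), roughly like so:

<code>
# print the gres configuration slurmd detected on this node
[root@n78 ~]# slurmd -G
</code>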

** Weight Priority **

Weight nodes by the memory per logical core: jobs will be allocated the nodes with the lowest weight that satisfies their requirements. So cpu jobs will be routed to the gpu queues last, because those nodes have the highest weight (= lowest priority).

<code>
 srun --partition=exx96,amber128,mwgpu  --mem=1024  --gpus=1  --gres=gpu:any sleep 60 &
</code>

The above will fill up n79 first, then n78, then n36...

<code>
hp12: 12/8 = 1.5
tinymem: 32/20 = 1.6
mw128: 128/24 = 5.333333
mw256: 256/16 = 16

exx96: 96/24 = 4
amber128: 128/16 = 8
mwgpu: 256/16 = 16
</code>
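In ''slurm.conf'' terms, the gpu-node side of that scheme might look something like the sketch below (node-to-partition mapping is assumed from the examples on this page; the other node parameters are omitted):

<code>
# lower weight = allocated first, so cpu-only jobs land on gpu nodes last
NodeName=n79     Weight=4    # exx96
NodeName=n78     Weight=8    # amber128
NodeName=n36,n37 Weight=16   # mwgpu
</code>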

Or weight more arbitrarily, based on the desired consumption of cpu nodes by cpu jobs. No action taken.

<code>
tinymem   10
mw128     20
mw256fd   30    HasMem256 feature so cpu jobs can directly target large mem
mwgpu     40    also HasMem256 feature
amber128  50
exx96     80
</code>

** CR_CPU_Memory **

Makes for a better one-to-one relationship of physical core to ''ntasks'': the "hyperthreads" are still available to user jobs, but physical cores are consumed first, if I got all this right.
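For reference, a minimal sketch of the ''slurm.conf'' lines involved, assuming the ''cons_tres'' select plugin since gpus are scheduled here:

<code>
SelectType=select/cons_tres
SelectTypeParameters=CR_CPU_Memory
</code>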

Deployed. My need to set threads=1 and cpus=(quantity of physical cores)... this went horribly wrong; it resulted in a sockets=1 and threads=1 setting for each node.
 --- //[[hmeij@wesleyan.edu|Henk]] 2021/10/18 14:32//

We did set the number of cpus per gpu (12 for n79) and minimum memory settings. Now we see the 5th job pending with 48 cpus consumed. When using ''sbatch'', set ''-n 8'' because ''sbatch'' will override the defaults.

<code>
 srun --partition=test  --mem=1024  --gres=gpu:geforce_rtx_2080_s:1 sleep 60 &
</code>
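Those per-gpu defaults might be expressed along these lines in ''slurm.conf'' (a sketch; the cpu count comes from the n79 note above, the memory value is illustrative):

<code>
# defaults applied when a job requests gpus in this partition
PartitionName=exx96 Nodes=n79 DefCpuPerGPU=12 DefMemPerCPU=1024 OverSubscribe=NO
</code>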
  
\\
**[[cluster:0|Back]]**
  