 --- //[[hmeij@wesleyan.edu|Henk]] 2021/10/15 09:16//
  
===== gpu testing =====

  * n33-n37 each: 4 gpus, 16 cores, 16 threads, 32 cpus
  * submit jobs one at a time and observe
  * part=test, -n 1, -B 1:1:1, CUDA_VISIBLE_DEVICES=0, no node specified, n33 only
  * "Resources" pending reason at the 17th submit, having used up 16 cores and 16 threads
  * all jobs on the same gpu
  * part=test, -n 1, -B 1:1:1, CUDA_VISIBLE_DEVICES not set, no node specified, n33 only
  * "Resources" pending reason at the 17th submit as well, same cause
  * all gpus used? nope, all on the same one, gpu 0
  * redoing the above with ''export CUDA_VISIBLE_DEVICES=`shuf -i 0-3 -n 1`'' (see the sketch after this list)
  * even distribution across all gpus, the 17th submit still pends with the "Resources" reason
  * part=test, -n 1, -B 1:1:1, CUDA_VISIBLE_DEVICES not set, no node specified, n[33-34] available
  * while submitting 34 jobs, one at a time (30s delay), slurm fills up n33 first (all on gpu 0)
  * 17th submit goes to n34, gpu 1 (odd)
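
A minimal sketch of the kind of test job used above, assuming a simple batch script; the flags mirror the list (partition test, one task, ''-B 1:1:1''), but the script itself is illustrative, not the exact one submitted.

<code>
#!/bin/bash
# hypothetical test job script -- illustrative only
#SBATCH --partition=test
#SBATCH -n 1
#SBATCH -B 1:1:1

# pick one of the node's four gpus at random so consecutive jobs
# spread out instead of all landing on gpu 0
export CUDA_VISIBLE_DEVICES=`shuf -i 0-3 -n 1`

echo "host $HOSTNAME using gpu $CUDA_VISIBLE_DEVICES"
sleep 60
</code>
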
===== Changes =====

** GPU-CPU cores **
  
Noticed this with debug level on in slurmd.log. No action taken.
  
<code>

</code>

** Partition Priority **

If partition priority is set, you can list more than one queue in a single submission...

<code>
 srun --partition=exx96,amber128,mwgpu  --mem=1024  --gpus=1  --gres=gpu:any sleep 60 &
</code>

The above will fill up n79 first, then n78, then n36...
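
As a sketch, partition priority is typically expressed with ''PriorityTier'' on the partition definitions in slurm.conf; the node lists and tier values below are assumptions, not the cluster's actual configuration.

<code>
# hypothetical slurm.conf snippet -- illustrative values only
PartitionName=exx96    Nodes=n79      PriorityTier=30 State=UP
PartitionName=amber128 Nodes=n78      PriorityTier=20 State=UP
PartitionName=mwgpu    Nodes=n[33-37] PriorityTier=10 State=UP
</code>

With the highest tier on exx96, jobs listing all three partitions are placed on n79 first, which matches the fill order above.
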

** Node Weight Priority **

Weight nodes by the memory per logical core: jobs are allocated the lowest-weight nodes that satisfy their requirements, so cpu jobs are routed to the gpu queues last because those carry the highest weight (= lowest priority).

<code>
hp12:     12/8  = 1.5
tinymem:  32/20 = 1.6
mw128:   128/24 = 5.33
mw256:   256/16 = 16

exx96:    96/24 = 4
amber128: 128/16 = 8
mwgpu:   256/16 = 16
</code>
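
A hedged sketch of how these weights could be applied in slurm.conf; ''Weight'' must be an integer, so the ratios are scaled by 10 here, and the node lists are made-up placeholders.

<code>
# hypothetical slurm.conf snippet -- node lists are placeholders
NodeName=n[1-12]   Weight=15   # hp12     (12 GB / 8 cores)
NodeName=n[46-59]  Weight=16   # tinymem  (32 GB / 20 cores)
NodeName=n[60-77]  Weight=53   # mw128    (128 GB / 24 cores)
NodeName=n[13-20]  Weight=160  # mw256    (256 GB / 16 cores)
NodeName=n79       Weight=40   # exx96    (96 GB / 24 cores)
NodeName=n78       Weight=80   # amber128 (128 GB / 16 cores)
NodeName=n[33-37]  Weight=160  # mwgpu    (256 GB / 16 cores)
</code>

Slurm allocates the lowest-weight eligible nodes first.
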

Or, more arbitrarily, weight by the desired consumption order of nodes for cpu jobs. No action taken.

<code>
tinymem    10
mw128      20
mw256fd    30    HasMem256 feature so cpu jobs can directly target large mem
mwgpu      40    also HasMem256 feature
amber128   50
exx96      80
</code>
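
A hedged example of a cpu job targeting the large-memory nodes through the ''HasMem256'' feature mentioned above; the feature name comes from the table, the rest of the command is illustrative.

<code>
 srun --partition=mw256fd,mwgpu --constraint=HasMem256 --mem=100000 -n 8 sleep 60 &
</code>
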

** CR_CPU_Memory **

Makes for a better one-to-one relationship of physical core to ''ntask'', yet the "hyperthreads" remain available to user jobs; physical cores are consumed first, if I got all this right.

Deployed. My attempt to set threads=1 and cpus=(number of physical cores)...went horribly wrong; it resulted in a sockets=1 and threads=1 setting for each node.
 --- //[[hmeij@wesleyan.edu|Henk]] 2021/10/18 14:32//
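
For context, a hedged sketch of the kind of slurm.conf settings involved; the select plugin choice and the n79 topology below are assumptions, not the values actually deployed.

<code>
# hypothetical slurm.conf snippet -- illustrative only
SelectType=select/cons_tres
SelectTypeParameters=CR_CPU_Memory

# describe the real topology instead of Sockets=1 ThreadsPerCore=1,
# so tasks map to physical cores while hyperthreads stay usable
NodeName=n79 Sockets=2 CoresPerSocket=12 ThreadsPerCore=2 RealMemory=96000 Gres=gpu:geforce_rtx_2080_s:4
</code>
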

We did set the number of cpus per gpu (12 for n79) and minimum memory settings. Now we see the 5th job pending once 48 cpus are consumed. When using sbatch, set -n 8 because sbatch will override these defaults.

<code>
 srun --partition=test  --mem=1024  --gres=gpu:geforce_rtx_2080_s:1 sleep 60 &
</code>

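
A hedged sbatch sketch of that advice; the script is illustrative and assumes the same test partition and gres type as the srun line above.

<code>
#!/bin/bash
# hypothetical batch script -- request 8 tasks explicitly so the
# per-gpu cpu default does not claim more cpus than intended
#SBATCH --partition=test
#SBATCH -n 8
#SBATCH --mem=1024
#SBATCH --gres=gpu:geforce_rtx_2080_s:1

sleep 60
</code>
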
\\
**[[cluster:0|Back]]**
  