cluster:208 [2021/10/15 13:02] hmeij07 [Overview]
cluster:208 [2021/10/15 19:16] hmeij07 [Feedback]
Slurm has a built-in MPI flavor; I suggest you do not rely on it. The documentation states that on major release upgrades the ''
For now, we'll rely on PATH/
''
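Since we are relying on environment settings rather than Slurm's built-in MPI support, a job script would first point PATH and the library search path at the desired MPI install. A minimal sketch — the OpenMPI prefix below is a hypothetical example, not this cluster's actual location:

```shell
# Hypothetical install prefix -- substitute your site's actual MPI location.
export MPI_HOME=/share/apps/openmpi/4.1.1
# Put this MPI's launcher and libraries ahead of anything else on the node.
export PATH="$MPI_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$MPI_HOME/lib:$LD_LIBRARY_PATH"
```

Because the paths are set in the job script itself, the same script keeps working across Slurm major-version upgrades.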
<code>
$ srun --partition=mwgpu -n 4 -B 1:4:1 --mem=1024 sleep 60 &
</code>
===== Feedback =====
If there are errors or misstatements on this page,
--- //
===== Changes =====

A suggestion was made to set ''OverSubscribe=NO'' for all partitions (thanks, Colin). With a simple sleep script we now observe that we can run 16 jobs simultaneously (with either ''-n'' or ''-B''). That's 16 physical cores, each with a logical core, for a total of 32.

''
<code>
#!/bin/bash
#SBATCH --job-name=sleep
#SBATCH --partition=mwgpu
###SBATCH -n 1
#SBATCH -B 1:1:1
#SBATCH --mem=1024
sleep 60
</code>
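To reproduce the observation, a loop like the following could submit 16 copies of the script above and then count the running jobs; the filename ''sleep.sh'' is an assumption, not a file named on this page:

```shell
#!/bin/bash
# Submit 16 copies of the sleep batch script (the filename sleep.sh is
# hypothetical) to check that all of them start at once.
for i in $(seq 1 16); do
    sbatch sleep.sh
done
# Count how many of them are running on the partition.
squeue --partition=mwgpu --states=RUNNING --noheader | wc -l
```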
\\
**[[cluster: