cluster:208 [2021/10/18 18:55] hmeij07 [Changes]
cluster:208 [2021/10/21 14:08] hmeij07 [Changes]
</code>
** Partition **

If set, you can list more than one queue...

<code>
srun --partition=exx96,
</code>

The above will fill up n79 first, then n78, then n36...
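A batch-script version of the partition list above might look like this sketch (the script body and the second queue name are illustrative assumptions, not from this page):

<code>
#!/bin/bash
#SBATCH --job-name=multi-q         # illustrative job name
#SBATCH --partition=exx96,mwgpu    # comma-separated list: Slurm starts the job in whichever listed partition can run it first
#SBATCH -n 1
srun hostname
</code>

Within each partition, node weights (below) then decide which node fills first.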
+ | |||
+ | ** Node Weight Priority ** | ||
+ | |||
+ | Weight nodes by the memory per logical core: jobs will be allocated the nodes with the lowest weight which satisfies their requirements. So CPU jobs will be routed last to gpu queues because they have the highest weight (=lowest priority). | ||
<code>
hp12: 12/8 = 1.5
...
</code>
Makes for a better 1-1 relationship of physical core to ''
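In slurm.conf this policy maps onto the ''Weight'' parameter of the node definition; a sketch (only hp12's 12/8 = 1.5 ratio comes from this page — the memory figure and the x10 scaling to keep Weight an integer are assumptions):

<code>
# slurm.conf fragment -- lower Weight = node allocated first
# hp12: 12 GB RAM / 8 logical cores = 1.5 -> Weight=15
NodeName=hp12 RealMemory=12000 CPUs=8 Weight=15
</code>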
Deployed. May need to set threads=1 and cpus=(quantity of physical cores)... this went horribly wrong; it resulted in a sockets=1 setting and threads=1 for each node.
--- //
+ | |||
+ | We did set number of cpus per gpu (12 for n79) and minimum memory settings. Now we experience 5th job pending with 48 cpus consumed. When using sbatch set -n 8 because sbatch will override defaults. | ||
+ | |||
+ | < | ||
+ | srun --partition=test | ||
+ | </ | ||
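A batch script following the -n advice above might look like this sketch (the partition name is from the srun example; the GPU request and command are illustrative assumptions):

<code>
#!/bin/bash
#SBATCH --partition=test
#SBATCH -n 8               # explicit task count; sbatch overrides the cpus-per-gpu defaults
#SBATCH --gres=gpu:1       # illustrative GPU request
srun my_gpu_app            # placeholder command
</code>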
\\
**[[cluster: