cluster:208 [2021/10/21 14:03] hmeij07 [Changes]
cluster:208 [2022/05/26 17:23] hmeij07 [Feedback]
--- //
===== gpu testing =====

  * n33 only, free of jobs, 4 gpus, 16 cores, 16 threads, 32 cpus
  * submit one at a time, observe where pmemd.cuda ends up
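A minimal job-script sketch for this test, submitting a single ''pmemd.cuda'' run to n33 and then querying which GPU the process landed on. The Amber input/output file names and the use of ''--nodelist''/''--gres'' are assumptions about the local setup, not taken from this page:

```shell
#!/bin/bash
# Hypothetical test job: pin to n33, request one GPU.
#SBATCH --nodelist=n33
#SBATCH --gres=gpu:1
#SBATCH -n 1

# Launch one Amber GPU job in the background (placeholder file names).
pmemd.cuda -O -i mdin -o mdout -p prmtop -c inpcrd &

# Give the job time to attach to a GPU, then list compute processes
# per GPU so we can observe where pmemd.cuda ended up.
sleep 30
nvidia-smi --query-compute-apps=pid,process_name,gpu_uuid --format=csv

wait
```

Repeating this one submission at a time shows whether successive jobs spread across the 4 GPUs or pile onto one.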
===== Changes =====
</code>
** Partition **

If set, you can list more than one queue...

<code>
srun --partition=exx96,
</code>

The above will fill up n79 first, then n78, then n36...

** Node Weight Priority **

Weight nodes by the memory per logical core: jobs are allocated to the nodes with the lowest weight that satisfies their requirements. So CPU jobs are routed last to gpu queues, because those nodes carry the highest weight (= lowest priority).
<code>
hp12: 12/8 = 1.5