cluster:208 [2021/10/15 12:57] hmeij07
cluster:208 [2021/10/15 13:16] hmeij07 [Feedback]
Same on the cpu-only compute nodes. Features could be created for memory footprints (for example ...).
On the cpu resource requests: you may request 1 or more nodes, 1 or more sockets per node, 1 or more cores (physical) per socket, or 1 or more threads (logical + physical) per core. Such a request can be fine-grained or not; just request a node with ''
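The node/socket/core/thread hierarchy above multiplies out to the logical CPU count Slurm sees. A minimal sketch, assuming a hypothetical node layout of 2 sockets, 4 cores per socket, and 2 threads per core (the ''-B S:C:T'' request shape and partition name are taken from the srun example later on this page; ''./my_app'' is a placeholder):

```shell
#!/bin/sh
# Hypothetical node layout: 2 sockets x 4 cores/socket x 2 threads/core.
SOCKETS=2; CORES=4; THREADS=2
echo "logical CPUs per node: $((SOCKETS * CORES * THREADS))"

# Fine-grained request: -B sockets:cores:threads, here one task per
# physical core on a single socket (needs a real Slurm cluster to run):
#   srun --partition=mwgpu -N 1 -B 1:4:1 -n 4 --mem=1024 ./my_app
# Coarse request: just ask for 4 tasks, placed wherever they fit:
#   srun --partition=mwgpu -n 4 --mem=1024 ./my_app
```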
//Note: this oversubscribing is not working yet; I can only get 4 simultaneous jobs running. Maybe there is a conflict with Openlava jobs; should isolate a node and do further testing. After isolation (n37), 4 jobs with -n 4 exhaust the number of physical cores. Is that why the 5th job goes pending?//
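The note's arithmetic can be checked directly. A sketch, assuming n37 has 16 physical cores (an assumption; the real count comes from ''scontrol show node n37''):

```shell
#!/bin/sh
# Assumption: n37 has 16 physical cores (verify: scontrol show node n37).
PHYS_CORES=16
TASKS_PER_JOB=4
RUNNING_JOBS=4
USED=$((TASKS_PER_JOB * RUNNING_JOBS))
echo "cores in use: $USED / $PHYS_CORES"
# With oversubscription off, a full node means the next -n 4 job pends.
[ "$USED" -ge "$PHYS_CORES" ] && echo "node full: next job pends"
```

If the numbers line up as here, the pending 5th job is expected scheduler behavior rather than an Openlava conflict.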
Slurm has a built-in MPI flavor; I suggest you do not rely on it. The documentation states that on major release upgrades the ''
For now, we'll rely on PATH/
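A sketch of steering a job toward a site-installed MPI via the environment rather than Slurm's built-in flavor (the truncated "PATH/" above presumably also covers LD_LIBRARY_PATH, an assumption; the ''/share/apps'' install path is hypothetical):

```shell
#!/bin/sh
# Sketch: prefer a site-installed MPI over Slurm's built-in flavor.
# MPI_HOME is a hypothetical path; substitute your site's actual install.
MPI_HOME=/share/apps/openmpi/4.1.1
PATH="$MPI_HOME/bin:$PATH"
LD_LIBRARY_PATH="$MPI_HOME/lib:${LD_LIBRARY_PATH:-}"
export PATH LD_LIBRARY_PATH
# The first PATH entry should now be the chosen MPI's bin directory:
echo "$PATH" | cut -d: -f1
```

Putting these exports at the top of the batch script keeps the MPI choice with the job, so a Slurm major-release upgrade does not silently change which ''mpirun'' the job finds.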
<code>
$ srun --partition=mwgpu -n 4 -B 1:4:1 --mem=1024 sleep 60 &
</code>
If there are errors on this page, or misstatements,
--- //