cluster:126 [2023/10/23 19:37] (current) hmeij07
  * 2 nodes with dual twelve core chips (Xeon 4214R "Cascade Lake Refresh", 2.4 GHz), Supermicro 1U servers with a memory footprint of 192 GB. These nodes hold four RTX5000 gpus each. Known as the "
  * 6 nodes with dual 28 core chips (Xeon Gold 'Ice Lake-SP'
All queues are available for job submissions via all login nodes. Some nodes are on Infiniband switches for parallel computational jobs (queues: mw256fd, hp12, mw256).
  * exx96 contains 4 RTX2080S per node
    * same setup as mwgpu queue
  * test contains 8 RTX5000 gpus
    * can be used for production runs
    * beware of preemptive events, checkpoint!
  * mw128, NFSoRDMA, bought with faculty startup monies
    * beware of preemptive events, checkpoint!
    * 6 compute nodes
    * Priority access for Sarah'
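Since jobs on these queues can be preempted at any time, a submission script should ask Slurm to requeue the job and trap the termination signal so the application can checkpoint first. A minimal sketch, assuming the partition name ``test`` from the list above; ``save_state`` and ``my_gpu_app`` are hypothetical placeholders for your application's own checkpoint mechanism and binary:

```shell
#!/bin/bash
#SBATCH --partition=test        # preemptible queue from the list above
#SBATCH --gres=gpu:1            # request one gpu
#SBATCH --requeue               # allow Slurm to requeue the job after preemption
#SBATCH --signal=B:SIGTERM@60   # deliver SIGTERM to the batch shell 60s before the kill

# save_state is a hypothetical stand-in: replace with your application's
# checkpoint command (write restart files, flush output, etc.)
save_state() { echo "checkpointing before preemption"; }
trap save_state SIGTERM

srun ./my_gpu_app               # placeholder for the real workload
```

On requeue the job starts over, so the workload itself must detect and load the checkpoint files it wrote.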
**NOTE**: we are migrating from Openlava to Slurm during summer 2022. All queues except ''