  * 12 nodes with dual twelve-core chips (Xeon Silver 4214, 2.20 GHz) in ASUS ESC4000G4 2U rack servers with a memory footprint of 96 GB (1,152 GB total, about 20 teraflops dpfp). These nodes each hold four RTX2080S gpus (32 GB memory footprint) providing 702 teraflops (mixed mode). Known as the "rtx2080" rack, nodes n79-n90, queue exx96, 432 job slots.
  * 2 nodes with dual twelve-core chips (Xeon 4214R “Cascade Lake Refresh”, 2.4 GHz), Supermicro 1U servers with a memory footprint of 192 GB. These nodes hold four RTX5000 gpus each. Known as the "slurm test" nodes. Serviced by cottontail2, dual Xeon 5222 “Cascade Lake-SP” 3.8 GHz 4-core with a memory footprint of 96 GB. ''test'' queue, nodes n100-n101.
  * 6 nodes with dual 28-core chips (Xeon Gold 'Ice Lake-SP' 6330 CPU @ 2.00 GHz), Supermicro 1U servers with a memory footprint of 256 GB (1,536 GB total, about 27 teraflops dpfp). Known as the "astro" rack. ''mw256'' queue, nodes n102-n107. Storage server "astrostore" serves the nodes via NFSoRDMA (EDR Infiniband), about 164 TB.
  * 10 nodes with dual twelve-core chips (Xeon Silver 4410Y CPU @ 3.9 GHz), Emerald Rapids Microway servers with a memory footprint of 256 GB (2,560 GB total, about 90 teraflops dpfp). These nodes hold four RTX4070Ti-Super gpus each. Known as the "desktop" racks. ''mwgpu256'' queue, nodes n108-n117.

All queues are available for job submissions via the cottontail2 login node. Some nodes are on Infiniband switches for parallel computational jobs (queues: mw256fd, hp12, mw256). Our total job slot count is roughly 2,624 with a physical core count of 1,312. Our total teraflops compute capacity is about 88 cpu side and 2,462 gpu side (mixed mode). Our total memory footprint is about 1,200 GB gpu side and 13,012 GB cpu side.
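
The per-rack memory subtotals quoted in the bullets above are simple products of node count and per-node memory; below is a minimal Python sketch of that arithmetic (all figures are copied from this page, nothing is queried from the cluster, and the cluster-wide totals in the previous paragraph also include older racks not described in this section):

<code python>
# Per-rack memory footprint, as quoted on this page: (node count, GB per node).
racks = {
    "rtx2080, n79-n90":   (12,  96),   # 12 x 96  GB = 1,152 GB
    "test, n100-n101":    ( 2, 192),   #  2 x 192 GB =   384 GB
    "astro, n102-n107":   ( 6, 256),   #  6 x 256 GB = 1,536 GB
    "desktop, n108-n117": (10, 256),   # 10 x 256 GB = 2,560 GB
}

for rack, (nodes, gb_per_node) in racks.items():
    print(f"{rack}: {nodes} nodes x {gb_per_node} GB = {nodes * gb_per_node:,} GB")
</code>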
Home directory file systems are provided (via NFS or IPoIB) by the node M40HA (our file server) from a direct attached disk array. In total, 500 TB of /zfshomes disk space is accessible to the users. Node ''greentail52'' makes available 55 TB of scratch space at /sanscratch via NFS. In addition, all nodes provide local scratch space at /localscratch (excludes queue tinymem). The scheduler automatically makes directories in both these scratch areas for each job (named after the JOBPID). Backup services for /zfshomes are provided via replication to an older X20HA TrueNAS/ZFS appliance. The M40HA TrueNAS/ZFS appliance performs daily snapshots with a retention window of 180 days. Some faculty & students have their home directories on node ''ringtail'' which provides 33 TB via /home33. Some faculty & students have their home directories on node ''ringtail2'' which provides 66 TB via /home66. Some faculty & students also have their own storage (2x 110 TB via /mindstore). Static content should be migrated to the Rstore platform.
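
A minimal sketch of using the per-job scratch directories described above from inside a running Slurm job, assuming the directory names match the job id that Slurm exports as ''SLURM_JOB_ID'' (the input and output file names are illustrative only):

<code python>
import os
import shutil

# The scheduler pre-creates a directory named after the job id in both
# scratch areas; SLURM_JOB_ID is the id Slurm sets inside a running job.
job_id = os.environ["SLURM_JOB_ID"]
san_scratch   = os.path.join("/sanscratch", job_id)    # NFS scratch served by greentail52
local_scratch = os.path.join("/localscratch", job_id)  # node-local scratch

# Illustrative staging: copy input out of the home directory, compute in
# scratch, and copy results back before the job ends.
home = os.path.expanduser("~")
shutil.copy(os.path.join(home, "input.dat"), local_scratch)
# ... run the computation against files in local_scratch here ...
shutil.copy(os.path.join(local_scratch, "output.dat"), home)
</code>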
===== Our Queues =====
There are no commercial software license resources managed by the scheduler. Only stata has a limited 6-user license. Matlab and Mathematica now have "unlimited licenses".
^Queue^Nr Of Nodes^Total GB Mem Per Node^Total Cores In Queue^Switch^Hosts^Notes^
| stata | //na// | //na// | //na// | QDR Infiniband | //any host// | 6 licenses |
| test | 2 | 192 | 96 | gigabit ethernet | n100-n101 | GPU & CPU |
| mw256 | 6 | 256 | 672 | EDR infiniband | n102-n107 | CPU |
| mwgpu256 | 10 | 256 | 480 | gigabit ethernet | n108-n117 | GPU & CPU |
Some guidelines for appropriate queue usage with detailed page links:
  * can be used for production runs
  * beware of preemptive events, checkpoint! (see the checkpoint sketch after this list)
  * mw256, NFSoRDMA, bought with faculty startup monies
    * beware of preemptive events, checkpoint!
    * 6 compute nodes
    * Priority access for Sarah's lab till 4/1/2026
  * mwgpu256, contains 40 RTX4070Ti-Super gpus
    * same setup as exx96 queue
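
Because jobs on these queues can be preempted, periodic checkpointing is strongly advised. Below is a minimal, application-agnostic sketch in Python (the file name, step count and interval are illustrative; codes that have their own restart files should use those instead), written so the checkpoint survives a kill at any point:

<code python>
import os
import pickle

# Keep the checkpoint somewhere that outlives the job, e.g. the home
# directory, not /localscratch on the compute node.
CHECKPOINT = "state.pkl"

# Resume from the last checkpoint if one exists, otherwise start fresh.
if os.path.exists(CHECKPOINT):
    with open(CHECKPOINT, "rb") as fh:
        state = pickle.load(fh)
else:
    state = {"step": 0, "result": 0.0}

while state["step"] < 1_000_000:
    state["result"] += state["step"] * 1e-6      # stand-in for real work
    state["step"] += 1
    if state["step"] % 10_000 == 0:              # checkpoint every 10,000 steps
        with open(CHECKPOINT + ".tmp", "wb") as fh:
            pickle.dump(state, fh)
        os.replace(CHECKPOINT + ".tmp", CHECKPOINT)  # atomic swap, no partial files
</code>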
**NOTE**: we are migrating from Openlava to Slurm during summer 2022. All queues except ''hp12'' and ''mw256fd'' will be serviced by the ''cottontail2'' Slurm scheduler.
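
For reference, a minimal sketch of submitting a job to one of the Slurm-serviced queues from the login node; the partition, resource request and wrapped command are illustrative only, and the gpu GRES name is an assumption about the local Slurm configuration:

<code python>
import subprocess

# sbatch's --partition, --ntasks, --gres and --wrap options are standard Slurm;
# replace the wrapped command with a real batch script for production work.
subprocess.run(
    [
        "sbatch",
        "--partition=test",   # one of the queues listed above
        "--ntasks=1",
        "--gres=gpu:1",       # assumes a "gpu" GRES is defined on these nodes
        "--wrap=hostname",
    ],
    check=True,
)
</code>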