cluster:126

  * 12 nodes with dual twelve core chips (Xeon Silver 4214, 2.20 GHz) in ASUS ESC4000G4 2U rack servers with a memory footprint of 96 GB (1,152 GB total, about 20 teraflops dpfp). These nodes each have four RTX2080S gpus (32 GB memory footprint) providing 702 teraflops (mixed mode). Known as the "rtx2080" rack, nodes n79-n90, queue exx96, 432 job slots.
  
  * 2 nodes with dual twelve core chips (Xeon 4214R “Cascade Lake Refresh”, 2.4 GHz) in Supermicro 1U servers with a memory footprint of 192 GB. These nodes hold four RTX5000 gpus each. Known as the "slurm" test nodes, nodes n100-n101. Served by cottontail2, a dual Xeon 5222 “Cascade Lake-SP” 3.8 GHz 4-core server with a memory footprint of 96 GB. (See the tally sketch after this list.)
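The cluster-wide totals in the next paragraph are sums over per-rack figures like the ones in the two items above. The short Python tally below is a sketch using only numbers quoted on this page; the 36 slots per exx96 node is inferred from 432 slots across 12 nodes, and the slots-per-node value for the test nodes is an assumption (one slot per physical core).

<code python>
# Tally of the per-rack figures quoted in the list above (all numbers from this page).
racks = [
    {"name": "exx96 / rtx2080 rack (n79-n90)", "nodes": 12, "slots_per_node": 36,  # 432 slots / 12 nodes
     "cpu_mem_gb": 96, "gpu_mem_gb": 32},
    {"name": "slurm test nodes (n100-n101)", "nodes": 2, "slots_per_node": 24,     # assumed: 1 slot per core
     "cpu_mem_gb": 192, "gpu_mem_gb": None},                                       # gpu memory per node not stated here
]

for r in racks:
    print(r["name"])
    print("  job slots :", r["nodes"] * r["slots_per_node"])
    print("  cpu memory:", r["nodes"] * r["cpu_mem_gb"], "GB")
    if r["gpu_mem_gb"] is not None:
        print("  gpu memory:", r["nodes"] * r["gpu_mem_gb"], "GB")
</code>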
  
All queues are available for job submissions via all login nodes. Some nodes are on Infiniband switches for parallel computational jobs (queues: mw256fd, hp12). Our total job slot count is roughly 2,144 with a physical core count of 1,480. Our total teraflops compute capacity is about 58 cpu side, 25 gpu side (double precision floating point) and 702 gpu side (mixed mode). Our total memory footprint is about 560 GB gpu side and 8,916 GB cpu side.
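As a quick sanity check on those headline totals, the sketch below (plain Python, using only numbers quoted above) computes slots per physical core and memory per core; a ratio above 1 is expected since, for example, the exx96 rack runs 432 job slots on 12 x 24 = 288 physical cores.

<code python>
# Headline totals quoted above.
total_job_slots = 2144   # "roughly 2,144" job slots
physical_cores  = 1480   # physical core count
cpu_mem_gb      = 8916   # cpu-side memory footprint, GB

print(f"slots per physical core : {total_job_slots / physical_cores:.2f}")  # ~1.45
print(f"cpu memory per core (GB): {cpu_mem_gb / physical_cores:.1f}")       # ~6.0
# The exx96 rack alone runs 432 slots on 12 x 24 = 288 physical cores:
print(f"exx96 slots per core    : {432 / (12 * 24):.2f}")                   # 1.50
</code>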
  
 The home directory file system is provided (via NFS or IPoIB) by the node ''hpcstore'' (our file server) from a direct attached disk array. In total, 190 TB of /zfshomes disk space is accessible to the users. Node ''greentail52'' makes 55 TB of scratch space available at /sanscratch via NFS. In addition, all nodes provide local scratch space at /localscratch (excludes queue tinymem). The scheduler automatically makes directories in both these scratch areas for each job (named after the JOBPID). Backup services for /zfshomes are provided via disk-to-disk replication from node ''hpcstore'' to node ''sharptail'' disk arrays. The TrueNAS/ZFS appliance performs daily snapshots with a retention window of 365 days. Some faculty have their home directories on node ''ringtail'', which provides 33 TB via /home33. Some faculty also have their own storage (2x 110 TB via /mindstore). All home directories will migrate to a FreeNAS/ZFS appliance named ''hpcstore'' in 2020 (190T usable, scalable to 1.2P, ETA summer 2020).
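Because the scheduler pre-creates a directory named after the JOBPID in both scratch areas, job scripts normally derive those paths from the scheduler's job-id environment variable rather than hard-coding them. The following is a minimal Python sketch assuming the conventional variables LSB_JOBID (LSF/OpenLava) and SLURM_JOB_ID (the slurm test nodes) and the /sanscratch/<jobid>, /localscratch/<jobid> layout described above.

<code python>
import os
from pathlib import Path

# The scheduler pre-creates /sanscratch/<jobid> and /localscratch/<jobid> for each job.
# Which variable holds the job id depends on the scheduler the job was started under
# (assumption for this sketch: LSB_JOBID under LSF/OpenLava, SLURM_JOB_ID under Slurm).
job_id = os.environ.get("LSB_JOBID") or os.environ.get("SLURM_JOB_ID")
if job_id is None:
    raise SystemExit("not running inside a scheduled job")

sanscratch   = Path("/sanscratch") / job_id     # shared scratch served by greentail52
localscratch = Path("/localscratch") / job_id   # node-local scratch (not on queue tinymem)

print("shared scratch:", sanscratch)
print("local scratch :", localscratch)
# Typical pattern: stage input into one of these, run there, copy results back to /zfshomes.
</code>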
cluster/126.txt · Last modified: 2023/10/23 15:37 by hmeij07