cluster:126

  
  * server ''cottontail'' (Supermicro 4U), old scheduler openlava, CentOS6
  * primary login server ''cottontail2'' (Supermicro 1U), new slurm scheduler, Rocky8, Warewulf
  * zenoss monitoring and alerting server ''hpcmon'' (Supermicro 1U), CentOS6
  * secondary login server ''greentail52'' (Supermicro 36+2, 2U), serving out /sanscratch, CentOS7, sandbox
  * server ''sharptail'' (Supermicro 4U), /lvm_data (backup), /zfshomes replication, CentOS6
  * server ''sharptail2'' (Supermicro 2U), disaster recovery for off site (active users only), CentOS6
  * storage servers ''rstore6'' and ''rstore7'' (Supermicro 4U), replicated, Samba shares (2x 220T)
  * storage servers ''mstore0/mstore1'' (Supermicro 4U), replicated, mounted on all HPC nodes (2x 110T)
  * storage server ''hpcstore'' (TrueNAS), dual controller shelf and two storage shelves, /zfshomes (235T)
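
The storage listed above appears on the login and compute nodes as ordinary mount points. A minimal sketch for checking which shared filesystems are mounted on the node you are logged into; only ''/zfshomes'' and ''/sanscratch'' are named on this page, the rest of the grep pattern is an assumption:

<code bash>
# Show the shared filesystems visible on this node (paths other than
# /zfshomes and /sanscratch are assumptions, adjust as needed).
df -h /zfshomes /sanscratch
mount | grep -E 'zfshomes|sanscratch|mstore'
</code>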
  
Several types of compute nodes are available via the scheduler:
  * 2 nodes with dual twelve core chips (Xeon 4214R “Cascade Lake Refresh” 2.4 GHz), Supermicro 1U servers with a memory footprint of 192 GB. These nodes hold four RTX5000 gpus each. Known as the Slurm test nodes. Serviced by ''cottontail2'', dual Xeon 5222 “Cascade Lake-SP” 3.8 GHz 4-core with a memory footprint of 96 GB. ''test'' queue, nodes n100-n101.
  
  * 6 nodes with dual 28 core chips (Xeon Gold 'Ice Lake-SP' 6330 CPU @ 2.00GHz), Supermicro 1U servers with a memory footprint of 256 GB each (1,536 GB total, about 27 teraflops dpfp). Known as the "astro" rack. ''mw256'' queue, nodes n102-n107. Storage server ''astrostore'' serves the nodes via NFSoRDMA (EDR Infiniband), about 164 TB.
  
All queues are available for job submissions via all login nodes. Some nodes sit on Infiniband switches for parallel computational jobs (queues: mw256fd, hp12, mw256). Our total job slot count is roughly 2,144, with a physical core count of 1,480. Our total teraflops compute capacity is about 58 on the cpu side, 25 on the gpu side (double precision floating point), and 702 on the gpu side (mixed mode). Our total memory footprint is about 560 GB gpu side and 10,452 GB cpu side.
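
From any login node you can inspect the Slurm side of the cluster before submitting. A minimal sketch, assuming the standard Slurm client tools and the queue names listed on this page (the openlava side on the older hosts uses ''bqueues''/''bjobs'' instead):

<code bash>
# Summarize Slurm partitions (queues) and node availability.
sinfo -s
# Show details for specific partitions named on this page.
sinfo -p mw256,test,exx96
# Show your own pending and running jobs.
squeue -u $USER
</code>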
  * exx96 contains 4 RTX2080S per node
    * same setup as mwgpu queue
  * test contains 8 RTX5000 gpus
    * can be used for production runs
    * beware of preemptive events, checkpoint! (see the job script sketch below)
  * mw128, NFSoRDMA, bought with faculty startup monies
    * beware of preemptive events, checkpoint!
    * 6 compute nodes
    * Priority access for Sarah's lab till 4/1/2026
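
For the preemptable queues above, a minimal Slurm batch script sketch; the resource values, application name, and checkpoint directory are illustrative assumptions, not site defaults:

<code bash>
#!/bin/bash
#SBATCH --job-name=rtx5000-test
#SBATCH --partition=test      # preemptable queue from the list above
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --gres=gpu:1          # request one of the node's RTX5000 gpus
#SBATCH --time=24:00:00
#SBATCH --requeue             # let Slurm requeue the job if it is preempted

# Because jobs on these queues can be preempted, write periodic checkpoints
# your application can restart from (./my_app and its flag are placeholders).
srun ./my_app --checkpoint-dir "$HOME/checkpoints"
</code>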
  
**NOTE**: we are migrating from Openlava to Slurm during summer 2022. All queues except ''hp12'' and ''mw256fd'' will be serviced by the ''cottontail2'' Slurm scheduler.
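
For scripts being moved off Openlava, the submission directives translate roughly one-to-one. A sketch, assuming an LSF-style ''#BSUB'' script as the starting point; the application name and resource values are illustrative only:

<code bash>
#!/bin/bash
# job.sh -- Slurm translation of an Openlava/LSF-style script (illustrative).
#SBATCH --partition=mw256      # was: #BSUB -q mw256
#SBATCH -n 8                   # was: #BSUB -n 8
#SBATCH --output=out.%j        # was: #BSUB -o out.%J

./my_app

# Submit:  sbatch job.sh       (was: bsub < job.sh)
# Monitor: squeue -u $USER     (was: bjobs)
# Cancel:  scancel <jobid>     (was: bkill <jobid>)
</code>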