===== Description =====
  
The High Performance Compute Cluster (HPCC) comprises several login nodes, all on our internal network (vlan 52) //wesleyan.edu//, so VPN is required for off campus access as well as for students on campus.
  
  * server ''cottontail'' (Supermicro 4U), old scheduler Openlava, CentOS6
  * primary login server ''cottontail2'' (Supermicro 1U), new Slurm scheduler, Rocky8, Warewulf
  * zenoss monitoring and alerting server ''hpcmon'' (Supermicro 1U), CentOS6
  * secondary login server ''greentail52'' (Supermicro 36+2, 2U), serving out /sanscratch, CentOS7, sandbox
  * server ''sharptail'' (Supermicro 4U), /lvm_data (backup), /zfshomes replication, CentOS6
  * server ''sharptail2'' (Supermicro 2U), disaster recovery for off site (active users only), CentOS6
  * storage servers ''rstore4'' and ''rstore5'' (Supermicro 4U), replicated, Samba shares (2x 220T)
  * storage servers ''rstore6'' and ''rstore7'' (Supermicro 4U), replicated, Samba shares (2x 220T)
  * storage servers ''mstore0/mstore1'' (Supermicro 4U), replicated, mounted on all HPC nodes (2x 110T)
  * storage server ''hpcstore'' TrueNAS, dual controller shelf and two storage shelves, /zfshomes (235T)
  
Several types of compute nodes are available via the scheduler:
  * All are x86_64, Intel Xeon chips from 2006 onwards
  * All are on private networks (192.168.x.x and/or 10.10.x.x, no internet)
  * All mount /zfshomes (235T TrueNAS/ZFS appliance 2020) and /sanscratch (xfs, 55T)
  * All have local disks providing varying amounts of /localscratch (usually Raid0, no backup!)
  * Hyperthreading is on but only 50% of logical cores allocated via scheduler
  * 12 nodes with dual twelve core chips (Xeon Silver 4214, 2.20 GHz) in ASUS ESC4000G4 2U rack servers with a memory footprint of 96 GB (1,152 GB, about 20 teraflops dpfp). These nodes each have four RTX1080S (32 GB memory footprint) gpus providing 702 teraflops (mixed mode). Known as the "rtx2080" rack, nodes n79-n90, queue exx96, 432 job slots.
  
  * 2 nodes with dual twelve core chips (Xeon 4214R "Cascade Lake Refresh", 2.4 GHz) in Supermicro 1U servers with a memory footprint of 192 GB. These nodes hold four RTX5000 gpus each. Known as the Slurm test nodes, nodes n100-n101, queue ''test''. Served by ''cottontail2'', dual Xeon 5222 "Cascade Lake-SP" 3.8 GHz 4-core with a memory footprint of 96 GB.
  
  * 6 nodes with dual 28 core chips (Xeon Gold "Ice Lake-SP" 6330 CPU @ 2.00 GHz) in Supermicro 1U servers with a memory footprint of 256 GB (1,536 GB, about 27 teraflops dpfp). Known as the "astro" rack, queue ''mw256'', nodes n102-n107. Storage server "astrostore" serves these nodes via NFSoRDMA (EDR Infiniband), about 164 TB.
  
All queues are available for job submissions via all login nodes. Some nodes are on Infiniband switches for parallel computational jobs (queues: mw256fd, hp12, mw256). Our total job slot count is roughly 2,144 with our physical core count 1,480. Our total teraflops compute capacity is about 58 cpu side, 25 gpu side (double precision floating point) and 702 gpu side (mixed mode). Our total memory footprint is about 560 GB gpu side, 10,452 GB cpu side.
  
The home directory file system is provided (via NFS or IPoIB) by the node ''hpcstore'' (our file server) from a direct attached disk array. In total, 235 TB of /zfshomes disk space is accessible to the users. Node ''greentail52'' makes available 55 TB of scratch space at /sanscratch via NFS. In addition all nodes provide local scratch space at /localscratch (excludes queue tinymem). The scheduler automatically makes directories in both these scratch areas for each job (named after the JOBPID). Backup services for /zfshomes are provided via disk-to-disk replication from node ''hpcstore'' to node ''sharptail'' disk arrays. The TrueNAS/ZFS appliance performs daily snapshots with a retention window of 180 days. Some faculty & students have their home directories on node ''ringtail'' which provides 33 TB via /home33. Some faculty & students have their home directories on node ''ringtail2'' which provides 66 TB via /home33. Some faculty & students also have their own storage (2x 110 TB via /mindstore). Static content should be migrated to the Rstore platform.
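As a minimal sketch (assuming a Slurm job, where the per-job scratch directory is named after ''$SLURM_JOB_ID''; under Openlava the variable would be ''$LSB_JOBID''), a job script can stage its work through /sanscratch like this:

<code bash>
#!/bin/bash
# hypothetical staging example: run inside the per-job scratch directory
MYSANSCRATCH=/sanscratch/$SLURM_JOB_ID   # created by the scheduler for this job
cd $MYSANSCRATCH
cp ~/project/input.dat .                 # stage input from /zfshomes (hypothetical file)
./my_program input.dat > output.dat      # hypothetical program
cp output.dat ~/project/                 # copy results home; scratch is cleaned up after the job
</code>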
  
  
===== Our Queues =====
  
There are no scheduler resources for commercial software licenses. Only Stata has a limited 6-user license.
  
^Queue^Nr Of Nodes^Total GB Mem Per Node^Total Cores In Queue^Switch^Hosts^Notes^
  
^Queue^Nr Of Nodes^Total GB Mem Per Node^Job Slots In Queue^Switch^Hosts^Notes^
|  hp12  |  32  |  12  |  256  | gigabit ethernet  | n1-n32 |  CPU  |
|  mwgpu  |  5  |  256  |  120  | QDR infiniband  | n33-n37 |  GPU & CPU  |
|  mw256fd  |  8  |  256  |  192  | QDR infiniband  | n38-n45 |  CPU  |
|  amber128  |  1  |  128  |  24  | gigabit ethernet  | n78 |  GPU & CPU  |
|  exx96  |  12  |  96  |  432  | gigabit ethernet  | n79-n90 |  GPU & CPU  |
|  test  |  2  |  192  |  96  | gigabit ethernet  | n100-n101 |  GPU & CPU  |
|  mw256  |  6  |  256  |  672  | EDR infiniband  | n102-n107 |  CPU  |
  
Some guidelines for appropriate queue usage with detailed page links:
    * Be sure to use mpich3 for Amber
    * Priority access for Amber jobs till 10/01/2020
  * exx96 contains 4 RTX2080S per node
    * same setup as mwgpu queue
  * test contains 8 RTX5000 gpus
    * can be used for production runs
    * beware of preemptive events, checkpoint!
  * mw128, NFSoRDMA, bought with faculty startup monies
    * beware of preemptive events, checkpoint!
    * 6 compute nodes
    * Priority access for Sarah's lab till 4/1/2026

**NOTE**: we are migrating from Openlava to Slurm during summer 2022. All queues except ''hp12'' and ''mw256fd'' will be serviced by the ''cottontail2'' Slurm scheduler (a minimal command sketch follows the links below).

  * [[cluster:213|New Head Node]]
  * [[cluster:218|Getting Started with Slurm Guide]]
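A minimal sketch of checking and using both schedulers from a login node (command output and exact queue names may differ; the ''bsub'' line is an assumption based on standard Openlava usage):

<code bash>
# Slurm-served queues (partitions) on cottontail2
sinfo                 # list partitions and node states
squeue -u $USER       # show your pending and running jobs
# queues still served by Openlava (hp12, mw256fd) use bsub, e.g.
# bsub -q hp12 < run.sh
</code>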
  
**There are no wall time limits in our HPCC environment except for queue ''test''.** You are responsible for checkpointing though. Consult these pages, all nodes in all queues are DMTCP enabled (read [[cluster:190|DMTCP]]). Login nodes and storage nodes are on UPS but all compute nodes are on utility power. Crashes do happen, be prepared to restart your long running jobs.
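As a minimal DMTCP sketch (assuming the DMTCP tools are on your PATH on the compute node; see the DMTCP page above for the site-specific recipe):

<code bash>
# launch a hypothetical long-running program under DMTCP,
# writing a checkpoint image every hour (3600 seconds)
dmtcp_launch --interval 3600 ./my_long_job
# after a crash or reboot, restart from the latest checkpoint files
dmtcp_restart ckpt_*.dmtcp
</code>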
  
For a list of software installed consult [[cluster:73|Software List]] page, endless...

For a list of OpenHPC software installed consult [[cluster:215|Software List]] page
  
For details on all scratch spaces consult [[cluster:142|Scratch Spaces]] page
For HPCC acknowledgements consult [[cluster:53|Acknowledgement]] page
  
Sample scripts for job submissions (serial, array, parallel, forked and gpu) can be found at ''/zfshomes/hmeij/jobs/'', ''/zfshomes/hmeij/k20redo'' and ''/zfshomes/hmeij/slurm''
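For orientation, a minimal Slurm submission script might look like the sketch below (partition, resource values and program name are placeholders; consult the sample directories above for site-specific versions):

<code bash>
#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH --partition=test        # pick the queue/partition that fits the job
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --mem=16G
#SBATCH --output=%j.out         # %j expands to the job ID

./my_program                    # hypothetical program; submit with: sbatch myjob.sh
</code>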
  
From off-campus you need to VPN in first, download the GlobalProtect client at [[http://vpn.wesleyan.edu]]