  
This page will be maintained and provides the information users need to get started on the compute cluster. It is a merger of the old "brief description" page and the "queue description" page.

==== In General ====

HPCC maintains and regularly updates an extensive software stack, including provisioning tools, resource management, file transfer clients, development tools, a variety of scientific libraries, a variety of compilers (e.g., gcc/g++, OneAPI) and communication libraries (e.g., OpenMPI). The stack is primarily provided by OpenHPC (https://openhpc.community/). Many open source applications are custom compiled (about 100 or so, used by many academic disciplines). The HPCC website offers documentation to help users resolve technical issues they may encounter (https://dokuwiki.wesleyan.edu/doku.php?id=cluster:0). Additional technical support (and tutors) is provided by the Scientific Computing and Informatics Center (https://www.wesleyan.edu/scic/) and the Quantitative Analysis Center (https://www.wesleyan.edu/qac/).
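
Most of this stack is exposed through environment modules, as is typical for OpenHPC installs. Below is a minimal sketch of discovering and loading a toolchain; the specific module names (''gnu12'', ''openmpi4'') are assumptions and may differ on this cluster.

<code bash>
# list everything the module system currently offers
module avail

# load a compiler plus MPI toolchain; names are examples only,
# check "module avail" output for what is actually installed
module load gnu12 openmpi4

# show what is active in the current shell
module list
</code>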

ITS funds the system administrative support (0.75 FTE) of one ITS employee. Power and cooling are funded and maintained by Physical Plant. On an annual basis Academic Affairs contributes $25K and the HPCC users contribute $15K, which sets up a 4 year refresh cycle of $160K. On an annual basis Finance contributes up to $10K for maintenance (failed disks etc.; monies do not roll over, use it or lose it).

All HPCC hardware is located inside Wesleyan's ITS data center. Physical access is limited by swipe cards to certain ITS personnel. All head/login nodes (2 to 4 or so) are located on an internal subnet protected by Wesleyan's enterprise wide firewall. VPN is required from off campus. The internal HPCC network consists of two private subnets for the 100 or so compute nodes (one for the job scheduler and monitoring tools, one for data transfers and NFS mounts). Access to HPCC resources is based on local Linux accounts and groups.
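
From off campus, access is therefore a two step process: connect to the Wesleyan VPN first, then ssh to a login node with your local Linux account. A minimal sketch, assuming ''cottontail2'' resolves by that short hostname once you are on the campus network (substitute the fully qualified hostname if needed):

<code bash>
# off campus: start the Wesleyan VPN client first, then:
ssh your_username@cottontail2

# on campus the VPN step is not needed
</code>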
  
===== Description =====
  * primary login server ''cottontail2'' (Supermicro 1U), new Slurm scheduler, Rocky8, Warewulf, OpenHPC
  * zabbix and ganglia monitoring and alerting server ''hpcmon'' (Supermicro 1U), CentOS8
  * secondary login servers ''petaltail, swallowtail'' (HP blades, Rocky8)
  * scratch server ''greentail52'' (Supermicro 36+2) serving out /sanscratch, CentOS7, sandbox
  * backup Slurm test server ''sharptail2'' (Supermicro 2U), CentOS8, OpenHPC
  * 10 nodes with dual 12 core chips (Xeon Silver 4410Y CPU @ 3.9 GHz), Emerald Rapids Microway servers with a memory footprint of 256 GB (2,560 GB total, about 90 teraflops dpfp). These nodes hold four RTX4070Ti-Super gpus each. Known as the "desktop" racks. ''mwgpu256'', nodes n108-n117.
  
  * 1 node with dual twelve core chips (Xeon Silver 4520, 2.40 GHz) in an ASUS ESC4000-E11 2U rack server with a memory footprint of 512 GB (about 4 teraflops dpfp). This node holds four NVIDIA RTX PRO™ 6000 Blackwell Max-Q Workstation Edition gpus (96 GB memory footprint each) providing 440 teraflops (mixed mode). Known as the "Blackwell" gpu server, node n91, queue exx512, 48 job slots. [[https://www.techpowerup.com/gpu-specs/rtx-pro-6000-blackwell-max-q.c4273]]
  
All queues are available for job submissions via the cottontail2 login node. Some nodes are on Infiniband switches for parallel computational jobs (queues: mw256fd, hp12, mw256). Our total job slot count is roughly 2,672 with a physical core count of 1,336. Our total teraflops compute capacity is about 92 cpu side and 2,902 gpu side (mixed mode). Our total memory footprint is about 1,584 GB gpu side and 13,524 GB cpu side.
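
Jobs are submitted to these queues from cottontail2 with Slurm's ''sbatch''. A minimal CPU batch script sketch, assuming the queue names above map one-to-one onto Slurm partition names:

<code bash>
#!/bin/bash
#SBATCH --job-name=example
#SBATCH --partition=mw256        # queue/partition, see the table below
#SBATCH --nodes=1
#SBATCH --ntasks=8               # job slots requested
#SBATCH --mem=16G
#SBATCH --time=24:00:00
#SBATCH --output=example_%j.out  # %j expands to the job id

# replace with your actual application
srun ./my_program
</code>

Submit with ''sbatch myjob.sh'' and monitor with ''squeue -u $USER''.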

The home directory file system is provided (via NFS or IPoIB) by node M40HA (our file server) from a direct attached disk array. In total, 500 TB of /zfshomes disk space is accessible to the users. Node ''greentail52'' makes available 55 TB of scratch space at /sanscratch via NFS. In addition, all nodes provide local scratch space at /localscratch (excludes queue tinymem). The scheduler automatically makes directories in both scratch areas for each job (named after the JOBPID). Backup services for /zfshomes are provided via replication to an older X20HA TrueNAS/ZFS appliance. The M40HA TrueNAS/ZFS appliance performs daily snapshots with a retention window of 180 days. Some faculty & students have their home directories on node ''ringtail'' which provides 33 TB via /home33. Some faculty & students have their home directories on node ''ringtail2'' which provides 66 TB via /home66. Some faculty & students also have their own storage (2x 110 TB via /mindstore). Static content should be migrated to the Rstore platform.
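
Because the scheduler creates a per-job directory named after the job id in both scratch areas, a job script can stage its work there and copy results back to /zfshomes when done. A hedged sketch, assuming the directory is /sanscratch/$SLURM_JOB_ID and using placeholder file and program names:

<code bash>
#!/bin/bash
#SBATCH --job-name=scratch-demo
#SBATCH --partition=mw256
#SBATCH --ntasks=1

# per-job scratch directory (assumption: named after the job id);
# create it defensively in case it does not already exist
MYSCRATCH=/sanscratch/$SLURM_JOB_ID
mkdir -p "$MYSCRATCH"

# stage input, run in scratch, copy results back home
cp "$HOME"/project/input.dat "$MYSCRATCH"/
cd "$MYSCRATCH"
./my_program input.dat > output.dat
cp output.dat "$HOME"/project/

# note: scratch areas are not backed up and may be purged after the job
</code>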
  
  
===== Our Queues =====
  
There are no scheduler commercial software license resources. Only Stata has a limited 6-user license. Matlab and Mathematica now have "unlimited licenses".
  
  
|  test  |  2  |  192  |  96  | gigabit ethernet  | n100-n101 |  GPU & CPU  |
|  mw256  |    |  256  |  672  | EDR infiniband  | n102-n102 |  CPU  |
|  mwgpu256  |  10  |  256  |  480  | gigabit ethernet  | n108-n117 |  GPU & CPU  |
|  exx512  |  1  |  512  |  48  | gigabit ethernet  | n91 |  GPU & CPU  |
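
To see how these queues look from the Slurm side (available nodes, their state, and your own jobs), the standard Slurm query commands apply; a short sketch:

<code bash>
sinfo                 # list partitions/queues, their nodes and states
sinfo -p mwgpu256     # details for a single partition
squeue -u $USER       # your pending and running jobs
scancel 12345         # cancel a job by id (example id)
</code>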
  
Some guidelines for appropriate queue usage with detailed page links:
    * can be used for production runs
    * beware of preemptive events, checkpoint!
  * mw256, NFSoRDMA, bought with faculty startup monies
    * beware of preemptive events, checkpoint!
    * 6 compute nodes
    * Priority access for Sarah's lab till 4/1/2026
  * mwgpu256, contains 40 RTX4070Ti-Super gpus (see the GPU job sketch after this list)
    * same setup as exx96 queue
  * exx512, contains 4 NVIDIA RTX PRO™ 6000 Blackwell Max-Q Workstation Edition GPUs
    * beware of preemptive events, checkpoint!
    * Priority access for Antonio's lab till 10/7/2028
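
For the GPU queues (mwgpu256, exx512) a job must request GPUs explicitly or it will only be allocated CPU cores. A minimal sketch, assuming the GPUs are exposed through Slurm generic resources under the name ''gpu'':

<code bash>
#!/bin/bash
#SBATCH --job-name=gpu-demo
#SBATCH --partition=mwgpu256   # or exx512 for the Blackwell node
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --gres=gpu:1           # one GPU (assumption: gres type is "gpu")
#SBATCH --time=12:00:00

# confirm which GPU was assigned, then run the application (placeholder name)
nvidia-smi
srun ./my_gpu_program
</code>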
  
**NOTE**: we are migrating from Openlava to Slurm during summer 2022. All queues except ''hp12'' and ''mw256fd'' will be serviced by the ''cottontail2'' Slurm scheduler.
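
For users coming from the Openlava side, the day-to-day commands have direct Slurm counterparts; a rough translation sketch (options differ, check the man pages):

<code bash>
# Openlava / LSF command   ->  Slurm equivalent
#   bsub < job.sh          ->  sbatch job.sh
#   bjobs                  ->  squeue -u $USER
#   bkill <jobid>          ->  scancel <jobid>
#   bqueues                ->  sinfo
</code>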