===== Description =====
  
The High Performance Compute Cluster (HPCC) comprises several login nodes, all on our internal network (vlan 52, //wesleyan.edu//), so VPN is required for off-campus access as well as for students on campus.
  
  * server ''cottontail'' (Supermicro 4U), old scheduler openlava, CentOS6
  * primary login server ''cottontail2'' (Supermicro 1U), new slurm scheduler, Rocky8, Warewulf
  * zenoss monitoring and alerting server ''hpcmon'' (Supermicro 1U), CentOS6
  * secondary login server ''greentail52'' (Supermicro 36+2, 2U), serving out /sanscratch, CentOS7, sandbox
  * server ''sharptail'' (Supermicro 4U), /lvm_data (backup), /zfshomes replication, CentOS6
  * server ''sharptail2'' (Supermicro 2U), disaster recovery for off site (active users only), CentOS6
  * storage servers ''rstore4'' and ''rstore5'' (Supermicro 4U), replicated, Samba shares (2x 220T)
  * storage servers ''rstore6'' and ''rstore7'' (Supermicro 4U), replicated, Samba shares (2x 220T)
  * storage servers ''mstore0''/''mstore1'' (Supermicro 4U), replicated, mounted on all HPC nodes (2x 110T)
  * storage server ''hpcstore'' (TrueNAS, dual controller shelf and two storage shelves), /zfshomes (235T)
  
Several types of compute nodes are available via the scheduler:
  
  * All are running CentOS 6.10 or CentOS 7.7 (except ''cottontail2'' and nodes n100-n101, which run Rocky 8.5)
  * All are x86_64, Intel Xeon chips from 2006 onwards
  * All are on private networks (192.168.x.x and/or 10.10.x.x, no internet)
  * All mount /zfshomes (235T, TrueNAS/ZFS appliance, 2020) and /sanscratch (xfs, 55T)
  * All have local disks providing varying amounts of /localscratch (usually Raid0, no backup!)
  * Hyperthreading is on, but only 50% of logical cores are allocated via the scheduler
  
Compute node categories which usually align with queues:
  
  * 32 nodes with dual quad core chips (Xeon 5620, 2.4 Ghz) in HP blade 4U enclosures (SL2x170z G6) with memory footprint of 12 GB each (384 GB). This cluster has a compute capacity of 1.5 teraflops (measured using Linpack). Known as the HP cluster, or the nodes n1-n32, queue hp12, 256 job slots.
  
  * 5 nodes with dual eight core chips (Xeon E5-2660, 2.2 Ghz) in ASUS 2U rack servers with a memory footprint of 256 GB each (1,280 GB). Nodes also contain four K20 Tesla GPUs each, 2,500 cores/gpu (10,000 gpu cores per node) with GPU memory footprint of 5 GB (20 GB). This cluster has a compute capacity of 23.40 teraflops double precision or 70.40 teraflops single precision on GPU side and 2.9 teraflops on cpu side. Known as the Microway GPU cluster, or the nodes n33-n37, mwgpu (120 job slots). <del>Old queue mw256 merged in.</del>
  * 18 nodes with dual twelve core chips (Xeon E5-2650 v4, 2.2 Ghz) in Supermicro 1U rack servers with a memory footprint of 128 GB each (2,304 GB). This cluster has a compute capacity of 14.3 teraflops (estimated). Known as the Microway "Carlos" CPU cluster, or nodes n60-n77, queue mw128, 648 job slots.
  
  * 1 node with dual eight core chips (Xeon E5-2620 v4, 2.10 Ghz) in a Supermicro 1U rack server with a memory footprint of 128 GB. This node has four GTX1080Ti gpus (32 GB gpu memory footprint). Known as the "amber128" queue, node n78, 24 job slots.
  
  * 12 nodes with dual twelve core chips (Xeon Silver 4214, 2.20 Ghz) in ASUS ESC4000G4 2U rack servers with a memory footprint of 96 GB each (1,152 GB, about 20 teraflops dpfp). These nodes each have four RTX2080S gpus (32 GB gpu memory footprint) providing 702 teraflops (mixed mode). Known as the "rtx2080" rack, nodes n79-n90, queue exx96, 432 job slots.
  
  * 2 nodes with dual twelve core chips (Xeon 4214R “Cascade Lake Refresh”, 2.4 GHz), Supermicro 1U servers with a memory footprint of 192 GB. These nodes hold four RTX5000 gpus each. Known as the slurm test nodes, serviced by ''cottontail2'' (dual Xeon 5222 “Cascade Lake-SP”, 3.8 GHz 4-core, with a memory footprint of 96 GB). ''test'' queue, nodes n100-n101.
  
  * 6 nodes with dual 28 core chips (Xeon Gold 6330 'Ice Lake-SP' @ 2.00 GHz), Supermicro 1U servers with a memory footprint of 256 GB each (1,536 GB, about 27 teraflops dpfp). Known as the "astro" rack, ''mw256'' queue, nodes n102-n107. Storage server ''astrostore'' serves these nodes via NFSoRDMA (EDR Infiniband), about 164 TB.

All queues are available for job submissions via all login nodes. Some nodes are on Infiniband switches for parallel computational jobs (queues: mw256fd, hp12, mw256). Our total job slot count is roughly 2,144 with a physical core count of 1,480. Our total teraflops compute capacity is about 58 cpu side, 25 gpu side (double precision floating point) and 702 gpu side (mixed mode). Our total memory footprint is about 560 GB gpu side, 10,452 GB cpu side.

The home directory file system is provided (via NFS or IPoIB) by the node ''hpcstore'' (our file server) from a direct attached disk array. In total, 235 TB of /zfshomes disk space is accessible to the users. Node ''greentail52'' makes available 55 TB of scratch space at /sanscratch via NFS. In addition all nodes provide local scratch space at /localscratch (excludes queue tinymem). The scheduler automatically makes directories in both these scratch areas for each job (named after JOBPID). Backup services for /zfshomes are provided via disk-to-disk replication from node ''hpcstore'' to node ''sharptail'' disk arrays. The TrueNAS/ZFS appliance performs daily snapshots with a retention window of 180 days. Some faculty & students have their home directories on node ''ringtail'', which provides 33 TB via /home33. Some faculty & students have their home directories on node ''ringtail2'', which provides 66 TB via /home33. Some faculty & students also have their own storage (2x 110 TB via /mindstore). Static content should be migrated to the Rstore platform.
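For illustration, a minimal batch script sketch showing how the per-job scratch directories are typically used (program, file names and the chosen queue are placeholders; the Slurm variable is ''SLURM_JOB_ID'', under Openlava it would be ''LSB_JOBID''):

<code bash>
#!/bin/bash
#SBATCH --job-name=scratch-demo
#SBATCH --partition=exx96            # pick a queue from the table below
#SBATCH --ntasks=1

# the scheduler pre-creates these per-job directories (named after the job id)
MYSANSCRATCH=/sanscratch/$SLURM_JOB_ID
MYLOCALSCRATCH=/localscratch/$SLURM_JOB_ID

# stage input from /zfshomes, run in scratch, copy results back before the job ends
cd $MYSANSCRATCH
cp $HOME/project/input.dat .
./my_program input.dat > output.dat
cp output.dat $HOME/project/
</code>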
  
  
===== Our Queues =====
  
There are no scheduler license resources for commercial software. Only Stata has a limited 6-user license.
  
^Queue^Nr Of Nodes^Total GB Mem Per Node^Total Cores In Queue^Switch^Hosts^Notes^
|  stata  |  //na//  |  //na//  |  //na//  |  QDR Infiniband  |  //any host//  |  6 licenses  |
  
Note: Matlab and Mathematica now have "unlimited licenses".
  
^Queue^Nr Of Nodes^Total GB Mem Per Node^Job Slots In Queue^Switch^Hosts^Notes^
|  hp12  |  32  |  12  |  256  |  gigabit ethernet  |  n1-n32  |  CPU  |
|  mwgpu  |  5  |  256  |  120  |  QDR infiniband  |  n33-n37  |  GPU & CPU  |
|  mw256fd  |  8  |  256  |  192  |  QDR infiniband  |  n38-n45  |  CPU  |
|  tinymem  |  14  |  32  |  448  |  gigabit ethernet  |  n39-n59  |  CPU  |
|  mw128  |  18  |  128  |  648  |  gigabit ethernet  |  n60-n77  |  CPU  |
|  amber128  |  1  |  128  |  24  |  gigabit ethernet  |  n78  |  GPU & CPU  |
|  exx96  |  12  |  96  |  432  |  gigabit ethernet  |  n79-n90  |  GPU & CPU  |
|  test  |  2  |  192  |  96  |  gigabit ethernet  |  n100-n101  |  GPU & CPU  |
|  mw256  |  6  |  256  |  672  |  EDR infiniband  |  n102-n107  |  CPU  |
  
Some guidelines for appropriate queue usage with detailed page links:
  * hp12 is the default queue
    * for processing lots of small to medium memory footprint jobs
  * mwgpu is for GPU (K20) enabled software primarily (Amber, Lammps, NAMD, Gromacs, Matlab, Mathematica)
    * be sure to reserve one or more job slots for each GPU used [[cluster:192|EXX96]] (see the sketch after this list)
    * be sure to use the correct wrapper script to set up mpirun from mvapich2, mpich3 or openmpi
  * mw256fd is for jobs requiring large memory access (up to 24 job slots per node)
    * or requiring lots of threads (job slots) confined to a single node (Gaussian, AutoDock)
  * mw128 (bought with faculty startup funds) tailored for Gaussian jobs
    * About 2TB /localscratch (Raid 10) on each node
    * Priority access for Carlos' group till 07/01/2020
  * amber128 (donated hardware) tailored for Amber16 jobs
    * Be sure to use mpich3 for Amber
    * Priority access for Amber jobs till 10/01/2020
  * exx96 contains 4 RTX2080S gpus per node
    * same setup as mwgpu queue
  * test contains RTX5000 gpus
    * can be used for production runs
    * beware of preemptive events, checkpoint!
  * mw256 (the "astro" rack, NFSoRDMA), bought with faculty startup monies
    * beware of preemptive events, checkpoint!
    * 6 compute nodes
    * Priority access for Sarah's lab till 4/1/2026
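To illustrate the "reserve a job slot per GPU" guideline above, here is a minimal Slurm sketch for a GPU queue (the partition, gres syntax and program name are assumptions based on a standard Slurm gres setup; consult [[cluster:192|EXX96]] and the site wrapper scripts for the authoritative recipe):

<code bash>
#!/bin/bash
#SBATCH --job-name=gpu-demo
#SBATCH --partition=exx96        # a GPU queue from the table above
#SBATCH --ntasks=1               # one job slot ...
#SBATCH --gres=gpu:1             # ... per GPU requested

# CUDA_VISIBLE_DEVICES is typically set by the scheduler when a GPU is granted
echo "Host $(hostname), GPU(s): $CUDA_VISIBLE_DEVICES"
./my_gpu_program input.dat
</code>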
  
**NOTE**: we are migrating from Openlava to Slurm during summer 2022. All queues except ''hp12'' and ''mw256fd'' will be serviced by the ''cottontail2'' Slurm scheduler.
  
  * [[cluster:213|New Head Node]]
  * [[cluster:218|Getting Started with Slurm Guide]]
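For reference, the basic Slurm commands look like this (generic Slurm usage, not site-specific):

<code bash>
sbatch myjob.sh          # submit a batch script
squeue -u $USER          # list your pending/running jobs
scontrol show job JOBID  # detailed info on one job
scancel JOBID            # cancel a job
sinfo                    # show partitions (queues) and node states
</code>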

**There are no wall time limits in our HPCC environment except for queue ''test''.** You are responsible for checkpointing though. Consult these pages; all nodes in all queues are DMTCP enabled (read [[cluster:190|DMTCP]]). Login nodes and storage nodes are on UPS but all compute nodes are on utility power. Crashes do happen, so be prepared to restart your long running jobs.
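A minimal DMTCP sketch, assuming the standard DMTCP tools are in your path (the program name and interval are illustrative; see [[cluster:190|DMTCP]] for the site-specific recipe):

<code bash>
# run the application under DMTCP control, writing a checkpoint every 4 hours
dmtcp_launch --interval 14400 ./my_long_program input.dat

# after a crash or reboot, restart from the newest checkpoint images;
# DMTCP writes ckpt_*.dmtcp files and a dmtcp_restart_script.sh in the working directory
./dmtcp_restart_script.sh
</code>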
  
===== Other Stuff =====
Home directory policy and Rstore storage options: [[cluster:136|HomeDir and Storage Options]]
  
Checkpointing is supported in all queues; how it works is described on the [[cluster:190|DMTCP]] page

For a list of software installed consult the [[cluster:73|Software List]] page, endless...
  
For a list of OpenHPC software installed consult the [[cluster:215|Software List]] page
  
For details on all scratch spaces consult the [[cluster:142|Scratch Spaces]] page
Line 115: Line 130:
For HPCC acknowledgements consult the [[cluster:53|Acknowledgement]] page
  
Sample scripts for job submissions (serial, array, parallel, forked and gpu) can be found at ''/zfshomes/hmeij/jobs/'', ''/zfshomes/hmeij/k20redo'' and ''/zfshomes/hmeij/slurm''
  
From off-campus you need to VPN in first; download the GlobalProtect client at [[http://vpn.wesleyan.edu]]
  
  