===== Description =====

The High Performance Compute Cluster (HPCC) is comprised of several login nodes (all are on our internal network (vlan 52) //wesleyan.edu//, so VPN is required for off campus access as well as for students on campus)

  * server ''cottontail'' (Supermicro 4U), old Openlava scheduler, CentOS6
  * primary login server ''cottontail2'' (Supermicro 1U), new Slurm scheduler, Rocky8, Warewulf, OpenHPC
  * zabbix and ganglia monitoring and alerting server ''hpcmon'' (Supermicro 1U), CentOS8
  * secondary login servers ''petaltail, swallowtail'' (HP blades, CentOS7)
  * scratch server ''greentail52'' (Supermicro 36+2) serving out /sanscratch, CentOS7, sandbox
  * backup Slurm test server ''sharptail2'' (Supermicro 2U), CentOS8, OpenHPC
  * storage servers ''rstore0'' and ''rstore1'' (Supermicro 4U), replicated, Samba shares (2x 440T)
  * storage servers ''rstore2'' and ''rstore3'' (Supermicro 4U), replicated, Samba shares (2x 440T)
  * storage servers ''mstore0/mstore1'' (Supermicro 4U), replicated, mounted on all HPC nodes (2x 110T)
  * storage server ''M40HA'' TrueNAS, storage appliance for home directories, /zfshomes (500T)
  * storage server ''X20HA'' TrueNAS, replication target for ''M40HA'' (300T)

Several types of compute nodes are available via the scheduler:

  * All are running CentOS 8.10 (except some older hardware on CentOS 6 or 7)
  * All are x86_64, Intel Xeon chips with OpenHPC compile environment 2.x
  * All are on private networks (192.168.x.x and/or 10.10.x.x, no internet)
  * All mount /zfshomes (500T TrueNAS/ZFS appliance 2024) and /sanscratch (xfs, 55T)
  * All have local disks providing varying amounts of /localscratch (usually Raid0, no backup!)
  * Hyperthreading is on (can be allocated via scheduler)

Compute node categories which usually align with queues:

  * 12 nodes with dual twelve core chips (Xeon Silver 4214, 2.20 GHz) in ASUS ESC4000G4 2U rack servers with a memory footprint of 96 GB (1,152 GB, about 20 teraflops dpfp). These nodes each have four RTX2080S (32 GB memory footprint) gpus providing 702 teraflops (mixed mode). Known as the "rtx2080" rack, nodes n79-n90, queue exx96, 432 job slots.

  * 2 nodes with dual twelve core chips (Xeon 4214R “Cascade Lake Refresh”, 2.4 GHz), Supermicro 1U servers with a memory footprint of 192 GB. These nodes hold four RTX5000 gpus each. Known as the "slurm test" nodes. Serviced by ''cottontail2'', dual Xeon 5222 “Cascade Lake-SP” 3.8 GHz 4-core with a memory footprint of 96 GB. ''test'' queue, nodes n100-n101.

  * 6 nodes with dual 28 core chips (Xeon Gold 'Ice Lake-SP' 6330 CPU @ 2.00GHz), Supermicro 1U servers with a memory footprint of 256 GB (1,536 GB, about 27 teraflops dpfp). Known as the "astro" rack. ''mw256'' queue, nodes n102-n107. Storage server "astrostore" serves the nodes via NFSoRDMA (EDR Infiniband). About 164 TB.

  * 10 nodes with dual 12 core chips (Xeon Silver 4410Y CPU @ 3.9 GHz), Emerald Rapids Microway servers with a memory footprint of 256 GB (2,560 GB, about 90 teraflops dpfp). These nodes hold four RTX4070Ti-Super gpus each. Known as the "desktop" racks. ''mwgpu256'' queue, nodes n108-n117.

All queues are available for job submissions via the ''cottontail2'' login node. Some nodes are on Infiniband switches for parallel computational jobs (queues: mw256fd, hp12, mw256). Our total job slot count is roughly 2,624 with our physical core count 1,312. Our total teraflops compute capacity is about 88 cpu side and 2,462 gpu side (mixed mode). Our total memory footprint is about 1,200 GB gpu side and 13,012 GB cpu side.
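
Below is a minimal batch script sketch for submitting a job to the Slurm scheduler on ''cottontail2''. The partition name, resource values and file names are placeholder assumptions; consult the [[cluster:218|Getting Started with Slurm Guide]] for the site-specific settings.

<code bash>
#!/bin/bash
# minimal serial job sketch -- partition and resource values are examples only
#SBATCH --job-name=hello
#SBATCH --partition=mw256          # assumption: partition names match the queue names below
#SBATCH --ntasks=1                 # one job slot
#SBATCH --mem=1G
#SBATCH --output=hello_%j.out      # %j expands to the job ID

echo "running on $(hostname) as job $SLURM_JOB_ID"
</code>

Save it as, for example, ''hello.sub'', submit with ''sbatch hello.sub'' and monitor with ''squeue -u $USER''.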

The home directory file system is provided (via NFS or IPoIB) by the node ''M40HA'' (our file server) from a direct attached disk array. In total, 500 TB of /zfshomes disk space is accessible to the users. Node ''greentail52'' makes available 55 TB of scratch space at /sanscratch via NFS. In addition, all nodes provide local scratch space at /localscratch (excludes queue tinymem). The scheduler automatically makes directories in both these scratch areas for each job (named after the job ID). Backup services for /zfshomes are provided via replication to an older X20HA TrueNAS/ZFS appliance. The M40HA TrueNAS/ZFS appliance performs daily snapshots with a retention window of 180 days. Some faculty & students have their home directories on node ''ringtail'', which provides 33 TB via /home33. Some faculty & students have their home directories on node ''ringtail2'', which provides 66 TB via /home66. Some faculty & students also have their own storage (2x 110 TB via /mindstore). Static content should be migrated to the Rstore platform.
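
As a sketch of how those scheduler-created scratch directories might be used in a job (assuming the Slurm-side directories are named after ''$SLURM_JOB_ID''; the program and input file are hypothetical):

<code bash>
#!/bin/bash
#SBATCH --job-name=scratch-demo
#SBATCH --ntasks=1

# the scheduler pre-creates per-job scratch directories named after the job id:
#   /sanscratch/$SLURM_JOB_ID     shared scratch served by greentail52
#   /localscratch/$SLURM_JOB_ID   node-local scratch (Raid0, no backup!)
LOCAL=/localscratch/$SLURM_JOB_ID

cp /zfshomes/$USER/input.dat $LOCAL/      # stage input out of the home file system
cd $LOCAL
./my_program input.dat > output.dat       # hypothetical application
cp output.dat /zfshomes/$USER/            # copy results back before the job ends
</code>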

===== Our Queues =====

There are no scheduler commercial software license resources. Only stata has a limited 6 user license. Matlab and Mathematica now have "unlimited licenses".

^Queue^Nr Of Nodes^Total GB Mem Per Node^Job Slots In Queue^Switch^Hosts^Notes^
| hp12 | 32 | 12 | 256 | gigabit ethernet | n1-n32 | CPU |
| mwgpu | 5 | 256 | 120 | QDR infiniband | n33-n37 | GPU & CPU |
| mw256fd | 8 | 256 | 192 | QDR infiniband | n38-n45 | CPU |
| tinymem | 14 | 32 | 448 | gigabit ethernet | n39-n59 | CPU |
| mw128 | 18 | 128 | 648 | gigabit ethernet | n60-n77 | CPU |
| amber128 | 1 | 128 | 24 | gigabit ethernet | n78 | GPU & CPU |
| exx96 | 12 | 96 | 432 | gigabit ethernet | n79-n90 | GPU & CPU |
| test | 2 | 192 | 96 | gigabit ethernet | n100-n101 | GPU & CPU |
| mw256 | 6 | 256 | 672 | EDR infiniband | n102-n107 | CPU |
| mwgpu256 | 10 | 256 | 480 | gigabit ethernet | n108-n117 | GPU & CPU |

Some guidelines for appropriate queue usage with detailed page links:
  * hp12 is the default queue
    * for processing lots of small to medium memory footprint jobs
  * mwgpu is for GPU (K20) enabled software primarily (Amber, Lammps, NAMD, Gromacs, Matlab, Mathematica)
    * be sure to reserve one or more job slots for each GPU used [[cluster:192|EXX96]] (see the sketch after this list)
    * be sure to use the correct wrapper script to set up mpirun from mvapich2, mpich3 or openmpi
  * mw256fd are for jobs requiring large memory access (up to 24 job slots per node)
    * or requiring lots of threads (job slots) confined to a single node (Gaussian, AutoDock)
  * mw128 (bought with faculty startup funds) tailored for Gaussian jobs
    * About 2TB /localscratch (Raid 10) on each node
    * Priority access for Carlos' group till 07/01/2020
  * amber128 (donated hardware) tailored for Amber16 jobs
    * Be sure to use mpich3 for Amber
    * Priority access for Amber jobs till 10/01/2020
  * exx96 contains 4 RTX2080S per node
    * same setup as mwgpu queue
  * test contains 8 RTX5000 gpus
    * can be used for production runs
    * beware of preemptive events, checkpoint!
  * mw256, NFSoRDMA, bought with faculty startup monies
    * beware of preemptive events, checkpoint!
    * 6 compute nodes
    * Priority access for Sarah's lab till 4/1/2026
  * mwgpu256, contains 40 RTX4070Ti-Super gpus
    * same setup as exx96 queue
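
A sketch of a GPU job that reserves a job slot alongside the GPU, as advised above. The partition name and the ''--gres'' string are assumptions about the local Slurm configuration; the [[cluster:192|EXX96]] page has the supported recipe.

<code bash>
#!/bin/bash
# GPU job sketch -- partition and gres values are assumptions, not the site's exact settings
#SBATCH --job-name=gpu-demo
#SBATCH --partition=exx96          # example GPU queue
#SBATCH --ntasks=1                 # at least one job slot per GPU used
#SBATCH --gres=gpu:1               # request one GPU on the node
#SBATCH --mem=8G

echo "GPU(s) assigned to this job: $CUDA_VISIBLE_DEVICES"
nvidia-smi                         # confirm which GPU was allocated
# ./my_gpu_app ...                 # hypothetical application
</code>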

**NOTE**: we are migrating from Openlava to Slurm during summer 2022. All queues except ''hp12'' and ''mw256fd'' will be serviced by the ''cottontail2'' Slurm scheduler.

  * [[cluster:213|New Head Node]]
  * [[cluster:218|Getting Started with Slurm Guide]]

**There are no wall time limits in our HPCC environment except for queue ''test''.** You are responsible for checkpointing though. Consult these pages; all nodes in all queues are DMTCP enabled (read [[cluster:190|DMTCP]]). Login nodes and storage nodes are on UPS but all compute nodes are on utility power. Crashes do happen, be prepared to restart your long running jobs.
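
As a sketch of that checkpoint/restart workflow with DMTCP (the interval, program name and checkpoint file pattern are examples only; see the [[cluster:190|DMTCP]] page for the supported recipe):

<code bash>
#!/bin/bash
#SBATCH --job-name=dmtcp-demo
#SBATCH --ntasks=1

# first submission: run the (hypothetical) long job under DMTCP,
# writing a checkpoint image every hour
dmtcp_launch --interval 3600 ./my_long_job

# after a crash, resubmit a script that restarts from the newest checkpoint image:
# dmtcp_restart ckpt_*.dmtcp
</code>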

===== Other Stuff =====
Checkpointing is supported in all queues; how it works is described on the [[cluster:190|DMTCP]] page

For a list of software installed consult the [[cluster:73|Software List]] page, endless...

For a list of OpenHPC software installed consult the [[cluster:215|Software List]] page

For details on all scratch spaces consult the [[cluster:142|Scratch Spaces]] page
For HPCC acknowledgements consult the [[cluster:53|Acknowledgement]] page

Sample scripts for job submissions (serial, array, parallel, forked and gpu) can be found at ''/zfshomes/hmeij/jobs/'', ''/zfshomes/hmeij/k20redo'' and ''/zfshomes/hmeij/slurm''

From off-campus you need to VPN in first, download the GlobalProtect client at [[http://vpn.wesleyan.edu]]