===== Description =====
The High Performance Compute Cluster (HPCC) comprises several login and support nodes, all on our internal network (vlan 52):

  * primary login node ''cottontail2''
  * zabbix and ganglia monitoring and alerting servers
  * secondary login node
  * scratch server
  * backup Slurm test server
  * several storage servers
Several types of compute nodes are available via the scheduler:

  * All are running Linux
  * All are x86_64, Intel Xeon chips with OpenHPC compile environment 2.x
  * All are on private networks (192.168.x.x and/or 10.10.x.x, no internet access)
  * All mount /zfshomes and /sanscratch
  * All have local disks providing varying amounts of local scratch space
  * Hyperthreading is on (logical cores can be allocated); a quick way to inspect a node is sketched below
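A quick way to confirm these properties from a shell on a compute node is sketched below. The partition name (hp12, the default queue) and the mount points are taken from this page; the interactive-allocation syntax assumes the Slurm setup described under Our Queues.

<code bash>
# Request a short interactive shell on a compute node (Slurm; partition name
# "hp12" is the default queue listed below -- adjust as needed).
srun --partition=hp12 --ntasks=1 --pty /bin/bash

# Inspect the properties described above.
uname -m                                     # architecture: should report x86_64
cat /etc/os-release                          # Linux distribution the node runs
lscpu | grep -E 'Model name|Socket|Thread'   # Xeon model, sockets, hyperthreading
df -h /zfshomes /sanscratch                  # NFS / IPoIB mounted file systems
</code>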
Compute node categories which usually align with queues:
  * 18 nodes with dual twelve core chips (Xeon E5-2650 v4, 2.2 GHz) in Supermicro 1U rack servers with a memory footprint of 128 GB each (2,304 GB). This cluster has a compute capacity of 14.3 teraflops (estimated). Known as the Microway nodes (queue mw128).
  * 1 node with dual eight core chips (Xeon E5-2620 v4, 2.10 GHz) in a Supermicro 1U rack server with a memory footprint of 128 GB. This node has four GTX1080Ti gpus (queue amber128).
  * 12 nodes with dual twelve core chips (Xeon Silver 4214, 2.20 GHz) in ASUS ESC4000G4 2U rack servers. These nodes hold four RTX2080S gpus each (queue exx96).
  * 2 nodes with dual twelve core chips (Xeon 4214R “Cascade Lake Refresh”, 2.4 GHz) in Supermicro 1U servers. These two nodes hold the eight RTX5000 gpus of the test queue.
  * 6 nodes with dual 28 core chips (Xeon Gold 'Ice Lake-SP') with a memory footprint of 256 GB each (queue mw256).
  * 10 nodes with dual 12 core chips (Xeon Silver 4410Y, 3.9 GHz), Emerald Rapids Microway servers with a memory footprint of 256 GB each (2,560 GB, about 90 teraflops dpfp). These nodes hold four RTX4070Ti-Super gpus each. Known as the "mwgpu256" nodes.
All queues are available for job submissions via the cottontail2 login node. Some nodes are on Infiniband switches for parallel computational jobs (queues: mw256fd, hp12, mw256).

Home directory file systems (/zfshomes) are provided via NFS or IPoIB.
===== Our Queues =====

There are no commercial software license resources managed by the scheduler; only Stata has a limited 6-user license. Current per-queue limits can always be queried from the scheduler itself (see the sketch after the table below).
^Queue^Nr Of Nodes^Total GB Mem Per Node^Job Slots In Queue^Switch^Hosts^Notes^
| hp12 | | | | | | |
| mwgpu | | | | | | |
| mw256fd | | | | | | |
| tinymem | | | | | | |
| mw128 | | | | | | |
| amber128 | | | | | | |
| exx96 | | | | | | |
| test | | | | | | |
| mw256 | | | | | | |
| mwgpu256 | | | | | | |
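The per-queue details (node counts, memory, job slots and hosts) change over time, so it is best to query them live from the scheduler. A minimal sketch, assuming the queues above are exposed as Slurm partitions (mwgpu256 is used as the example partition name):

<code bash>
# One line per partition: name, node count, CPUs and memory (MB) per node,
# generic resources (gpus), time limit and current state.
sinfo -o "%P %D %c %m %G %l %T"

# Per-node view of a single partition, e.g. mwgpu256
sinfo -p mwgpu256 -N -o "%N %c %m %G"

# Full configuration of one partition
scontrol show partition mwgpu256
</code>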
Some guidelines for appropriate queue usage with detailed page links:
  * hp12 is the default queue
    * for processing lots of small to medium memory footprint jobs
  * mwgpu is for GPU (K20) enabled software primarily (Amber, Lammps, NAMD, Gromacs, Matlab, Mathematica)
    * be sure to reserve one or more job slots for each GPU used [[cluster:192|EXX96]] (see the job script sketch after this list)
    * be sure to use the correct wrapper script to set up mpirun from mvapich2, mpich3 or openmpi
  * mw256fd is for jobs requiring a large memory footprint
    * or requiring lots of threads (job slots) confined to a single node (Gaussian, Autodock)
  * mw128 (bought with faculty startup funds) tailored for Gaussian jobs
    * About 2TB of local scratch space
    * Priority access for Carlos' group
  * amber128 (donated hardware) tailored for Amber16 jobs
    * Be sure to use mpich3 for Amber
    * Priority access for Amber jobs till 10/
  * exx96 contains 4 RTX2080S gpus per node
    * same setup as mwgpu queue
  * test contains 8 RTX5000 gpus
    * can be used for production runs
    * beware of preemptive events, checkpoint!
  * mw256, NFSoRDMA, bought with faculty startup monies
    * beware
    * 6 compute nodes
    * Priority access for Sarah's group
  * mwgpu256, contains 40 RTX4070Ti-Super gpus
    * same setup as exx96 queue
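A minimal batch script for the GPU queues is sketched below. It only illustrates the points above (reserve a job slot per GPU and request the GPU explicitly); the partition, gres name, module and program names are assumptions, so check the Getting Started with Slurm Guide and the sample scripts directory for the site-specific versions.

<code bash>
#!/bin/bash
#SBATCH --job-name=gpu_test
#SBATCH --partition=exx96          # or mwgpu, amber128, test, mwgpu256 ...
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1          # at least one job slot per GPU used
#SBATCH --gres=gpu:1               # request one GPU on the node
#SBATCH --output=%x_%j.out

# Load the application/MPI environment (module names are placeholders); use the
# site wrapper scripts to set up mpirun for mvapich2, mpich3 or openmpi.
# module load amber

srun ./my_gpu_program
</code>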
**NOTE**: we migrated from OpenLava to Slurm during summer 2022. All queues are now served by the Slurm scheduler; consult these pages (a few basic commands are sketched below):

  * [[cluster:213|New Head Node]]
  * [[cluster:218|Getting Started with Slurm Guide]]
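For reference, the basic Slurm submit/monitor cycle looks roughly like this (standard Slurm commands; the script name and job id are placeholders):

<code bash>
sbatch myjob.sh              # submit a batch script, prints the job id
squeue -u $USER              # list your pending and running jobs
scontrol show job <jobid>    # detailed state of a single job
scancel <jobid>              # cancel a job
sacct -j <jobid>             # accounting information once the job has finished
</code>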
**There are no wall time limits in our HPCC environment.**
===== Other Stuff =====
Home directory policy and Rstore storage options [[cluster:
Checkpointing is supported in all queues; how it works is described on the [[cluster:190|DMTCP]] page.
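Consult the DMTCP page for the supported recipe; as a rough sketch of the idea (the interval, checkpoint directory and program name are placeholders):

<code bash>
# Start the program under DMTCP control, writing a checkpoint every hour
# into the chosen checkpoint directory.
dmtcp_launch --interval 3600 --ckptdir $HOME/ckpt ./my_long_job

# After a crash or preemption, restart from the latest checkpoint using the
# restart script DMTCP writes next to the checkpoint images.
$HOME/ckpt/dmtcp_restart_script.sh
</code>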
For a list of software installed consult the [[cluster:

For a list of OpenHPC
For details on all scratch spaces consult the [[cluster:
For HPCC acknowledgements consult the [[cluster:
Sample scripts for job submissions (serial, array, parallel, forked and gpu) can be found at ''/
From off-campus you need to VPN in first; download the GlobalProtect client.