===== Description =====
The High Performance Compute Cluster (HPCC) comprises several login and service nodes, all on our internal network (vlan 52):

  * primary login node ''
  * secondary login server ''
  * sandbox node ''
  * zenoss monitoring and alerting server ''
  * storage servers ''
Several types of compute nodes are available via the scheduler:

  * All are running Linux
  * All are x86_64, Intel Xeon chips from 2006 onwards
  * All are on private networks (192.168.x.x and/or 10.10.x.x, no internet access)
  * All mount /zfshomes and /sanscratch
  * All have local disks providing varying amounts of /
  * Hyperthreading is on, but only 50% of the logical cores are allocated
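
The core accounting can be verified on any node; a minimal sketch, assuming only standard Linux tooling (''lscpu'' from util-linux):

<code bash>
# compare physical cores to logical (hyperthreaded) cores on a node
lscpu | egrep 'Socket|Core|Thread'
# "Thread(s) per core: 2" means hyperthreading is on; the scheduler
# hands out only half of the logical cores as job slots
</code>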
Compute node categories which usually align with queues:

  * 32 nodes with dual quad core chips (Xeon 5620, 2.4 GHz) in HP blade 4U enclosures (SL2x170z G6) with a memory footprint of 12 GB each (384 GB total). This cluster has a compute capacity of 1.5 teraflops (measured using Linpack). Known as the HP cluster, or the nodes n1-n32, queue hp12, 256 job slots.
  * 5 nodes with dual eight core chips (Xeon E5-2660, 2.2 GHz) in ASUS 2U rack servers with a memory footprint of 256 GB each (1,280 GB total). Nodes also contain four Tesla K20 GPUs each, 2,500 cores per GPU (10,000 GPU cores per node) with a GPU memory footprint of 5 GB each (20 GB). This cluster has a compute capacity of 23.40 teraflops double precision or 70.40 teraflops single precision on the GPU side and 2.9 teraflops on the CPU side. Known as the Microway GPU cluster, or the nodes n33-n37, queue mwgpu, 120 job slots.
  * 18 nodes with dual twelve core chips (Xeon E5-2650 v4, 2.2 GHz) in Supermicro 1U rack servers with a memory footprint of 128 GB each (2,304 GB total). This cluster has a compute capacity of 14.3 teraflops (estimated). Known as the Microway cluster, queue mw128.
  * 1 node with dual eight core chips (Xeon E5-2620 v4, 2.10 GHz) in a Supermicro 1U rack server with a memory footprint of 128 GB. Known as the amber128 node, queue amber128.
  * 12 nodes with dual twelve core chips (Xeon Silver 4214, 2.20 GHz) in ASUS ESC4000G4 2U rack servers with a memory footprint of 96 GB each (1,152 GB total, about 20 teraflops). Known as the exx96 cluster, queue exx96.
  * 2 nodes with dual twelve core chips (Xeon 4214R "Cascade Lake Refresh", 2.4 GHz) in Supermicro 1U rack servers with a memory footprint
All queues are available for job submissions via all login nodes. Some nodes are on Infiniband switches for parallel computational jobs (queues: mw256fd, hp12).

The home directory file system is provided (via NFS or IPoIB) by the node ''
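
To confirm on any node where home directories are served from, a quick sketch assuming standard Linux tooling:

<code bash>
# show the serving host and capacity of the home directory file system
df -h /zfshomes
# show the NFS mount source and options (protocol, transport)
mount | grep zfshomes
</code>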

Two (old) Rstore storage servers each provide about 104 TB of usable backup space, which is not mounted on the compute nodes. Each Rstore server'
===== Our Queues =====
Commercial software has its own queue, limited by available licenses. There are no scheduler license resources; just queue jobs up in the appropriate queue.
^Queue^Nr Of Nodes^Total GB Mem Per Node^Total Cores In Queue^Switch^Hosts^Notes^
| stata |  |  |  |  |  |  |

Note: Matlab and Mathematica now have "
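
License-limited jobs are simply queued and dispatch as licenses free up; a hedged sketch, assuming the OpenLava/LSF-style ''bsub'' front end referenced elsewhere on this wiki (queue name from the table above, Stata's own batch flags):

<code bash>
# submit a batch Stata job to the license-limited stata queue;
# it will PEND until a job slot (license) is available
bsub -q stata -J mymodel -o out.%J stata -b do mymodel.do
</code>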
^Queue^Nr Of Nodes^Total GB Mem Per Node^Job Slots In Queue^Switch^Hosts^Notes^
| hp12 | 32 | 12 | 256 | infiniband | n1-n32 |  |
| mwgpu | 5 | 256 | 120 |  | n33-n37 | K20 GPUs |
| mw256fd |  |  |  | infiniband |  |  |
| tinymem |  |  |  |  |  |  |
| mw128 | 18 | 128 |  |  |  |  |
| amber128 | 1 | 128 |  |  |  |  |
| exx96 | 12 | 96 |  |  |  |  |
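
Current queue state and the hosts behind each queue can be checked with the scheduler's query tools; a sketch, assuming the OpenLava/LSF-style tooling used here:

<code bash>
# list all queues with their job slot and pending/running counts
bqueues
# full detail for a single queue, including its host list
bqueues -l mw128
</code>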
Some guidelines for appropriate queue usage with detailed page links:
  * hp12 is the default queue
    * for processing lots of small to medium memory footprint jobs
  * mwgpu is for GPU (K20) enabled software primarily (Amber, Lammps, NAMD, Gromacs, Matlab, Mathematica)
    * be sure to reserve one or more job slots for each GPU used [[cluster:192|EXX96]]; see the sketch after this list
    * be sure to use the correct wrapper script to set up mpirun from mvapich2, mpich3 or openmpi
  * mw256fd is for jobs requiring large memory access
    * or requiring lots of threads (job slots) confined to a single node (Gaussian, AutoDock)
  * mw128 (bought with faculty startup funds) tailored for Gaussian jobs
    * About 2TB /
    * Priority access for Carlos'
  * amber128 (donated hardware) tailored for Amber16 jobs
    * Be sure to use mpich3 for Amber
    * Priority access for Amber jobs till 10/01/2020
  * test (swallowtail,
    * wall time of 8 hours of CPU usage
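
For the GPU queues, a minimal submission sketch, assuming the LSF/OpenLava-style ''#BSUB'' directives used on this cluster; ''gpu_wrapper'' and the program name are hypothetical placeholders, consult the GPU queue pages for the real wrapper scripts:

<code bash>
#!/bin/bash
#BSUB -q exx96                # a GPU queue: mwgpu, amber128 or exx96
#BSUB -n 1                    # reserve one job slot per GPU used
#BSUB -J gputest
#BSUB -o stdout.%J
#BSUB -e stderr.%J
# gpu_wrapper stands in for the site wrapper that pins the job to a
# free GPU and sets up the matching mpirun (mvapich2, mpich3, openmpi)
gpu_wrapper ./my_gpu_app
</code>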
**There are no wall time limits in our HPCC environment except for the ''test'' queue.**
===== Other Stuff =====
Home directory policy and Rstore storage options [[cluster:
Checkpointing is supported in all queues; how it works is described on the [[cluster:190|DMTCP]] page.
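
The basic DMTCP workflow looks as follows; a minimal sketch with illustrative paths and interval, see the DMTCP page for site specifics:

<code bash>
# run a program under checkpoint control, writing an image every hour
dmtcp_launch --interval 3600 --ckptdir /sanscratch/$USER ./my_long_job
# after a failure or requeue, resume from the newest checkpoint image
dmtcp_restart /sanscratch/$USER/ckpt_*.dmtcp
</code>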
For a list of software installed consult [[cluster:

For details on all scratch spaces consult [[cluster:
For HPCC acknowledgements consult [[cluster:

Sample scripts for job submissions (serial, array, parallel, forked and gpu) can be found at ''/
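
In the same spirit, a minimal serial submission sketch, assuming the LSF/OpenLava-style directives; queue from the table above, program and file names illustrative:

<code bash>
#!/bin/bash
#BSUB -q hp12                 # the default queue
#BSUB -J serialtest
#BSUB -o stdout.%J
#BSUB -e stderr.%J
# a plain serial run: one job slot, no MPI
./my_program input.dat
</code>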
From off-campus you need to VPN in first; download the GlobalProtect client