===== Description =====
The High Performance Compute Cluster (HPCC) comprises several login and service nodes, all on our internal network (vlan 52):

  * primary login node
  * secondary login server
  * sandbox server
  * zenoss monitoring and alerting server
  * storage servers
Several types of compute nodes are available via the scheduler:

  * All are running Linux
  * All are x86_64, Intel Xeon chips from 2006 onwards
  * All are on private networks (192.168.x.x and/or 10.10.x.x) with no direct internet access
  * All mount /zfshomes (home directories) and /sanscratch (shared scratch)
  * All have local disks providing varying amounts of local scratch space
  * Hyperthreading is on, but only 50% of the logical cores are allocated as job slots (a quick core-count check is sketched below)
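Because hyperthreading doubles the logical core count while only half are handed out as job slots, it can be useful to compare physical and logical cores on a node. A minimal sketch using standard Linux tools (nothing cluster-specific is assumed):

<code bash>
# Show sockets, cores per socket, and threads per core.
lscpu | egrep 'Socket\(s\)|Core\(s\) per socket|Thread\(s\) per core|^CPU\(s\)'

# Logical core count versus physical core count.
grep -c ^processor /proc/cpuinfo                         # logical cores
lscpu -p=CORE,SOCKET | grep -v '^#' | sort -u | wc -l    # physical cores
</code>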
Compute node categories which usually align with queues:
  * 32 nodes with dual quad core chips (Xeon 5620, 2.4 GHz) in HP blade 4U enclosures (SL2x170z G6) with a memory footprint of 12 GB each (384 GB). This cluster has a compute capacity of 1.5 teraflops (measured using Linpack). Known as the HP cluster, or the nodes n1-n32, queue hp12, 256 job slots.
  * 5 nodes with dual eight core chips (Xeon E5-2660, 2.2 GHz) in ASUS 2U rack servers with a memory footprint of 256 GB each (1,280 GB). These nodes also contain four Tesla K20 GPUs each, 2,500 cores/gpu (10,000 gpu cores per node), with a GPU memory footprint of 5 GB each (20 GB). This cluster has a compute capacity of 23.40 teraflops double precision. Known as the Microway GPU cluster, or the nodes n33-n37, queue mwgpu.
  * 8 nodes with dual eight core chips (Xeon E5-2660, 2.2 GHz) in Supermicro 1U rack servers with a memory footprint of 256 GB each (2,048 GB). This cluster has a compute capacity of 5.3 teraflops (estimated). Known as the Microway CPU cluster, or nodes n38-n45, queue mw256fd, 192 job slots.
  * 14 nodes with dual ten core chips (Xeon E5-2550 v3, 2.3 GHz) in Supermicro 1U rack servers with a memory footprint of 32 GB each (448 GB). This cluster has a compute capacity of 12 teraflops (estimated). Known as the Microway tinymem cluster, or n46-n59, queue tinymem, 448 job slots.
  * 18 nodes with dual twelve core chips (Xeon E5-2650 v4, 2.2 GHz) in Supermicro 1U rack servers with a memory footprint of 128 GB each (2,304 GB). Known as the mw128 cluster, or nodes n60-n77, queue mw128.
  * 1 node with dual eight core chips (Xeon E5-2620 v4, 2.10 GHz) in a Supermicro 1U rack server with a memory footprint of 128 GB. This node has four GTX 1080 GPUs (32 GB total GPU memory footprint). Known as the "amber128" node, queue amber128.
  * 12 nodes with dual twelve core chips (Xeon Silver 4214, 2.20 GHz) in ASUS ESC4000G4 2U rack servers with a memory footprint of 96 GB each (1,152 GB, about 20 teraflops). Each node also holds GPUs (see the [[cluster:192|EXX96]] page). Known as the exx96 cluster, queue exx96.
All queues are available for job submissions via all login nodes. Some nodes are on Infiniband switches for parallel computational jobs (queues: mw256fd, hp12).
The home directory file system is provided (via NFS or IPoIB) by a dedicated storage node.
Two (old) Rstore storage servers each provide about 104 TB of usable backup space, which is not mounted on the compute nodes.
===== Our Queues =====
Commercial software has its own queue, limited by the available licenses. There are no scheduler license resources; just queue your jobs up in the appropriate queue. Commercial software jobs are processed on the nodes of the mw256fd queue. A minimal submission sketch follows the queue table below.
^Queue^Nr Of Nodes^Total GB Mem Per Node^Total Cores In Queue^Switch^Hosts^Notes^
| stata |  |  |  |  |  |  |

Note: Matlab and Mathematica now have "unlimited" licenses.
^Queue^Nr Of Nodes^Total GB Mem Per Node^Job Slots In Queue^Switch^Hosts^Notes^
| hp12 | 32 | 12 | 256 | Infiniband | n1-n32 | default queue |
| mwgpu | 5 | 256 |  |  | n33-n37 | four K20 GPUs per node |
| mw256fd | 8 | 256 | 192 | Infiniband | n38-n45 | commercial software jobs |
| tinymem | 14 | 32 | 448 |  | n46-n59 |  |
| mw128 | 18 | 128 |  |  | n60-n77 | tailored for Gaussian jobs |
| amber128 | 1 | 128 |  |  |  | tailored for Amber16 jobs |
| exx96 | 12 | 96 |  |  |  | GPU nodes |
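As a point of reference, below is a minimal serial job sketch. It assumes the LSF-style directives of OpenLava (the scheduler named in earlier revisions of this page); the queue, job name, and program are placeholders, and the sample scripts mentioned under Other Stuff are the authoritative templates.

<code bash>
#!/bin/bash
# Minimal serial job sketch, assuming LSF/OpenLava-style directives.
# Queue, job name and program are placeholders only.
#BSUB -q hp12             # target queue (hp12 is the default queue)
#BSUB -J myjob            # job name
#BSUB -o myjob.%J.out     # stdout file (%J expands to the job id)
#BSUB -e myjob.%J.err     # stderr file
#BSUB -n 1                # one job slot

./my_program              # replace with your executable
</code>

Submit from a login node with ''bsub < myjob.sh'' and monitor with ''bjobs''.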
Some guidelines for appropriate queue usage with detailed page links:
  * hp12 is the default queue
    * for processing lots of small to medium memory footprint jobs
  * mwgpu is for GPU (K20) enabled software primarily (Amber, Lammps, NAMD, Gromacs, Matlab, Mathematica)
    * be sure to reserve one or more job slots for each GPU used [[cluster:192|EXX96]] (a GPU job sketch follows this list)
    * be sure to use the correct wrapper script
  * mw256fd is for jobs requiring large memory access
    * or requiring lots of threads (job slots) confined to a single node (Gaussian, Autodock)
    * or requiring access to fast local scratch space
    * you must stage and save results (a generic staging sketch follows this list)
    * or requiring larger local scratch space
    * stage temporary data in /sanscratch
  * tinymem is for small serial jobs with small memory requirements
    * nodes have a sataDOM (non-spinning 16G USB device on the motherboard) for the operating system
    * do not use local scratch on these nodes
  * mw128 (bought with faculty startup funds) is tailored for Gaussian jobs
    * about 2 TB of local scratch space per node
    * priority access for Carlos' group
  * amber128 (donated hardware) is tailored for Amber16 jobs
    * be sure to use mpich3 for Amber
    * priority access for Amber jobs till 10/01/2020
  * test (swallowtail, ...)
    * wall time of 8 hours of CPU usage
**There are no wall time limits in our HPCC except for the test queue.**
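For the GPU queues, the guidelines above ask you to reserve a job slot per GPU and to launch through the queue's wrapper script. A hedged sketch, again assuming LSF/OpenLava-style directives; the GPU resource string and wrapper name below are placeholders, so consult the [[cluster:192|EXX96]] page for the exact syntax per queue.

<code bash>
#!/bin/bash
# GPU job sketch, assuming LSF/OpenLava-style directives.
# The gpu resource name and wrapper script are placeholders --
# see the EXX96 page for the exact strings per GPU queue.
#BSUB -q exx96                    # a GPU queue (mwgpu, amber128, exx96)
#BSUB -J gpujob
#BSUB -o gpujob.%J.out
#BSUB -e gpujob.%J.err
#BSUB -n 1                        # reserve one job slot per GPU used
#BSUB -R "rusage[gpu=1]"          # hypothetical GPU resource reservation

./queue_gpu_wrapper ./my_gpu_program   # launch via the queue's wrapper script
</code>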
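The mw256fd guidelines above stress staging data into scratch and copying results back before a job ends. A generic sketch; the directory layout is an assumption and ''$LSB_JOBID'' is the LSF/OpenLava job id variable:

<code bash>
#!/bin/bash
# Stage-in / stage-out sketch for scratch usage (assumed layout).
#BSUB -q mw256fd
#BSUB -J staged_job
#BSUB -o staged_job.%J.out

SCRATCH=/sanscratch/$LSB_JOBID        # per-job scratch directory (assumption)
mkdir -p "$SCRATCH"

cp -r ~/project/input "$SCRATCH"/     # stage input data in
cd "$SCRATCH"

./my_program input                    # run against the scratch copy

cp -r results ~/project/              # save results back to home before exit
rm -rf "$SCRATCH"                     # clean up scratch
</code>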
===== Other Stuff =====
Home directory policy and Rstore storage options: [[cluster:...]]
Checkpointing is supported in all queues; how it works is described on the [[cluster:190|DMTCP]] page.
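As a rough illustration of DMTCP from the command line (interval and program are placeholders; the [[cluster:190|DMTCP]] page covers how this ties into the queues):

<code bash>
# Run a program under DMTCP, writing a checkpoint image every hour.
dmtcp_launch --interval 3600 ./my_long_running_program

# After an interruption, restart from the checkpoint images in the
# current directory.
dmtcp_restart ckpt_*.dmtcp
</code>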
For a list of software installed consult [[cluster:...]]
For details on all scratch spaces consult [[cluster:...]]
For HPCC acknowledgements consult [[cluster:...]]
Sample scripts for job submissions (serial, array, parallel, forked and gpu) can be found at ''/...''
From off-campus you need to VPN in first (download the GlobalProtect client).
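Once the VPN is connected you can ssh to a login node as usual; the host name below is a placeholder since the login node names are not listed here:

<code bash>
# Replace <login-node> with one of the HPCC login nodes.
ssh your_username@<login-node>
</code>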