The High Performance Compute Cluster (HPCC) comprises several login nodes (all are on our domain //
  * primary login node ''
  * secondary login node ''
  * secondary login node ''
  * sandbox
  * NFS server
  * (only log in when moving content) file server node ''
  * DR node ''
  * storage
  * storage servers ''
  * mindstore storage servers ''
Several types of compute nodes are available via the OpenLava scheduler, http://
  * All are running CentOS6.10 or CentOS7.7
  * All are x86_64, Intel Xeon chips from 2006 onwards
  * All are on private networks (192.168.x.x or 10.10.x.x, no internet)
  * All mount /home (10 TB, to be replaced by a FreeNAS/ZFS 190 TB appliance in 2020) and /sanscratch (xfs, 55 TB)
  * All have local disks providing varying amounts of /
  * Hyperthreading is on but only 50% of logical cores are allocated
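Since every node mounts the same ''/home'' and ''/sanscratch'', you can verify what a node sees with standard tools (a minimal sketch; only the two shared mount points named above are assumed):

<code bash>
# show size and usage of the shared file systems on this node
df -h /home /sanscratch

# show how they are mounted (NFS export, mount options)
mount | grep -E '/home|/sanscratch'
</code>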
Compute node categories which usually align with queues:
  * 18 nodes with dual twelve core chips (Xeon E5-2650 v4, 2.2 GHz) in Supermicro 1U rack servers with a memory footprint of 128 GB each (2,304 GB). This cluster has a compute capacity of 14.3 teraflops (estimated). Known as the Microway "
  * 1 node with dual eight core chips (Xeon E5-2620 v4, 2.10 GHz) in a Supermicro 1U rack server with a memory footprint of 128 GB. This node has four GTX1080Ti (32 GB memory footprint)
  * 12 nodes with dual twelve core chips (Xeon Silver 4214, 2.20 GHz) in ASUS ESC4000G4 2U rack servers with a memory footprint of 96 GB each (1,152 GB, about 20 teraflops

All queues are available for job submissions via all login nodes.
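Since OpenLava uses LSF-style submission syntax, a job script typically looks like the sketch below (illustrative only; the queue name ''hp12'' is the default queue listed further down, the program name and slot count are placeholders):

<code bash>
#!/bin/bash
#BSUB -q hp12             # target queue (hp12 is the default queue)
#BSUB -n 4                # number of job slots requested
#BSUB -J myjob            # job name
#BSUB -o myjob.%J.out     # stdout file, %J expands to the job id
#BSUB -e myjob.%J.err     # stderr file

# the actual work; ./my_program is a placeholder
./my_program
</code>

Submit it with ''bsub < myjob.sh'' and monitor it with ''bjobs''.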
Home directory file systems are provided (via NFS or IPoIB) by the node ''
Two (old) Rstore storage servers each provide about 104 TB of usable backup space which is not mounted on the compute nodes. Each Rstore server'
^Queue^Nr Of Nodes^Total GB Mem Per Node^Total Cores In Queue^Switch^Hosts^Notes^
| stata | //

Note: Matlab and Mathematica now have "
^Queue^Nr Of Nodes^Total GB Mem Per Node^Job Slots In Queue^Switch^Hosts^Notes^
| hp12 |
| mwgpu |
| mw256fd |
| tinymem |
| mw128 |
| amber128 |
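The current state of these queues can be checked with the scheduler's standard commands (OpenLava/LSF-style; exact output columns may vary):

<code bash>
bqueues            # all queues with their job slot counts and pending/running jobs
bqueues -l hp12    # detailed limits and policies for a single queue
bhosts             # job slot usage per compute node
bjobs -u all       # all jobs currently running or pending
</code>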
Some guidelines for appropriate queue usage with detailed page links:
  * hp12 is the default queue
    * for processing lots of small to medium memory footprint jobs
  * mw256 are for jobs requiring large memory access (up to 24 job slots per node)
    * for exclusive use of a node reserve all memory (see the sketch after this list)
    * About 2TB /
    * Priority access for Carlos'
  * amber128 (donated hardware) tailored for Amber16 jobs
    * Be sure to use mpich3 for Amber (see the sketch after this list)
    * Priority access for Amber jobs
  * test (swallowtail,
    * wall time of 8 hours of CPU usage
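As a sketch of two of the points above (reserving a node's memory for effectively exclusive use, and running Amber with mpich3), a submission script could combine directives like these; the memory figure, the mpich3 install path and the Amber input file names are placeholders, not site-specific values:

<code bash>
#!/bin/bash
#BSUB -q mw256fd                  # large-memory queue; for Amber jobs use -q amber128 instead
#BSUB -n 24                       # up to 24 job slots per node in the mw256 queues
#BSUB -R "rusage[mem=250000]"     # reserve (nearly) all memory (MB) for effectively exclusive node use
#BSUB -o job.%J.out
#BSUB -e job.%J.err

# Amber must be run with mpich3 on this cluster; the install path below is a placeholder
export PATH=/share/apps/mpich3/bin:$PATH

# pmemd.MPI is Amber's parallel MD engine; the input file names are placeholders
mpirun -np 24 pmemd.MPI -O -i mdin -p prmtop -c inpcrd -o mdout
</code>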
Home directory policy and Rstore storage options [[cluster:
Checkpointing is supported in all queues; how it works is described on the [[cluster:190|DMTCP]] page.
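In outline, DMTCP wraps a job so that it can be checkpointed and restarted without changes to the program (a generic sketch; the cluster-specific wrapper scripts are documented on the page linked above):

<code bash>
# start the program under checkpoint control, writing a checkpoint every hour
dmtcp_launch --interval 3600 ./my_program

# later, restart from the most recent checkpoint images in the current directory
dmtcp_restart ckpt_*.dmtcp
</code>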
For a list of software installed consult [[cluster: