  * (old login node) ''
  * (old login node) ''
  * <del>(old node) ''
  * (not to be used as) login node ''
  * DR node ''
  * Storage servers ''
  * 32 nodes with dual quad core chips (Xeon 5620, 2.4 Ghz) in HP blade 4U enclosures (SL2x170z G6) with memory footprint of 12 GB each (384 GB). This cluster has a compute capacity of 1.5 teraflops (measured using Linpack). Known as the HP cluster, or the nodes n1-n32, queue hp12, 256 job slots.
- | |||
- | * 42 nodes with dual single core chips (AMD Opteron Model 250, 2.4 Ghz) in Angstrom blade 12U enclosures with a memory footprint of 24 GB each (1,008 GB). This cluster has a compute capacity of 0.2-0.3 teraflops (estimated). Known as the Blue Sky Studio cluster, or the b-nodes (b0-b51), queue bss24, 84 job slots. Powered off when not in use. | ||
  * 5 nodes with dual eight core chips (Xeon E5-2660, 2.2 Ghz) in ASUS 2U rack servers with a memory footprint of 256 GB each (1,280 GB). Nodes also contain four K20 Tesla GPUs each, 2,500 cores/gpu (10,000 gpu cores per node) with a GPU memory footprint of 5 GB (20 GB). This cluster has a compute capacity of 23.40 teraflops double precision or 70.40 teraflops single precision on the GPU side and 2.9 teraflops on the CPU side. Known as the Microway GPU cluster, or the nodes n33-n37, queue mwgpu (120 job slots).
  * 18 nodes with dual twelve core chips (Xeon E5-2650 v4, 2.2 Ghz) in Supermicro 1U rack servers with a memory footprint of 128 GB each (2,304 GB). This cluster has a compute capacity of 14.3 teraflops (estimated). Known as the Microway "
  * 1 node with dual eight core chips (Xeon E5-2620 v4, 2.10 Ghz) in Supermicro 1U rack server with a memory footprint

All queues are available for job submissions via all login nodes (a minimal job script sketch follows below). All nodes are on Infiniband switches for parallel computational jobs (excludes the tinymem and mw128 queues).

Home directory file systems are provided (via NFS or IPoIB) by the node ''

Two Rstore storage servers each provide about 104 TB of usable backup space which is not mounted on the compute nodes. Each Rstore server'
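For reference, a minimal job script might look like the sketch below. It assumes an LSF/Openlava style scheduler (''bsub'', ''bjobs''), which the queue and job slot terminology above suggests; the queue, job name, slot count and program line are placeholders to adapt.

<code bash>
#!/bin/bash
# Minimal job script sketch, assuming an LSF/Openlava style scheduler (bsub).
# Queue, job name, slot count and the program line are placeholders.

#BSUB -q hp12            # target queue (hp12 is the default)
#BSUB -J myjob           # job name
#BSUB -o myjob.%J.out    # stdout file, %J expands to the job id
#BSUB -e myjob.%J.err    # stderr file
#BSUB -n 1               # number of job slots

# the job runs in the directory it was submitted from
./my_program < input.dat > output.dat
</code>

Submit it from any login node with ''bsub < myjob.sh'' and monitor it with ''bjobs''.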
^Queue^Nr Of Nodes^Total GB Mem Per Node^Total Cores In Queue^Switch^Hosts^Notes^
| stata | //

Note: Matlab and Mathematica now have "
^Queue^Nr Of Nodes^Total GB Mem Per Node^Job Slots In Queue^Switch^Hosts^Notes^
| hp12 |
| mwgpu |
| mw256fd |
| tinymem |
| mw128 |
| amber128 |
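The per-queue limits (hosts, job slots, memory) can also be checked live from a login node; a couple of commands, again assuming the LSF/Openlava style tools:

<code bash>
bqueues            # list all queues with their job slot totals and current load
bqueues -l hp12    # long listing for one queue: limits, hosts, policies
bhosts             # per-host job slot usage
</code>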
Some guidelines for appropriate queue usage with detailed page links:
  * hp12 is the default queue
    * for processing lots of small to medium memory footprint jobs
  * mw256 are for jobs requiring large memory access (up to 24 job slots per node)
    * for exclusive use of a node reserve all of its memory (see the memory reservation sketch after this list)
    * About 2TB /
    * Priority access for Carlos'
  * amber128 (donated hardware) tailored for Amber16 jobs (see the Amber job sketch after this list)
    * Be sure to use mpich3 for Amber
    * Priority access for Amber jobs
  * test (swallowtail,
    * wall time of 8 hours of CPU usage
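The mw256 guideline above refers to this memory reservation sketch. It assumes LSF-style resource requirement strings (''rusage'') with memory given in MB; the queue name and the 250000 MB figure are illustrative, the idea being to reserve roughly the full memory of the node class you want to yourself.

<code bash>
#!/bin/bash
# Sketch: reserve (nearly) all memory of a large-memory node so the job
# effectively gets the node to itself. Queue and memory value are illustrative.

#BSUB -q mw256fd
#BSUB -J bigmem
#BSUB -o bigmem.%J.out
#BSUB -e bigmem.%J.err
#BSUB -n 1
#BSUB -R "rusage[mem=250000]"   # value in MB; ~250 GB keeps other jobs off the node

./my_large_memory_program
</code>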
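The amber128 guideline above refers to this Amber job sketch: an Amber16 run with an MPICH3 build of ''pmemd.MPI''. The install paths, slot count and input file names are assumptions for illustration only; see the Amber pages elsewhere on this wiki for the exact environment setup on the cluster.

<code bash>
#!/bin/bash
# Sketch: Amber16 job on the amber128 queue using an MPICH3 build of pmemd.MPI.
# The paths below are placeholders, not the actual install locations.

#BSUB -q amber128
#BSUB -J amber
#BSUB -o amber.%J.out
#BSUB -e amber.%J.err
#BSUB -n 16                        # MPI ranks

# put the MPICH3 mpirun first on the path and set up the Amber environment
export PATH=/share/apps/mpich3/bin:$PATH    # assumed location
source /share/apps/amber16/amber.sh         # assumed location

mpirun -np 16 pmemd.MPI -O -i mdin -o mdout -p prmtop -c inpcrd -r restrt -x mdcrd
</code>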