  * 18 nodes with dual twelve core chips (Xeon E5-2650 v4, 2.2 GHz) in Supermicro 1U rack servers with a memory footprint of 128 GB each (2,304 GB total). This cluster has an estimated compute capacity of 14.3 teraflops. Known as the Microway "mw128" cluster.
  * 1 node with dual eight core chips (Xeon E5-2620 v4, 2.10 GHz) in a Supermicro 1U rack server with a memory footprint of 128 GB. This node has four GTX1080Ti GPUs. Known as the "amber128" node.

All queues are available for job submissions via all login nodes. All nodes are on Infiniband switches for parallel computational jobs (this excludes the tinymem, mw128 and amber128 queues).
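
For example, a minimal job script can be submitted with ''bsub'' from any login node. A sketch, assuming the default hp12 queue from the table below; the script, job and executable names are placeholders:

<code bash>
#!/bin/bash
# Minimal LSF submission sketch; submit from a login node with:  bsub < run.sh
#BSUB -q hp12           # target queue (hp12 is the default)
#BSUB -n 1              # one job slot
#BSUB -J myjob          # job name
#BSUB -o myjob.%J.out   # stdout file, %J expands to the job id
#BSUB -e myjob.%J.err   # stderr file

./my_program            # placeholder for the actual executable
</code>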
Home directory file systems are provided (via NFS or IPoIB) by the node ''…''.

Two Rstore storage servers each provide about 104 TB of usable backup space, which is not mounted on the compute nodes. Each Rstore server's …

^Queue^Nr Of Nodes^Total GB Mem Per Node^Total Cores In Queue^Switch^Hosts^Notes^
| stata | //…// | | | | | |

Note: Matlab and Mathematica now have "unlimited" licenses.

^Queue^Nr Of Nodes^Total GB Mem Per Node^Job Slots In Queue^Switch^Hosts^Notes^
| hp12 | | | | | | |
| mwgpu | | | | | | |
| mw256fd | | | | | | |
| tinymem | | | | | | |
| mw128 | 18 | 128 | | | | |
| amber128 | 1 | 128 | | | | |
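
The table above is a snapshot; the live queue configuration can be checked from any login node with LSF's own tools (a quick sketch; output columns vary with the LSF version):

<code bash>
bqueues              # all queues, with slot counts and job states
bqueues -l mw128     # one queue in detail: limits, hosts, policies
bhosts               # per-host job slot usage
</code>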
Some guidelines for appropriate queue usage, with detailed page links:

  * hp12 is the default queue
    * for processing lots of small to medium memory footprint jobs
  * mw256 is for jobs requiring large memory access (up to 24 job slots per node)
    * for exclusive use of a node, reserve all of its memory (see the memory reservation sketch after this list)
    * About 2TB /…
    * Priority access for Carlos' …
  * amber128 (donated hardware) tailored for Amber16 jobs
    * Be sure to use mpich3 for Amber (see the submission sketch after this list)
    * Priority access for Amber jobs
  * test (swallowtail, …
    * wall time of 8 hours of CPU usage
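
For the mw256 exclusive-use guideline above, a memory reservation sketch, assuming LSF's rusage resource syntax (units are typically MB but depend on site configuration; the script and program names are placeholders):

<code bash>
#!/bin/bash
# Sketch: claim a whole large-memory node by reserving most of its RAM.
#BSUB -q mw256fd
#BSUB -n 1
#BSUB -R "rusage[mem=250000]"   # ~250 GB, leaving headroom for the OS
#BSUB -J bigmem
#BSUB -o bigmem.%J.out

./my_bigmem_program             # placeholder executable
</code>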
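
For the amber128 guideline, a sketch of an Amber16 run via mpich3 (the mpich3 install path is an assumption for illustration; ''pmemd.MPI'' is Amber's parallel MD engine and the input/output file names are Amber's conventional defaults):

<code bash>
#!/bin/bash
# Sketch: parallel Amber16 job on the amber128 queue using mpich3's mpirun.
#BSUB -q amber128
#BSUB -n 16                     # the node has dual eight core chips
#BSUB -J amber
#BSUB -o amber.%J.out

export PATH=/share/apps/mpich3/bin:$PATH   # hypothetical mpich3 location

mpirun -np 16 pmemd.MPI -O -i mdin -o mdout -p prmtop -c inpcrd -r restrt
</code>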