===== Description =====
  * (old node) ''
  * (not to be used as) login node ''
  * (to be populated
  * Storage servers ''
Several types of compute nodes are available via the OpenLava scheduler, http://

  * All are running CentOS6.[4-9], x86_64, Intel Xeon chips (except the Angstrom blades, which are AMD Opteron)
  * All are on private networks (no internet)
  * All mount /home (10TB, to be expanded to 25TB fall 2017) and /sanscratch (33TB)
  * 14 nodes with dual ten core chips (Xeon E5-2650 v3, 2.3 GHz) in Supermicro 1U rack servers with a memory footprint of 32 GB each (448 GB total). This cluster has a compute capacity of 12 teraflops (estimated). Known as the Microway tinymem cluster, or n46-n59, queue tinymem, 448 job slots.
  * 18 nodes with dual twelve core chips (Xeon E5-2650 v4, 2.2 GHz) in Supermicro 1U rack servers with a memory footprint of 128 GB each (2,304 GB total). This cluster has a compute capacity of 14.3 teraflops (estimated). Known as the Microway "mw128" cluster, queue mw128.
+ | |||
All queues are available for job submissions via all login nodes. All nodes are on Infiniband switches for parallel computational jobs (excludes the bss24, tinymem and mw128 queues).
Home directory file systems are provided (via NFS or IPoIB) by the node ''
===== Our Queues =====
Commercial software has its own queue, limited by the number of available licenses. There are no scheduler license resources; simply queue jobs up in the appropriate queue. These jobs are processed on the nodes of the hp12, mw256, and mw256fd queues. That can change if we need it to.
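For reference, a minimal submission script for any of the queues in the table below might look like this sketch; the job name, output files, and program are placeholders, not anything prescribed by this page:

<code bash>
#!/bin/bash
# minimal OpenLava (LSF-style) submission sketch -- adapt queue and slots
#BSUB -q hp12              # target queue from the table below
#BSUB -n 1                 # number of job slots to reserve
#BSUB -J myjob             # job name (placeholder)
#BSUB -o myjob.%J.out      # stdout file, %J expands to the job id
#BSUB -e myjob.%J.err      # stderr file

./my_program               # placeholder for the actual workload
</code>

Submit it from a login node with ''bsub < myjob.sh'' and check its status with ''bjobs''.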
^Queue^Nr Of Nodes^Total GB Mem Per Node^Total Cores In Queue^Switch^Hosts^Notes^ | ^Queue^Nr Of Nodes^Total GB Mem Per Node^Total Cores In Queue^Switch^Hosts^Notes^ | ||
| hp12 |  |  |  |  |  | default queue |
| bss24 | 42 | 24 |  |  |  |  |
| mw256 |  | 256 |  |  |  |  |
| mwgpu |  |  |  |  |  |  |
| mw256fd |  | 256 |  |  |  |  |
| tinymem | 14 | 32 |  |  | n46-n59 |  |
| mw128 | 18 | 128 |  |  |  | faculty startup funds |
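To see the live counterpart of this table from a login node, the standard OpenLava/LSF query commands apply:

<code bash>
bqueues            # all queues with slot totals and PEND/RUN counts
bqueues -l hp12    # full policy details for a single queue
bhosts             # per-host job slot usage across the cluster
</code>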
Some guidelines for appropriate queue usage with detailed page links:
  * hp12 is the default queue
    * for processing lots of small to medium memory footprint jobs
  * bss24, primarily used by the bioinformatics group, available to all if needed
    * when not in use these nodes are powered off; email me (hmeij@wes) or PEND jobs (hpcadmin will get notified)
    * also our Hadoop cluster [[cluster:
  * mw256 are for jobs requiring large memory access (up to 24 job slots per node)
    * for exclusive use of a node, reserve all of its memory (see the resource-flag sketch at the end of this section)
  * mwgpu is for GPU enabled software primarily (Amber, Lammps, NAMD, Gromacs, Matlab, Mathematica)
    * be sure to reserve one or more job slots for each GPU used (see the resource-flag sketch at the end of this section) [[cluster:
    * be sure to use the correct wrapper script
  * mw256fd
    * or requiring lots of threads (job slots) confined to a single node (Gaussian, AutoDock)
    * or requiring access to fast /
      * you must stage and save results (see the staging sketch just after this list); for an example read [[https://
      * /
    * or requiring larger
      * stage temporary data in /
  * tinymem are for small serial jobs with small memory requirements
    * nodes have a sataDOM (non-spinning 16G flash device on the motherboard) for the operating system
    * do not use /
  * mw128 (bought with faculty startup funds) is tailored for Gaussian jobs
    * About 2TB /
    * Priority access for Carlos'
  * test (swallowtail,
    * wall time of 8 hours of CPU usage
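The staging pattern referenced in the mw256fd notes above might look like the sketch below; the scratch path follows the /sanscratch convention on this page, ''$LSB_JOBID'' is set by the scheduler, and the directory, file names, and program are placeholders:

<code bash>
#!/bin/bash
#BSUB -q mw256fd
#BSUB -n 8
#BSUB -o stage.%J.out

# per-job scratch dir; created here in case no scheduler prolog does it
MYSANSCRATCH=/sanscratch/$LSB_JOBID
mkdir -p "$MYSANSCRATCH"

# stage input, run in scratch, save results home before the job ends
cp ~/project/input.dat "$MYSANSCRATCH"/
cd "$MYSANSCRATCH"
./my_program input.dat > output.dat
cp output.dat ~/project/
</code>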
**There are no wall time limits in our HPCC environment except for the ''test'' queue.**
  * [[cluster:
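For the memory and GPU reservations mentioned in the guidelines above, the ''bsub'' resource strings look roughly like this; the memory value and the ''gpu'' resource name are assumptions to adapt, not values confirmed by this page:

<code bash>
# reserve (nearly) all memory on an mw256 node for exclusive use;
# mem is in MB, so ~250000 approximates a 256 GB node (placeholder value)
bsub -q mw256 -n 24 -R "rusage[mem=250000]" ./my_program

# one job slot per GPU used on mwgpu; "gpu" as the resource name is an
# assumption -- check the mwgpu wrapper script page for the exact string
bsub -q mwgpu -n 1 -R "rusage[gpu=1]" ./my_wrapper_script
</code>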