The High Performance Compute Cluster (HPCC) comprises several login nodes (all are on our domain //
  * primary login node ''
  * secondary login node ''
  * secondary login node ''
  * sandbox ''
  * sandbox ''
  * zenoss monitoring and alerting server ''
  * NFS server ''
  * (only log in when moving content) file server node ''
  * mindstore storage servers ''

Several types of compute nodes are available via the scheduler:
  * All are running CentOS6.10 or CentOS7.7
===== Our Queues =====

Commercial software has its own queue, limited by available licenses. There are no scheduler license resources; simply submit jobs to the appropriate queue (a minimal submission sketch follows the table below).

^Queue^Nr Of Nodes^Total GB Mem Per Node^Total Cores In Queue^Switch^Hosts^Notes^
| stata | //
Note: Matlab and Mathematica now have "
| tinymem |
| mw128 | |
| amber128 |
| exx96 | |
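Here is a minimal submission sketch, assuming the LSF/OpenLava-style ''bsub'' front end previously documented for this cluster; the queue, job name, file names and program are illustrative placeholders, not prescriptions:

<code bash>
#!/bin/bash
# Minimal sketch of a serial job submission, assuming an
# LSF/OpenLava-style scheduler (bsub). Queue, names and the
# program invoked are illustrative placeholders.
#BSUB -q hp12           # target queue; hp12 is the default
#BSUB -J myjob          # job name
#BSUB -o myjob.%J.out   # stdout, %J expands to the job ID
#BSUB -e myjob.%J.err   # stderr

./my_program            # replace with your actual workload
</code>

Submit with ''bsub < myjob.sh'' and check status with ''bjobs''.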
Some guidelines for appropriate queue usage with detailed page links:
  * hp12 is the default queue
    * for processing lots of small to medium memory footprint jobs
  * mwgpu is for GPU enabled software primarily (Amber, Lammps, NAMD, Gromacs, Matlab, Mathematica)
    * be sure to reserve one or more job slots for each GPU used [[cluster:192|EXX96]]
    * be sure to use the correct wrapper script to set up mpirun from mvapich2, mpich3 or openmpi (see the GPU sketch after this list)
  * mw256fd
    * or requiring lots of threads (job slots) confined to a single node (Gaussian, AutoDock)
  * mw128 (bought with faculty startup funds) tailored for Gaussian jobs
    * About 2TB /
    * Priority access for Carlos'
  * amber128 (donated hardware) tailored for Amber16 jobs
    * Be sure to use mpich3 for Amber
    * Priority access for Amber jobs until 10/01/2020
  * test (swallowtail,
    * wall time of 8 hours of CPU usage

**There are no wall time limits in our HPCC environment except for queue ''
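For the GPU queues, the sketch below illustrates the two points made in the list above: reserving one job slot per GPU and launching through an MPI wrapper. It assumes the same bsub-style scheduler; the wrapper name is a placeholder, so consult the sample scripts directory for the exact site-specific forms:

<code bash>
#!/bin/bash
# Hedged sketch of a GPU job for the mwgpu/amber128/exx96 queues.
# The wrapper name below is a placeholder; the real, site-specific
# wrapper scripts live with the sample submission scripts.
#BSUB -q mwgpu
#BSUB -n 1              # reserve one job slot per GPU used
#BSUB -J gpujob
#BSUB -o gpujob.%J.out

# Placeholder standing in for the site-provided wrapper that sets
# up mpirun for mvapich2, mpich3 or openmpi.
./mpi_wrapper_of_choice ./my_gpu_program
</code>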
- | + | ||
- | * [[cluster: | + | |
- | * [[cluster: | + | |
===== Other Stuff =====
Checkpointing is supported in all queues; how it works is described at [[cluster:

For a list of software installed consult [[cluster:

For details on all scratch spaces consult [[cluster:
Sample scripts for job submissions (serial, array, parallel, forked and gpu) can be found at ''/
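As one illustration of the array pattern those samples cover, here is a minimal sketch, again assuming LSF/OpenLava-style job arrays; the index range and input file naming are hypothetical:

<code bash>
#!/bin/bash
# Minimal array-job sketch, assuming LSF/OpenLava-style arrays.
# Index range and the input file naming scheme are hypothetical.
#BSUB -q hp12
#BSUB -J "sweep[1-10]"      # ten elements, indices 1..10
#BSUB -o sweep.%J.%I.out    # %I expands to the array index

# Each element picks its own input file via LSB_JOBINDEX.
./my_program input.${LSB_JOBINDEX}.dat
</code>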
From off-campus you need to VPN in first at [[http://vpn.wesleyan.edu]]