The High Performance Compute Cluster (HPCC) comprises several login nodes (all are on our internal network (vlan 52) //
  * node ''
  * primary
  * node ''
  * sandbox ''
  * zenoss monitoring and alerting server ''
  * secondary login server ''
  * server ''
  * server ''
  * storage servers ''
  * storage servers ''
  * storage servers ''
Several types of compute nodes are available via the scheduler:
  * All are running CentOS 6.10 or CentOS 7.7 (except cottontail2/
  * All are x86_64, Intel Xeon chips from 2006 onwards
  * All are on private networks (192.168.x.x and/or 10.10.x.x, no internet)
  * 12 nodes with dual twelve core chips (Xeon Silver 4214, 2.20 GHz) in ASUS ESC4000G4 2U rack servers with a memory footprint of 96 GB ( 1,152 GB, about 20 teraflops
All queues are available for job submissions via all login nodes. Some nodes are on Infiniband switches for parallel computational jobs (queues: mw256fd, hp12).
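As an illustration of submitting to a queue from a login node, here is a minimal serial-job sketch. It assumes an LSF-style scheduler (''#BSUB'' directives and ''bsub''), which may differ from what is actually deployed here; ''my_program'' and the file names are hypothetical, and only the ''hp12'' queue name comes from this page.

<code bash>
#!/bin/bash
# Sketch of a serial job script, assuming an LSF-style scheduler.
# hp12 is the default queue mentioned below; everything else is illustrative.
#BSUB -q hp12            # target queue
#BSUB -n 1               # one job slot
#BSUB -J serial_test     # job name
#BSUB -o %J.out          # stdout, %J expands to the job id
#BSUB -e %J.err          # stderr

./my_program             # hypothetical executable
</code>

Under that assumption the script would be submitted with ''bsub < serial.sh''.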
  * hp12 is the default queue
  * for processing lots of small to medium memory footprint jobs
  * mwgpu is primarily for GPU (K20) enabled software (Amber, Lammps, NAMD, Gromacs, Matlab, Mathematica)
  * be sure to reserve one or more job slots for each GPU used [[cluster:
  * be sure to use the correct wrapper script to set up mpirun from mvapich2, mpich3 or openmpi (see the sketch after this list)
  * Be sure to use mpich3 for Amber
  * Priority access for Amber jobs till 10/01/2020
  * test (swallowtail,
  * wall time of 8 hours of CPU usage
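As referenced in the list above, a hedged sketch of a GPU job on the ''mwgpu'' queue, again assuming an LSF-style scheduler. The GPU resource request and the mpirun wrapper invocation are placeholders: this page does not show the cluster's actual resource string or wrapper script names.

<code bash>
#!/bin/bash
# Sketch of a GPU job, assuming an LSF-style scheduler; adjust names to the site's conventions.
#BSUB -q mwgpu                    # GPU (K20) queue
#BSUB -n 1                        # reserve one job slot per GPU used
#BSUB -J gpu_test
#BSUB -o %J.out
#BSUB -e %J.err
# Hypothetical GPU resource request; the cluster's actual resource name may differ.
#BSUB -R "rusage[gpu=1]"

# The wrapper script that sets up mpirun (mvapich2, mpich3 or openmpi) is site-specific;
# the name below is a placeholder, not the actual wrapper.
./mpi_wrapper.sh ./my_gpu_program
</code>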
**There are no wall time limits in our HPCC environment except for queue ''
===== Other Stuff =====
For HPCC acknowledgements consult [[cluster:
Sample scripts for job submissions (serial, array, parallel, forked and gpu) can be found at ''/
From off-campus you need to VPN in first; download the GlobalProtect client
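For illustration, a typical access sequence once the VPN is up; the login-node hostname below is a placeholder, since the actual node names are elided above.

<code bash>
# Off-campus: connect to the campus VPN with the GlobalProtect client first.
# Then ssh to one of the login nodes (hostname below is a placeholder).
ssh your_username@login-node.example.edu
</code>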