cluster:126

Differences

This shows you the differences between two versions of the page.

Older revision: cluster:126 [2020/07/28 13:10] hmeij07
Newer revision: cluster:126 [2022/03/30 15:06] hmeij07
Line 10: Line 10:
 The High Performance Compute Cluster (HPCC) is comprised of several login nodes (all are on our internal network (vlan 52) //wesleyan.edu// so VPN is required for off campus access)
 
-  * primary login node ''cottontail'' (Supermicro 4U), primary scheduler (old snapshots of /home on local disk array)
+  * node ''cottontail'' (Supermicro 4U), old scheduler
-  * secondary login node ''cottontail2'' (HP Proliant G380 2U), backup scheduler
+  * primary login server ''cottontail2'' (Supermicro 1U), new slurm scheduler
-  * secondary login node ''swallowtail'' (Dell PowerEdge 2950 2U), backup scheduler, databases
+  * node ''swallowtail'' (Dell PowerEdge 2950 2U), backup scheduler, databases
   * sandbox ''petaltail'' (Dell PowerEdge 2950 2U), test box, Warewulf provisioning CentOS6
-  * rebuild ''whitetail:/lvhomes'' (from old /home) (HP Proliant G380 2U), Warewulf OpenHPC provisioning CentOS7
   * zenoss monitoring and alerting server ''hpcmon'' (supermicro 1U, centos6)
-  * server ''greentail52'' (SuperMicro 36+2, 2U), serving out /sanscratch
+  * secondary login server ''greentail52'' (SuperMicro 36+2, 2U), serving out /sanscratch
-  * server ''sharptail'' (Supermicro 4U), old /home NFS (defunct), will be rebuilt for zfshomes replication
+  * server ''sharptail'' (Supermicro 4U), /lvm_data (backup), /zfshomes replication
   * server ''sharptail2'' (Supermicro 2U), disaster recovery for off site (active users only)
-  * storage servers ''rstore0'' and ''rstore2'' (Supermicro 4U), NFS mounts and Samba shares (2x 120T)
+  * storage servers ''rstore4'' and ''rstore5'' (Supermicro 4U), replicated, Samba shares (2x 220T)
-  * storage servers ''rstore4'' and ''rstore6'' (Supermicro 4U), NFS mounts and Samba shares (2x 220T)
+  * storage servers ''rstore6'' and ''rstore7'' (Supermicro 4U), replicated, Samba shares (2x 220T)
-  * storage servers ''mstore0/mindstorsrv1'' (Supermicro 4U), mounted on all HPC nodes (2x 110T)
+  * storage servers ''mstore0/mindstorsrv1'' (Supermicro 4U), replicated, mounted on all HPC nodes (2x 110T)
 
 Several types of compute nodes are available via the scheduler:
 
-  * All are running CentOS 6.10 or CentOS 7.7
+  * All are running CentOS 6.10 or CentOS 7.7 (except cottontail2/n100,n101 run rocky8.5)
   * All are x86_64, Intel Xeon chips from 2006 onwards
   * All are on private networks (192.168.x.x and/or 10.10.x.x, no internet)
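
Off campus the VPN comes first; on campus (or once the VPN is up) any of the login nodes listed above can be reached over SSH. A minimal sketch, assuming standard account credentials and that the login hosts carry //wesleyan.edu// hostnames as noted above (the exact FQDNs are an assumption):

<code bash>
# Reach the primary login server (hostname form is an assumption --
# use whatever your HPCC account notification lists).
ssh username@cottontail2.wesleyan.edu

# The secondary login server works the same way.
ssh username@greentail52.wesleyan.edu
</code>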
Line 47: Line 46:
  
   * 12 nodes with dual twelve core chips (Xeon Silver 4214, 2.20 Ghz) in ASUS ESC4000G4 2U rack servers with a memory footprint of 96 GB (1,152 GB total, about 20 teraflops dpfp). These nodes each have four RTX1080S (32 GB memory footprint) gpus providing 702 teraflops (mixed mode). Known as the "rtx2080" rack, nodes n79-n90, queue exx96, 432 job slots.
+
+  * 
 
 All queues are available for job submissions via all login nodes. Some nodes are on Infiniband switches for parallel computational jobs (queues: mw256fd, hp12). Our total job slot count is roughly 2,144 with a physical core count of 1,480. Our total teraflops compute capacity is about 58 cpu side, 25 gpu side (double precision floating point) and 702 gpu side (mixed mode). Our total memory footprint is about 528 GB gpu side, 8,532 GB cpu side.
Line 78: Line 79:
   * hp12 is the default queue
     * for processing lots of small to medium memory footprint jobs
-  * mwgpu is for GPU enabled software primarily (Amber, Lammps, NAMD, Gromacs, Matlab, Mathematica)
+  * mwgpu is for GPU (K20) enabled software primarily (Amber, Lammps, NAMD, Gromacs, Matlab, Mathematica)
     * be sure to reserve one or more job slots for each GPU used [[cluster:192|EXX96]]
     * be sure to use the correct wrapper script to set up mpirun from mvapich2, mpich3 or openmpi
Line 97: Line 98:
     * Be sure to use mpich3 for Amber
     * Priority access for Amber jobs till 10/01/2020
-  * test (swallowtail, petaltail, cottontail2)
+  * test (swallowtail, petaltail, cottontail2, n29, n33)
     * wall time of 8 hours of CPU usage
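
The queue notes above (reserve a job slot per GPU, pick the right mpirun wrapper) translate directly into the job script. A minimal sketch for a single-GPU job, assuming the new slurm scheduler and a partition named after the queue; the partition name, resource values and program name are placeholders, not the site's actual sample script:

<code bash>
#!/bin/bash
# Hypothetical single-GPU job under slurm; copy a real example from
# /zfshomes/hmeij/jobs/ before relying on any of these values.
#SBATCH --job-name=gpu-test
#SBATCH --partition=mwgpu     # assumed partition matching the queue name above
#SBATCH -n 1                  # one job slot reserved for the one GPU below
#SBATCH --gres=gpu:1          # request a single GPU

# Set up the MPI flavor the application needs (mvapich2, mpich3 or openmpi)
# via the site's wrapper script, then launch the program.
./my_gpu_application
</code>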
  
-**There are no wall time limits in our HPCC environment except for queue ''test''.** You are responsible for checkpointing though. Consult these pages; all nodes in all queues are DMTCP enabled (read [[cluster:190|DMTCP]]). Logins nodes and storage nodes are on UPS but all compute nodes are on utility power. Crashes do happen, be prepared to restart your long running jobs.
+**There are no wall time limits in our HPCC environment except for queue ''test''.** You are responsible for checkpointing though. Consult these pages; all nodes in all queues are DMTCP enabled (read [[cluster:190|DMTCP]]). Login nodes and storage nodes are on UPS but all compute nodes are on utility power. Crashes do happen, be prepared to restart your long running jobs.
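
Since checkpointing is the user's responsibility, the simplest route is to start long jobs under DMTCP. A minimal sketch using standard DMTCP commands (the interval and program name are placeholders; see the [[cluster:190|DMTCP]] page for the site's recommended recipe):

<code bash>
# Start the job under DMTCP and write checkpoint images roughly every hour.
dmtcp_launch -i 3600 ./my_long_running_job

# After a node crash or power loss, resume from the latest checkpoint
# using the restart script DMTCP writes alongside the images.
./dmtcp_restart_script.sh
</code>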
  
 ===== Other Stuff =====
Line 114: Line 115:
 For HPCC acknowledgements consult [[cluster:53|Acknowledgement]] page
  
-Sample scripts for job submissions (serial, array, parallel, forked and gpu) can be found at ''/home/hmeij/jobs/''
+Sample scripts for job submissions (serial, array, parallel, forked and gpu) can be found at ''/zfshomes/hmeij/jobs/'' and ''/zfshomes/hmeij/k20redo''
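
Once a sample script has been copied and edited, submission and monitoring are one-liners. A minimal sketch assuming the new slurm scheduler; the copied filename is a placeholder, not an actual file in that directory:

<code bash>
# Copy one of the samples, adapt it, then hand it to the scheduler.
cp /zfshomes/hmeij/jobs/some_sample.sh ~/myjob.sh   # filename is hypothetical
sbatch ~/myjob.sh        # submit the job
squeue -u $USER          # watch its state in the queue
</code>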
  
-From off-campus you need to VPN in first at [[http://vpn.wesleyan.edu]]
+From off-campus you need to VPN in first, download GlobalProtect client at [[http://vpn.wesleyan.edu]]
  
  
cluster/126.txt · Last modified: 2023/10/23 15:37 by hmeij07