The High Performance Compute Cluster (HPCC) comprises several login nodes, all on our //wesleyan.edu// domain (behind VPN for off campus access; a login example follows this list):

  * primary login node ''cottontail'' (Supermicro 4U), primary scheduler and snapshot engine for /home
  * secondary login node ''cottontail2'' (HP Proliant G380 2U), backup scheduler
  * secondary login node ''swallowtail'' (Dell PowerEdge 2950 2U), backup scheduler, databases
  * sandbox ''petaltail'' (Dell PowerEdge 2950 2U), test box, Warewulf provisioning CentOS6
  * sandbox ''whitetail'' (HP Proliant G380 2U), Warewulf OpenHPC provisioning CentOS7
  * Zenoss monitoring and alerting server ''hpcmon'' (Supermicro 1U, CentOS6)
  * NFS server ''greentail52'' (Supermicro 36+2, 2U), /sanscratch
  * file server node ''sharptail'' (Supermicro 4U), /home NFS server (only log in when moving content)
  * DR node ''sharptail2'' (Supermicro 2U), disaster recovery for /home, off site (active users only)
  * storage servers ''rstore0'' and ''rstore2'' (Supermicro 4U), NFS mounts and Samba shares (2x 120T)
  * storage servers ''rstore4'' and ''rstore6'' (Supermicro 4U), NFS mounts and Samba shares (2x 220T)
  * mindstore storage servers ''mstore0/mstore1'' (Supermicro 4U), available on HPC (2x 110T)
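A typical login from on campus (or over VPN when off campus) looks like the sketch below; the full hostname is an assumption based on the //wesleyan.edu// domain noted above.

<code bash>
# replace "username" with your Wesleyan account; any of the login nodes listed above works
ssh username@cottontail.wesleyan.edu
</code>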
  
Several types of compute nodes are available via the scheduler:

  * All are running CentOS6.10 or CentOS7.7
  * All are x86_64, Intel Xeon chips from 2006 onwards
  * All are on private networks (192.168.x.x or 10.10.x.x, no internet access)
  * All mount /home (10 TB, to be replaced by a FreeNAS/ZFS 190T appliance in 2020) and /sanscratch (xfs, 55 TB)
  * All have local disks providing varying amounts of /localscratch (usually Raid0, no backup!)
  * Hyperthreading is on, but only 50% of the logical cores are allocated via the scheduler
  
Compute node categories, which usually align with queues:

  * 18 nodes with dual twelve core chips (Xeon E5-2650 v4, 2.2 GHz) in Supermicro 1U rack servers with a memory footprint of 128 GB each (2,304 GB total). This cluster has an estimated compute capacity of 14.3 teraflops. Known as the Microway "Carlos" CPU cluster, nodes n60-n77, queue mw128, 648 job slots.

  * 1 node with dual eight core chips (Xeon E5-2620 v4, 2.10 GHz) in a Supermicro 1U rack server with a memory footprint of 128 GB. This node has four GTX1080 gpus (32 GB memory footprint). Known as the "amber128" queue, node n78, 24 job slots.

  * 12 nodes with dual twelve core chips (Xeon Silver 4214, 2.20 GHz) in ASUS ESC4000G4 2U rack servers with a memory footprint of 96 GB each (1,152 GB total, about 20 teraflops dpfp). These nodes each have four RTX2080S gpus (32 GB memory footprint) providing 702 teraflops (mixed mode). Known as the "rtx2080" rack, nodes n79-n90, queue exx96, 432 job slots.

All queues are available for job submissions via all login nodes. Some nodes are on Infiniband switches for parallel computational jobs (queues: mw256fd, hp12). Our total job slot count is roughly 2,144 with a physical core count of 1,480. Our total compute capacity is about 58 teraflops cpu side, 25 teraflops gpu side (double precision floating point) and 702 teraflops gpu side (mixed mode). Our total memory footprint is about 528 GB gpu side and 8,532 GB cpu side.
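A minimal job script, assuming the LSF-style ''bsub'' front end this cluster has used historically (adjust the directives if the scheduler has changed; the sample scripts under ''/home/hmeij/jobs/'' are authoritative):

<code bash>
#!/bin/bash
# submit from any login node with:  bsub < run.sh
# monitor with: bjobs, bqueues
#BSUB -q hp12                 # queue name (see the queue tables below)
#BSUB -n 1                    # number of job slots
#BSUB -J serial_test          # job name
#BSUB -o serial_test.%J.out   # stdout file (%J = job id)
#BSUB -e serial_test.%J.err   # stderr file

echo "running on $HOSTNAME"
date
</code>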
  
Home directory file systems are provided (via NFS or IPoIB) by the node ''sharptail'' (our file server) from a direct attached disk array. In total, 10 TB of /home disk space is accessible to the users. Node ''greentail52'' makes 55 TB of scratch space available at /sanscratch via NFS. In addition, all nodes provide local scratch space at /localscratch (excludes queue tinymem). The scheduler automatically makes directories in both these scratch areas for each job (named after the JOBPID), as sketched below. Backup services for /home are provided via disk-to-disk point-in-time snapshots from node ''sharptail'' to node ''cottontail'' disk arrays (daily, weekly and monthly snapshots are mounted read only on ''cottontail'' for self-serve content retrievals). Some faculty have their home directories on node ''ringtail'', which provides 33 TB via /home33. Some faculty also have their own storage (2x 110 TB via /mindstore). In addition, no-quota, no-backup user directories can be requested in /homeextra1 (7 T) or /homeextra2 (5 T). All home directories will migrate to a FreeNAS/ZFS appliance named ''hpcstore'' in 2020 (190T usable, scalable to 1.2P).
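A sketch of how a job script can use those per-job scratch directories, assuming the LSF-style ''$LSB_JOBID'' variable matches the JOBPID used in the directory names (''my_app'' and the file names are placeholders):

<code bash>
#!/bin/bash
#BSUB -q hp12
#BSUB -n 1
#BSUB -J scratch_demo
#BSUB -o scratch_demo.%J.out

# the scheduler pre-creates per-job directories in both scratch areas
cd /localscratch/$LSB_JOBID               # or /sanscratch/$LSB_JOBID for larger, NFS-shared scratch
cp ~/project/input.dat .                  # stage input from /home
~/project/my_app input.dat > output.dat   # placeholder application
cp output.dat ~/project/                  # copy results back to /home before the job ends
</code>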

Two (old) Rstore storage servers each provide about 104 TB of usable backup space, which is not mounted on the compute nodes. Each Rstore server's content is replicated to a dedicated passive standby server of the same size, located in the same data center but in a different rack. As of Spring 2019 we have added two new Rstore servers of 220 T each, fully backed up with replication.
  
  
===== Our Queues =====
  
Commercial software has its own queues, limited by available licenses. There are no scheduler license resources; just queue jobs up in the appropriate queue. Commercial software jobs are processed on the nodes of the mw256fd and mw128 queues.
  
^Queue^Nr Of Nodes^Total GB Mem Per Node^Total Cores In Queue^Switch^Hosts^Notes^
|  stata  |  //na//  |  //na//  |  //na//  |  QDR Infiniband  | //any host// |  6 licenses  |
  
Note: Matlab and Mathematica now have "unlimited licenses".

|  tinymem  |  14  |  32  |  448  | gigabit ethernet  | n39-n59 |  CPU  |
|  mw128  |  18  |  128  |  648  | gigabit ethernet  | n60-n77 |  CPU  |
|  amber128  |  1  |  128  |  24  | gigabit ethernet  | n78 |  GPU & CPU  |
|  exx96  |  12  |  96  |  432  | gigabit ethernet  | n79-n90 |  GPU & CPU  |
  
Some guidelines for appropriate queue usage with detailed page links:
  * hp12 is the default queue
    * for processing lots of small to medium memory footprint jobs
  * mwgpu is for GPU enabled software primarily (Amber, Lammps, NAMD, Gromacs, Matlab, Mathematica)
    * be sure to reserve one or more job slots for each GPU used [[cluster:192|EXX96]] (see the sketch after this list)
    * be sure to use the correct wrapper script to set up mpirun from mvapich2, mpich3 or openmpi
  * mw256fd is for jobs requiring large memory access (up to 24 job slots per node)
    * or requiring lots of threads (job slots) confined to a single node (Gaussian, Autodock)
  * mw128 (bought with faculty startup funds) tailored for Gaussian jobs
    * About 2 TB /localscratch (Raid 10) on each node
    * Priority access for Carlos' group till 07/01/2020
  * amber128 (donated hardware) tailored for Amber16 jobs
    * Be sure to use mpich3 for Amber
    * Priority access for Amber jobs till 10/01/2020
  * test (swallowtail, petaltail, cottontail2)
    * wall time of 8 hours of CPU usage
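A sketch of a GPU job that follows the "one job slot per GPU" rule above, again assuming LSF-style directives as used historically on this cluster; the resource syntax and wrapper invocation on the [[cluster:192|EXX96]] page are authoritative:

<code bash>
#!/bin/bash
#BSUB -q exx96               # GPU queues: exx96, amber128, mwgpu
#BSUB -n 1                   # reserve one job slot for the single GPU used here
#BSUB -J gpu_test
#BSUB -o gpu_test.%J.out
#BSUB -e gpu_test.%J.err

nvidia-smi                   # record which GPU(s) the job landed on

# illustrative Amber run; for MPI builds launch through the site-provided
# mpirun wrapper (mvapich2, mpich3 or openmpi) instead of calling mpirun directly
pmemd.cuda -O -i mdin -p prmtop -c inpcrd -o mdout
</code>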

**There are no wall time limits in our HPCC environment except for queue ''test''.** You are responsible for checkpointing though. Consult these pages: all nodes in all queues are DMTCP enabled (read [[cluster:190|DMTCP]]). Login nodes and storage nodes are on UPS, but all compute nodes are on utility power. Crashes do happen, so be prepared to restart your long running jobs.
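A generic DMTCP sketch; the [[cluster:190|DMTCP]] page has the cluster-specific recipe, and ''my_long_app'' is a placeholder:

<code bash>
# start the application under DMTCP control, writing checkpoint images every hour
dmtcp_launch --interval 3600 ./my_long_app input.dat

# after a crash or reboot, restart from the newest checkpoint images in the
# working directory (a dmtcp_restart_script.sh is also generated alongside them)
dmtcp_restart ckpt_*.dmtcp
</code>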
  
===== Other Stuff =====

Home directory policy and Rstore storage options: [[cluster:136|HomeDir and Storage Options]]
  
Checkpointing is supported in all queues; how it works is described on the [[cluster:190|DMTCP]] page
  
For a list of installed software, consult the [[cluster:73|Software List]] page (endless...)
  
For details on all scratch spaces, consult the [[cluster:142|Scratch Spaces]] page
Sample scripts for job submissions (serial, array, parallel, forked and gpu) can be found at ''/home/hmeij/jobs/''
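As one illustration of those samples, a hypothetical array job (again LSF-style directives; the copies in ''/home/hmeij/jobs/'' are authoritative, and ''my_app'' plus the input files are placeholders):

<code bash>
#!/bin/bash
#BSUB -q hp12
#BSUB -J "demo[1-10]"          # ten array tasks, one per index
#BSUB -o demo.%J.%I.out        # %J = job id, %I = array index

# each task processes its own input file, selected via the array index
~/project/my_app input.$LSB_JOBINDEX.dat > output.$LSB_JOBINDEX.dat
</code>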
  
From off-campus you need to VPN in first at [[http://vpn.wesleyan.edu]]
  
  