Differences

This shows you the differences between two versions of the page.

cluster:126 [2017/06/01 19:34]
hmeij07 [Description]
cluster:126 [2017/07/12 14:32]
hmeij07
Line 17: Line 17:
  * (old node) ''whitetail'' (Angstrom Blade 1U), Hadoop Cloudera test server
  * (not to be used as) login node ''sharptail'' (Supermicro 4U), /home primary NFS server
-  * (to be installed summer 2017) ''homedr'' (Supermicro 2U), disaster recovery for /home, off site
+  * (to be populated summer 2017) ''sharptail2'' (Supermicro 2U), disaster recovery for /home, off site
  * Storage servers ''rstore0'' and ''rstore2'' (Supermicro 4U), NFS mounts and Samba shares
  
Several types of compute nodes are available via the OpenLava scheduler, http://www.openlava.org:
  
-  * All are running CentOS6.8, x86_64, Intel Xeon chips (except the Angstrom blades which are AMD Opteron)
+  * All are running CentOS6.[4-9], x86_64, Intel Xeon chips (except the Angstrom blades which are AMD Opteron)
  * All are on private networks (no internet)
  * All mount /home (10TB, to be expanded to 25TB fall 2017) and /sanscratch (33TB)
Line 40: Line 40:
  * 14 nodes with dual ten core chips (Xeon E5-2550 v3, 2.3 GHz) in Supermicro 1U rack servers with a memory footprint of 32 GB each (448 GB). This cluster has a compute capacity of 12 teraflops (estimated). Known as the Microway tinymem cluster, or n46-n59, queue tinymem, 448 job slots.
  
-All queues are available for job submissions via all login nodes. All nodes on Infiniband switches for parallel computational jobs (excludes bss24 and tinymem queues). Our total job slot count is roughly 1,040, our physical core count 744. Our total teraflops compute capacity is about 22 cpu side, 23 gpu side. Our total memory footprint is about 100 GB gpu side, 4,976 GB cpu side (excludes queue bss24).
+  * 18 nodes with dual twelve core chips (Xeon E5-2650 v4, 2.2 GHz) in Supermicro 1U rack servers with a memory footprint of 128 GB each (2,304 GB). This cluster has a compute capacity of 14.3 teraflops (estimated). Known as the Microway "Carlos" CPU cluster, or nodes n60-n77, queue mw128, 648 job slots.
+
+All queues are available for job submissions via all login nodes. All nodes are on Infiniband switches for parallel computational jobs (excludes the bss24, tinymem and mw128 queues). Our total job slot count is roughly 1,688, our physical core count 1,176. Our total teraflops compute capacity is about 36 cpu side, 23 gpu side. Our total memory footprint is about 100 GB gpu side, 7,280 GB cpu side (excludes queue bss24).
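Since every queue can be reached from any login node, the quickest way to see what is available before submitting is to ask the scheduler directly. A minimal sketch using the stock OpenLava/LSF-style commands (output omitted, it varies by site):

<code bash>
# list all queues with their status and pending/running job counts
bqueues

# list the compute hosts and how many job slots are in use on each
bhosts

# list your own pending and running jobs
bjobs
</code>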
  
The home directory file system is provided (via NFS or IPoIB) by the node ''sharptail'' (our file server) from a direct attached disk array. In total, 10 TB of /home disk space is accessible to the users. Node ''greentail'' makes available 33 TB of scratch space at /sanscratch via NFS. In addition, all nodes provide local scratch space at /localscratch (excludes queue tinymem). The OpenLava scheduler automatically creates directories in both these scratch areas for each job (named after the JOBPID). Backup services for /home are provided via disk-to-disk snapshots from node ''sharptail'' to node ''cottontail'' disk arrays (daily, weekly and monthly snapshots are mounted read-only on ''cottontail'' for self-serve content retrievals).
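As an illustration of the scratch layout described above, here is a minimal job script sketch that stages work through the per-job directory the scheduler creates under /sanscratch. It assumes OpenLava exports the job id as ''$LSB_JOBID'' (the usual LSF-style variable) and uses a hypothetical program ''myprog'' and input file; adapt the paths to your own project:

<code bash>
#!/bin/bash
#BSUB -q hp12                  # default CPU queue
#BSUB -n 1                     # one job slot
#BSUB -J scratch_demo          # job name
#BSUB -o scratch_demo.%J.out   # stdout (%J expands to the job id)
#BSUB -e scratch_demo.%J.err   # stderr

# the scheduler pre-creates this directory for each job (named after the JOBPID)
MYSANSCRATCH=/sanscratch/$LSB_JOBID

# stage input from /home, run in scratch, copy results back to /home
cp $HOME/project/input.dat $MYSANSCRATCH/
cd $MYSANSCRATCH
$HOME/project/myprog input.dat > output.dat
cp output.dat $HOME/project/
</code>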
Line 51: Line 53:
===== Our Queues =====
  
-Commercial software has their own queue limited by available licenses (no need to check out licenses). Jobs are processed on the nodes of hp12, mw256, and mw256fd queues. That can change if we need to.
+Commercial software has its own queue, limited by available licenses. There are no scheduler license resources; simply submit jobs to the appropriate queue. Jobs are processed on the nodes of the hp12, mw256, and mw256fd queues. That can change if we need to.
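In practice that just means you submit to the software's queue and let jobs sit in PEND until a license frees up; there is nothing to check out by hand. A hedged example (the queue name and wrapper script here are placeholders, pick the queue that actually holds the license you need):

<code bash>
# submit a wrapper script to a license-limited queue; if all licenses
# are busy the job simply waits in PEND state until one is released
bsub -q mw256 -n 8 -J g09run < run_gaussian.sh
</code>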
  
^Queue^Nr Of Nodes^Total GB Mem Per Node^Total Cores In Queue^Switch^Hosts^Notes^
Line 62: Line 64:
|  hp12  |  32  |  12  |  256  | QDR infiniband  | n1-n32 |  CPU  |
|  bss24  |  42  |  24  |  84  | gigabit ethernet  | b1-b49 |  CPU  |
-|  mw256  |  5  |  256  |  140  | QDR infiniband  | n33-n37 |  CPU  |
+|  mw256  |  5  |  256  |  80  | QDR infiniband  | n33-n37 |  CPU  |
-|  mwgpu  |  5  |  256  |  20  | QDR infiniband  | n33-n37 |  GPU & CPU  |
+|  mwgpu  |  5  |  256  |  40  | QDR infiniband  | n33-n37 |  GPU & CPU  |
|  mw256fd  |  8  |  256  |  256  | QDR infiniband  | n38-n45 |  CPU  |
-|  tinymem  |  14  |  32  |  560  | gigabit ethernet  | n39-n59 |  CPU  |
+|  tinymem  |  14  |  32  |  448  | gigabit ethernet  | n46-n59 |  CPU  |
+|  mw128  |  18  |  128  |  648  | gigabit ethernet  | n60-n77 |  CPU  |
  
Some guidelines for appropriate queue usage with detailed page links:
  
  * hp12 is the default queue
-    * for processing lots of small memory footprint jobs
+    * for processing lots of small to medium memory footprint jobs
  * bss24, primarily used by bioinformatics group, available to all if needed
-    * when not in use shut down, email me (hmeij@wes) or PEND jobs (hpcadmin will get notified)
+    * when not in use powered off, email me (hmeij@wes) or PEND jobs (hpcadmin will get notified)
-    * also our Hadoop cluster (access via head node whitetail) [[cluster:115|Use Hadoop Cluster]]
+    * also our Hadoop cluster [[cluster:115|Use Hadoop Cluster]]
  * mw256 are for jobs requiring large memory access (up to 24 job slots per node)
+    * for exclusive use of a node reserve all of its memory (see the sketch after this list)
  * mwgpu is for GPU enabled software primarily (Amber, Lammps, NAMD, Gromacs, Matlab, Mathematica)
    * be sure to reserve one or more job slots for each GPU used, see [[cluster:119|Submitting GPU Jobs]] and the sketch after this list
-    * be sure to use the correct wrapper script for mpirun from mvapich2
+    * be sure to use the correct wrapper script to set up mpirun from mvapich2
  * mw256fd are for jobs requiring large memory access (up to 24 job slots per node)
    * or requiring lots of threads (job slots) confined to a single node (Gaussian, AutoDock)
-    * or requiring access to /localscratch on these node which is 175 GB on a 15K disk.
+    * or requiring access to fast /localscratch, which is 175 GB on a 15K disk
      * you must stage and save results, for an example read [[https://dokuwiki.wesleyan.edu/doku.php?id=cluster:103#submit_2]]
-    * /localscratch5tb, unique on these nodes, is a Raid 0 file system of 3 disks providing 5 TB local scratch
+    * or requiring larger /localscratch5tb, which is a Raid 0 file system of 5 TB
      * stage temporary data in /localscratch5tb/username/ and it will not be removed
  * tinymem are for small serial jobs with small memory requirements
-    * has a sataDOM (non spinning 16G device on motherboard) for operating system
+    * nodes have a sataDOM (non spinning 16G USB device on motherboard) for the operating system
-    * do not use /localscratch on these nodes
+    * do not use /localscratch on these nodes, no no
-  * test (swallowtail, petaltail, greentail)
+  * mw128 (bought with faculty startup funds) tailored for Gaussian jobs
+    * about 2 TB of /localscratch (Raid 10) on each node
+    * priority access for Carlos' group till summer 2020
+  * test (swallowtail, petaltail, cottontail2)
    * wall time of 8 hours of CPU usage
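The slot and memory reservations mentioned in the list above translate into ordinary bsub resource requests. A minimal sketch, assuming the LSF-style ''-n'' and ''-R "rusage[...]"'' options that stock OpenLava accepts (the script names are placeholders, and the memory value and its units depend on local configuration, so check the linked pages before copying):

<code bash>
# mwgpu: reserve one job slot for the one GPU this job will use
bsub -q mwgpu -n 1 -J gpu_run < run_amber_gpu.sh

# mw256: take a node for exclusive use by reserving (nearly) all of its memory;
# the value and its units (MB vs GB, per slot vs per host) are site dependent
bsub -q mw256 -n 24 -R "rusage[mem=250000]" -J whole_node < run_big_mem.sh
</code>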
  
-**There are no wall time limits in our HPCC except for queue ''test''.** You are responsible for checkpointing though. Consult these pages, all nodes in all queues are BLCR enabled. Logins nodes and storage nodes are on UPS but all compute nodes are on utility power. Crashes do happen, be prepared to restart your long running jobs.
+**There are no wall time limits in our HPCC environment except for queue ''test''.** You are responsible for checkpointing though. Consult the pages below; all nodes in all queues are BLCR enabled. Login nodes and storage nodes are on UPS, but all compute nodes are on utility power. Crashes do happen, so be prepared to restart your long running jobs.
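For long running jobs on utility power, the BLCR pages linked below cover the site-specific wrappers; the underlying command-line flow looks roughly like this (a sketch only, run outside any scheduler wrapper, with ''myprog'' as a placeholder):

<code bash>
# start the program under BLCR control
cr_run ./myprog &
PID=$!

# periodically write a checkpoint file for that process
cr_checkpoint -f myprog.chk $PID

# after a crash or reboot, resume the process from the checkpoint file
cr_restart myprog.chk
</code>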
  
  * [[cluster:147|BLCR Checkpoint in OL3]] Serial Jobs