===== Description =====
  
The High Performance Compute Cluster (HPCC) comprises several login nodes, all on our internal network (vlan 52) in the //wesleyan.edu// domain, so VPN is required for off-campus access:
  
  * primary login node ''cottontail'' (Supermicro 4U), primary scheduler (old snapshots of /home on its local disk array)
  * secondary login node ''cottontail2'' (HP Proliant G380 2U), backup scheduler
  * secondary login node ''swallowtail'' (Dell PowerEdge 2950 2U), backup scheduler, databases
  * sandbox ''petaltail'' (Dell PowerEdge 2950 2U), test box, Warewulf provisioning CentOS6
  * rebuild node ''whitetail'' (HP Proliant G380 2U), ''/lvhomes'' rebuilt from the old /home, Warewulf OpenHPC provisioning CentOS7
  * Zenoss monitoring and alerting server ''hpcmon'' (Supermicro 1U, CentOS6)
  * server ''greentail52'' (SuperMicro 36+2, 2U), serving out /sanscratch
  * server ''sharptail'' (Supermicro 4U), old /home NFS server (defunct), will be rebuilt for /zfshomes replication
  * server ''sharptail2'' (Supermicro 2U), off-site disaster recovery (active users only)
  * storage servers ''rstore0'' and ''rstore2'' (Supermicro 4U), NFS mounts and Samba shares (2x 120T)
  * storage servers ''rstore4'' and ''rstore6'' (Supermicro 4U), NFS mounts and Samba shares (2x 220T)
  * storage servers ''mstore0/mindstorsrv1'' (Supermicro 4U), mounted on all HPC nodes (2x 110T)
  
Several types of compute nodes are available via the scheduler:
  
  * All are running CentOS 6.10 or CentOS 7.7
  * All are x86_64, Intel Xeon chips from 2006 onwards
  * All are on private networks (192.168.x.x and/or 10.10.x.x, no internet)
  * All mount /zfshomes (190 T FreeNAS/ZFS appliance, 2020) and /sanscratch (xfs, 55 T)
  * All have local disks providing varying amounts of /localscratch (usually RAID0, no backup!)
  * Hyperthreading is on, but only 50% of the logical cores are allocated via the scheduler
  * 12 nodes with dual twelve-core chips (Xeon Silver 4214, 2.20 GHz) in ASUS ESC4000G4 2U rack servers, each with a memory footprint of 96 GB (1,152 GB total, about 20 teraflops dpfp). Each of these nodes also holds four RTX2080S GPUs (32 GB of GPU memory per node), together providing about 702 teraflops (mixed mode). Known as the "rtx2080" rack, nodes n79-n90, queue exx96, 432 job slots; the aggregate arithmetic is spelled out in the sketch below.
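
As a quick cross-check of the figures quoted for this rack, the short Python sketch below recomputes the aggregate host memory, core and GPU counts from the per-node values in the list entry above; the per-node numbers come from the text, while the variable names are illustrative and the GPU model is inferred from the rack name.

<code python>
# Recompute aggregate figures for the "rtx2080" rack (nodes n79-n90, queue exx96)
# from the per-node values quoted above.

nodes = 12                # ASUS ESC4000G4 2U servers in the rack
cores_per_node = 2 * 12   # dual twelve-core Xeon Silver 4214 (2.20 GHz)
ram_per_node_gb = 96      # host memory footprint per node
gpus_per_node = 4         # RTX2080S cards per node (model inferred from the rack name)

total_ram_gb = nodes * ram_per_node_gb   # 12 * 96 = 1,152 GB, as quoted above
total_cores = nodes * cores_per_node     # 288 physical cores in the rack
total_gpus = nodes * gpus_per_node       # 48 GPUs sharing the ~702 TF mixed-mode figure

print(f"host memory   : {total_ram_gb} GB")
print(f"physical cores: {total_cores}")
print(f"GPUs          : {total_gpus}")
</code>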
  
All queues are available for job submissions via all login nodes. Some nodes are on Infiniband switches for parallel computational jobs (queues: mw256fd, hp12). Our total job slot count is roughly 2,144 with a physical core count of 1,480. Our total teraflops compute capacity is about 58 cpu side, 25 gpu side (double precision floating point) and 702 gpu side (mixed mode). Our total memory footprint is about 528 GB gpu side and 8,532 GB cpu side.
  
The home directory file system is provided (via NFS or IPoIB) by the node ''hpcstore'' (our file server) from a direct-attached disk array. In total, 190 TB of /zfshomes disk space is accessible to the users. Node ''greentail52'' makes 55 TB of scratch space available at /sanscratch via NFS. In addition, all nodes provide local scratch space at /localscratch (except for the tinymem queue). The scheduler automatically creates directories in both of these scratch areas for each job (named after the JOBPID). Backup services for /zfshomes are provided via disk-to-disk replication from node ''hpcstore'' to the disk arrays of node ''sharptail''. The TrueNAS/ZFS appliance performs daily snapshots with a retention window of 365 days. Some faculty have their home directories on node ''ringtail'', which provides 33 TB via /home33. Some faculty also have their own storage (2x 110 TB via /mindstore). All home directories will migrate to the FreeNAS/ZFS appliance ''hpcstore'' in 2020 (190 T usable, scalable to 1.2 P; ETA summer 2020).
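
To illustrate how the per-job scratch areas are typically used, here is a minimal Python sketch of a job step. It assumes the scheduler exports the job id in an environment variable (shown as the hypothetical ''JOBPID'') and that the per-job directories already exist as described above; the input and output file names are purely illustrative.

<code python>
import os
import shutil

# The scheduler pre-creates /sanscratch/<jobid> and /localscratch/<jobid> for
# every job.  The environment variable name below is an assumption; check the
# scheduler documentation for the variable it actually exports.
jobid = os.environ["JOBPID"]

sanscratch = os.path.join("/sanscratch", jobid)      # shared scratch (NFS, 55 TB)
localscratch = os.path.join("/localscratch", jobid)  # node-local scratch (RAID0, no backup)
home = os.path.expanduser("~")                       # home directory on /zfshomes

# Stage input from the home directory into shared scratch (file name is illustrative).
shutil.copy(os.path.join(home, "input.dat"), sanscratch)

# ... run the actual computation here, writing temporary files to localscratch ...

# Copy results back to the home directory before the job ends; the scratch
# directories are cleaned up afterwards and are never backed up.
shutil.copy(os.path.join(localscratch, "results.dat"), home)
</code>

Staging input into the scratch areas and copying results back at the end keeps heavy I/O off the NFS-mounted home directories while the job runs.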
  
Two (old) Rstore storage servers each provide about 104 TB of usable backup space, which is not mounted on the compute nodes. Each Rstore server's content is replicated to a dedicated passive standby server of the same size, located in the same data center but in a different rack. As of Spring 2019 we have added two new Rstore servers of 220 T each, also fully backed up with replication. Faculty may request shares for their labs to offload static content from the HPCC.
  
  