===== Structure and History of HPCC =====

As promised at the CLAC HPC Mindshare event at Swarthmore College in January 2020, here are the Funding and Priority Policies with some context around them.
==== History ====
In 2006, 4 Wesleyan
The Advisory Group meets with the user base yearly
==== Structure ====
The Wesleyan HPCC is part of the **Scientific Computing and Informatics Center** ([[https://
The QAC has an [[https://
==== Funding Policy ====
A gpu hour of usage is 3x the cpu hourly rate.\\
We currently have about 1,450 physical cpu cores, 60 gpus, 520 gb of gpu memory, and 8,560 gb of cpu memory, provided by about 120 compute nodes plus login nodes. Scratch spaces are provided local to the compute nodes (2-5 tb) or over the network via NFS (55 tb). Home directories are under quota (10 tb), but these quotas will disappear in the future with the TrueNAS/ZFS appliance (190 tb raw, 475 tb effective assuming a compression rate of 2.5x). A guide can be found here: [[cluster:
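The rate and capacity figures above lend themselves to a quick worked sketch. The snippet below is purely illustrative: the actual cpu hourly rate is not stated on this page, so the rate passed in is a hypothetical placeholder. It only demonstrates the stated 3x gpu multiplier and the effective capacity implied by the 2.5x compression assumption.

```python
# Hypothetical usage-charge sketch; the cpu_rate argument is a placeholder,
# not the HPCC's actual rate.

def usage_charge(cpu_hours, gpu_hours, cpu_rate):
    """A gpu hour of usage bills at 3x the cpu hourly rate."""
    gpu_rate = 3 * cpu_rate
    return cpu_hours * cpu_rate + gpu_hours * gpu_rate

# Example: 100 cpu hours plus 10 gpu hours at a placeholder rate of 1 unit/hour.
print(usage_charge(100, 10, 1))  # -> 130  (100*1 + 10*3)

# Effective TrueNAS/ZFS capacity implied by 190 tb raw at 2.5x compression.
print(190 * 2.5)  # -> 475.0 tb effective
```
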
==== Priority Policy ====
This policy was put in place about 3 years ago to deal with the issues surrounding infusions of new money from, for example, new faculty "
There are a few principles in this Priority Access Policy:
  - Priority access is granted for 3 years starting at the date of deployment (user access).
  - Only applies to newly purchased resources, which should be under warranty in the priority period.
**The main objective is to build an HPCC community resource for all users with no (permanent) special treatment of a subgroup.**
The first principle implies that all users have access to the new resources immediately when deployed. Root privilege is for hpcadmin only; sudo privilege may be used if/when necessary to achieve some purpose. The hpcadmin will maintain the new resource(s), while configuration of the new resource(s) will be done by consent of all parties involved. Final approval by the Advisory Group initiates deployment activities.