\\
**[[cluster:
+ | |||
+ | ===== Structure and History of HPCC ===== | ||
+ | |||
+ | ==== History ==== | ||
+ | |||
+ | In 2006, 4 Wesleyan qfaculty members approached ITS with a proposal to centrally manage a whigh performance computing center (HPCC) seeding the effort with an NSF grant (about $190K). ITS offered 0.5 FTE for a dedicated " | ||
+ | |||
+ | The Advisory Group meets with the user base yearly in reading week of the Spring semester (early May) before everybody scatters for the summer. At this meeting the hpcadmin reviews the past year, previews the coming year and users are contributing feedback on progress and problems. | ||
+ | |||
+ | ==== Structure ==== | ||
+ | |||
+ | The Wesleyan HPCC is part of the **Scientific Computing and Informatics Center** ([[https:// | ||
+ | |||
+ | The QAC has an [[https:// | ||
+ | |||
+ | ==== Funding Policy ==== | ||
+ | |||
+ | After an 8 year run of the HPCC, and a drying up of grant opportunities at NSF, it was decided to explore self-funding so the HPCC effort could continue without external dependence on funds. A report was made of the HPCC progress including topics such as Publications, | ||
+ | |||
+ | Several months later a pattern emerged. | ||
+ | |||
+ | In order for the HPC user base to raise $15K annually, CPU and GPU hourly usage was deployed. A dictionary is maintained listing PIs and their members (students majors, lab students, grads, phd candidates, collaborators, | ||
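
A rough sketch of how such a dictionary could drive the usage roll-up (all names and numbers here are hypothetical, not the actual bookkeeping):

<code python>
# Hypothetical PI dictionary: each PI maps to the usernames whose usage
# is attributed to that PI's annual contribution.
pi_members = {
    "pi_smith": ["student1", "grad2", "collaborator3"],
    "pi_jones": ["major4", "phd5"],
}

# Per-user CPU hours, e.g. harvested from the scheduler's accounting logs.
user_cpu_hours = {"student1": 12_000, "grad2": 8_500, "phd5": 40_000}

# Roll user hours up to the PI level for the annual contribution.
pi_cpu_hours = {
    pi: sum(user_cpu_hours.get(user, 0) for user in members)
    for pi, members in pi_members.items()
}
print(pi_cpu_hours)  # {'pi_smith': 20500, 'pi_jones': 40000}
</code>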
+ | |||
+ | Here is 2019's queue usage [[cluster: | ||
+ | |||
+ | Contribution Scheme for 01 July 2019 onwards\\ | ||
+ | Hours (K) - Rate ($/CPU Hour)\\ | ||
+ | * 0-5 = Free | ||
+ | * >5-25 = 0.03 | ||
+ | * >25-125 = 0.006 | ||
+ | * >125-625 = 0.0012 | ||
+ | * > | ||
+ | * >3125 = 0.000048 | ||
+ | A cpu usage of 3,125,000 hours/year would cost $ 2,400.00 \\ | ||
+ | A gpu hour of usage is 3x the cpu hourly rate.\\ | ||
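
A minimal sketch of the tiered scheme, reproducing the $2,400.00 figure above (treating GPU hours as 3x CPU-hour equivalents folded into the same tiers is an assumption, not necessarily how the billing is actually done):

<code python>
# Tier upper bounds in CPU hours and the $/hour rate within each tier.
TIERS = [
    (5_000, 0.0),          # 0-5K hours: free
    (25_000, 0.03),
    (125_000, 0.006),
    (625_000, 0.0012),
    (3_125_000, 0.00024),
    (float("inf"), 0.000048),
]

GPU_MULTIPLIER = 3  # a GPU hour is billed at 3x the CPU hourly rate

def contribution(cpu_hours, gpu_hours=0):
    """Annual contribution in dollars for the given yearly usage."""
    # Assumption: GPU hours fold into the CPU tiers at 3x equivalence.
    billable = cpu_hours + GPU_MULTIPLIER * gpu_hours
    cost, lower = 0.0, 0.0
    for upper, rate in TIERS:
        if billable > lower:
            cost += (min(billable, upper) - lower) * rate
        lower = upper
    return cost

# The worked example from the text: 3,125,000 CPU hours/year.
print(f"${contribution(3_125_000):,.2f}")  # $2,400.00
</code>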
+ | |||
+ | We currently have about 1,450 physical cpu cores, 60 gpus, 520 gb of gpu memory and 8,560 gb cpu memory provided by about 120 compute nodes and login nodes. Scratch spaces are provide local to compute nodes (2-5 tb) or over the network via NFS (55 tb). Home directories are under quota (10 tb) but these will disappear in the future with the TrueNAS/ZFS appliance (190 tb, 475 tb effective assuming a compression rate of 2.5x). a guide can be found here [[cluster: | ||
+ | |||
===== Priority Access =====

This page describes the Priority Access Policy in place at the current time (Jan 2020) for the HPCC. This policy was put in place about 3 years ago to deal with the issues surrounding infusions of new monies from, for example, new faculty "startup" funds.
There are a few Principles in this Priority Access Policy:
  - Contributions,
  - Priority access is granted for 3 years starting at the date of deployment (user access).
  - Only applies to newly purchased resources, which should be under warranty during the priority period.
The main objective is to build an HPCC for all users with no (permanent) special treatment of a subgroup.