cluster:189 [2020/01/10 19:52] hmeij07 [Priority Access]
cluster:189 [2020/01/11 17:54] hmeij07 [Funding Policy]
\\
**[[cluster:
+ | |||
+ | ===== Structure and History of HPCC ===== | ||
+ | |||
+ | ==== History ==== | ||
+ | |||
+ | In 2006, 4 Wesleyan qfaculty members approached ITS with a proposal to centrally manage a whigh performance computing center (HPCC) seeding the effort with an NSF grant (about $190K). ITS offered 0.5 FTE for a dedicated " | ||
+ | |||
+ | ==== Structure ==== | ||
+ | |||
+ | The Wesleyan HPCC is part of the **Scientific Computing and Informatics Center** ([[https:// | ||
+ | |||
+ | The QAC has an [[https:// | ||
+ | |||
+ | ==== Funding Policy ==== | ||
+ | |||
+ | After an 8 year run of the HPCC, and a drying up of grant opportunities at NSF, it was decided to explore self-funding so the HPCC effort could continue without external dependence on funds. A report was made of the HPCC progress including topics such as Publications, | ||
+ | |||
+ | Several months later a pattern emerged. | ||
+ | |||
+ | In order for the HPC user base to raise $15K annually, CPU and GPU hourly usage was deployed. A dictionary is maintained listing PIs and their members (students majors, lab students, grads, phd candidates, collaborators, | ||
+ | |||
+ | Here is 2019's queue usage [[cluster: | ||
+ | |||
+ | Contribution Scheme for 01 July 2019 onwards\\ | ||
+ | Hours (K) - Rate ($/CPU Hour)\\ | ||
+ | * 0-5 = Free | ||
+ | * >5-25 = 0.03 | ||
+ | * >25-125 = 0.006 | ||
+ | * >125-625 = 0.0012 | ||
+ | * > | ||
+ | * >3125 = 0.000048 | ||
+ | A cpu usage of 3,125,000 hours/year would cost $ 2,400.00 \\ | ||
+ | A gpu hour of usage is 3x the cpu hourly rate.\\ | ||
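The tiered scheme can be read as marginal rates: each bracket of hours is billed at its own rate, which is how 3,125,000 CPU hours comes out to $2,400. The sketch below is illustrative, not the official billing script; the rate for the >625-3125K tier (0.00024) is inferred from the $2,400 worked example, and a GPU hour is assumed to count as three billable CPU hours.

```python
# Illustrative sketch of the tiered contribution scheme (assumptions noted below).
# (upper bound in hours, marginal $/hour rate)
TIERS = [
    (5_000, 0.0),            # first 5K hours are free
    (25_000, 0.03),
    (125_000, 0.006),
    (625_000, 0.0012),
    (3_125_000, 0.00024),    # rate inferred from the $2,400 worked example
    (float("inf"), 0.000048),
]

def annual_cost(cpu_hours, gpu_hours=0):
    """Annual contribution in dollars; a GPU hour is assumed to bill as 3 CPU hours."""
    billable = cpu_hours + 3 * gpu_hours
    cost, lower = 0.0, 0
    for upper, rate in TIERS:
        if billable <= lower:
            break
        # bill only the portion of usage that falls inside this tier
        cost += (min(billable, upper) - lower) * rate
        lower = upper
    return cost

print(annual_cost(3_125_000))  # 2400.0, matching the worked example above
```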
+ | |||
+ | user base stats, annual meeting, spring reading week | ||
+ | |||
+ | hpcc stats cpu cores, gpus, mem, hdd (rough) link to guide | ||
+ | latest deployment: nvidia gpu cloud on premise (docker containers) link | ||
+ | Script preempts nodes every 2 hours. | ||
+ | |||
+ | |||
===== Priority Access ===== | ===== Priority Access ===== | ||
Line 6: | Line 47: | ||
This page describes the Priority Access Policy in place at the current time (Jan 2020) for the HPCC. This policy was put in place about 3 years ago to deal with the issues surrounding new infusions of money from, for example, new faculty "
There are a few Principles in this Priority Access Policy:
  - Contributions,
  - Priority access is granted for 3 years starting at the date of deployment (user access).
  - Only applies to newly purchased resources, which should be under warranty during the priority period.

The main objective is to build an HPCC for all users with no (permanent) special treatment of a subgroup.