==== History ====

In 2006, 4 Wesleyan faculty members approached ITS with a proposal to centrally manage high performance computing resources.

The Advisory Group meets with the user base yearly during reading week of the Spring semester (early May), before everybody scatters for the summer. At this meeting the hpcadmin reviews the past year, previews the coming year, and users contribute feedback on progress and problems.

==== Structure ====

The Wesleyan HPCC is part of the **Scientific Computing and Informatics Center** ([[https://...]]).

The QAC has an [[https://...]] ...

==== Funding Policy ====

After an 8-year run of the HPCC, and a drying up of grant opportunities at NSF, it was decided to explore self-funding so the HPCC effort could continue without external dependence on funds. A report was made of the HPCC's progress, including topics such as Publications, ...

Several months later, a pattern emerged.

In order for the HPC user base to raise $15K annually, metering of CPU and GPU hourly usage was deployed. A dictionary is maintained listing PIs and their members (student majors, lab students, grads, PhD candidates, collaborators, ...) so that usage rolls up per PI, as sketched below.

Here is 2019's queue usage: [[cluster:...]]

Contribution Scheme for 01 July 2019 onwards\\
Hours (K) - Rate ($ per CPU hour)\\
  * 0-5 = Free
  * >5-25 = 0.03
  * >25-125 = 0.006
  * >125-625 = 0.0012
  * >625-3125 = 0.00024
  * >3125 = 0.000048
A CPU usage of 3,125,000 hours/year would cost $2,400.00.\\
A GPU hour of usage is billed at 3x the CPU hourly rate.\\
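
Each bracket is 5x wider than the last and its rate is 5x lower, with charges accumulating per bracket. A short sketch of that arithmetic in Python (the function name is ours; folding GPU hours in at 3x before tiering is an assumption):

<code python>
# Tier upper bounds (in hours) and $/hour rates from the scheme above.
TIERS = [
    (5_000, 0.0),              # 0-5K hours: free
    (25_000, 0.03),            # >5K-25K
    (125_000, 0.006),          # >25K-125K
    (625_000, 0.0012),         # >125K-625K
    (3_125_000, 0.00024),      # >625K-3,125K
    (float("inf"), 0.000048),  # >3,125K
]

def annual_cost(cpu_hours, gpu_hours=0):
    """Annual contribution in dollars; a GPU hour is billed at 3x a CPU hour."""
    hours = cpu_hours + 3 * gpu_hours  # assumption: GPU hours folded in at 3x
    cost, lower = 0.0, 0
    for upper, rate in TIERS:
        if hours <= lower:
            break
        cost += (min(hours, upper) - lower) * rate
        lower = upper
    return cost

# Reproduces the worked example above:
print(f"${annual_cost(3_125_000):,.2f}")  # $2,400.00
</code>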

We currently have about 1,450 physical CPU cores, 60 GPUs, 520 GB of GPU memory and 8,560 GB of CPU memory, provided by about 120 compute and login nodes. Scratch spaces are provided local to compute nodes (2-5 TB) or over the network via NFS (55 TB). Home directories are under quota (10 TB), but these quotas will disappear in the future with the TrueNAS/ZFS appliance (190 TB, 475 TB effective assuming a compression rate of 2.5x). A guide can be found here: [[cluster:...]]