===== Structure and History of HPCC =====

As promised at the CLAC HPC Mindshare event at Swarthmore College, Jan 2020, here are the Funding and Priority Policies with some context around them. Questions/

==== History ====

In 2006, 4 Wesleyan

The Advisory Group meets with the user base yearly during the reading week of the Spring semester (early May), before everybody scatters for the summer. At this meeting, the hpcadmin reviews the past year and previews the coming year, and the user base contributes feedback on progress and problems.

==== Structure ====

The Wesleyan HPCC is part of the **Scientific Computing and Informatics Center** ([[https://

The QAC has an [[https://

==== Funding Policy ====

After an 8-year run of the HPCC, and a drying up of grant opportunities at NSF, it was decided to explore self-funding so the HPCC effort could continue without external dependencies on funds. A report was compiled of the HPCC's progress, including topics such as Publications,

Several months later a pattern emerged.

In order for the HPCC user base to raise $15K annually, CPU and GPU hourly usage monitoring was deployed

Here is queue usage for 2019 [[cluster:

Contribution Scheme for 01 July 2019 onwards\\
Hours (K) - Rate ($/CPU Hour)\\
  * 0-5 = Free
  * >5-25 = 0.03
  * >25-125 = 0.006
  * >125-625 = 0.0012
  * >625-3125 = 0.00024
  * >3125 = 0.000048
A CPU usage of 3,125,000 hours/year would cost $2,400.00\\
A GPU hour of usage is charged at 3x the CPU hourly rate.\\
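
To make the tiers concrete, here is a minimal sketch (assuming Python; this is not a site-provided tool) that computes the annual contribution for a given number of CPU hours. The tier boundaries and rates come from the table above; the function name and the choice to count a GPU hour as three CPU-equivalent hours are illustrative assumptions based on the 3x note.
<code python>
# Minimal sketch, not an official HPCC tool: estimate the annual contribution
# from the tiered rates listed above.  Tier boundaries/rates are from the
# table; billing a GPU hour as 3 CPU-equivalent hours is an assumption based
# on the "3x the CPU hourly rate" note.

TIERS = [
    (5_000,        0.0),        # 0-5K hours: free
    (25_000,       0.03),       # >5K-25K
    (125_000,      0.006),      # >25K-125K
    (625_000,      0.0012),     # >125K-625K
    (3_125_000,    0.00024),    # >625K-3,125K
    (float("inf"), 0.000048),   # >3,125K
]

def contribution(cpu_hours, gpu_hours=0):
    """Dollar contribution for one year of usage (GPU hours weighted 3x)."""
    hours = cpu_hours + 3 * gpu_hours
    cost, lower = 0.0, 0.0
    for upper, rate in TIERS:
        if hours <= lower:
            break
        # charge only the portion of usage that falls inside this tier
        cost += (min(hours, upper) - lower) * rate
        lower = upper
    return cost

print(contribution(3_125_000))   # 2400.0, matching the example above
</code>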

We currently have about 1,450 physical CPU cores (all Xeon), 72 GPUs (20x K20, 4x GTX2018Ti, 48x RTX2080S), 520 GB of GPU memory, and 8,560 GB of CPU memory, provided by about 120 compute nodes plus login nodes. Scratch spaces are provided local to the compute nodes (2-5 TB) or over the network via NFS (55 TB); consult [[cluster:

==== Priority Policy ====

This policy was put in place about 3 years ago to deal with the issues surrounding new infusions of money from, for example, new faculty "

There are a few principles in this Priority Access Policy:
  - Contributions,
  - Priority access is granted for 3 years starting at the date of deployment (user access).
  - Only applies to newly purchased resources, which should be under warranty in the priority period.

**The main objective is to build an HPCC community resource for all users with no (permanent) special treatment of any subgroup.**

The first principle implies that all users have access to the new resource(s) immediately when deployed. Root privilege is for hpcadmin only; sudo privilege may be used if/when necessary to achieve some purpose. The hpcadmin will maintain the new resource(s), while configuration(s) of the new resource(s) will be done by consent of all parties involved. Final approval by the Advisory Group initiates deployment activities.

The second principle grants priority access to certain resource(s) for a limited time to a limited group. The same PI/users relationship will be used as is used in the CPU/GPU Usage Contribution scheme. Priority access specifically means: if during the priority period the priority members' jobs go into pending mode for more than 24 hours, the hpcadmin will clear compute nodes of running jobs and force those pending jobs to run.
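
Purely as an illustration of that rule (this is not the hpcadmin's actual script, and it is scheduler-agnostic), the decision could look like the sketch below; ''Job'', ''preempt_and_run'', and the sample data are hypothetical stand-ins for whatever the local scheduler provides.
<code python>
# Hypothetical sketch of the 24-hour preemption rule described above.
# Job, preempt_and_run(), and the toy data are illustrative placeholders,
# not real HPCC tooling.
from dataclasses import dataclass

PENDING_LIMIT_HOURS = 24          # stated in the policy above

@dataclass
class Job:
    jobid: int
    user: str
    hours_pending: float          # how long the job has sat in pending state

def preempt_and_run(job):
    # placeholder: clear running jobs from enough priority nodes, then force-run
    print(f"preempting nodes for pending job {job.jobid} ({job.user})")

def enforce_priority(pending_jobs, priority_members):
    """Apply the policy: priority jobs pending > 24h get nodes cleared for them."""
    for job in pending_jobs:
        if job.user in priority_members and job.hours_pending > PENDING_LIMIT_HOURS:
            preempt_and_run(job)

# toy example: alice is a priority member whose job has waited 30 hours
enforce_priority([Job(101, "alice", 30.0), Job(102, "bob", 5.0)], {"alice"})
</code>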

All users should be aware this may happen, so please checkpoint your jobs with a checkpoint interval
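
Because checkpointing is application-specific, here is only a generic sketch of the idea (in Python; the file name, the 2-hour interval, and the state layout are assumptions, not site conventions): write progress to disk on a fixed interval and resume from that file at startup, so a preempted job loses at most one interval of work.
<code python>
# Generic checkpoint/restart sketch (illustrative only; not a site-provided tool).
import os
import pickle
import time

CHECKPOINT = "state.pkl"        # hypothetical checkpoint file in the job directory
INTERVAL   = 2 * 60 * 60        # write a checkpoint every 2 hours (assumption)
TOTAL_STEPS = 1_000_000         # placeholder amount of work

def load_state():
    """Resume from the checkpoint if one exists, otherwise start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"step": 0}

def save_state(state):
    """Write the checkpoint atomically so a kill mid-write cannot corrupt it."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CHECKPOINT)

state = load_state()
last = time.time()
for step in range(state["step"], TOTAL_STEPS):
    # ... one unit of real work would go here ...
    state["step"] = step + 1
    if time.time() - last >= INTERVAL:
        save_state(state)
        last = time.time()
save_state(state)
</code>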

==== General ====

There are 557 lines in ''/

Rstore is a platform for storing static research data. The hope is to move this static data off the HPCC and mount it read-only back onto the HPCC login nodes.

The Data Center has recently been renovated, so the HPCC has no more cooling problems (it used to be that, in the event of a cooling tower failure, the HPCC would push temps above 85F within 3 hours). No more. We have sufficient rack space (5 racks) and power for expansion. For details on that "live renovation"