\\
**[[cluster:0|Back]]**

===== Structure and History of HPCC =====

==== History ====

In 2006, 4 Wesleyan faculty members approached ITS with a proposal to centrally manage a high performance computing center (HPCC), seeding the effort with an NSF grant (about $190K). ITS offered 0.5 FTE for a dedicated "hpcadmin". An Advisory Group was formed by these faculty plus the hpcadmin (5 members, not necessarily our current "power users"). Another NSF grant award was added in 2010 (about $105K). An alumni donation followed in 2016 (about $10K). In 2018 the first instance of "faculty startup monies" was contributed to the HPCC (about $92K, see "Priority Access" below). In 2019, a TrueNAS/ZFS appliance was purchased (about $40K), followed in 2020 by a GPU expansion project (about $96K). The latter two were self-funded expenditures, see "Funding Policy" below. To view the NSF grants visit [[cluster:169|Acknowledgement]].

The Advisory Group meets with the user base yearly in reading week of the Spring semester (early May), before everybody scatters for the summer. At this meeting the hpcadmin reviews the past year and previews the coming year, and users contribute feedback on progress and problems.

==== Structure ====

The Wesleyan HPCC is part of the **Scientific Computing and Informatics Center** ([[https://www.wesleyan.edu/scic/| SCIC ]]). The SCIC project leader is appointed by the Director of the **Quantitative Analysis Center** ([[https://www.wesleyan.edu/qac/| QAC ]]). The Director of the QAC reports to the Associate Provost. The hpcadmin reports directly to the ITS Deputy Director and indirectly to the QAC Director.

The QAC has an [[https://www.wesleyan.edu/qac/apprenticeship/index.html|Apprenticeship]] Program in which students are trained in Linux and several programming languages of their choice, among other options. From this pool of students some become the QAC and SCIC helpdesk staff and tutors.

==== Funding Policy ====

After an 8 year run of the HPCC, and a drying up of grant opportunities at NSF, it was decided to explore self-funding so the HPCC effort could continue without dependence on external funds. A report was made of the HPCC's progress, covering topics such as Publications, Citations, Honors Theses, Growth in Jobs Submitted, Pattern of Pending Jobs, and General Inventory. The report summary can be viewed at [[cluster:130| Provost Report ]]. This report was discussed between the Provost and the HPC Advisory Group.

Several months later a pattern emerged. The Provost would contribute $25K annually if the HPC user base raised $15K annually. That would amount to $160K in 4 years, enough for a hardware refresh or a new hardware acquisition. Finance also contributed $10K for maintenance such as failed disks, network switches, etc., but these funds do not "roll over": use it or lose it. All funds start July 1st.

In order for the HPC user base to raise $15K annually, CPU and GPU hourly usage charging was deployed. A dictionary is maintained listing PIs and their members (student majors, lab students, grads, PhD candidates, collaborators, etc.). Each PI then contributes quarterly to the user fund based on a scheme yielding $15K annually.

Here are 2019's queue usage [[cluster:188|2019 Queue Usage]] and the 2019 contribution scheme.

Contribution Scheme for 01 July 2019 onwards\\
Hours (K) - Rate ($/CPU Hour)\\
  * 0-5 = Free
  * >5-25 = 0.03
  * >25-125 = 0.006
  * >125-625 = 0.0012
  * >625-3125 = 0.00024
  * >3125 = 0.000048
Rates apply marginally, to the hours falling within each bracket. A CPU usage of 3,125,000 hours/year would cost $2,400.00.\\
A GPU hour of usage is billed at 3x the CPU hourly rate.\\
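The scheme above can be sketched as a small calculator. This is an illustrative sketch, assuming the rates apply marginally (each bracket's rate bills only the hours falling inside that bracket), which reproduces the $2,400 figure for 3,125,000 hours; the `gpu_cost` helper assumes GPU hours run through the same brackets at 3x the rate.

```python
# Bracket upper bounds are in hours; rates are $/CPU-hour
# (from the 01 July 2019 contribution scheme above).
BRACKETS = [
    (5_000, 0.0),               # 0-5K hours: free
    (25_000, 0.03),             # >5-25K
    (125_000, 0.006),           # >25-125K
    (625_000, 0.0012),          # >125-625K
    (3_125_000, 0.00024),       # >625-3125K
    (float("inf"), 0.000048),   # >3125K
]

def cpu_cost(hours: float) -> float:
    """Annual contribution ($) for a given number of CPU hours,
    billing each bracket's hours at that bracket's rate."""
    cost, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if hours <= lower:
            break
        cost += (min(hours, upper) - lower) * rate
        lower = upper
    return cost

def gpu_cost(hours: float) -> float:
    # Assumption: GPU hours use the same brackets at 3x the CPU rate.
    return 3 * cpu_cost(hours)

print(round(cpu_cost(3_125_000), 2))  # 2400.0
```

Each of the four paid brackets below 3,125K hours contributes exactly $600 (e.g. 20K hours at $0.03, 100K hours at $0.006), which is why the total comes to $2,400.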

We currently have about 1,450 physical CPU cores, 60 GPUs, 520 GB of GPU memory and 8,560 GB of CPU memory provided by about 120 compute nodes and login nodes. Scratch spaces are provided local to compute nodes (2-5 TB) or over the network via NFS (55 TB). Home directories are under quota (10 TB), but these quotas will disappear in the future with the TrueNAS/ZFS appliance (190 TB, 475 TB effective assuming a compression rate of 2.5x). A guide can be found at [[cluster:82|Brief Description]] and the software is located at [[cluster:73|CD-HIT]].

  
===== Priority Access =====
cluster/189.txt ยท Last modified: 2024/02/12 16:47 by hmeij07