\\
**[[cluster:0|Back]]**

===== Structure and History of HPCC =====

==== History ====

In 2006, 4 Wesleyan faculty members approached ITS with a proposal to centrally manage a high performance computing center (HPCC), seeding the effort with an NSF grant (about $190K). ITS offered 0.5 FTE for a dedicated "hpcadmin". An Advisory Group was formed by these faculty plus the hpcadmin (5 members, not necessarily our current "power users"). Another NSF grant award was added in 2010 (about $105K). An alumni donation followed in 2016 (about $10K). In 2018 the first instance of "faculty startup monies" was contributed to the HPCC (about $92K, see "Priority Access" below). In 2019, a TrueNAS/ZFS appliance was purchased (about $40K), followed in 2020 by a GPU expansion project (about $96K). The latter two were self-funded expenditures, see "Funding Policy" below. To view the NSF grants, visit [[cluster:169|Acknowledgement]].

==== Structure ====

The Wesleyan HPCC is part of the **Scientific Computing and Informatics Center** ([[https://www.wesleyan.edu/scic/| SCIC ]]). The SCIC project leader is appointed by the Director of the **Quantitative Analysis Center** ([[https://www.wesleyan.edu/qac/| QAC ]]). The Director of the QAC reports to the Associate Provost. The hpcadmin reports directly to the ITS Deputy Director and indirectly to the QAC Director.

Notes for content still to be added to this page:

  * contribution scheme, 2016 and 2019
  * passwd line count: 25 coll, 200 class
  * 5-year review with Provost, link to paper (honors theses)
  * funding policy
  * priority access policy
  * VA questions
  * link to 2019 queue usage stats
  * user base stats, annual meeting, spring reading week
  * Advisory Group details, administrative
  * HPCC stats: CPU cores, GPUs, memory, HDD (rough), link to guide
  * latest deployment: NVIDIA GPU Cloud on premise (Docker containers), link
  * funding model, contribution scheme, contribution code ...
  * links to historical use, usage 2019
  * 3x GPU vs CPU
  * script preempts nodes every 2 hours

  
===== Priority Access =====