cluster:189 [2020/01/12 20:56] hmeij07
The Wesleyan HPCC is part of the **Scientific Computing and Informatics Center** ([[https://www.wesleyan.edu/scic/|SCIC]]). The SCIC project leader is appointed by the Director of the **Quantitative Analysis Center** ([[https://www.wesleyan.edu/qac/|QAC]]). The Director of the QAC reports directly to the Associate Provost. The hpcadmin reports directly to the ITS Deputy Director and indirectly to the QAC Director.

The QAC runs an [[https://www.wesleyan.edu/qac/apprenticeship/index.html|Apprenticeship]] Program in which students are trained in Linux, several programming languages of their choice, and other options (such as SQL or GIS). The hope is that some students from this pool go on to staff the QAC and SCIC help desks and serve as tutors.

==== Funding Policy ====
A gpu hour of usage is billed at 3x the cpu hourly rate.\\

We currently have about 1,450 physical cpu cores (all Xeon), 72 gpus (20x K20, 4x GTX1080Ti, 48x RTX2080S), 520 gb of gpu memory, and 8,560 gb of cpu memory, provided by about 120 compute and login nodes. Scratch spaces are provided local to the compute nodes (2-5 tb) or over the network via NFS (55 tb); consult [[cluster:142|Scratch Spaces]]. Home directories are under quota (10 tb), but these quotas will disappear in the future with the TrueNAS/ZFS appliance (190 tb, 475 tb effective assuming a compression rate of 2.5x; consult [[cluster:186|Home Dir Server]]). An HPCC guide can be found at [[cluster:126|Brief Guide to HPCC]], and the (endless!) software list is located at [[cluster:73|Software Page]]. Nodes run the CentOS 6.10 or 7.6 flavors of the OS.

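
As an illustration of the billing rule above, the sketch below computes a job's charge with gpu hours billed at 3x the cpu hourly rate. The rate used is a made-up placeholder, not the actual HPCC rate.

```python
# Minimal sketch of the stated billing rule: a gpu hour costs 3x the
# cpu hourly rate. CPU_RATE is a hypothetical placeholder value.

CPU_RATE = 0.05      # hypothetical dollars per cpu core hour
GPU_MULTIPLIER = 3   # a gpu hour is 3x the cpu hourly rate

def usage_charge(cpu_hours, gpu_hours, cpu_rate=CPU_RATE):
    """Return the total charge for a job's cpu and gpu hours."""
    return cpu_hours * cpu_rate + gpu_hours * GPU_MULTIPLIER * cpu_rate

# Example: 100 cpu core hours plus 10 gpu hours
print(usage_charge(100, 10))  # 100*0.05 + 10*3*0.05 = 6.5
```

Swapping in the real cpu rate gives the actual charge.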
==== Priority Policy ====

This policy was put in place about three years ago to deal with the issues surrounding infusions of new money from, for example, new faculty "startup" funds, new grant money (NSF, NIH, DoD, and others), or donations made to the HPCC for a specific purpose (such as GTX gpus for Amber). All users have the same priority, and all queues have the same priority (except the "test" queue, which has the highest priority). The scheduler policy is FIFO overlaid with Round Robin. There is no "wall time" limit on any queue.
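
One way to picture "FIFO overlaid with Round Robin" is per-user FIFO queues dispatched round-robin across users, so no single user monopolizes dispatch. The sketch below is an illustration of that reading only, not the actual scheduler configuration.

```python
# Illustrative sketch: each user's jobs stay in submission (FIFO)
# order, while dispatch cycles round-robin across users. This does
# not reflect the HPCC's real scheduler configuration.

from collections import deque

def dispatch_order(jobs_by_user):
    """jobs_by_user maps user -> list of jobs in submission order.
    Returns the order in which jobs would be dispatched."""
    queues = {u: deque(jobs) for u, jobs in jobs_by_user.items()}
    order = []
    while any(queues.values()):
        for user, q in queues.items():     # round robin across users
            if q:
                order.append(q.popleft())  # FIFO within each user
    return order

print(dispatch_order({"alice": ["a1", "a2"], "bob": ["b1"]}))
# ['a1', 'b1', 'a2']
```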

There are a few principles in this Priority Access Policy: