cluster:189 [2020/01/12 20:56] hmeij07 [Funding Policy]
A gpu hour of usage is 3x the cpu hourly rate.\\

We currently have about 1,450 physical cpu cores (all Xeon), 72 gpus (20x K20, 4x GTX2018Ti, 48x RTX2080S), 520 gb of gpu memory, and 8,560 gb of cpu memory, provided by about 120 compute and login nodes. Scratch spaces are provided either local to the compute nodes (2-5 tb) or over the network via NFS (55 tb); consult [[cluster:142|Scratch Spaces]]. Home directories are under quota (10 tb), but those quotas will disappear in the future with the TrueNAS/ZFS appliance (190 tb, 475 tb effective assuming a compression rate of 2.5x; consult [[cluster:186|Home Dir Server]]). An HPCC guide can be found at [[cluster:126|Brief Guide to HPCC]] and the (endless!) software list is located at [[cluster:73|Software Page]]. Nodes run either the CentOS 6.10 or 7.6 flavor of the OS.

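The 3x gpu multiplier above can be sketched as simple arithmetic. This is a hypothetical illustration only; the function name, parameters, and the cpu rate of 1.0 are assumptions, not taken from the HPCC accounting system.

```python
# Hypothetical sketch of the charging rule: a gpu hour of usage
# costs 3x the cpu hourly rate. The rate value is an assumption.
GPU_MULTIPLIER = 3

def usage_charge(cpu_hours, gpu_hours, cpu_rate=1.0):
    """Return the total charge for a mix of cpu and gpu hours."""
    return cpu_rate * (cpu_hours + GPU_MULTIPLIER * gpu_hours)

# 100 cpu hours plus 10 gpu hours bill like 130 cpu hours.
print(usage_charge(100, 10))  # 130.0
```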
==== Priority Policy ====

This policy was put in place about 3 years ago to deal with the issues surrounding infusions of new monies from, for example, new faculty "startup monies", new grant monies (NSF, NIH, DoD, others), or donations made to the HPCC for a specific purpose (such as GTX gpus for Amber). All users have the same priority. All queues have the same priority (except the "test" queue, which has the highest priority). The scheduler policy is FIFO overlaid with Round Robin. There is no "wall time" limit on any queue.

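One way to picture "FIFO overlaid with Round Robin" is a scheduler that cycles over users in turn while preserving each user's own submission order. This is a minimal sketch under that assumed interpretation, not the actual scheduler configuration.

```python
# Minimal sketch: round robin across users, FIFO within each user's
# own jobs. Purely illustrative of the policy described above.
from collections import OrderedDict, deque

def dispatch_order(jobs):
    """jobs: list of (user, job_id) tuples in submission (FIFO) order.
    Returns job_ids in the order they would be dispatched."""
    queues = OrderedDict()
    for user, job in jobs:
        queues.setdefault(user, deque()).append(job)
    order = []
    while queues:
        for user in list(queues):                 # round robin over users
            order.append(queues[user].popleft())  # FIFO per user
            if not queues[user]:
                del queues[user]
    return order

# User "a" submitted three jobs first, yet "b" and "c" each get a turn.
print(dispatch_order([("a", 1), ("a", 2), ("b", 3), ("a", 4), ("c", 5)]))
# [1, 3, 5, 2, 4]
```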
There are a few Principles in this Priority Access Policy