cluster:189 [2020/01/11 17:57] hmeij07
In 2006, four Wesleyan faculty members approached ITS with a proposal to centrally manage a high performance computing center (HPCC), seeding the effort with an NSF grant (about $190K). ITS offered 0.5 FTE for a dedicated "
+ | |||
The Advisory Group meets with the user base yearly in reading week of the Spring semester (early May), before everybody scatters for the summer. At this meeting the hpcadmin reviews the past year and previews the coming year, and users contribute feedback on progress and problems.
==== Structure ====
  * >3125 = 0.000048
A cpu usage of 3,125,000 hours/year would cost $2,400.00 \\
A gpu hour of usage is 3x the cpu hourly rate. \\
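A minimal sketch of the rate arithmetic above. The `cpu_rate` is the >3125 marginal rate from the tier list and the 3x gpu multiplier comes from the line above; the tier boundaries, units (dollars per hour), and variable names are assumptions for illustration, not the actual billing script.

```python
# Marginal-rate sketch (assumed units: dollars per hour of usage).
cpu_rate = 0.000048      # marginal cpu rate for the >3125 usage tier (from the list above)
gpu_rate = 3 * cpu_rate  # a gpu hour is billed at 3x the cpu hourly rate

print(f"{gpu_rate:.6f}")  # -> 0.000144
```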
- | |||
- | priority access policy | ||
user base stats, annual meeting, spring reading week
hpcc stats cpu cores, gpus, mem, hdd (rough) link to guide
latest deployment: nvidia gpu cloud on premise (docker containers) link