cluster:189 [2020/01/11 21:27] hmeij07
cluster:189 [2020/01/12 14:59] hmeij07 [Priority Policy]
===== Structure and History of HPCC =====
As promised at the CLAC HPC Mindshare event at Swarthmore College in January 2020, here are the Funding and Priority Policies with some context around them. Questions/
==== History ====
In 2006, 4 Wesleyan faculty members approached ITS with a proposal to centrally manage a high performance computing center (HPCC), seeding the effort with an NSF grant (about $190K: two racks full of Dell PE1950 servers, a total of 256 physical CPU cores on Infiniband). ITS offered 0.5 FTE for a dedicated "
The Advisory Group meets with the user base yearly during the reading week of the Spring semester (early May), before everybody scatters for the summer. At this meeting, the hpcadmin reviews the past year and previews the coming year, and the user base contributes feedback on progress and problems.
==== Priority Policy ====
This policy was put in place about 3 years ago to deal with the issues surrounding infusions of new money from, for example, new faculty "
There are a few principles in this Priority Access Policy:
  - Contributions,
  - Priority access is granted for 3 years starting at the date of deployment (user access).
  - Only applies to newly purchased resources, which should be under warranty during the priority period.
**The main objective is to build an HPCC community resource for all users with no (permanent) special treatment of any subgroup.**
The first principle implies that all users have access to the new resource(s) immediately
The second principle grants priority access to certain resource(s) for a limited time to a limited group. The same PI/users relationship will be used as in the CPU/GPU Usage Contribution scheme. Priority access means that if, during the priority period, the priority members' jobs go into pending mode for more than 24 hours, the hpcadmin will clear compute nodes of running jobs and force those pending jobs to run.
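The 24-hour pending rule above can be sketched as a small check. This is only an illustration of the policy, not the scheduler's actual logic; the function name and flag are made up here:

```python
from datetime import datetime, timedelta

PENDING_LIMIT = timedelta(hours=24)  # pending time that triggers intervention

def needs_preemption(pending_since: datetime, is_priority: bool,
                     now: datetime) -> bool:
    """Return True when a priority member's job has been pending longer
    than 24 hours, i.e. when the hpcadmin would clear nodes for it."""
    return is_priority and (now - pending_since) > PENDING_LIMIT

now = datetime(2020, 1, 12, 12, 0)
# A priority job pending for 26 hours exceeds the limit.
print(needs_preemption(datetime(2020, 1, 11, 10, 0), True, now))   # True
# A non-priority job never triggers forced preemption.
print(needs_preemption(datetime(2020, 1, 11, 10, 0), False, now))  # False
```

Non-priority jobs simply wait in the queue; only the limited priority group, for the limited period, can displace running work.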
All users should be aware that this may happen, so please checkpoint your jobs with a checkpoint interval of 24 hours. Please consult
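A minimal sketch of application-level checkpointing at a 24-hour interval, assuming the job can serialize its own state; the file name, the pickled state, and the counting loop are hypothetical stand-ins for a real workload:

```python
import os
import pickle
import time

CHECKPOINT = "state.pkl"   # hypothetical checkpoint file in the job directory
INTERVAL = 24 * 3600       # checkpoint every 24 hours, in seconds

def load_state():
    """Resume from a previous checkpoint if one exists."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"step": 0}     # fresh start

def save_state(state):
    """Write the checkpoint via a temp file so a kill mid-write is safe."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CHECKPOINT)  # atomic rename on POSIX filesystems

state = load_state()
last = time.time()
while state["step"] < 1000:      # stand-in for the real computation
    state["step"] += 1
    if time.time() - last >= INTERVAL:
        save_state(state)
        last = time.time()
save_state(state)                # final checkpoint at completion
```

A job written this way loses at most 24 hours of work when its node is cleared for priority jobs: on resubmission it reloads `state.pkl` and continues from the last saved step.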
==== General ====
There are 557 lines in ''/
+ | |||
Rstore is a platform for storing static research data. The hope is to move static data off the HPCC and mount it read-only back onto the HPCC login nodes.
+ | |||
The Data Center has recently been renovated, so the HPCC no longer has cooling or power problems (it used to be that, in the event of a cooling tower failure, the HPCC would push temperatures above 85F within 3 hours). We have sufficient rack space (5 racks) and power for expansion. For details on that "live renovation"