cluster:153 [2017/03/01 13:31] hmeij07
cluster:153 [2017/12/06 08:51] hmeij07
\\
**[[cluster:
https://

Due Jan 31, 2018; totally refocused on network, killing the ideas on this page
--- //
==== NSF CC* ====
  * Create a $1 Million+ CC* proposal to meet the research, staff and cyberinfrastructure
  * Needs/Wants of small, primarily undergraduate,
  * To share faculty expertise (current research projects and new collaborations)
  * Establish a single, participants'
  * Provide for multiple off-site, geographically dispersed, backup storage targets
  * Establish a global Data Management Plan for all institutions
  * Data Driven Multi-Campus/
  * <del>RFP mid March?</del>
  * RFI CC* (due April 5th, 2017) at https://
  * Pull Wesleyan brain storm meeting together, scheduled for 3/21/2017
=== 2016 ===
  * RFP CC* at https://
  * Henk to find an example of somewhat larger colleges'
    * these are all very large, we should also investigate XSEDE and NSF/
      * https://
      * https://
      * https://
    * Probably more aligned with our goals, click on PDF
      * http://
    * Regional Access, a report to NSF, good read
      * https://
      * key part: the local aspect is important (flexibility and personalized attention) to ensure researcher success and productivity ... collaborating at the campus and regional levels is an important aspect in building the services
    * XSEDE https://
      * for research this involves writing XSEDE proposals to get time allocations,
    * Jetstream (allocates XSEDE resources) https://
      * this also requires writing proposals for time allocations,
=== Thoughts ===
**Until we have the 2017 CC* RFP announcement**, this is a list of random thoughts to keep track of.
  * 2016 Parsing of proposal language (Science-driven requirements are the primary motivation for any proposed activity.)
    * Data Driven Multi-Campus/
    * NSF strongly urges the community to think broadly and not simply rely on traditional models when considering multi-campus and/or multi-institutional cyberinfrastructure
    * Proposals are expected to be science-driven
    * Proposals ... significant impact on the multi-campus,
    * methods for assured federated access
    * Multi-institution/
    * across a range of disciplines,
    * Cyber Team, 2-4 awards,
    * This program area seeks to build centers of expertise and knowledge in cyberinfrastructure
    * Proposals in this area should describe the multi-institutional science-driven needs
    * Proposals ... engagement, interactions,
    * Proposals ... areas of expertise, such as networking, distributed data management, and scientific computing.
    * Proposals may request up to four full-time equivalents (FTE) for up to three years (inside of $1.5M???)
    * Proposals should describe plans for broadening participation,
  * Leverage a common HPC environment among small, northeast colleges
    * we do science too (compare to larger colleges)
    * need to drive proposal with unique science efforts by each college <--- important!
    * think about how to build the data-driven multi-campus effort
    * 1-3 sysadmin FTEs covering 3-6 colleges for 3 years
    * redundancy in sysadmin coverage, round robin
    * each member to assess how to provide funding past year 3
  * Possible Members (underlined=informally contacted, bolded=interested party):
    * Hamilton,
    * **Lafayette**,
    * **Middlebury**,
    * **Swarthmore**,
    * Trinity, image analysis
    * __Wellesley__,
    * **Wesleyan**,
    * **Williams**,
  * Each member will do an internal survey of local computing and storage desires, put together 1-2 racks
    * gpu, cpu, hybrid of such, visualization,
    * infiniband enabled /home storage with ipoib, head node (both on UPS), sandbox
    * ethernet storage for backup with snapshots (LVM or rsnapshot.org) (on UPS, switches)
    * possible off-site backup of data, round robin, among members
  * Hardware desires/
    * Each site should offer something unique
    * Shared with whole community of members
    * Custom hardware requests highlight each member'
  * Leverage a common HPC deployment software stack
    * OpenHPC (openhpc.community) with shared deployment modules (CentOS 7.x/
    * Application software modules to deploy per members'
    * Commercial software? (License type would need to be discussed with vendors)
    * Shared skill set with experts in certain specific areas and tools
    * Facilitate classroom usage (local temp accounts recycled by semester?)
  * User account creation
    * Participants can request accounts on each members'
    * Local accounts with request workflow, SSH keys federation <
    * Collaborative non-member account requests?
  * Meetings, of course
    * of sysadmin group
    * of advisory group
  * Data Management Plan (DMP)
    * To satisfy NSF public
    * One site for all (Islandora, Dataverse, or ???). One members'
    * ...
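The "ethernet storage for backup with snapshots" bullet mentions rsnapshot.org. A minimal sketch of that approach on the backup box, pulling each member's /home over ssh; the hostnames, paths, and retention counts here are invented examples, not decided values:

```
# /etc/rsnapshot.conf sketch -- rsnapshot requires TABS between fields
config_version	1.2
snapshot_root	/backup/snapshots/
cmd_rsync	/usr/bin/rsync
cmd_ssh	/usr/bin/ssh
# keep 7 daily and 4 weekly rotating snapshots (hard-link deduplicated)
retain	daily	7
retain	weekly	4
# pull each member's /home (hostnames are hypothetical placeholders)
backup	root@hpc.member-a.edu:/home/	member-a/
backup	root@hpc.member-b.edu:/home/	member-b/
```

Runs would be driven from cron, e.g. `rsnapshot daily`; LVM snapshots would be the alternative mentioned in the same bullet.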
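The "possible off-site backup of data, round robin, among members" bullet can be sketched as a ring: each member ships its backups to the next member in an agreed list, wrapping at the end, so every site has exactly one off-site copy. A small POSIX-sh sketch (member names are placeholders):

```shell
#!/bin/sh
# Round-robin backup ring: member i backs up to member i+1,
# and the last member wraps around to the first.
members="wesleyan williams middlebury swarthmore"

set -- $members          # expose members as positional parameters $1..$n
n=$#
i=1
pairs=""
for site in $members; do
    next_i=$(( i % n + 1 ))          # wrap last -> first
    eval "target=\${$next_i}"        # look up the next member by position
    pairs="$pairs$site->$target "
    i=$(( i + 1 ))
done
echo "$pairs"
```

With the four example members this prints the ring `wesleyan->williams williams->middlebury middlebury->swarthmore swarthmore->wesleyan`; the actual transfer per pair could then be the rsnapshot/rsync pull discussed above.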
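The "local temp accounts recycled by semester?" idea could lean on account expiry dates. A dry-run sketch that only prints the shadow-utils `useradd` commands an admin would review before running; the course prefix, account count, and semester end date are invented examples:

```shell
#!/bin/sh
# Dry-run sketch: generate expiring classroom accounts for one course.
# Nothing is created; the commands are printed for review.
course="chem101"          # hypothetical course prefix
count=3                   # number of student accounts
expires="2018-05-31"      # example semester end date

i=1
cmds=""
while [ "$i" -le "$count" ]; do
    user=$(printf '%s_%02d' "$course" "$i")   # chem101_01, chem101_02, ...
    # --expiredate disables the account after the given date
    cmds="$cmds
useradd --expiredate $expires --comment 'temp class account' $user"
    i=$(( i + 1 ))
done
echo "$cmds"
```

At semester rollover the expired accounts could be purged (e.g. `userdel -r`) and the names recycled for the next class.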