NSF CC*
- Create a $1 Million+ CC* proposal to meet the research, staff, and cyberinfrastructure needs/wants of small, primarily undergraduate, northeast Higher Ed institutions
- To share faculty expertise (current research projects and new collaborations)
- To share staff expertise (deploy and support, sustain after grant period)
- To share unique computational capabilities in a variety of disciplines
- Establish a single, participant-wide user directory with global access
- Provide for multiple off-site, geographically dispersed, backup storage targets
- Establish a global Data Management Plan for all participating institutions
2017
- Data-Driven Multi-Campus/Multi-Institution Model and/or Cyber Team
- RFP mid-March? More like May/June (found the 2016 announcement via AdvancedClustering, dated 06/02/2016)
- RFI CC* (due April 5th, 2017) at https://www.nsf.gov/pubs/2017/nsf17031/nsf17031.jsp
- Pull together the Wesleyan brainstorming meeting, scheduled for 3/21/2017
2016
- Henk to find an example of a successful 2016 proposal from somewhat larger colleges
- these are all very large; we should also investigate XSEDE and NSF/Jetstream (cloud based)
- for research, this involves writing XSEDE proposals to get time allocations, awarded quarterly
- Jetstream (allocates XSEDE resources) https://jetstream-cloud.org/
- this also requires writing proposals for time allocations, on half-year cycles, not sure (?)
Thoughts
Until we have a 2017 CC* RFP announcement, this is a list of random thoughts to keep track of.
- Leverage a common HPC environment among small, northeast colleges
- we do science too (compared to larger colleges)
- 1-3 sysadmin FTEs covering 3-6 colleges for 3 years
- redundancy in sysadmin coverage, round robin
- each member to assess how to provide funding past year 3
- Possible Members (underlined=informally contacted, bolded=interested party):
- Hamilton, Lafayette, Middlebury, Trinity, Wellesley, Wesleyan, Williams
- Each member will do an internal survey of local computing and storage desires and put together 1-2 racks
- GPU, CPU, or a hybrid of such, visualization, deep learning, etc… (no UPS for compute nodes); see the Slurm partition sketch after this list
- InfiniBand-enabled /home storage with IPoIB, head node (both on UPS), sandbox
- Ethernet storage for backup with snapshots (LVM or rsnapshot.org) (on UPS, plus switches); see the rsnapshot sketch after this list
- possible off-site backup of data, round robin, among members
- Hardware desires/configs to be vetted to avoid too much redundancy
- Each site should offer something unique
- Shared with whole community of members
- Custom hardware requests highlight each member's area of expertise
- Leverage a common HPC deployment software stack
- OpenHPC (openhpc.community) with shared deployment modules (CentOS 7.2/Warewulf/Slurm)
- Application software modules to deploy per members' requirements; see the modulefile sketch after this list
- Commercial software? (License type would need to be discussed with vendors)
- Shared skill set with experts in certain specific areas and tools
- Facilitate classroom usage (local temp accounts recycled by semester?); see the account-creation sketch after this list
- User account creation
- Participants can request accounts on each member's installation
- Local accounts with request workflow or InCommon or …?
- Collaborative non-member account requests?
- Meetings, of course
- of sysadmin group
- of advisory group
- Data Management Plan
- To satisfy NSF public data access policy and distribution
- One site for all (Islandora, DSpace, ???). One member's expertise?
- …
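Sketches
A rough sketch of how a mixed CPU/GPU rack could be exposed as Slurm partitions on a member site; node names, counts, memory figures, and time limits are made up for illustration, not a recommendation.
<code>
# slurm.conf fragment (hypothetical nodes and values)
# GPU nodes also need GresTypes=gpu in slurm.conf and matching gres.conf entries
NodeName=cpu[01-16] CPUs=32 RealMemory=128000            State=UNKNOWN
NodeName=gpu[01-04] CPUs=24 RealMemory=192000 Gres=gpu:2  State=UNKNOWN

PartitionName=cpu Nodes=cpu[01-16] Default=YES MaxTime=7-00:00:00 State=UP
PartitionName=gpu Nodes=gpu[01-04] Default=NO  MaxTime=2-00:00:00 State=UP
</code>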
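A minimal sketch of an rsnapshot configuration covering both the local snapshot backups and the off-site round-robin idea; hostnames, paths, and retention counts are placeholders.
<code>
# /etc/rsnapshot.conf fragment (fields must be TAB-separated, trailing slashes matter)
config_version	1.2
snapshot_root	/backup/snapshots/
cmd_rsync	/usr/bin/rsync
cmd_ssh	/usr/bin/ssh

# keep 7 daily and 4 weekly snapshots (runs driven from cron)
retain	daily	7
retain	weekly	4

# local /home plus one off-site member pulled over ssh (round robin)
backup	/home/	localhost/
backup	rsync@hpc.member-college.edu:/home/	member-college/
</code>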
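A sketch of an Lmod (Lua) modulefile placed under the OpenHPC public modulefile tree for a locally built application shared with members; the package name and paths ("mysoft") are hypothetical.
<code lua>
-- /opt/ohpc/pub/modulefiles/mysoft/1.0.lua  (hypothetical package)
help([[MySoft 1.0 -- example of a locally built application offered to members]])
whatis("Name: mysoft")
whatis("Version: 1.0")
prepend_path("PATH",            "/opt/ohpc/pub/apps/mysoft/1.0/bin")
prepend_path("LD_LIBRARY_PATH", "/opt/ohpc/pub/apps/mysoft/1.0/lib")
</code>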
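A sketch of how per-semester classroom accounts could be created and expired; the course name, account count, and expiry date are hypothetical, and an InCommon or request-workflow approach may replace this entirely.
<code bash>
#!/bin/bash
# create temporary class accounts for one semester (hypothetical course/date)
COURSE=chem321
SEMESTER=s17
for i in $(seq -w 1 30); do
    user="${COURSE}_${SEMESTER}_${i}"
    useradd -m -c "temp class account ${COURSE} ${SEMESTER}" "$user"
    chage -E 2017-06-01 "$user"    # disable the login at semester end
done

# recycle at semester end: remove accounts and home directories
# for u in $(getent passwd | awk -F: '/^chem321_s17_/ {print $1}'); do userdel -r "$u"; done
</code>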
