\\
**[[cluster:0|Back]]**
  
  
  
  
https://nsf.gov/pubs/2018/nsf18508/nsf18508.htm
  
Due Jan 30, 2018; the new solicitation is totally refocused on networking, killing the ideas on this page.
 --- //[[hmeij@wesleyan.edu|Henk]] 2017/12/06 08:50//
  
==== NSF CC* ====
  
  * Create a $1 Million+ CC* proposal to meet the research, staff, and cyberinfrastructure
  * needs/wants of small, primarily undergraduate, northeast Higher Ed institutions
  * To share faculty expertise (current research projects and new collaborations)
  * To share staff expertise (deploy and support, sustain after grant period)
  * To share unique computational capabilities in a variety of disciplines
  * Establish a single, participant-wide user directory with global access
  * Provide for multiple off-site, geographically dispersed backup storage targets
  * Establish a global Data Management Plan for all participating institutions
  
  
=== 2017 ===
  
  * Data Driven Multi-Campus/Multi-Institution Model and/or Cyber Team
  * <del>RFP mid March?</del> More like May/June (found the 2016 announcement via AdvancedClustering, dated 06/02/2016)

  * RFI CC* (due April 5th, 2017) at https://www.nsf.gov/pubs/2017/nsf17031/nsf17031.jsp
  * Pull a Wesleyan brainstorm meeting together, scheduled for 3/21/2017
  
=== 2016 ===
  
  * RFP CC* at https://www.nsf.gov/pubs/2016/nsf16567/nsf16567.htm
  * Henk to find an example of a somewhat larger college's successful proposal in 2016
    * these are all very large; we should also investigate XSEDE and NSF/Jetstream (cloud based)
    * https://www.nsf.gov/awardsearch/showAward?AWD_ID=1541215&HistoricalAwards=false
    * https://www.nsf.gov/awardsearch/showAward?AWD_ID=1640834&HistoricalAwards=false
    * https://www.nsf.gov/awardsearch/showAward?AWD_ID=1006576&HistoricalAwards=false
  * Probably more aligned with our goals, click on PDF
    * http://fasterdata.es.net/campusCIplanning/university-of-new-hampshire-cyberinfrastructure-plan-2/
  * Regional Access, a report to NSF, good read
    * https://drive.google.com/file/d/0B9RBtxud9RbBemdYcGpnOExPUmM/view
    * key part: the local aspect is important (flexibility and personalized attention) to ensure researcher success and productivity...collaborating at the campus and regional levels is an important aspect in building the services
  * XSEDE https://www.xsede.org/using-xsede
    * for research this involves writing XSEDE proposals to get time allocations, awarded quarterly
  * Jetstream (allocates XSEDE resources) https://jetstream-cloud.org/
    * this also requires writing proposals for time allocations, half-year cycles, not sure (?)
  
=== Thoughts ===
  
**Until we have a 2017 CC* RFP announcement,** this is a list of random thoughts to keep track of.
  
  * 2016 parsing of proposal language ("Science-driven requirements are the primary motivation for any proposed activity.")
    * Data Driven Multi-Campus/Multi-Institution Model Implementations, 1-2 awards, $3M, 4 years
      * NSF strongly urges the community to think broadly and not simply rely on traditional models when considering multi-campus and/or multi-institutional cyberinfrastructure
      * Proposals are expected to be science-driven
      * Proposals...significant impact on the multi-campus, multi-institutional ... through direct engagements...how adoption and usage will be measured and monitored
      * methods for assured federated access
      * Multi-institution/regional caliber data preservation and access lifecycles
      * across a range of disciplines, including participating institutions’ information technology (IT)
    * Cyber Team, 2-4 awards, $1.5M, 3 years
      * This program area seeks to build centers of expertise and knowledge in cyberinfrastructure
      * Proposals in this area should describe the multi-institutional science-driven needs
      * Proposals ... engagement, interactions, and partnerships with science and engineering research as well as education and training activities.
      * Proposals ... areas of expertise, such as networking, distributed data management, and scientific computing.
      * Proposals may request up to four full-time equivalents (FTE) for up to three years (inside of $1.5M???)
      * Proposals should describe plans for broadening participation, including how under-resourced institutions can be meaningfully engaged.
  * Leverage a common HPC environment among small, northeast colleges
    * we do science too (compare to larger colleges)
    * need to drive the proposal with unique science efforts by each college <--- important!
    * think about how to build the data-driven multi-campus effort
  * 1-3 sysadmin FTEs covering 3-6 colleges for 3 years
    * redundancy in sysadmin coverage, round robin
    * each member to assess how to provide funding past year 3
  * Possible members (underlined=informally contacted, bolded=interested party):
    * Hamilton
    * **Lafayette**, engineering college HPC
    * **Middlebury**
    * **Swarthmore**
    * Trinity, image analysis
    * __Wellesley__
    * **Wesleyan**, Gaussian HPC, text mining HPC
    * **Williams**, Islandora DMP
  * Each member will do an internal survey of local computing and storage desires, put together 1-2 racks
    * gpu, cpu, hybrid of such, visualization, deep learning, etc. (no UPS; compute nodes)
    * infiniband enabled /home storage with IPoIB, head node (both on UPS), sandbox
    * ethernet storage for backup with snapshots (LVM or rsnapshot.org) (on UPS, switches); see the snapshot sketch after this list
    * possible off-site backup of data, round robin, among members; see the round-robin sketch after this list
  * Hardware desires/configs to be vetted for too much redundancy
    * Each site should offer something unique
    * Shared with the whole community of members
    * Custom hardware requests highlight each member's area of expertise
  * Leverage a common HPC deployment software stack; see the Slurm job sketch after this list
    * OpenHPC (openhpc.community) with shared deployment modules (CentOS 7.x/Warewulf/Slurm)
    * Application software modules to deploy per members' requirements
    * Commercial software? (License type would need to be discussed with vendors)
    * Shared skill set with experts in certain specific areas and tools
    * Facilitate classroom usage (local temp accounts recycled by semester?)
  * User account creation; see the account-request sketch after this list
    * Participants can request accounts on each member's installation
    * Local accounts with request workflow, SSH key federation <del>InCommon</del> or ...?
    * Collaborative non-member account requests?
  * Meetings, of course
    * of the sysadmin group
    * of the advisory group
  * Data Management Plan (DMP)
    * To satisfy NSF public data access policy and distribution
    * One site for all (Islandora, Dataverse, or ???). One member's expertise?
  * ...
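A minimal sketch of the rsnapshot-style rotating snapshots mentioned in the storage item above. In production rsnapshot itself would do this; the sketch just illustrates the rotate-then-hardlink idea, and the snapshot root, retention count, and source path are assumptions, not decisions.

<code python>
# Hypothetical sketch of rsnapshot-style rotation: daily.0 is the newest
# copy, older snapshots shift down, and unchanged files are shared via
# hard links (--link-dest). Paths and retention count are assumptions.
import os
import shutil
import subprocess

SNAP_ROOT = "/backup/home-snapshots"   # placeholder snapshot root
RETAIN = 7                             # keep seven daily rotations

def rotate_and_sync(source="/home/"):
    os.makedirs(SNAP_ROOT, exist_ok=True)
    oldest = os.path.join(SNAP_ROOT, f"daily.{RETAIN - 1}")
    if os.path.exists(oldest):
        shutil.rmtree(oldest)                      # drop the oldest snapshot
    for i in range(RETAIN - 2, -1, -1):            # shift daily.N -> daily.N+1
        src = os.path.join(SNAP_ROOT, f"daily.{i}")
        if os.path.exists(src):
            os.rename(src, os.path.join(SNAP_ROOT, f"daily.{i + 1}"))
    # hard-link unchanged files against the previous snapshot to save space
    subprocess.run(["rsync", "-a", "--delete",
                    f"--link-dest={os.path.join(SNAP_ROOT, 'daily.1')}",
                    source, os.path.join(SNAP_ROOT, "daily.0")],
                   check=True)
</code>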
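And a sketch of the round-robin off-site backup pairing, where each member pushes its snapshots to the next member in the ring. The member list, backup account, and rsync-over-SSH transport are made up for illustration.

<code python>
# Hypothetical sketch: rotate the member list one position so each site
# backs up to the next one (round robin). Names, hosts, and paths are
# assumptions, not decisions.
import subprocess

MEMBERS = ["lafayette", "middlebury", "swarthmore", "wesleyan", "williams"]

def backup_targets(members):
    """Pair each member with the next member in the ring (wrapping)."""
    return {m: members[(i + 1) % len(members)] for i, m in enumerate(members)}

def push_offsite(source_dir, target_host):
    """Push a local snapshot tree to this member's assigned target."""
    subprocess.run(["rsync", "-az", "--delete", source_dir,
                    f"backup@{target_host}.example.edu:/backup/offsite/"],
                   check=True)

if __name__ == "__main__":
    for member, target in backup_targets(MEMBERS).items():
        print(f"{member} backs up to {target}")
</code>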
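To make the shared OpenHPC/Slurm stack concrete, a minimal sketch of what job submission could look like on any member site. The partition, module names, and application are placeholders, not an agreed configuration; sbatch does accept a script on stdin.

<code python>
# Hypothetical sketch of submitting a job to the shared OpenHPC/Slurm
# stack. Partition, modules, and the application are placeholders; each
# member site would substitute its own.
import subprocess
import textwrap

JOB = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --job-name=demo
    #SBATCH --partition=general      # placeholder partition name
    #SBATCH --nodes=1
    #SBATCH --ntasks=8
    #SBATCH --time=01:00:00
    module load gnu openmpi          # OpenHPC-style toolchain modules
    srun ./my_app                    # placeholder application
    """)

def submit(script_text):
    """Feed the batch script to sbatch on stdin; return sbatch's reply."""
    result = subprocess.run(["sbatch"], input=script_text,
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    print(JOB)   # inspect the script; call submit(JOB) on a real cluster
</code>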
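Finally, a sketch of the cross-member account request workflow: the home member exports a user's public key, and the hosting member creates a local account and installs the key. The CSV request format, username, and key are illustrative assumptions only.

<code python>
# Hypothetical sketch of the cross-member account request idea: parse a
# simple CSV request, then emit the authorized_keys entry the hosting
# site would install. Format, names, and the key are assumptions.
import csv
import io

REQUEST_CSV = """username,home_member,pubkey
jdoe,wesleyan,ssh-ed25519 AAAAC3...example jdoe@wesleyan
"""

def parse_requests(text):
    """Yield (username, home_member, pubkey) tuples from a request file."""
    for row in csv.DictReader(io.StringIO(text)):
        yield row["username"], row["home_member"], row["pubkey"]

def authorized_keys_entry(home_member, pubkey):
    """Build the authorized_keys line, recording provenance in a comment."""
    return f"# requested via {home_member}\n{pubkey}\n"

if __name__ == "__main__":
    for user, home, key in parse_requests(REQUEST_CSV):
        print(f"create local account {user} (home member: {home})")
        print(authorized_keys_entry(home, key))
</code>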
  
  
\\
**[[cluster:0|Back]]**
  