While attending the 2020 Eastern Regional Network conference (view Karen's slides presented at ERN), an idea surfaced around 10G networking. If we deploy Cottontail2 and migrate onto 10G network speeds, what if we tried for a Science DMZ with a Data Transfer Node (click on Architecture, left side, scroll down for a simple setup) under an Area 1 proposal in the CC* program? The end-to-end path would run from the Internet cloud and other universities, to the DMZ/DTN, to the HPCC head node.
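Since the pitch hinges on 10G speeds, here is a quick back-of-the-envelope sketch of what the upgrade buys for a typical dataset. The 70% efficiency figure is my assumption (a common rule of thumb; real rates depend on tuning, disk I/O, and the WAN path), not a measured number:

```python
# Rough transfer-time estimate, assuming ~70% of line rate is
# achievable end to end (an assumption, not a measurement).

def transfer_hours(dataset_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Hours to move dataset_tb terabytes over a link_gbps link."""
    bits = dataset_tb * 1e12 * 8                     # dataset size in bits
    seconds = bits / (link_gbps * 1e9 * efficiency)  # effective throughput
    return seconds / 3600

for link in (1, 10):
    print(f"10 TB over {link}G: {transfer_hours(10, link):.1f} hours")
# 10 TB over 1G:  ~31.7 hours
# 10 TB over 10G: ~3.2 hours
```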
That DTN sits securely on a border router of our network. Then we provide a tunnel (somehow) directly to our head node, which can make the content available to the compute nodes (or the DTN could reach the compute nodes directly). The DTN also provides a path outward to our Wesleyan Figshare repository. We'd need science drivers and user examples to complete the proposal, plus help designing it.
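DTNs typically run dedicated transfer software, Globus being the common choice (an assumption here; nothing is committed yet). A minimal sketch of staging a dataset from the DTN into HPCC scratch with the globus-sdk Python package, using hypothetical endpoint IDs and paths:

```python
import globus_sdk

# Hypothetical IDs and paths -- a real deployment would register
# its own Globus client and endpoints; nothing here is Wesleyan-specific.
CLIENT_ID = "your-native-app-client-id"
DTN_ENDPOINT = "dtn-endpoint-uuid"
HPCC_ENDPOINT = "hpcc-endpoint-uuid"

# Interactive native-app login to obtain a transfer token.
auth = globus_sdk.NativeAppAuthClient(CLIENT_ID)
auth.oauth2_start_flow()
print("Login at:", auth.oauth2_get_authorize_url())
tokens = auth.oauth2_exchange_code_for_tokens(input("Auth code: "))
transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token)
)

# Stage a dataset from the DTN into scratch on the HPCC side,
# verifying with checksums.
tdata = globus_sdk.TransferData(
    tc, DTN_ENDPOINT, HPCC_ENDPOINT,
    label="DTN to HPCC staging", sync_level="checksum",
)
tdata.add_item("/data/incoming/project1/", "/scratch/project1/", recursive=True)
task = tc.submit_transfer(tdata)
print("Submitted Globus task:", task["task_id"])
```

The point of the DTN is that a transfer like this rides the Science DMZ path end to end, bypassing the deep inspection on the campus firewall.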
Hah! Not an original idea then; here is a school smaller than us, with no graduate body (albeit a pre-med school), doing just that: F&M in Lancaster, PA. Their plan is online.
ESnet, a group based at Lawrence Berkeley National Laboratory, has an NSF-funded mandate to assist with network architecture, Science DMZ planning, and DTN design to facilitate research programs (so states Jason Simms from Lafayette). We could probably engage ESnet on the network design side of the proposal, add a TrueNAS/ZFS appliance (see below), and write up the science drivers for the proposal.
What a Data Transfer Node would do for us (off the top of my head)
What a TrueNAS/ZFS M60-HA might cost…includes an RJ45 10GBase-T switch plus cables
~$250K (my guess); that would eat up half the proposal monies (implying a total budget around $500K)
A border router and other network gear identified by ESnet need to be added.
ITS has tried this before, but its focus was a Science DMZ to the science buildings and the labs in them. The focus here is a Science DMZ with a DTN to the HPCC in the data center. Need to find that old proposal. A preliminary meeting with ITS networking yielded some issues.