DMZ with DTN

While attending the 2020 Eastern Regional Network conference (view Karen's slides presented at ERN), an idea surfaced around a 10G network. If we deploy Cottontail2 and migrate to 10G network speeds, what if we tried for a Science DMZ with a Data Transfer Node (click on Architecture, left side, scroll down for a simple setup) under an Area 1 proposal in the CC* program? The end-to-end path would run from the Internet cloud and other universities, to the DMZ/DTN, to the HPCC head node.

Full Proposal Deadline(s) (due by 5 p.m. submitter's local time):

  • March 01, 2021
  • October 11, 2021 (submit end summer 2021?)

The DTN sits securely on a border router of our network. We then provide a tunnel (somehow) directly to our head node, which can make the content available to compute nodes (or go directly from DTN to compute nodes). The DTN also provides a way outward to reach our Wesleyan Figshare repository. We'd need science drivers and user examples to complete the proposal, and help designing it.
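
For illustration, here is a minimal Python sketch of that DTN-to-head-node hop, pushing a staged directory to head-node scratch with rsync over SSH. The hostnames, paths, and tooling are assumptions, not the actual design; a production DTN would more likely use hpnssh or Globus, and the real routing depends on the tunnel/VLAN decisions.

  import subprocess

  # Hypothetical endpoints; real hostnames, paths, and the tunnel/VLAN
  # routing would come out of the ESnet/ITS design work.
  DTN_STAGING = "/staging/incoming/dataset01/"
  HEAD_NODE = "hpcadmin@cottontail2.wesleyan.edu"  # assumed head node address
  HEAD_SCRATCH = "/home/scratch/dataset01/"

  def stage_to_head_node():
      """Copy a staged dataset from the DTN to head-node scratch.

      rsync over SSH keeps interrupted transfers resumable (--partial).
      """
      subprocess.run(
          ["rsync", "-a", "--partial", "--info=progress2",
           "-e", "ssh", DTN_STAGING, f"{HEAD_NODE}:{HEAD_SCRATCH}"],
          check=True,
      )

  if __name__ == "__main__":
      stage_to_head_node()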

Hah! Not an original idea then; here is a school smaller than us with no graduate body (albeit a pre-med school) doing just that: F&M in Lancaster, PA. Their plan is online.

ESnet, a group at Lawrence Berkeley National Laboratory, has an NSF-funded mandate to assist with network architecture, Science DMZ planning, and DTN ideas to facilitate research programs (so states Jason Simms from Lafayette). We could probably engage ESnet on the network side of the design. Add a TrueNAS/ZFS appliance (see below), and write up the science drivers for the proposal.

What a Data Transfer Node would do for us (off the top of my head; a download sketch follows this list)

  • Allow for cloud uploads and downloads of content (genome, Hubble, …)
  • Allow for sharing of content with other universities (problematic currently)
  • Allow for content uploads to our figshare scholarly repository
  • Expose DTN content to HPC head node and compute nodes via secure vlan (new one, vlan14?)
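
As a hedged illustration of the first bullet, the sketch below pulls a public dataset onto a DTN staging area and verifies its checksum. The URL, staging path, and checksum are placeholders rather than a real dataset; large transfers would more realistically go through Globus or similar DTN tooling.

  import hashlib
  import urllib.request

  # Placeholder dataset, staging path, and checksum for illustration only.
  DATASET_URL = "https://example.org/public/genome_sample.fa.gz"
  STAGING_PATH = "/staging/incoming/genome_sample.fa.gz"
  EXPECTED_SHA256 = "replace-with-published-checksum"

  def fetch_and_verify():
      """Download to the DTN staging area and verify integrity."""
      urllib.request.urlretrieve(DATASET_URL, STAGING_PATH)
      digest = hashlib.sha256()
      with open(STAGING_PATH, "rb") as fh:
          for chunk in iter(lambda: fh.read(1 << 20), b""):
              digest.update(chunk)
      if digest.hexdigest() != EXPECTED_SHA256:
          raise ValueError("checksum mismatch: transfer corrupted or wrong file")

  if __name__ == "__main__":
      fetch_and_verify()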

What a TrueNAS/ZFS M60-HA might cost… includes an RJ45 10GBase-T switch plus cables.
~$250K (my guess); that would eat up half the proposal monies.
A border router and other network gear identified by ESnet need to be added on top of that.

  • TrueNAS M60-HA 4U with 2 expansion shelves
  • Write Cache NVDIMM, L2 Cache 6.4 TB
  • Dual controllers for High Availability (HA)
  • 768 GB DDR4 memory (max for the unit)
  • Two 100G plus two 10G interfaces (swap to 4x10G?)
  • 10G connections to Data Center core switch (LACP 10G to internet and to HPCC?)
  • Two storage expansion shelves (max 12 total) with 18 TB 7.2K SAS drives
  • Raw capacity 1.85 PB, usable under raidz1 1.05 PB (for performance)
  • SSH protocols (sftp, scp, ssh, hpnssh), NFSv3, S3 API (see the S3 sketch after this list)
  • In-line compression, snapshots, self healing (scrub, read & write checksums and caches)
  • AD/LDAP account manager built into TrueNAS
  • Silver warranty (3 year) includes after-hours and on-site support
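
Because the appliance list includes an S3 API, here is a minimal boto3 sketch of listing and fetching staged objects from the TrueNAS S3 service. The endpoint, bucket, and credentials are assumptions for illustration; the service would have to be enabled and placed on the appropriate VLAN first.

  import boto3

  # Assumed endpoint and credentials for illustration; real values depend on
  # how the TrueNAS S3 service is configured and which VLAN it sits on.
  s3 = boto3.client(
      "s3",
      endpoint_url="https://truenas-dtn.wesleyan.edu:9000",
      aws_access_key_id="DTN_ACCESS_KEY",
      aws_secret_access_key="DTN_SECRET_KEY",
  )

  BUCKET = "dtn-staging"  # hypothetical bucket name

  # List staged objects, then pull one down locally.
  for obj in s3.list_objects_v2(Bucket=BUCKET).get("Contents", []):
      print(obj["Key"], obj["Size"])

  s3.download_file(BUCKET, "genome_sample.fa.gz", "/tmp/genome_sample.fa.gz")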

ITS has tried this before, but its focus was a Science DMZ to the science buildings and the labs in them. The focus here is a Science DMZ with DTN to the HPCC in the data center. Need to find that old proposal. A preliminary meeting with ITS networking yielded some issues, listed below.

  • Get a 3rd circuit of 10G dedicated to Science DMZ (Wesleyan has 2 circuits, 5G and 2G, dual homed)
  • Connect circuit directly to Border Router
  • Connect Science Router to Border Router (Juniper QFX?), 2x10G LACP to DTN
  • Security policy for DTN is done using ACLs on the Science DMZ switch or router
  • Then DTN to 10G to top-of-rack switch, to Palo Alto firewall, to vlan52, to new head node (10G); see the path sketch after this list
  • Previous Project Summary (FINAL) for Science DMZ, Aug 2016
  • Previous CEN assistance SOW for Science DMZ, Aug 2016
  • Previous Facilities, Equipment and Other Resources for Science DMZ, Aug 2016
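
To make the proposed path concrete, here is a small planning sketch that lists the hops above and reports the bottleneck link. Speeds restate the notes where given and are assumptions elsewhere (the firewall hop, for instance); the 2x10G LACP is counted as a 20G aggregate even though a single flow would still top out at 10G.

  # Proposed Science DMZ path with link speeds in Gbit/s.
  # Speeds not stated in the notes (e.g. the firewall hop) are assumed 10G.
  PATH = [
      ("3rd 10G circuit -> Border Router", 10),
      ("Border Router -> Science Router", 10),
      ("Science Router -> DTN (2x10G LACP)", 20),
      ("DTN -> top-of-rack switch", 10),
      ("ToR -> Palo Alto firewall -> vlan52", 10),
      ("vlan52 -> new head node", 10),
  ]

  def bottleneck(path):
      """Return the (hop, speed) pair that caps end-to-end throughput."""
      return min(path, key=lambda hop: hop[1])

  if __name__ == "__main__":
      hop, speed = bottleneck(PATH)
      print(f"End-to-end ceiling: {speed} Gbit/s, limited by '{hop}'")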


