**[[cluster:0|Back]]**
  
Note #1

CentOS 8.1 with the standard firewalld.\\
If this is of interest to you, this is how I managed to get it to work:
<code>
EXTIFACE=MASTER_NODE_EXT_INTERFACE_DEVICE       # e.g. eno1
INTIFACE=MASTER_NODE_INTERNAL_INTERFACE_DEVICE  # e.g. eno2
INTIPADDR=MASTER_IP_OF_INTERNAL_IFACE
PREFIX=PREFIX_OF_INTERNAL_NETWORK
firewall-cmd --change-interface=${EXTIFACE} --zone=public
firewall-cmd --change-interface=${INTIFACE} --zone=trusted --permanent
firewall-cmd --permanent --direct --passthrough ipv4 -t nat -I POSTROUTING -o ${EXTIFACE} -j MASQUERADE -s ${INTIPADDR}/${PREFIX}
firewall-cmd --set-default-zone=trusted
firewall-cmd --reload
</code>
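
You can then verify that the interfaces landed in the right zones and that the masquerade rule was stored; a quick check along these lines, using the standard firewall-cmd query options:

<code>
# show which interfaces ended up in which zone
firewall-cmd --get-active-zones

# confirm the masquerade rule is in the permanent direct configuration
firewall-cmd --permanent --direct --get-all-passthroughs
</code>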

And make sure the default route is set on all compute nodes.
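
A minimal sketch of what that could look like on a compute node, reusing ${INTIPADDR} from the block above as the head node's internal address (substitute your own values):

<code>
# route all off-cluster traffic through the head node's internal interface
ip route add default via ${INTIPADDR}
</code>

To persist across reboots, the same gateway can go in the node's ifcfg file (GATEWAY=... in /etc/sysconfig/network-scripts/ifcfg-eth0, for example).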

Note #2

I configured Shorewall on a cluster to do NAT through the head node.

Edit the file /etc/shorewall/snat and add this line:
<code>
MASQUERADE 192.168.0.0/24      eno1
</code>
where 192.168.0.0/24 is the address range of your node interfaces (change this to fit your network) and eno1 is the external interface on the head node.

My /etc/shorewall/interfaces contains this (forwarding ib0):
<code>
nat     eno1    detect  dhcp
nat     ib0     detect  dhcp
</code>
Substitute your own internal ethernet interface for ib0 as needed.
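
After editing, the configuration has to be recompiled and loaded; a sketch using the standard shorewall commands:

<code>
# validate the configuration without loading it
shorewall check

# compile and load the new ruleset
shorewall restart
</code>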
  
==== NAT Story, part 2 ====

This is my second NAT story; for the first one look at [[cluster:51|The Story Of NAT, part 1]].
  
Writing this up so I will remember what I did, and why. The basic problem is this: how do you make a filesystem in a public VLAN available on a private network? One solution is to work with Network Address Translation, or NAT for short. More information at [[http://en.wikipedia.org/wiki/Network_address_translation|http://en.wikipedia.org/wiki/Network_address_translation]]

We have a storage device, which we refer to as flexstorage.wesleyan.edu, that serves up a file system on login node petaltail.

<code>
[root@petaltail ~]# host flexstorage
flexstorage.wesleyan.edu has address 129.133.24.81

[root@petaltail ~]# df -h /home/dlbgroup
Filesystem            Size  Used Avail Use% Mounted on
flexstorage.wesleyan.edu:/share/dlbgroup
                     1000G  588G  413G  59% /home/dlbgroup
</code>

Host petaltail has the following interfaces. The file system in question can be mounted on host petaltail because VLAN 1 can reach VLAN 24.

<code>
eth0      Link encap:Ethernet  HWaddr 00:18:8B:51:FA:42
          inet addr:192.168.1.217  Bcast:192.168.255.255  Mask:255.255.0.0
eth1      Link encap:Ethernet  HWaddr 00:18:8B:51:FA:44
          inet addr:10.10.100.217  Bcast:10.10.255.255  Mask:255.255.0.0
eth2      Link encap:Ethernet  HWaddr 00:15:17:80:8D:F2
          inet addr:129.133.1.225  Bcast:129.133.1.255  Mask:255.255.255.0
eth3      Link encap:Ethernet  HWaddr 00:15:17:80:8D:F3
          inet addr:192.168.2.2  Bcast:192.168.2.255  Mask:255.255.255.0
</code>
 +
 +But a compute node on our cluster, for example node b1, has the following interfaces, all private
 +
<code>
eth0      Link encap:Ethernet  HWaddr 00:13:D3:F2:C8:EC
          inet addr:192.168.1.7  Bcast:192.168.255.255  Mask:255.255.0.0
eth1      Link encap:Ethernet  HWaddr 00:13:D3:F2:C8:ED
          inet addr:10.10.100.7  Bcast:10.10.255.255  Mask:255.255.0.0
</code>

So in order for compute node b1 to reach the flexstorage server, we need to use NAT rules and define a route. First, on petaltail, edit the iptables file: in the "nat" table add a masquerade/postrouting directive, and in the "filter" table set up a rule connecting eth1 and eth2.

<code>
*nat
# fss public to 10.10
-A POSTROUTING -o eth2 -j MASQUERADE
COMMIT

*filter
# fss public via 10.10
-A FORWARD -i eth1 -o eth2 -m state --state RELATED,ESTABLISHED -j ACCEPT
...
COMMIT
</code>
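
One thing these rules assume is that the kernel is actually forwarding packets; if NAT does not work, check that IP forwarding is enabled on the head node. A quick sketch:

<code>
# verify IP forwarding is on (1 = enabled)
sysctl net.ipv4.ip_forward

# if it prints 0, enable it now and persist it across reboots
sysctl -w net.ipv4.ip_forward=1
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
</code>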

Next, on the compute nodes, we need to add a routing path and then mount the file system (using an IP address because there is no name resolution). These commands are put in /etc/rc.local for persistence.

<code>
# /etc/rc.local
route add -host 129.133.24.81 gw 10.10.100.217 eth1
mount 129.133.24.81:/share/dlbgroup /home/dlbgroup -t nfs -o soft,intr,bg

[root@b1 ~]# df -h /home/dlbgroup
Filesystem            Size  Used Avail Use% Mounted on
129.133.24.81:/share/dlbgroup
                     1000G  588G  413G  59% /home/dlbgroup
</code>

There is of course a performance penalty for doing this.

<code>
[root@petaltail ~]# time dd if=/dev/zero of=/home/dlbgroup/foo bs=1024 count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 107.961 seconds, 9.5 MB/s

real    1m47.964s
user    0m0.322s
sys     0m2.094s

[root@b1 ~]# time dd if=/dev/zero of=/home/dlbgroup/foo bs=1024 count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 110.017 seconds, 9.3 MB/s

real    1m50.027s
user    0m0.271s
sys     0m4.073s
</code>
  
  