**[[cluster:0|Back]]**
  
Note #1

CentOS 8.1 with the standard firewalld.\\
If this is of interest to you, this is how I managed to get it to work:
<code>
EXTIFACE=MASTER_NODE_EXT_INTERFACE_DEVICE (e.g. eno1)
INTIFACE=MASTER_NODE_INTERNAL_INTERFACE_DEVICE (e.g. eno2)
INTIPADDR=MASTER_IP_OF_INTERNAL_IFACE
PREFIX=PREFIX_OF_INTERNAL_NETWORK
firewall-cmd --change-interface=${EXTIFACE} --zone=public
firewall-cmd --change-interface=${INTIFACE} --zone=trusted --permanent
firewall-cmd --permanent --direct --passthrough ipv4 -t nat -I POSTROUTING -o ${EXTIFACE} -j MASQUERADE -s ${INTIPADDR}/${PREFIX}
firewall-cmd --set-default-zone=trusted
firewall-cmd --reload
</code>
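A quick sanity check after the reload: list which interfaces ended up in which zone, and confirm the masquerade passthrough was stored permanently (both are standard firewall-cmd queries):
<code>
# show interfaces/zones currently active
firewall-cmd --get-active-zones
# show the stored direct passthrough rules
firewall-cmd --permanent --direct --get-all-passthroughs
</code>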

And make sure the default route is set on all compute nodes.
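On a compute node that could look like the following sketch; the address 10.10.100.254 and interface eth0 are placeholders, substitute your head node's internal IP and the node's internal interface:
<code>
# point the node's default route at the head node's internal address
ip route replace default via 10.10.100.254 dev eth0
# confirm
ip route show default
</code>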

Note #2

Configured Shorewall on a cluster to do NAT through the head node.

Edit the file /etc/shorewall/snat and add this line:
<code>
MASQUERADE 192.168.0.0/24      eno1
</code>
where 192.168.0.0/24 is the address range of your node interfaces (clearly you need to change this to fit) and eno1 is the external interface on the head node.

My /etc/shorewall/interfaces contains this (forwarding ib0):
<code>
nat     eno1    detect  dhcp
nat     ib0     detect  dhcp
</code>
so substitute your internal ethernet interface for ib0.
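After editing snat and interfaces, Shorewall can validate the files before loading them, which catches typos in the table entries above:
<code>
# parse and validate the configuration without applying it
shorewall check
# compile and load the new ruleset
shorewall restart
</code>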
  
==== NAT Story, part 2 ====

This is my second NAT story, for the first one look at [[cluster:51|The Story Of NAT, part 1]].

  
Writing this up so I will remember what I did, and why.  Basic problem is this: How do you make a filesystem in a public VLAN available on a private network?  One solution is to work with Network Address Translation, or NAT for short.  More information at [[http://en.wikipedia.org/wiki/Network_address_translation|http://en.wikipedia.org/wiki/Network_address_translation]]
</code>
  
So in order for the compute node b1 to reach the flexstorage server we need to use NAT rules and define a path/route.  First we start on petaltail and edit the iptables file, add "nat filter" masquerade/post routing directives, and in the "filter filter" set up a rule connecting eth1 and eth2.
  
<code>
</code>
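The full iptables file from petaltail is not reproduced here. Purely as an illustration of the described setup (not petaltail's actual rules, and assuming eth2 is the public-facing interface), a masquerade plus a forwarding rule connecting the two interfaces looks something like:
<code>
# "nat filter": masquerade traffic leaving the public interface
iptables -t nat -A POSTROUTING -o eth2 -j MASQUERADE
# "filter filter": connect eth1 (private) and eth2 (public)
iptables -A FORWARD -i eth1 -o eth2 -j ACCEPT
iptables -A FORWARD -i eth2 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
</code>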
  
Next, on the compute nodes we need to add a routing path and then mount the file system (using an IP because there is no name resolution).  These commands are stuck in /etc/rc.local for persistence.

<code>

# /etc/rc.local
route add -host 129.133.24.81 gw 10.10.100.217 eth1
mount 129.133.24.81:/share/dlbgroup /home/dlbgroup -t nfs -o soft,intr,bg

[root@b1 ~]# df -h /home/dlbgroup
Filesystem            Size  Used Avail Use% Mounted on
129.133.24.81:/share/dlbgroup
                     1000G  588G  413G  59% /home/dlbgroup

</code>
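A quick way to verify on b1 that the host route took effect is to ask the kernel which path it would use for the storage server; the expected gateway here would be the 10.10.100.217 address added above:
<code>
# show the resolved route for the storage server's IP
ip route get 129.133.24.81
</code>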

There is of course a performance penalty in doing this.

<code>

[root@petaltail ~]# time dd if=/dev/zero of=/home/dlbgroup/foo bs=1024 count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 107.961 seconds, 9.5 MB/s

real    1m47.964s
user    0m0.322s
sys     0m2.094s

[root@b1 ~]# time dd if=/dev/zero of=/home/dlbgroup/foo bs=1024 count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 110.017 seconds, 9.3 MB/s

real    1m50.027s
user    0m0.271s
sys     0m4.073s

</code>
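The MB/s figure dd prints is simply bytes copied divided by elapsed seconds, in powers of ten; for the petaltail run:
<code>
# 1024000000 bytes over 107.961 seconds
awk 'BEGIN { printf "%.1f MB/s\n", 1024000000 / 107.961 / 1e6 }'
# prints 9.5 MB/s
</code>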
  
  
\\
**[[cluster:0|Back]]**
cluster/102.txt · Last modified: 2020/08/24 07:19 by hmeij07