**[[cluster:0|Back]]**
  
==== Rsync Daemon/Rsnapshot ====
  
==== The Problem ====
  
We are trying to offload heavy read/write traffic from our file server. I also did a deep dive to assess whether we could afford enterprise-level storage; the answer comes to roughly a $42K outlay at the low end and up to $70K at the high end. I've detailed the results here: [[cluster:149|Enterprise Storage]]. There is a lot to be gained by going that route, but we are implementing the **Short Term** plan first, as detailed on that page.
</code>
  
Back on the target server, where the snapshots will reside, configure the ''/etc/rsnapshot.conf'' file; my settings follow below.
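
As a rough orientation, a minimal sketch of the daemon-pull entries in ''/etc/rsnapshot.conf'' could look like this. The retain counts, log/lock paths, and the ''rsync_numtries'' directive name are assumptions on my part, and remember that rsnapshot requires TABs, not spaces, between fields.

<code>
# sketch only -- retain counts, log/lock paths and module layout are illustrative assumptions
# fields must be separated by TABs, not spaces
config_version  1.2
snapshot_root   /mnt/home/.snapshots/
cmd_rsync       /usr/bin/rsync
retain  daily   7
retain  weekly  4
# 0 triggers the confusing "returned 0.00390625" error noted further down; use 1
rsync_numtries  1
logfile         /var/log/rsnapshot.log
lockfile        /var/run/rsnapshot.pid
# pull /home from the rsync daemon on the source host into sharptail/
backup  rsync://sharptail-ib00::root/home/      sharptail/
</code>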
  
<code>
<code>
  
# on target
[root@cottontail ~]# /usr/local/bin/rsnapshot daily &
  
# on source
[root@sharptail ~]# lsof -i:873
COMMAND   PID USER   FD   TYPE    DEVICE SIZE/OFF NODE NAME
  
  
# check what rsync is doing
[root@sharptail ~]# strace -p 29717
Process 29717 attached - interrupt to quit
  
I suggest you debug with a small directory first; /home is 10TB in our case, with 40+ million files.
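
For quick iterations while debugging, two rsnapshot options are handy: ''configtest'' validates the configuration file (it catches the classic tabs-versus-spaces mistake), and ''-t'' prints the commands that would be run without executing them.

<code>
# validate /etc/rsnapshot.conf syntax (tab separation, paths, permissions)
/usr/local/bin/rsnapshot configtest

# dry run: show the rsync/cp/rm commands the daily interval would execute
/usr/local/bin/rsnapshot -t daily
</code>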

Then remount the user-inaccessible area /mnt/home/.snapshots for read-only user access on /snapshots:

<code>

# /etc/exports content, then run exportfs -ra
/mnt/home/.snapshots       localhost.localdomain(sync,rw,no_all_squash,no_root_squash)

# /etc/fstab content, then mount /snapshots/daily.0
/dev/mapper/VG0-lvhomesnaps        /mnt/home/.snapshots     xfs    rw,tcp,intr,bg              0 0
localhost:/mnt/home/.snapshots/daily.0 /snapshots/daily.0   nfs   ro   0 0

# test
[root@cottontail .snapshots]# touch /snapshots/daily.0/sharptail/home/hmeij/tmp/foo
touch: cannot touch `/snapshots/daily.0/sharptail/home/hmeij/tmp/foo': Read-only file system

</code>
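
To confirm the loopback export is actually being offered before attempting the mount, a quick check:

<code>
# list the exports served by the local NFS server
showmount -e localhost
</code>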
  
Finally, if you get the error below, which is hardly informative, you have probably set num-tries to 0 like I did and will fuss over it for some time. Set it to 1, or leave it uncommented.
[2016-09-15T11:37:06] /usr/local/bin/rsnapshot daily: ERROR: /usr/bin/rsync returned 0.00390625 while processing rsync://sharptail-ib00::root/home/hmeij/python/
  
</code>
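
Separately from the num-tries issue, when rsnapshot reports an rsync error it can help to request the same daemon module path by hand and check the exit status; with only a source argument rsync simply lists the remote directory.

<code>
# list the module path rsnapshot was processing, then print rsync's exit code
/usr/bin/rsync rsync://sharptail-ib00::root/home/hmeij/python/
echo $?
</code>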
  
\\
**[[cluster:0|Back]]**