  * ''cottontail:/usr/local/bin/beegfs-admin-gui''

==== Resync Data #2 ====

If you have 2 buddy mirror groups and 2 storage servers, each with 2 storage targets, BeeGFS will write to all primary storage targets even if numtargets is set to 1 ... it will use all storage targets, so it is best to set numtargets equal to the number of primary storage targets (see the sketch after the code block below). And then, of course, the content flows from primary to secondary for high availability.

How does one add a server?

<code>

# define storage targets, 2 per server
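# -p storage path, -s numeric storage server ID, -i numeric target ID,
# -m hostname of the management daemon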
[root@petaltail ~]# /opt/beegfs/sbin/beegfs-setup-storage -p /data/lv1/beegfs_storage -s 217 -i 21701 -m cottontail
[root@petaltail ~]# /opt/beegfs/sbin/beegfs-setup-storage -p /data/lv2/beegfs_storage -s 217 -i 21702 -m cottontail
[root@swallowtail data]# /opt/beegfs/sbin/beegfs-setup-storage -p /data/lv1/beegfs_storage -s 136 -i 13601 -m cottontail
[root@swallowtail data]# /opt/beegfs/sbin/beegfs-setup-storage -p /data/lv2/beegfs_storage -s 136 -i 13602 -m cottontail

[root@cottontail2 ~]# beegfs-df
METADATA SERVERS:
TargetID        Pool        Total         Free    %      ITotal       IFree    %
========        ====        =====         ====    =      ======       =====    =
     250         low     122.3GiB     116.6GiB  95%        7.8M        7.6M  98%

STORAGE TARGETS:
TargetID        Pool        Total         Free    %      ITotal       IFree    %
========        ====        =====         ====    =      ======       =====    =
   13601         low     291.4GiB     164.6GiB  56%       18.5M       18.5M 100%
   13602         low     291.4GiB     164.6GiB  56%       18.5M       18.5M 100%
   21701         low     291.2GiB     130.5GiB  45%       18.5M       16.2M  87%
   21702         low     291.2GiB     130.5GiB  45%       18.5M       16.2M  87%

# define mirror groups
[root@cottontail2 ~]# beegfs-ctl --addmirrorgroup --primary=21701 --secondary=13601 --groupid=1
[root@cottontail2 ~]# beegfs-ctl --addmirrorgroup --primary=13602 --secondary=21702 --groupid=2

[root@cottontail2 ~]# beegfs-ctl --listmirrorgroups
     BuddyGroupID   PrimaryTargetID SecondaryTargetID
     ============   =============== =================
                1             21701             13601
                2             13602             21702

# set the buddy mirror stripe pattern, numtargets=1
[root@cottontail2 ~]# beegfs-ctl --setpattern --buddymirror /mnt/beegfs/home1 --chunksize=512k --numtargets=1
New chunksize: 524288
New number of storage targets: 1
Path: /home1
Mount: /mnt/beegfs

[root@cottontail2 ~]# beegfs-ctl --setpattern --buddymirror /mnt/beegfs/home2 --chunksize=512k --numtargets=1
New chunksize: 524288
New number of storage targets: 1
Path: /home2
Mount: /mnt/beegfs

# drop /home/hmeij into /mnt/beegfs/home1/hmeij
[root@petaltail mysql_bak_ptt]# find /data/lv1/beegfs_storage/ -type f | wc -l
3623
[root@petaltail mysql_bak_ptt]# find /data/lv2/beegfs_storage/ -type f | wc -l
3678
[root@swallowtail data]# find /data/lv1/beegfs_storage/ -type f | wc -l
3623
[root@swallowtail data]# find /data/lv2/beegfs_storage/ -type f | wc -l
3678

# with numtargets=1 beegfs still writes to all primary targets found in all buddygroups

# rebuild test servers from scratch with numtargets=2
# drop hmeij/ into home1/ and obtain slightly more files (a couple of hundred), not double the amount
# /home/hmeij has 7808 files which get split over the primaries, but wouldn't numtargets=2 yield 15,616 files?
# drop another copy into home2/ and the file counts double to circa 7808
[root@cottontail2 ~]# beegfs-ctl --getentryinfo /mnt/beegfs/home1
Path: /home1
Mount: /mnt/beegfs
EntryID: 0-583C50A1-FA
Metadata node: cottontail2 [ID: 250]
Stripe pattern details:
+ Type: Buddy Mirror
+ Chunksize: 512K
+ Number of storage targets: desired: 2
[root@cottontail2 ~]# beegfs-ctl --getentryinfo /mnt/beegfs/home2
Path: /home2
Mount: /mnt/beegfs
EntryID: 1-583C50A1-FA
Metadata node: cottontail2 [ID: 250]
Stripe pattern details:
+ Type: Buddy Mirror
+ Chunksize: 512K
+ Number of storage targets: desired: 2

Source: /home/hmeij 7808 files in 10G

TargetID        Pool        Total         Free    %      ITotal       IFree    %
========        ====        =====         ====    =      ======       =====    =
   13601         low     291.4GiB      63.1GiB  22%       18.5M       18.5M 100%
   13602         low     291.4GiB      63.1GiB  22%       18.5M       18.5M 100%
   21701         low     291.2GiB     134.6GiB  46%       18.5M       16.2M  87%
   21702         low     291.2GiB     134.6GiB  46%       18.5M       16.2M  87%
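
# copy the same source tree into both buddy-mirrored dirs;
# rsync -a archive, -c checksum, --bwlimit in KB/s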
[root@cottontail2 ~]# rsync -ac --bwlimit=2500 /home/hmeij /mnt/beegfs/home1/  &
[root@cottontail2 ~]# rsync -ac --bwlimit=2500 /home/hmeij /mnt/beegfs/home2/  &
TargetID        Pool        Total         Free    %      ITotal       IFree    %
========        ====        =====         ====    =      ======       =====    =
   13601         low     291.4GiB      43.5GiB  15%       18.5M       18.5M 100%
   13602         low     291.4GiB      43.5GiB  15%       18.5M       18.5M 100%
   21701         low     291.2GiB     114.9GiB  39%       18.5M       16.1M  87%
   21702         low     291.2GiB     114.9GiB  39%       18.5M       16.1M  87%

</code>
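
Per the note at the top of this section, a minimal sketch of setting ''numtargets'' equal to the number of primary storage targets (2 in this setup), reusing the commands and paths from the test above:

<code>

# match numtargets to the number of primary storage targets
[root@cottontail2 ~]# beegfs-ctl --setpattern --buddymirror /mnt/beegfs/home1 --chunksize=512k --numtargets=2
[root@cottontail2 ~]# beegfs-ctl --setpattern --buddymirror /mnt/beegfs/home2 --chunksize=512k --numtargets=2

# verify the stripe pattern
[root@cottontail2 ~]# beegfs-ctl --getentryinfo /mnt/beegfs/home1

</code>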

==== Resync Data #1 ====

[[http://www.beegfs.com/wiki/StorageSynchronization|StorageSynchronization Link]]

//If the primary storage target of a buddy group is unreachable, it will get marked as offline and a failover to the secondary target will be issued. In this case, the former secondary target will become the new primary target.//
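
To watch the reachability and consistency states change during such a failover, something like this should work (assuming the ''--state'' flag is available in this ''beegfs-ctl'' release):

<code>
[root@cottontail2 ~]# beegfs-ctl --listtargets --nodetype=storage --state
</code>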

Testing failover and deletion of data on the secondary, then a full resync process (the underlying commands are sketched after the list):

  * started a full ''--resyncstorage --mirrorgroupid=101 --timestamp=0''
  * got the ''--getentryinfo'' EntryID for a file in /mnt/beegfs/home/path/to/file and did the same for the directory the file was located in
  * did a ''cat /mnt/beegfs/home/path/to/file'' on a client (just fine)
  * brought the primary storage down
  * redid the cat above (it hangs for a couple of minutes, then displays the file content)
  * while the primary was down, ran ''rm -rf /mnt/beegfs/home/path/to/'' removing the directory holding the file
  * a cat now generates the expected file-not-found error
  * brought the primary back up and started a full ''--resyncstorage --mirrorgroupid=101 --timestamp=0''
  * the number of files and dirs discovered is, as expected, lower by the correct values
  * when I now search for the EntryIDs obtained before, they are gone from /data/beegfs-storage (as expected)

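The commands behind those steps look roughly like this; ''/mnt/beegfs/home/path/to/file'' stands in for the real path:

<code>

# full resync of buddy group 101 from timestamp 0 (i.e. everything)
[root@cottontail2 ~]# beegfs-ctl --resyncstorage --mirrorgroupid=101 --timestamp=0

# record the EntryIDs of the test file and its parent directory
[root@cottontail2 ~]# beegfs-ctl --getentryinfo /mnt/beegfs/home/path/to/file
[root@cottontail2 ~]# beegfs-ctl --getentryinfo /mnt/beegfs/home/path/to

# read the file from a client, before and after downing the primary
cat /mnt/beegfs/home/path/to/file

</code>
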
Nice that it works.

So you can fully mirror storage content. You'll still need rsnapshots to recover lost data or do point-in-time restores.
  
==== Mirror Data ====
[root@petaltail chroots]# ls -lh /var/chroots/data/beegfs_storage/buddymir/u2018/5817/9/60-58179513-30/178-581797C8-30
-rw-rw-rw- 1 root root 4.9G Apr  7  2014 /var/chroots/data/beegfs_storage/buddymir/u2018/5817/9/60-58179513-30/178-581797C8-30
                                                                          ^^^^^^^^
  
# copy on secondary
[root@swallowtail ~]# ls -lh /data/beegfs_storage/buddymir/u2018/5817/9/60-58179513-30/178-581797C8-30
-rw-rw-rw- 1 root root 4.9G Apr  7  2014 /data/beegfs_storage/buddymir/u2018/5817/9/60-58179513-30/178-581797C8-30
                                                              ^^^^^^^^
  
# seems to work, notice the ''buddymir'' directory on primary/secondary
   Ports: UDP: 8005; TCP: 8005
   Interfaces: ib1(RDMA) ib1(TCP) eth1(TCP) eth0(TCP)
               ^^^
  
</code>
  * do BEFORE populating XFS file systems
  
==== Meta Backup/Restore ====

[[http://www.fhgfs.com/wiki/wikka.php?wakka=FAQ#ea_backup|External Link]]

<code>

# latest tar (with --xattrs support)
rpm -Uvh /sanscratch/tmp/beegfs/tar-1.23-15.el6_8.x86_64.rpm

# backup
cd /data; tar czvf /sanscratch/tmp/beegfs/meta-backup/n38-meta.tar.gz beegfs_meta/ --xattrs

# restore
cd /data; tar xvf /sanscratch/tmp/beegfs/meta-backup/n38-meta.tar.gz --xattrs

# test
cd /data; diff -r beegfs_meta beegfs_meta.orig
# no output, so the trees are identical

</code>
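
To cover every metadata node, a hypothetical loop (n38-n45 are the metadata nodes mentioned under Mirror Meta below; adjust hostnames and paths to the actual setup):

<code>
# back up /data/beegfs_meta on each metadata node over ssh
for n in n38 n39 n40 n41 n42 n43 n44 n45; do
  ssh $n "cd /data && tar czf /sanscratch/tmp/beegfs/meta-backup/${n}-meta.tar.gz beegfs_meta/ --xattrs"
done
</code>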

==== Resync Meta ====

[[http://www.beegfs.com/wiki/AboutMirroring2012#hn_59ca4f8bbb_4|External Link]]

  * applies to older versions
  * a future version will work like the storage mirror, with HA and self-healing
==== Mirror Meta ====

//Metadata mirroring can currently not be disabled after it has been enabled for a certain directory//

Definitely want the Meta content mirrored; that way I can use the n38-n45 nodes with local 15K disk, plus maybe cottontail2 (RAID 1 with hot and cold spares).
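
Enabling it per directory would look something like this sketch, assuming the 2015-era ''beegfs-ctl --mirrormd'' syntax from the mirroring wiki (not verified on this installation):

<code>
# enable metadata mirroring for a directory; per the note above,
# it cannot be disabled again afterwards
[root@cottontail2 ~]# beegfs-ctl --mirrormd /mnt/beegfs/home1
</code>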
[root@sharptail ~]# ssh n38 find /data/beegfs_meta -name 3-581392E1-31
/data/beegfs_meta/mirror/49.dentries/54/6C/0-581392F0-30/#fSiDs#/3-581392E1-31
                  ^^^^^^ ^^
# and find
[root@sharptail ~]# ssh n39 find /data/beegfs_meta -name 3-581392E1-31
    * set in /etc/beegfs-[storage|client|meta|admon|mgmtd].conf and restart services
  
  * backup/restore/mirror
    * see more towards the top of this page
  
  * storage server tuning