cluster:151 [2016/11/10 19:49]
hmeij07 [Resync Data]
cluster:151 [2016/11/22 20:27]
hmeij07 [Resync Data #2]
Line 25: Line 25:
  * ''cottontail:/usr/local/bin/beegfs-admin-gui''
  
-==== Resync Data ====+==== Resync Data #2 ==== 
 + 
 +If you have 2 buddy mirror groups and 2 storage servers, each with 2 storage objects, BeeGFS will write to all primary storage targets even if numtargets is set to 1 ... it uses all storage objects, so it is best to set numtargets equal to the number of primary storage objects. Content then of course flows from primary to secondary for high availability. 
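
Applied to this setup with its 2 buddy groups, the advice above would look roughly like the following sketch; the path and chunksize are taken from the examples further down, so adjust to your own layout (these commands only make sense against a live BeeGFS cluster):

<code>
# numtargets matches the number of primary storage targets (2 buddy groups here)
beegfs-ctl --setpattern --buddymirror /mnt/beegfs/home1 --chunksize=512k --numtargets=2

# confirm the stripe pattern that is actually in effect
beegfs-ctl --getentryinfo /mnt/beegfs/home1
</code>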
 + 
 +How does one add a server? 
 + 
 +<code> 
 + 
 +# define storage objects, 2 per server 
 +[root@petaltail ~]# /opt/beegfs/sbin/beegfs-setup-storage -p /data/lv1/beegfs_storage -s 217 -i 21701 -m cottontail 
 +[root@petaltail ~]# /opt/beegfs/sbin/beegfs-setup-storage -p /data/lv2/beegfs_storage -s 217 -i 21702 -m cottontail 
 +[root@swallowtail data]# /opt/beegfs/sbin/beegfs-setup-storage -p /data/lv1/beegfs_storage -s 136 -i 13601 -m cottontail  
 +[root@swallowtail data]# /opt/beegfs/sbin/beegfs-setup-storage -p /data/lv2/beegfs_storage -s 136 -i 13602 -m cottontail 
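 +# (sketch, not part of the original run: after defining the storage objects 
 +# each storage server typically needs its service restarted so the new 
 +# targets register with the management daemon, e.g. 
 +# "service beegfs-storage restart" on petaltail and swallowtail) 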
 + 
 + 
 +[root@cottontail2 ~]# beegfs-df 
 +METADATA SERVERS: 
 +TargetID        Pool        Total         Free    %      ITotal       IFree    % 
 +========        ====        =====         ====    =      ======       =====    = 
 +     250         low     122.3GiB     116.6GiB  95%        7.8M        7.6M  98% 
 + 
 +STORAGE TARGETS: 
 +TargetID        Pool        Total         Free    %      ITotal       IFree    % 
 +========        ====        =====         ====    =      ======       =====    = 
 +   13601         low     291.4GiB     164.6GiB  56%       18.5M       18.5M 100% 
 +   13602         low     291.4GiB     164.6GiB  56%       18.5M       18.5M 100% 
 +   21701         low     291.2GiB     130.5GiB  45%       18.5M       16.2M  87% 
 +   21702         low     291.2GiB     130.5GiB  45%       18.5M       16.2M  87% 
 + 
 +# define mirrorgroups 
 +[root@cottontail2 ~]# beegfs-ctl --addmirrorgroup --primary=21701 --secondary=13601 --groupid=1 
 +[root@cottontail2 ~]# beegfs-ctl --addmirrorgroup --primary=13602 --secondary=21702 --groupid=2 
 + 
 +[root@cottontail2 ~]# beegfs-ctl --listmirrorgroups 
 +     BuddyGroupID   PrimaryTargetID SecondaryTargetID 
 +     ============   =============== ================= 
 +                1             21701             13601 
 +                2             13602             21702 
 + 
 +# set buddy mirror stripe pattern, numtargets=1 
 +[root@cottontail2 ~]# beegfs-ctl --setpattern --buddymirror /mnt/beegfs/home1 --chunksize=512k --numtargets=1 
 +New chunksize: 524288 
 +New number of storage targets: 1 
 +Path: /home1 
 +Mount: /mnt/beegfs 
 + 
 +[root@cottontail2 ~]# beegfs-ctl --setpattern --buddymirror /mnt/beegfs/home2 --chunksize=512k --numtargets=1 
 +New chunksize: 524288 
 +New number of storage targets: 1 
 +Path: /home2 
 +Mount: /mnt/beegfs 
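 +# (sketch: verify the pattern per directory with 
 +#  "beegfs-ctl --getentryinfo /mnt/beegfs/home1" which should report 
 +#  Type: Buddy Mirror and 1 desired storage target) 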
 + 
 +# drop /home/hmeij in /mnt/beegfs/home1/hmeij 
 +[root@petaltail mysql_bak_ptt]# find /data/lv1/beegfs_storage/ -type f | wc -l 
 +3623 
 +[root@petaltail mysql_bak_ptt]# find /data/lv2/beegfs_storage/ -type f | wc -l 
 +3678 
 +[root@swallowtail data]# find /data/lv1/beegfs_storage/ -type f | wc -l 
 +3623 
 +[root@swallowtail data]# find /data/lv2/beegfs_storage/ -type f | wc -l 
 +3678 
 + 
 +# redefine with numtargets=2, no error 
 +# resync - no results 
 +# dropping hmeij/ into home2/ yields the same number of files 
 + 
 +# rebuild test servers from scratch with numtargets=2 
 +# drop hmeij/ into home1/ and obtain slightly more files (a couple of hundred), not double the amount 
 +# /home/hmeij holds 7,808 files which get split over the primaries, but wouldn't numtargets=2 yield 15,616 chunk files? 
 +# drop another copy in home2/ and file counts double to circa 7808 
 + 
 +Path: /home1/hmeij/xen/bvm1.img 
 +Mount: /mnt/beegfs 
 +EntryID: 1B-5834626F-FA 
 +Metadata node: cottontail2 [ID: 250] 
 +Stripe pattern details: 
 ++ Type: Buddy Mirror 
 ++ Chunksize: 512K 
 ++ Number of storage targets: desired: 2; actual: 2 
 ++ Storage mirror buddy groups: 
 +  + 2 
 +  + 1 
 + 
 + 
 +</code>  
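
A plausible reading of the counts above: a file only spawns a chunk file on a second target when it is larger than the chunksize, so with numtargets=2 the count grows by the number of files over 512k, not by a factor of two. The arithmetic can be illustrated with plain files, nothing BeeGFS-specific (the demo paths under /tmp are assumptions):

```shell
# build a demo tree: 5 files below the 512k chunksize, 3 files above it
mkdir -p /tmp/chunkdemo && cd /tmp/chunkdemo
for i in 1 2 3 4 5; do dd if=/dev/zero of=small$i bs=1k count=4 2>/dev/null; done
for i in 1 2 3; do dd if=/dev/zero of=large$i bs=1k count=1024 2>/dev/null; done

# only files above the chunksize would spawn a chunk on a second target
ntotal=$(find . -type f | wc -l)
nlarge=$(find . -type f -size +512k | wc -l)

# expected chunk files with numtargets=2: one per file, plus one extra per large file
echo "$((ntotal + nlarge)) chunk files expected"   # 8 + 3 = 11
```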
 + 
 +==== Resync Data #1 ====
  
 [[http://www.beegfs.com/wiki/StorageSynchronization|StorageSynchronization Link]]
Line 36: Line 122:
   * started a full --resyncstorage --mirrorgroupid=101 --timestamp=0
   * got --getentryinfo EntryID for a file in my /mnt/beegfs/home/path/to/file and did the same for the directory the file was located in
-  * did a cat /mnt.beegfs/home/path/to/file on a client (just fine)+  * did a cat /mnt/beegfs/home/path/to/file on a client (just fine)
   * brought primary storage down
   * redid the cat above (it hangs for a couple of minutes, then displays the file content)
Line 133: Line 219:
 [root@petaltail chroots]# ls -lh /var/chroots/data/beegfs_storage/buddymir/u2018/5817/9/60-58179513-30/178-581797C8-30
 -rw-rw-rw- 1 root root 4.9G Apr  7  2014 /var/chroots/data/beegfs_storage/buddymir/u2018/5817/9/60-58179513-30/178-581797C8-30
 +                                                                          ^^^^^^^^
  
 # copy on secondary
Line 139: Line 226:
 [root@swallowtail ~]# ls -lh /data/beegfs_storage/buddymir/u2018/5817/9/60-58179513-30/178-581797C8-30
 -rw-rw-rw- 1 root root 4.9G Apr  7  2014 /data/beegfs_storage/buddymir/u2018/5817/9/60-58179513-30/178-581797C8-30
 +                                                              ^^^^^^^^
  
 # seems to work, notice the ''buddymir'' directory on primary/secondary
Line 178: Line 266:
   * do BEFORE populating XFS file systems
  
 +==== Meta Backup/Restore ====
 +
 +[[http://www.fhgfs.com/wiki/wikka.php?wakka=FAQ#ea_backup|External Link]]
 +
 +<code>
 +
 +# latest tar
 +rpm -Uvh /sanscratch/tmp/beegfs/tar-1.23-15.el6_8.x86_64.rpm
 +
 +# backup
 +cd /data; tar czvf /sanscratch/tmp/beegfs/meta-backup/n38-meta.tar.gz beegfs_meta/ --xattrs
 +
 +# restore
 +cd /data;  tar xvf /sanscratch/tmp/beegfs/meta-backup/n38-meta.tar.gz --xattrs
 +
 +# test
 +cd /data; diff -r beegfs_meta beegfs_meta.orig
 +# no results
 +
 +</code>
 +
 +
 +
 +==== Resync Meta ====
 +
 +[[http://www.beegfs.com/wiki/AboutMirroring2012#hn_59ca4f8bbb_4|External Link]]
 +
 +  * older versions
 +  * new future version will work like storage mirror with HA and self-healing
 ==== Mirror Meta ====
 +
 +//Metadata mirroring can currently not be disabled after it has been enabled for a certain directory//
  
 Definitely want Meta content mirrored, that way I can use the n38-n45 nodes with local 15K disk, plus maybe cottontail2 (raid 1 with hot and cold spare).
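
In the releases this page covers, metadata mirroring is enabled per directory; the exact syntax has changed across BeeGFS versions, so the following is only a sketch (check beegfs-ctl --help on the installed version):

<code>
# enable metadata mirroring for a directory; it applies to entries created
# below it afterwards and, per the note above, cannot currently be disabled
beegfs-ctl --mirrormd /mnt/beegfs/home1
</code>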
Line 206: Line 325:
 [root@sharptail ~]# ssh n38 find /data/beegfs_meta -name 3-581392E1-31
 /data/beegfs_meta/mirror/49.dentries/54/6C/0-581392F0-30/#fSiDs#/3-581392E1-31
-                  ^^^^^^+                  ^^^^^^ ^^
 # and find
 [root@sharptail ~]# ssh n39 find /data/beegfs_meta -name 3-581392E1-31
Line 283: Line 402:
     * set in /etc/beegfs-[storage|client|meta|admon|mgmtd].conf and restart services
  
-  * backup beeGFS EA metadata, see faq +  * backup/restore/mirror 
-    * attempt a restore +    * see more towards the top of this page
-    * or just snapshot+
  
   * storage server tuning
cluster/151.txt · Last modified: 2016/12/06 20:14 by hmeij07