cluster:151 [2016/11/10 20:37]
hmeij07 [Tuning]
cluster:151 [2016/11/22 20:27]
hmeij07 [Resync Data #2]
Line 25: Line 25:
   * ''cottontail:/usr/local/bin/beegfs-admin-gui''   * ''cottontail:/usr/local/bin/beegfs-admin-gui''
  
-==== Resync Data ====+==== Resync Data #2 ==== 
 + 
 +If you have 2 buddy mirror groups and 2 storage servers, each with 2 storage objects, beegfs will write to all primary storage targets even if numtargets is set to 1 ... it will use all storage objects, so it is best to set numtargets equal to the number of primary storage objects. The content then flows from primary to secondary for high availability.
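
With the 2x2 layout used here there are 2 primary storage targets, so numtargets should be 2; a minimal sketch, reusing the ''beegfs-ctl --setpattern'' form that appears in the transcript on this page:

<code>
# match numtargets to the number of primary storage targets (2 here)
beegfs-ctl --setpattern --buddymirror /mnt/beegfs/home1 --chunksize=512k --numtargets=2
</code>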
 + 
 +How does one add a server? 
 + 
 +<code> 
 + 
 +# define storage objects, 2 per server 
 +[root@petaltail ~]# /opt/beegfs/sbin/beegfs-setup-storage -p /data/lv1/beegfs_storage -s 217 -i 21701 -m cottontail 
 +[root@petaltail ~]# /opt/beegfs/sbin/beegfs-setup-storage -p /data/lv2/beegfs_storage -s 217 -i 21702 -m cottontail 
 +[root@swallowtail data]# /opt/beegfs/sbin/beegfs-setup-storage -p /data/lv1/beegfs_storage -s 136 -i 13601 -m cottontail  
 +[root@swallowtail data]# /opt/beegfs/sbin/beegfs-setup-storage -p /data/lv2/beegfs_storage -s 136 -i 13602 -m cottontail 
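# -p is the storage object path, -s the numeric storage server ID,
# -i the target ID (here server ID * 100 + object index: 217 -> 21701, 21702),
# -m the management host (cottontail)
# hypothetical follow-up, assuming systemd units: (re)start the storage
# service on each server so the new targets register with management
systemctl restart beegfs-storage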
 + 
 + 
 +[root@cottontail2 ~]# beegfs-df 
 +METADATA SERVERS: 
 +TargetID        Pool        Total         Free    %      ITotal       IFree    % 
 +========        ====        =====         ====    =      ======       =====    = 
 +     250         low     122.3GiB     116.6GiB  95%        7.8M        7.6M  98% 
 + 
 +STORAGE TARGETS: 
 +TargetID        Pool        Total         Free    %      ITotal       IFree    % 
 +========        ====        =====         ====    =      ======       =====    = 
 +   13601         low     291.4GiB     164.6GiB  56%       18.5M       18.5M 100% 
 +   13602         low     291.4GiB     164.6GiB  56%       18.5M       18.5M 100% 
 +   21701         low     291.2GiB     130.5GiB  45%       18.5M       16.2M  87% 
 +   21702         low     291.2GiB     130.5GiB  45%       18.5M       16.2M  87% 
 + 
 +# define mirrorgroups
 +[root@cottontail2 ~]# beegfs-ctl --addmirrorgroup --primary=21701 --secondary=13601 --groupid=1 
 +[root@cottontail2 ~]# beegfs-ctl --addmirrorgroup --primary=13602 --secondary=21702 --groupid=2 
 + 
 +[root@cottontail2 ~]# beegfs-ctl --listmirrorgroups 
 +     BuddyGroupID   PrimaryTargetID SecondaryTargetID 
 +     ============   =============== ================= 
 +                1             21701             13601 
 +                2             13602             21702 
 + 
 +# set the buddy-mirror stripe pattern on home1 and home2, numtargets=1
 +[root@cottontail2 ~]# beegfs-ctl --setpattern --buddymirror /mnt/beegfs/home1 --chunksize=512k --numtargets=1 
 +New chunksize: 524288 
 +New number of storage targets: 1 
 +Path: /home1 
 +Mount: /mnt/beegfs 
 + 
 +[root@cottontail2 ~]# beegfs-ctl --setpattern --buddymirror /mnt/beegfs/home2 --chunksize=512k --numtargets=1 
 +New chunksize: 524288 
 +New number of storage targets: 1 
 +Path: /home2 
 +Mount: /mnt/beegfs 
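# quick sanity check (not beegfs output): the reported chunksize
# 524288 bytes is just 512k expressed in bytes
echo $((512 * 1024))   # prints 524288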
 + 
 +# drop /home/hmeij in /mnt/beegfs/home1/hmeij 
 +[root@petaltail mysql_bak_ptt]# find /data/lv1/beegfs_storage/ -type f | wc -l 
 +3623 
 +[root@petaltail mysql_bak_ptt]# find /data/lv2/beegfs_storage/ -type f | wc -l 
 +3678 
 +[root@swallowtail data]# find /data/lv1/beegfs_storage/ -type f | wc -l 
 +3623 
 +[root@swallowtail data]# find /data/lv2/beegfs_storage/ -type f | wc -l 
 +3678 
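# the per-object counts match across both servers because each primary
# target's chunks are resynced to its buddy's secondary target;
# quick check (not beegfs output) of the per-server total:
echo $((3623 + 3678))   # prints 7301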
 + 
 +# redefine with numtargets=2, no error 
 +# resync - no results 
 +# dropping hmeij/ into home2/ yields same amount of files 
 + 
 +# rebuild test servers from scratch with numtargets=2
 +# drop hmeij/ into home1/ and obtain slightly more files (a couple of hundred), not double the amount
 +# /home/hmeij has 7808 files in it, which get split over the primaries, but wouldn't numtargets=2 yield 15,616 files?
 +# drop another copy in home2/ and file counts double to circa 7808 
 + 
 +Path: /home1/hmeij/xen/bvm1.img 
 +Mount: /mnt/beegfs 
 +EntryID: 1B-5834626F-FA 
 +Metadata node: cottontail2 [ID: 250] 
 +Stripe pattern details: 
 ++ Type: Buddy Mirror 
 ++ Chunksize: 512K 
 ++ Number of storage targets: desired: 2; actual: 2 
 ++ Storage mirror buddy groups: 
 +  + 2 
 +  + 1 
 + 
 + 
 +</code>  
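
The counts above make sense if a file only creates a chunk file on a target it actually reaches: a file no larger than the 512k chunksize lands on a single target, so numtargets=2 adds roughly one extra chunk file per file that exceeds the chunksize, a couple of hundred here rather than a doubling. A minimal sketch of that arithmetic (the ''chunks_for'' helper is hypothetical, not a beegfs tool):

<code>
# chunk files created for one file of a given size in bytes,
# with 512k chunks striped over at most 2 targets
chunks_for() {
  size=$1
  chunk=$((512 * 1024))
  ntargets=2
  c=$(( (size + chunk - 1) / chunk ))   # chunks needed for this file
  if [ "$c" -gt "$ntargets" ]; then c=$ntargets; fi
  echo "$c"
}
chunks_for 4096      # small file -> 1 chunk file, one target
chunks_for 2097152   # 2 MiB file -> chunk files on both targets: 2
</code>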
 + 
 +==== Resync Data #1 ====
  
 [[http://www.beegfs.com/wiki/StorageSynchronization|StorageSynchronization Link]] [[http://www.beegfs.com/wiki/StorageSynchronization|StorageSynchronization Link]]
Line 36: Line 122:
   * started a full --resyncstorage --mirrorgroupid=101 --timestamp=0   * started a full --resyncstorage --mirrorgroupid=101 --timestamp=0
   * got --getentryinfo EntryID for a file in my /mnt/beegfs/home/path/to/file and did the same for the directory the file was located in   * got --getentryinfo EntryID for a file in my /mnt/beegfs/home/path/to/file and did the same for the directory the file was located in
-  * did a cat /mnt.beegfs/home/path/to/file on a client (just fine)+  * did a cat /mnt/beegfs/home/path/to/file on a client (just fine)
   * brought primary storage down   * brought primary storage down
   * redid the cat above (it hangs for a couple of minutes, then displays the file content)   * redid the cat above (it hangs for a couple of minutes, then displays the file content)
cluster/151.txt · Last modified: 2016/12/06 20:14 by hmeij07