cluster:151 [DokuWiki]

cluster:151 [2016/11/21 14:33]
hmeij07 [Resync Data]
cluster:151 [2016/12/01 14:02]
hmeij07 [Mirror Meta]
Line 6: Line 6:
 A document for me to recall and make notes of what I read in the manual pages and what needs testing.
  
-Basically during the Summer of 2016 I investigated if the HPCC could afford enterprise level storage. I wanted 99.999% uptime, snapshots, high availability and other goodies such as parallel NFS. Netapp came the closest but, eh, still at $42K lots of other options show up. The story is detailed here at [[cluster:149|The Storage Problem]]
+Basically during the Summer of 2016 I investigated whether the HPCC could afford enterprise-level storage. I wanted 99.999% uptime, snapshots, high availability and other goodies such as parallel NFS. Netapp came closest but, eh, at $42K lots of other options show up. That story is detailed at [[cluster:149|The Storage Problem]]
  
 This page is best read from the bottom up.
  
-==== cluster idea ====
+==== beeGFS cluster idea ====
  
-  * Storage servers: buy 2 now 4k+4k then 3rd in July 4k?
+  * Storage servers:
 +    * buy 2 with each 12x2TB slow disk, Raid 6, 20T usable (clustered, parallel file system) 
 +      * create 2 6TB volumes on each, quota at 2TB via XFS, 3 users/server  
 +      * only $HOME changes to ''/mnt/beegfs/home[1|2]'' (migrates ~4.5TB away from /home or ~50%) 
 +      * create 2 buddymirrors; each with primary on one, secondary on the other server (high availability) 
 +    * on UPS 
 +    * on Infiniband
  
-  * move test users over on 2 nodes, test, only change is $HOME
+  * Client servers:
 +    * all compute/login nodes become beegfs clients
  
-  * Home cluster
-    * cottontail (mngt+admingui)
-    * 2-3 new units storage (+snapshots/meta backup)
-    * cottontail2 meta + n38-n45 meta, all mirrored
+  * Meta servers:
+    * cottontail2 (root meta, on Infiniband) plus n38-n45 nodes (on Infiniband)
+    * all mirrored (total=9)
+    * cottontail2 on UPS
 + 
 +  * Management and Monitor servers 
 +    * cottontail (on UPS, on Infiniband) 
 + 
 +  * Backups (rsnapshot.org via rsync daemons, [[cluster:150|Rsync Daemon/Rsnapshot]])
 +    * sharptail:/home --> cottontail 
 +    * serverA:/mnt/beegfs/home1 --> serverB (8TB max
 +    * serverB:/mnt/beegfs/home2 --> serverA (8TB max) 
 + 
 +  * Costs (includes 3 year NBD warranty) 
 +    * Microway $12,500 
 +    * CDW 
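A quick sanity check of the capacity figures in the plan above; a minimal sketch with the numbers taken straight from the bullets (12x2TB disks in RAID 6, two 6TB volumes, 3 users per server at a 2TB XFS quota):

```shell
# 12x2TB drives in RAID 6: two drives' worth of capacity go to parity
disks=12; disk_tb=2
usable_tb=$(( (disks - 2) * disk_tb ))
echo "${usable_tb}T usable per server"        # matches the "20T usable" bullet

# two 6TB volumes per server, 3 users per server, 2TB XFS quota each
vol_tb=6; users=3; quota_tb=2
echo "quota fits volume: $(( users * quota_tb <= vol_tb ))"   # 1 = true
```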
  
 ==== beegfs-admin-gui ====
  
   * ''cottontail:/usr/local/bin/beegfs-admin-gui''
 +
 +==== upgrade ====
 +
 +  * [[http://www.beegfs.com/content/updating-upgrading-and-versioning/|External Link]]
 +  * New feature - High Availability for Metadata Servers (self-healing, transparent failover)
 +
 +A bit complicated. 
 +
 +  * Repo base URL baseurl=http://www.beegfs.com/release/beegfs_6/dists/rhel6 via http shows only 6.1-el6
 +    * [   ] beegfs-mgmtd-6.1-el6.x86_64.rpm          2016-11-16 16:27  660K 
 +''yum --disablerepo "*" --enablerepo beegfs repolist'' shows
 +    * beegfs-mgmtd.x86_64                            2015.03.r22-el6            beegfs
 +  * ''yum install --disablerepo "*" --enablerepo beegfs --downloadonly --downloaddir=/sanscratch/tmp/beegfs/beegfs_6/ *x86_64* -y''
 +   * http://www.beegfs.com/release/beegfs_6/dists/rhel6/x86_64/beegfs-mgmtd-2015.03.r22-el6.x86_64.rpm: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found" <-- wrong package version
 +
 +
 +So use the wget/rpm approach (list all packages present on a particular node, else you will get a dependency failure!)
 +
 +<code>
 +
 +# get them all
 +wget http://www.beegfs.com/release/beegfs_6/dists/rhel6/x86_64/beegfs-mgmtd-6.1-el6.x86_64.rpm
 +
 +# client and meta node
 +rpm -Uvh ./beegfs-common-6.1-el6.noarch.rpm ./beegfs-utils-6.1-el6.x86_64.rpm ./beegfs-opentk-lib-6.1-el6.x86_64.rpm ./beegfs-helperd-6.1-el6.x86_64.rpm ./beegfs-client-6.1-el6.noarch.rpm ./beegfs-meta-6.1-el6.x86_64.rpm
 +
 +# updated?
 +[root@cottontail2 beegfs_6]# beegfs-ctl | head -2
 +BeeGFS Command-Line Control Tool (http://www.beegfs.com)
 +Version: 6.1
 +
 +#Sheeesh
 +</code>
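Since the repo metadata still advertises 2015.03 while only the 6.1 rpms actually exist, the "get them all" wget list can be generated rather than typed out. A sketch using the package names from the ''rpm -Uvh'' line above (it only echoes URLs; pipe the output into ''wget -i -'' to fetch):

```shell
# build download URLs for the v6.1 packages used above
# (beegfs-mgmtd is only needed on the management node)
base="http://www.beegfs.com/release/beegfs_6/dists/rhel6/x86_64"
pkgs="beegfs-mgmtd-6.1-el6.x86_64
beegfs-common-6.1-el6.noarch beegfs-utils-6.1-el6.x86_64
beegfs-opentk-lib-6.1-el6.x86_64 beegfs-helperd-6.1-el6.x86_64
beegfs-client-6.1-el6.noarch beegfs-meta-6.1-el6.x86_64"
for p in $pkgs; do
  echo "$base/$p.rpm"     # pipe this loop into: wget -i -
done
```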
 +
  
 ==== Resync Data #2 ====
Line 55: Line 108:
  
 # define mirrorgroups
-[root@cottontail2 ~]# beegfs-ctl --addmirrorgroup --primary=21701 --secondary=13601 --groupid=1
+[root@cottontail2 ~]# beegfs-ctl --addmirrorgroup [--nodetype=storage] --primary=21701 --secondary=13601 --groupid=1
-[root@cottontail2 ~]# beegfs-ctl --addmirrorgroup --primary=13602 --secondary=21702 --groupid=2
+[root@cottontail2 ~]# beegfs-ctl --addmirrorgroup [--nodetype=storage] --primary=13602 --secondary=21702 --groupid=2
  
 [root@cottontail2 ~]# beegfs-ctl --listmirrorgroups
Line 87: Line 140:
 3678
  
-redefine with numtargets=2
+# with numtargets=1 beegfs still writes to all primary targets found in all buddygroups
-resync
+
 +# rebuild test servers from scratch with numtargets=2
 +# drop hmeij/ into home1/ and obtain slightly more files (a couple of hundred), not double the amount
 +# /home/hmeij has 7808 files in it which get split over primaries, but numtargets=2 would yield 15,616 files?
 +# drop another copy in home2/ and file counts double to circa 7808 
 +[root@cottontail2 ~]# beegfs-ctl --getentryinfo  /mnt/beegfs/home1 
 +Path: /home1 
 +Mount: /mnt/beegfs 
 +EntryID: 0-583C50A1-FA 
 +Metadata node: cottontail2 [ID: 250] 
 +Stripe pattern details: 
 ++ Type: Buddy Mirror 
 ++ Chunksize: 512K 
 ++ Number of storage targets: desired: 2 
 +[root@cottontail2 ~]# beegfs-ctl --getentryinfo  /mnt/beegfs/home2 
 +Path: /home2 
 +Mount: /mnt/beegfs 
 +EntryID: 1-583C50A1-FA 
 +Metadata node: cottontail2 [ID: 250] 
 +Stripe pattern details: 
 ++ Type: Buddy Mirror 
 ++ Chunksize: 512K 
 ++ Number of storage targets: desired: 2 
 + 
 +Source: /home/hmeij 7808 files in 10G 
 + 
 +TargetID        Pool        Total         Free    %      ITotal       IFree    % 
 +========        ====        =====         ====    =      ======       =====    = 
 +   13601         low     291.4GiB      63.1GiB  22%       18.5M       18.5M 100% 
 +   13602         low     291.4GiB      63.1GiB  22%       18.5M       18.5M 100% 
 +   21701         low     291.2GiB     134.6GiB  46%       18.5M       16.2M  87% 
 +   21702         low     291.2GiB     134.6GiB  46%       18.5M       16.2M  87% 
 +[root@cottontail2 ~]# rsync -ac --bwlimit=2500 /home/hmeij /mnt/beegfs/home1/ 
 +[root@cottontail2 ~]# rsync -ac --bwlimit=2500 /home/hmeij /mnt/beegfs/home2/ 
 +TargetID        Pool        Total         Free    %      ITotal       IFree    % 
 +========        ====        =====         ====    =      ======       =====    = 
 +   13601         low     291.4GiB      43.5GiB  15%       18.5M       18.5M 100% 
 +   13602         low     291.4GiB      43.5GiB  15%       18.5M       18.5M 100% 
 +   21701         low     291.2GiB     114.9GiB  39%       18.5M       16.1M  87% 
 +   21702         low     291.2GiB     114.9GiB  39%       18.5M       16.1M  87% 
 + 
 +# first rsync drops roughly 5G in both primaries which then get copied to secondaries. 
 +# second rsync does the same so both storage servers lose 20G roughly
 +# now shut a storage server down and the whole filesystem can still be accessed (HA)
  
 </code>
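The two TargetID tables above can be diffed to confirm the mirroring story: buddy targets track each other exactly, and every target loses the same amount of free space after the two rsyncs. A minimal sketch of that accounting (GiB values copied from the tables, scaled by 10 because shell arithmetic is integer-only):

```shell
# free space on target 13601 before/after the two rsyncs, in tenths of GiB
before=631;  after=435          # 63.1GiB -> 43.5GiB
echo "delta 13601: $(( before - after ))"    # 196 -> ~19.6GiB consumed

# free space on target 21701 before/after, same scale
before2=1346; after2=1149       # 134.6GiB -> 114.9GiB
echo "delta 21701: $(( before2 - after2 ))"  # 197 -> ~19.7GiB, buddies in lockstep
```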
Line 284: Line 380:
 Content mirroring will require more disk space. Perhaps snapshots to another node are more useful, which also solves the backup issue.
  
-
+V6 does buddymirror meta mirroring [[http://www.beegfs.com/wiki/MDMirror|External Link]]
 <code>
  
-# enable meta mirroring, directory based
+# 2015.03: enable meta mirroring, directory based
 # change to 11/04/2016: used --createdir to make this home.
 [root@n7 ~]# beegfs-ctl --mirrormd /mnt/beegfs/home
Line 293: Line 389:
 Mount: '/mnt/beegfs'; Path: '/hmeij-mirror'
 Operation succeeded.
 +
 +# V6.1 does it at root level, not from a path
 +beegfs-ctl --addmirrorgroup --nodetype=meta --primary=38 --secondary=39 --groupid=1 
 +beegfs-ctl --addmirrorgroup --nodetype=meta --primary=250 --secondary=37 --groupid=2 
 +beegfs-ctl --mirrormd
  
 # put some new content in
Line 426: Line 527:
   * made easy [[http://www.beegfs.com/wiki/ManualInstallWalkThrough|External Link]]
   * rpms pulled from repository via petaltail in ''greentail:/sanscratch/tmp/beegfs''
 +    * ''yum --disablerepo "*" --enablerepo beegfs list available''
 +    * use ''yumdownloader''
  
 <code>
cluster/151.txt · Last modified: 2016/12/06 15:14 by hmeij07