Differences

This shows you the differences between two versions of the page.

cluster:151 [2016/11/29 11:20]
hmeij07 [Resync Data #2]
cluster:151 [2016/12/06 15:14] (current)
hmeij07 [beeGFS cluster idea]
Line 6: Line 6:
 A document for me to recall and make notes of what I read in the manual pages and what needs testing.
  
-Basically during the Summer of 2016 I investigated if the HPCC could afford enterprise level storage. I wanted 99.999% uptime, snapshots, high availability and other goodies such as parallel NFS. Netapp came the closest but, eh, still at $42K lots of other options show up. The story is detailed here at [[cluster:149|The Storage Problem]]
 +Basically during the Summer of 2016 I investigated if the HPCC could afford enterprise level storage. I wanted 99.999% uptime, snapshots, high availability and other goodies such as parallel NFS. Netapp came the closest but, eh, still at $42K lots of other options show up. That story is detailed at [[cluster:149|The Storage Problem]]
  
 This page is best read from the bottom up.
  
-==== cluster idea ====
 +NOTE:
  
-  * Storage servers: buy now 4k+4k then 3rd in July 4k?
 +I'm reluctantly giving up on beegfs, especially v6.1, it is simply flaky. In the admon gui I can see storage nodes, 4 storage objects, 4 meta servers with clients installed on all meta. /mnt/beegfs is there and content can be created. Then I mirror storage nodes, all is fine. Then I mirror meta servers and the mirrors set up, enabling mirrormd states success. Then the whole environment hangs on /mnt/beegfs. My sense is helperd is not communicating well in a private network environment with no DNS and does not consult /etc/hosts. But I have nothing to back that up with, so I cannot fix it (see the connectivity check sketched below).
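 +
 +Since the suspicion is name resolution (no DNS on the private network, /etc/hosts possibly ignored), a minimal sanity check on a client or meta node would look like the sketch below; the management hostname is a guess taken from the plan further down, everything else is standard beegfs-utils, and none of it was run here.
 +
 +<code>
 +# does the OS resolver itself know the management host?
 +getent hosts cottontail
 +
 +# are the mgmtd/meta/storage daemons reachable from this node?
 +beegfs-check-servers
 +beegfs-net
 +
 +# if hostnames are the problem, pinning sysMgmtdHost to an IP address in the
 +# configs sidesteps DNS and /etc/hosts entirely
 +grep sysMgmtdHost /etc/beegfs/beegfs-client.conf /etc/beegfs/beegfs-meta.conf
 +</code>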
  
-  * move test users over on 2 nodes, test, only change is $HOME
 +Back to adding more XFS into my cluster. I'll wait a few more versions.
 + --- //[[hmeij@wesleyan.edu|Henk]] 2016/12/06 15:10// 
 +==== beeGFS cluster idea ====
  
-  * Home cluster
 +  * Storage servers:
-    * cottontail (mngt+admingui)
 +    * buy 2 with each 12x2TB slow disk, Raid 6, 20T usable (clustered, parallel file system)
-    * 2-new units storage (+snapshots/meta backup)
 +      * create 6TB volumes on each, quota at 2TB via XFS, users/server (see the quota sketch after this list)
-    * cottontail2 meta n38-n45 meta, all mirrored
 +      * only $HOME changes to ''/mnt/beegfs/home[1|2]'' (migrates ~4.5TB away from /home or ~50%)
 +      * create 2 buddymirrors; each with primary on one, secondary on the other server (high availability) 
 +    * on UPS 
 +    * on Infiniband 
 + 
 +  * Client servers: 
 +    * all compute/login nodes become beegfs clients 
 + 
 +  * Meta servers: 
 +    * cottontail2 (root meta, on Infiniband) plus n38-n45 nodes (on Infiniband) 
 +    * all mirrored (total=9) 
 +    * cottontail2 on UPS  
 + 
 +  * Management and Monitor servers 
 +    * cottontail (on UPS, on Infiniband) 
 + 
 +  * Backups (rsnapshot.org via rsync daemons [[cluster:150|Rsync Daemon/Rsnapshot]]) 
 +    * sharptail:/home --> cottontail 
 +    * serverA:/mnt/beegfs/home1 --> serverB (8TB max) 
 +    * serverB:/mnt/beegfs/home2 --> serverA (8TB max) 
 + 
 +  * Costs (includes 3 year NBD warranty) 
 +    * Microway $12,500 
 +    * CDW $14,700
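 +
 +The 2TB-per-user cap on those 6TB volumes would be plain XFS user quotas underneath BeeGFS, nothing beegfs-specific; a rough sketch on one storage server, with a made-up device, mount point and username:
 +
 +<code>
 +# mount the storage volume with user quota accounting enabled
 +mount -o uquota /dev/sdb1 /data/beegfs-storage1
 +
 +# give an example user a 2TB hard block limit on that filesystem
 +xfs_quota -x -c 'limit bhard=2t hmeij' /data/beegfs-storage1
 +
 +# per-user usage report
 +xfs_quota -x -c 'report -h' /data/beegfs-storage1
 +</code>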
  
 ==== beegfs-admin-gui ====
  
   * ''cottontail:/usr/local/bin/beegfs-admin-gui''
 +
 +==== upgrade ====
 +
 +  * [[http://www.beegfs.com/content/updating-upgrading-and-versioning/|External Link]]
 +  * New feature - High Availability for Metadata Servers (self-healing, transparent failover)
 +
 +A bit complicated. 
 +
 +  * Repo base URL baseurl=http://www.beegfs.com/release/beegfs_6/dists/rhel6 via http shows only 6.1-el6
 +    * [   ] beegfs-mgmtd-6.1-el6.x86_64.rpm          2016-11-16 16:27  660K 
 +  * ''yum --disablerepo "*" --enablerepo beegfs repolist'' shows
 +    * beegfs-mgmtd.x86_64                            2015.03.r22-el6            beegfs
 +  * ''yum install --disablerepo "*" --enablerepo beegfs --downloadonly --downloaddir=/sanscratch/tmp/beegfs/beegfs_6/ *x86_64* -y''
 +   * http://www.beegfs.com/release/beegfs_6/dists/rhel6/x86_64/beegfs-mgmtd-2015.03.r22-el6.x86_64.rpm: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found" <-- wrong package version
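 +
 +If the stale 2015.03 entries are just cached repo metadata, clearing the cache for this one repo and listing again would confirm it; standard yum commands, not actually tried here:
 +
 +<code>
 +# throw away cached metadata for the beegfs repo only, then re-read it
 +yum --disablerepo "*" --enablerepo beegfs clean metadata
 +yum --disablerepo "*" --enablerepo beegfs list available | grep beegfs
 +</code>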
 +
 +
 +So the wget/rpm approach it is (list all beegfs packages already present on a particular node in a single ''rpm -Uvh'' command, else you will get a dependency failure!)
 +
 +<code>
 +
 +# get them all
 +wget http://www.beegfs.com/release/beegfs_6/dists/rhel6/x86_64/beegfs-mgmtd-6.1-el6.x86_64.rpm
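 +# (the other 6.1 rpms installed below -- common, utils, opentk-lib, helperd,
 +#  client, meta -- come from the same dists/rhel6/x86_64 directory)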
 +
 +# client and meta node
 +rpm -Uvh ./beegfs-common-6.1-el6.noarch.rpm ./beegfs-utils-6.1-el6.x86_64.rpm ./beegfs-opentk-lib-6.1-el6.x86_64.rpm ./beegfs-helperd-6.1-el6.x86_64.rpm ./beegfs-client-6.1-el6.noarch.rpm ./beegfs-meta-6.1-el6.x86_64.rpm
 +
 +# updated?
 +[root@cottontail2 beegfs_6]# beegfs-ctl | head -2
 +BeeGFS Command-Line Control Tool (http://www.beegfs.com)
 +Version: 6.1
 +
 +#Sheeesh
 +</code>
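 +
 +After swapping rpms like that I would restart the local beegfs daemons and double-check what each node actually runs; which init scripts exist differs per node, so the example below (a meta node) is illustrative only:
 +
 +<code>
 +# confirm installed package versions on this node
 +rpm -qa | grep beegfs
 +
 +# restart whatever beegfs services this node runs (example: a meta node)
 +service beegfs-helperd restart
 +service beegfs-meta restart
 +service beegfs-client restart
 +</code>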
 +
  
 ==== Resync Data #2 ====
Line 55: Line 114:
  
 # define mirrorgroups
-[root@cottontail2 ~]# beegfs-ctl --addmirrorgroup --primary=21701 --secondary=13601 --groupid=1
 +[root@cottontail2 ~]# beegfs-ctl --addmirrorgroup [--nodetype=storage] --primary=21701 --secondary=13601 --groupid=1
-[root@cottontail2 ~]# beegfs-ctl --addmirrorgroup --primary=13602 --secondary=21702 --groupid=2
 +[root@cottontail2 ~]# beegfs-ctl --addmirrorgroup [--nodetype=storage] --primary=13602 --secondary=21702 --groupid=2
  
 [root@cottontail2 ~]# beegfs-ctl --listmirrorgroups
Line 128: Line 187:
    21701         low     291.2GiB     114.9GiB  39%       18.5M       16.1M  87%
    21702         low     291.2GiB     114.9GiB  39%       18.5M       16.1M  87%
 +
 +# first rsync drops roughly 5G in both primaries which then get copied to secondaries.
 +# second rsync does the same so both storage servers lose 20G roughly
 +# now shut a storage server down and the whole filesystem can still be accessed (HA)
  
 </code>
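 +
 +The same high-availability check can be done from the command line instead of the admon gui: with one storage server powered off, the target states and capacity should still be queryable (standard beegfs-utils commands):
 +
 +<code>
 +# reachability and consistency state of every storage target
 +beegfs-ctl --listtargets --nodetype=storage --state
 +
 +# capacity overview of meta and storage targets
 +beegfs-df
 +</code>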
Line 323: Line 386:
 Content mirroring will require more disk space. Perhaps snapshots to another node is more useful, also solves backup issue.
  
-
 +V6 does buddymirror meta mirroring [[http://www.beegfs.com/wiki/MDMirror|External Link]]
 <code>
  
-# enable meta mirroring, directory based
 +# 2015.03 enable meta mirroring, directory based
 # change to 11/04/2016: used --createdir to make this home.
 [root@n7 ~]# beegfs-ctl --mirrormd /mnt/beegfs/home
Line 332: Line 395:
 Mount: '/mnt/beegfs'; Path: '/hmeij-mirror'
 Operation succeeded.
 +
 +# V6.1 does it at root level, not from a path
 +beegfs-ctl --addmirrorgroup --nodetype=meta --primary=38 --secondary=39 --groupid=1 
 +beegfs-ctl --addmirrorgroup --nodetype=meta --primary=250 --secondary=37 --groupid=2 
 +beegfs-ctl --mirrormd
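 +# note: check the MDMirror page linked above for any service restarts or client
 +# remounts required after enabling global metadata mirroring -- possibly related
 +# to the /mnt/beegfs hang described in the NOTE at the top of this page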
  
 # put some new content in
Line 465: Line 533:
   * made easy [[http://www.beegfs.com/wiki/ManualInstallWalkThrough|External Link]]
   * rpms pulled from repository via petaltail in ''greentail:/sanscratch/tmp/beegfs''
 +    * ''yum --disablerepo "*" --enablerepo beegfs list available''
 +    * use ''yumdownloader''
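 +
 +A hedged example of that yumdownloader step (repo name and destination directory taken from above, the package glob is a guess):
 +
 +<code>
 +# list what the repo offers, then fetch rpms without installing them
 +yum --disablerepo "*" --enablerepo beegfs list available
 +yumdownloader --disablerepo "*" --enablerepo beegfs --destdir /sanscratch/tmp/beegfs "beegfs*"
 +</code>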
  
 <code>