A document for me to recall and make notes of what I read in the manual pages and what needs testing.
Basically during the Summer of 2016 I investigated whether the HPCC could afford enterprise level storage. I wanted 99.999% uptime, snapshots, high availability and other goodies such as parallel NFS. Netapp came the closest but, eh, at $42K lots of other options show up. That story is detailed at [[cluster:...]].
This page is best read from the bottom up.
==== beeGFS cluster idea ====
  * Storage servers:
    * buy 2 with each 12x2TB slow disk, Raid 6, 20T usable (clustered, parallel file system)
    * create 2 6TB volumes on each, quota at 2TB via XFS, 3 users/volume
    * only $HOME changes to ''/mnt/beegfs/...''
    * create 2 buddymirrors; ... (see the sketch after this list)
    * on UPS
    * on Infiniband

  * Client servers:
    * all compute/...

  * Meta servers:
    * cottontail2 (root meta, on Infiniband) plus n38-n45 nodes (on Infiniband)
    * all mirrored (total=9)
    * cottontail2 on UPS

  * Management and Monitor servers
    * cottontail (on UPS, on Infiniband)

  * Backups (rsnapshot.org via rsync daemons [[cluster:...]])
    * sharptail:/...
    * serverA:/mnt/...
    * serverB:/...

  * Costs (includes 3 year NBD warranty)
    * Microway $12,500
    * CDW ...
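A rough sketch of how those two buddymirrors might be defined once the storage targets are registered; the target and group ids below are made up and the flags are as I recall them from the v6 manual pages, so verify with ''beegfs-ctl --addmirrorgroup --help'' and ''beegfs-ctl --setpattern --help'' before running:

<code>
# target ids (101,102,201,202) and group ids are hypothetical placeholders
beegfs-ctl --addmirrorgroup --nodetype=storage --primary=101 --secondary=201 --groupid=1
beegfs-ctl --addmirrorgroup --nodetype=storage --primary=102 --secondary=202 --groupid=2

# buddy-mirror the home directories; numtargets/chunksize as in the tests further down
beegfs-ctl --setpattern --buddymirror --numtargets=2 --chunksize=512k /mnt/beegfs/home1
beegfs-ctl --setpattern --buddymirror --numtargets=2 --chunksize=512k /mnt/beegfs/home2
</code>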
==== beegfs-admin-gui ====

  * ''...''
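For the record, a sketch of launching the admon GUI; the jar path is an assumption based on where the ''beegfs-admon-gui'' package normally installs, so adjust it if the package landed elsewhere:

<code>
# jar path is an assumption, not verified on this cluster
java -jar /opt/beegfs/beegfs-admon-gui/beegfs-admon-gui.jar
</code>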

==== upgrade ====

  * [[http://...]]
  * New feature - High Availability for Metadata Servers (self-healing, ...)

A bit complicated.

  * Repo base URL baseurl=http://...
  * [ ] beegfs-mgmtd-6.1-el6.x86_64.rpm
  * ''...''
  * beegfs-mgmtd.x86_64 ...
  * ''...''
  * http://...

So the wget/rpm approach (list all packages present on a particular node, else you will get a dependency failure!):

<code>

# get them all
wget http://...

# client and meta node
rpm -Uvh ./...

# updated?
[root@cottontail2 beegfs_6]# beegfs-ctl | head -2
BeeGFS Command-Line Control Tool (http://...)
Version: 6.1

# Sheeesh
</code>

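Presumably the same upgrade can also ride on yum once the 6 series repo file is in place; a sketch, assuming the repo lives under ''/etc/yum.repos.d/'' and that only the beegfs packages actually installed on that node get named:

<code>
# hypothetical yum route; adjust the package list per node (client, meta, storage, mgmtd)
yum clean all
yum update beegfs-client beegfs-helperd beegfs-utils beegfs-meta
</code>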
==== Resync Data #2 ====

<code>
3678

# with numtargets=1 beegfs still writes to all primary targets found in all buddygroups
# rebuild test servers from scratch with numparts=2

# /home/hmeij has 7808 files in it which gets split over primaries but numparts=2 would yield 15,616 files?
# drop another copy in home2/ and file counts double to circa 7808
[root@cottontail2 ~]# beegfs-ctl --getentryinfo /mnt/beegfs/home1
Path: /home1
Mount: /mnt/beegfs
EntryID: ...
Metadata node: cottontail2 [ID: 250]
Stripe pattern details:
+ Type: Buddy Mirror
+ Chunksize: 512K
+ Number of storage targets: desired: 2
[root@cottontail2 ~]# beegfs-ctl --getentryinfo /mnt/beegfs/home2
Path: /home2
Mount: /mnt/beegfs
EntryID: 1-583C50A1-FA
Metadata node: cottontail2 [ID: 250]
Stripe pattern details:
+ Type: Buddy Mirror
+ Chunksize: 512K
+ Number of storage targets: desired: 2

Source: /home/hmeij 7808 files in 10G

TargetID ...
======== ...
[root@cottontail2 ~]# rsync -ac --bwlimit=2500 /home/hmeij /mnt/beegfs/home1/
[root@cottontail2 ~]# rsync -ac --bwlimit=2500 /home/hmeij /mnt/beegfs/home2/
TargetID ...
======== ...
# first rsync drops roughly 5G in both primaries which then get copied to secondaries.
# second rsync does the same so both storage servers lose 20G roughly
# now shut a storage server down and the whole filesystem can still be accessed (HA)
</code>
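Once the downed storage server comes back, the buddy groups have to resync before both copies are current again. A sketch of how I would check and nudge that along with the v6 ''beegfs-ctl'' modes (targetid 101 is a hypothetical placeholder):

<code>
# list the storage buddy groups and which targets they contain
beegfs-ctl --listmirrorgroups --nodetype=storage

# per-target consistency/reachability state (Good / Needs-resync / Bad)
beegfs-ctl --listtargets --nodetype=storage --state

# manually trigger a resync of a secondary and watch its progress
beegfs-ctl --startresync --nodetype=storage --targetid=101
beegfs-ctl --resyncstats --nodetype=storage --targetid=101
</code>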
  * made easy [[http://...]]
  * rpms pulled from repository via petaltail in ''...''
  * ''...''
  * use ''...''

<code>