cluster:151 [2016/11/04 15:04] hmeij07
  * 2-3 new units storage (+snapshots/
  * cottontail2 meta + n38-n45 meta, all mirrored

==== beegfs-admin-gui ====

  * ''

==== Mirror Data ====

When not all storage servers are up, client mounts will fail. This is just an optional "

In order to be able to take a storage server offline without any impact, all content needs to be mirrored.

** Before **

<code>
[root@cottontail2 ~]# beegfs-df
METADATA SERVERS:
TargetID
========
48
49

STORAGE TARGETS:
TargetID
========

</code>

** Mirror **

<code>
# define buddy group - these are storage target IDs
[root@n7 ~]# beegfs-ctl --addmirrorgroup --primary=13601 --secondary=21701 --groupid=101
Mirror buddy group successfully set: groupID 101 -> target IDs 13601, 21701

[root@n7 ~]# beegfs-ctl --listmirrorgroups

101

# enable mirroring for data by directory; numTargets needs to be set to max nr of storage servers?
# changed 11/02/2016:
[root@n7 ~]# beegfs-ctl --setpattern --buddymirror /
[root@n7 ~]# beegfs-ctl --setpattern --buddymirror /
New chunksize: 524288
New number of storage targets: 2
Path: /
Mount: /mnt/beegfs

# copy some contents in (~hmeij is 10G)
[root@n7 ~]# rsync -vac --bwlimit /home/hmeij /

</code>

** After **

<code>
[root@n7 ~]# beegfs-df

METADATA SERVERS: (almost no changes...)
STORAGE TARGETS: (each target less circa 10G)
TargetID
========

# let's find an object
[root@n7 ~]# beegfs-ctl --getentryinfo /
Path: /
Mount: /mnt/beegfs
EntryID: 178-581797C8-30
Metadata node: n38 [ID: 48]
Stripe pattern details:
+ Type: Buddy Mirror
+ Chunksize: 512K
+ Number of storage targets: desired: 2; actual: 1
+ Storage mirror buddy groups:
+ 101

# original
[root@n7 ~]# ls -lh /
-rwxr-xr-x 1 hmeij its 4.9G 2014-04-07 13:39 /

# copy on primary
[root@petaltail chroots]# ls -lh /
-rw-rw-rw- 1 root root 4.9G Apr 7 2014 /

# copy on secondary
[root@swallowtail ~]# find /
/
[root@swallowtail ~]# ls -lh /
-rw-rw-rw- 1 root root 4.9G Apr 7 2014 /

# seems to work, notice the ''

</code>

Here is an important note, from the community list:

  * "
  * so the important line that tells you that this file is mirrored is "Type: Buddy Mirror"
  * "
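The check described in that note is easy to script: grep the ''--getentryinfo'' output for the stripe pattern type. The helper below is hypothetical (not a beegfs tool), and sample output from the session above is embedded so the snippet runs standalone; on a live system pipe ''beegfs-ctl --getentryinfo <path>'' in instead.

```shell
# check_mirrored: reads getentryinfo-style output on stdin and reports
# whether the stripe pattern type indicates buddy mirroring.
check_mirrored() {
  if grep -q "Type: Buddy Mirror"; then echo mirrored; else echo not-mirrored; fi
}

# embedded sample (copied from the session above)
cat <<'EOF' | check_mirrored
Stripe pattern details:
+ Type: Buddy Mirror
+ Chunksize: 512K
+ Number of storage targets: desired: 2; actual: 1
EOF
# prints: mirrored
```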

==== Quota ====

  * [[http://
  * setup XFS
  * enable beegfs quota on all clients
  * enforce quota
  * set quotas using a text file
  * seems straightforward
  * do BEFORE populating XFS file systems
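A sketch of the "set quotas using a text file" step. The one-user-per-line format and the ''--loadfile'' flag are assumptions based on ''beegfs-ctl --setquota'' help text; verify them against the installed beegfs-utils version before use.

```shell
# Hypothetical quota file: uid,sizelimit,inodelimit per line (assumed
# format, not verified against this beegfs version).
cat > /tmp/quota_limits.txt <<'EOF'
hmeij,100G,1M
apache,10G,100K
EOF
wc -l < /tmp/quota_limits.txt

# On a host with beegfs-utils one would then run (not executed here):
#   beegfs-ctl --setquota --loadfile=/tmp/quota_limits.txt --mount=/mnt/beegfs
```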
==== Mirror Meta ====

Definitely

Content mirroring will require more disk space. Perhaps snapshots to another node are more useful; that also solves the backup issue.
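A minimal illustration of the snapshot idea, using GNU ''cp -al'' hard-link copies on the snapshot node (paths here are hypothetical): unchanged files share their data blocks with the previous generation, so keeping several snapshots costs little extra space.

```shell
# cp -al builds a directory tree of hard links: a second "copy" of an
# unchanged file points at the same data, so only changed files use space.
mkdir -p /tmp/bgdemo/live
echo "data" > /tmp/bgdemo/live/f
cp -al /tmp/bgdemo/live /tmp/bgdemo/snap-2016-11-04
stat -c %h /tmp/bgdemo/live/f   # link count is now 2
```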
<code>
# enable
# change to 11/04/2016: used --createdir to make this home.
[root@n7 ~]# beegfs-ctl --mirrormd /
[root@n7 ~]# beegfs-ctl --mirrormd /
Mount: '/
[root@sharptail ~]# ssh n38 find /
/
^^^^^^
# and find
[root@sharptail ~]# ssh n39 find /
</code>

Writing some initial content to both storage and meta servers, vanilla out-of-the-box beegfs seems to balance the writes across both equally. Here are some stats.
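That balance can be spot-checked by computing used%% per storage target from ''beegfs-df''-style numbers. The column layout below (TargetID, total GB, free GB) is a simplified stand-in, not the exact ''beegfs-df'' format; on a live system pipe real ''beegfs-df'' output in and adjust the awk field numbers.

```shell
# Computes used% per storage target; the embedded sample numbers show
# two targets written to almost equally (assumed simplified columns:
# TargetID total_GB free_GB).
cat <<'EOF' | awk '{ printf "%s %.0f%%\n", $1, 100 * ($2 - $3) / $2 }'
13601 1000 490
21701 1000 510
EOF
# prints:
# 13601 51%
# 21701 49%
```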

==== /

Looks like:

  * NOTE: "failed to mount /mnt/beegfs" is the result of out-of-space storage servers.
  * global interfaces files ib0->
  * connInterfacesFile = /
  * set in /
  * backup beeGFS EA metadata, see faq
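For the ''connInterfacesFile'' bullets above: the file simply lists one interface name per line, highest priority first, so daemons and clients prefer ib0 and fall back to eth0. The ''/tmp'' path below is an illustration; deploy the file at whatever path ''connInterfacesFile'' is set to in the beegfs config files.

```shell
# One interface name per line, in order of preference: InfiniBand first,
# ethernet as fallback. Written to /tmp only for this illustration.
cat > /tmp/connInterfacesFile <<'EOF'
ib0
eth0
EOF
head -1 /tmp/connInterfacesFile   # prints: ib0
```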
  * made easy [[http://
  * rpms pulled from repository via petaltail in ''
<code>
=============
cottontail [ID: 1]

meta_nodes
=============
n38 [ID: 48]

n39 [ID: 49]

storage_nodes
=============
swallowtail [ID: 136]

petaltail [ID: 217]

</code>
\\
**[[cluster: