cluster:151 [2016/10/31 18:45] hmeij07 [Quota Enable and Enforce]

==== cluster idea ====

  * Storage servers: buy 2 now 4k+4k, then a 3rd in July 4k?
  * Move test users over on 2 nodes, test; only change is $HOME
  * Home cluster
    * cottontail (mngt+admingui)
    * 2-3 new units storage (+snapshots/
    * cottontail2 meta + n38-n45 meta, all mirrored

==== beegfs-admin-gui ====

  * ''

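The GUI is a Java jar; a typical way to launch it is sketched below. The jar path is an assumption based on the default beegfs-admin-gui package layout, not taken from this page — verify the location on your install.

<code>
# jar path is an assumption (default beegfs-admin-gui package location)
java -jar /opt/beegfs/beegfs-admin-gui/beegfs-admin-gui.jar
</code>
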
==== Mirror Data ====

<code>
[root@cottontail2 ~]# beegfs-df
METADATA SERVERS:
TargetID
========
      48
      49

STORAGE TARGETS:
TargetID
========
</code>

==== Quota ====

  * [[http://
  * setup XFS
  * enable beegfs quota on all clients
  * enforce quota
  * set quotas using a text file
  * seems straightforward
  * do this BEFORE populating the XFS file systems

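The steps above might look like the following sketch. The config keys and ''beegfs-ctl'' flags follow the BeeGFS quota documentation; the device, paths, user name, and limits are illustrative only — check them against your BeeGFS version before use.

<code>
# 1) XFS must be mounted with quota options (illustrative fstab entry)
/dev/sdb1  /data  xfs  defaults,uquota,gquota  0 0

# 2) enable quota on all clients (beegfs-client.conf)
quotaEnabled = true

# 3) enforce quota on the management server (beegfs-mgmtd.conf)
quotaEnableEnforcement = true

# 4) set limits per user, or bulk-load from a text file
#    (user name, limits, and file path are illustrative)
beegfs-ctl --setquota --uid hmeij07 --sizelimit=1T --inodelimit=2M
beegfs-ctl --setquota --file=/root/quota-limits.txt
</code>
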
==== Mirror Meta ====

Definitely want meta content mirrored; that way I can use the n38-n45 nodes with local 15K disk, plus maybe cottontail2 (RAID 1 with hot and cold spare).

Content mirroring will require more disk space. Perhaps snapshots to another

<code>
# enable meta mirroring, directory based
[root@n7 ~]# beegfs-ctl --mirrormd /
Mount: '/
Operation succeeded.

# put some new content in
[root@n7 ~]# rsync -vac /

# lookup meta tag
[root@n7 ~]# beegfs-ctl --getentryinfo /
Path: /
Mount: /
EntryID: 3-581392E1-31

# find the entry on the first meta server
[root@sharptail ~]# ssh n38 find /
/
^^^^^^
# and find it on the second (mirror) meta server
[root@sharptail ~]# ssh n39 find /
/

# seems to work
</code>

Writing some initial content to both storage and meta servers; vanilla out-of-the-box BeeGFS seems to balance the writes across both equally. Here are some stats.
Looks like:

  * NOTE: "failed to mount /mnt/beegfs" is the result of out-of-space storage servers.

<code>
</code>
  * global interfaces files ib0->
    * connInterfacesFile = /
    * set in /
  * backup beeGFS EA metadata, see faq
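Per the BeeGFS docs, the interfaces file is plain text with one interface name per line, highest priority first. A minimal sketch — the file path here is an assumption, since the actual path above is truncated:

<code>
# e.g. /etc/beegfs/connInterfacesFile (path is an assumption)
# one interface per line, priority order: prefer InfiniBand, fall back to Ethernet
ib0
eth0
</code>
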
  * made easy [[http://
  * rpms pulled from the repository via petaltail in ''

<code>
[root@cottontail ~]# ssh n7 beegfs-net

mgmt_nodes
=============
cottontail [ID: 1]

meta_nodes
=============
n38 [ID: 48]

n39 [ID: 49]

storage_nodes
=============
swallowtail [ID: 136]

petaltail [ID: 217]
</code>

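The rpm install itself is one command per node role. Package names below are the standard BeeGFS rpm names; the repo setup is whatever was mirrored via petaltail, and the node-to-role mapping in the comments follows the ''beegfs-net'' output above:

<code>
# illustrative: install BeeGFS service rpms per node role
yum install beegfs-mgmtd                                 # management (cottontail)
yum install beegfs-meta                                  # metadata (n38, n39)
yum install beegfs-storage                               # storage (swallowtail, petaltail)
yum install beegfs-client beegfs-helperd beegfs-utils    # clients (e.g. n7)
</code>
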
\\
**[[cluster: