cluster:151 revisions 2016/10/28 17:06 and 2016/10/31 13:44 by hmeij07 [Installation]
==== cluster idea ====

  * Storage servers: buy 2 now 4k+4k, then a 3rd in July 4k?
  * Move test users over on 2 nodes, test; only change is $HOME
  * Home cluster
    * cottontail (mngt+admingui)
    * 2-3 new units storage (+snapshots/
    * cottontail2 meta + n38-n45 meta, all mirrored
==== Mirror Meta ====

Definitely want Meta content mirrored; that way you use the n38-n45 nodes with local 15K disk, plus maybe cottontail2 (RAID 1 with hot and cold spare).

Content mirroring will require more disk space. Perhaps snapshotting to another node is more useful; that also solves the backup issue.
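The snapshot-to-another-node idea can be sketched with rsync's ''--link-dest'', which hard-links unchanged files against the previous snapshot, so each new snapshot only costs the changed files. A minimal local sketch (temp directories stand in for the remote node; all paths are made up):

```shell
# Hypothetical snapshot rotation: snap.2 hard-links unchanged files from snap.1
src=$(mktemp -d); snaps=$(mktemp -d)
echo "data" > "$src/f"

# first snapshot: a plain copy
rsync -a "$src/" "$snaps/snap.1/"

# second snapshot: unchanged files become hard links into snap.1
rsync -a --link-dest="$snaps/snap.1" "$src/" "$snaps/snap.2/"

# the unchanged file shares one inode between the two snapshots
[ "$(stat -c %i "$snaps/snap.1/f")" = "$(stat -c %i "$snaps/snap.2/f")" ] && echo "hard-linked"
rm -rf "$src" "$snaps"
```

Rotating a handful of such snapshot directories on a second node would double as the backup the paragraph above asks for.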
<code>
+ | |||
+ | # enable | ||
+ | [root@n7 ~]# beegfs-ctl --mirrormd / | ||
+ | Mount: '/ | ||
+ | Operation succeeded. | ||
+ | |||
+ | # put some new content in | ||
+ | [root@n7 ~]# rsync -vac / | ||
+ | |||
+ | # lookup meta tag | ||
+ | [root@n7 ~]# beegfs-ctl --getentryinfo / | ||
+ | Path: / | ||
+ | Mount: /mnt/beegfs | ||
+ | EntryID: 3-581392E1-31 | ||
+ | |||
+ | # find | ||
+ | [root@sharptail ~]# ssh n38 find / | ||
+ | / | ||
+ | |||
+ | # and find | ||
+ | [root@sharptail ~]# ssh n39 find / | ||
+ | / | ||
+ | |||
+ | # seems to work | ||
+ | |||
</code>
==== / ====
  * Source content
    * /
  * File content
    * petaltail:/
    * swallowtail:/
    * 56G used in beegfs-storage per storage server
  * Meta content
    * 338MB per beegfs-meta server, so ~0.6% space-wise for 2 servers
  * Client (n7 and n8) see 110G in /
Looks like:

<code>

# file content

[root@swallowtail ~]# ls -lR /
/
total 672
-rw-rw-rw- 1 root root 289442 Jun 26  2015 D8-57E42E89-30
-rw-rw-rw- 1 root root   3854 Jun 26  2015 D9-57E42E89-30
-rw-rw-rw- 1 root root  16966 Jun 26  2015 DA-57E42E89-30
-rw-rw-rw- 1 root root  65779 Jun 26  2015 DB-57E42E89-30
-rw-rw-rw- 1 root root  20562 Jun 26  2015 DF-57E42E89-30
-rw-rw-rw- 1 root root 259271 Jun 26  2015 E0-57E42E89-30
-rw-rw-rw- 1 root root    372 Jun 26  2015 E1-57E42E89-30

[root@petaltail ~]# ls -lR /
/
total 144
-rw-rw-rw- 1 root root     40 Jun 26  2015 DC-57E42E89-30
-rw-rw-rw- 1 root root  40948 Jun 26  2015 DD-57E42E89-30
-rw-rw-rw- 1 root root 100077 Jun 26  2015 DE-57E42E89-30

# meta content

[root@sharptail ~]# ssh n38 find /
/
/

[root@sharptail ~]# ssh n39 find /
(none, no mirror)

</code>
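The meta-to-data ratio quoted above works out to roughly 0.006 as a fraction, i.e. about 0.6%: two meta servers at ~338MB each against the ~110G the clients see. A quick back-of-the-envelope check using only the numbers from this page:

```shell
# 2 x 338MB of beegfs-meta versus ~110G visible to clients
awk -v meta_mb=338 -v servers=2 -v data_gb=110 'BEGIN {
    ratio = (meta_mb * servers) / (data_gb * 1024)   # both sides in MB
    printf "meta/data: %.4f (%.2f%%)\n", ratio, ratio * 100
}'
# prints: meta/data: 0.0060 (0.60%)
```

So sizing the meta targets at well under 1% of the storage capacity looks safe for this workload.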
==== Tuning ====
  * global interfaces files ib0->
    * connInterfacesFile = /
    * set in /
  * backup beeGFS EA metadata, see faq
  * no such values for sdb1
  * can only find min_free_kbytes,
  * stripe and chunk size

<code>

[root@n7 ~]# beegfs-ctl --getentryinfo /
Path:
Mount: /mnt/beegfs
EntryID: root
Metadata node: n38 [ID: 48]
Stripe pattern details:
+ Type: RAID0
+ Chunksize: 512K
+ Number of storage targets: desired: 4

</code>
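Given the RAID0 pattern above, a file is cut into 512K chunks dealt round-robin across the 4 storage targets. A small illustration of the arithmetic, using a hypothetical 10 MiB file:

```shell
# how a 10 MiB file splits under a 512K chunk size on 4 storage targets
awk -v size=$((10*1024*1024)) -v chunk=$((512*1024)) -v targets=4 'BEGIN {
    chunks = int((size + chunk - 1) / chunk)      # ceil(size/chunk)
    printf "chunks: %d, per target: %d\n", chunks, chunks / targets
}'
# prints: chunks: 20, per target: 5
```

Larger chunk sizes mean fewer, bigger requests per target, which tends to favor streaming I/O over small random reads.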
  * The cache type can be set in the client config file (/
    * buffered is the default, a few 100k per file
  * tuneNumWorkers in all /
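A hedged sketch of the two client-side knobs just mentioned, assuming the stock config path ''/etc/beegfs/beegfs-client.conf'' (the actual paths are truncated in this page); the tuneNumWorkers value is an example, and similar tuneNumWorkers settings exist in the meta and storage service configs:

```
# /etc/beegfs/beegfs-client.conf (assumed path)
tuneFileCacheType = buffered   # default cache mode; buffers a few 100k per file
tuneNumWorkers    = 8          # example value; worker threads for this service
```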
  * made easy [[http://
  * rpms pulled from repository via petaltail in ''
+ | |||
+ | < | ||
+ | |||
[root@cottontail ~]# ssh n7 beegfs-net

mgmt_nodes
=============
cottontail [ID: 1]

meta_nodes
=============
n38 [ID: 48]
n39 [ID: 49]

storage_nodes
=============
swallowtail [ID: 136]
petaltail [ID: 217]

</code>
\\
**[[cluster: