A document for me to recall and make notes of what I read in the manual pages and what needs testing.
During the summer of 2016 I investigated whether the HPCC could afford enterprise-level storage. I wanted 99.999% uptime, snapshots, high availability, and other goodies such as parallel NFS. NetApp came the closest but, eh, at $42K lots of other options show up. The story is detailed at The Storage Problem.
This page is best read from the bottom up.
Definitely want Meta content mirrored; that way you use the n38-n45 nodes with local 15K disk, plus maybe cottontail2 (RAID 1 with hot and cold spares).
Content mirroring will require more disk space. Perhaps snapshotting to another node is more useful; it also solves the backup issue.
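The snapshot-to-another-node idea could be sketched with rsync's `--link-dest`, which hard-links unchanged files against the previous snapshot so each run only costs the space of changed files. This is a generic sketch, not tested on the cluster; the paths in the usage comment are placeholders.

```shell
#!/bin/bash
# Hard-link snapshot sketch (hypothetical paths): copy SRC into
# DEST/<stamp>, hard-linking unchanged files against the newest
# previous snapshot so only deltas consume new disk space.
snapshot() {
    local src=$1 dest=$2 stamp=$3
    local prev
    prev=$(ls -1d "$dest"/*/ 2>/dev/null | tail -1)  # newest previous snapshot
    mkdir -p "$dest"
    if [ -n "$prev" ]; then
        rsync -a --link-dest="$prev" "$src"/ "$dest/$stamp"/
    else
        rsync -a "$src"/ "$dest/$stamp"/   # first run: full copy
    fi
}

# e.g. from cron on the snapshot node (placeholder paths):
#   snapshot /mnt/beegfs/hmeij-mirror /data/snapshots "$(date +%F)"
```

With date stamps the lexicographically last directory is the newest snapshot, so `ls | tail -1` is enough to find the link base.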
# enable
[root@n7 ~]# beegfs-ctl --mirrormd /mnt/beegfs/hmeij-mirror
Mount: '/mnt/beegfs'; Path: '/hmeij-mirror'
Operation succeeded.

# put some new content in
[root@n7 ~]# rsync -vac /home/hmeij/iozone-tests /mnt/beegfs/hmeij-mirror/

# lookup meta tag
[root@n7 ~]# beegfs-ctl --getentryinfo /mnt/beegfs/hmeij-mirror/iozone-tests/current.tar
Path: /hmeij-mirror/iozone-tests/current.tar
Mount: /mnt/beegfs
EntryID: 3-581392E1-31

# find
[root@sharptail ~]# ssh n38 find /data/beegfs_meta -name 3-581392E1-31
/data/beegfs_meta/mirror/49.dentries/54/6C/0-581392F0-30/#fSiDs#/3-581392E1-31

# and find
[root@sharptail ~]# ssh n39 find /data/beegfs_meta -name 3-581392E1-31
/data/beegfs_meta/dentries/54/6C/0-581392F0-30/#fSiDs#/3-581392E1-31

# seems to work
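That lookup-then-find dance could be scripted. A small sketch: pull the `EntryID:` field out of `beegfs-ctl --getentryinfo` output with awk and feed it to `find` on the metadata nodes. The awk part is plain text processing; the cluster-side commands are shown as comments since they only work on the nodes themselves.

```shell
# On the cluster you would pipe beegfs-ctl straight into awk:
#   beegfs-ctl --getentryinfo /mnt/beegfs/hmeij-mirror/iozone-tests/current.tar \
#     | awk '/^EntryID:/ {print $2}'
# Sample output copied from the session above:
info='Path: /hmeij-mirror/iozone-tests/current.tar
Mount: /mnt/beegfs
EntryID: 3-581392E1-31'

# extract just the EntryID value (second field of the EntryID: line)
id=$(printf '%s\n' "$info" | awk '/^EntryID:/ {print $2}')
echo "$id"
# then on both metadata nodes:
#   ssh n38 find /data/beegfs_meta -name "$id"
#   ssh n39 find /data/beegfs_meta -name "$id"
```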
Looks like:
# file content
[root@swallowtail ~]# ls -lR /data/beegfs_storage/chunks/u0/57E4/2/169-57E42E75-31
/data/beegfs_storage/chunks/u0/57E4/2/169-57E42E75-31:
total 672
-rw-rw-rw- 1 root root 289442 Jun 26  2015 D8-57E42E89-30
-rw-rw-rw- 1 root root   3854 Jun 26  2015 D9-57E42E89-30
-rw-rw-rw- 1 root root  16966 Jun 26  2015 DA-57E42E89-30
-rw-rw-rw- 1 root root  65779 Jun 26  2015 DB-57E42E89-30
-rw-rw-rw- 1 root root  20562 Jun 26  2015 DF-57E42E89-30
-rw-rw-rw- 1 root root 259271 Jun 26  2015 E0-57E42E89-30
-rw-rw-rw- 1 root root    372 Jun 26  2015 E1-57E42E89-30

[root@petaltail ~]# ls -lR /var/chroots/data/beegfs_storage/chunks/u0/57E4/2/169-57E42E75-31
/var/chroots/data/beegfs_storage/chunks/u0/57E4/2/169-57E42E75-31:
total 144
-rw-rw-rw- 1 root root     40 Jun 26  2015 DC-57E42E89-30
-rw-rw-rw- 1 root root  40948 Jun 26  2015 DD-57E42E89-30
-rw-rw-rw- 1 root root 100077 Jun 26  2015 DE-57E42E89-30

# meta content
[root@sharptail ~]# ssh n38 find /data/beegfs_meta -name 169-57E42E75-31
/data/beegfs_meta/inodes/6A/7E/169-57E42E75-31
/data/beegfs_meta/dentries/6A/7E/169-57E42E75-31
[root@sharptail ~]# ssh n39 find /data/beegfs_meta -name 169-57E42E75-31
(none, no mirror)
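A quick sanity check on chunk listings like the one above: summing the size column of `ls -l` output gives the bytes that node stores for the file, and the per-node totals should add up to the file's full size. A sketch using the swallowtail listing copied from the session (on the cluster you would pipe `ls -l` of the chunk directory into the same awk):

```shell
# Sum the 5th (size) column of ls -l style output.
# Chunk listing from swallowtail, copied from above:
chunks='-rw-rw-rw- 1 root root 289442 Jun 26 2015 D8-57E42E89-30
-rw-rw-rw- 1 root root 3854 Jun 26 2015 D9-57E42E89-30
-rw-rw-rw- 1 root root 16966 Jun 26 2015 DA-57E42E89-30
-rw-rw-rw- 1 root root 65779 Jun 26 2015 DB-57E42E89-30
-rw-rw-rw- 1 root root 20562 Jun 26 2015 DF-57E42E89-30
-rw-rw-rw- 1 root root 259271 Jun 26 2015 E0-57E42E89-30
-rw-rw-rw- 1 root root 372 Jun 26 2015 E1-57E42E89-30'

total=$(printf '%s\n' "$chunks" | awk '{sum += $5} END {print sum}')
echo "$total"   # bytes stored on this node for these chunks
```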
[root@n7 ~]# beegfs-ctl --getentryinfo /mnt/beegfs/
Path:
Mount: /mnt/beegfs
EntryID: root
Metadata node: n38 [ID: 48]
Stripe pattern details:
+ Type: RAID0
+ Chunksize: 512K
+ Number of storage targets: desired: 4
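With RAID0 striping, 512K chunks, and 4 targets, byte offsets map onto chunks and then round-robin onto targets. This is the generic striping arithmetic, not BeeGFS's actual target selection (which also depends on the per-file stripe target list), but it shows the idea:

```shell
# Which round-robin slot holds a given byte offset, for
# chunksize 512K and 4 storage targets (generic stripe math).
chunksize=$((512 * 1024))            # 524288 bytes
targets=4
offset=$((3 * 1024 * 1024))          # byte 3 MiB into the file
chunk_index=$((offset / chunksize))  # 3 MiB / 512 KiB = chunk 6
slot=$((chunk_index % targets))      # 6 % 4 = slot 2
echo "chunk $chunk_index on target slot $slot"
```

So consecutive 512K chunks land on targets 0, 1, 2, 3, 0, 1, ... which is why large sequential reads spread across all four storage servers.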
[root@cottontail ~]# ssh n7 beegfs-net

mgmt_nodes
=============
cottontail [ID: 1]
   Connections: TCP: 1 (10.11.103.253:8008);

meta_nodes
=============
n38 [ID: 48]
   Connections: TCP: 1 (10.11.103.48:8005);
n39 [ID: 49]
   Connections: TCP: 2 (10.11.103.49:8005);

storage_nodes
=============
swallowtail [ID: 136]
   Connections: TCP: 1 (192.168.1.136:8003 [fallback route]);
petaltail [ID: 217]
   Connections: <none>
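The `beegfs-net` output lends itself to a quick health check: flag any node whose Connections line reads `<none>`, like petaltail above. A sketch in awk over a captured fragment of that output (on the cluster, pipe `beegfs-net` straight into the same awk):

```shell
# Print node names whose Connections line shows <none>.
# Fragment copied from the beegfs-net output above:
netout='storage_nodes
=============
swallowtail [ID: 136]
Connections: TCP: 1 (192.168.1.136:8003 [fallback route]);
petaltail [ID: 217]
Connections: <none>'

down=$(printf '%s\n' "$netout" | awk '
/\[ID: [0-9]+\]/     { node = $1 }      # remember last node name seen
/Connections: <none>/ { print node }    # report it if it has no connections
')
echo "$down"
```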