A document for me to recall what I read in the manual pages, make notes, and track what needs testing.
Basically, during the summer of 2016 I investigated whether the HPCC could afford enterprise-level storage. I wanted 99.999% uptime, snapshots, high availability, and other goodies such as parallel NFS. NetApp came the closest but, eh, at $42K lots of other options show up. The story is detailed at The Storage Problem.
This page is best read from the bottom up.
idea: buy 2 units now ($4K + $4K), then a 3rd in July ($4K)?
  * move test users over on 2 nodes and test; the only change for them is $HOME
  * ctt = management + admin GUI, 2 new units = storage (+ snapshots/meta backup), ctt2 = meta, with n38/n39 as backup meta
make ctt2 master meta node? how?
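No answer yet; a first step is to see what BeeGFS itself reports. A minimal sketch using stock beegfs-ctl commands (the mount point and node names are the ones from these notes):

  # which nodes run beegfs-meta, with their numeric node IDs
  beegfs-ctl --listnodes --nodetype=meta --nicdetails
  # which meta node owns the root directory (the "Metadata node:" line; n38 in the output at the bottom of this page)
  beegfs-ctl --getentryinfo /mnt/beegfs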
source
  * 110G in XFS, ~100,000 files in ~2,000 dirs
  * /home/hmeij (mix of files, nothing large) + /home/fstarr/filler (lots of tiny files)

storage, spread across 2 storage servers
  * 56G in beegfs-storage per storage server
  * ~92,400 files per storage server
  * ~1,400 dirs per storage server, mostly in the "chunks" dir

meta, spread across 2 meta servers
  * 338MB per beegfs-meta server, so ~0.6% of the data space-wise for the 2 servers (2 x 338MB / 110G)
  * ~105,000 files per metadata server
  * ~35,000 dirs, spread almost evenly across "dentries" and "inodes"

client
  * sees 110G in /mnt/beegfs
  * ~100,000 files, ~2,000 dirs
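These numbers should be reproducible from the command line. A sketch, where the server-side data directories are assumptions (check storeStorageDirectory in beegfs-storage.conf and storeMetaDirectory in beegfs-meta.conf for the real paths):

  # client view, should match the XFS source: 110G, ~100,000 files, ~2,000 dirs
  du -sh /mnt/beegfs
  find /mnt/beegfs -type f | wc -l
  find /mnt/beegfs -type d | wc -l
  # on each storage server: chunk files live under the target's "chunks" dir
  du -sh /data/beegfs-storage
  find /data/beegfs-storage/chunks -type f | wc -l
  # on each meta server: metadata is split across "dentries" and "inodes"
  du -sh /data/beegfs-meta
  find /data/beegfs-meta/dentries /data/beegfs-meta/inodes -type d | wc -l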
[root@n7 ~]# beegfs-ctl --getentryinfo /mnt/beegfs/
Path:
Mount: /mnt/beegfs
EntryID: root
Metadata node: n38 [ID: 48]
Stripe pattern details:
+ Type: RAID0
+ Chunksize: 512K
+ Number of storage targets: desired: 4
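Note the stripe says desired 4 storage targets even though this test only has 2 storage servers; BeeGFS just uses as many targets as exist. If a different stripe is worth testing, it can be set per directory and applies to newly created files only. A sketch, with a hypothetical directory name:

  # stripe new files across 4 targets in 1M chunks (existing files keep their old pattern)
  beegfs-ctl --setpattern --chunksize=1m --numtargets=4 /mnt/beegfs/somedir
  # verify the new pattern
  beegfs-ctl --getentryinfo /mnt/beegfs/somedir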