A document for me to recall and make notes of what I read in the manual pages and what needs testing.

Basically, during the Summer of 2016 I investigated whether the HPCC could afford enterprise-level storage. I wanted 99.999% uptime, snapshots, high availability and other goodies such as parallel NFS. Netapp came the closest but, eh, at $42K lots of other options show up. The story is detailed at [[cluster:149|The Storage Problem]].

This page is best read from the bottom up.

==== cluster idea ====

idea: buy 2 units now (4k + 4k), then a 3rd in July (4k)?
  move test users over on 2 nodes and test; the only change is $HOME
  ctt (mngt + admin GUI), 2 new units of storage (+ snapshots/meta backup), ctt2 meta + n38/n39 backup meta

  make ctt2 the master meta node? how?

==== /mnt/beegfs/ ====

source: 110G in XFS with ~100,000 files in ~2,000 dirs
  /home/hmeij (mix of files, nothing large) + /home/fstarr/filler (lots of tiny files)

storage spread across 2 storage servers
  56G in beegfs-storage per storage server
  ~92,400 files per storage server
  ~1,400 dirs per storage server, mostly in the "chunks" dir

meta spread across 2 meta servers
  338MB per beegfs-meta server, so roughly 0.6% of the data size across the 2 servers
  ~105,000 files per metadata server
  ~35,000 dirs, spread almost evenly across "dentries" and "inodes"

client sees 110G in /mnt/beegfs
  ~100,000 files
  ~2,000 dirs

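The numbers above come from plain filesystem checks; a quick sketch of the kind of commands that reproduce them, assuming the storage and metadata stores live under /data/beegfs-storage and /data/beegfs-meta (those paths are my assumption, adjust to the real ones):

<code>
# on each storage server: space and file/dir counts in the chunk store
du -sh /data/beegfs-storage
find /data/beegfs-storage -type f | wc -l
find /data/beegfs-storage -type d | wc -l

# on each metadata server: same for the dentries/inodes store
du -sh /data/beegfs-meta
find /data/beegfs-meta -type f | wc -l
find /data/beegfs-meta -type d | wc -l

# on a client: what the parallel file system presents
df -h /mnt/beegfs
find /mnt/beegfs -type f | wc -l
</code>
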
==== Tuning ====

  * global interfaces file: ib0 -> eth1 -> eth0
    * priority order, seems useful
    * set in a file somewhere (a sketch follows below)

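A sketch of how that priority list is usually wired up: a plain text file with one interface per line, highest priority first, referenced by the connInterfacesFile setting. The file name and location below are my assumption:

<code>
# /etc/beegfs/connInterfacesFile (assumed path) -- one interface per line,
# highest priority first
ib0
eth1
eth0
</code>

Then each service is pointed at it, e.g. in /etc/beegfs/beegfs-client.conf (the same key exists in the meta, storage and mgmtd configs):

<code>
connInterfacesFile = /etc/beegfs/connInterfacesFile
</code>
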
  * back up the beeGFS EA metadata, see the FAQ (a sketch follows below)
    * attempt a restore
    * or just snapshot

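A minimal sketch of the EA-aware backup the FAQ has in mind, assuming the metadata store is /data/beegfs-meta and GNU tar is new enough to support --xattrs (both assumptions):

<code>
# stop the meta daemon (or snapshot the volume) so the EAs are consistent
service beegfs-meta stop

# back up, preserving the extended attributes that hold the BeeGFS metadata
cd /data/beegfs-meta
tar czvf /backup/beegfs-meta.tar.gz --xattrs .

# test a restore into a scratch location
mkdir /tmp/meta-restore
tar xzvf /backup/beegfs-meta.tar.gz --xattrs -C /tmp/meta-restore

service beegfs-meta start
</code>
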
  * storage server tuning
    * set on cottontail on sdb; both values were 128 (seems to help -- late summer 2016)
    * echo 4096 > /sys/block/sd?/queue/nr_requests
    * echo 4096 > /sys/block/sd?/queue/read_ahead_kb
    * set on cottontail (was 90112) and added to /etc/rc.local; a persistence sketch follows below
    * echo 262144 > /proc/sys/vm/min_free_kbytes
  * do the same on greentail? (done late fall 2016)
    * all original values the same as cottontail (all files)
    * set on c1d1 thru c1d6
  * do the same on sharptail?
    * no such values for sdb1
    * can only find min_free_kbytes, same value as cottontail
  * stripe and chunk size (current layout shown in the ''beegfs-ctl'' output below)

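To keep those block device and VM settings across reboots, something like the lines below could go into /etc/rc.local on each storage server (a sketch only, mirroring the values in the bullets above):

<code>
# deeper request queue and larger read-ahead on the BeeGFS data disks;
# adjust the sd? glob so the OS disk is not swept up in it
for q in /sys/block/sd?/queue; do
    echo 4096 > $q/nr_requests
    echo 4096 > $q/read_ahead_kb
done

# keep more memory free for bursts of incoming data (was 90112)
echo 262144 > /proc/sys/vm/min_free_kbytes
</code>
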
<code>

[root@n7 ~]# beegfs-ctl --getentryinfo /mnt/beegfs/
Path:
Mount: /mnt/beegfs
EntryID: root
Metadata node: n38 [ID: 48]
Stripe pattern details:
+ Type: RAID0
+ Chunksize: 512K
+ Number of storage targets: desired: 4

</code>
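
If the chunk size or target count ever needs changing, beegfs-ctl can set a new pattern on a directory; it only affects files created afterwards. The directory and values here are just an example:

<code>
# stripe new files in this directory across 4 targets with 1M chunks
beegfs-ctl --setpattern --chunksize=1m --numtargets=4 /mnt/beegfs/scratch

# verify
beegfs-ctl --getentryinfo /mnt/beegfs/scratch
</code>
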
  * The cache type can be set in the client config file (/etc/beegfs/beegfs-client.conf)
    * buffered is the default; it caches a few 100k per file

  * tuneNumWorkers in all the /etc/beegfs/beegfs-*.conf files (see the excerpt below)
    * for meta, storage and clients ...

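A sketch of where those two knobs live, using the client config as the example (the worker count is an arbitrary value, not a recommendation):

<code>
# /etc/beegfs/beegfs-client.conf (excerpt)

# buffered is the default; caches a few hundred kB per file
tuneFileCacheType = buffered

# worker threads; 8 is just an example value, size it to the node
tuneNumWorkers = 8
</code>
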
  * metadata server tuning
    * read in more detail

==== Installation ====

  * made easy [[http://www.beegfs.com/wiki/ManualInstallWalkThrough|External Link]]
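
For my own reference, a very rough sketch of the package-based route that walkthrough takes on a RHEL/CentOS node, after the BeeGFS yum repo file is in place; package names, setup flags, paths, IDs and the mgmt host below are from memory and should be checked against the walkthrough:

<code>
# daemons and client bits
yum install beegfs-mgmtd beegfs-meta beegfs-storage
yum install beegfs-client beegfs-helperd beegfs-utils

# point each daemon at its data directory and the management host
# (paths, service/target IDs and "mgmt-host" are placeholders)
/opt/beegfs/sbin/beegfs-setup-mgmtd -p /data/beegfs-mgmtd
/opt/beegfs/sbin/beegfs-setup-meta -p /data/beegfs-meta -s 2 -m mgmt-host
/opt/beegfs/sbin/beegfs-setup-storage -p /data/beegfs-storage -s 3 -i 301 -m mgmt-host
/opt/beegfs/sbin/beegfs-setup-client -m mgmt-host

service beegfs-mgmtd start
service beegfs-meta start
service beegfs-storage start
service beegfs-helperd start
service beegfs-client start
</code>
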
\\
**[[cluster:0|Back]]**