beeGFS

A document where I recall and make notes on what I read in the manual pages and what needs testing.

Basically, during the summer of 2016 I investigated whether the HPCC could afford enterprise-level storage. I wanted 99.999% uptime, snapshots, high availability, and other goodies such as parallel NFS. NetApp came the closest but, eh, at $42K lots of other options show up. The story is detailed at The Storage Problem.

This page is best read from the bottom up.

Tuning

  • global interfaces file ib0→eth1→eth0
    • priority order, seems useful
    • set in a file somewhere (see the interfaces sketch below)
  • backup beeGFS EA metadata, see faq (and the backup sketch below)
    • attempt a restore
    • or just snapshot
  • storage server tuning
    • set on cottontail on sdb, both values were 128 (seems to help – late summer 2016)
    • echo 4096 > /sys/block/sd?/queue/nr_requests
    • echo 4096 > /sys/block/sd?/queue/read_ahead_kb
    • set on cottontail (original value was 90112), persisted via /etc/rc.local (see the boot sketch below)
    • echo 262144 > /proc/sys/vm/min_free_kbytes
  • do same on greentail?
    • all original values same as cottontail (all files)
    • set on c1d1 thru c1d6
  • do same on sharptail?
    • no such values for sdb1
    • can only find min_free_kbytes, same value as cottontail
  • tuneNumWorkers in all the /etc/beegfs/beegfs-*.conf files (see the config sketch below)
    • for meta, storage and clients …
  • metadata server tuning
    • read in more detail
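
The interface priority is set with the connInterfacesFile option. A minimal sketch, assuming the list lives at /etc/beegfs/connInterfacesFile (any path works, the conf files just have to point at it); interfaces go one per line, most preferred first:

  # /etc/beegfs/beegfs-client.conf (same option exists in the meta and storage confs)
  connInterfacesFile = /etc/beegfs/connInterfacesFile

  # /etc/beegfs/connInterfacesFile -- priority order, top wins
  ib0
  eth1
  eth0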
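For the EA metadata backup, a minimal sketch assuming GNU tar (1.27 or newer for xattr support) and a hypothetical metadata path of /data/beegfs/meta; the point is that beeGFS keeps its metadata in extended attributes, so a plain tar or rsync without xattr flags silently drops it:

  # backup with the meta service stopped, preserving extended attributes
  tar czvf /backup/beegfs-meta.tar.gz --xattrs -C /data/beegfs/meta .

  # restore test (into an empty target), again with xattrs
  tar xzvf /backup/beegfs-meta.tar.gz --xattrs -C /data/beegfs/meta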
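The /sys and /proc values above reset at boot, hence the /etc/rc.local note. A sketch of the rc.local lines using the cottontail values (the sd? glob catches all the data disks, adjust per host):

  # /etc/rc.local -- reapply storage server tuning at boot
  for q in /sys/block/sd?/queue; do
      echo 4096 > $q/nr_requests
      echo 4096 > $q/read_ahead_kb
  done
  echo 262144 > /proc/sys/vm/min_free_kbytes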
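For tuneNumWorkers, a sketch of the edit; the option exists in the meta, storage and client conf files, and 32 is only a placeholder value to test, not a recommendation. The services need a restart to pick up the change.

  # /etc/beegfs/beegfs-meta.conf (also beegfs-storage.conf and beegfs-client.conf)
  tuneNumWorkers = 32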

