This page is best read from the bottom up.

==== cluster idea ====

idea: buy 2 now (4k+4k), then a 3rd in July (4k)?

  * move test users over on 2 nodes, test; only change is $HOME
  * ctt (mngt + admin GUI), 2 new units storage (+snapshots/meta backup), ctt2 meta + n38/n39 backup meta
  * make ctt2 master meta node? how?

==== Mirror Meta ====

<code>

# enable metadata mirroring for this directory
[root@n7 ~]# beegfs-ctl --mirrormd /mnt/beegfs/hmeij-mirror
Mount: '/mnt/beegfs'; Path: '/hmeij-mirror'
Operation succeeded.

# put some new content in
[root@n7 ~]# rsync -vac /home/hmeij/iozone-tests /mnt/beegfs/hmeij-mirror/

# look up the meta tag (EntryID)
[root@n7 ~]# beegfs-ctl --getentryinfo /mnt/beegfs/hmeij-mirror/iozone-tests/current.tar
Path: /hmeij-mirror/iozone-tests/current.tar
Mount: /mnt/beegfs
EntryID: 3-581392E1-31

# find it on the first metadata server
[root@sharptail ~]# ssh n38 find /data/beegfs_meta -name 3-581392E1-31
/data/beegfs_meta/mirror/49.dentries/54/6C/0-581392F0-30/#fSiDs#/3-581392E1-31

# and on its mirror
[root@sharptail ~]# ssh n39 find /data/beegfs_meta -name 3-581392E1-31
/data/beegfs_meta/dentries/54/6C/0-581392F0-30/#fSiDs#/3-581392E1-31

# seems to work

</code>
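A quick way to go beyond "the file exists on both nodes" is to checksum both copies and compare. This is a sketch building on the find output above, not a BeeGFS tool; it assumes passwordless ssh from sharptail to n38/n39:

<code>

# hedged sketch: checksum the entry on both metadata servers and compare
[root@sharptail ~]# ssh n38 md5sum /data/beegfs_meta/mirror/49.dentries/54/6C/0-581392F0-30/#fSiDs#/3-581392E1-31
[root@sharptail ~]# ssh n39 md5sum /data/beegfs_meta/dentries/54/6C/0-581392F0-30/#fSiDs#/3-581392E1-31
# matching checksums would indicate the mirror copy is identical

</code>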
==== /mnt/beegfs/ ====

  * Source content: 110G in XFS with ~100,000 files in ~2,000 dirs
    * /home/hmeij (mix of files, nothing large) plus
    * /home/fstarr/filler (lots of tiny files)

  * File content spread across 2 storage servers
    * petaltail:/var/chroot/data/beegfs_storage
    * swallowtail:/data/beegfs_storage
    * 56G used in beegfs_storage per storage server
    * ~92,400 files per storage server
    * ~1,400 dirs per storage server, mostly in the "chunks" dir

  * Meta content spread across 2 meta servers (n38 and n39)
    * 338MB per beegfs-meta server, so ~0.6% space wise for 2 servers
    * ~105,000 files per metadata server
    * ~35,000 dirs, spread almost evenly across "dentries" and "inodes"

  * Clients (n7 and n8) see 110G in /mnt/beegfs
    * ~100,000 files
    * ~2,000 dirs

Looks like:

<code>

# file content

[root@swallowtail ~]# ls -lR /data/beegfs_storage/chunks/u0/57E4/2/169-57E42E75-31
/data/beegfs_storage/chunks/u0/57E4/2/169-57E42E75-31:
total 672
-rw-rw-rw- 1 root root 289442 Jun 26  2015 D8-57E42E89-30
-rw-rw-rw- 1 root root   3854 Jun 26  2015 D9-57E42E89-30
-rw-rw-rw- 1 root root  16966 Jun 26  2015 DA-57E42E89-30
-rw-rw-rw- 1 root root  65779 Jun 26  2015 DB-57E42E89-30
-rw-rw-rw- 1 root root  20562 Jun 26  2015 DF-57E42E89-30
-rw-rw-rw- 1 root root 259271 Jun 26  2015 E0-57E42E89-30
-rw-rw-rw- 1 root root    372 Jun 26  2015 E1-57E42E89-30

[root@petaltail ~]# ls -lR /var/chroots/data/beegfs_storage/chunks/u0/57E4/2/169-57E42E75-31
/var/chroots/data/beegfs_storage/chunks/u0/57E4/2/169-57E42E75-31:
total 144
-rw-rw-rw- 1 root root     40 Jun 26  2015 DC-57E42E89-30
-rw-rw-rw- 1 root root  40948 Jun 26  2015 DD-57E42E89-30
-rw-rw-rw- 1 root root 100077 Jun 26  2015 DE-57E42E89-30

# meta content

[root@sharptail ~]# ssh n38 find /data/beegfs_meta -name 169-57E42E75-31
/data/beegfs_meta/inodes/6A/7E/169-57E42E75-31
/data/beegfs_meta/dentries/6A/7E/169-57E42E75-31

[root@sharptail ~]# ssh n39 find /data/beegfs_meta -name 169-57E42E75-31
# (none, no mirror)

</code>
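The file and directory tallies in the list above can be reproduced with a small helper. This is a hypothetical convenience function, not part of BeeGFS; point it at /mnt/beegfs on a client, or at a beegfs_storage or beegfs_meta directory on a server:

```shell
# count_tree: print the file and directory counts for a tree,
# the same tallies listed in the bullet points above
count_tree() {
    local tree=$1
    local files dirs
    files=$(find "$tree" -type f | wc -l | tr -d ' ')
    dirs=$(find "$tree" -type d | wc -l | tr -d ' ')
    echo "$tree: $files files, $dirs dirs"
}

# e.g. on a client:       count_tree /mnt/beegfs
# or on a storage server: count_tree /data/beegfs_storage
```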
==== Tuning ====

  * global interfaces files ib0->eth1->eth0
    * priority order, seems useful
    * set in a file somewhere

    * set on cottontail, was 90112 + /etc/rc.local
    * echo 262144 > /proc/sys/vm/min_free_kbytes
  * do same on greentail? (done late fall 2016)
    * all original values same as cottontail (all files)
    * set on c1d1 thru c1d6
  * do same on sharptail?
    * no such values for sdb1
    * can only find min_free_kbytes, same value as cottontail
  * stripe and chunk size

<code>

[root@n7 ~]# beegfs-ctl --getentryinfo /mnt/beegfs/
Path:
Mount: /mnt/beegfs
EntryID: root
Metadata node: n38 [ID: 48]
Stripe pattern details:
+ Type: RAID0
+ Chunksize: 512K
+ Number of storage targets: desired: 4

</code>
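The stripe pattern can be changed per directory with beegfs-ctl --setpattern. A sketch, since the exact flags can vary by BeeGFS release; the directory name is a placeholder, and the change only affects files created afterwards:

<code>

# hedged sketch: 1M chunks across 4 storage targets for new files in this dir
[root@n7 ~]# beegfs-ctl --setpattern --chunksize=1m --numtargets=4 /mnt/beegfs/somedir

</code>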
  * The cache type can be set in the client config file (/etc/beegfs/beegfs-client.conf)
    * buffered is the default, a few 100K per file

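Switching the client cache mode is then a one-line edit. A sketch, assuming tuneFileCacheType is the relevant option in the installed BeeGFS release:

<code>

# /etc/beegfs/beegfs-client.conf (fragment)
# buffered = small per-file buffers (default); native = kernel page cache
tuneFileCacheType = buffered

</code>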
  * tuneNumWorkers in all /etc/beegfs/beegfs-*.conf files
    * for meta, storage and clients ...

  * metadata server tuning
    * read in more detail
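The worker settings might look like the fragment below. The numbers are placeholders, not recommendations; defaults differ per daemon and release:

<code>

# fragments; same option name in each of
# beegfs-meta.conf, beegfs-storage.conf, beegfs-client.conf
tuneNumWorkers = 32   # meta: more workers help small-file/metadata load
tuneNumWorkers = 12   # storage: more workers help many concurrent clients
tuneNumWorkers = 8    # client: parallel requests per mount

</code>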
  
==== Installation ====

  * made easy [[http://www.beegfs.com/wiki/ManualInstallWalkThrough|External Link]]

\\
**[[cluster:0|Back]]**
cluster/151.txt · Last modified: 2016/12/06 20:14 by hmeij07