**[[cluster:0|Back]]**
  
===== beeGFS =====
  
A document for me to recall and make notes on what I read in the manual pages and what needs testing.
  
Basically, during the Summer of 2016 I investigated whether the HPCC could afford enterprise-level storage. I wanted 99.999% uptime, snapshots, high availability and other goodies such as parallel NFS. NetApp came the closest but, eh, at $42K lots of other options show up. The story is detailed at [[cluster:149|The Storage Problem]].
  
This page is best read from the bottom up.
  
==== cluster idea ====
  
  * Storage servers: buy 2 now 4k+4k, then a 3rd in July 4k?

  * move test users over on 2 nodes, test; only change is $HOME

  * Home cluster
    * cottontail (mngt + admin gui)
    * 2-3 new storage units (+ snapshots/meta backup)
    * cottontail2 meta + n38-n45 meta, all mirrored

==== beegfs-admin-gui ====

  * ''cottontail:/usr/local/bin/beegfs-admin-gui'' (a launch sketch follows)
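The admon GUI is a Java application; a minimal launch sketch, assuming Java is present on cottontail and X11 forwarding from the workstation:

<code>
# sketch: open the admin GUI over an X11-forwarded session (binary path as noted above)
ssh -X hmeij@cottontail
/usr/local/bin/beegfs-admin-gui
</code>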
==== Resync Data ====

Testing out failover and deletion of data on the secondary, then a full resync (the commands used are sketched after this list):

  * started a full --resyncstorage --mirrorgroupid=101 --timestamp=0
  * got the --getentryinfo EntryID for a file in my /mnt/beegfs/home/path/to/file and did the same for the directory the file was located in
  * did a cat /mnt/beegfs/home/path/to/file on a client (just fine)
  * brought primary storage down
  * redid the cat above (it hangs for a couple of minutes, then displays the file content)
  * while the primary was down, ran rm -rf /mnt/beegfs/home/path/to/ removing the directory holding the file
  * a cat now generates the expected "file not found" error
  * brought the primary back up and started a full --resyncstorage --mirrorgroupid=101 --timestamp=0
  * the number of files and dirs discovered is, as expected, lower by the correct values
  * when I now search for the EntryIDs obtained before, they are gone from /data/beegfs_storage (as expected)
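A minimal sketch of the commands behind those steps; I am assuming the resync and entry-info options listed above are ''beegfs-ctl'' modes like the other commands on this page, and the path is just the placeholder used in the list:

<code>
# note the EntryID of a test file and of its directory before the failover (placeholder path)
beegfs-ctl --getentryinfo /mnt/beegfs/home/path/to/file
beegfs-ctl --getentryinfo /mnt/beegfs/home/path/to/

# read the file through a client mount
cat /mnt/beegfs/home/path/to/file

# once the primary is back up, full resync from primary to secondary
# (buddy group 101 as defined in the Mirror Data section)
beegfs-ctl --resyncstorage --mirrorgroupid=101 --timestamp=0
</code>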
Nice that it works.

==== Mirror Data ====

When not all storage servers are up, client mounts will fail. This is just an optional "sanity check" which the client performs when it is mounted. Disable this check by setting "sysMountSanityCheckMS=0" in beegfs-client.conf (sketch below). When the sanity check is disabled, the client mount will succeed even if no servers are running.
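Where that setting lives on our installs (the config path is the standard rpm location):

<code>
# /etc/beegfs/beegfs-client.conf
# 0 disables the mount-time sanity check, so the mount succeeds even with servers down
sysMountSanityCheckMS = 0
# then remount the client, e.g. service beegfs-client restart
</code>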
In order to be able to take a storage server offline without any impact, all content needs to be mirrored.

** Before **
  
<code>
[root@cottontail2 ~]# beegfs-df
METADATA SERVERS:
TargetID        Pool        Total         Free    %      ITotal       IFree    %
========        ====        =====         ====    =      ======       =====    =
      48         low      29.5GiB      23.3GiB  79%        1.9M        1.5M  82%
      49         low      29.5GiB      23.1GiB  78%        1.9M        1.5M  82%
     250         low     122.3GiB     116.7GiB  95%        7.8M        7.6M  98%

STORAGE TARGETS:
TargetID        Pool        Total         Free    %      ITotal       IFree    %
========        ====        =====         ====    =      ======       =====    =
   13601         low     291.4GiB      50.6GiB  17%       18.5M       18.4M 100%
   21701         low     291.2GiB      61.8GiB  21%       18.5M       15.8M  85%
</code>
  
** Setup mirroring **
  
<code>

# define buddygroup - these are storage target IDs
[root@n7 ~]# beegfs-ctl --addmirrorgroup --primary=13601 --secondary=21701 --groupid=101
Mirror buddy group successfully set: groupID 101 -> target IDs 13601, 21701

[root@n7 ~]# beegfs-ctl --listmirrorgroups
     BuddyGroupID   PrimaryTargetID SecondaryTargetID
     ============   =============== =================
              101             13601             21701

# enable mirroring for data by directory - does numtargets need to be set to the max nr of storage servers?
# changed on 11/02/2016:
[root@n7 ~]# beegfs-ctl --setpattern --buddymirror /mnt/beegfs/home --chunksize=512k
[root@n7 ~]# beegfs-ctl --setpattern --buddymirror /mnt/beegfs/hmeij-mirror-data --chunksize=512k --numtargets=2
New chunksize: 524288
New number of storage targets: 2
Path: /hmeij-mirror-data
Mount: /mnt/beegfs

# copy some contents in (~hmeij is 10G)
# nb: --bwlimit takes a rate in KB/s
[root@n7 ~]# rsync -vac --bwlimit /home/hmeij /mnt/beegfs/hmeij-mirror-data/

</code>
** After **
  
<code>

[root@n7 ~]# beegfs-df
  
METADATA SERVERS: (almost no changes...)
STORAGE TARGETS: (each target less circa 10G)
TargetID        Pool        Total         Free    %      ITotal       IFree    %
========        ====        =====         ====    =      ======       =====    =
   13601         low     291.4GiB      40.7GiB  14%       18.5M       18.4M  99%
   21701         low     291.2GiB      51.9GiB  18%       18.5M       15.8M  85%
  
# let's find an object
[root@n7 ~]# beegfs-ctl --getentryinfo /mnt/beegfs/hmeij-mirror-data/hmeij/xen/bvm1.img
Path: /hmeij-mirror-data/hmeij/xen/bvm1.img
Mount: /mnt/beegfs
EntryID: 178-581797C8-30
Metadata node: n38 [ID: 48]
Stripe pattern details:
+ Type: Buddy Mirror
+ Chunksize: 512K
+ Number of storage targets: desired: 2; actual: 1
+ Storage mirror buddy groups:
  + 101

# original
[root@n7 ~]# ls -lh /mnt/beegfs/hmeij-mirror-data/hmeij/xen/bvm1.img
-rwxr-xr-x 1 hmeij its 4.9G 2014-04-07 13:39 /mnt/beegfs/hmeij-mirror-data/hmeij/xen/bvm1.img

# copy on primary
[root@petaltail chroots]# ls -lh /var/chroots/data/beegfs_storage/buddymir/u2018/5817/9/60-58179513-30/178-581797C8-30
-rw-rw-rw- 1 root root 4.9G Apr  7  2014 /var/chroots/data/beegfs_storage/buddymir/u2018/5817/9/60-58179513-30/178-581797C8-30

# copy on secondary
[root@swallowtail ~]# find /data/beegfs_storage -name 178-581797C8-30
/data/beegfs_storage/buddymir/u2018/5817/9/60-58179513-30/178-581797C8-30
[root@swallowtail ~]# ls -lh /data/beegfs_storage/buddymir/u2018/5817/9/60-58179513-30/178-581797C8-30
-rw-rw-rw- root root 4.9G Apr  7  2014 /data/beegfs_storage/buddymir/u2018/5817/9/60-58179513-30/178-581797C8-30

# seems to work, notice the ''buddymir'' directory on primary/secondary
  
</code>
  
Here is an important note, from the community list:
  
  * "actual: 1" means "1 buddy mirror group"
    * so the important line that tells you that this file is mirrored is "Type: Buddy Mirror"
  * "desired: 2" means you would like to stripe across 2 buddy groups (targets are buddy groups here)
  
Another note: I changed the paths for mirrormd and buddymirror to ''/mnt/beegfs/home'' and now I see connectivity data for meta node cottontail2, which was previously missing because I was working at the subdirectory level.
  
<code>
  
[root@cottontail2 ~]# beegfs-net
meta_nodes
=============
cottontail2 [ID: 250]
   Connections: RDMA: 1 (10.11.103.250:8005);
  
[root@cottontail2 ~]# beegfs-ctl --listnodes --nodetype=meta --details
cottontail2 [ID: 250]
   Ports: UDP: 8005; TCP: 8005
   Interfaces: ib1(RDMA) ib1(TCP) eth1(TCP) eth0(TCP)
               ^^^
  
</code>

==== Quota ====

  * [[http://www.beegfs.com/wiki/EnableQuota|External Link]]
  * set up XFS
  * enable beegfs quota on all clients
  * enforce quota
    * set quotas using a text file
    * seems straightforward
  * do BEFORE populating the XFS file systems (a rough sketch follows this list)
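A rough sketch of that sequence as I read the EnableQuota page; the config keys, the ''beegfs-ctl'' options and the fstab entry below are my notes and should be double checked against that link:

<code>
# storage targets: mount the underlying XFS with quota accounting BEFORE populating it
# (device and mount point are examples)
/dev/sdb1  /data  xfs  defaults,uquota,gquota  0 0

# /etc/beegfs/beegfs-client.conf  (on all clients)
quotaEnabled = true

# /etc/beegfs/beegfs-mgmtd.conf   (to actually enforce limits)
quotaEnableEnforcement = true

# set limits per uid/gid; the EnableQuota page also describes feeding in a text file of users
beegfs-ctl --setquota --uid hmeij --sizelimit=500G --inodelimit=1M
</code>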
==== Mirror Meta ====

Definitely want the meta content mirrored; that way I can use the n38-n45 nodes with local 15K disk, plus maybe cottontail2 (raid 1 with hot and cold spare).

Content mirroring will require more disk space. Perhaps snapshots to another node are more useful; that also solves the backup issue.

<code>

# enable meta mirroring, directory based
# changed on 11/04/2016: used --createdir to make this home.
[root@n7 ~]# beegfs-ctl --mirrormd /mnt/beegfs/home
[root@n7 ~]# beegfs-ctl --mirrormd /mnt/beegfs/hmeij-mirror
Mount: '/mnt/beegfs'; Path: '/hmeij-mirror'
Operation succeeded.

# put some new content in
[root@n7 ~]# rsync -vac /home/hmeij/iozone-tests /mnt/beegfs/hmeij-mirror/

# lookup meta tag
[root@n7 ~]# beegfs-ctl --getentryinfo /mnt/beegfs/hmeij-mirror/iozone-tests/current.tar
Path: /hmeij-mirror/iozone-tests/current.tar
Mount: /mnt/beegfs
EntryID: 3-581392E1-31

# find
[root@sharptail ~]# ssh n38 find /data/beegfs_meta -name 3-581392E1-31
/data/beegfs_meta/mirror/49.dentries/54/6C/0-581392F0-30/#fSiDs#/3-581392E1-31
                  ^^^^^^
# and find
[root@sharptail ~]# ssh n39 find /data/beegfs_meta -name 3-581392E1-31
/data/beegfs_meta/dentries/54/6C/0-581392F0-30/#fSiDs#/3-581392E1-31

# seems to work

</code>
Writing some initial content to both storage and meta servers; vanilla out-of-the-box beegfs seems to balance the writes across both equally. Here are some stats.

==== /mnt/beegfs/ ====

  * Source content: 110G in XFS with ~100,000 files in ~2,000 dirs
    * /home/hmeij (mix of files, nothing large) plus
    * /home/fstarr/filler (lots of tiny files)

  * File content spread across 2 storage servers
    * petaltail:/var/chroots/data/beegfs_storage
    * swallowtail:/data/beegfs_storage
    * 56G used in beegfs-storage per storage server
    * ~92,400 files per storage server
    * ~1,400 dirs per storage server, mostly in the "chunks" dir

  * Meta content spread across 2 meta servers (n38 and n39)
    * 338MB per beegfs-meta server, so 0.006% space-wise for 2 servers
    * ~105,000 files per metadata server
    * ~35,000 dirs spread almost evenly across "dentries" and "inodes"

  * Clients (n7 and n8) see it all in /mnt/beegfs
    * 110G in /mnt/beegfs
    * ~100,000 files
    * ~2,000 dirs

Looks like:

  * NOTE: "failed to mount /mnt/beegfs" is the result of out-of-space storage servers.
<code>

# file content

[root@swallowtail ~]# ls -lR /data/beegfs_storage/chunks/u0/57E4/2/169-57E42E75-31
/data/beegfs_storage/chunks/u0/57E4/2/169-57E42E75-31:
total 672
-rw-rw-rw- 1 root root 289442 Jun 26  2015 D8-57E42E89-30
-rw-rw-rw- 1 root root   3854 Jun 26  2015 D9-57E42E89-30
-rw-rw-rw- 1 root root  16966 Jun 26  2015 DA-57E42E89-30
-rw-rw-rw- 1 root root  65779 Jun 26  2015 DB-57E42E89-30
-rw-rw-rw- 1 root root  20562 Jun 26  2015 DF-57E42E89-30
-rw-rw-rw- 1 root root 259271 Jun 26  2015 E0-57E42E89-30
-rw-rw-rw- 1 root root    372 Jun 26  2015 E1-57E42E89-30

[root@petaltail ~]# ls -lR /var/chroots/data/beegfs_storage/chunks/u0/57E4/2/169-57E42E75-31
/var/chroots/data/beegfs_storage/chunks/u0/57E4/2/169-57E42E75-31:
total 144
-rw-rw-rw- 1 root root     40 Jun 26  2015 DC-57E42E89-30
-rw-rw-rw- 1 root root  40948 Jun 26  2015 DD-57E42E89-30
-rw-rw-rw- 1 root root 100077 Jun 26  2015 DE-57E42E89-30

# meta content

[root@sharptail ~]# ssh n38 find /data/beegfs_meta -name 169-57E42E75-31
/data/beegfs_meta/inodes/6A/7E/169-57E42E75-31
/data/beegfs_meta/dentries/6A/7E/169-57E42E75-31

[root@sharptail ~]# ssh n39 find /data/beegfs_meta -name 169-57E42E75-31
(none, no mirror)

</code>
==== Tuning ====

  * global interfaces file, preference ib0 -> eth1 -> eth0 (see the sketch below)
    * connInterfacesFile = /home/tmp/global/beegfs.connInterfacesFile
    * set in /etc/beegfs/beegfs-[storage|client|meta|admon|mgmtd].conf and restart services
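The interfaces file is simply one interface name per line, most preferred first; a sketch with the order noted above:

<code>
# /home/tmp/global/beegfs.connInterfacesFile
ib0
eth1
eth0
</code>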
  * backup beeGFS EA metadata, see faq (a sketch follows this list)
    * attempt a restore
    * or just snapshot
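beegfs-meta keeps its information in extended attributes of small files, so any backup has to be EA aware; a sketch only (destination host and path are made up, and the faq lists caveats such as hard links):

<code>
# copy the meta directory elsewhere, preserving hard links (-H) and extended attributes (-X)
rsync -aHX /data/beegfs_meta/ sharptail:/snapshots/n38_beegfs_meta/
</code>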
  * storage server tuning
    * set on cottontail on sdb, both values were 128 (seems to help -- late summer 2016)
    * echo 4096 > /sys/block/sd?/queue/nr_requests
    * echo 4096 > /sys/block/sd?/queue/read_ahead_kb
    * set on cottontail, was 90112; added to /etc/rc.local
    * echo 262144 > /proc/sys/vm/min_free_kbytes
  * do same on greentail? (done late fall 2016)
    * all original values same as cottontail (all files)
    * set on c1d1 thru c1d6
  * do same on sharptail?
    * no such values for sdb1
    * can only find min_free_kbytes, same value as cottontail
  * stripe and chunk size (current root directory pattern below)

<code>

[root@n7 ~]# beegfs-ctl --getentryinfo /mnt/beegfs/
Path:
Mount: /mnt/beegfs
EntryID: root
Metadata node: n38 [ID: 48]
Stripe pattern details:
+ Type: RAID0
+ Chunksize: 512K
+ Number of storage targets: desired: 4

</code>
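If the default ever needs changing, ''--setpattern'' (used above for the buddy mirror) also works on a plain directory; the values here are only an example, not something applied on this cluster:

<code>
# example only: 1M chunks striped across 2 targets for the root directory
beegfs-ctl --setpattern /mnt/beegfs --chunksize=1m --numtargets=2
</code>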
  * The cache type can be set in the client config file (/etc/beegfs/beegfs-client.conf).
    * buffered is the default, caching a few 100k per file

  * tuneNumWorkers in all the /etc/beegfs/beegfs-*.conf files (see the sketch below)
    * for meta, storage and clients ...
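Where those two keys sit on a client; the worker count is a placeholder to be tuned, not a recommendation:

<code>
# /etc/beegfs/beegfs-client.conf
tuneFileCacheType = buffered   # default cache type, buffers a few 100k per file
tuneNumWorkers    = 8          # placeholder; per the note above the same key exists in beegfs-meta.conf and beegfs-storage.conf
</code>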
  * metadata server tuning
    * read in more detail

==== Installation ====

  * made easy [[http://www.beegfs.com/wiki/ManualInstallWalkThrough|External Link]] (rough outline below)
  * rpms pulled from the repository via petaltail into ''greentail:/sanscratch/tmp/beegfs''
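A rough outline of how the pieces map onto our hosts, following that walkthrough; the ''beegfs-setup-*'' flags and the mgmtd path are from memory and should be verified against the walkthrough before reuse:

<code>
# management service on cottontail
yum install beegfs-mgmtd
/opt/beegfs/sbin/beegfs-setup-mgmtd -p /data/beegfs_mgmtd

# metadata services on n38, n39
yum install beegfs-meta
/opt/beegfs/sbin/beegfs-setup-meta -p /data/beegfs_meta -m cottontail

# storage services on petaltail, swallowtail
yum install beegfs-storage
/opt/beegfs/sbin/beegfs-setup-storage -p /data/beegfs_storage -m cottontail

# clients such as n7, n8
yum install beegfs-client beegfs-helperd beegfs-utils
/opt/beegfs/sbin/beegfs-setup-client -m cottontail
</code>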
<code>

[root@cottontail ~]# ssh n7 beegfs-net

mgmt_nodes
=============
cottontail [ID: 1]
   Connections: TCP: 1 (10.11.103.253:8008);

meta_nodes
=============
n38 [ID: 48]
   Connections: TCP: 1 (10.11.103.48:8005);
n39 [ID: 49]
   Connections: TCP: 1 (10.11.103.49:8005);

storage_nodes
=============
swallowtail [ID: 136]
   Connections: TCP: 1 (192.168.1.136:8003 [fallback route]);
petaltail [ID: 217]
   Connections: TCP: 1 (192.168.1.217:8003 [fallback route]);

</code>
\\
**[[cluster:0|Back]]**