Notes. Mainly for me but might be useful/of interest to users.
# from outside via VPN
$ ssh hpc21@hpcstore.wesleyan.edu
hpc21@hpcstore.wesleyan.edu's password:
FreeBSD 11.2-STABLE (TrueNAS.amd64)
(banner snip ...)
Welcome to TrueNAS

# note we ended up on node "B"
[hpc21@hpcstore2 ~]$ pwd
/mnt/tank/zfshomes/hpc21
[hpc21@hpcstore2 ~]$ echo $HOME
/mnt/tank/zfshomes/hpc21

# quota check
[hpc21@hpcstore2 ~]$ zfs userspace tank/zfshomes | egrep -i "quota|$USER"
TYPE        NAME   USED  QUOTA
POSIX User  hpc21  282K   500G

# from inside HPCC with ssh keys properly set up
[hpc21@cottontail ~]$ ssh hpcstore
Last login: Mon Mar 23 10:58:27 2020 from 129.133.52.222
[hpc21@cottontail ~]$ echo $HOME
/zfshomes/hpc21
[hpc21@hpcstore2 ~]$ df -h .
Filesystem     Size  Used  Avail  Capacity  Mounted on
tank/zfshomes  177T  414G   177T        0%  /mnt/tank/zfshomes
[hmeij@ThisPC]$ rsync -vac --dry-run --whole-file --bwlimit=4096 \
    c:\Users\hmeij\ hpcstore:/mnt/tank/zfshomes/hmeij/
sending incremental file list
...
# windows command line
C:\Users\hmeij07>net use W: \\hpcstore.wesleyan.edu\hmeij /user:hmeij
Enter the password for 'hmeij' to connect to 'hpcstore.wesleyan.edu':
The command completed successfully.

# or ThisPC > Map Network Drive
\\hpcstore.wesleyan.edu\username
# user is hpcc username, password is hpcc password
netcli (change/reset the root password here via netcli)
High Availability. Two controllers, hpcstore1 (also known as A) and hpcstore2 (also known as B). The virtual IP hpcstore.wesleyan.edu floats back and forth seamlessly (tested; some protocols will lose connectivity). In a split-brain situation (no response, both controllers think they are primary), disconnect one controller from power, then reboot. Then reconnect and wait a few minutes for the HA icon to turn green when the controller comes back online.
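A quick way to see which controller currently holds the virtual IP (sketch; any account with SSH access works, hpc21 is just the one used above):

# hostname of whichever controller answers on the virtual IP
ssh hpc21@hpcstore.wesleyan.edu hostname
# expect hpcstore1 or hpcstore2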
An update goes like this and is not an interruption. Check for and apply updates. They are applied to the partner, and the partner is rebooted. When the partner comes back up it becomes the primary. Then apply updates to the other partner; when it comes back up it remains the secondary node. Check that both nodes run the same version.
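To confirm both nodes run the same version after an update cycle, something like this should work (assuming direct SSH to each controller by hostname):

# compare TrueNAS versions on both controllers
ssh root@hpcstore1 cat /etc/version
ssh root@hpcstore2 cat /etc/version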
Allowed for large content transfers using scp or sftp, or just checking things out.
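For example (hypothetical file name; -l caps bandwidth in Kbit/s):

# large transfer over the VPN, throttled
scp -l 8192 bigdata.tar hpc21@hpcstore.wesleyan.edu:/mnt/tank/zfshomes/hpc21/
# or interactive
sftp hpc21@hpcstore.wesleyan.edu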
TODO: rsync?
Home directories are located in /mnt/tank/zfshomes. When users get cut over, their location is updated in the /etc/passwd file and $HOME becomes /zfshomes/username, so we can keep track of who has moved. This is followed by an rsync process that copies from the TrueNAS/ZFS appliance back to sharptail:/home.
TODO: write script (rough sketch below).
TODO: add disks to old sharptail:/home, enlarge and merge LVMs.
TODO: backup target
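Rough sketch of the rsync-to-sharptail script mentioned above (the USERS list and run location are assumptions; flags mirror the user-creation rsync below):

#!/bin/bash
# sync cut-over home directories from the appliance back to sharptail:/home
# run as root on the appliance; adjust USERS as needed
USERS="hmeij hpc21"
for u in $USERS; do
    rsync -ac --bwlimit=4096 --whole-file --stats \
        /mnt/tank/zfshomes/$u/ sharptail:/home/$u/
done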
# create user, no new primary group but set primary + auxiliary groups, full name
# set shell, set permissions, some random passwd (date +%N with symbols)
# then move all dot files into ~/._nas, scp ~/.ssh over
# copy content over from sharptail, @hpcstore...
rsync -ac --bwlimit=4096 --whole-file --stats \
    sharptail:/home/hmeij/ /mnt/tank/zfshomes/hmeij/
# SSH keys in place so should be passwordless, test
ssh username@hpcstore.wesleyan.edu
# go to $HOME
cd /mnt/tank/zfshomes/username
# this will be mounted HPC wide at /zfshomes/username
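The random password bit might look like this (hypothetical helper on a Linux host, not the exact command used):

# throwaway password from nanoseconds plus symbols
PW="$(date +%N | sha256sum | cut -c1-10)#%"
echo "$PW"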
# for users
zfs allow everyone userquota,userused tank/zfshomes
# as user
zfs userspace tank/zfshomes
zfs groupspace tank/zfshomes

# hpc100
TYPE        NAME    USED   QUOTA
POSIX User  hpc100  14.9G  100G
POSIX User  root    1K     none

# set quota
zfs set userquota@hpc100=100g tank/zfshomes
zfs set groupquota@hpc100=100g tank/zfshomes
# get userused
zfs get userused@hpc100 tank/zfshomes

# list snapshots
zfs list -t snapshot
# output
NAME                                           USED  AVAIL  REFER  MOUNTPOINT
freenas-boot/ROOT/default@2019-12-17-22:04:34  2.10M     -   827M  -
tank/zfshomes@auto-20200309.1348-1y             210K     -   558K  -
tank/zfshomes@auto-20200310.1348-1y             219K     -  14.8G  -
tank/zfshomes@auto-20200311.1348-1y             165K     -  14.9G  -

# health
zpool status -v tank
  pool: tank
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:02 with 0 errors on Sun Feb 2 03:00:04 2020
config:

        NAME                                            STATE     READ WRITE CKSUM
        tank                                            ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/104a748f-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
            gptid/10d0c16e-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
            gptid/115414b8-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
            gptid/11dd105d-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
            gptid/12636cff-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
            gptid/12e6d913-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
            gptid/13676269-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
            gptid/13ee7fb2-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
            gptid/14706a76-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
            gptid/1504c334-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
            gptid/1592a623-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/16250571-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
            gptid/16b4a392-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
            gptid/173e4974-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
            gptid/17cb4efb-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
            gptid/1861c750-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
            gptid/18ef1edd-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
            gptid/197d9fc9-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
            gptid/1a09eebb-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
            gptid/1a99e25d-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
            gptid/1b2dd0b5-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
            gptid/1bbaa252-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
          raidz2-2                                      ONLINE       0     0     0
            gptid/1c60422c-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
            gptid/1cedf16e-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
            gptid/1d807f27-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
            gptid/1e0d0a20-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
            gptid/1e9dec87-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
            gptid/1f603e96-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
            gptid/1ff8b82e-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
            gptid/2087c210-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
            gptid/21128be3-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
            gptid/21ab0c6c-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
            gptid/2241e3e2-211a-11ea-bbd5-b496915e40c8  ONLINE       0     0     0
        logs
          gptid/238a4161-211a-11ea-bbd5-b496915e40c8    ONLINE       0     0     0
        cache
          gptid/23426c62-211a-11ea-bbd5-b496915e40c8    ONLINE       0     0     0
        spares
          gptid/22f36c47-211a-11ea-bbd5-b496915e40c8    AVAIL

errors: No known data errors
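A handy one-liner for a usage report across all users, largest consumers first:

# per-user usage and quota, sorted descending by usage
zfs userspace -o type,name,used,quota -S used tank/zfshomes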
SMB/CIFS (Samba) shares are also created once the homedir is up.
/usr/local/etc/smb4.conf
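For reference, a per-user share stanza in smb4.conf generally looks something like this sketch (the actual file is generated by the TrueNAS middleware, so treat it as illustrative only):

[username]
    path = /mnt/tank/zfshomes/username
    valid users = username
    read only = no
    browseable = no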
Note: At user creation a random password is set. Please ask to have it reset to access SMB shares. (There should be some self-serve password reset functionality with email confirmation, but I cannot find it for now.) Any passwords changed outside of the database will not be persistent across boots.
# windows, map network drive
\\hpcstore.wesleyan.edu\username
# credentials, one or all of these may work
WORKGROUP\username
localhost\username
username
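From a Linux box the share can be sanity-checked first (assuming smbclient is installed):

# list shares visible to this user
smbclient -L //hpcstore.wesleyan.edu -U username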
Change the $HOME location in /etc/passwd and propagate.
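One way to do that, shown for hmeij as an example:

# point $HOME at the new location, then verify
usermod -d /zfshomes/hmeij hmeij
grep ^hmeij: /etc/passwd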
Note: remove access to the old $HOME … chown root:root + chmod o-rwx
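For example, on sharptail (path assumed):

# lock down the old home directory
chown root:root /home/hmeij
chmod o-rwx /home/hmeij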
END OF USER ACCOUNT SETUP
root@hpcstore1[~]# cat /etc/exports
/mnt/tank/zfshomes -maproot="root":"wheel" -network 192.168.0.0/16
/mnt/tank/zfshomes -maproot="root":"wheel" -network 10.10.0.0/16
/mnt/tank/zfshomes-auto-20200310.1348-1y-clone -ro \
    -maproot="root":"wheel" -network 192.168.0.0/16
/mnt/tank/zfshomes-auto-20200310.1348-1y-clone -ro \
    -maproot="root":"wheel" -network 10.10.0.0/16
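From a client on either private network the exports can be verified (IP taken from the fstab examples further down):

# list exports as seen by this client
showmount -e 10.10.102.245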
Rollback is a potentially dangerous operation. Instead, restore via snapshots. See Guide.
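A minimal sketch of the clone approach, matching the clone name that shows up in /etc/exports above:

# expose a snapshot via a clone instead of rolling back
zfs clone tank/zfshomes@auto-20200310.1348-1y \
    tank/zfshomes-auto-20200310.1348-1y-clone
# copy whatever is needed back, then clean up
zfs destroy tank/zfshomes-auto-20200310.1348-1y-clone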
Mountpoint ownership is root:wheel (also for /mnt/tank/zfshomes). The clone gets mounted at cottontail2:/mnt/clone"date".
# mountpoints (maproot=root:wheel)
drwxr-xr-x 2 root root 4096 Mar 10 14:08 /mnt/clone0310
drwxr-xr-x 2 root root 4096 Mar  6 14:01 /zfshomes

# /etc/fstab examples (either private network)
#192.168.102.245:/mnt/tank/zfshomes \
    /zfshomes nfs rw,tcp,soft,intr,bg,vers=3
#10.10.102.245:/mnt/tank/zfshomes \
    /zfshomes nfs rw,tcp,soft,intr,bg,vers=3
#192.168.102.245:/mnt/tank/zfshomes-auto-20200310.1348-1y-clone \
    /mnt/clone0310 nfs ro,tcp,soft,intr,bg,vers=3
10.10.102.245:/mnt/tank/zfshomes-auto-20200310.1348-1y-clone \
    /mnt/clone0310 nfs ro,tcp,soft,intr,bg,vers=3
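After adding the fstab entry on a client, mounting is just:

# create the mountpoint, mount by fstab entry, verify
mkdir -p /mnt/clone0310
mount /mnt/clone0310
df -h /mnt/clone0310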