cluster:194 [2021/08/18 12:58] hmeij07 [Update 11]
cluster:194 [2024/09/10 20:14] (current) hmeij07
**[[cluster:
===== TrueNAS/ZFS =====
Notes. Mainly for me but might be useful/of interest to users.
</code>
+ | |||
+ | ==== certs ==== | ||
+ | |||
+ | * Go to System > CA and certs | ||
+ | * Add a cert | ||
+ | * name is " | ||
+ | * FQDN for hpstore.wesleyan.edu, | ||
+ | * fill in just the basics, no contraints or advance settings | ||
+ | * Add, then view CSR section, copy and provide to inCommon admin | ||
+ | |||
+ | * Once you get email back Available formats: | ||
+ | * as Certificate only, PEM encoded ... download, open in notepad, copy to clipboard | ||
+ | * Go to System > CA and certs | ||
+ | * Select type " | ||
+ | * check CSR exitsson this system | ||
+ | * caertficate authrority (pick CSR from dropdown list) | ||
+ | * paste in public key in certificate field | ||
+ | * paste in private key from CSR | ||
+ | * or Csr checkbox on system (this option) | ||
+ | * System > General, switch certs,Save (this will restart web services) | ||
+ | * check in new browser | ||
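The CSR round trip above can be sanity-checked with openssl before switching certs: the key, the CSR, and the certificate InCommon returns must all carry the same public key. A minimal sketch — file names and subject fields are made up, and the returned cert is simulated here by self-signing the CSR:

```shell
# Sanity-check a CSR round trip with openssl (a sketch; names are made up).
set -e
# stand-in for the GUI-generated key + CSR
openssl req -new -newkey rsa:2048 -nodes \
  -keyout hpstore.key -out hpstore.csr \
  -subj "/C=US/ST=CT/O=Wesleyan/CN=hpstore.wesleyan.edu" 2>/dev/null
# stand-in for the signed cert InCommon sends back (self-signed here)
openssl x509 -req -in hpstore.csr -signkey hpstore.key -days 7 \
  -out hpstore.crt 2>/dev/null
# all three public-key digests must match before switching certs
openssl pkey -in hpstore.key -pubout 2>/dev/null | openssl dgst -sha256
openssl req  -in hpstore.csr -pubkey -noout      | openssl dgst -sha256
openssl x509 -in hpstore.crt -pubkey -noout      | openssl dgst -sha256
```

If the three digests differ, the wrong CSR was picked from the dropdown or the wrong PEM was pasted in.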
+ | |||
==== ZFS ====
==== Snapshots ====
+ | |||
+ | Snapshots made easier in new releases ... traverse to the hidden directory ''/ | ||
+ | |||
+ | < | ||
+ | |||
+ | 129.133.52.245:/ | ||
+ | 251T | ||
+ | |||
+ | </ | ||
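Restoring a file via that hidden directory is then just a copy. A minimal sketch — the pool path, snapshot name, and user directory are all hypothetical, and the layout is mocked locally so the commands can run anywhere:

```shell
# Restoring from a snapshot is just a copy out of the hidden directory.
# A real path would look like <pool>/.zfs/snapshot/<snapname>/<user>/<file>
# (all names hypothetical); the layout is mocked so this runs anywhere.
mkdir -p fakepool/.zfs/snapshot/daily-2024-09-10/hmeij
echo "old contents" > fakepool/.zfs/snapshot/daily-2024-09-10/hmeij/notes.txt
# the live copy was deleted; pull it back from the read-only snapshot
mkdir -p fakepool/hmeij
cp fakepool/.zfs/snapshot/daily-2024-09-10/hmeij/notes.txt fakepool/hmeij/notes.txt
cat fakepool/hmeij/notes.txt
```

Users can do this themselves — the snapshot side is read-only, so nothing can be damaged.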
+ | |||
  * Daily snapshots, one per day, kept for a year (for now)
</code>
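"Kept for a year" means anything older than 365 days is a pruning candidate. A rough sketch of that age check — the date-based snapshot names are an assumption, and on the appliance the list would come from ''zfs list -t snapshot''; GNU date in UTC keeps the day arithmetic exact:

```shell
# Which daily snapshots fall outside the 365-day window? A sketch:
# names and date scheme are assumptions; the real list would come from
# `zfs list -t snapshot`. GNU date, UTC, so the day math is exact.
today=$(date -u -d 2024-09-10 +%s)   # pin "today" so the demo is stable
for snap in daily-2022-01-01 daily-2024-01-01 daily-2024-09-01; do
  ts=$(date -u -d "${snap#daily-}" +%s)
  age=$(( (today - ts) / 86400 ))
  if [ "$age" -gt 365 ]; then
    echo "expired: $snap ($age days)"
  else
    echo "keep:    $snap ($age days)"
  fi
done
```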
+ | |||
+ | |||
+ | ==== Console hangs ==== | ||
+ | |||
+ | 12.7 | ||
+ | |||
+ | As for the issue of the " | ||
+ | |||
+ | service middlewared stop\\ | ||
+ | service middlewared start | ||
+ | |||
+ | " | ||
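The restart sequence above can be wrapped in a small wait loop so you know when the middleware is answering again. A sketch only: the real probe would be something like ''midclt call system.ready'' or a curl against the web UI (both assumptions); here a stub that succeeds on the third attempt stands in so the loop can run anywhere:

```shell
# Poll until the middleware answers again after the restart (a sketch).
# The real probe would be e.g. `midclt call system.ready` or curl against
# the web UI -- both assumptions; a stub that succeeds on try 3 stands in.
tries=0
probe() {
  tries=$((tries + 1))
  [ "$tries" -ge 3 ]        # stub: "service is up" on the third attempt
}
until probe; do
  echo "not answering yet (attempt $tries), waiting..."
  sleep 1
done
echo "middleware answered after $tries attempts"
```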
Storage > Pool > "
** 12.0-U4.1 **

  * ditto above, see major release upgrade below
  * but old active did not come up, reset controller
  * click on "
  * hmm something about failed to connect failoverscratchdisk?

** 12.0-U5.1 **
  * this version went fine
__Not created/applied yet ...__\\
While the underlying issues have been fixed, this setting continues to be disabled by default for additional performance investigation. To manually reactivate persistent L2ARC, log in to the TrueNAS Web Interface, go to System > Tunables, and add a new tunable with these values:
<code>
Variable: vfs.zfs.l2arc.rebuild_enabled
Value: 1
Type: sysctl
</code>
From support: In an HA environment,
+ | |||
+ | ** 12.0-U6 ** | ||
+ | |||
+ | * same as 5.1, went fine, | ||
+ | * new standby reboot 5 mins | ||
+ | |||
+ | |||
+ | ** 12.0-U6.1 ** | ||
+ | |||
+ | * same as 6, went fine, | ||
+ | * little flakiness on failover, apply pending appeared twice | ||
+ | * let it go 10 mins, use ping hostname to test | ||
+ | * new standby reboot 5 mins | ||
+ | |||
+ | ** 12.0-U7 ** | ||
+ | |||
+ | * major OpenZFS update | ||
+ | * same as update 12.0 | ||
+ | * no problems | ||
+ | * cpu was unusually busy before upgrade | ||
+ | * terminated some rsyncs | ||
+ | |||
+ | ** 12.0-U8 ** | ||
+ | |||
+ | * 02/23/2022 | ||
+ | * no problems | ||
+ | |||
+ | ** 12.0-U8.1 ** | ||
+ | |||
+ | * 05/03/2022 | ||
+ | * failover success at 10 mins | ||
+ | * then no Pending box, just a Continue button | ||
+ | * watch console messages, at 17 mins HA enabled | ||
+ | |||
==== Update 13 ====

System > Update > Select (new train 13.0-STABLE)

<code>
# in shell

…10%…20%…30%…40%…50%…60%…70%…80%…90%…100%

beadm list
# (Active N = 12.0-U8.1 and R = 13.0-U3.1)
</code>
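The ''beadm list'' output above can be checked mechanically: N marks the boot environment running now, R the one that activates on reboot. Since beadm only exists on the appliance, this sketch feeds awk a canned listing (column layout approximated):

```shell
# Which boot environment runs now (N) and which activates on reboot (R)?
# beadm only exists on the appliance, so feed awk a canned listing
# (column layout approximated from `beadm list`).
beadm_out='BE          Active Mountpoint  Space Created
12.0-U8.1   N      /           2.1G  2022-05-03 10:00
13.0-U3.1   R      -           9.8G  2023-01-15 09:00'
result=$(echo "$beadm_out" | awk '
  NR > 1 && $2 ~ /N/ { print "running:   " $1 }
  NR > 1 && $2 ~ /R/ { print "on reboot: " $1 }')
echo "$result"
```

After the reboot both flags collapse onto the new BE (shown as NR).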

once both have finished, reboot passive, web gui log back in

once passive is back up, reboot active

web gui: log back into new active, wait for HA to be enabled

debug plus screenshots for snapshot visibility, which is visible (working in 13.0-U3.1), but the database setting is still invisible

took less than 35 mins
<code>
bstop 0
bresume 0
# manual, one at a time
scontrol hold joblist
# one at a time
# for i in `squeue | grep '
# then grep '
scontrol suspend joblist
scontrol resume
scontrol release joblist
</code>
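The commented-out loop above suggests driving scontrol from squeue output one job at a time. A sketch of that idea — squeue/scontrol need a live Slurm cluster, so this walks a canned listing (job IDs, users, and partitions invented) and only echoes the commands it would run:

```shell
# Suspend running jobs one at a time before a failover (a sketch).
# squeue/scontrol need a live Slurm cluster, so walk a canned listing
# (job IDs, users, partitions invented) and only echo the commands.
squeue_out='JOBID PARTITION NAME   USER  ST TIME NODES
1001  mwgpu     md_run alice R  2:01 1
1002  tinymem   blast  bob   R  0:33 1
1003  mwgpu     relax  carol PD 0:00 1'
cmds=$(echo "$squeue_out" | awk 'NR > 1 && $5 == "R" { print "scontrol suspend " $1 }')
echo "$cmds"                  # drop the echo layer to actually run them
```

Pending jobs (ST = PD) are skipped; hold/release covers those.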
+ | |||
+ | |||
** 13.0-U4 **

  * apply pending update
  * 10 mins, standby on new update
  * initiate fail over 1 min
  * look for the icon in top bar, moving back and forth
  * finish upgrade
  * wait for HA to be enabled
  * check versions

** 13.0-U5.1 **

  * apply pending update
  * 10 mins, standby on new update
  * initiate fail over 1 min
  * look for the icon in top bar, moving back and forth
  * finish upgrade
  * wait for HA to be enabled
  * check versions

** 13.0-U5.3 ** 08/25/2023

  * no problems

**Next support ticket: Ask if you ever need to reboot the disk shelves?** Full power off?

Hi, I'm archiving content from my TrueNAS appliance to another platform, then deleting the migrated files. I'm observing directories like this: 7.5 million files in 990 GB, or 15 million files in 7 TB. Should I be concerned that the disk shelves have never been cold rebooted? Like XFS replaying the log journal for a clean mount? My HA nodes reboot on upgrade, but I realize the disk arrays keep running, always.

Tier 1 Support: The 2 ES24 shelves do not need to be rebooted as they just house the drives themselves and provide power to them. There shouldn't
Rsync stats (after decompressing)
<code>
sod1/
Number of files: 18,691,764
Total transferred file size: 13072322138140 bytes
arnt_rosetta/
Number of files: 8,
Total transferred file size: 1,
</code>
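Those stats put the average file size for the sod1 tree at roughly 0.7 MB, which is why millions of small files dominate the archiving time. A quick back-of-envelope check with awk:

```shell
# Average file size for the sod1 tree, from the rsync stats above.
awk 'BEGIN {
  files = 18691764            # Number of files
  bytes = 13072322138140      # Total transferred file size
  printf "avg %d bytes (~%.1f KiB) per file\n", bytes / files, bytes / files / 1024
}'
```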
+ | |||
+ | |||
+ | |||
+ | ** 13.0-U6.1 ** 12/12/2023 | ||
+ | |||
+ | * no problems | ||
+ | |||
+ | ** 13.0-U6.2 ** 08/20/2024 | ||
+ | |||
+ | * no problems | ||
+ | * start update to initiate failover, inside 10 mins | ||
+ | * initiate fail over to HA enabled, inside 8 mins | ||