\\
**[[cluster:0|Back]]**

===== TrueNAS/ZFS m40ha =====

Notes on the deployment and production changes on our 500T iXsystems m40ha storage appliance.

Fixed the date on the controllers by pointing ntpd to 129.133.1.1
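
A quick sanity check from a controller shell that the time actually synced (a sketch; the peer list output will differ):

<code bash>
# list ntpd peers; the * marks the server currently selected
ntpq -pn

# one-shot offset check against the campus time server (query only, does not set the clock)
ntpdate -q 129.133.1.1
</code>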

ES60 middle amber light blinking, which is ''ok''; green health check on the right

  * SAS ports A1 and B1 green LED

Verified read and write caches (but do not remember where...)
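
One place to confirm the caches from a shell is ''zpool status'', which lists any ''logs'' (write cache / SLOG) and ''cache'' (read cache / L2ARC) vdevs on the pool; a sketch, assuming the pool is named ''tank'':

<code bash>
# the "logs" and "cache" sections show the write and read cache devices
zpool status tank

# vdev layout with sizes
zpool list -v tank
</code>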

As opposed to the X series, the 1G TrueNAS management port is not used

The cyclic message was caused by duplicate VHIDs

==== summary ====

To recap, we were able to verify that the issue with the X20 webUI not loading after the M40 was plugged in was due to duplicate VHIDs. Once that was changed we saw the issue resolve. We also verified that the read and write drives showed as healthy, and that the pool and HA were enabled and healthy. From there we set up our other network interface and then created our NFS share. Next we configured the NFS share to be locked down to certain hosts and networks, and then set up a replication task from your X series to your new M series for the NFS shares.
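
The VHIDs live in the CARP configuration of the failover interfaces, so a quick way to spot a clash from either appliance's shell is to look at what each interface advertises (a sketch; interface names will differ):

<code bash>
# show the interface headers plus the CARP state and vhid on every interface that carries one
ifconfig | grep -E 'flags=|vhid'
# typical CARP line:  carp: MASTER vhid 20 advbase 1 advskew 0
</code>

Two appliances on the same segment must not advertise the same VHID.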


==== replication ====

First-time replication tasks can take a long time to complete as the entire snapshot must be copied to the destination system. Replicated data is not visible on the receiving system until the replication task completes. Later replications only send the snapshot changes to the destination system. Interrupting a running replication requires the replication task to restart from the beginning.
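
Under the hood this is ''zfs send | zfs receive''; a rough sketch of what the first (full) run and a later (incremental) run amount to, with hypothetical snapshot names and target host:

<code bash>
# first run: the full snapshot stream crosses the wire
zfs send tank/zfshomes@auto-1 | ssh root@m40ha zfs receive tank/zfshomes

# later runs: only the delta between the last common snapshot and the new one
zfs send -i tank/zfshomes@auto-1 tank/zfshomes@auto-2 | \
  ssh root@m40ha zfs receive tank/zfshomes
</code>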

The target dataset on the receiving system is automatically created in read-only mode to protect the data. To mount or browse the data on the receiving system, create a clone of the snapshot and use the clone. We set IGNORE, so the dataset should be read/write on the M40. Enable SSH on the **target**.
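
A sketch of both options from the M40 shell (snapshot name is hypothetical):

<code bash>
# see how the replicated dataset landed
zfs get readonly tank/zfshomes

# stock approach on a read-only target: clone a snapshot and browse the clone
zfs clone tank/zfshomes@auto-1 tank/zfshomes-browse

# with IGNORE the task leaves the property alone, so it can simply be switched off
zfs set readonly=off tank/zfshomes
</code>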

On **source** System > SSH Connections > Add

  * name replication
  * host IP or FQDN of target
  * username root
  * generate new key
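
Before building the task it is worth confirming the generated key actually gets the source in without a password; a minimal check from the source shell (key path and target host are assumptions):

<code bash>
# should print the target's datasets with no password prompt
ssh -i /root/.ssh/replication root@m40ha zfs list -d 1 tank
</code>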

On **source** Tasks > Replication Tasks

  * name zfshomesrepel
  * direction PUSH
  * transport SSH
  * check enabled
  * ssh connection replication (from above)
  * on source side
    * tank/zfshomes
    * recursive not checked
    * include dataset properties
    * tank/zfshomes-auto-%Y%m%d.%H%M-1y - 6 MONTH(S) - Enabled
    * check run automatically
  * on target side
    * ssh connection replication
    * tank/zfshomes
    * IGNORE
    * snapshot retention None
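
The %Y%m%d.%H%M part of the naming schema is strftime tokens; a quick way to preview what a generated snapshot name expands to (the literal prefix and suffix shown here just follow the schema above):

<code bash>
# preview the token expansion, e.g. zfshomes-auto-20240917.1453-1y
date +"zfshomes-auto-%Y%m%d.%H%M-1y"
</code>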

Save

On **source** the Replication Tasks page shows the task as enabled and PENDING

You could kick this off with Run Now in the Edit menu of the task.
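
Once it has run, the quickest sanity check is to look at what arrived on the target (target hostname is an assumption):

<code bash>
# snapshots that made it to the M40
ssh root@m40ha zfs list -t snapshot -r tank/zfshomes

# space used by the replicated dataset
ssh root@m40ha zfs list -o name,used,avail tank/zfshomes
</code>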

==== CAs & Certs ====
  
  * Generate a CSR, insert year in Name
  * then...
  * download inCommon
    * Issuing CA certificates only:
    * as Root/Intermediate(s) only, PEM encoded (first one in chain section)
  * menu CAs > import CA
  * copy all info into public
  * menu Certs > import a cert
    * signing auth, point to the CSR
    * as Certificate only, PEM encoded (first one in certs format)
    * copy in public
  * don't click http -> https so you don't get locked out
  * when the cert expires on you, just access https://
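
Before pointing the webUI at the new cert it can save a lockout to confirm the cert, the key behind the CSR, and the inCommon chain actually agree; a sketch with openssl and hypothetical file names:

<code bash>
# cert and private key must share the same modulus
openssl x509 -noout -modulus -in m40ha-2024.crt | openssl md5
openssl rsa  -noout -modulus -in m40ha-2024.key | openssl md5

# verify the cert against the imported Root/Intermediate chain
openssl verify -CAfile incommon-chain.pem m40ha-2024.crt

# check the expiry so it does not sneak up on you
openssl x509 -noout -enddate -in m40ha-2024.crt
</code>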
  
\\
**[[cluster:0|Back]]**