cluster:194 [2021/03/23 13:32]
hmeij07 [fndebug]
cluster:194 [2021/07/06 18:19]
hmeij07 [Snapshots]
Line 164: Line 164:
 zfs userspace  tank/zfshomes
 zfs groupspace tank/zfshomes
 +
 +# utterly bizarre: in v12 these commands change
 +
 +root@hpcstore2[~]# su - hmeij07
 +hpcstore2%
 +hpcstore2% zfs get userused@hmeij07 tank/zfshomes
 +NAME           PROPERTY          VALUE             SOURCE
 +tank/zfshomes  userused@hmeij07  718K              local
 +hpcstore2% zfs get userquota@hmeij07 tank/zfshomes
 +NAME           PROPERTY           VALUE              SOURCE
 +tank/zfshomes  userquota@hmeij07  500G               local
 +hpcstore2% zfs get userspace@hmeij07 tank/zfshomes
 +bad property list: invalid property 'userspace@hmeij07'
 +
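 # a minimal sketch of the v12-style per-user quota commands (dataset tank/zfshomes
 # and user hmeij07 taken from the examples above; adjust names as needed)
 zfs set userquota@hmeij07=500G tank/zfshomes                # set or change a user quota
 zfs get userquota@hmeij07,userused@hmeij07 tank/zfshomes    # quota and usage for one user
 zfs userspace -o name,used,quota tank/zfshomes              # per-user usage table for the dataset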
  
 # hpc100
Line 332: Line 346:
    * check permissions on cloned volume, not windows!
    * NOTE: once had mnt/tank/zfshomes also reset to windows, nasty permission denied errors
-    * when cloning grant access to 192.168.0.0/16 and 10.10.0.0/16
+    * when cloning grant access to <del>192.168.0.0/16 and 10.10.0.0/16</del> greentail52 129.133.52.226
    * NFS mount, read only (see the mount sketch after this list)
    * maproot ''root:wheel'' (also for mnt/tank/zfshomes)
-  * Clone mounted on say ''cottontail2:/mnt/clone"date"''
+  * Clone mounted on say ''cottontail2:/mnt/hpc_store_snapshot''
  * Restore actions by user
  * Delete clone when done
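A rough shell sketch of the clone-and-mount steps above. The snapshot name, clone name, and export path are hypothetical; only the hosts, the read-only mount, and the cleanup step come from these notes.

<code>
# on hpcstore2: clone the snapshot to be restored from (names are hypothetical)
zfs clone tank/zfshomes@manual-restore tank/zfshomes-clone

# on cottontail2: mount the NFS export of the clone, read only
mkdir -p /mnt/hpc_store_snapshot
mount -t nfs -o ro hpcstore2:/mnt/tank/zfshomes-clone /mnt/hpc_store_snapshot

# when the user is done restoring: umount on cottontail2, destroy the clone on hpcstore2
umount /mnt/hpc_store_snapshot
zfs destroy tank/zfshomes-clone
</code>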
Line 448: Line 462:
 /dev/da10 HGST:7200:HUS728T8TAL4201:VAKM187L C:30 dR:2 dW:2503 dL:55 uR:0 uW:0 SMART Status:OK **!!!**
 /dev/da9 HGST:7200:HUS728T8TAL4201:VAKL26ML C:30 dR:3 dW:0 dL:0 uR:0 uW:39 SMART Status:OK **!!!**
-# these drives have not failed yet ut have write errors, offline/replace, see below
+# these drives have not failed yet but have write errors, offline/replace, see below
  
 # next look at output of zpool status -x in fndebug/ZFS/dump.txt
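 # rough sketch for offlining/replacing a drive with write errors from the shell
 # (pool name "wheel" and replacement device da20 are assumptions; use the device
 # labels exactly as they appear in zpool status)
 smartctl -a /dev/da10 | egrep -i "error"    # confirm the drive is logging errors
 zpool offline wheel da10                    # take the suspect drive out of service
 zpool replace wheel da10 da20               # resilver onto the replacement disk
 zpool status wheel                          # watch resilver progress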
Line 495: Line 509:
 </code>
  
-** Pool Unhealthy **
+** Pool Unhealthy but not Degraded **
  
-No failed disks, no deploy of spare, but pool unhealthy.  The ''dump.txt'' files for SMART and ZFS show nothing remarkable. But in the console log we observe that disk da11 has problems. RMA issued. 3rd replacement disk in a year.
+No failed disks, no spare deployed, but the pool is unhealthy. The ''dump.txt'' files for SMART and ZFS show nothing remarkable, but in the console log we observe that disk //da11// has problems. RMA issued; third replacement disk in a year.
  
 <code>
Line 507: Line 521:
 Mar 21 04:03:57 hpcstore2 (da11:mpr0:0:21:0): Descriptor 0x80: f7 72
 Mar 21 04:03:57 hpcstore2 (da11:mpr0:0:21:0): Error 5, Unretryable error
 +
 +1) Storage > Pools. Click gear icon next to the pool and press the "Status" option.
 +2) Find da11 and press the three-dot options button next to it, then press "Offline".
 +3) System > View Enclosure, find and select da11, press "Identify".
 +4) Physically swap the drive on the rack with its replacement.
 +5) Storage > Pool > Status page, bring up three-dot options for the removed drive, 
 +5a) Select member disk from drop down, and press "Replace". Success popup, click Close.
 +6) Wait until the resilver finishes.
  
 </code>
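For reference, a rough shell equivalent of the GUI offline/identify/replace steps above. This is only a sketch: the pool name "wheel" is taken from the upgrade notes further down, the replacement is assumed to land on the same //da11// device node, and ''sesutil'' is FreeBSD's enclosure utility.

<code>
zpool offline wheel da11      # step 2: take the failing disk offline
sesutil locate da11 on        # step 3: blink the bay LED to find the disk
# step 4: physically swap the drive, then turn the LED off
sesutil locate da11 off
zpool replace wheel da11      # step 5: resilver onto the new disk in the same slot
zpool status wheel            # step 6: watch until the resilver completes
</code>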
Line 552: Line 574:
 Result: personality switch active vs standby, took 35 mins
  
-In two months: ZFS feature updates patch, not interruptive, do around 04/09/2021
+In two months: ZFS feature updates patch, not interruptive, <del>do around 04/09/2021</del>\\ 
 +Upgrade done 
 + --- //[[hmeij@wesleyan.edu|Henk]] 2021/06/07 07:40//
  
 Storage > Pool > "wheel" > Upgrade Pool
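The feature-flag upgrade can also be checked and run from the shell; a small sketch, with the pool name "wheel" taken from the line above:

<code>
zpool upgrade          # list pools that do not have all supported features enabled
zpool upgrade wheel    # enable all supported features; shell equivalent of "Upgrade Pool"
zpool status wheel     # the "some supported features are not enabled" notice should be gone
</code>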