Advice on ZFS Pool Modification - Removing a Mirrored vdev and HDD Upgrade

impovich

Explorer
Joined
May 12, 2021
Messages
72
Hello dear community, please advise on how I can remove one mirrored vdev from a pool with two mirrors:

Code:
  pool: tank
 state: ONLINE
  scan: scrub repaired 0B in 07:21:05 with 0 errors on Wed Nov 29 03:29:35 2023
config:

    NAME                                      STATE     READ WRITE CKSUM
    tank                                      ONLINE       0     0     0
      mirror-0                                ONLINE       0     0     0
        3203ffa0-f669-4c3a-bcc6-0e36ab5a0a97  ONLINE       0     0     0
        5b10cee6-f980-4065-8350-f44377caa304  ONLINE       0     0     0
      mirror-1                                ONLINE       0     0     0
        c6dde47b-030e-412c-95aa-3713074238a2  ONLINE       0     0     0
        c0f955d1-9c7c-452a-b517-3ec104abc096  ONLINE       0     0     0


As I understand it, I first have to free enough space so that all the data can fit on a single vdev. After doing so, I can execute zpool remove tank mirror-1. Once this is done, can I simply replace the remaining disks one by one with bigger ones?
Or should I replace the disks in mirror-0 with bigger ones first and then do zpool remove tank mirror-1?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
You'll get less fragmentation if you increase the size of the VDEV that will remain in the pool first, then remove the "unwanted" VDEV once done.

If you don't care too much about your fragmentation, both options you mentioned will work.
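
For reference, a rough sketch of that order from the shell. The new-disk names are placeholders (the existing mirror-0 member IDs are taken from the status output above), and on TrueNAS you would normally do the replacement through the UI, which handles partitioning the new disks for you:

Code:
# Replace each disk of the mirror that will stay, one at a time,
# letting each resilver finish before starting the next (NEW-DISK names are placeholders).
zpool replace tank 3203ffa0-f669-4c3a-bcc6-0e36ab5a0a97 /dev/disk/by-partuuid/NEW-DISK-1
zpool status tank    # wait for the resilver to complete
zpool replace tank 5b10cee6-f980-4065-8350-f44377caa304 /dev/disk/by-partuuid/NEW-DISK-2
zpool status tank    # wait again

# Let the vdev grow to the new disk size once both members are bigger.
zpool set autoexpand=on tank

# Only then evacuate and remove the unwanted vdev.
zpool remove tank mirror-1
zpool status tank    # a "remove:" line shows the evacuation progress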
 
Joined
Oct 22, 2019
Messages
3,641
Do I understand correctly that it will copy all the data from mirror-1 to mirror-0 and all the data will remain intact?
Yes, but as @sretalla explained, if you're going to expand the remaining mirror vdev anyway, you should do that first before removing the unwanted mirror vdev.

Regardless, you cannot issue a remove operation if there is insufficient available space in the remaining vdev to house all of the pool's data.
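
A quick way to sanity-check that before issuing the removal (a sketch using the standard command):

Code:
# Per-vdev view: ALLOC on mirror-1 has to fit into FREE on mirror-0.
zpool list -v tank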

It's wise to have a backup of the entire pool. Murphy's Law, and all that...
 

impovich

Explorer
Joined
May 12, 2021
Messages
72
Yes, but as @sretalla explained, if you're going to expand the remaining mirror vdev anyway, you should do that first before removing the unwanted mirror vdev.

Regardless, you cannot issue a remove operation if there is insufficient available space in the remaining vdev to house all of the pool's data.

It's wise to have a backup of the entire pool. Murphy's Law, and all that...
I'm almost there :)
 

Attachments

  • Screenshot at Jan 14 16-40-26.png (97.5 KB)

impovich

Explorer
Joined
May 12, 2021
Messages
72
I can't remove mirror-1; I'm getting:
Code:
cannot remove mirror-1: out of space


How can I overcome this issue?
 

Attachments

  • Screenshot at Jan 14 18-35-50.png (63.9 KB)
  • Screenshot at Jan 14 18-38-58.png (78.2 KB)
  • Screenshot at Jan 14 18-42-26.png (130.5 KB)

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Do you have snapshots? If so, they may be using a lot of space. You may need to delete the snapshots first.
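
A sketch of how to check and clean that up (the snapshot name in the destroy line is just a placeholder):

Code:
# Space held by snapshots per dataset (USEDSNAP column).
zfs list -o space -r tank

# Largest individual snapshots.
zfs list -t snapshot -o name,used -s used -r tank

# Remove a snapshot that is no longer needed (placeholder name).
zfs destroy tank/somedataset@auto-2023-12-01-00-00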
 

impovich

Explorer
Joined
May 12, 2021
Messages
72
It seems I solved the issue by freeing up more space; I didn't expect that to be needed after switching mirror-0 to bigger HDDs.
 

Attachments

  • Screenshot at Jan 14 18-48-18.png (652.8 KB)

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
And now you have another issue: your pool is way too full. You need bigger disks or more disks.
 

impovich

Explorer
Joined
May 12, 2021
Messages
72
Unfortunately, not everything went flawlessly. For unknown reasons the pool degraded: one of the new disks is unavailable, and the second one reports the wrong size in the pool.


Code:
  pool: tank
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
    invalid.  Sufficient replicas exist for the pool to continue
    functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: resilvered 2.31T in 05:41:53 with 0 errors on Sun Jan 14 18:20:01 2024
remove: Removal of vdev 1 copied 892G in 2h17m, completed on Sun Jan 14 21:03:33 2024
    13.5M memory used for removed device mappings
config:

    NAME                                      STATE     READ WRITE CKSUM
    tank                                      DEGRADED     0     0     0
      mirror-0                                DEGRADED     0     0     0
        11639043010271858211                  UNAVAIL      0     0     0  was /dev/disk/by-partuuid/5c1c2b32-b4b7-4bba-a8ae-645dd4a2a800
        0bf1d930-893e-400d-a691-657b724cb634  ONLINE       0     0     0

errors: No known data errors



Code:
truenas# zpool list -v tank
NAME                                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
tank                                      3.62T  2.80T   848G        -         -    27%    77%  1.00x  DEGRADED  /mnt
  mirror-0                                3.62T  2.80T   848G        -         -    27%  77.2%      -  DEGRADED
    11639043010271858211                  7.28T      -      -        -         -      -      -      -   UNAVAIL
    0bf1d930-893e-400d-a691-657b724cb634  3.64T      -      -        -         -      -      -      -    ONLINE
  indirect-1



Code:
truenas# lsblk
NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sdb        8:16   0  7.3T  0 disk
└─sdb1     8:17   0  3.6T  0 part
sdf        8:80   0  7.3T  0 disk
└─sdf1     8:81   0  7.3T  0 part



Code:
truenas# zpool get autoexpand tank
NAME  PROPERTY    VALUE   SOURCE
tank  autoexpand  on      local
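
autoexpand only kicks in once the underlying partition covers the whole disk, and the lsblk output above shows sdb1 still at 3.6T on a 7.3T drive. A sketch of how that partition could be grown from the shell, assuming sdb1 is the 0bf1d930... member; this is not the official TrueNAS procedure (the UI's Expand action is the supported route and does the same work):

Code:
# Grow partition 1 to the end of the disk, then let the kernel and ZFS see the new size.
parted /dev/sdb resizepart 1 100%
partprobe /dev/sdb
zpool online -e tank 0bf1d930-893e-400d-a691-657b724cb634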
 

Attachments

  • Screenshot at Jan 14 21-42-46.png (57.6 KB)

impovich

Explorer
Joined
May 12, 2021
Messages
72
Weird things happened. After replacing the disk with the same disk and resilvering, the pool became healthy, but the space didn't expand automatically even after a reboot, so I tried to do it manually from the console using
Code:
zpool online -e tank gptid/
for both disks - nothing changed.
So I hit the EXPAND button in the UI and got this feedback:

Code:
Command partprobe /dev/sdf failed (code 1): Error: Partition(s) 1 on /dev/sdf have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will remain in use. You should reboot now before making further changes.
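
That message means the kernel kept the old partition table because the device was still in use, so the larger partition was not visible yet. A couple of things that can be checked or retried (a sketch, not what was actually run here):

Code:
# Has the kernel picked up the new partition size? (sizes in bytes)
lsblk -b /dev/sdf

# Ask the kernel to re-read the partition table, or simply reboot as the message suggests.
blockdev --rereadpt /dev/sdf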


After a while the disk was just kicked out of the pool...

P.S. I think that's enough games; it seems there is a bug in SCALE-23.10.1, so I will back up the data and recreate the pool.
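
For the backup step, a minimal sketch using ZFS replication; the destination pool name "backup" and the snapshot name are placeholders, and a TrueNAS replication task in the UI does the same thing:

Code:
# Recursive snapshot of the whole pool, then send everything to another pool.
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs recv backup/tank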
 

Attachments

  • Screenshot at Jan 15 18-15-27.png (62.3 KB)