Question on zPool vDev retirement

seb101

Contributor
Joined
Jun 29, 2019
Messages
142
Hi all,

The scenario:

1. You have an existing zPool containing one mirrored vDev of two 4TB disks. Total zPool storage 4TB.
2. You later add another mirrored vDev of two 12TB disks to the pool. Total zPool storage 16TB.

Now, at a future date, you decide that due to over-provisioning of space, running the original pair of 4TB disks is a waste of energy, and you want to remove them from the zPool but have the data on that 4TB vDev transfer over to the 12TB vDev mirror.
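For concreteness, the kind of commands that would have produced this layout (pool and disk names are placeholders):

Code:
zpool create tank mirror ada0 ada1    # two 4TB disks -> 4TB usable
zpool add tank mirror ada2 ada3       # two 12TB disks -> 16TB usable total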

Is there any way to achieve this inside the ZFS system without copying the data out to another location and destroying the pool?

Thanks.
 
winnielinnie

Joined
Oct 22, 2019
Messages
3,641
Is there any way to achieve this inside the ZFS system without copying the data out to another location and destroying the pool?
Not that I'm aware of, unless someone knows of a way to force some (likely dangerous) low-level data manipulation. Even with "zpool split" you're only removing a physical disk from a mirrored vdev, leaving another disk behind: essentially "downgrading" your mirror to a non-redundant stripe. Either way, that 4TB of "extra" capacity will still remain in your original pool, but without the safeguard of redundancy.
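For reference, that route would be something like the following (pool and disk names are placeholders), and it still leaves a non-redundant 4TB vdev in the pool:

Code:
zpool detach tank ada1        # drop one disk from the old mirror; the survivor becomes a lone stripe
# or, to peel one side of every mirror off into a brand-new pool:
zpool split tank newpool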

I believe you have to take the long approach by replicating all your datasets/snapshots/clones (basically your entire zpool) to a new "temporary pool", and from there you replicate it back to your newly-created 12TB pool, sans the 4TB drives.
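Roughly, the round trip might look like this (pool names "tank" and "temp" and the snapshot name are placeholders; verify everything before destroying anything):

Code:
zfs snapshot -r tank@migrate                  # recursive snapshot of every dataset
zfs send -R tank@migrate | zfs recv -F temp   # replicate the whole pool to the temporary pool
# ...verify, destroy "tank", re-create it on just the 12TB mirror, then reverse:
zfs send -R temp@migrate | zfs recv -F tank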

This obviously requires extra drives on hand, and there's a moment in time where you could potentially lose everything if you make a mistake in the order of steps or accidentally run a one-way destruction of data before you have a pristine and safe copy of everything.

You could technically use two large USB drives as your temporary pool; enough to hold everything that currently exists, while you re-create and then replicate everything back to the smaller 12TB-capacity pool. (There's even a potential risk of bumping into I/O errors when copying everything from the temporary pool to the new pool, if the temporary disks were near failing.) :eek:

As for saving energy, isn't it only about 1-2 watts for a 5400-RPM drive to keep spinning at idle? Give or take, depending on where you live, that should add less than $1 per month to your electricity bill.
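Back-of-the-envelope, assuming roughly 2 W per idle drive and about $0.15 per kWh (both figures are rough guesses):

Code:
2 drives x 2 W = 4 W
4 W x 24 h x 30 days = 2,880 Wh ~= 2.9 kWh per month
2.9 kWh x $0.15/kWh ~= $0.43 per month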
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Well, you could use the new "zpool remove" function. I don't know when it was introduced, and it's not as nice as the Oracle version. The OpenZFS version only works on mirrored (and plain, non-redundant) vDevs & pools, unlike the Solaris ZFS version, which seems to work on RAID-Zx vDevs and pools as well.

Since this is a highly sought-after feature, it was implemented. However, what it does is create a virtual vDev on the remaining physical vDev and then move the data off the vDev being removed. This IS a supported feature, though I don't know in which version of TrueNAS it became available, nor do I know the syntax.

From what I can gather, once this "virtual vDev" is copied over, it's mostly read-only. As you delete files, the virtual vDev takes up less and less space, until eventually it's empty.
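If I had to guess at the syntax, it would be something along these lines (pool and vDev names are made up):

Code:
zpool remove tank mirror-0    # evacuates the old mirror's data onto the remaining vDev
zpool status tank             # the removed vDev then shows up as an "indirect-0" entry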
 
winnielinnie

Joined
Oct 22, 2019
Messages
3,641
Since this is a highly sought-after feature, it was implemented. However, what it does is create a virtual vDev on the remaining physical vDev and then move the data off the vDev being removed. This IS a supported feature, though I don't know in which version of TrueNAS it became available, nor do I know the syntax.

Is there anywhere to read about this? I could only find articles and documentation regarding "zpool remove" that cover spares and cache drives specifically, but not data vdevs. Any such references to shrinking a pool instruct the user to "destroy it and rebuild it" using fewer drives for the new pool, then copy data over from a backup.

Does this new "zpool remove" feature use free space on the larger vdevs to create a virtual vdev to temporarily have data copied over to it from the soon-to-be removed vdev?

As far as a "virtual vdev", wouldn't that technically make it "virtual virtual device? Vdevception? :tongue:
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
@winnielinnie It's a new feature in OpenZFS that was not available in the FreeBSD native implementation up to FreeBSD 12.2.
OpenZFS is standard in FreeBSD 13, and TrueNAS uses the port implementation on FreeBSD 12.2, so the feature should be available. You can remove single-disk or mirror vdevs even if they carry data. The downside is, as @Arwen explained, that the removed vdevs are "emulated" on the remaining ones, so you get another rather opaque level of indirection.
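If you want to check a given pool, something like this should tell you (the pool name "tank" is a placeholder, and exact output differs between releases):

Code:
zpool version                           # recent OpenZFS reports a 2.x version here
zpool get feature@device_removal tank   # "enabled" or "active" means the pool supports it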

How do I know? Listening to the bsdnow.tv podcast. I don't have a pointer to any documentation off the top of my head, sorry.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Is there anywhere to read about this?
The zpool-remove manpage would be the place to start:
Code:
ZPOOL-REMOVE(8)         FreeBSD System Manager's Manual        ZPOOL-REMOVE(8)

NAME
     zpool-remove – Remove a device from a ZFS storage pool

SYNOPSIS
     zpool remove [-npw] pool device...
     zpool remove -s pool

DESCRIPTION
     zpool remove [-npw] pool device...
             Removes the specified device from the pool.  This command
             supports removing hot spare, cache, log, and both mirrored and
             non-redundant primary top-level vdevs, including dedup and
             special vdevs.  When the primary pool storage includes a top-
             level raidz vdev only hot spare, cache, and log devices can be
             removed.  Note that keys for all encrypted datasets must be
             loaded for top-level vdevs to be removed.

...
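
Going by that synopsis, a cautious workflow might be (pool and vDev names are placeholders; -n is a dry run):

Code:
zpool remove -n tank mirror-0   # dry run: prints the estimated memory needed for the mapping table
zpool remove tank mirror-0      # actually start evacuating the vdev
zpool status tank               # shows removal progress, then an "indirect-0" entry
zpool remove -s tank            # stops and cancels an in-progress removal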
 
winnielinnie

Joined
Oct 22, 2019
Messages
3,641
OpenZFS is standard in FreeBSD 13, and TrueNAS uses the port implementation on FreeBSD 12.2, so the feature should be available.
Oh nice! (Though I'd still be nervous to shrink an existing zpool in place.) But the fact that it's available in TrueNAS Core (FreeBSD-based) is a good sign.


The downside is, as @Arwen explained, that the removed vdevs are "emulated" on the remaining ones, so you get another rather opaque level of indirection.
The zpool-remove manpage would be the place to start:
While it gives a brief summary, I'd like to read (or listen to) something that goes into more detail and explains whether there's a performance penalty from the "emulated" vdev while it still exists within the pool, such as what @Patrick M. Hausen and @Arwen mentioned above. I enjoy reading about someone's real-world experience with something before I end up using it, if/when applicable. :smile:


How do I know? Listening to the bsdnow.tv podcast. I don't have a pointer to any documentation off the top of my head, sorry.
Do you remember which episode it was? I searched through the descriptions of recent episodes, but saw no mention of this new feature.
 