The weird thing about this device removal is that once all the data has been migrated and is being accessed through indirect pointers, it can be remapped.
If I understand it correctly, the command is zpool remap POOL. This causes the indirect pointer table (which is static and held in memory at all times) to shrink, and the blocks start owning "concrete" blocks (normal, non-indirect blocks). Don't quote me; I don't clearly understand it, just parroting what I vaguely remember.
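If I have the commands right, the flow looks something like the sketch below. The pool name "tank" and device "sda" are made-up examples, and zpool remap only existed in some releases (it was later dropped once the mapping was condensed automatically), so check your version's man pages:

```shell
# Hypothetical pool/device names for illustration only.
zpool remove tank sda   # evacuate the device; reads then go through the
                        # in-memory indirect mapping table
zpool remap tank        # rewrite remapped blocks to concrete locations,
                        # shrinking the indirect mapping (command present
                        # only in some OpenZFS releases)
```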
As for removing a single disk from a RAID-Zx pool, I can see that it would have to re-write the data using whatever parity level exists in the remaining vdevs.
However, I just thought of something. (Oh, I know, I'm not supposed to think!) Sun Microsystems added a ZFS pool feature called the RAID-Z/mirror hybrid allocator in pool version 29. What if we added a similar feature (maybe not pool version 29 compatible...) and used it for removing single disks?
Meaning in a RAID-Z1 pool, we would have to keep 2 copies of the original singleton blocks; in RAID-Z2, 3 copies; and in RAID-Z3, 4 copies. That would meet the redundancy requirements, and we could still remove single disks. It's not perfect, but we could consider it phase 2. Plus, we get the neat RAID-Z/mirror hybrid allocator feature out of the work. We simply abuse that feature for single-disk removals.
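The copy counts above are just "parity level plus one", so that a singleton block survives the same number of disk failures as the raidz it lives in. A quick shell check of that arithmetic (the mapping is my reading of the proposal, not an existing ZFS feature, though ZFS does already have a per-dataset analogue in `zfs set copies=N`):

```shell
# For a raidz pool with parity p, a singleton (non-raidz) block needs
# p + 1 total copies to tolerate the same p disk failures.
for parity in 1 2 3; do
  echo "raidz${parity}: $((parity + 1)) copies"
done
```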
Now that I have written that, I am wondering if I should open an OpenZFS feature request (in 2 parts) on the issue.
Concurrence?
Or am I completely off my rocker? (And I don't even own a rocker any more!)