Ok, I tested the procedure out in a VM running FreeNAS.
Assuming we have an existing one-drive stripe volume called
tank and an extra drive of the same size, the steps are outlined here.
GUI = The web interface.
CLI = Command-line interface / shell
1. GUI: Create a new encrypted volume fake.
2. CLI: zpool status tank fake
This lists the GEOM IDs of the encrypted GELI devices used by the two volumes fake and tank.
3. CLI: zpool export fake
This detaches the volume fake so we can attach its encrypted disk to volume tank.
4. CLI: zpool attach -f tank <GEOM-ID-tank-dev> <GEOM-ID-fake-dev>
This converts the volume tank from a one-drive stripe to a two-drive mirror.
5. GUI: Detach volume fake.
The volume still exists in the GUI but can no longer find its disk, so we have to detach it manually.
6. GUI: Reboot.
After the reboot the encrypted device for the second disk is gone, so the volume tank is degraded.
7. GUI: In “Volume Status” for volume tank, replace the missing disk with the second disk.
FreeNAS will now create a new encrypted device on the second disk, resilver the mirror, and all is good.
If you had a passphrase and a recovery key for volume tank, remember to set the passphrase and add a new recovery key, as these were invalidated by FreeNAS when replacing the drive in step 7.
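As an aside, the GEOM IDs mentioned in step 2 can be picked out of the 'zpool status' output with a one-liner. This is just a minimal sketch: the here-document stands in for the live command (on a real system you would pipe 'zpool status fake' into awk instead), and the sample output is the one from this procedure.

```shell
# Extract the GELI device name(s) from 'zpool status' output.
# The here-document below is sample output; on a live FreeNAS box use:
#   zpool status fake | awk '/\.eli/ { print $1 }'
awk '/\.eli/ { print $1 }' <<'EOF'
  pool: fake
 state: ONLINE
config:

        NAME                                              STATE     READ WRITE CKSUM
        fake                                              ONLINE       0     0     0
          gptid/15b03546-4b24-11e8-a57c-001c424e6846.eli  ONLINE       0     0     0
EOF
# prints gptid/15b03546-4b24-11e8-a57c-001c424e6846.eli
```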
To explain the procedure a bit: steps 1 to 4 simply convert the volume tank from a stripe to a mirror. Were it not for FreeNAS, we would be done by now. However, FreeNAS still has the volume fake in its configuration even though it no longer exists, and reports an error for it. To get rid of that error we must detach the volume fake in the GUI. Unfortunately, this has the side effect of also removing the encrypted device for the second disk. This is why, after a reboot, the volume tank is degraded, missing one of its encrypted disks. It is still a mirror, though, and steps 6 and 7 bring us back to a state with two healthy devices in our mirror.
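The CLI half (steps 2 to 4) can be put together as a small script. This is only a sketch: the two gptid values are placeholders you must substitute with the real IDs from 'zpool status tank fake', and as a safety measure the script merely echoes the zpool commands unless you set RUN=zpool.

```shell
#!/bin/sh
# Sketch of the CLI part (steps 2-4). The two gptid values are
# placeholders -- substitute the IDs reported by 'zpool status tank fake'.
TANK_DEV='gptid/TANK-GEOM-ID.eli'   # existing disk in pool 'tank'
FAKE_DEV='gptid/FAKE-GEOM-ID.eli'   # disk prepared via the 'fake' pool

# Safety switch: by default only print the commands; set RUN=zpool to
# actually execute them on the FreeNAS box.
RUN="${RUN:-echo zpool}"

$RUN status tank fake                        # step 2: note the GELI devices
$RUN export fake                             # step 3: detach pool 'fake'
$RUN attach -f tank "$TANK_DEV" "$FAKE_DEV"  # step 4: stripe -> mirror
```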
---
For those who want all the gory details, here they are:
Initially, we have a volume, tank, which is a stripe set with one encrypted disk. In the CLI we can see:
Code:
root@freenas:~ # zpool status tank
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME                                              STATE     READ WRITE CKSUM
        tank                                              ONLINE       0     0     0
          gptid/66136357-4b22-11e8-a57c-001c424e6846.eli  ONLINE       0     0     0

errors: No known data errors
From the GUI we create a second volume, fake, which ensures that FreeNAS partitions the disk and sets up the encrypted GELI device. In the CLI we now have:
Code:
root@freenas:~ # zpool status fake
  pool: fake
 state: ONLINE
  scan: none requested
config:

        NAME                                              STATE     READ WRITE CKSUM
        fake                                              ONLINE       0     0     0
          gptid/15b03546-4b24-11e8-a57c-001c424e6846.eli  ONLINE       0     0     0

errors: No known data errors
From the CLI we detach (export) the volume (pool) fake. (If we do it from the GUI, the GELI device will disappear.)
Code:
root@freenas:~ # zpool export fake
root@freenas:~ # zpool status fake
cannot open 'fake': no such pool
Now, using the two GEOM IDs, we attach the second encrypted disk to the original volume (pool) tank, thus transforming it into a mirror.
Code:
root@freenas:~ # zpool attach -f tank gptid/66136357-4b22-11e8-a57c-001c424e6846.eli gptid/15b03546-4b24-11e8-a57c-001c424e6846.eli
The '-f' option is necessary to override the warning we would otherwise get:
Code:
/dev/gptid/15b03546-4b24-11e8-a57c-001c424e6846.eli is part of exported pool 'fake'
We now have a mirrored volume:
Code:
root@freenas:~ # zpool status tank
  pool: tank
 state: ONLINE
  scan: resilvered 1.27M in 0 days 00:00:00 with 0 errors on Sat Apr 28 13:39:57 2018
config:

        NAME                                                STATE     READ WRITE CKSUM
        tank                                                ONLINE       0     0     0
          mirror-0                                          ONLINE       0     0     0
            gptid/66136357-4b22-11e8-a57c-001c424e6846.eli  ONLINE       0     0     0
            gptid/15b03546-4b24-11e8-a57c-001c424e6846.eli  ONLINE       0     0     0

errors: No known data errors
If we go to the GUI we will see that volume tank is HEALTHY, and looking at the volume status we also see that it is now a mirror. However, the fake volume still exists there, with status LOCKED. We simply detach it in the GUI.
Now, for a reboot.
After the reboot, in the GUI we will find the volume tank in a DEGRADED state, with the new disk UNAVAIL. From the CLI it looks like this:
Code:
# zpool status tank
  pool: tank
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-2Q
  scan: resilvered 1.13M in 0 days 00:00:00 with 0 errors on Sat Apr 28 14:47:44 2018
config:

        NAME                                                STATE     READ WRITE CKSUM
        tank                                                DEGRADED     0     0     0
          mirror-0                                          DEGRADED     0     0     0
            gptid/2e5e391d-4b2d-11e8-b078-001c424e6846.eli  ONLINE       0     0     0
            3280147069974124757                             UNAVAIL      0     0     0  was /dev/gptid/3a75cb49-4b2d-11e8-b078-001c424e6846.eli

errors: No known data errors
The problem is that the GELI device from the second disk no longer exists.
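If you want to double-check from the CLI which device is the missing one before heading to the GUI, the UNAVAIL line can be filtered out of the status output. Again only a sketch, with the here-document standing in for the live 'zpool status tank' pipeline:

```shell
# Print the identifier of any UNAVAIL device in 'zpool status' output.
# The here-document is sample output; on a live system use:
#   zpool status tank | awk '$2 == "UNAVAIL" { print $1 }'
awk '$2 == "UNAVAIL" { print $1 }' <<'EOF'
        NAME                                                STATE     READ WRITE CKSUM
        tank                                                DEGRADED     0     0     0
          mirror-0                                          DEGRADED     0     0     0
            gptid/2e5e391d-4b2d-11e8-b078-001c424e6846.eli  ONLINE       0     0     0
            3280147069974124757                             UNAVAIL      0     0     0  was /dev/gptid/3a75cb49-4b2d-11e8-b078-001c424e6846.eli
EOF
# prints 3280147069974124757
```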
From the GUI we select “Volume Status” for volume tank, click the UNAVAIL device, and click “Replace” to replace it with our second disk.
Now, the volume is HEALTHY and remains so after a reboot.
Again, remember to create a new recovery key (if you had one), as the existing one was invalidated by replacing the drive.