Hence the recommendation to read up on how ZFS works :).
https://arstechnica.com/information...01-understanding-zfs-storage-and-performance/ is a good one.
You had a pool with one vdev, and you added a second vdev. These vdevs are "single disk" vdevs, which ZFS treats as a special case of a mirror vdev. In ZFS terms, you used the "zpool add" command. In FreeNAS terms, you likely used "Add vdevs" in the Pools -> Storage UI.
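For reference, from the CLI that kind of addition looks roughly like this - a sketch only, where "tank" and the disk name are placeholders, not your actual pool or device:
Code:
# Hypothetical: add disk da2 as a second single-disk (stripe) vdev to pool "tank".
# Data is then striped across both vdevs, and there is still no redundancy anywhere.
zpool add tank da2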
You now want to attach a second disk to each existing single-disk vdev, to make them into mirror vdevs. Doing this from the UI requires TrueNAS 12.0 Core, which will be available as an RC1 in one week.
In TrueNAS Core, the option to attach a disk to a single-disk or mirror vdev is named a little oddly: it's called "Extend". You get to it thusly:
Pools -> Storage
Use the gear icon for "Status"
Use the three-dot menu next to an individual disk and choose "Extend"
You get an "Extend vdev" window that prompts you for "New Disk". Select from the drop-down and choose "Extend"
TrueNAS Core will then "zpool attach" the new disk to your existing vdev, changing it from a single-disk vdev into a mirror vdev.
All these operations can also be done from CLI, but that'll require fiddling with gpart and getting disk gptids and so on. There's a decent chance to make a misstep. My recommendation is to use the UI in TrueNAS Core 12.0 for this.
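For the curious, the manual version goes roughly like this - a sketch with made-up device names, following the usual FreeNAS/TrueNAS layout of a small swap partition plus a ZFS partition:
Code:
# Partition the new disk the way the middleware would (da3 is a placeholder)
gpart create -s gpt da3
gpart add -a 4k -s 2g -t freebsd-swap da3
gpart add -a 4k -t freebsd-zfs da3
# Look up the gptid of the new ZFS partition (da3p2)
glabel status | grep da3p2
# Attach it to the gptid already in the vdev, turning that vdev into a mirror
zpool attach poolname gptid/<existing-partition-gptid> gptid/<new-partition-gptid>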
To illustrate this, here's a pool with two single-disk vdevs as shown by zpool status:
Code:
  pool: lonely
 state: ONLINE
config:

        NAME                       STATE     READ WRITE CKSUM
        lonely                     ONLINE       0     0     0
          /mnt/Gion/VMs/sparse1    ONLINE       0     0     0
          /mnt/Gion/VMs/sparse2    ONLINE       0     0     0
I'll now zpool attach a second disk to that first vdev, and run zpool status again:
Code:
  pool: lonely
 state: ONLINE
  scan: resilvered 72K in 00:00:01 with 0 errors on Thu Sep 10 07:36:44 2020
config:

        NAME                       STATE     READ WRITE CKSUM
        lonely                     ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            /mnt/Gion/VMs/sparse1  ONLINE       0     0     0
            /mnt/Gion/VMs/sparse3  ONLINE       0     0     0
          /mnt/Gion/VMs/sparse2    ONLINE       0     0     0
My first vdev is now called mirror-0! Success. And I still have a single-disk vdev as my second vdev. If I now zpool attach another disk to that second vdev as well, the pool looks like this:
Code:
  pool: lonely
 state: ONLINE
  scan: resilvered 171K in 00:00:00 with 0 errors on Thu Sep 10 07:38:05 2020
config:

        NAME                       STATE     READ WRITE CKSUM
        lonely                     ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            /mnt/Gion/VMs/sparse1  ONLINE       0     0     0
            /mnt/Gion/VMs/sparse3  ONLINE       0     0     0
          mirror-1                 ONLINE       0     0     0
            /mnt/Gion/VMs/sparse2  ONLINE       0     0     0
            /mnt/Gion/VMs/sparse4  ONLINE       0     0     0
Not so lonely any more, those vdevs. This pool now has redundancy. All redundancy exists at vdev level, never at pool level.
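For reference, the two attach operations behind that, using this example's file-backed "disks", were along these lines:
Code:
# Turn the first single-disk vdev into mirror-0
zpool attach lonely /mnt/Gion/VMs/sparse1 /mnt/Gion/VMs/sparse3
# Turn the second single-disk vdev into mirror-1
zpool attach lonely /mnt/Gion/VMs/sparse2 /mnt/Gion/VMs/sparse4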
I was using sparse files for this example. FreeNAS/TrueNAS partitions a drive and then uses the gptid of the second partition, which you can see with zpool status. This is vital to the way it functions: because of the partitions, a replacement drive can be a few sectors smaller and still work (not all 4TB drives are created precisely equal), and because of the gptid, the drive can be moved to a different controller or port and still be part of the pool without issue. Doing that correctly from the CLI is possible, and this forum has guides on how, but why struggle with that when it is now available in the UI?
A mirror can have any number of disks. If I assume that the first vdev is similar to the one you have with a failed disk, I can attach two disks and wait for resilver:
Code:
  pool: lonely
 state: ONLINE
  scan: resilvered 180K in 00:00:00 with 0 errors on Thu Sep 10 07:47:16 2020
config:

        NAME                       STATE     READ WRITE CKSUM
        lonely                     ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            /mnt/Gion/VMs/sparse1  ONLINE       0     0     0
            /mnt/Gion/VMs/sparse3  ONLINE       0     0     0
            /mnt/Gion/VMs/sparse5  ONLINE       0     0     0
          mirror-1                 ONLINE       0     0     0
            /mnt/Gion/VMs/sparse2  ONLINE       0     0     0
            /mnt/Gion/VMs/sparse4  ONLINE       0     0     0
I'll pretend that sparse1 is the failed disk, and after resilver completes, I'll zpool detach it - in TrueNAS Core that'll be a drop-down next to the disk to remove it, though I don't have a second disk handy right now to see what the UI calls it. I'll have that in a week or so. Here's the pool with the "defective" sparse1 removed:
Code:
  pool: lonely
 state: ONLINE
  scan: resilvered 180K in 00:00:00 with 0 errors on Thu Sep 10 07:47:16 2020
config:

        NAME                       STATE     READ WRITE CKSUM
        lonely                     ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            /mnt/Gion/VMs/sparse3  ONLINE       0     0     0
            /mnt/Gion/VMs/sparse5  ONLINE       0     0     0
          mirror-1                 ONLINE       0     0     0
            /mnt/Gion/VMs/sparse2  ONLINE       0     0     0
            /mnt/Gion/VMs/sparse4  ONLINE       0     0     0
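The CLI equivalent of that attach-then-detach sequence, with this example's paths, would be roughly:
Code:
# Attach a replacement "disk" to mirror-0 (any current member works as the anchor device)
zpool attach lonely /mnt/Gion/VMs/sparse3 /mnt/Gion/VMs/sparse5
# After the resilver finishes, drop the "failed" disk out of the mirror
zpool detach lonely /mnt/Gion/VMs/sparse1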
Playing around with sparse files, which you can create with "truncate -s 1T <filename>", can be helpful for becoming familiar with these ZFS concepts. If you do, I recommend using SSH to connect to the CLI, not the built-in web CLI.
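If you want to set up the same playground pool as in this post, the starting point is roughly this (paths as in the example above):
Code:
# Create two 1 TB sparse files to stand in for disks
truncate -s 1T /mnt/Gion/VMs/sparse1 /mnt/Gion/VMs/sparse2
# Build a pool of two single-disk vdevs from them - no redundancy, like the starting point above
zpool create lonely /mnt/Gion/VMs/sparse1 /mnt/Gion/VMs/sparse2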
All changes to a pool's data-carrying vdevs are permanent, whether done from CLI or UI. Keep that in mind when you work on your real pool. You want to make sure you know the steps you are taking quite well. If, for example, you were to "add vdev" to the existing pool instead of using the "extend" command on a single vdev (disk), then you'd end up with three single-disk vdevs, still no redundancy, and no closer to a solution.