This tutorial will show you how to use the GEOM concat class (gconcat) to concatenate smaller disks so that they appear as one larger disk, which can then participate in a raidz or ZFS mirror with other disks. Steps marked (Encryption) are optional and only necessary if you want an encrypted pool.
For this example we have two 2 TB disks and two 1 TB disks.

If we just create a raidz of all four disks, each disk is treated as the size of the smallest one (1 TB), giving us only 3 TB of usable space. If we first concatenate the two 1 TB disks into a single 2 TB device, a raidz of three 2 TB members gives us 4 TB, as shown below.
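To make the tradeoff concrete (raidz1 reserves one member's worth of space for parity):

Code:
Naive raidz1:   4 members x 1 TB (smallest disk) = 4 TB raw - 1 TB parity = 3 TB usable
With gconcat:   3 members x 2 TB                 = 6 TB raw - 2 TB parity = 4 TB usable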
1. We will start by creating a zpool with all disks in the web GUI. In my example it will be created as a pool with two mirrors. Your pool may be different; it doesn't really matter, because this is not the pool we will keep. We create it here so that FreeNAS will partition the disks with swap space and add an entry for our zpool into the database that contains all of our physical disks. When we're done we will have this same zpool name, but it will have different vdevs. And that's okay, because the FreeNAS database contains only a mapping of physical disks to zpools; the vdev members are all loaded dynamically from the ZFS metadata on the disks.
2. (Encryption) Optionally encrypt the disks at this stage; see the sketch below if you want to do it from the CLI rather than the GUI.
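A minimal geli sketch for the CLI route follows. The key file path and name are assumptions for illustration (FreeNAS generates and stores its own keys under /data/geli/), and you would repeat the init/attach pair for each data partition:

Code:
# dd if=/dev/random of=/data/geli/tutorial.key bs=64 count=1    # hypothetical key file
# geli init -s 4096 -P -K /data/geli/tutorial.key /dev/da3p2    # key only, no passphrase
# geli attach -p -k /data/geli/tutorial.key /dev/da3p2          # creates /dev/da3p2.eli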
3. Add a pre-init command in the web GUI so that the geom_concat module is loaded, and your labeled concat device assembled, on every boot:
Code:
gconcat load
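Running gconcat load loads the geom_concat kernel module; the module then tastes each provider and automatically assembles any device it finds labeled with gconcat metadata, which is why we use the label option in step 6 below.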
4. In the CLI, destroy the zpool you created in the web GUI:
Code:
zpool destroy tank
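It's worth confirming nothing is still imported before continuing; with no pools present, zpool list reports that directly:

Code:
# zpool list
no pools available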
5. Determine the UUIDs of your partitions so you know which ones need to be combined. I know from creating the pool in the web GUI that my 1 TB disks are da3 and da4, so I'll find the UUIDs for the large data partitions on those disks:
Code:
# gpart list da3 | grep 'Name\|Mediasize\|rawuuid'
1. Name: da3p1
   Mediasize: 2147483648 (2.0G)
   rawuuid: cb966fbf-6b2d-11e3-9a2c-000c296ed231
2. Name: da3p2
   Mediasize: 1071594257920 (998G)
   rawuuid: cb9aef83-6b2d-11e3-9a2c-000c296ed231
1. Name: da3
   Mediasize: 1073741824000 (1T)
# gpart list da4 | grep 'Name\|Mediasize\|rawuuid'
1. Name: da4p1
   Mediasize: 2147483648 (2.0G)
   rawuuid: cbb435c7-6b2d-11e3-9a2c-000c296ed231
2. Name: da4p2
   Mediasize: 1071594257920 (998G)
   rawuuid: cbb8b608-6b2d-11e3-9a2c-000c296ed231
1. Name: da4
   Mediasize: 1073741824000 (1T)
Now I know the two 1 TB partitions I need to concat are cb9aef83-6b2d-11e3-9a2c-000c296ed231 and cbb8b608-6b2d-11e3-9a2c-000c296ed231.
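If you prefer not to eyeball the output, you can pull just the rawuuid fields with a one-liner (my own convenience, not something FreeNAS requires; note it prints the swap partitions' UUIDs too, so match them against the sizes above):

Code:
# gpart list da3 da4 | awk '/rawuuid/ {print $2}'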
6. Concatenate the disks with gconcat. Use the label option rather than the create option: label writes metadata to the end of each volume so that the concat device can be assembled automatically whenever its members are detected. If you are using encrypted disks, concatenate the .eli volumes as I do in this example. If you are not, the commands are the same, just without the '.eli'.
Code:
# gconcat label concat1 /dev/gptid/cb9aef83-6b2d-11e3-9a2c-000c296ed231.eli /dev/gptid/cbb8b608-6b2d-11e3-9a2c-000c296ed231.eli
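You can verify that the device was assembled; gconcat status prints the standard GEOM status listing, something like:

Code:
# gconcat status
          Name  Status  Components
concat/concat1      UP  gptid/cb9aef83-6b2d-11e3-9a2c-000c296ed231.eli
                        gptid/cbb8b608-6b2d-11e3-9a2c-000c296ed231.eli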
Now you might think that you could just use /dev/concat/concat1 directly as a member of your zpool, but you shouldn't. The reason is that gconcat stores its metadata in the last sector of the device, while ZFS stores its metadata at the beginning of the device. This means ZFS will see its label at the start of the first underlying disk and treat that disk, rather than the concat device itself, as the member of your zpool. To get around this, we put a partition table on the concat device, create a partition, and then add that partition as the member of the zpool.
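If you're curious where the gconcat metadata actually lives, you can inspect the on-disk record from any member; gconcat dump prints the metadata block it finds at the end of the provider:

Code:
# gconcat dump /dev/gptid/cb9aef83-6b2d-11e3-9a2c-000c296ed231.eli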
7. Partition the concat device:
Code:
# gpart create -s gpt /dev/concat/concat1
concat/concat1 created
# gpart add -t freebsd-zfs concat/concat1
concat/concat1p1 added
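As a sanity check before moving on, gpart show should now report a single freebsd-zfs partition spanning nearly all of the concat device:

Code:
# gpart show concat/concat1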
8. Determine the UUIDs of all of your partitions:
Code:
# gpart list | grep 'Name\|Mediasize\|rawuuid'
1. Name: da0s1
   Mediasize: 988291584 (942M)
2. Name: da0s2
   Mediasize: 988291584 (942M)
3. Name: da0s3
   Mediasize: 1548288 (1.5M)
4. Name: da0s4
   Mediasize: 21159936 (20M)
1. Name: da0
   Mediasize: 4294967296 (4.0G)
1. Name: da0s1a
   Mediasize: 988283392 (942M)
1. Name: da0s1
   Mediasize: 988291584 (942M)
1. Name: da1p1
   Mediasize: 2147483648 (2.0G)
   rawuuid: cb449ef2-6b2d-11e3-9a2c-000c296ed231
2. Name: da1p2
   Mediasize: 2145336081920 (2T)
   rawuuid: cb492a04-6b2d-11e3-9a2c-000c296ed231
1. Name: da1
   Mediasize: 2147483648000 (2T)
1. Name: da2p1
   Mediasize: 2147483648 (2.0G)
   rawuuid: cb610284-6b2d-11e3-9a2c-000c296ed231
2. Name: da2p2
   Mediasize: 2145336081920 (2T)
   rawuuid: cb654af1-6b2d-11e3-9a2c-000c296ed231
1. Name: da2
   Mediasize: 2147483648000 (2T)
1. Name: da3p1
   Mediasize: 2147483648 (2.0G)
   rawuuid: cb966fbf-6b2d-11e3-9a2c-000c296ed231
2. Name: da3p2
   Mediasize: 1071594257920 (998G)
   rawuuid: cb9aef83-6b2d-11e3-9a2c-000c296ed231
1. Name: da3
   Mediasize: 1073741824000 (1T)
1. Name: da4p1
   Mediasize: 2147483648 (2.0G)
   rawuuid: cbb435c7-6b2d-11e3-9a2c-000c296ed231
2. Name: da4p2
   Mediasize: 1071594257920 (998G)
   rawuuid: cbb8b608-6b2d-11e3-9a2c-000c296ed231
1. Name: da4
   Mediasize: 1073741824000 (1T)
1. Name: concat/concat1p1
   Mediasize: 2143188455424 (2T)
   rawuuid: b05227bf-6b2f-11e3-9a2c-000c296ed231
1. Name: concat/concat1
   Mediasize: 2143188500480 (2T)
In my case the data partitions on my two larger disks are cb492a04-6b2d-11e3-9a2c-000c296ed231 and cb654af1-6b2d-11e3-9a2c-000c296ed231, and the UUID for the concat partition is b05227bf-6b2f-11e3-9a2c-000c296ed231. I'm going to create my zpool using UUIDs because that's what FreeNAS does when it creates zpools in the GUI, and I want as much consistency as possible between the ZFS metadata and what FreeNAS expects.
9. Create your zpool. In my case, because I'm using encryption, I'm going to add .eli to the partitions on the actual disks, but not to the concat partition, since encryption for that partition is provided at the lower layer, on the disk partitions that are concatenated together. You will also likely need to use -f, because the devices won't be exactly the same size; they are probably off by a few kilobytes.
Code:
# zpool create -m /mnt/tank -f tank raidz gptid/cb492a04-6b2d-11e3-9a2c-000c296ed231.eli gptid/cb654af1-6b2d-11e3-9a2c-000c296ed231.eli gptid/b05227bf-6b2f-11e3-9a2c-000c296ed231
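If everything worked, zpool status should show the pool online with the two raw .eli partitions and the concat partition as raidz members, along these lines:

Code:
# zpool status tank
  pool: tank
 state: ONLINE
config:

        NAME                                                STATE     READ WRITE CKSUM
        tank                                                ONLINE       0     0     0
          raidz1-0                                          ONLINE       0     0     0
            gptid/cb492a04-6b2d-11e3-9a2c-000c296ed231.eli  ONLINE       0     0     0
            gptid/cb654af1-6b2d-11e3-9a2c-000c296ed231.eli  ONLINE       0     0     0
            gptid/b05227bf-6b2f-11e3-9a2c-000c296ed231      ONLINE       0     0     0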
That's it. You should now be able to check the status of your volume in the web GUI, see members concat/concat1p1, da2p2, and da1p2, and find it listed as healthy.
Reboot to test that your zpool is mounted automatically and shows up healthy, possibly after entering a passphrase if you're using encryption.
In my testing, this works most of the time but sometimes fails: the web GUI shows "unable to get volume information". Running
Code:
zpool import tank
fixes the issue. I suspect there is a race condition somewhere, where zpool import tries to run before the concat device shows up as available. Personally it works well enough for me, since I very rarely reboot the server.

Beware that if you ever need to detach and auto-import this zpool in the web GUI, your physical disks will no longer be associated with the volume in the database. To fix this you will need to manually edit the FreeNAS database, as sketched below.
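I won't walk through the exact rows to change, since the schema varies between FreeNAS versions, but the configuration database lives at /data/freenas-v1.db and can be explored with sqlite3 (the table name below is from memory; use .tables to find the right ones on your build, and copy the database somewhere safe before editing anything):

Code:
# sqlite3 /data/freenas-v1.db
sqlite> .tables
sqlite> SELECT * FROM storage_volume;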