Seeking Guide for Creating RAIDZ1 from Different Capacity Drives on FreeNAS 11.2-U7

amp88

Explorer
Joined
May 23, 2019
Messages
56
Hi all. Sorry for the somewhat clumsy title. I'm in the process of testing out FreeNAS version 11.2-U7 and looking to migrate to it for my homelab's storage server. I have 2 questions relating to the mixing of disks with different capacities in VDEVs. I haven't decided on which drives I'm going to use yet, but to provide a concrete example, let's say I have two 4TB disks and two 8TB disks. I want to make a RAIDZ1 VDEV containing all the disks using the full capacity of the 4TB disks and 4TB of the capacity of each of the 8TB drives, then I want to create a mirror VDEV containing the remaining capacity of the 8TB disks. I understand this isn't possible to do from the GUI of FreeNAS 11.2-U7.
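
To make that concrete, the rough on-disk layout I'm after would be (swap partitions omitted, sizes approximate):

Code:
4TB disk 1: [ 4TB data ]                -> RAIDZ1 member
4TB disk 2: [ 4TB data ]                -> RAIDZ1 member
8TB disk 1: [ 4TB data ][ 4TB data ]    -> RAIDZ1 member + mirror member
8TB disk 2: [ 4TB data ][ 4TB data ]    -> RAIDZ1 member + mirror member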

The first question is whether there exists a user guide on the creation of such a VDEV for version 11.2-U7. I had a look in this subforum and found a few threads on this question from 2013-2014, but I don't know if the procedure outlined in them still applies for 11.2-U7 or not. One example thread: Setting up a raidz1 with different sized disks - FreeNAS 9.1 (November 2013). The procedure there:

Code:
gpart create -s gpt ada0
gpart add -b 128 -i 1 -t freebsd-swap -s 2G ada0
gpart add -i 2 -t freebsd-zfs ada0
gpart create -s gpt ada1
gpart add -b 128 -i 1 -t freebsd-swap -s 2G ada1
gpart add -i 2 -t freebsd-zfs ada1
gpart create -s gpt ada2
gpart add -b 128 -i 1 -t freebsd-swap -s 2G ada2
gpart add -i 2 -t freebsd-zfs ada2
gpart create -s gpt ada3
gpart add -b 128 -i 1 -t freebsd-swap -s 2G ada3
gpart add -i 2 -t freebsd-zfs ada3

zpool create -f -m /mnt/tank tank raidz ada0p2 ada1p2 ada2p2 ada3p2

That is, use gpart from the command line to create a GPT partition scheme on each of the member disks, then create a swap partition and a data partition of the desired capacity on each one, and finally create a new storage pool from the p2 (data) partitions of the member disks. Is this still the recommended method for creating a VDEV from disks of different capacities in FreeNAS 11.2-U7, or is there an updated guide somewhere?
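
For my four-disk example, I'd guess the same procedure condenses to something like the sketch below (my own paraphrase, not from the linked thread; the disk names, pool name and partition sizes are placeholders, and the cap on the 8TB disks' data partitions would need to match whatever gpart show reports for the 4TB disks):

Code:
# 4TB disks: the data partition just takes the remainder after swap
for d in ada0 ada1; do
  gpart create -s gpt $d
  gpart add -b 128 -i 1 -t freebsd-swap -s 2G $d
  gpart add -i 2 -t freebsd-zfs $d
done
# 8TB disks: cap the data partition (~3724G here, approximate) to match the 4TB disks
for d in ada2 ada3; do
  gpart create -s gpt $d
  gpart add -b 128 -i 1 -t freebsd-swap -s 2G $d
  gpart add -i 2 -t freebsd-zfs -s 3724G $d
done
zpool create tank raidz ada0p2 ada1p2 ada2p2 ada3p2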

The second question is about future expansion of the hypothetical RAIDZ1 VDEV created above. If I were to replace the 4TB disks with 8TB ones (one at a time, naturally), would I be able to delete the mirror VDEV on the two original 8TB disks and resize the data partitions created on them above, expanding the RAIDZ1 VDEV (originally built from two 4TB and two 8TB drives) to use the full 8TB now available on each disk? My understanding is that replacing every member disk with a higher-capacity one normally allows a pool to expand, but I wonder whether manually creating the partitions interferes with this in any way.
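
In case it helps frame the question, the sequence I have in mind for that expansion (untested, pool and device names hypothetical, and assuming the mirror's data has already been migrated off) is roughly:

Code:
zpool destroy extra          # retire the mirror pool living on the p3 partitions
gpart delete -i 3 ada2       # remove the now-unused mirror partitions
gpart delete -i 3 ada3
gpart resize -i 2 ada2       # grow each data partition into the freed space
gpart resize -i 2 ada3
zpool online -e tank ada2p2  # ask ZFS to expand onto the larger partitions
zpool online -e tank ada3p2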

My system specs are in my signature. Thanks.
 
Joined
Jan 4, 2014
Messages
1,644
With all due respect, I wonder if you're overcomplicating this?

Option 1: Let's see. If this were even doable, the quick math: a RAID-Z1 of 4x4TB slices gives 12TB, and a 2x4TB mirror gives 4TB, for a total of 16TB usable.

Option 2: Now if you instead go for an 8TB mirror and a 4TB mirror, you'll get 12TB usable. Is the extra 4TB in Option 1 worth the complication? A bonus with Option 2 is that you can do and manage everything in the GUI.
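
Spelling out the arithmetic (RAID-Z1 gives up one member's worth of capacity to parity):

Code:
Option 1: RAID-Z1 of 4 x 4TB slices: (4 - 1) x 4TB = 12TB
          mirror of the 2 x 4TB leftovers:            4TB
                                              total: 16TB

Option 2: mirror of 2 x 8TB:  8TB
          mirror of 2 x 4TB:  4TB
                      total: 12TB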

Notwithstanding the complication (I'm not even sure it's possible), a failure of one of the 8TB disks has an impact on both pools in Option 1 and only one pool in Option 2. Seems to me, in your desire to use all the available disk space, your risk profile has taken a nosedive.

Mirrors also perform better than RAIDZ. Refer to this resource guide.
 

amp88

Explorer
Joined
May 23, 2019
Messages
56
Thanks for your reply. I think it's worth it for the extra capacity, yes (as long as the procedure above will work, or someone can point me to an updated version). I am aware of the increased impact a disk failure would have, but that's not a concern for me, and neither is the potential performance loss of a RAIDZ1 over a mirror for this particular application.
 

amp88

Explorer
Joined
May 23, 2019
Messages
56
I thought I'd just take a stab at this and see what happened. For testing I used three disks (two 1TB and one 4TB).

Code:
root@freenas[~]# camcontrol devlist
<QEMU QEMU DVD-ROM 0.10>           at scbus1 target 1 lun 0 (pass0,cd0)
<NETAPP DS424IOM6 0180>            at scbus2 target 14 lun 0 (pass1,ses0)
<ATA HITACHI HUA72201 NS01>        at scbus2 target 15 lun 0 (pass2,da0)
<ATA HITACHI HUA72201 NS01>        at scbus2 target 18 lun 0 (pass3,da1)
<ATA ST4000DM004-2CV1 0001>        at scbus2 target 56 lun 0 (pass4,da2)

Created a GPT partition scheme with a 2GB swap partition and a 500GB data partition on each disk:

Code:
root@freenas[~]# gpart create -s gpt da0
da0 created
root@freenas[~]# gpart add -b 128 -i 1 -t freebsd-swap -s 2G da0
da0p1 added
root@freenas[~]# gpart add -i 2 -s 500G -t freebsd-zfs da0
da0p2 added
root@freenas[~]# gpart create -s gpt da1
da1 created
root@freenas[~]# gpart add -b 128 -i 1 -t freebsd-swap -s 2G da1
da1p1 added
root@freenas[~]# gpart add -i 2 -s 500G -t freebsd-zfs da1
da1p2 added
root@freenas[~]# gpart create -s gpt da2
da2 created
root@freenas[~]# gpart add -b 128 -i 1 -t freebsd-swap -s 2G da2
da2p1 added
root@freenas[~]# gpart add -i 2 -s 500G -t freebsd-zfs da2
da2p2 added

Free space left on one of the 1TB disks (da1) and on the 4TB disk (da2):

Code:
root@freenas[~]# gpart show da1
=>        40  1953525088  da1  GPT  (932G)
          40          88       - free -  (44K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  1048576000    2  freebsd-zfs  (500G)
  1052770432   900754696       - free -  (430G)

root@freenas[~]# gpart show da2
=>        40  7814037088  da2  GPT  (3.6T)
          40          88       - free -  (44K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  1048576000    2  freebsd-zfs  (500G)
  1052770432  6761266696       - free -  (3.1T)

Created a RAIDZ1 from the 500GB data partitions on each of the three disks:

Code:
root@freenas[~]# zpool create tank raidz da0p2 da1p2 da2p2

root@freenas[~]# zpool list tank
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank  1.46T  1.70M  1.46T        -         -     0%     0%  1.00x  ONLINE  -

root@freenas[~]# zpool status tank
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            da0p2   ONLINE       0     0     0
            da1p2   ONLINE       0     0     0
            da2p2   ONLINE       0     0     0

errors: No known data errors

Exported tank:

Code:
root@freenas[~]# zpool export tank

The pool then showed up in the GUI under Pools -> Add and appeared to import fine.

Added a 400GB partition to da1 and da2, created a mirror from them, and exported it:

Code:
root@freenas[~]# gpart add -i 3 -s 400G -t freebsd-zfs da1
da1p3 added
root@freenas[~]# gpart add -i 3 -s 400G -t freebsd-zfs da2
da2p3 added
root@freenas[~]# zpool create dozer mirror da1p3 da2p3
root@freenas[~]# zpool export dozer

Imported dozer through the GUI.
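
One thing I'm unsure about: pools the GUI creates reference their members by gptid labels rather than daXpY device names, so disks renumbering between boots can't confuse them. If I redo this, the creation step would presumably look more like the following (the gptid values are placeholders; glabel status lists the real ones):

Code:
root@freenas[~]# glabel status
root@freenas[~]# zpool create dozer mirror gptid/<uuid-of-da1p3> gptid/<uuid-of-da2p3>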

I added datasets to each of the two pools and created shares. I copied across 100GB to each of the shares, and everything seems to be working properly.

I don't know if the above is the 'correct' procedure, or if I'm inadvertently setting myself up for disaster with some misconfiguration, so I'd appreciate input if anyone spots something I've done incorrectly here.
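
For now the sanity checks I plan to run are just the standard ones, nothing specific to this layout:

Code:
root@freenas[~]# zpool scrub tank
root@freenas[~]# zpool scrub dozer
root@freenas[~]# zpool status -v     # expect 0 read/write/cksum errors once the scrubs finish
root@freenas[~]# smartctl -H /dev/da1
root@freenas[~]# smartctl -H /dev/da2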
 