Mirror to RAIDZ2?

mogget

Cadet
Joined
Feb 11, 2023
Messages
5
Hello, and apologies if this is a repost. I did a few searches and didn't turn anything up, so hopefully it's not.

Background:
I can order two 12T drives now and have another four on back order with ETA TBD. I only need about 10T of space at the moment, and I'm not concerned about going over 12T in the next 3-4 months.

Question:
Is it possible to build a 6-drive RAIDZ2 with only 5 drives initially, using TrueNAS SCALE?

Theory:
  1. Start with the 2 drives and set them up as a mirror.
  2. When the 4 new drives arrive:
    1. Shut down the mirror.
    2. Take 5 drives (the 4 new ones plus one of the previous mirror drives) and build a 6-drive RAIDZ2 vdev.
    3. Use the remaining mirror drive to restore the data onto the new RAIDZ2.
    4. Once restored, add the old mirror drive as the 6th drive in the RAIDZ2 and resilver.
  3. Assume that a cloud backup of the mirror is available in case the single remaining drive fails during the restoration process.
My initial thought was that I could use a smaller-capacity 6th drive to initiate the RAIDZ2, but then the initial pool would be smaller, and I think I can only free up a 4T unit, which is not large enough. Pulling a backup down from the cloud is possible but painful when all the data is local.

Thoughts?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
It's not great: it's error-prone and vulnerable to hardware failures. But it's technically possible.
 

mogget

Cadet
Joined
Feb 11, 2023
Messages
5
Thanks, Ericloewe, for confirming that this is possible. Everything will be backed up in the cloud (having to pull it back down is just painful), so if there is a drive failure during the build I can recover.

Two followups:
  1. Error-prone - as in setting it up, or long term? I can manage fiddling around with it to get it going.
  2. How do you trick ZFS/TrueNAS into thinking there are 6 drives when I will only have 5 installed? I don't see an option, unless I've missed something.
Thanks again!
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Error-prone - as in setting it up, or long term? I can manage fiddling around with it to get it going.
The process itself. The end result should be rather normal and boring.

How do you trick ZFS/TrueNAS into thinking there are 6 drives when I will only have 5 installed? I don't see an option, unless I've missed something.
This is a big catch. You'd need to create a sparse volume to use when creating the pool, then remove it (and operate with the degraded pool) before you actually place a meaningful amount of data on said pool.
This, as you can see, is fiddly and not recommended.
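Sketched out at the CLI, it would be something along these lines (pool, file, and device names are just placeholders, and this is exactly the sort of fiddling I'd rather you didn't do):

Code:
# Sparse placeholder the same size as the real 12T disks; being sparse, it uses no real space
truncate -s 12T /root/fake-disk.img

# Build the 6-wide RAIDZ2 from the 5 real disks plus the placeholder
# (-f because ZFS objects to mixing files and real devices in one vdev)
zpool create -f tank raidz2 sdc sdd sde sdf sdg /root/fake-disk.img

# Immediately take the placeholder offline and delete it; the pool is now degraded but usable
zpool offline tank /root/fake-disk.img
rm /root/fake-disk.img

# Much later, once the data is copied over and the old mirror is destroyed,
# replace the missing member with the freed-up real disk and let it resilver
zpool replace tank /root/fake-disk.img sdh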
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
How do you trick ZFS/TrueNAS into thinking there are 6 drives when I will only have 5 installed?
 

mogget

Cadet
Joined
Feb 11, 2023
Messages
5
danb35 and Ericloewe, thanks! I also found this thread. I'm reading through them and doing some testing on a mini-lab I created. What I was not aware of was the sparse-file approach, so I'm reading through the manuals and these two threads first, then I'll test/play in the lab.
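For anyone curious, the mini-lab is nothing fancy - just sparse files standing in for disks so I can rehearse the steps without touching real hardware (paths and sizes are arbitrary):

Code:
# Six small sparse files to stand in for the real 12T drives
mkdir -p /tmp/lab
for i in 1 2 3 4 5 6; do truncate -s 2G /tmp/lab/disk$i.img; done

# Throwaway pool to practice the mirror -> RAIDZ2 moves on
zpool create labpool raidz2 /tmp/lab/disk1.img /tmp/lab/disk2.img /tmp/lab/disk3.img /tmp/lab/disk4.img /tmp/lab/disk5.img /tmp/lab/disk6.img

# Tear it down when finished
zpool destroy labpool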
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
It's a valid, but somewhat risky, concept. But, if I might be forgiven for saying so, the method I present in my resource is much better than that in the thread you linked to, and avoids most of Cyberjock's hyperventilating in the other thread (other than messing with a pool at the CLI at all). Note, though, that my resource is for FreeNAS, which became TrueNAS CORE, and thus uses some FreeBSD-isms. I don't doubt they could be translated to Linux, but I'm not familiar with the relevant commands off the top of my head.
 

mogget

Cadet
Joined
Feb 11, 2023
Messages
5
Hello danb35. So I've built it twice and so far it looks like it works well. I'm going to clean up my script. When I'm done, would you be interested in reviewing it before I post it, since I've taken your script and ported it to Linux/SCALE?
 

Cybernetika

Cadet
Joined
Mar 23, 2023
Messages
2
Hello danb35. So I've built it twice and so far it looks like it works well. I'm going to clean up my script. When I'm done, would you be interested in reviewing it before I post it, since I've taken your script and ported it to Linux/SCALE?
Hey. I'm in a similar situation and could really use your ported scripts.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Maybe I'm missing which "scripts" are referred to here, but that process works almost exactly the same on both CORE and SCALE.

The only difference would be manually formatting/partitioning a disk and listing out partitions.

For glabel status, the SCALE equivalent is lsblk --fs --json | jq, which lets you see the UUIDs of partitions.
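If you want just the partition names and UUIDs rather than eyeballing the whole JSON tree, a jq filter along these lines should do it (off the top of my head, so treat it as a sketch):

Code:
lsblk --fs --json /dev/sdc | jq '.blockdevices[] | {disk: .name, partitions: [.children[]? | {name: .name, uuid: .uuid}]}'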

For gpart add/create, etc., the SCALE equivalent is a little more complex... here are the steps:

fdisk /dev/sdx (whatever disk you're trying to format)

Select g to create a new GPT partition table.

Select n to create a new partition, accept the suggested partition number and start block and set the size to +2G (save yourself the calculation)

Select t, then select type 19, which is Linux Swap (0657FD6D-A4AB-43C4-84E5-0933C84B4F4F)

Select n again to create the second partition, accept the suggested partition number, start block and accept the proposed end block (the rest of the disk)

Select t, then select partition 2 and type 67, which is Solaris /usr & Apple ZFS (6A898CC3-1DD2-11B2-99A6-080020736631)

Select p just to have a last look at what you did; you should see something like this:

Code:
Disk /dev/sdc: XX GiB, XXXXXXX bytes, XXXXXXX sectors
Disk model: Storage Media  
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 41187BAB-EA86-ED45-A9DA-XXXXXXXXXXXXX

Device       Start      End  Sectors  Size Type
/dev/sdc1     2048  4196351  4194304    2G Linux swap
/dev/sdc2  4196352 XXXXXXXX XXXXXXXX XXXXG Solaris /usr & Apple ZFS


If you're happy, write it to disk:

Select w (write and exit)
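
If you'd rather script it than step through fdisk interactively, sgdisk (from the gdisk package, which I believe is available on SCALE) should produce the same layout - with the same warning that this destroys whatever is on the disk:

Code:
# DESTRUCTIVE: wipes any existing partition table on /dev/sdx
sgdisk --zap-all /dev/sdx

# Partition 1: 2 GiB Linux swap (typecode 8200 = 0657FD6D-A4AB-43C4-84E5-0933C84B4F4F)
sgdisk -n 1:0:+2G -t 1:8200 /dev/sdx

# Partition 2: rest of the disk, Solaris /usr & Apple ZFS
# (typecode BF01 = 6A898CC3-1DD2-11B2-99A6-080020736631)
sgdisk -n 2:0:0 -t 2:BF01 /dev/sdx

# Print the table to check it matches the fdisk example above
sgdisk -p /dev/sdx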
 