Using striped disks as RAID members


ChristopherN

I'm setting up a FreeNAS box as a backup device. As there will be no data there that isn't also online on other drives, I'm currently planning to make it a RAIDZ1 array.

The impetus for setting up this system was a MythTV box that was running low on space. It had three 3TB drives, and my main desktop machine had three more 3TB drives holding a backup of the media files. With the Myth box needing more space, and limited case space, I decided it was time to put 6TB drives in the Myth box. That left me with six working 3TB drives that no longer suited the purpose. I bought one more 6TB drive with the intention of setting up a RAID array like this:
logical device 1 : two 3TB drives striped into a single 6TB volume
logical device 2 : as above
logical device 3 : as above
logical device 4 : one 6TB drive

Then I would put these 4 logical devices into a single RAIDZ1 with a capacity of 18TB.
If a 3TB drive failed, I would remove it, leave its partner in the case (but unused), and put in a new 6TB drive. If another 3TB drive failed, I could build a new 6TB logical device from the two surviving partners. Over time, the 3TB drives would fail and be replaced by 6TB drives. In the meantime, it will be years before there's any danger of the Myth backup reaching anywhere near 18TB, so all the other computers in my home can use the unused NAS space for their own nightly backups.

So far, though, I have not figured out how to set up the RAID device as described above.
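The picture in my head, with device names I'm just guessing at since I'm new to FreeBSD, is three striped pairs plus the real 6TB drive, pulled together with something roughly like:

zpool create backup raidz1 /dev/stripe/pair1 /dev/stripe/pair2 /dev/stripe/pair3 /dev/ada6

where each /dev/stripe/pairN would be two of the 3TB disks striped into one 6TB device; it's that striping step that I haven't figured out.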

I understand that the simultaneous loss of any two drives spanning different logical devices (including loss of a drive during the rebuild) would result in loss of all data stored on the NAS, but I can accept that. MythTV files will still be stored on their box. Important personal files are already stored on 3 other physical drives, one of which is kept in a fireproof/waterproof safe.

So, how might I go about setting up my drives as described?

Thank you.
 

Stux

The alternate approach would be to partition the 6TB drive into two 3TB partitions...

Then you can make a RAIDZ2 with all the drives (the six 3TB disks plus the two 3TB partitions, eight members in all).

If the 6TB fails, you won't lose everything, because RAIDZ2 tolerates two failed members and the 6TB only counts as two. To lose everything you would have to lose three of the 3TB drives, or the 6TB plus one more drive.
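Capacity-wise it works out the same as your original plan, roughly:

8 members (6 whole disks + 2 partitions on the 6TB) x ~3TB each, in RAIDZ2
usable = (8 - 2) x 3TB = 18TB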

Doesn't help with your upgrade plans.

If you want to stripe disks to create a volume to use as a device for ZFS, then you want to use the low-level GEOM mirror capability, gmirror.
 

ChristopherN

If you want to stripe disks to create a volume to use as a device for ZFS, then you want to use the low-level GEOM mirror capability, gmirror.

OK, I've been exploring this. I'm coming from Linux, not FreeBSD, so I've got things to learn.

I used gstripe to stripe two 3TB disks into a single 6TB logical device. I used bare devices, rather than partitions. The command was:
gstripe label -v -s 65536 concat1 /dev/ada0 /dev/ada1
This created a volume that I could format and mount. According to the manual, using the 'label' command (as opposed to 'create') should make the configuration persist across reboots. I found that it does not. It appears that the metadata on the first drive, but not the second, is erased at some point during the shutdown/startup sequence: after a reboot, gstripe dump /dev/ada0 returns an error, while gstripe dump /dev/ada1 is unchanged. Naturally, the logical device is not reassembled after the reboot.
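If I'm reading the gstripe man page right, on plain FreeBSD the automatic assembly at boot also depends on the geom_stripe module being loaded, i.e. something like

kldload geom_stripe
echo 'geom_stripe_load="YES"' >> /boot/loader.conf

though I gather FreeNAS manages loader.conf its own way, and in any case a missing module wouldn't explain the metadata on ada0 actually being erased.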

Using the FreeNAS web GUI, I can stripe these two devices together. In that case it uses partitions rather than bare devices, and no on-disk metadata is written as with gstripe; the details of the logical device are stored in a file called /data/freenas-v1.db.

So, am I doing something wrong? Why is the gstripe metadata on my devices being cleared? And if I do all the assembly from the command line (assuming I can keep the metadata from being erased), does that mean the administrative GUI has little or no control over the resulting ZFS pool, because nothing about it is in the database?

Thank you.
 

Stux

Sorry about the partial bum steer; I of course meant stripe and gstripe, not mirror and gmirror.

If your purpose is to combine two disks so that you can use them as one big disk, which in turn is used as a component of a ZFS vdev, you're not going to be able to do that with ZFS/FreeNAS (okay, hypothetically, you could use a data file on each disk to assemble a vdev).

If you're doing it through the GUI, I suspect you've made a pool with both disks as stripes... that's great, but you can't stack vdevs in ZFS, so you can't then use that pool as a component of a RAIDZ vdev.

The alternate approach I suggested does work.
 

ChristopherN

If you're doing it through the GUI, I suspect you've made a pool with both disks as stripes... that's great, but you can't stack vdevs in ZFS, so you can't then use that pool as a component of a RAIDZ vdev.

The alternate approach I suggested does work.
OK, thank you. So, within the GUI, can I partition the 6TB drive in half and use its two partitions as two members of a RAIDZ2? What's the procedure for that? I tried partitioning the disk on the command line, but the GUI still seems to be interested only in bare devices, not individual partitions, and I don't see an option to partition the disk within the GUI.
 

Stux

OK, thank you. So, within the GUI, can I partition the 6TB drive in half and use its two partitions as two members of a RAIDZ2? What's the procedure for that? I tried partitioning the disk on the command line, but the GUI still seems to be interested only in bare devices, not individual partitions, and I don't see an option to partition the disk within the GUI.

You would need to partition the disks and create the pool from the CLI. But it can be done, as opposed to making a stacked ZFS vdev, which can't.

It's best to use gptids (use glabel status to see them) rather than raw device names when adding the disks/partitions.
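Roughly like this, with placeholder device names and sizes you'd adjust to your actual disks (and a pool name of your choosing):

# one full-size ZFS partition on each 3TB disk
gpart create -s gpt ada0
gpart add -t freebsd-zfs -a 1m ada0
# ... repeat for the other 3TB disks ...

# two roughly equal ZFS partitions on the 6TB disk
gpart create -s gpt ada6
gpart add -t freebsd-zfs -a 1m -s 2794g ada6   # first half, about 3TB
gpart add -t freebsd-zfs -a 1m ada6            # second half, the rest of the disk

# find the gptid/... names of the new partitions
glabel status

# build the RAIDZ2 from the gptids
zpool create backup raidz2 gptid/<id1> gptid/<id2> gptid/<id3> ... gptid/<id8>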
 

ChristopherN

OK, so I did a test run with the 6TB drive and 3 of the 3TB drives (the other 3 are still online providing backup for the MythTV box until I get my NAS working). I partitioned the 6TB drive in half, put single partitions on the 3TB drives, and created a ZFS pool with the zpool command.

That all worked: I created a pool that I could write to, and the pool survives a reboot, with files written there still visible after the reboot and re-import.

The pool that I created this way cannot be seen by the GUI. It seems, then, that the GUI doesn't do anything for me. All my operations with the NAS will have to be at the command line. The GUI can't schedule scrubs or snapshots of a volume it doesn't know exists.

Is that about right? Thank you.
 

Stux

You need to export the pool,

zpool export <pool>

then import it in the GUI.

I think that's right.
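If you want to sanity-check from the shell first (the pool name being whatever you used when you created it):

zpool export backup
zpool import     # with no arguments this just lists pools available for import

and then the import function in the GUI should see it.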
 

ChristopherN

You need to export the pool,

zpool export <pool>

then import it in the GUI.

I think that's right.

Thank you for your help, and your patience. I've test-run through the entire procedure I'm planning to use, so when I have some time I'll do the final hardware swap and set up the NAS. I'll write a short article about the system and the setup, and supply a link in this thread in case anybody is interested in how all the steps come together.
 