Creating 2 partitions of the correct size for 2 different pools?

fierarul

Dabbler
Joined
May 6, 2019
Messages
29
Hello,

I have a 6TB disk in one ZFS mirror and some 2TB disks in another ZFS mirror.

I'm about to receive an 8TB disk and I was thinking that I could, in fact, use that disk for both ZFS pools.

My plan is just to create 6TB + 2TB partitions on the 8TB disk and add each partition to each separate ZFS pool.

I previously manually created partitions based on https://www.ixsystems.com/community/threads/create-zfs-mirror-by-adding-a-drive.14880/#post-81348 (as I'm on FreeNAS 9.1.1).

So, I *think* what I need to do is use `gpart show` to figure out the size of the existing partitions and then `gpart add` to create new partitions on the 8TB disk. (I won't create any swap on the 8TB disk so that saves me 2GB).

Does this sound correct? Anything I should look out for? Any help with the actual gpart incantation is appreciated.
 

garm

Wizard
Joined
Aug 19, 2017
Messages
1,556
I have a 6TB disk in one ZFS mirror
This makes no sense; please post the output of zpool status.

If you partition a drive and add the partitions to two different mirror pools, you gain nothing but added redundancy. If instead you add each partition as a new top-level vdev (rather than as a mirror member), you will lose both pools when this drive dies. Managing this in the UI will most likely not work or will break things, and managing resilvering and other tasks will be complex and error-prone.

If you rely on instructions from the internet, you should probably not be doing this...
 

fierarul

Dabbler
Joined
May 6, 2019
Messages
29
I have 2 mirror pools, one of which has a 3TB and a 6TB disk (and I hope it will autogrow to 6TB when I replace the 3TB one), and another mirror which has 2x2TB disks.

I was thinking of using the 8TB disk for *both* of those mirrors.

I don't see how I would lose both pools since I have 2 other drives mirroring in each pool. This would be the 3rd "drive" in each.

Following the instructions here https://www.ixsystems.com/community/threads/create-zfs-mirror-by-adding-a-drive.14880/#post-81348 the steps to add another disk seem to be:


  1. gpart create -s gpt /dev/ada1
  2. gpart add -b 128 -t freebsd-swap -s 2G /dev/ada1
  3. gpart add -t freebsd-zfs /dev/ada1
  4. Run zpool status and note the gptid of the existing disk
  5. Run gpart list and find the gptid of the newly created ZFS partition. It is the rawuuid field. In this example it would be the rawuuid of ada1p2 (the freebsd-zfs partition; ada1p1 is the swap)
  6. zpool attach tank /dev/gptid/[gptid_of_the_existing_disk] /dev/gptid/[gptid_of_the_new_partition]


So I expect something similar to work in this case too, except I would use the gptid of each new partition for its corresponding pool.

My main question is how to create partitions of the proper size, since it seems tight (8TB = 6TB + 2TB, and I don't know how much room for maneuver I have, even if I skip the 2GB swap partition that FreeNAS created by default on the other disks).
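
Just to spell out what I mean by using each partition for each pool, I imagine the attach step would end up being something like this (the pool names, the device name ada4 and the gptids below are all placeholders until I actually partition the 8TB disk):

zpool status       # note the gptid of the existing member in each pool
gpart list ada4    # note the rawuuid of each new partition on the 8TB disk
zpool attach pool6T /dev/gptid/<existing_6TB_member> /dev/gptid/<rawuuid_of_6TB-sized_partition>
zpool attach pool2T /dev/gptid/<existing_2TB_member> /dev/gptid/<rawuuid_of_2TB-sized_partition>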

> If you rely on instructions on the internet you should probably not be doing this..

Perhaps... But I did manage to attach another disk with the above steps. This seems really similar.
 

garm

Wizard
Joined
Aug 19, 2017
Messages
1,556
I said that if you add it as a new mirror vdev member, you add redundancy at the cost of complexity. I didn't say that would lead to losing the pools. All you need to do is ensure the partitions are exactly equal to or bigger than the other members.
 

fierarul

Dabbler
Joined
May 6, 2019
Messages
29
> All you need to do is ensure the partitions are exactly equal to or bigger than the other members.

Yes, I assumed so. Now, how do I ensure that? Just manually use gpart show and gpart add and make sure I copy-paste the same numbers?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I guess what you're suggesting will work. (I don't think anybody here in the forums will agree it's a good idea though).

Also be aware that you may reduce the speed of both pools under concurrent access, since they would have this one disk in common.

If your only aim is to increase redundancy, you will get that, but with the complexity you're adding, the risk that you do something wrong when you need to recover from a failure is increased.

It's your choice, so all the best with what you're trying to do.
 

fierarul

Dabbler
Joined
May 6, 2019
Messages
29
Worst case scenario, both pools will run in a degraded state if the 3rd disk is missing. But I don't see how I am increasing risk.

Speed is not an issue, I'm only using the FreeNAS machine for backups and I'm the only user.

I hope that once I configure this I won't touch it, except perhaps to detach a disk.
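
If it ever comes to that, I assume the detach would just be the usual one-liner, once per pool (pool name and gptid are placeholders):

zpool detach pool6T /dev/gptid/<rawuuid_of_the_8TB_partition_in_that_pool>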

So I assume you would recommend zfs send / recv instead for this disk if I want all the data there too?
 

fierarul

Dabbler
Joined
May 6, 2019
Messages
29
So I see

# gpart show
=>        34  3907029101  ada2  GPT  (1.8T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  3902834696     2  freebsd-zfs  (1.8T)
  3907029128           7        - free -  (3.5k)

=>         34  11721045101  ada3  GPT  (5.5T)
           34           94        - free -  (47k)
          128      4194304     1  freebsd-swap  (2.0G)
      4194432  11716850696     2  freebsd-zfs  (5.5T)
  11721045128            7        - free -  (3.5k)

which means I have to create a partition of 3902834696 sectors (1.8TB) starting at -b 128, and then the remainder of the 8TB disk should hopefully be larger than the 11716850696 sectors (5.5TB) of the other mirror's partition.
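
So the incantation I have in mind would be roughly this (assuming the 8TB disk shows up as ada4 and I skip the swap partition; the attach commands would then be as in the steps above, once per pool, using the new partitions' gptids):

gpart create -s gpt ada4                            # new GPT table on the 8TB disk
gpart add -b 128 -t freebsd-zfs -s 3902834696 ada4  # ada4p1: exactly the size of ada2p2 (1.8T)
gpart add -t freebsd-zfs ada4                       # ada4p2: all remaining space, hopefully >= 11716850696 sectors (5.5T)
gpart show ada4                                     # verify ada4p2 is at least as large as ada3p2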
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
So I assume you would recommend zfs send / recv instead for this disk if I want all the data there too?
A replication job with localhost as the destination and the new disk as a new pool to be the target would be something folks around here might recommend.

An added benefit would be the possibility of keeping snapshots around longer, since your target disk has more space than your two pools.
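
Roughly what the manual equivalent looks like at the command line, assuming the 8TB disk becomes a new pool called backup and one of your source pools is called tank (both names are just examples; the GUI replication task does this for you):

zfs snapshot -r tank@migrate-1                         # recursive snapshot of the source pool
zfs send -R tank@migrate-1 | zfs recv -F backup/tank   # full replication into the backup pool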
 

fierarul

Dabbler
Joined
May 6, 2019
Messages
29
A replication job with localhost as the destination and the new disk as a new pool to be the target would be something folks around here might recommend.

Maybe I trust ZFS too much?! It seemed to me that what I want is precisely what ZFS is made for: add new drives to the pool, detach them, replace them; maybe they fail in time, but ZFS doesn't care since there is enough redundancy, and it lets me remove/replace them.

zfs send / recv seems less risky only if there are bugs in ZFS itself. By the same logic, I could create a mirror pool by just adding an rsync task between two drives...

An added benefit would be the possibility of keeping snapshots around longer, since your target disk has more space than your two pools.

This sounds intriguing. How can I incrementally zfs send/recv while also keeping more snapshots than the source pools?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
How can I incrementally zfs send/recv while also keeping more snapshots than the source pools?
Using a replication task, don't tick the "delete stale snapshots" option, then use a script like the ones from @fracai to prune the snapshots on the backup pool.

(a replication task uses zfs send/recv incrementally in the background)
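
For reference, the incremental step that the task runs under the hood is along these lines (snapshot and pool names are just examples):

zfs send -R -i tank@auto-20190501 tank@auto-20190508 | zfs recv -F backup/tank   # send only the changes between the two snapshots
zfs destroy -r backup/tank@auto-20190401                                         # prune an old snapshot on the backup pool only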
 