Adding a new mirror of different size disks, new or existing zvol?


markw78 · Dabbler · Joined: Oct 17, 2012 · Messages: 22
I am adding 2 new 3TB disks to my system.

Given this current layout, what would people recommend?

Data = 2x 500 GB Disks ("RAID1")
vol1 = 4x 1TB disks ("RAID10")

root@nas:~ # zpool status -v
  pool: data
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        data                                            ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/feadb553-61d8-11e7-b2f1-000c29ac4f63  ONLINE       0     0     0
            gptid/fff86fde-61d8-11e7-b2f1-000c29ac4f63  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          da0p2       ONLINE       0     0     0

errors: No known data errors

  pool: vol1
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        vol1                                            ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/b8952a27-5e0e-11e7-8f85-000c29ac4f63  ONLINE       0     0     0
            gptid/b9a99d08-5e0e-11e7-8f85-000c29ac4f63  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/bab21417-5e0e-11e7-8f85-000c29ac4f63  ONLINE       0     0     0
            gptid/bbca52a6-5e0e-11e7-8f85-000c29ac4f63  ONLINE       0     0     0
        logs
          gptid/bc90cde1-5e0e-11e7-8f85-000c29ac4f63    ONLINE       0     0     0
        cache
          gptid/bc3bf5b1-5e0e-11e7-8f85-000c29ac4f63    ONLINE       0     0     0

errors: No known data errors



1. Add the new 3TB disks as a new mirror vdev of vol1, making it a three-vdev stripe? (Sketched below.)
2. Add the new 3TB disks to "data" and turn that "RAID1" into a "RAID10"? (Note the existing disks in this mirror are only 500GB.)
3. Add the new 3TB disks as a new standalone mirrored zpool, "RAID1" style? I believe this is the worst option...?
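
For reference, option 1 at the command line would be something like this sketch (da8/da9 are made-up device names; the FreeNAS GUI's Volume Manager does the equivalent when extending a volume, and handles the partitioning and gptids itself):

# Option 1 sketch: add the new pair as a third mirror vdev of vol1
# (da8/da9 are hypothetical device nodes; FreeNAS normally partitions
# the disks and references them by gptid instead)
root@nas:~ # zpool add vol1 mirror /dev/da8 /dev/da9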

Does it even matter? I was leaning toward adding them as a third mirror vdev of "vol1". The upgrade path there eventually means buying four more 3TB disks to bring the 1TB mirrors up to match. The 500GB disks are my oldest; if I instead convert that single "RAID1" into a "RAID10", it gives me an upgrade path of swapping the two 500GB disks out for new 3TB disks...

I'm sure I'm overthinking this; there are just so many options that I can't decide which might be best, if any...

I guess one benefit of adding the new disks to vol1 is that it's the volume where my ZIL and L2ARC cache drives are... maybe that's the answer: grow vol1 into a three-vdev "RAID10" to take advantage of the log/cache disks in it?

Should I break down the 500GB "data" volume and add those disks to "vol1" as well? I'm a bit confused about how the disks are used in this type of setup when they are different sizes. It's not like a real RAID10, where the data is striped across all disks; in a traditional RAID10 I would be limited to 500GB on all six disks if I did this, but I don't think that's the case with ZFS... What happens then if I lost one 500GB and one 1TB disk from this pool? Would the data survive or not?
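
If it helps anyone answer: as I understand it, zpool list -v breaks capacity and allocation out per vdev, so a mixed-size pool would show each mirror contributing its own size, something like this (numbers invented purely for illustration, extra columns trimmed):

# Per-vdev capacity; each mirror contributes its own size, so
# mixed-size vdevs coexist fine (illustrative numbers only)
root@nas:~ # zpool list -v vol1
NAME         SIZE  ALLOC   FREE
vol1        1.81T   900G   956G
  mirror     928G   452G   476G
  mirror     928G   448G   480G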

edit - It's probably worth noting that the new 3TB disks are 7200 RPM and the four existing 1TB disks are all only 5400 RPM, though I'm not sure I care if the 3TB disks effectively run at 5400 RPM speeds once they're in the same pool. Also, the reason I went with mirrors over RAIDZ2 is resilver time and this blog post: http://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/

Thanks!
Mark
 

gpsguy · Active Member · Joined: Jan 22, 2012 · Messages: 4,472
The easy answer would be to extend vol1 with another mirrored vdev. And if you want to destroy the "data" pool and add its disks to vol1 as another mirror, you could do that too.

"What happens then if I lost 1 500G, and 1 1TB disk from this pool? Would the data survive or not?"

Yes, your pool would survive. As long as you don't lose two disks in the same mirrored vdev, you'd be fine.
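
To illustrate (hypothetical, heavily trimmed status output): if one disk dropped out of mirror-0 and another out of mirror-1, both vdevs would show DEGRADED, but each still has a healthy side, so the pool stays up:

root@nas:~ # zpool status vol1
  pool: vol1
 state: DEGRADED
        vol1           DEGRADED
          mirror-0     DEGRADED
            gptid/...  ONLINE
            gptid/...  UNAVAIL
          mirror-1     DEGRADED
            gptid/...  ONLINE
            gptid/...  UNAVAIL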

What's your use case? Please provide detailed hardware information and FreeNAS version. Depending on this information, your cache and slog may not be necessary.
 

markw78 · Dabbler · Joined: Oct 17, 2012 · Messages: 22
A mix of NFS shares mounted on ESXi for VMs (the main use case for the ZIL) and CIFS shares for random storage (probably not making much use of the ZIL).
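
(ESXi issues sync writes over NFS, which is what exercises the SLOG. Checking how a dataset handles them looks like the sketch below; "vol1/vms" is just a placeholder for my actual dataset name.)

# sync=standard honors client sync requests (so ESXi/NFS traffic hits
# the SLOG); sync=disabled would bypass the SLOG entirely
root@nas:~ # zfs get sync vol1/vms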

The idea of putting three different sizes of drives in the same pool just feels so weird and foreign to me. Obviously the data won't be evenly spread across all the disks, so the idea of being able to lose one disk of a different size from each of two different vdevs just seems weird, and I can't quite wrap my head around how that could work...

As for the setup, at a high level... Supermicro board, ECC RAM, Xeon CPU, 32GB...

FreeNAS 11. FreeNAS runs as a VM with DirectPath I/O (passthrough) configured for my LSI card in IT mode and all the onboard ports. The FreeNAS boot volume is on VMFS, which resides on a pair of SSDs in RAID1 via a PERC 6 card with write-back cache (and battery). I have boot delays set on my VMs to make it work well; the only real issue I have now is getting my domain controller to boot before FreeNAS so it can bind to AD for SMB to work right without having to re-enable it later. Obviously when the FreeNAS VM shuts down, ESXi hangs for a few minutes, but otherwise no major issues so far; I'm pretty happy with the setup.

According to the graphs, my ZIL and cache are definitely getting hits. The system runs fine; I'm just curious about the best way to add these disks.
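
(The CLI shows the same thing the graphs do; zpool iostat breaks out the log and cache devices, and the ARC sysctls count L2ARC hits:)

# Per-device I/O, including the log and cache devices, sampled every 5s
root@nas:~ # zpool iostat -v vol1 5
# L2ARC hit/miss counters
root@nas:~ # sysctl kstat.zfs.misc.arcstats.l2_hits kstat.zfs.misc.arcstats.l2_misses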
 

Stux · MVP · Joined: Jun 2, 2016 · Messages: 4,419
I'd add all drives to the same pool. Both the 3TB and 500GB drives.

When/if a 500GB drive dies, replace it with a 4TB ;)
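
Roughly like the sketch below, assuming autoexpand is on so the vdev grows once both halves have been upsized (the disk names are placeholders; the FreeNAS GUI's disk replacement does the same thing via gptids):

# Let a vdev grow automatically once all of its disks are larger
root@nas:~ # zpool set autoexpand=on vol1
# Swap the dead 500GB disk for the new, larger one
root@nas:~ # zpool replace vol1 gptid/old-500g-disk /dev/da10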

Writes to the pool will auto-balance across the vdevs, favoring whichever vdev is least busy. Generally that will be the fastest one and the one that is least full.
 