Upgrade time and need help picking new pool layout

GrandpaRick

Cadet
Joined
Apr 29, 2019
Messages
3
Hi guys,

I've read through the forums on this topic and think this is the proper place to post this question so here goes...

Currently I have a 6x4TB z2 pool (in a Fractal R4 case) that's quickly approaching 90% capacity, and I've decided it's time to upgrade. I'm going directly to a 4U 24-bay SM case but can't decide on the pool layout.

I already have an additional 7x4TB drives (so 13 in total) and plan on buying more depending on the pool layout I choose, but I can't decide between the following layouts (rough CLI sketch just after the list):
  1. 4x6x4TB z2 (uses all the bays so no room for spares) = ~67 TiB usable / ~54 TiB @ 80%
  2. 3x8x4TB z2 (uses all the bays so no room for spares) = ~70 TiB usable / ~56 TiB @ 80% <-- sacrifice redundancy for a few more TiBs
  3. 3x7x4TB z2 (leaves 3 bays unused which I could use for spares) = ~61 TiB usable / ~49 TiB @ 80%
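To make it concrete, options 1 and 3 would look roughly like this if built from the CLI (just a sketch to show the shape; the pool name "tank" and the daN device names are placeholders, and I'd really build it through the GUI):

Code:
  # Option 1: 4 vdevs of 6 disks, RAIDZ2, all 24 bays used, no spares
  zpool create tank \
    raidz2 da0  da1  da2  da3  da4  da5  \
    raidz2 da6  da7  da8  da9  da10 da11 \
    raidz2 da12 da13 da14 da15 da16 da17 \
    raidz2 da18 da19 da20 da21 da22 da23

  # Option 3: 3 vdevs of 7 disks, RAIDZ2, with the 3 leftover bays as hot spares
  zpool create tank \
    raidz2 da0  da1  da2  da3  da4  da5  da6  \
    raidz2 da7  da8  da9  da10 da11 da12 da13 \
    raidz2 da14 da15 da16 da17 da18 da19 da20 \
    spare  da21 da22 da23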
Either way, I'm going to do a complete rebuild, so my existing 6x4TB z2 will get destroyed.

I'm leaning more towards the 4x6 to max out the storage, but the 3x7 (+3 spares) layout appeals to me as I travel a lot and want to sleep well knowing that my system will auto resilver in the event of an HDD failure even if I'm gone.

Additionally, I want to position myself for a future expansion when the time comes. As I see it, my options would be:
  1. Buy a 4U 36-bay case and add drives (the 3x7 layout would become 5x7 + 1 spare)
  2. Buy a 4U 24-bay JBOD and expand my existing pool
  3. Replace the 4TB HDDs with 8TB HDDs (the swap-and-resilver process sketched below)
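My understanding of option 3 (correct me if I'm wrong) is that it's the usual swap-and-resilver routine, one disk at a time, with the extra space showing up once every disk in a vdev has been upgraded. Roughly (again, pool and device names are just placeholders):

Code:
  zpool set autoexpand=on tank   # let the vdev grow once all of its disks are bigger
  zpool replace tank da0 da24    # swap one 4TB disk for an 8TB disk, then wait for the resilver
  zpool status tank              # check resilver progress before moving on to the next disk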
If you were in my shoes and could start from scratch (leaving yourself set for future upgrades), what would you do?

Thanks in advance!

GR
 

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
will auto resilver in the event of an HDD failure even if I'm gone.
Correct me if I'm wrong, but I've read somewhere recently (though I don't know if the information is up to date) that even a hot spare needs some user action (remote user action).

Sent from my phone
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I think it works like this (although I'm going by the Oracle doc, which is clearly not the OpenZFS implementation used in FreeNAS, so it may be a little different):

https://docs.oracle.com/cd/E19253-01/819-5461/gcvcw/index.html

  • Automatic replacement – When a fault is detected, an FMA agent examines the pool to determine if it has any available hot spares. If so, it replaces the faulted device with an available spare.
    If a hot spare that is currently in use fails, the FMA agent detaches the spare and thereby cancels the replacement. The agent then attempts to replace the device with another hot spare, if one is available. This feature is currently limited by the fact that the ZFS diagnostic engine only generates faults when a device disappears from the system.
    If you physically replace a failed device with an active spare, you can reactivate the original device by using the zpool detach command to detach the spare. If you set the autoreplace pool property to on, the spare is automatically detached and returned to the spare pool when the new device is inserted and the online operation completes.
A faulted device is automatically replaced if a hot spare is available.



Watch out for the bit about the ZFS diagnostic engine only generating faults when a device disappears from the system... I suppose it may be the case that you could have multiple failures where the disk is faulted, but not completely dead, so no automatic replacement.
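
If you do go the 3x7 + spares route, the moving parts are just the spare vdev plus the autoreplace property from the doc above. A minimal sketch (pool and disk names are only examples):

Code:
  zpool add tank spare da21 da22 da23   # attach the three leftover bays as hot spares
  zpool set autoreplace=on tank         # per the doc, the spare detaches itself once the failed disk is physically replaced and resilvered
  zpool status tank                     # spares get their own section in the status output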
 