Pool/vDEV Reconfiguration

riggieri

Dabbler
Joined
Aug 24, 2018
Messages
42
Hey Everyone

I want to double-check myself on this before I move forward.

I have a very large pool of 42 8TB drives. They are configured as 7 RAIDz2 vdevs. I feel like I am being overly cautious on the parity, as everything on this is also backed up to LTO. This is mainly used as a WORM vault, so I don't need tons of IOPS.

I have enough space on some other pools and could move all these datasets around so that I can completely wipe out this pool. I just put in 6x14TB drives that can act as a temporary holding pool. I was originally going to add these 6 drives to the existing pool, but losing 33% to parity got me thinking.

I would like to be more space efficient, so I am thinking about moving this to 4x10 RAIDz2. I would get about 10% more storage with 2 fewer drives. 5x8 RAIDz2 only sees a small 1% increase. I could go 4x12, but I don't like to have the server's bays 100% full, as that prevents you from replacing a failing disk without a full resilver of a degraded vdev.
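For anyone wanting to sanity-check my numbers, the raw parity-only arithmetic looks like this (note this ignores ZFS allocation padding, metadata, and the usual keep-it-under-80%-full guideline, which is why measured gains come out lower than the raw figures):

```python
def raidz2_usable(vdevs, width, drive_tb):
    # Raw data capacity: each RAIDZ2 vdev loses 2 drives to parity.
    return vdevs * (width - 2) * drive_tb

current = raidz2_usable(7, 6, 8)    # 7x 6-wide RAIDZ2 -> 224 TB raw
option_a = raidz2_usable(4, 10, 8)  # 4x 10-wide RAIDZ2 -> 256 TB raw
option_b = raidz2_usable(5, 8, 8)   # 5x 8-wide RAIDZ2 -> 240 TB raw
print(current, option_a, option_b)
```

The raw numbers come out a bit higher than what I see in practice, since RAIDZ2 padding overhead with ashift=12 varies with vdev width.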

My questions are:

What am I missing by reconfiguring this?
Some drives in the pool are SMR drives, I know, unfortunate. Should I put all of these in one vdev or spread them out over the 4 vdevs?
Since this pool has tons of files, would I gain anything by setting up a special vdev for metadata? I let these drives spin down after 120 minutes, because the pool is mostly inactive. Would this quicken the time it takes to parse directories? I would add 3x256GB SATA SSDs in a 3-way mirror if I do this.
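If I go that route, I assume adding the special vdev would look something like this (pool and disk names are placeholders, not my actual layout):

```shell
# Hypothetical pool "vault" and device names; adds a 3-way mirrored
# special allocation class vdev to hold metadata.
zpool add vault special mirror ada1 ada2 ada3
```

From what I've read, only metadata written after the special vdev is added lands on it, so I'd need to rewrite the existing datasets to get the benefit, which fits since I'm rebuilding the pool anyway.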

Thanks in Advance
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Some drives in the pool are SMR drives, I know, unfortunate. Should I put all of these in one vdev or spread them out over the 4 vdevs?
You should place those into another project which does not use ZFS and replace them with CMR drives, but I think you already know this.
 