Replace Bad Drive and Upgrade Pool at the Same Time

TMCPayton

Cadet
Joined
Dec 15, 2019
Messages
4
Hello all!

I'm a first time poster. I'll do my best but let me know how I can improve.
I've been using FreeNAS only for a few months and I'm loving it so far!
I am fairly new to Linux/Unix so my skills are limited.

OS Info: FreeNAS-11.2-U7

Current Hardware:
MoBo: ASUS Z9PE-D16/2L
CPU: 2x Intel Xeon E5-2650 V2
RAM: 256GB (8x 32GB DDR3 1866 LRDIMMs in 8 channels)
- I know it's way overkill, but each 32GB stick was only $30
Disks:
Boot: Inland(Micro Center) 480GB TLC SATA III SSD
Data: 1 vdev of 5x 3TB drives in RAIDZ2
- 2x Seagate Constellation ES.2
- 1x Hitachi Ultrastar 7K4000
- 2x Hitachi Deskstar 7K3000
One of the Seagate drives has bad sectors
All of the drives are old and are due to be replaced
L2ARC: Inland(Micro Center) 1TB SATA III SSD
- All drives are attached directly to the motherboard

New Hardware:
5x WD Easystore 14TB
- The 14TB drives are not yet shucked. I'm currently doing some stress tests on them with Hard Disk Sentinel Pro on a separate Windows 10 box before shucking them and putting them into production.

Getting to the actual point:
I like the RAIDZ2. I think it is the right balance of redundancy, performance, storage efficiency, and expandability for my environment.
Plex is my only Plugin/Jail currently running. I have a few SMB shares and no VMs.

I have no backup of my data, but 90% of what I have in my pool is not critical, and I can back up the other 10%.

Should I replace the drives in my pool or should I just migrate to a new pool?
Would replacing the drives in the pool put more or less stress on the old disks?
Would replacing the drives in place require more or less reconfiguration to get back to my current setup?
Which one takes more time?

I've seen people replacing/upgrading individual drives in their pools. I've also seen people create a new pool with their new drives and migrate the data onto the new pool.
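From what I've read, the in-place route swaps one disk at a time and resilvers after each swap, and the pool only grows once the last drive is replaced. Something roughly like this (I'm not sure of the exact device names on my system; "tank", ada1, and ada5 below are just placeholders):

```shell
# Enable automatic expansion so the pool grows once all drives are replaced.
zpool set autoexpand=on tank

# Replace the bad Seagate first; the pool stays online during the resilver.
zpool replace tank ada1 ada5

# Watch the resilver, then repeat "zpool replace" for each remaining old
# drive, waiting for each resilver to finish before starting the next.
zpool status tank
```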

Consider that I want to keep the same vdev configuration and minimize downtime & risk.
The L2ARC is not being used much due to the RAM config so it's not essential.

Your help is greatly appreciated, and happy holidays!
 

blueether

Patron
Joined
Aug 6, 2018
Messages
259
I think I would do it this way:
  • install the new drives as a new pool
  • then replicate that data across (zfs send/recv)
  • stop all share services
  • change the name of the old pool to <OrigPoolName>_BACKUP
  • change the new pool name to <OrigPoolName>
  • restart shares
I've not tested this so do your own research.
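The steps above might look roughly like this from the CLI (untested; "tank" and "newtank" are example pool names):

```shell
# Snapshot the old pool recursively and replicate everything to the new pool.
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs recv -F newtank

# Stop share services (SMB etc.) from the GUI first, then swap the names.
# A pool is renamed by exporting it and importing it under a new name.
zpool export tank
zpool import tank tank_BACKUP

zpool export newtank
zpool import newtank tank
```

Note you'd want to do the export/import dance from the console, and re-check your shares and jails afterwards.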
 

TMCPayton

Cadet
Joined
Dec 15, 2019
Messages
4
Is the purpose of that to theoretically have my shares & jails stay intact? If the shares & jails work off of the name of the pool somehow, that could potentially save some time trying to get them back up and running.
 

blueether

Patron
Joined
Aug 6, 2018
Messages
259
I believe they all work off the mount points
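You can check where each dataset mounts with something like:

```shell
# List datasets and their mountpoints; shares and jails reference these
# /mnt/<poolname>/... paths, which follow the pool name.
zfs list -o name,mountpoint
```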

I'm sure someone else will comment on other ways.
 