Replace a drive in an array by mirroring (avoiding a degraded state)

Hi,
I run a RAID5 array, as RAID6 doesn't make much sense on a server with few disks, especially when the data isn't critical. That said, I'd still *rather not* lose my data if it can be helped, so resilvering makes me very nervous whenever I need to replace a drive (because it has failed, is about to fail, or I want to swap it for a higher-capacity disk). Why? Because once you remove the drive the array is in a degraded state, so if anything goes wrong with another drive during the resilver (which is more likely given the intensity of the process) you could lose all of your data.

It would be really good if, rather than having to remove a drive to replace it, you could add the new drive (either in a spare bay, if you have one, or on an eSATA port) and mirror the drive you want to replace, byte for byte, onto the new drive, effectively creating a RAID-1-style mirror between the two devices. Once that's complete, you could remove the old drive and let the new one take its place, avoiding the need for a resilver (although a parity check against the new drive would probably be a good idea). This should also theoretically be possible while the array is online, with any new writes to the disk being replaced also applied to the new disk.

Obviously this wouldn't work for drives that have already failed, but it would be great if you want to pre-emptively replace a drive that's on its way out, or swap one for a higher-capacity disk.

The advantages of this are the following:

1) The array wouldn't go into a degraded state, meaning if a drive failed while mirroring you'd still have enough drives left to maintain the array
2) This would only stress the disk to be replaced and the disk it's going to be replaced with, isolating the risk of a failed drive
3) This could potentially allow you to replace more than one drive at once... though that's probably best avoided.

Does such a feature already exist in FreeNAS, and if not, is it likely to ever be implemented? I have heard of this being done on some enterprise storage solutions so figure it must be possible.
 

Bidule0hm

Server Electronics Sorcerer
This is a ZFS feature request, not a FreeNAS feature request.

And you want the advantages of RAID-Z2 or Z3 while using RAID-Z1. Well, if RAID-Z1 is too risky for you then simply use RAID-Z2 or Z3 ;)
 
Thanks for the quick response. I understand that RAID-Z2 would be more advantageous in terms of reducing the risk from a failed drive; however, as stated above, the data on these disks isn't *that* critical. Losing it would be pretty inconvenient, but I could live with the consequences if it happened.

While clearly Z1 is more risky than the alternatives, that doesn't mean that we shouldn't try to make it more reliable if we can, and I think the solution I proposed would be very helpful in reducing the risk to those of us with Z1 arrays.

Either way, I get your point, and I also hadn't thought of raising this as a ZFS feature request rather than a FreeNAS one (although I would have thought it could be implemented in FreeNAS without having to change ZFS itself). Also, whereabouts would I raise a feature request for OpenZFS? I had a look around but couldn't see anywhere to do so.
 

Bidule0hm

Server Electronics Sorcerer
You can't implement this in FreeNAS, as it touches the RAID directly, and it's ZFS that handles how data is read and written, how it's organized, resilvering, scrubs, etc. :)
 

Ericloewe

Server Wrangler
Moderator
This is already possible. And please use proper terminology. ZFS has RAIDZ1, 2 or 3, not RAID5 or 6.
 
This is already possible. And please use proper terminology. ZFS has RAIDZ1, 2 or 3, not RAID5 or 6.

It is? Can you point me to anything that explains how to do this? I had a look around before making the thread but couldn't find any information on it (I was probably using the wrong search terms, given that I didn't know the feature's name).

And yes I'll be sure to do that in the future - I realised my mistake when the first response came in. I'm only a recent ZFS user and haven't quite broken the habit yet.
 

Bidule0hm

Server Electronics Sorcerer
Not exactly this, but you can connect the new drive, replace it in the GUI, let it resilver and then take out the old one ;)
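
For reference, the GUI replace function is basically driving ZFS's zpool replace under the hood. A minimal command-line sketch, assuming a pool called tank with old disk ada3 and new disk ada5 (placeholder names only; FreeNAS normally uses gptid labels and handles the partitioning for you, so the GUI route is the safer one):

# new disk connected, old disk still online and part of the pool
zpool replace tank ada3 ada5
# watch the resilver; the old disk stays in the vdev until it completes
zpool status tank

Once the resilver finishes, ZFS drops the old disk out of the vdev on its own and you can pull it.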
 
Not exactly this, but you can connect the new drive, replace it in the GUI, let it resilver and then take out the old one ;)
Yeah, I've already done that a couple of times on my array (I had a couple of dodgy drives), but the reason I raised this request was to propose a way to replace disks without ever putting the array into a degraded state. It wouldn't work for failed drives, obviously, but it would be really good for replacing drives while all of the disks are still operational.
 

Ericloewe

Server Wrangler
Moderator
A resilver is always desirable in this case, to make sure no crap gets written to the new drive. Both drives are kept in sync until the process is finished, from what I heard.

It should be in the manual already - if not, search the forum for the exact process.
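
For what it's worth, you can see the "kept in sync" part in zpool status while the replacement runs: ZFS inserts a temporary "replacing" vdev holding both disks, then detaches the old one automatically when the resilver completes. The excerpt below is illustrative only, with made-up pool and device names and most columns trimmed:

        tank             ONLINE
          raidz1-0       ONLINE
            ada0         ONLINE
            ada1         ONLINE
            replacing-2  ONLINE
              ada3       ONLINE   # old disk, still in the pool
              ada5       ONLINE   # new disk being resilvered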
 
Ah, I see what you mean now. After re-reading the docs, it seems you can replace a drive without having to degrade the array; I thought resilvering always involved removing and replacing, but apparently that's not the case. For anyone reading this in the future, it's covered under "Replacing Drives to Grow a ZFS Pool" in the docs. While it's not exactly what I described (it doesn't use a mirror, so it works all of the disks fairly hard rather than just the two), it's fairly similar and has most of the advantages of my original feature request.
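
In case it helps the next person, here's a rough command-line sketch of what that procedure boils down to (placeholder pool and disk names; on FreeNAS the GUI does the replace step for you):

# optional: let the pool grow on its own once every disk in the vdev is larger
zpool set autoexpand=on tank
# replace each disk in place, one at a time, waiting for the resilver each time
zpool replace tank <old-disk> <new-larger-disk>
zpool status tank
# if autoexpand was off, expand the replaced disks manually at the end
zpool online -e tank <new-larger-disk>

The extra capacity only shows up once every disk in the vdev has been replaced with a larger one.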

So I guess you can mark this topic as "mostly redundant" and move on. Either way, thanks for the help.
 

Ericloewe

Server Wrangler
Moderator
It's existed for a while, but the docs were only recently updated, so if anything pops up, please report it so it can be fixed.
 

SweetAndLow

Sweet'NASty
Yes, this has been a ZFS feature for quite a long time now.
 