Unable to detach volume


john-woods

Dabbler
Joined
Jul 22, 2012
Messages
17
Hi everyone,

I recently had a drive fail on my RAID 5 array, running on FreeNAS 9.1.1. I followed the instructions in the manual and successfully replaced the drive to restore the array. However, the old drive re-appeared in the Volume Status list with its status shown as UNAVAIL. I have tried several times to Detach it through the GUI, but it doesn't work: I get a notification that it has been detached, yet it just stays there. A second hard drive failed more recently; I replaced it in the same way and now have the same problem again.

So basically I would like to know: is there any alternative way to force these old volumes to be detached? I have attached a picture of my setup for reference.

Thanks,
John
 

Attachments

  • UNAVAIL Volumes.png (258.4 KB)

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
That's because you have 2 disks labeled as unavailable. RAIDZ1 (not RAID5) only allows for a single disk to be lost from the vdev before you lose the pool. You are currently missing 2 disks.

There is no way to detach the old volumes. In fact, the problem isn't with volumes; it's with the vdev and your disks.
 

john-woods

Dabbler
Joined
Jul 22, 2012
Messages
17
You are correct, it is RAIDZ1; I had RAID5 set up previously, before my server was running FreeNAS.

Anyway, surely when I put the disk into Offline mode and then Replaced it using the Volume management GUI, the entry for the old drive should have been removed? My data is still intact, and only one disk was lost at a time: one disk failed and was replaced, then about a month later a second disk failed and was replaced. So, as I understand it, there were never two disks offline at the same time.
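For reference, a rough shell equivalent of that Offline and Replace sequence (the pool name "tank" and the device names are placeholders; the FreeNAS GUI normally issues these commands for you) would be:

    # take the failing member offline before swapping the hardware
    zpool offline tank ada3

    # resilver onto the new disk that took the old one's place
    zpool replace tank ada3 ada4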

So there is no way to remedy this problem? With the current setup, does that mean the redundancy of the array is compromised?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
That's not what appears to be the case. Can you pastebin the output of "zpool status" and "zpool import"? Please use pastebin, as the formatting of the text is crucial.
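For reference, both outputs can be captured from the FreeNAS shell like this (neither command needs a pool name):

    # health and layout of every imported pool
    zpool status

    # pools that are exported or otherwise visible for import
    zpool import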
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Aha. Ok. Things are different than what I originally thought.

So scratch all that.

If you go to the FreeNAS documentation and look at the steps for replacing a failed disk, there's a step you missed. It's the last step. If you had followed the documentation you wouldn't have the extra entries. ;)

It's not "a big deal". Just go do the last step and the degraded status will go away.

Now for part 2 of this conversation....

See this...

errors: 20626 data errors, use '-v' for a list

That means you've got lost/corrupted data. Your pool is almost certainly not in a "good" condition. See my sig where it says not to use RAIDZ1 because it's "dead". You just became another example of why I have that in my sig.

At some point in the past (or present) you had more than 1 disk failing simultaneously and ZFS didn't have enough redundancy to correct it.

At this point your pool might suddenly crash your FreeNAS server, and you may never be able to access the data in the pool again. This might happen in 5 minutes or maybe never. You are basically relying on luck to avoid a sudden total loss of the pool. If you do a "zpool status -v" you'll see the files that are corrupt. In essence, ZFS knows they are corrupt but can't fix them. If you ever hit corrupted metadata (which you probably won't know about until you try to access it) you might be kissing the whole pool goodbye.
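For reference, listing those corrupt files looks roughly like this (pool name assumed):

    # prints the pool status plus the paths of files with permanent errors
    zpool status -v tank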

So I'd strongly recommend you look at destroying the pool and recreating a pool from scratch. I'd also strongly recommend you read my threads on SMART testing and monitoring and take them to heart. Clearly RAIDZ1 isn't a good option (as you are seeing firsthand) and you need better protection.

So you have two problems to resolve. One will take about 2 minutes, the other will take significantly longer.
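A hedged sketch of that longer task, assuming the pool is named "tank" and the member disks are ada0 through ada5 (on FreeNAS you would normally do this through the Volume Manager rather than the shell):

    # destroy the damaged pool once the data has been copied off; this is irreversible
    zpool destroy tank

    # recreate it with double parity so any two disks can fail without losing the pool
    zpool create tank raidz2 ada0 ada1 ada2 ada3 ada4 ada5

    # run a long SMART self-test on each disk to catch weak drives early
    smartctl -t long /dev/ada0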
 

john-woods

Dabbler
Joined
Jul 22, 2012
Messages
17
The last step about Detaching the disk if it continues to show up after removal? I tried that several times and it doesn't work. I also scrubbed and resilvered the array before trying again, but that didn't work either. That's what started to make me think it could be caused by the array actually being corrupted.
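For context, the scrub mentioned above can be started and checked from the shell roughly like this (pool name again assumed):

    # read and verify every block, repairing whatever the remaining redundancy allows
    zpool scrub tank

    # check scrub progress and whether errors remain afterwards
    zpool status tank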

So I'm not overly surprised that the pool isn't in good shape, but thanks for confirming it for me. Luckily I don't have any truly critical data on it, just my media collection, so I think I'll be inclined to back it all up and start from scratch as you suggest. The issue is finding spare parts where I live, which is currently the middle of nowhere, so it's not convenient for me to replace failed drives.
 
Status
Not open for further replies.