Safe to return same drive to mirror?

bethankful

Cadet
Joined
Dec 19, 2019
Messages
7
In a failed attempt to migrate to a new server, I removed one of the mirrored drives in my FreeNAS pool. I did not take it offline first, and it is not a boot drive. I have read the instructions on replacing a failed drive with a new drive, but since I am feeling less bold after my failed migration, and because it would be a personal disaster if this goes poorly, I thought I would ask the experts. Are there any potential issues with returning the same drive to the pool? How does FreeNAS know which drive to update?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
It's not very risky to put it back; resilvering should happen automatically from the disk that has continued to operate, as long as you didn't write to the removed disk in the other box. If you're worried, you could instead wipe the disk and replace it with itself.
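If it doesn't resilver on its own, the manual way to bring the same disk back is roughly this (the gptid below is only a placeholder; zpool status will show you the real one for the missing device):

zpool online pool1 gptid/<gptid-of-the-missing-partition>   # re-attach the same replica
zpool status pool1                                          # the mirror should resilver and return to ONLINE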
 

bethankful

Cadet
Joined
Dec 19, 2019
Messages
7
Thank you. I used another computer to delete the disk partitions and reinserted the drive. I tried to follow the manual, but what I see in the FreeNAS GUI is not what the manual describes. The manual says the replaced disk should show OFFLINE, but mine shows UNAVAIL. The manual says to choose REPLACE, but when I choose REPLACE it returns the error "Disk is not clear, partitions or ZFS labels were found ... [something about being a member of pool1]". I wiped the disk using the FreeNAS GUI and tried again, but I still receive the error "Disk is not clear, partitions or ZFS labels were found." Searching for the errors online, I found "zpool labelclear [-f] device", but I am not familiar with how to find the device, or whether this is a good idea. I also found a discussion that says "it's still being recognized as a part of the pool. All you need to do in this case is to zpool online". Is either of these the better solution?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
If you share the output from glabel status and zpool status -v we can work it all out.
 

bethankful

Cadet
Joined
Dec 19, 2019
Messages
7
root@McNAS[~]# glabel status
Name Status Components
gptid/812cdd53-49b9-11e9-8f75-000c29d35e80 N/A da0p1
gptid/56e877c8-5dac-11e9-9147-000c29d35e80 N/A da2p2
gptid/3c358cb9-7311-11ea-bad1-0050568ea1c3 N/A da1p2
gptid/3c24a426-7311-11ea-bad1-0050568ea1c3 N/A da1p1

root@McNAS[~]# zpool status -v
pool: freenas-boot
state: ONLINE
scan: scrub repaired 0 in 0 days 00:00:13 with 0 errors on Thu Mar 26 03:45:13 2020
config:

NAME          STATE   READ WRITE CKSUM
freenas-boot  ONLINE     0     0     0
  da0p2       ONLINE     0     0     0

errors: No known data errors

pool: pool1
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
see: http://illumos.org/msg/ZFS-8000-2Q
scan: scrub repaired 0 in 0 days 09:37:47 with 0 errors on Mon Feb 17 13:37:47 2020
config:

NAME                                            STATE     READ WRITE CKSUM
pool1                                           DEGRADED     0     0     0
  mirror-0                                      DEGRADED     0     0     0
    3639743818433426128                         UNAVAIL      0     0     0  was /dev/gptid/562af8d1-5dac-11e9-9147-000c29d35e80
    gptid/56e877c8-5dac-11e9-9147-000c29d35e80  ONLINE       0     0     0

errors: No known data errors
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
was /dev/gptid/562af8d1-5dac-11e9-9147-000c29d35e80
That gptid of the missing disk isn't on any of the disks listed by glabel status, so we can safely assume it isn't da0, da1, or da2; it's probably da3.

You can't online it any more after the erasing you did, so don't worry about that suggestion from the zpool status.

What devices do you see under /dev/ (ls -l /dev/ | grep da) ?

We should remove (detach) the broken replica from the mirror in the meantime, which will leave you with a healthy pool of a single disk.

zpool detach pool1 gptid/562af8d1-5dac-11e9-9147-000c29d35e80

Once we are sure we're working with /dev/da3, we can then attach the new/old disk back with a few steps that you can already see in the thread I link below.
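Roughly, that sequence looks like this (nothing here touches the removed disk; the last command is just to confirm the detach worked):

ls -l /dev/ | grep da                                             # see which da devices the system has right now
zpool detach pool1 gptid/562af8d1-5dac-11e9-9147-000c29d35e80     # drop the missing replica from the mirror
zpool status pool1                                                # pool1 should now show a single healthy disk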

 

bethankful

Cadet
Joined
Dec 19, 2019
Messages
7
Before I removed it, the disk was "da1".

root@McNAS[~]# ls -l /dev/ | grep da
crw-r----- 1 root operator 0x64 Mar 31 00:22 da0
crw-r----- 1 root operator 0x65 Mar 31 00:22 da0p1
crw-r----- 1 root operator 0x66 Mar 31 00:22 da0p2
crw-r----- 1 root operator 0x67 Mar 31 00:34 da1
crw-r----- 1 root operator 0x78 Mar 31 00:34 da1p1
crw-r----- 1 root operator 0x7a Mar 31 00:34 da1p2
crw-r----- 1 root operator 0x68 Mar 31 00:22 da2
crw-r----- 1 root operator 0x6b Mar 31 00:22 da2p1
crw-r----- 1 root operator 0x6d Mar 31 00:23 da2p1.eli
crw-r----- 1 root operator 0x6c Mar 31 00:22 da2p2
lrwxr-xr-x 1 root wheel 10 Mar 31 00:23 dumpdev -> /dev/da2p1
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
So for sure da1 has a strange gptid (contains the characters bad1)... perhaps this isn't a mistake/coincidence.

It also isn't in use in your boot or data pools, so I guess that's our disk.

I assume you already detached the unavailable disk with the command I gave in the last post.

We can start with wiping the disk:
gpart destroy -F /dev/da1

gpart create -s gpt /dev/da1

Then we create the swap and data partitions:

gpart add -s 2G -t freebsd-swap /dev/da1

gpart add -t freebsd-zfs /dev/da1

Now we will find the gptid of the data partition:

gpart list

Look for the rawuuid of da1p2 and note that for use with the command below:

zpool attach pool1 gptid/56e877c8-5dac-11e9-9147-000c29d35e80 gptid/<rawuuid of da1p2>

Then run zpool status to show it's back to normal.
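If gpart list feels like a lot of output to dig through, glabel status should also show the new gptid next to the data partition once it's created (I'm assuming it ends up as da1p2), and you can watch the resilver afterwards:

glabel status | grep da1p2     # shows gptid/<rawuuid> of the new data partition
zpool status -v pool1          # resilver progress after the attach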
 

bethankful

Cadet
Joined
Dec 19, 2019
Messages
7
Outstanding! I am very grateful for your expertise and willingness to help me. Thank you.

On the trivial side: in the GUI under Pool Status, under pool1, under MIRROR, it lists "da2p2" and "/dev/gptid/6fe1d8d9-738f-11ea-bad1-0050568ea1c3" as the two devices. Will that change to "da1p2" when it finishes resilvering? Can it be changed?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Have a look at this thread, and we can discuss whether you really want to get rid of the gptid.

 