detach drive from detached pool after replacing

Status
Not open for further replies.

raviburb

Cadet
Joined
Mar 12, 2012
Messages
5
Hi, I replaced a drive in my ZFS striped setup and remembered that I have to detach the old drive, otherwise FreeNAS 8 would not recognize the replacement. But sadly I detached the whole pool instead. Now I am neither able to import the pool nor to detach the drive. What can I do?

This is from the console:

[root@Flicka /root]# zpool import -f
  pool: Datenraid
    id: 10736740739045864935
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

	Datenraid                                         ONLINE
	  da1p2                                           ONLINE
	  replacing                                       ONLINE
	    gptid/f68dc95d-65ed-11e1-a9d2-485b3949f851    ONLINE
	    ada0p2                                        ONLINE
[root@Flicka /root]# zpool detach Datenraid gptid/f68dc95d-65ed-11e1-a9d2-485b3949f851
cannot open 'Datenraid': no such pool
[root@Flicka /root]# zpool import Datenraid
cannot mount '/Datenraid': failed to create mountpoint
[root@Flicka /root]# zpool import -f
[root@Flicka /root]#
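Since `zpool import -f` with no pool argument does list the pool (with its name and numeric id), the import step can also be scripted by parsing that scan output. A minimal sketch, assuming the output format shown above; the `scan_output` here-string and the `pool_name` variable are illustrative, not from the thread:

```shell
# Extract the pool name from captured `zpool import` scan output so the
# follow-up import command can be built automatically. The text below
# reproduces the scan output shown earlier in this thread.
scan_output='pool: Datenraid
id: 10736740739045864935
state: ONLINE'

# "pool: Datenraid" -> split on ": ", take the second field
pool_name=$(printf '%s\n' "$scan_output" | awk -F': *' '$1 ~ /pool$/ {print $2}')
echo "$pool_name"   # Datenraid
```

On the live system one would capture `zpool import 2>&1` instead of the here-string and then run `zpool import -f "$pool_name"`.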
 

raviburb

Cadet
Joined
Mar 12, 2012
Messages
5
This is what I get then:

[root@Flicka /root]# zpool status
pool: Datenraid
state: ONLINE
scrub: none requested
config:

	NAME                                              STATE   READ WRITE CKSUM
	Datenraid                                         ONLINE     0     0     0
	  da1p2                                           ONLINE     0     0     0
	  replacing                                       ONLINE     0     0     0
	    gptid/f68dc95d-65ed-11e1-a9d2-485b3949f851    ONLINE     0     0     0
	    ada0p2                                        ONLINE     0     0     0

errors: No known data errors
[root@Flicka /root]# zpool import -f Datenraid
cannot import 'Datenraid': no such pool available
[root@Flicka /root]# zpool import -F Datenraid
cannot import 'Datenraid': no such pool available
[root@Flicka /root]#
 

Ultfris101

Cadet
Joined
Feb 20, 2012
Messages
7
I had something similar happen recently when I was upgrading a drive. I ended up rebooting, and it automatically imported the volume again. Not sure if I did something wrong, but I'm also new to FreeNAS and ZFS, though not to Unix. This was the first drive I had tried replacing in a RAIDZ2 volume.

Fwiw
 

Ultfris101

Cadet
Joined
Feb 20, 2012
Messages
7
Yes, this is definitely what happened to me. Running 8.0.4. Maybe it has more to do with the inability to mount on the original mountpoint than with the volume itself, since the volume was just fine. The OS might have some conflict with /mnt/<mountpoint>, thinking something is still there. I didn't try umount from the command line, but if it happens again I might.

raviburb, if you're adventurous, maybe try umount? Otherwise rebooting may be the resolution for you.
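Before trying `umount`, it's worth checking from the shell whether anything is actually mounted at the pool's mountpoint. A minimal sketch: the `is_mounted` helper is hypothetical, and the FreeBSD mount(8) output format ("name on /path (fstype, options)") is assumed; a sample line stands in for live `mount` output so the logic can be demonstrated:

```shell
# Hypothetical helper: succeed only if something is mounted at the given
# path. Reads mount-table lines ("name on /path (options)") from stdin,
# so it can be fed either live `mount` output or, as here, a sample line.
is_mounted() {
  awk -v mp="$1" '$3 == mp { found = 1 } END { exit !found }'
}

# Sample line in FreeBSD mount(8) format, for demonstration only:
sample='Datenraid on /mnt/Datenraid (zfs, local, nfsv4acls)'

if printf '%s\n' "$sample" | is_mounted /mnt/Datenraid; then
  echo "mounted"
else
  echo "not mounted"
fi
```

Against the live system this would be `mount | is_mounted /mnt/Datenraid && umount /mnt/Datenraid`.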

Is this some sort of bug? Or maybe the steps in the documentation leave something out? I don't have a test box set up right now to experiment. Maybe in the near future.
 

raviburb

Cadet
Joined
Mar 12, 2012
Messages
5
[root@Flicka /root]# zpool status
pool: Datenraid
state: ONLINE
scrub: none requested
config:

	NAME                                              STATE   READ WRITE CKSUM
	Datenraid                                         ONLINE     0     0     0
	  da1p2                                           ONLINE     0     0     0
	  replacing                                       ONLINE     0     0     0
	    gptid/f68dc95d-65ed-11e1-a9d2-485b3949f851    ONLINE     0     0     0
	    ada0p2                                        ONLINE     0     0     0

errors: No known data errors
[root@Flicka /root]# cd /mnt
[root@Flicka /mnt]# ls
.snap md_size
[root@Flicka /mnt]#

Ultfris, as you can see, there is nothing to umount.

I tested it, but rebooting does not solve the problem. When I do zpool import Datenraid, nothing happens; zpool import -f won't show Datenraid anymore, but after a reboot I can see Datenraid again.
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
Raviburb,

Your pool is fine, you need to Auto Import it from the GUI now and then it will be mounted.
 

raviburb

Cadet
Joined
Mar 12, 2012
Messages
5
protosd

Yes, it is, and no, it wasn't. Auto import just did not work; it was the first thing I tried.

OK, this did the job so far:

mkdir /mnt/Datenraid
zpool import -f Datenraid

But I still need to remove the old drive from the pool. How can I do that?
 

raviburb

Cadet
Joined
Mar 12, 2012
Messages
5
Solution:

After that, I was able to do this:
zpool detach Datenraid gptid/f68dc95d-65ed-11e1-a9d2-485b3949f851
reboot

and then auto import via the web GUI worked again (it seems ZFS pools stuck in a replacing state cannot be auto-imported).
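Pulling the thread together, the full recovery sequence looks like the following. This is a sketch assembled from the commands posted above, not something to run blindly: the gptid is specific to this system, and zpool detach permanently removes the device from the pool.

```shell
# 1. Recreate the missing mountpoint that was breaking the import:
mkdir /mnt/Datenraid

# 2. Force-import the pool under its name:
zpool import -f Datenraid

# 3. Detach the old drive left over from the interrupted replace
#    (this gptid belongs to this particular system):
zpool detach Datenraid gptid/f68dc95d-65ed-11e1-a9d2-485b3949f851

# 4. Reboot, then use Auto Import in the FreeNAS web GUI to bring the
#    volume back under GUI management:
reboot
```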
 

Ultfris101

Cadet
Joined
Feb 20, 2012
Messages
7
Glad you got it working. I know you didn't have anything mounted, but the OS or some component thought you did, or for some other reason couldn't create the mountpoint and bring the volume online. You should not have had to create the mountpoint /mnt/Datenraid by hand. I'm pretty sure the exact same thing happened to me, and the resolution looks very similar, although I never had to actually make the mountpoint manually. I haven't looked under the covers, but ZFS or some FreeNAS script should be managing that.

In any case, very glad you got it working.
 