FreeNAS 11.2 Unable to Import FreeNAS 11.3 Pools

blackhat840

Dabbler
Joined
Sep 22, 2017
Messages
16
Good Evening,

Somehow... I upgraded to FreeNAS 11.3. It has a lot of issues, which is understandable since it's a nightly release and not a stable one. However, when I changed back to FreeNAS 11.2, it was unable to see or import my existing pools. I thought it was an issue with my pools, so I decided to upgrade back to 11.3 and, voila, my pools are detected and can be imported back into FreeNAS. Is there any possible way for me to import my pools back into my 11.2 setup?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
If you can show us the output of zpool history <poolname>, we may be able to see whether the pools were upgraded and, if so, look at using one of the new options to roll them back (I'm not sure if that would have required you to do something first for the checkpoint feature to be accessible).
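For context, the checkpoint rollback being referred to only works if a checkpoint was taken beforehand; the workflow looks roughly like this (a sketch of the standard ZFS checkpoint commands):

Code:
# before a risky change (e.g. an OS or pool upgrade):
zpool checkpoint Backup

# to rewind the whole pool to that checkpoint later:
zpool export Backup
zpool import --rewind-to-checkpoint Backup

# to discard the checkpoint once satisfied:
zpool checkpoint -d Backup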
 

blackhat840

Dabbler
Joined
Sep 22, 2017
Messages
16
Actually, I'm not able to do that for one pool; I'm getting a 'Device busy' error. See below for the history of the pool I can import into 11.3.

Code:
History for 'ExtBackup':
2019-03-16.05:44:24  zpool create -o feature@lz4_compress=enabled -o altroot=/mnt -o cachefile=/data/zfs/zpool.cache -o failmode=continue -o autoexpand=on -o feature@async_destroy=enabled -o feature@empty_bpobj=enabled -o feature@multi_vdev_crash_dump=enabled -o feature@spacemap_histogram=enabled -o feature@enabled_txg=enabled -o feature@hole_birth=enabled -o feature@extensible_dataset=enabled -o feature@embedded_data=enabled -o feature@bookmarks=enabled -o feature@filesystem_limits=enabled -o feature@large_blocks=enabled -o feature@sha512=enabled -o feature@skein=enabled -o feature@device_removal=enabled -o feature@obsolete_counts=enabled -o feature@zpool_checkpoint=enabled -o feature@spacemap_v2=enabled -O compression=lz4 -O aclmode=passthrough -O aclinherit=passthrough -O mountpoint=/ExtBackup ExtBackup /dev/gptid/383f948e-47e9-11e9-b7ef-001018e4add8
2019-03-16.05:44:29  zfs inherit  ExtBackup
2019-03-16.11:13:59  zpool import 7093109554462849016 ExtBackup
2019-03-16.11:13:59  zpool set cachefile=/data/zfs/zpool.cache ExtBackup
2019-03-16.11:14:04  zfs set aclmode=restricted ExtBackup
2019-03-17.15:03:45  zpool import 7093109554462849016 ExtBackup
2019-03-17.15:03:45  zpool set cachefile=/data/zfs/zpool.cache ExtBackup
2019-03-17.16:49:20  zpool import 7093109554462849016 ExtBackup
2019-03-17.16:49:20  zpool set cachefile=/data/zfs/zpool.cache ExtBackup
2019-03-17.18:22:58  zpool import 7093109554462849016 ExtBackup
2019-03-17.18:22:58  zpool set cachefile=/data/zfs/zpool.cache ExtBackup
2019-03-18.17:57:49  zpool import 7093109554462849016 ExtBackup
2019-03-18.17:57:49  zpool set cachefile=/data/zfs/zpool.cache ExtBackup
2019-03-19.18:28:01  zpool import 7093109554462849016 ExtBackup
2019-03-19.18:28:01  zpool set cachefile=/data/zfs/zpool.cache ExtBackup
2019-03-20.16:07:44 <iocage> zfs set org.freebsd.ioc:active=yes ExtBackup
2019-03-20.16:08:12 <iocage> zfs set org.freebsd.ioc:active=no ExtBackup
2019-03-20.20:45:10  zpool import 7093109554462849016 ExtBackup
2019-03-20.20:45:10  zfs set aclmode=passthrough ExtBackup
2019-03-20.20:45:11  zfs inherit -r ExtBackup
2019-03-20.20:50:31  zpool import 7093109554462849016 ExtBackup
2019-03-20.20:50:31  zpool set cachefile=/data/zfs/zpool.cache ExtBackup
2019-03-20.20:50:32  zfs set aclmode=restricted ExtBackup
2019-03-20.20:50:37  zfs set aclmode=passthrough ExtBackup/.system
2019-03-20.20:51:15 zfs set org.freebsd.ioc:active=yes ExtBackup
2019-03-21.09:40:03  zpool import 7093109554462849016 ExtBackup
2019-03-21.09:40:03  zpool set cachefile=/data/zfs/zpool.cache ExtBackup



This is the error I'm getting when attempting to import my second pool:

Code:
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/zfs.py", line 562, in inherit
    zprop.inherit(recursive=recursive)
  File "libzfs.pyx", line 385, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/zfs.py", line 562, in inherit
    zprop.inherit(recursive=recursive)
  File "libzfs.pyx", line 1175, in libzfs.ZFSProperty.inherit
libzfs.ZFSException: Device busy

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/middlewared/job.py", line 339, in run
    await self.future
  File "/usr/local/lib/python3.7/site-packages/middlewared/job.py", line 368, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.7/site-packages/middlewared/schema.py", line 911, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/pool.py", line 1822, in import_pool
    await self.middleware.call('zfs.dataset.inherit', pool_name, 'mountpoint', True)
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1160, in call
    return await self._call(name, serviceobj, methodobj, params, app=app, pipes=pipes, io_thread=True)
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1111, in _call
    return await run_method(methodobj, *args)
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1057, in run_in_thread
    return await self.loop.run_in_executor(executor, functools.partial(method, *args, **kwargs))
  File "/usr/local/lib/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/zfs.py", line 564, in inherit
    raise CallError(str(e))
middlewared.service_exception.CallError: [EFAULT] Device busy

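Reading that traceback, the import is failing while the middleware resets the pool's mountpoint property; the failing call corresponds roughly to the following (a sketch, with 'Backup' assumed as the affected pool based on the later posts):

Code:
# equivalent of the middleware's zfs.dataset.inherit(pool, 'mountpoint', True)
zfs inherit -r mountpoint Backup

'Device busy' at that point usually means something is still holding a dataset mounted, such as a jail, a share, or a shell sitting inside the mountpoint.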
 

blackhat840

Dabbler
Joined
Sep 22, 2017
Messages
16
If you can show us the output of zpool history <poolname>, we may be able to see whether the pools were upgraded and, if so, look at using one of the new options to roll them back (I'm not sure if that would have required you to do something first for the checkpoint feature to be accessible).
Any suggestions?
 

blackhat840

Dabbler
Joined
Sep 22, 2017
Messages
16
OK, I'm hoping this will also help someone in my situation. As you can see, my data is still available, although the first pool is degraded. I just can't import this pool; it states the device is busy. However, if I do a zfs mount, then I can see the missing pool through a mounted Plex storage point until reboot.
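(The manual mount mentioned above is roughly the following; a sketch, assuming the pool is already imported but its datasets simply aren't mounted:)

Code:
zfs mount -a        # mount all datasets with valid mountpoints
zfs mount Backup    # or just the pool's root dataset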

zpool list
Code:
 zpool list
NAME           SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
Backup        1.81T  1.23T   594G        -         -    17%    67%  1.06x  DEGRADED  /mnt
ExtBackup     4.53T  3.51T  1.03T        -         -     1%    77%  1.00x  ONLINE  /mnt
freenas-boot     7G  1.53G  5.47G        -         -      -    21%  1.00x  ONLINE  -



zpool status
Code:
  pool: Backup
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-2Q
  scan: resilvered 1.23T in 0 days 06:41:39 with 0 errors on Mon Mar 18 02:35:20 2019
config:

        NAME                                              STATE     READ WRITE CKSUM
        Backup                                            DEGRADED     0     0     0
          mirror-0                                        DEGRADED     0     0     0
            gptid/60e4b84b-9f45-11e7-aa40-f44d3074a230    ONLINE       0     0     0
            spare-1                                       DEGRADED     0     0     0
              11242790179143267932                        UNAVAIL      0     0     0  was /dev/gptid/61ffe9c0-9f45-11e7-aa40-f44d3074a230
              gptid/dc973530-490f-11e9-94df-001018e4add8  ONLINE       0     0     0
        spares
          14293407049143018985                            INUSE     was /dev/gptid/dc973530-490f-11e9-94df-001018e4add8

errors: No known data errors

  pool: ExtBackup
 state: ONLINE
  scan: none requested
config:

        NAME                                          STATE     READ WRITE CKSUM
        ExtBackup                                     ONLINE       0     0     0
          gptid/383f948e-47e9-11e9-b7ef-001018e4add8  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          da0p2     ONLINE       0     0     0

errors: No known data errors


zpool list -v
Code:
NAME                                     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
Backup                                  1.81T  1.23T   594G        -         -    17%    67%  1.06x  DEGRADED  /mnt
  mirror                                1.81T  1.23T   594G        -         -    17%    67%
    gptid/60e4b84b-9f45-11e7-aa40-f44d3074a230      -      -      -        -         -      -      -
    spare                                   -      -      -        -         -      -      -
      11242790179143267932                  -      -      -        -         -      -      -
      gptid/dc973530-490f-11e9-94df-001018e4add8      -      -      -        -         -      -      -
spare                                       -      -      -         -      -      -
  14293407049143018985                      -      -      -        -         -      -      -
ExtBackup                               4.53T  3.51T  1.03T        -         -     1%    77%  1.00x  ONLINE  /mnt
  gptid/383f948e-47e9-11e9-b7ef-001018e4add8  4.53T  3.51T  1.03T        -         -     1%    77%
freenas-boot                               7G  1.53G  5.47G        -         -      -    21%  1.00x  ONLINE  -
  da0p2                                    7G  1.53G  5.47G        -         -      -    21%
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
Any suggestions?
So what the history shows is that you upgraded the pool, which is the last step, and it's also the step that introduces the checkpoint feature. That makes it impossible to use that feature here, since you would have needed to create a checkpoint before checkpoints were possible. (In any case, you would not have known to do that before the upgrade, so you would not have one.)

Not much we can do with that.

You may be able to use zpool import with -f or -F to force the import even with the feature flags that aren't recognized.
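For example, something along these lines (a sketch; -f forces the import past the in-use check, and -F attempts recovery by discarding the last few transactions, so treat it as a last resort):

Code:
zpool import -f -R /mnt Backup
# last-resort recovery attempt:
zpool import -f -F -R /mnt Backup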
 

blackhat840

Dabbler
Joined
Sep 22, 2017
Messages
16
So what the history shows is that you upgraded the pool, which is the last step, and it's also the step that introduces the checkpoint feature. That makes it impossible to use that feature here, since you would have needed to create a checkpoint before checkpoints were possible. (In any case, you would not have known to do that before the upgrade, so you would not have one.)

Not much we can do with that.

You may be able to use zpool import with -f or -F to force the import even with the feature flags that aren't recognized.
OK. I've decided to just upgrade back to 11.3 and stay there until it becomes an official stable release, so now I'm stuck on getting my other pool imported back into 11.3, as it states 'Device busy'. I'll try zpool import -f to see how that goes. If I have to zero out these drives, I will, since the second pool contains a backup of all the data on the missing pool.
 

blackhat840

Dabbler
Joined
Sep 22, 2017
Messages
16
Is there any way to remove this unavailable spare from the Backup pool listed in zpool status above? The additional spare listed in the same pool is the missing spare... not sure how that happened, but it did. I think the reason I can't import that pool back into 11.3 is that it's looking for the missing spare, which actually still exists.

Old Missing Spare
Code:
 11242790179143267932                        UNAVAIL      0     0     0  was /dev/gptid/61ffe9c0-9f45-11e7-aa40-f44d3074a230


Same Spare as Missing Spare

Code:
spares
          14293407049143018985                            INUSE     was /dev/gptid/dc973530-490f-11e9-94df-001018e4add8
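If the goal is to make the in-use spare the permanent mirror member and clear these entries, the usual ZFS sequence looks roughly like this (a sketch using the device IDs from the zpool status output above; double-check the IDs before running anything):

Code:
# detach the UNAVAIL disk the spare replaced; this promotes the spare
zpool detach Backup 11242790179143267932

# if the hot-spare entry still shows afterwards, remove it from the spares list
zpool remove Backup 14293407049143018985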
 

blackhat840

Dabbler
Joined
Sep 22, 2017
Messages
16
So what the history shows is that you upgraded the pool, which is the last step, and it's also the step that introduces the checkpoint feature. That makes it impossible to use that feature here, since you would have needed to create a checkpoint before checkpoints were possible. (In any case, you would not have known to do that before the upgrade, so you would not have one.)

Not much we can do with that.

You may be able to use zpool import with -f or -F to force the import even with the feature flags that aren't recognized.
After performing zpool import -f -F, my zpool status output shows the following:

Code:
root@freenas[~]# zpool status
  pool: Backup
 state: ONLINE
  scan: scrub in progress since Fri Mar 22 11:53:11 2019
        815G scanned at 1.40G/s, 63.6G issued at 112M/s, 1.23T total
        0 repaired, 5.04% done, 0 days 03:02:37 to go
config:

        NAME                                            STATE     READ WRITE CKSUM
        Backup                                          ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/60e4b84b-9f45-11e7-aa40-f44d3074a230  ONLINE       0     0     0
            gptid/dc973530-490f-11e9-94df-001018e4add8  ONLINE       0     0     0

errors: No known data errors

  pool: ExtBackup
 state: ONLINE
  scan: none requested
config:

        NAME                                          STATE     READ WRITE CKSUM
        ExtBackup                                     ONLINE       0     0     0
          gptid/383f948e-47e9-11e9-b7ef-001018e4add8  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          da0p2     ONLINE       0     0     0

errors: No known data errors


Notice that the Backup pool now shows two members online, is no longer degraded, and is performing a scrub. However, it still does not show up in the Pools section of the GUI. I'm going to let it finish scrubbing, which should be done by the time I get off work today. I'll post an update here to let you know how that goes. Thanks for all the suggestions.
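Worth noting: the FreeNAS GUI only lists pools registered in its configuration database, so a pool imported at the CLI typically won't appear under Storage. A commonly suggested approach (a sketch, not verified on this exact build) is to export from the shell and re-import through the web UI:

Code:
zpool export Backup
# then in the GUI: Storage -> Pools -> Add -> Import an existing pool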
 

blackhat840

Dabbler
Joined
Sep 22, 2017
Messages
16
Welp... after the scrub finished, the pool still does not show up in the GUI, but it is accessible through my jails and add-ons...

Code:

zpool status
  pool: Backup
 state: ONLINE
  scan: scrub repaired 744K in 0 days 04:50:43 with 0 errors on Fri Mar 22 16:43:54 2019
config:

        NAME                                            STATE     READ WRITE CKSUM
        Backup                                          ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/60e4b84b-9f45-11e7-aa40-f44d3074a230  ONLINE       0     0     0
            gptid/dc973530-490f-11e9-94df-001018e4add8  ONLINE       0     0     0

errors: No known data errors

  pool: ExtBackup
 state: ONLINE
  scan: none requested
config:

        NAME                                          STATE     READ WRITE CKSUM
        ExtBackup                                     ONLINE       0     0     0
          gptid/383f948e-47e9-11e9-b7ef-001018e4add8  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          da0p2     ONLINE       0     0     0

errors: No known data errors
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
So what the history shows is that you upgraded the pool, which is the last step, and it's also the step that introduces the checkpoint feature. That makes it impossible to use that feature here, since you would have needed to create a checkpoint before checkpoints were possible. (In any case, you would not have known to do that before the upgrade, so you would not have one.)

Not much we can do with that.

You may be able to use zpool import with -f or -F to force the import even with the feature flags that aren't recognized.
Ignore this post; I don't think this person has any clue what they're talking about. Did you get the 11.2 reinstall working from the conversation in IRC?
 

blackhat840

Dabbler
Joined
Sep 22, 2017
Messages
16
Ignore this post, I don't think this person has any clue what they are talking about. Did you get the 11.2 reinstall working from the conversation in irc?
Not quite. I installed 11.2 and imported my config, but I can't see either of the pools when I do that. If I upgrade back to the 11.3 nightly build, then I can import the ExtBackup pool, but still not the Backup pool. It's like there is something wrong with 11.3, and my pools have been upgraded so they won't work in 11.2...
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
When you do zpool import Backup on the CLI in 11.2, what is the error? Same question for your other pool.
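(Side note: running zpool import with no pool name is safe; it only lists the pools available for import, along with any feature-flag warnings, without changing anything:)

Code:
zpool import          # list importable pools and any warnings
zpool import Backup   # attempt the actual import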
 

blackhat840

Dabbler
Joined
Sep 22, 2017
Messages
16
When you do zpool import Backup on the CLI in 11.2, what is the error? Same question for your other pool.

When I did a CLI zpool import on either pool, it stated that the pools could only be imported read-only, with no data able to be written, due to new features not available in 11.2.

I also noticed GEOM_MIRROR errors in the shell on the attached monitor:

Code:
Cannot open consumer ada2p1 and ada1p1 (error=1)
device swap0 destroyed
cannot open consumer ada2p1 (error=1)
cannot open consumer ada2p1 (error=1)
device swap1 destroyed
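For what it's worth, the read-only import that message points at would look roughly like this (a sketch; readonly=on is a standard ZFS import option, and -R /mnt matches how FreeNAS mounts its pools):

Code:
zpool import -o readonly=on -R /mnt Backup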
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
When I did a CLI zpool import on either pool, it stated that the pools could only be imported read-only, with no data able to be written, due to new features not available in 11.2.

I also noticed GEOM_MIRROR errors in the shell on the attached monitor:

Code:
Cannot open consumer ada2p1 and ada1p1 (error=1)
device swap0 destroyed
cannot open consumer ada2p1 (error=1)
cannot open consumer ada2p1 (error=1)
device swap1 destroyed
So you upgraded your ZFS pool at some point while you were on 11.3. Just so you know: never upgrade your pool unless you plan on never going back to previous releases. I usually wait a couple of months before I upgrade my pool.
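A safe way to check where a pool stands before moving between releases is the following (a sketch; neither command changes anything on disk):

Code:
zpool upgrade                          # lists pools whose format/features can be upgraded
zpool get all Backup | grep feature@   # shows which feature flags are enabled/active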
 

blackhat840

Dabbler
Joined
Sep 22, 2017
Messages
16
So you upgraded your ZFS pool at some point while you were on 11.3. Just so you know: never upgrade your pool unless you plan on never going back to previous releases. I usually wait a couple of months before I upgrade my pool.

Well, I learned my lesson from this one. I've enjoyed FreeNAS for a couple of years now; my system had been up for over a year before I did this update. I guess I can live with the missing pool until I can determine whether this is a bug or something else wrong with the pool.
 