Empty datasets after upgrade to 11.2-U2 and volume import

Status
Not open for further replies.

ben-efiz

Cadet
Joined
Feb 21, 2019
Messages
9
Hello everyone, I have been (silently) using FreeNAS for some years now, and besides the usual major-update challenges I have managed to keep my data safe. Until now. As always, I came here first to find help (it works pretty much every time), but now I think I am stuck and just want to be sure there is nothing I missed or overlooked.

TL;DR: Update from 11.1-U6 to 11.2-U2 -> empty datasets after encrypted volume import, while a lot of used space is still shown

Background:
- Update from 11.1-U6 to 11.2-U2
- Had issues with the USB boot stick as described here and here
- Bought another stick and did a fresh install, importing the config DB and pools
- Had a lot of struggle with broken SMB configurations, see here
- Importing my encrypted RAIDZ1 pool always worked without any reported issues, until I looked inside the datasets and found them empty :eek: (the manual equivalent of that import is sketched below)
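
For context, importing a GELI-encrypted pool through the GUI corresponds roughly to the following (a sketch only; the key path /data/geli/pool.key is my assumption about where the middleware keeps the key, and the gptids are the ones from my pool):

Code:

# attach each encrypted provider with the pool key (-p = key only, no passphrase)
geli attach -p -k /data/geli/pool.key /dev/gptid/2401a9d8-76ce-11e8-898c-c8cbb8c53651
geli attach -p -k /data/geli/pool.key /dev/gptid/25203d9e-76ce-11e8-898c-c8cbb8c53651
geli attach -p -k /data/geli/pool.key /dev/gptid/26f86cb7-76ce-11e8-898c-c8cbb8c53651
# then import the pool relative to /mnt, as the middleware does
zpool import -R /mnt pool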

Of course I ran several zfs/zpool commands to investigate:

Code:

zfs list

NAME                                                            USED  AVAIL  REFER  MOUNTPOINT
...
pool                                                           1.35T  3.90T   128K  /mnt/pool
pool/Share                                                      269G  3.90T   117K  /mnt/pool/Share
pool/backup                                                     270G  3.90T   117K  /mnt/pool/backup
pool/data                                                       441G  3.90T   117K  /mnt/pool/data
...


Code:
zpool status pool
  pool: pool
 state: ONLINE
  scan: scrub repaired 0 in 0 days 10:03:36 with 0 errors on Sun Jan 13 10:03:39 2019
config:

    NAME                                                STATE     READ WRITE CKSUM
    pool                                                ONLINE       0     0     0
      raidz1-0                                          ONLINE       0     0     0
        gptid/2401a9d8-76ce-11e8-898c-c8cbb8c53651.eli  ONLINE       0     0     0
        gptid/25203d9e-76ce-11e8-898c-c8cbb8c53651.eli  ONLINE       0     0     0
        gptid/26f86cb7-76ce-11e8-898c-c8cbb8c53651.eli  ONLINE       0     0     0

errors: No known data errors


So the pool and the datasets look fine at the ZFS level. I found this thread about plain directories shadowing dataset mounts, but unmounting did not reveal anything underneath (it just unmounted/removed the dataset); that check is sketched below. I also checked via sudo zpool history pool whether I had made any mistakes; the history follows the sketch.
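
The shadowing check went roughly like this (a sketch; if a stray directory had been created over the mountpoint, its contents would become visible once the dataset is unmounted):

Code:

# where does ZFS think each dataset is mounted?
zfs get -r mounted,mountpoint pool
# unmount the dataset and inspect the underlying directory
zfs unmount pool/data
ls -la /mnt/pool/data
# remount it afterwards
zfs mount pool/data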

Code:
2018-06-23.12:14:10 zpool create -o cachefile=/data/zfs/zpool.cache -o failmode=continue -o autoexpand=on -O compression=lz4 -O aclmode=passthrough -O aclinherit=passthrough -f -m /pool -o altroot=/mnt pool raidz /dev/gptid/2401a9d8-76ce-11e8-898c-c8cbb8c53651.eli /dev/gptid/25203d9e-76ce-11e8-898c-c8cbb8c53651.eli /dev/gptid/26f86cb7-76ce-11e8-898c-c8cbb8c53651.eli
2018-06-23.12:14:15 zfs inherit mountpoint pool
2018-06-23.12:14:15 zpool set cachefile=/data/zfs/zpool.cache pool
2018-06-23.13:42:34 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 1527957841801345122
2018-06-23.13:42:34 zpool set cachefile=/data/zfs/zpool.cache pool
2018-06-23.14:39:52 zfs destroy pool@migrate
2018-06-24.07:39:37 zfs receive -Fv pool
2018-06-24.12:12:17 zpool import -f -R /mnt 1527957841801345122
2018-06-24.12:12:20 zfs inherit -r mountpoint pool
2018-06-24.12:12:20 zpool set cachefile=/data/zfs/zpool.cache pool
2018-06-24.12:12:21 zfs set aclmode=passthrough pool
2018-06-24.12:12:26 zfs set aclinherit=passthrough pool
2018-06-24.12:25:52 zpool scrub pool
2018-06-25.16:46:08 zpool import -f -R /mnt 1527957841801345122
2018-06-25.16:46:12 zfs inherit -r mountpoint pool
2018-06-25.16:46:12 zpool set cachefile=/data/zfs/zpool.cache pool
2018-06-25.16:46:12 zfs set aclmode=passthrough pool
2018-06-25.16:46:17 zfs set aclinherit=passthrough pool
2018-06-25.16:48:52 zpool export -f pool
2018-06-25.16:49:37 zpool import -f -R /mnt 1527957841801345122
2018-06-25.16:49:40 zfs inherit -r mountpoint pool
2018-06-25.16:49:40 zpool set cachefile=/data/zfs/zpool.cache pool
2018-06-25.16:49:40 zfs set aclmode=passthrough pool
2018-06-25.16:49:46 zfs set aclinherit=passthrough pool
2018-07-07.19:58:35 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 1527957841801345122
2018-07-07.19:58:35 zpool set cachefile=/data/zfs/zpool.cache pool
2018-08-05.00:00:11 zpool scrub pool
2018-08-13.15:16:00 zfs create -o quota=none -o refquota=300G -o reservation=none -o refreservation=none -o org.freenas:description=Apple Time Machine -o sync=disabled -o compression=off -o casesensitivity=sensitive pool/time-machine
2018-08-13.15:16:05 zfs set aclmode=passthrough pool/time-machine
2018-09-16.00:00:10 zpool scrub pool
2018-09-23.18:33:58 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 1527957841801345122
2018-09-23.18:33:58 zpool set cachefile=/data/zfs/zpool.cache pool
2018-10-28.00:00:12 zpool scrub pool
2018-11-23.11:53:49 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 1527957841801345122
2018-11-23.11:53:49 zpool set cachefile=/data/zfs/zpool.cache pool
2018-12-01.14:54:34 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 1527957841801345122
2018-12-01.14:54:34 zpool set cachefile=/data/zfs/zpool.cache pool
2018-12-02.00:00:11 zpool scrub pool
2019-01-10.19:24:13 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 1527957841801345122
2019-01-10.19:24:13 zpool set cachefile=/data/zfs/zpool.cache pool
2019-01-11.19:32:01 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 1527957841801345122
2019-01-11.19:32:01 zpool set cachefile=/data/zfs/zpool.cache pool
2019-01-13.00:00:12 zpool scrub pool
2019-02-14.10:45:18 zfs create -o quota=none -o refquota=none -o reservation=none -o refreservation=3T -o org.freenas:description=plots -o compression=off -o casesensitivity=sensitive pool/plots0
2019-02-14.10:45:23 zfs set aclmode=restricted pool/plots0
2019-02-14.10:46:06 zfs set quota=none pool/plots0
2019-02-14.10:46:06 zfs set refquota=none pool/plots0
2019-02-14.10:46:06 zfs set reservation=none pool/plots0
2019-02-14.10:46:06 zfs set refreservation=3T pool/plots0
2019-02-14.10:46:07 zfs set org.freenas:description=plots pool/plots0
2019-02-14.10:46:07 zfs inherit sync pool/plots0
2019-02-14.10:46:07 zfs set compression=off pool/plots0
2019-02-14.10:46:07 zfs inherit atime pool/plots0
2019-02-14.10:46:07 zfs inherit dedup pool/plots0
2019-02-14.10:46:07 zfs inherit recordsize pool/plots0
2019-02-14.10:46:08 zfs inherit readonly pool/plots0
2019-02-14.10:46:13 zfs set aclmode=restricted pool/plots0
2019-02-17.13:47:15 zpool import -f -R /mnt 1527957841801345122
2019-02-17.13:47:20 zfs inherit -r mountpoint pool
2019-02-17.13:47:20 zpool set cachefile=/data/zfs/zpool.cache pool
2019-02-17.13:47:20 zfs set aclmode=passthrough pool
2019-02-17.13:47:25 zfs set aclinherit=passthrough pool
2019-02-17.14:31:02 <iocage> zfs set org.freebsd.ioc:active=yes pool
2019-02-17.14:34:17 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 1527957841801345122
2019-02-17.14:34:17 zpool set cachefile=/data/zfs/zpool.cache pool
2019-02-17.14:38:00 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 1527957841801345122
2019-02-17.14:38:00 zpool set cachefile=/data/zfs/zpool.cache pool
2019-02-18.10:33:43 zfs destroy -fr pool/system/jails/murmur
2019-02-18.21:46:52 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 1527957841801345122
2019-02-18.21:46:52 zpool set cachefile=/data/zfs/zpool.cache pool
2019-02-18.21:59:33 <iocage> zfs mount pool/iocage
2019-02-18.21:59:34 <iocage> zfs mount pool/iocage/download
2019-02-18.21:59:35 <iocage> zfs mount pool/iocage/images
2019-02-18.21:59:35 <iocage> zfs mount pool/iocage/jails
2019-02-18.21:59:36 <iocage> zfs mount pool/iocage/log
2019-02-18.21:59:36 <iocage> zfs mount pool/iocage/releases
2019-02-18.21:59:38 <iocage> zfs mount pool/iocage/templates
2019-02-18.22:13:07 zfs set aclmode=passthrough pool/plots0
2019-02-18.22:22:38 zfs set aclmode=passthrough pool/plots0
2019-02-18.22:31:59 <iocage> zfs mount pool/iocage/download/11.2-RELEASE
2019-02-18.22:32:20 <iocage> zfs mount pool/iocage/releases/11.2-RELEASE/root
2019-02-18.22:33:59 zfs snapshot pool/iocage/releases/11.2-RELEASE/root@plots0
2019-02-18.22:34:01 zfs clone -p pool/iocage/releases/11.2-RELEASE/root@plots0 pool/iocage/jails/plots0/root
2019-02-19.01:54:59 <iocage> zfs destroy plots0
2019-02-19.01:55:01 <iocage> zfs destroy pool/iocage/jails/plots0/root
2019-02-19.01:55:05 <iocage> zfs destroy pool/iocage/jails/plots0
2019-02-21.12:38:59 zpool import -f -R /mnt 1527957841801345122
2019-02-21.12:38:59 zpool set cachefile=/data/zfs/zpool.cache pool
2019-02-21.12:42:18  zfs set refreservation=none pool/plots0
2019-02-21.12:42:18  zfs set copies=1 pool/plots0
2019-02-21.12:42:23 zfs set aclmode=passthrough pool/plots0
2019-02-21.13:10:04 zpool import -f -R /mnt 1527957841801345122
2019-02-21.13:10:04 zpool set cachefile=/data/zfs/zpool.cache pool
2019-02-21.13:29:33 zpool import -f -R /mnt 1527957841801345122
2019-02-21.13:29:33 zpool set cachefile=/data/zfs/zpool.cache pool
2019-02-21.17:02:44 zpool import -f -R /mnt 1527957841801345122
2019-02-21.17:02:52 zfs inherit -r mountpoint pool
2019-02-21.17:02:52 zpool set cachefile=/data/zfs/zpool.cache pool
2019-02-21.17:02:52 zfs set aclmode=passthrough pool
2019-02-21.17:02:55 zfs set aclinherit=passthrough pool
2019-02-21.17:02:56  zfs set mountpoint=legacy pool/.system
2019-02-21.17:14:46 zpool import -f -R /mnt 1527957841801345122
2019-02-21.17:14:53 zfs inherit -r mountpoint pool
2019-02-21.17:14:53 zpool set cachefile=/data/zfs/zpool.cache pool
2019-02-21.17:14:53 zfs set aclmode=passthrough pool
2019-02-21.17:14:58 zfs set aclinherit=passthrough pool
2019-02-21.17:37:02 zpool export -f pool
2019-02-21.17:37:45 zpool import -f -R /mnt 1527957841801345122
2019-02-21.17:37:52 zfs inherit -r mountpoint pool
2019-02-21.17:37:52 zpool set cachefile=/data/zfs/zpool.cache pool
2019-02-21.17:37:52 zfs set aclmode=passthrough pool
2019-02-21.17:37:57 zfs set aclinherit=passthrough pool
2019-02-21.17:45:22 zfs set mountpoint=/mnt/pool/data/ pool/data
2019-02-21.17:47:09 zfs set mountpoint=/pool/data/ pool/data
2019-02-21.18:00:43 zpool export pool
2019-02-21.18:11:10 zpool import -f -R /mnt 1527957841801345122
2019-02-21.18:11:16 zpool set cachefile=/data/zfs/zpool.cache pool
2019-02-21.18:11:16 zfs set aclmode=passthrough pool
2019-02-21.18:11:21 zfs set aclinherit=passthrough pool


As you can see, there are only a few actual zfs actions, mainly pool imports due to power cycles. After the update to 11.2 I played around with new jails via iocage. The old ownCloud jail could still be started via the legacy UI and worked before the first restart of 11.2-U2 (see the boot issue above).

I see a growing number of similar issues and bug reports, e.g. here. So in short: if you go to the mountpoint directory, it is empty, even though the dataset shows a lot of used space. I checked the details of the dataset:

Code:
zfs get all pool/data

NAME       PROPERTY                 VALUE                    SOURCE
pool/data  type                     filesystem               -
pool/data  creation                 Sat Jun 23 21:42 2018    -
pool/data  used                     441G                     -
pool/data  available                3.90T                    -
pool/data  referenced               117K                     -
pool/data  compressratio            1.03x                    -
pool/data  mounted                  yes                      -
pool/data  quota                    none                     default
pool/data  reservation              none                     default
pool/data  recordsize               128K                     default
pool/data  mountpoint               /mnt/pool/data           default
pool/data  sharenfs                 off                      default
pool/data  checksum                 on                       default
pool/data  compression              lz4                      inherited from pool
pool/data  atime                    on                       default
pool/data  devices                  on                       default
pool/data  exec                     on                       default
pool/data  setuid                   on                       default
pool/data  readonly                 off                      default
pool/data  jailed                   off                      default
pool/data  snapdir                  hidden                   default
pool/data  aclmode                  passthrough              inherited from pool
pool/data  aclinherit               passthrough              inherited from pool
pool/data  canmount                 on                       default
pool/data  xattr                    off                      temporary
pool/data  copies                   1                        default
pool/data  version                  5                        -
pool/data  utf8only                 off                      -
pool/data  normalization            none                     -
pool/data  casesensitivity          sensitive                -
pool/data  vscan                    off                      default
pool/data  nbmand                   off                      default
pool/data  sharesmb                 off                      default
pool/data  refquota                 none                     default
pool/data  refreservation           none                     default
pool/data  primarycache             all                      default
pool/data  secondarycache           all                      default
pool/data  usedbysnapshots          441G                     -
pool/data  usedbydataset            117K                     -
pool/data  usedbychildren           0                        -
pool/data  usedbyrefreservation     0                        -
pool/data  logbias                  latency                  default
pool/data  dedup                    off                      default
pool/data  mlslabel                                          -
pool/data  sync                     standard                 default
pool/data  refcompressratio         1.00x                    -
pool/data  written                  95.9K                    -
pool/data  logicalused              453G                     -
pool/data  logicalreferenced        36.5K                    -
pool/data  volmode                  default                  default
pool/data  filesystem_limit         none                     default
pool/data  snapshot_limit           none                     default
pool/data  filesystem_count         none                     default
pool/data  snapshot_count           none                     default
pool/data  redundant_metadata       all                      default
pool/data  org.freenas:description                           received
pool/data  org.freebsd.ioc:active   yes                      inherited from pool


You can see that usedbydataset is tiny, while nearly all of the space sits in snapshots, as shown by usedbysnapshots. The snapshots are pretty old (yes, my fault, especially ahead of a FreeNAS OS upgrade). I reverted back to 11.1-U7, but the situation is the same... The per-snapshot breakdown can be listed as shown below.
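
For completeness, this is how to see where the space went, per snapshot (standard zfs list syntax):

Code:

zfs list -t snapshot -r -o name,creation,used,referenced pool/data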

So, before trying to re-create everything from snapshots, is there anything I missed? Something I can try? What I would otherwise do is sketched below.
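
The recovery I have in mind is roughly the following (a sketch; @latest stands for the newest snapshot name, and pool/data_recovered is a hypothetical clone name):

Code:

# list and peek into snapshots read-only via the hidden .zfs directory
ls /mnt/pool/data/.zfs/snapshot/
ls /mnt/pool/data/.zfs/snapshot/latest
# or roll the dataset back (destroys anything written after that snapshot)
zfs rollback pool/data@latest
# or clone the snapshot to a new dataset and copy the files out
zfs clone pool/data@latest pool/data_recovered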

And to help trace down the bug, can I provide anything? My old logs are obviously gone.
 

ben-efiz

Cadet
Joined
Feb 21, 2019
Messages
9
@dlavigne unfortunately not. I rolled back my snapshots. I didn't know how else to get the data back, since there was no evidence of it anymore (zfs history, etc.). I raised possible causes here, but no clue so far. I recently discovered this project, which might be worth a try:

https://github.com/Stefan311/ZfsSpy

It claims to be able to:
  • explore internal data structures of ZFS pools
  • recover data from damaged ZFS pools
  • recover deleted data from ZFS pools
which sounds very interesting. I was about to try it out on a throwaway ZFS pool (see the sketch below), but as you might understand, I'm currently not really in the mood to fiddle around with ZFS ;)
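
For anyone who wants to experiment with it safely, a disposable file-backed pool is quick to build (a sketch; the file paths and the pool name are arbitrary):

Code:

# three sparse 1 GB files as stand-in disks
truncate -s 1G /tmp/vdev0 /tmp/vdev1 /tmp/vdev2
# scratch raidz1 pool on top of them
zpool create scratch raidz /tmp/vdev0 /tmp/vdev1 /tmp/vdev2
# ...experiment, then tear it down
zpool destroy scratch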
 

dlavigne

Guest
If it's still unresolved, please create a report at redmine.ixsystems.com that contains your debug file (System -> Advanced -> Save Debug) so a dev can take a look at what happened.
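
If the UI is unreachable, roughly the same archive can be produced from the shell (assuming the freenas-debug utility that ships with 11.x, where -A should collect everything):

Code:

freenas-debug -A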
 

ben-efiz

Cadet
Joined
Feb 21, 2019
Messages
9
@dlavigne unfortunately I can't provide any information beyond what is already here. I re-installed 11.1 and 11.2 a couple of times due to other USB boot device errors (which are already tracked here and here) and didn't bother to check the contents of the pools/datasets, since they were imported without any issues. I only realised later that they were empty. zfs history doesn't show any evidence (see above). The logs had been stored on the USB stick (I have changed that now) but were lost during the repeated reformatting/re-installing of the stick...

In another thread someone was able to provide logs via PM. I'm not sure which bug report that is tracked in (maybe a private one?). But there is another one with logs attached already.
 
