
No data after zfs pool import

Big Black Duckk

Neophyte
Joined
May 6, 2020
Messages
4
OK, so after a fresh FreeNAS install, my previously exported ZFS pool has no data on it whatsoever:
- no datasets or zvol
- no folders or files

The problem seems to be that when I try to import my encrypted pool, the list of disks is empty. However, when I import it as if it had no encryption, the pool imports fine, but it's empty.

When I look in the UI, my disks show up just fine under Storage > Disks.
I have a copy of my encryption key and recovery key, but there seems to be no way to enter them anywhere.

I've been at it for two days and nothing seems to be working.
What should've been an easy, automated process has turned into a big headache.

I just really hope someone here knows what to do.

Version: FreeNAS-11.3-U2.1
Physical Memory: 15232 MiB
CPU: AMD A10-7800 Radeon R7, 12 Compute Cores 4C+8G


Code:
root@freenas[~]# zpool status
  pool: freenas-boot
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          ada1p2    ONLINE       0     0     0

errors: No known data errors


root@freenas[~]# zpool import
   pool: HDD
     id: 4969374686145841218
  state: UNAVAIL
status: The pool can only be accessed in read-only mode on this system. It
        cannot be accessed in read-write mode because it uses the following
        feature(s) not supported on this system:
        org.zfsonlinux:project_quota (space/object accounting based on project ID.)
        org.zfsonlinux:userobj_accounting (User/Group object accounting.)
        com.delphix:log_spacemap (Log metaslab changes on a single spacemap and flush them periodically.)
action: The pool cannot be imported in read-write mode. Import the pool with
        "-o readonly=on", access the pool on a system that supports the
        required feature(s), or recreate the pool from backup.
 

HoneyBadger

Mushroom! Mushroom!
Joined
Feb 6, 2014
Messages
2,453
Unfortunately, when you created your zpool "HDD", you (or whatever system created it) enabled several feature flags that are only supported in very recent ZFS on Linux (ZoL) releases.


The three offenders, as shown in the output of your failed import, are:

Code:
org.zfsonlinux:project_quota (space/object accounting based on project ID.)
org.zfsonlinux:userobj_accounting (User/Group object accounting.)
com.delphix:log_spacemap (Log metaslab changes on a single spacemap and flush them periodically.)


None of these flags is supported in any of the FreeBSD 11.x-based releases of FreeNAS or TrueNAS. The upcoming switch to OpenZFS in the 12.x branch may add support for them, but it's presently at a nightly/alpha stage, and I wouldn't suggest trusting it with data you consider valuable.

Sorry to be the bearer of bad news. You'll have to import this pool on a current ZoL system. If you're set on migrating, you'll need to make a new pool without the incompatible flags and then export it there and import it here.
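
If you do end up rebuilding on the ZoL side before migrating, a rough sketch (pool and device names are hypothetical; "-d" creates the pool with all feature flags disabled, so you can then enable only ones FreeBSD 11.x understands):

Code:
# Create a pool with every feature flag disabled:
zpool create -d newpool /dev/sdb

# Then enable only flags FreeBSD 11.x supports, for example:
zpool set feature@async_destroy=enabled newpool
zpool set feature@lz4_compress=enabled newpool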
 

Big Black Duckk

Neophyte
Joined
May 6, 2020
Messages
4
Thanks for the quick response!

At this point, what's most important to me is recovering my data... I simply assumed that fixing the pool would do this.
However, if I read and understood your comment correctly, I should be able to import my disk into a new pool if I create one?
 

Yorick

Neophyte Sage
Joined
Nov 4, 2018
Messages
1,226
I should be able to import my disk into a new pool if I create one?
Yes, if worst comes to worst, rsync the data over. If the read-only pool has a snapshot, you could replicate it with zfs send/receive instead.
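
Either way, the commands look roughly like this ("HDD" is your existing pool; "tank" and the dataset/snapshot names are placeholders):

Code:
# Plain file-level copy from the read-only pool to the new one:
rsync -avhP /mnt/HDD/ /mnt/tank/restored/

# Or, if a snapshot exists, replicate a dataset with ZFS itself:
zfs send HDD/music@snap | zfs recv tank/music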
 

Big Black Duckk

Neophyte
Joined
May 6, 2020
Messages
4
Okay, it sounded like a pretty straightforward job, but when I create a new pool and add my disk, it says all content will be erased. Needless to say, I didn't proceed.

I'm thinking maybe I misunderstood something or am just doing it wrong. I've attached a screenshot for your convenience; hoping you can point me in the right direction.
 


Yorick

Neophyte Sage
Joined
Nov 4, 2018
Messages
1,226
Okay, I thought you were saying you'd create a new pool on a separate disk, import this one read-only, and then copy your data over.

Your choices at this point are:

- Import read-only, get the data off to another drive, blow away the current drive and create a new pool on it (*), then copy the data back in.
- Build a new pool on a new drive, import this one read-only, copy the data over, then wipe this drive and attach it as a mirror - yay, redundancy.
- Import this drive in Ubuntu, not FreeNAS. It should understand the feature flags (see the sketch at the end of this post).

(*) What was the intent for this pool? A single drive without redundancy is risky for your data. Why go with ZFS when there's no redundancy to let ZFS do its thing? Without redundancy, ZFS is arguably not much better than ext4 or NTFS. A little better: it'll at least tell you when something is dying.
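
A minimal sketch of the Ubuntu route, assuming Ubuntu 20.04 or similar (package name and mountpoint may differ on your system):

Code:
# Install the ZFS userland tools:
sudo apt install zfsutils-linux

# Import the pool read-only so nothing on it is modified:
sudo zpool import -o readonly=on HDD

# Datasets should now be mounted under the pool's mountpoint, e.g.:
ls /HDD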
 

Big Black Duckk

Neophyte
Joined
May 6, 2020
Messages
4
I was testing FreeNAS to see if it's for me. I'm an engineer, so it's mostly music projects on there. I was planning on putting more of my drives into the pool after I felt it was safe.

I'll put the drive in a Linux machine tomorrow and see if I can get my data off of it.
I'll keep you posted ;)
 

Yorick

Neophyte Sage
Joined
Nov 4, 2018
Messages
1,226
I was planning on putting more of my drives into the pool after I felt it was safe.
Are you aware that a raidz(n) vdev cannot be expanded? If you put more of your drives into the pool and want your data to survive the failure of any one drive, you'll end up with a bunch of mirror vdevs: start with one disk, attach another to form mirror vdev 0, then add two drives at a time for more mirror vdevs. Or add a bunch more drives as a single raidz1 vdev; that works too, but seriously begs the question "why".
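
A rough sketch of that growth path with zpool commands (hypothetical device names; in practice you'd do this through the FreeNAS UI):

Code:
# Single disk, then attach a second to form mirror vdev 0:
zpool create tank ada2
zpool attach tank ada2 ada3

# Later, grow the pool two drives at a time as extra mirror vdevs:
zpool add tank mirror ada4 ada5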

For music, movies, backups, and other generic file storage, a single raidz2 vdev is often the best choice, 5 to 8 drives wide. That way, two drives can fail before you lose data. The risk of a drive failing during rebuild goes up as drive size increases, because a rebuild can take days and stresses the drives.
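
For comparison, a 6-wide raidz2 is created in one go (hypothetical device names again):

Code:
zpool create tank raidz2 ada2 ada3 ada4 ada5 ada6 ada7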
 