Importing a ZFS pool from NAS4Free fails - shows corrupt

Status
Not open for further replies.

Midwan

Dabbler
Joined
Dec 16, 2013
Messages
15
Hi everyone,

I've been using NAS4Free (v9.1.x) for a few years now, running a ZFS pool with 4 disks in a RAIDZ1 configuration without problems.
I decided to give FreeNAS a try now that the latest version runs on FreeBSD 9.x with the latest ZFS version as well. Since the ZFS version is the same as on NAS4Free (v28), I didn't expect any problems importing the pool. Note: the ZFS pool is not encrypted.

Unfortunately, when I try the auto-import volumes function in FreeNAS, it finds the pool correctly but doesn't import it. Running "zpool import" from the shell gave me the reason why:

Code:
[root@koula ~]# zpool import                                               
  pool: KoulaOne                                                         
    id: 1976880193882533230                                               
  state: UNAVAIL                                                           
status: One or more devices contains corrupted data.                     
action: The pool cannot be imported due to damaged devices or data.       
  see: http://illumos.org/msg/ZFS-8000-5E                                 
config:                                                                   
                                                                           
        KoulaOne                  UNAVAIL  insufficient replicas           
          raidz1-0                UNAVAIL  insufficient replicas           
            14963642313260917131  UNAVAIL  corrupted data                 
            15112354759630307751  UNAVAIL  corrupted data                 
            18090732378555764694  UNAVAIL  corrupted data                 
            17259236641428954265  UNAVAIL  corrupted data 


Now, the interesting thing is that the pool is actually fine: if I boot back into NAS4Free and run "zpool status", it shows up without problems:

Code:
 zpool status
  pool: KoulaOne
state: ONLINE
  scan: scrub repaired 40K in 6h44m with 0 errors on Mon Dec  9 05:22:44 2013
config:
 
        NAME          STATE    READ WRITE CKSUM
        KoulaOne      ONLINE      0    0    0
          raidz1-0    ONLINE      0    0    0
            ada0.nop  ONLINE      0    0    0
            ada1.nop  ONLINE      0    0    0
            ada2.nop  ONLINE      0    0    0
            ada3.nop  ONLINE      0    0    0
 
errors: No known data errors

And of course I can still access all the data there. I tried exporting the pool from NAS4Free to see if that would make a difference (zpool -f export KoulaOne), but the results were the same. I even ran a scrub just in case, which came back without any problems.

So what am I missing here? Is there any difference between the two systems that could cause this specifically?

edit: Forgot to mention, I'm testing FreeNAS version 9.2.0-RC-x64.
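For the record, the export-then-import sequence looks like this as a sketch (pool name from this thread; the first command runs on NAS4Free, the rest on FreeNAS):

```shell
# On NAS4Free: cleanly export the pool so it is no longer marked in use
zpool export KoulaOne

# On FreeNAS: list importable pools, then import by name
zpool import
zpool import KoulaOne   # add -f only if ZFS complains the pool may be in use
```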
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I don't know how NAS4Free does its pools, but it looks like whatever it does may not be compatible with FreeNAS.

As a side note, FreeNAS is using v5000 while NAS4Free is still on v28, as far as I know.
 

Midwan

Dabbler
Joined
Dec 16, 2013
Messages
15
Interestingly, after reading through this thread: http://forums.freenas.org/threads/zfs-pool-not-available-after-upgrade-from-8-3-1-to-9-1-beta.13734/ I had an idea of what might be wrong.

I noticed that the disks in the NAS4Free pool are named with their device label "adaX.nop", while under FreeNAS they have a gptid instead. I assume the import tries to find the original device labels and fails, leading to the error displayed?

Then the question is: is it possible to change this without destroying the pool and recreating it?
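For comparison, FreeBSD's GEOM tools can show how each system names the same disks; a quick check from either OS's shell (a sketch, nothing pool-specific assumed):

```shell
# Show GEOM labels (gptids, GPT labels) and the raw devices they map to
glabel status

# Show the partition layout of each disk, if any
gpart show
```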
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
FreeNAS doesn't care if the device names change, as long as it is able to detect and use the pool devices. Something just isn't behaving right with your old pool. It's a case of you say potato and I say potato: different names that ultimately should point to the same place.
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
The NAS4Free zpool status output shows that N4F uses the NOP geom layer (device names end with .nop). I guess it does that to "force" 4K sectors.
Being curious, I installed NAS4Free in a VM, created a pool (checked the 4K-sectors checkbox) and got a pool that uses gnop devices. I shut down the VM, attached the disks to a FreeNAS VM, and could import the pool without any problem. zpool import just noted that the pool uses a legacy format (NAS4Free uses pool v28, FreeNAS already uses feature flags -- pool "v5000"), but everything works and the pool is healthy.
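My assumption of what the 4K-sectors checkbox does under the hood (a sketch, not N4F's actual code): wrap each disk in a gnop provider that advertises 4096-byte sectors, then build the pool on the .nop devices so ZFS selects ashift=12:

```shell
# Wrap each disk in a NOP provider reporting 4096-byte sectors
for d in ada0 ada1 ada2 ada3; do
    gnop create -S 4096 "/dev/$d"
done

# Create the pool on the .nop devices; ZFS picks ashift=12 from the sector size
zpool create KoulaOne raidz1 ada0.nop ada1.nop ada2.nop ada3.nop
```

The .nop providers vanish on reboot and ZFS then finds the pool on the raw adaX devices, so the trick only matters at creation time.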
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Good test, but there's no guarantee that they haven't changed something that might have an impact. FreeNAS 8.0 betas apparently didn't use gptids or two partitions, from what I've heard. I'm not saying your test is invalid, but it does raise the question of what's different between your VM test and the OP's real-life pool. I'll admit I have no clue what the NOP geom layer is; that's Greek to me.

@Midwan,

Can you post the output of zdb -l /dev/XXX, substituting XXX with whatever device your disks come up as under FreeNAS? It'll be long, so you might want to just attach it as a file.
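A loop along these lines collects one label dump per disk (device names are an assumption; substitute whatever FreeNAS shows):

```shell
# Dump the ZFS labels each disk carries into a file per device
for d in ada0 ada1 ada2 ada3; do
    zdb -l "/dev/$d" > "$d.txt"
done
```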
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
Agreed, my test just adds one data point to the puzzle. A pool created by the latest N4F seems to import fine (but I'm not a N4F expert, so maybe I did something wrong or differently). I hoped I would be able to reproduce the issue and troubleshoot it in my environment, but so far no luck.
 

Midwan

Dabbler
Joined
Dec 16, 2013
Messages
15
@Dusan: Thanks a lot for going through the trouble of trying to recreate the issue. I'm also glad to see that at least it works in general, so it must be specific to my setup/situation somehow.

@cyberjock: I'm attaching the results in separate files (one per device) below; I hope it helps.

Meanwhile, I'm taking a fresh full backup of the contents and I'll experiment a bit, perhaps even recreating the whole pool in FreeNAS from scratch if all else fails. I'm just curious why it doesn't work, since it was "supposed" to. :smile:
 

Attachments

  • ada0.nop.txt
  • ada1.nop.txt
  • ada2.nop.txt
  • ada3.nop.txt

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
I assume these zdb outputs are from N4F. Does zdb -l <device> output anything in FreeNAS?
 

Midwan

Dabbler
Joined
Dec 16, 2013
Messages
15
Yes, the output shown above is from N4F. I'll post the same from FreeNAS as soon as I've finished backing everything up, just in case, and rebooted into it. :smile:
 

Midwan

Dabbler
Joined
Dec 16, 2013
Messages
15
...and here's the output from FreeNAS, as promised. From what I can see, it's identical except for the values in the "txg" row, and I'm not really sure what that stands for. :-/
 

Attachments

  • ada0.txt
  • ada1.txt
  • ada2.txt
  • ada3.txt

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
txg stands for "transaction group". The counter increases as you write to the pool, so it's normal for it to be larger in the second output.
Otherwise, I'm currently out of ideas on how to get it to import. :(
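To compare just the txg values without wading through the full dumps, something like this works (device name assumed):

```shell
# Each disk stores multiple copies of its label; the txg fields should agree
zdb -l /dev/ada0 | grep txg
```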
 

Midwan

Dabbler
Joined
Dec 16, 2013
Messages
15
OK, thanks for the help so far anyway. I'll give the 9.1 release a try as well, in case there's anything in the RC that might cause this (unlikely, but you never know), and failing that I think I'll destroy the pool and recreate it from scratch.

Perhaps it's a good opportunity to change something in the configuration as well. I currently have 4 HDDs (as you saw above), but I was thinking of adding a 5th and perhaps changing the RAIDZ1 to RAIDZ2. I've read in other posts and websites that, for optimal performance, the number of data disks should ideally be a power of two (plus the parity disks). This being just a home NAS, I have no high performance demands (the bottleneck is the 1 Gbps network interface anyway), but I'd prefer to be safer against disk failures.

Needless to say, there are separate backups which are updated at certain intervals. ;-)

Do you have any suggestions from personal experience regarding configuration?
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
RAIDZ2 is definitely the recommended config. RAIDZ1 is too risky with today's drive sizes.
In a home NAS you shouldn't notice any performance degradation with a "non-optimal" number of drives.
You can always test the performance yourself before you commit your data to the pool.
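For reference, the 5-disk RAIDZ2 layout discussed above could be created from the shell roughly like this (pool and device names are assumptions; the FreeNAS GUI does the equivalent with gptid-labelled partitions):

```shell
# Double parity across five disks: any two can fail without data loss
zpool create tank raidz2 /dev/ada0 /dev/ada1 /dev/ada2 /dev/ada3 /dev/ada4

# Sanity-check the layout before committing data
zpool status tank
```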
 

Midwan

Dabbler
Joined
Dec 16, 2013
Messages
15
Great, that's what I suspected as well. Thanks for the help again! :smile:
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
OK, try this... it's just an idea.

Export your pool in NAS4Free first. Then try to do an import with FreeNAS.
 

Midwan

Dabbler
Joined
Dec 16, 2013
Messages
15
I've tried that already; if you look at my first post, I actually mentioned it ("I tried exporting the pool from NAS4Free to see if that would make a difference (zpool -f export KoulaOne) but the results were the same").

Never mind, I've set up a new pool from scratch in FreeNAS and I'm now testing a few things before I settle on this setup. It still bugs me that I couldn't find out what caused it, but I guess we can leave it for now.
 