FreeNAS 8.3.2 ZFS drive errors - please help!

Status
Not open for further replies.

ianW

Cadet
Joined
May 13, 2014
Messages
4
OK, so we had some kind of power problem here, and when I got to my NAS box there was no data. A "zpool import" shows the following:

zpool import
pool: Canies_Venatici
id: 7995112297536712063
state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
The pool may be active on another system, but can be imported using
the '-f' flag.
see: http://www.sun.com/msg/ZFS-8000-72
config:

Canies_Venatici FAULTED corrupted data
gptid/a6382084-86ff-11e2-945b-94de80070782 ONLINE

"gpart show" gives the following:

gpart show
=> 34 62533229 ada0 GPT (29G)
34 94 - free - (47k)
128 62533135 1 freebsd-zfs (29G)

=> 34 17581080509 da0 GPT (8.2T)
34 94 - free - (47k)
128 4194304 1 freebsd-swap (2.0G)
4194432 17576886111 2 freebsd-zfs (8.2T)

=> 63 15633345 da1 MBR (7.5G)
63 1930257 1 freebsd [active] (942M)
1930320 63 - free - (31k)
1930383 1930257 2 freebsd (942M)
3860640 3024 3 freebsd (1.5M)
3863664 41328 4 freebsd (20M)
3904992 11728416 - free - (5.6G)

=> 0 1930257 da1s1 BSD (942M)
0 16 - free - (8.0k)
16 1930241 1 !0 (942M)

Any suggestions on how to go about getting this back?

Thanks

Ian
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
If it's FAULTED you've got 3 options:

1. Restore from backup if one exists.
2. Kiss the data goodbye.
3. Contact a company (or myself) for data recovery services. If you call a company, you can expect the price to be five digits... to start. I'm much less, but more than your casual home user will be particularly happy to part with. Some do go that route, though.

I notice you mentioned that you use Highpoint in another thread. In fact, it was a thread where I warned not to use them. Guess this is where you get the "I told ya so" speech.

Hopefully you didn't have important data on the pool that wasn't backed up.
 

ianW

Cadet
Joined
May 13, 2014
Messages
4
OK, so I got the data back, and luckily for me it was not that hard.

First of all, you need to run the usual checks to examine the state of your system:

gpart show
zpool status
camcontrol devlist
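For anyone following along, here is what each of those checks actually tells you (run as root; device and pool names will of course differ on your system):

```shell
# Show every disk's partition table; a missing or mangled
# table here points at a disk or controller problem.
gpart show

# Show the state of all currently imported pools. A FAULTED
# pool that failed to import will NOT appear here - use
# "zpool import" with no arguments to see importable pools.
zpool status

# List every device the CAM layer can see, to confirm the
# controller is actually presenting all of your disks.
camcontrol devlist
```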

As you can see above, my pool was FAULTED and the status showed that the metadata was corrupted.

It seems it is never as bad as it is painted. After digging around, I found that I could check how far back a rollback of the data would go, to see whether I could return the pool to a point before it was damaged. Use the command:

zpool import -nfF [your pool name]

The -n flag means "just show me what you would do, without actually doing anything." I did not want to take the chance of further damaging my data.

The results of that command were:

zpool import -nfF Canies_Venatici
Would be able to return Canies_Venatici to its state as of Thu May 15 10:02:09 2014.
Would discard approximately 5 seconds of transactions.
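For the record, the pattern is: dry-run first, then commit. (The -R altroot option shown at the end is a standard zpool import flag that mounts everything under a temporary root; I did not use it here, but it can sidestep the "failed to create mountpoint" errors you'll see in my output below.)

```shell
# Dry run: report what a rewind import (-F) would discard,
# without touching the pool (-n = no-op, -f = force import).
zpool import -nfF Canies_Venatici

# If the discarded transaction window is acceptable, commit.
zpool import -fF Canies_Venatici

# Alternative: import under a temporary altroot such as /mnt,
# which avoids trying to create mountpoints on the root fs.
# zpool import -fF -R /mnt Canies_Venatici
```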

I figured that nothing ventured, nothing gained, so let's try.

Results were:
~# zpool import -fF Canies_Venatici
Pool Canies_Venatici returned to its state as of Thu May 15 10:02:09 2014.
Discarded approximately 5 seconds of transactions.
cannot mount '/Canies_Venatici': failed to create mountpoint
cannot mount '/Canies_Venatici/Backup-PC': failed to create mountpoint
cannot mount '/Canies_Venatici/Media': failed to create mountpoint
cannot mount '/Canies_Venatici/PlugJail': failed to create mountpoint
cannot mount '/Canies_Venatici/PlugSoft': failed to create mountpoint
cannot mount '/Canies_Venatici/Restricted': failed to create mountpoint
cannot mount '/Canies_Venatici/Vault': failed to create mountpoint
[root@BigSod] ~# gpart show
=> 34 62533229 ada0 GPT (29G)
34 94 - free - (47k)
128 62533135 1 freebsd-zfs (29G)

=> 34 17581080509 da0 GPT (8.2T)
34 94 - free - (47k)
128 4194304 1 freebsd-swap (2.0G)
4194432 17576886111 2 freebsd-zfs (8.2T)

=> 63 15633345 da1 MBR (7.5G)
63 1930257 1 freebsd [active] (942M)
1930320 63 - free - (31k)
1930383 1930257 2 freebsd (942M)
3860640 3024 3 freebsd (1.5M)
3863664 41328 4 freebsd (20M)
3904992 11728416 - free - (5.6G)

=> 0 1930257 da1s1 BSD (942M)
0 16 - free - (8.0k)
16 1930241 1 !0 (942M)

[root@BigSod] ~# camcontrol devlist
<SanDisk SDSSDRC032G 2.0.0> at scbus1 target 0 lun 0 (pass0,ada0)
<HPT DISK 0_0 4.00> at scbus4 target 0 lun 0 (pass1,da0)
<SanDisk Cruzer Fit 1.26> at scbus5 target 0 lun 0 (pass2,da1)
[root@BigSod] ~# zpool status
pool: Canies_Venatici
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: http://www.sun.com/msg/ZFS-8000-8A
scan: scrub repaired 0 in 3h3m with 1 errors on Sun May 4 03:03:19 2014
config:

NAME STATE READ WRITE CKSUM
Canies_Venatici ONLINE 0 0 0
gptid/a6382084-86ff-11e2-945b-94de80070782 ONLINE 0 0 0

errors: 1 data errors, use '-v' for a list
I then logged in via the web GUI, and all the disks and data are now back.

I will clean up the bad file in question and on we go!
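In case it helps anyone else, the cleanup I'm planning is standard ZFS housekeeping, nothing exotic:

```shell
# List the exact files affected by the data errors.
zpool status -v Canies_Venatici

# Delete (or restore from backup) each file it names, then
# clear the error counters and verify with a fresh scrub.
zpool clear Canies_Venatici
zpool scrub Canies_Venatici

# Watch the scrub finish; "errors: No known data errors"
# in the output means the pool is clean again.
zpool status Canies_Venatici
```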

:D Never give up!

Ian
 