LOST ZPOOL, PLEASE HELP


papageorgi

Explorer
I was having trouble importing my ZFS v28 pool from my FreeNAS 8.3.1-p2 x64 build into FreeNAS 9.1.0-RC1 x64. That part eventually worked out, but along the way I deleted a file in the /mnt folder called something.000. While trying to re-import the pool, I thought I would destroy it and have it re-imported, but that didn't work and now I'm stuck. For some reason this version of FreeNAS sees all the drives as unavailable even though they're fine; I've verified that just by looking at them in several other ZFS-aware OSes and in the older version of FreeNAS, and they always say the disks are online!

Note: I started this discussion in another thread, but I think it is more appropriate to continue it here
( http://forums.freenas.org/threads/zfs-pool-not-available-after-upgrade-from-8-3-1-to-9-1-beta.13734/ )

Code:
[smurfy@nas] /# zpool import -D -R /mnt stor
cannot import 'stor': no such pool or dataset
Destroy and re-create the pool from
a backup source.
 
[smurfy@nas] /# zpool import -Df stor
cannot import 'stor': no such pool or dataset
Destroy and re-create the pool from
a backup source.
[smurfy@nas] /# zpool import -d /dev/dsk/16752418983724484862 stor
cannot open '/dev/dsk/16752418983724484862': must be an absolute path
cannot import 'stor': no such pool available
 
[smurfy@nas] /# zpool import -D -R /mnt -o rdonly=on 14817132263352275435
cannot import 'stor': no such pool or dataset
Destroy and re-create the pool from
a backup source.
 
[smurfy@nas] /# zpool import -D
  pool: stor
    id: 14817132263352275435
  state: UNAVAIL (DESTROYED)
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
  see: http://illumos.org/msg/ZFS-8000-3C
config:
 
        stor                      UNAVAIL  insufficient replicas
          raidz1-0                UNAVAIL  insufficient replicas
            16752418983724484862  UNAVAIL  cannot open
            16923247324607746236  UNAVAIL  cannot open
            10063454983377925543  UNAVAIL  cannot open
            800970979980896464    UNAVAIL  cannot open
            11402190904943199729  UNAVAIL  cannot open
          raidz1-1                UNAVAIL  insufficient replicas
            13898617183391027350  UNAVAIL  cannot open
            10638658567095667509  UNAVAIL  cannot open
            10179731959774134998  UNAVAIL  cannot open
            18380244200663678529  UNAVAIL  cannot open
            14569402982510951241  UNAVAIL  cannot open
 
[smurfy@nas] /# zpool online 16752418983724484862
missing device name
usage:
online [-e] <pool> <device> ...
[smurfy@nas] /# smartctl -a /dev/ada0 | grep "test result: "
SMART overall-health self-assessment test result: PASSED
[smurfy@nas] /# smartctl -a /dev/ada1 | grep "test result: "
SMART overall-health self-assessment test result: PASSED
[smurfy@nas] /# smartctl -a /dev/ada2 | grep "test result: "
SMART overall-health self-assessment test result: PASSED
[smurfy@nas] /# smartctl -a /dev/ada3 | grep "test result: "
SMART overall-health self-assessment test result: PASSED
[smurfy@nas] /# smartctl -a /dev/ada4 | grep "test result: "
SMART overall-health self-assessment test result: PASSED
[smurfy@nas] /# smartctl -a /dev/da0 | grep "test result: "
SMART overall-health self-assessment test result: PASSED
[smurfy@nas] /# smartctl -a /dev/da1 | grep "test result: "
SMART overall-health self-assessment test result: PASSED
[smurfy@nas] /# smartctl -a /dev/da2 | grep "test result: "
SMART overall-health self-assessment test result: PASSED
[smurfy@nas] /# smartctl -a /dev/da3 | grep "test result: "
SMART overall-health self-assessment test result: PASSED
[smurfy@nas] /# smartctl -a /dev/da4 | grep "test result: "
SMART overall-health self-assessment test result: PASSED
 

Attachments

  • traversing blocks to verify checksums.png

cyberjock

Inactive Account
I will tell you that a "PASSED" test result means nothing at all. I have several hard drives that you can't even format without getting an error, yet they still show "PASSED" too.
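If you want something stronger than the overall health flag, actually running a long self-test on each disk is one option (just a sketch; substitute each of your device names and wait for the test to finish before reading the results):

Code:
# Kick off an extended (long) offline self-test on one drive
smartctl -t long /dev/ada0
# smartctl prints the expected duration; once it has elapsed, check the
# self-test log and attributes for that drive
smartctl -a /dev/ada0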

I have no ideas on how to recover your data. Sorry. :(
 

papageorgi

Explorer
Since every other OS sees the drives as online, I feel it has to be a bug, and somehow I'm still confident it will be fine after a software update. That may take some time, but it may be the right thing to wait for. I suspect a bug because I'm still verifying a "destroyed" pool and, hours in, it's checking out perfectly. It also recently passed a scrub that I think took about 15.5 hours, and it's picking up the pace slowly but steadily. I think it's only this slow because of the way my RAID is set up, which I'll soon reconfigure following delphij's advice.

Strangely, it's now seeing the pool, kind of... I did recreate the mount directory in the shell.
 

Attachments

  • Screenshot - 07192013 - 09:30:34 PM.png
  • this is new.png

cyberjock

Inactive Account
I wouldn't necessarily call it a "bug". I'd say it's more a condition where your zpool is outside of its designed parameters. While I expect it to be capable of traversing the checksums, I don't expect it to come back with a restorable zpool. The command you ran doesn't throw error messages if it finds something "bad".

I really don't have any good options for you. But I will tell you that some commands are destructive and irreversible, so you'd better be sure that any command you run does exactly what you intend. Running a command just because someone else on the planet ran it and it worked is NOT (in my opinion) justification to try it yourself. That's how a lot of FreeNAS users end up in trouble: they have a problem, they have no understanding of how ZFS works or what those commands actually do, and they run commands they saw online because they worked for some random person somewhere. For every one person who gets their data back that way, dozens of others lose theirs.

FreeNAS has three "grades" for a zpool: HEALTHY, where all disks are detected and working; DEGRADED, where there is at least one error or one missing disk but enough replicas to keep the pool online; and UNKNOWN, a zpool that doesn't have enough replicas to function.
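From the command line the same information looks roughly like this (a sketch; the GUI's HEALTHY/DEGRADED/UNKNOWN labels map onto what zpool reports):

Code:
# One line per pool with its current health (ONLINE, DEGRADED, UNAVAIL, ...)
zpool list -H -o name,health
# Verbose status for anything that is not healthy, including which vdev
# or disk is the problem
zpool status -x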

Good luck! I hope you get your data back.
 

delphij

FreeNAS Core Team
I was having trouble importing my ZFS v28 pool from my FreeNAS 8.3.1-p2 x64 build into FreeNAS 9.1.0-RC1 x64. That part eventually worked out, but along the way I deleted a file in the /mnt folder called something.000. While trying to re-import the pool, I thought I would destroy it and have it re-imported, but that didn't work and now I'm stuck. For some reason this version of FreeNAS sees all the drives as unavailable even though they're fine; I've verified that just by looking at them in several other ZFS-aware OSes and in the older version of FreeNAS, and they always say the disks are online!

Please do the following from the command line:

Code:
sysctl vfs.zfs.vdev.larger_ashift_disable=1


Then:

Code:
zpool import -D -R /mnt stor
zpool export stor

If these work, try importing the pool from the GUI. If that works, do NOT do anything else (including rebooting); instead, wait for the next RC, which will address this issue.

If they don't work, please let us know.
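(If you want to confirm the sysctl took effect before trying the import, reading it back without assigning a value just prints the current setting; a quick sketch:)

Code:
# Should print: vfs.zfs.vdev.larger_ashift_disable: 1
sysctl vfs.zfs.vdev.larger_ashift_disable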
 

papageorgi

Explorer
Thank you so much, sir. I will try this soon; I just won't sleep if I try it now. I did look at the server this morning after running "zdb -e -bcsvl stor" over SSH, and the laptop went to sleep, severing the connection. Nothing has really changed, but now the server doesn't see the pool in 9.1.0. I will run the commands you provided very soon. Again, thank you so much, I can't express it.

The attached pictures are from moments ago.
 

Attachments

  • freenas 8.3.1.png
  • freenas 9.1.0 a few minutes later.png

delphij

FreeNAS Core Team
Well, just calm down (as far as I can tell, your data is not lost at this point), forget about FreeNAS 8.x (it won't import your pool because its ZFS version is older and you have upgraded the pool), and don't try arbitrary things you find on the Internet unless you know exactly what those instructions do; they could do more harm than good.
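(For background, a system can only import pools at or below the version it supports; you can see what a given build supports with the command below. This is just a reference sketch, not part of the recovery steps.)

Code:
# Lists the ZFS pool versions this system's zpool supports.
# An 8.3.1 system tops out at pool version 28, so a pool upgraded past
# that is reported there as "newer version" and cannot be imported.
zpool upgrade -v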
 

papageorgi

Explorer
I'm very much trying, thank you. I just tried the commands, and I've done nothing else other than what you already know about:
- exported the pool
- destroyed the pool
- viewed it in other OS versions
- re-imaged the USB key and uploaded my configuration to it
- ran the zdb command (the SSH session was broken by my laptop going to sleep)

Here are the results:
Code:
[root@nas ~]# sysctl vfs.zfs.vdev.larger_ashift_disable=1
vfs.zfs.vdev.larger_ashift_disable: 1 -> 1
[root@nas ~]# zpool import -D -R /mnt stor
cannot import 'stor': no such pool or dataset
Destroy and re-create the pool from
a backup source.
[root@nas ~]#
 

papageorgi

Explorer
After rebooting:
Code:
[root@nas ~]# zpool import -D -R /mnt stor                                                                                          
cannot import 'stor': no such pool or dataset                                                                                       
        Destroy and re-create the pool from                                                                                         
        a backup source.
[root@nas ~]# zpool import -D -R /mnt/stor                                                                                          
   pool: stor                                                                                                                       
     id: 14817132263352275435                                                                                                       
  state: UNAVAIL (DESTROYED)                                                                                                        
 status: One or more devices are missing from the system.                                                                           
 action: The pool cannot be imported. Attach the missing                                                                            
        devices and try again.                                                                                                      
   see: http://illumos.org/msg/ZFS-8000-3C                                                                                          
 config:                                                                                                                            
                                                                                                                                    
        stor                      UNAVAIL  insufficient replicas                                                                    
          raidz1-0                UNAVAIL  insufficient replicas                                                                    
            16752418983724484862  UNAVAIL  cannot open                                                                              
            16923247324607746236  UNAVAIL  cannot open                                                                              
            10063454983377925543  UNAVAIL  cannot open                                                                              
            800970979980896464    UNAVAIL  cannot open                                                                              
            11402190904943199729  UNAVAIL  cannot open                                                                              
          raidz1-1                UNAVAIL  insufficient replicas                                                                    
            13898617183391027350  UNAVAIL  cannot open                                                                              
            10638658567095667509  UNAVAIL  cannot open                                                                              
            10179731959774134998  UNAVAIL  cannot open                                                                              
            18380244200663678529  UNAVAIL  cannot open                                                                              
            14569402982510951241  UNAVAIL  cannot open
[root@nas ~]#

So this is better, a definite improvement... we're getting closer :)
 

delphij

FreeNAS Core Team
What do 'sysctl vfs.zfs.vdev.larger_ashift_disable' and 'zpool import' say?

By the way, how did you destroy your pool (which you should never have done: that step can damage data)? If you got the idea from our documentation, please give us a pointer and we will fix it.

Please also attach your 'gpart show' output.
 

papageorgi

Explorer
After typing 'sysctl vfs.zfs.vdev.larger_ashift_disable' it just repeated the setting back, and after a reboot (not at first) the import command said the same thing as before: insufficient replicas, but no actual import. That's still a big improvement, since before typing those commands I had no pool at all with "zpool import -D"... so it sees the pool again!

I will make a dd copy of each drive just in case, but no other funny-business commands. That way I'll have less chance of messing things up further, and I won't run any odd commands I find from scouring the intertubes. I just wish FreeNAS saw the drives as available; I feel like that's really the key, I just can't figure out how to make it happen.
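(Roughly what I have in mind for those copies; the target path is just a placeholder, each image needs as much free space as the source disk, and the pool has to stay untouched while the copies run.)

Code:
# Sketch only: raw image of one member disk onto *separate* storage.
# /dev/ada0 is one of the pool members; /backup is a hypothetical mount point.
dd if=/dev/ada0 of=/backup/ada0.img bs=1m conv=noerror,sync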

I got the idea to destroy the pool from a random blog, not the FreeNAS manual/documentation. I dearly wish I had avoided doing so, or at least had a full backup (or just a better backup solution). I believe the commands were "zpool export stor" and then "zpool destroy stor"; there were no errors and it was very quick.

Also, here's the output you asked for:
Code:
[smurfy@nas] /> gpart show
=>        34  7814037101  da0  GPT  (3.7T)
          34          94      - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  7809842703    2  freebsd-zfs  (3.7T)
 
=>        34  7814037101  da1  GPT  (3.7T)
          34          94      - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  7809842703    2  freebsd-zfs  (3.7T)
 
=>        34  7814037101  da2  GPT  (3.7T)
          34          94      - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  7809842703    2  freebsd-zfs  (3.7T)
 
=>        34  7814037101  da3  GPT  (3.7T)
          34          94      - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  7809842703    2  freebsd-zfs  (3.7T)
 
=>        34  7814037101  da4  GPT  (3.7T)
          34          94      - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  7809842703    2  freebsd-zfs  (3.7T)
 
=>      63  31326145  da5  MBR  (15G)
        63  3590433    1  freebsd  [active]  (1.7G)
  3590496        63      - free -  (31k)
  3590559  3590433    2  freebsd  (1.7G)
  7180992      3024    3  freebsd  (1.5M)
  7184016    41328    4  freebsd  (20M)
  7225344  24100864      - free -  (11G)
 
=>        34  1953525101  ada0  GPT  (931G)
          34          94        - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  1949330703    2  freebsd-zfs  (929G)
 
=>        34  1953525101  ada1  GPT  (931G)
          34          94        - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  1949330696    2  freebsd-zfs  (929G)
  1953525128          7        - free -  (3.5k)
 
=>        34  1953525101  ada2  GPT  (931G)
          34          94        - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  1949330703    2  freebsd-zfs  (929G)
 
=>        34  1953525101  ada3  GPT  (931G)
          34          94        - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  1949330703    2  freebsd-zfs  (929G)
 
=>        34  1953525101  ada4  GPT  (931G)
          34          94        - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  1949330703    2  freebsd-zfs  (929G)
 
=>      0  3590433  da5s1  BSD  (1.7G)
        0      16        - free -  (8.0k)
      16  3590417      1  !0  (1.7G)
 
[smurfy@nas] />
 

cyberjock

Inactive Account
Both an export and a destroy are very quick; each involves only a small amount of processing and touches just a few KB on each of the disks in the pool. Unfortunately, after you destroyed the zpool, the disks were no longer protected by FreeNAS from anything being written to them. The disks were basically free for anything to write to. Granted, nothing should be writing to empty disks, but it could happen.
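For reference, this is roughly how that sequence behaves at the command level (a sketch of the semantics, not something to run again):

Code:
# Export cleanly detaches the pool; destroy additionally marks the on-disk
# labels as destroyed. Neither command wipes the data blocks themselves.
zpool export stor
zpool destroy stor    # never run this on a pool you intend to keep

# A destroyed pool is hidden from a plain "zpool import"; -D lists pools
# whose labels carry the destroyed flag, and "-D <pool>" tries to import one.
zpool import -D
zpool import -D stor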

There really is no reason to ever destroy a zpool unless you are done with it for good. After that, any attempt at recovery is marginal at best. I'm kind of shocked that you'd try a command with the word "destroy" in it on very important data. If someone told me to run a command against my zpool that had "destroy" in it, I wouldn't run it unless someone with a lot more experience than me confirmed that it really was the right command and really was okay.

I've been watching the thread even though I haven't been responding, and I'm not seeing any output from the server that makes me think recovery is possible. I have no ideas, but I'm not a master at recovering destroyed zpools, and I'm also new to the changes 9.1 brings to zpools, so I'm not the best source of information right now. Even if this were FreeNAS 8 and 9.1 had never been involved, I wouldn't have any recommendations right now either.
 

cyberjock

Inactive Account
Some advice:

When delphij asks for things like:

What does 'sysctl vfs.zfs.vdev.larger_ashift_disable' and 'zpool import' say?

he's not asking you to post your interpretation of what you saw. The formatting of the text alone can be crucial. You really should post the actual output, as you did with the gpart show command. Especially since your data is on the line. ;)
 

papageorgi

Explorer
Yes, cyberjock, I made several mistakes. I'm paying the price and I'm not happy about it. If anything, I hope this experience shows others to be more cautious and not panic the way I did.

The commands I entered are exactly what I typed, but they aren't copied from the terminal, since I don't believe I copied or photographed them. I'm certain of the exact syntax, because I read about it for a while on several blogs I thought were good. I did keep relatively thorough logs of most of what I found, read, and typed, but I was panicked. I'm away from my computer at work and don't have remote access to those files or the server. I will check, and if I find anything I'll upload it or add it to this post. I have a strangely sharp memory, so I'm certain that was the command.
 

cyberjock

Inactive Account
I'm not talking about the syntax; I'm talking about the formatting of the output. For example, a zpool status can have different columns, and the orientation of the information in those columns provides insight into what the zpool is doing. If you just post the text output without the actual column layout, a lot of the meaning is lost (and then we sometimes have to ask you to post it again in code tags).

These provide different feedback to me:

Code:
[root@freenas] ~# zpool status
  pool: tank
state: ONLINE
  scan: scrub repaired 0 in 45h1m with 0 errors on Wed Jul 17 15:52:57 2013
config:
 
        NAME                                            STATE    READ WRITE CKSUM
        tank                                            ONLINE      0    0    0
          raidz3-0                                      ONLINE      0    0    0
            gptid/6fbb91d5-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
            gptid/70448fd2-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
            gptid/70c0c7b3-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
            gptid/713de0d5-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
            gptid/71e3eea1-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
            gptid/728458d2-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
            gptid/7326aebc-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
            gptid/73c64f27-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
            gptid/7468c69a-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
            gptid/75045f96-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
            gptid/75a0096a-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
            gptid/8dd1d140-ca02-11e2-bdf7-0015171496ae  ONLINE      0    0    0
            gptid/76d701fa-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
            gptid/77759c5c-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
            gptid/78190bd3-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
            gptid/78bb9173-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
            gptid/795a7052-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
            gptid/79fbc7b0-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
 
errors: No known data errors
[root@freenas] ~# zpool status
pool: tank
state: ONLINE
scan: scrub repaired 0 in 45h1m with 0 errors on Wed Jul 17 15:52:57 2013
config:
 
NAME                                            STATE    READ WRITE CKSUM
tank                                            ONLINE      0    0    0
raidz3-0                                      ONLINE      0    0    0
gptid/6fbb91d5-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
gptid/70448fd2-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
gptid/70c0c7b3-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
gptid/713de0d5-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
gptid/71e3eea1-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
gptid/728458d2-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
gptid/7326aebc-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
gptid/73c64f27-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
gptid/7468c69a-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
gptid/75045f96-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
gptid/75a0096a-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
gptid/8dd1d140-ca02-11e2-bdf7-0015171496ae  ONLINE      0    0    0
gptid/76d701fa-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
gptid/77759c5c-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
gptid/78190bd3-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
gptid/78bb9173-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
gptid/795a7052-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
gptid/79fbc7b0-4a95-11e2-bca4-0015171496ae  ONLINE      0    0    0
 
errors: No known data errors


Notice that in the top zpool status all of the drives are clearly listed as being "under" raidz3-0.

Now tell me how many are under raidz3-0 in the bottom one? Your guess is as good as mine. I might have 17 drives in the raidz3 and one drive by itself, or I might have 5 disks in the raidz3 and 13 singular disks. With only the bottom version, you wouldn't have given me enough information to tell.

So yes, the formatting of the output can make a very big difference.
 

papageorgi

Explorer
Oh wow, I see what you mean. I'm sorry, it's very loud at work and I misread/misunderstood what you were asking. Thankfully I did save the panicked commands; here they are (thank you for the clarity, cyberjock):

Note: the destroy command was entered in the hope that it would recreate the something.000 file so I could see my pool's data. ZFS could see the pool, but there was no data and no properly mounted pool where I could access anything other than snapshots...
Code:
[smurfy@nas] /# zfs list
NAME  USED  AVAIL  REFER  MOUNTPOINT
stor  6.58T  11.2T  6.54T  /stor
[smurfy@nas] /# zpool get bootfs stor
NAME  PROPERTY  VALUE  SOURCE
stor  bootfs    -      default
[smurfy@nas] /# zfs snapshot -r stor@20130718
[smurfy@nas] /# zpool get listsnapshots stor
NAME  PROPERTY      VALUE      SOURCE
stor  listsnapshots  off        default
[smurfy@nas] /# zfs list -t snapshot
NAME                        USED  AVAIL  REFER  MOUNTPOINT
stor@auto-20130704.1100-2w  3.70M      -  2.60T  -
.
.
.
stor@auto-20130717.1800-2w  4.20M      -  5.57T  -
stor@auto-20130717.2301-6w  1.17M      -  5.89T  -
stor@auto-20130717.2316-6w  1.11M      -  5.89T  -
stor@auto-20130717.2331-6w  1.40M      -  5.89T  -
stor@auto-20130718.0000-6w  1.42M      -  5.88T  -
stor@auto-20130718.0015-6w  1.79M      -  5.90T  -
stor@auto-20130718.0030-6w  1.53M      -  5.90T  -
stor@auto-20130718.0045-6w  3.81M      -  5.91T  -
stor@auto-20130718.0100-6w  1.40M      -  5.93T  -
stor@auto-20130718.0115-6w  1.31M      -  5.95T  -
stor@auto-20130718.0130-6w  1.34M      -  5.97T  -
stor@auto-20130718.0145-6w  681K      -  5.99T  -
stor@auto-20130718.0200-6w  752K      -  6.01T  -
stor@auto-20130718.0215-6w  1.23M      -  6.03T  -
stor@auto-20130718.0230-6w  1.29M      -  6.05T  -
stor@auto-20130718.0245-6w  1.16M      -  6.06T  -
stor@auto-20130718.0300-6w  1.17M      -  6.08T  -
stor@auto-20130718.0315-6w  1.27M      -  6.10T  -
stor@auto-20130718.0330-6w  1.32M      -  6.12T  -
stor@auto-20130718.0345-6w  1.41M      -  6.14T  -
stor@auto-20130718.0400-6w  1.16M      -  6.15T  -
stor@auto-20130718.0415-6w  1.23M      -  6.17T  -
stor@auto-20130718.0430-6w  1.12M      -  6.19T  -
stor@auto-20130718.0445-6w  1.27M      -  6.21T  -
stor@auto-20130718.0500-6w  1.23M      -  6.23T  -
stor@auto-20130718.0515-6w  1.14M      -  6.24T  -
stor@auto-20130718.0530-6w  1.16M      -  6.26T  -
stor@auto-20130718.0545-6w  1.24M      -  6.28T  -
stor@auto-20130718.0600-6w  1.25M      -  6.30T  -
stor@auto-20130718.0615-6w  1.23M      -  6.32T  -
stor@auto-20130718.0630-6w  1.33M      -  6.34T  -
stor@auto-20130718.0645-6w  1.20M      -  6.36T  -
stor@auto-20130718.0700-6w  1.23M      -  6.37T  -
stor@auto-20130718.0715-6w  1.14M      -  6.39T  -
stor@auto-20130718.0730-6w  1.28M      -  6.41T  -
stor@auto-20130718.0745-6w  1.32M      -  6.43T  -
stor@auto-20130718.0800-6w  1.26M      -  6.45T  -
stor@auto-20130718.0815-6w  1.25M      -  6.47T  -
stor@auto-20130718.0830-6w  1.23M      -  6.48T  -
stor@auto-20130718.0845-6w  1.44M      -  6.50T  -
stor@auto-20130718.0900-6w  1.26M      -  6.52T  -
stor@auto-20130718.0915-6w  436K      -  6.54T  -
stor@20130718                  0      -  6.54T  -
[smurfy@nas] /# zfs list -o space
NAME  AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
stor  11.2T  6.58T    40.9G  6.54T              0      62.7M
[smurfy@nas] /# zfs get atime stor
NAME  PROPERTY  VALUE  SOURCE
stor  atime    on    local
[smurfy@nas] /# zfs mount stor
cannot mount '/stor': failed to create mountpoint
[smurfy@nas] /# zfs get mountpoint stor
NAME  PROPERTY    VALUE      SOURCE
stor  mountpoint  /stor      default
[smurfy@nas] /# zpool status -v
  pool: stor
state: ONLINE
  scan: scrub repaired 0 in 6h26m with 0 errors on Tue Jul 16 04:05:44 2013
config:
 
        NAME                                            STATE    READ WRITE CKSUM
        stor                                            ONLINE      0    0    0
          raidz1-0                                      ONLINE      0    0    0
            gptid/8193e730-6aab-11e2-9f2f-00248c450018  ONLINE      0    0    0
            gptid/81f6b150-6aab-11e2-9f2f-00248c450018  ONLINE      0    0    0
            gptid/8255a13f-6aab-11e2-9f2f-00248c450018  ONLINE      0    0    0
            gptid/8402337d-6aab-11e2-9f2f-00248c450018  ONLINE      0    0    0
            gptid/8470dafe-6aab-11e2-9f2f-00248c450018  ONLINE      0    0    0
          raidz1-1                                      ONLINE      0    0    0
            gptid/7ff753f5-edb7-11e2-b265-00248c450018  ONLINE      0    0    0
            gptid/805c1416-edb7-11e2-b265-00248c450018  ONLINE      0    0    0
            gptid/80c4b729-edb7-11e2-b265-00248c450018  ONLINE      0    0    0
            gptid/812e5515-edb7-11e2-b265-00248c450018  ONLINE      0    0    0
            gptid/8197afa9-edb7-11e2-b265-00248c450018  ONLINE      0    0    0
 
errors: No known data errors
[smurfy@nas] /# zpool destroy stor
[smurfy@nas] /# zpool import -D
  pool: stor
    id: 14817132263352275435
  state: UNAVAIL (DESTROYED)
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
  see: http://illumos.org/msg/ZFS-8000-3C
config:
 
        stor                      UNAVAIL  insufficient replicas
          raidz1-0                UNAVAIL  insufficient replicas
            16752418983724484862  UNAVAIL  cannot open
            16923247324607746236  UNAVAIL  cannot open
            10063454983377925543  UNAVAIL  cannot open
            800970979980896464    UNAVAIL  cannot open
            11402190904943199729  UNAVAIL  cannot open
          raidz1-1                UNAVAIL  insufficient replicas
            13898617183391027350  UNAVAIL  cannot open
            10638658567095667509  UNAVAIL  cannot open
            10179731959774134998  UNAVAIL  cannot open
            18380244200663678529  UNAVAIL  cannot open
            14569402982510951241  UNAVAIL  cannot open
[smurfy@nas] /# zpool import -D stor
cannot import 'stor': no such pool or dataset
Destroy and re-create the pool from
a backup source.

Note: I unplugged one drive and plugged it back in, with no change. I also tried FreeNAS 8.3.1 just to see if it could see the pool, even though it doesn't support ZFS pool version 5000.
Code:
[root@nas ~]# zpool import -D                                             
  pool: stor                                                             
    id: 14817132263352275435                                             
  state: UNAVAIL (DESTROYED)                                               
status: The pool is formatted using an incompatible version.             
action: The pool cannot be imported.  Access the pool on a system running newer
        software, or recreate the pool from backup.                       
  see: http://www.sun.com/msg/ZFS-8000-A5                                 
config:                                                                   
                                                                           
        stor                                            UNAVAIL  newer version
          raidz1-0                                      ONLINE             
            gptid/8193e730-6aab-11e2-9f2f-00248c450018  ONLINE             
            gptid/81f6b150-6aab-11e2-9f2f-00248c450018  ONLINE             
            gptid/8255a13f-6aab-11e2-9f2f-00248c450018  ONLINE             
            gptid/8402337d-6aab-11e2-9f2f-00248c450018  ONLINE             
            gptid/8470dafe-6aab-11e2-9f2f-00248c450018  ONLINE             
          raidz1-1                                      ONLINE             
            gptid/7ff753f5-edb7-11e2-b265-00248c450018  ONLINE             
            gptid/805c1416-edb7-11e2-b265-00248c450018  ONLINE             
            gptid/80c4b729-edb7-11e2-b265-00248c450018  ONLINE             
            gptid/812e5515-edb7-11e2-b265-00248c450018  ONLINE             
            gptid/8197afa9-edb7-11e2-b265-00248c450018  ONLINE             
 

delphij

FreeNAS Core Team
Can you just discard your FreeNAS 8.3 image? It won't help.

Please do exactly what I have described, and let us know the output:
  1. sysctl vfs.zfs.vdev.larger_ashift_disable=1
  2. zpool import -D
  3. zpool import -D -F -X -R /mnt stor
  4. gpart show (if step 3 didn't import your pool).
DO NOT DO ANYTHING OTHER THAN THESE! Anything else could further complicate the situation and may irreversibly damage your data.
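(For reference only, this is roughly what the flags in step 3 mean, going by zpool(8); it is not an extra step to run.)

Code:
# -D        also consider pools whose labels are marked destroyed
# -F        recovery mode: discard the last few transactions if that yields
#           an importable, consistent pool
# -X        with -F, allow a more extreme rewind (can take a long time)
# -R /mnt   import with /mnt as an alternate root so mountpoints land under it
zpool import -D -F -X -R /mnt stor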
 

papageorgi

Explorer
Oh, that was from the day of the incident, in the previous post. I promise the old image is long gone, and I'm not running any other commands or doing anything else. Here is my input/output for the request above:
Code:
[smurfy@t61 ~]$ ssh smurfy@192.168.0.10
buffer_get_ret: trying to get more bytes 4 than in buffer 0
buffer_get_string_ret: cannot extract length
key_from_blob: can't read key type
key_read: key_from_blob 
 failed
buffer_get_ret: trying to get more bytes 4 than in buffer 0
buffer_get_string_ret: cannot extract length
key_from_blob: can't read key type
key_read: key_from_blob 
 failed
smurfy@192.168.0.10's password: 
FreeBSD 9.1-STABLE (FREENAS.amd64) #0 r+7f710c8: Fri Jul 12 15:24:36 PDT 2013
 
FreeNAS (c) 2009-2013, The FreeNAS Development Team
All rights reserved.
FreeNAS is released under the modified BSD license.
 
For more information, documentation, help or support, go here:
 http://freenas.org
FreeNAS
Could not chdir to home directory /mnt/stor/temp: No such file or directory
[smurfy@nas] /> su root
Password:
[smurfy@nas] /# sysctl vfs.zfs.vdev.larger_ashift_disable=1
vfs.zfs.vdev.larger_ashift_disable: 1 -> 1
[smurfy@nas] /# zpool import -D
   pool: stor
     id: 14817132263352275435
  state: UNAVAIL (DESTROYED)
 status: One or more devices are missing from the system.
 action: The pool cannot be imported. Attach the missing
devices and try again.
   see: http://illumos.org/msg/ZFS-8000-3C
 config:
 
        stor                      UNAVAIL  insufficient replicas
          raidz1-0                UNAVAIL  insufficient replicas
            16752418983724484862  UNAVAIL  cannot open
            16923247324607746236  UNAVAIL  cannot open
            10063454983377925543  UNAVAIL  cannot open
            800970979980896464    UNAVAIL  cannot open
            11402190904943199729  UNAVAIL  cannot open
          raidz1-1                UNAVAIL  insufficient replicas
            13898617183391027350  UNAVAIL  cannot open
            10638658567095667509  UNAVAIL  cannot open
            10179731959774134998  UNAVAIL  cannot open
            18380244200663678529  UNAVAIL  cannot open
            14569402982510951241  UNAVAIL  cannot open
[smurfy@nas] /# zpool import -D -F -X -R /mnt stor
cannot import 'stor': no such pool or dataset
Destroy and re-create the pool from
a backup source.
[smurfy@nas] /# gpart show
=>        34  7814037101  da0  GPT  (3.7T)
          34          94       - free -  (47k)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  7809842703    2  freebsd-zfs  (3.7T)
 
=>        34  7814037101  da1  GPT  (3.7T)
          34          94       - free -  (47k)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  7809842703    2  freebsd-zfs  (3.7T)
 
=>        34  7814037101  da2  GPT  (3.7T)
          34          94       - free -  (47k)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  7809842703    2  freebsd-zfs  (3.7T)
 
=>        34  7814037101  da3  GPT  (3.7T)
          34          94       - free -  (47k)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  7809842703    2  freebsd-zfs  (3.7T)
 
=>        34  7814037101  da4  GPT  (3.7T)
          34          94       - free -  (47k)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  7809842703    2  freebsd-zfs  (3.7T)
 
=>      63  31326145  da5  MBR  (15G)
        63   3590433    1  freebsd  [active]  (1.7G)
   3590496        63       - free -  (31k)
   3590559   3590433    2  freebsd  (1.7G)
   7180992      3024    3  freebsd  (1.5M)
   7184016     41328    4  freebsd  (20M)
   7225344  24100864       - free -  (11G)
 
=>        34  1953525101  ada0  GPT  (931G)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  1949330703     2  freebsd-zfs  (929G)
 
=>        34  1953525101  ada1  GPT  (931G)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  1949330696     2  freebsd-zfs  (929G)
  1953525128           7        - free -  (3.5k)
 
=>        34  1953525101  ada2  GPT  (931G)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  1949330703     2  freebsd-zfs  (929G)
 
=>        34  1953525101  ada3  GPT  (931G)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  1949330703     2  freebsd-zfs  (929G)
 
=>        34  1953525101  ada4  GPT  (931G)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  1949330703     2  freebsd-zfs  (929G)
 
=>      0  3590433  da5s1  BSD  (1.7G)
        0       16         - free -  (8.0k)
       16  3590417      1  !0  (1.7G)
 
[smurfy@nas] /# 
 

delphij

FreeNAS Core Team
Ah, I see what's happening. I'll get you a new image that will fix the problem. Stay tuned.
 