So, I have a FreeNAS 11.2 install with RAIDZ2, a Core i3, and 8 GB of RAM. It was running just fine until today, when I got this error:
Beginning ZFS volume imports
Importing 14519832302454782612
txg 6063307 import pool version 5000; software version 5000/5; uts 11.0-STABLE 1100512 amd64
ZFS volume imports complete
panic: Solaris(panic): blkptr at 0xfffff8011945c180 DVA 0 has invalid VDEV 512
cpuid = 2
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe0467a8b0c0
vpanic() at vpanic+0x186/frame 0xfffffe0467a8b140
panic() at panic+0x43/frame 0xfffffe0467a8b1a0
vcmn_err() at vcmn_err+0xc2/frame 0xfffffe0467a8b2e0
zfs_panic_recover() at zfs_panic_recover+0x5a/frame 0xfffffe0467a8b340
zfs_blkptr_verify() at zfs_blkptr_verify+0x2d3/frame 0xfffffe0467a8b380
zio_read() at zio_read+0x2c/frame 0xfffffe0467a8b3c0
arc_read() at arc_read+0x71b/frame 0xfffffe0467a8b460
dbuf_read() at dbuf_read+0x7d1/frame 0xfffffe0467a8b510
dmu_buf_hold_by_dnode() at dmu_buf_hold_by_dnode+0x3d/frame 0xfffffe0467a8b550
zap_get_leaf_byblk() at zap_get_leaf_byblk+0x4d/frame 0xfffffe0467a8b5b0
fzap_length() at fzap_length+0x96/frame 0xfffffe0467a8b620
zap_length_uint64() at zap_length_uint64+0xd7/frame 0xfffffe0467a8b680
ddt_zap_lookup() at ddt_zap_lookup+0x3d/frame 0xfffffe0467a8b7e0
ddt_lookup() at ddt_lookup+0x2a2/frame 0xfffffe0467a8b9b0
zio_ddt_write() at zio_ddt_write+0x75/frame 0xfffffe0467a8bad0
zio_execute() at zio_execute+0xac/frame 0xfffffe0467a8bb20
taskqueue_run_locked() at taskqueue_run_locked+0x127/frame 0xfffffe0467a8bb80
taskqueue_thread_loop() at taskqueue_thread_loop+0xc8/frame 0xfffffe0467a8bbb0
fork_exit() at fork_exit+0x85/frame 0xfffffe0467a8bbf0
fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe0467a8bbf0
--- trap 0, rip = 0, rsp = 0, rbp = 0 ---
KDB: enter: panic
[ thread pid 0 tid 101534 ]
Stopped at kdb_enter+0x3b: movq $0,kdb_why
db>
A quick Google search took me to the ixsystems.com site, where an iXsystems tech stated the following:
"This looks like a pool corruption triggered this panic. Do you experience this only with FreeNAS 11, or it happens for 9.10 too?
If it is indeed a pool corruption, you may try to set vfs.zfs.recover sysctl/tunable and try to import the pool to copy-out the data. There are no mechanisms in ZFS to fix corruptions like that."
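For anyone else who hits this, the recovery procedure the tech describes might look roughly like the following. This is a sketch only: the pool name "tank" and mountpoint "/mnt/recovery" are placeholders for your own values, and on FreeBSD `vfs.zfs.recover` may need to be set as a loader tunable (in /boot/loader.conf) rather than at runtime if the panic happens during boot-time import. Only attempt this after imaging the disks if the data matters.

```shell
# ASSUMPTIONS: "tank" is your pool name and /mnt/recovery is an empty
# directory; both are placeholders. Image the disks first if possible.

# Tell ZFS to log (rather than panic on) certain fatal inconsistencies.
# If the panic occurs at boot, set vfs.zfs.recover=1 in /boot/loader.conf instead.
sysctl vfs.zfs.recover=1

# Attempt a read-only import so nothing further is written to the damaged pool
zpool import -o readonly=on -f -R /mnt/recovery tank

# If the import succeeds, copy the data off immediately, e.g.:
# rsync -a /mnt/recovery/ /path/to/safe/backup/
```

The key idea is the read-only import: with `readonly=on`, ZFS will not open transaction groups for writing, so a copy-out attempt cannot make the corruption worse.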
My question is this: was I misled into thinking that ZFS was much stronger and better than anything else out there? I thought corruption wasn't possible with ZFS, what with copy-on-write and all. And is that last sentence, "There are no mechanisms in ZFS to fix corruptions like that," really true?
I'm just curious what others think about this. It makes me nervous knowing that I have about 20 of these FreeNAS systems in production environments. Should I be worried?
I appreciate everyone's time on this matter.
Kell