meekjt13
Cadet
- Joined
- Dec 24, 2016
- Messages
- 1
After upgrading to 9.10.2 an error was indicated in the web UI. To get more information, I ran "zpool status -v" from the console and got the following output:
Code:
  pool: freenas-boot
 state: ONLINE
status: One or more devices has experienced an error resulting in data corruption.
        Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 245K in 0h13m with 1 errors on Fri Dec 23 08:03:03 2016
config:

        NAME            STATE     READ WRITE CKSUM
        freenas-boot    ONLINE       0     0     0
          mirror-0      ONLINE       0     0     0
            da0p2       ONLINE       0     0     0
            da1p2       ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        freenas-boot/ROOT/9.10.2@2016-05-03-06:22:02:/boot/kernel/kernel
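For context, boot environments on FreeNAS are ZFS clones, so datasets that were cloned from the same snapshot can share the same on-disk blocks. A rough sketch of commands I used to investigate (standard zfs/zpool tooling; the pool name freenas-boot is taken from the output above, and output will differ per system):

# List the boot-environment datasets together with their origin snapshots.
# Clones sharing an origin also share blocks, which may be why the same
# damaged file shows up under several boot environments.
zfs list -r -t filesystem -o name,origin freenas-boot/ROOT

# Re-run a scrub and re-check which files are flagged afterwards.
zpool scrub freenas-boot
zpool status -v freenas-boot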
This was the first time I had encountered an error on FreeNAS, so I thought I should be able to roll back to a previous boot environment to solve the problem. It turns out that any boot environment I change to has the same error, but with the active boot environment's kernel file.
I'll now reinstall from scratch to fix the problem, but I'm quite confused about what is happening here. My questions are:
1. Why doesn't the output of "zpool status -v" report errors in non-active datasets?
2. How can the same file be damaged across multiple datasets?
3. Since my boot device is a mirror across two drives, how could this error have happened? Shouldn't ZFS have prevented a file from becoming corrupt? I know it's theoretically possible, but isn't it highly unlikely in this configuration?
4. Would increasing the number of devices in the pool mirror help prevent this from happening again?
Thanks for any help, I'm mainly curious to know what might have happened here.
EDIT: The other odd thing is that the system boots and works as usual; I didn't expect that to be possible if the kernel file has errors.