So the scrub finished. It still shows unhealthy, and it has 142 errors. How do I fix it?
What does zpool status -v show? Does it show "permanent errors in the following files"?
root@truenas[~]# zpool status -v
  pool: Tank
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub repaired 0B in 01:16:52 with 142 errors on Tue Jun 20 13:12:18 2023
config:

        NAME                                            STATE     READ WRITE CKSUM
        Tank                                            ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/e0b02732-0f6c-11ee-afa2-afa1770be631  ONLINE       0     0     0
            gptid/11bac542-ad95-11ed-8d1c-7df9cea98351  ONLINE       0     0     0
            gptid/11c0215d-ad95-11ed-8d1c-7df9cea98351  ONLINE       0     0     0
        logs
          gptid/111fa2ca-ad95-11ed-8d1c-7df9cea98351    ONLINE       0     0     0
        cache
          gptid/111d9ded-ad95-11ed-8d1c-7df9cea98351    ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        Tank/Data/windowsfiles@auto-2023-05-31_00-00:<0x1>

  pool: boot-pool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:04 with 0 errors on Tue Jun 20 03:45:04 2023
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          da0p2     ONLINE       0     0     0

errors: No known data errors
root@truenas[~]#
That it is, specifically metadata describing or contained within the snapshot Tank/Data/windowsfiles@auto-2023-05-31_00-00.

Attempt a zpool clear Tank and then run the scrub again. If the issue persists, you may need to delete the snapshot in question, although in your situation I'd still update your backups first.
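The suggested sequence, sketched as shell commands (pool name Tank taken from your output; run as root). This is a sketch of the general approach, not a guaranteed fix:

```shell
# Reset the pool's error counters and its persistent error log.
zpool clear Tank

# Start a fresh scrub so every block gets re-verified.
zpool scrub Tank

# Watch progress; run again after the scrub completes.
zpool status -v Tank
```

If the same errors come back after the scrub, the corrupt blocks still exist on disk, in this case inside the snapshot, and clearing alone won't help.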
"my unraid card"

I assume you mean an HBA?
And you do have a second drive now, so a four-drive Z2 is certainly an option.
3+2 RAIDZ2 would be even better, ~12T usable and two-drive redundancy.

Yeah, that's what I meant. I will actually have 5 drives in total.

Yeah, I was going to do away with the cache and log devices. I will probably buy more RAM down the road, but I've spent enough money this past week.
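Back-of-the-envelope for the 3+2 RAIDZ2 figure, assuming five 4 TB drives (an approximation; real usable space comes out somewhat lower after RAIDZ padding and metadata overhead):

```python
drives = 5
parity = 2      # RAIDZ2 survives any two drive failures
size_tb = 4     # assumed per-drive capacity

# Data capacity is (total drives - parity drives) * drive size.
usable_tb = (drives - parity) * size_tb
print(usable_tb)  # 12
```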
I would question whether you're getting any value out of the cache and log devices. If you have a very small amount of RAM, the cache device isn't likely to fill itself with particularly good candidate data, and unless you're making synchronous writes (e.g. NFS) against the data, the log will sit idle.
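Both device types can be removed from a live pool; a sketch using the gptid labels from your status output above (on TrueNAS it may be safer to do this from the web UI's Pool Status page so the middleware stays in sync):

```shell
# Remove the SLOG (log) device from the pool.
zpool remove Tank gptid/111fa2ca-ad95-11ed-8d1c-7df9cea98351

# Remove the L2ARC (cache) device.
zpool remove Tank gptid/111d9ded-ad95-11ed-8d1c-7df9cea98351

# Confirm the logs and cache sections are gone.
zpool status Tank
```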
So I redid the scrub, same result. Then I deleted all the snapshots and redid it again. Here's the latest zpool status -v:

root@truenas[~]# zpool status -v
  pool: Tank
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub repaired 0B in 01:13:24 with 72 errors on Tue Jun 20 17:54:06 2023
config:

        NAME                                            STATE     READ WRITE CKSUM
        Tank                                            DEGRADED     0     0     0
          raidz1-0                                      DEGRADED     0     0     0
            gptid/e0b02732-0f6c-11ee-afa2-afa1770be631  DEGRADED     0     0   288  too many errors
            gptid/11bac542-ad95-11ed-8d1c-7df9cea98351  DEGRADED     0     0   284  too many errors
            gptid/11c0215d-ad95-11ed-8d1c-7df9cea98351  DEGRADED     0     0   280  too many errors
        logs
          gptid/111fa2ca-ad95-11ed-8d1c-7df9cea98351    ONLINE       0     0     0
        cache
          gptid/111d9ded-ad95-11ed-8d1c-7df9cea98351    ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        Tank/Data/windowsfiles:<0x1>

  pool: boot-pool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:04 with 0 errors on Tue Jun 20 03:45:04 2023
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          da0p2     ONLINE       0     0     0

errors: No known data errors
root@truenas[~]#

Did you run zpool clear Tank before one of those steps?
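Ordering matters here: destroy the data holding the bad blocks first, then clear, then scrub. A sketch, with the snapshot name taken from the earlier status output:

```shell
# 1. Destroy the snapshot that held the corrupt blocks.
zfs destroy Tank/Data/windowsfiles@auto-2023-05-31_00-00

# 2. Clear the pool's recorded errors and counters.
zpool clear Tank

# 3. Scrub; with the bad blocks gone, it should finish with 0 errors.
zpool scrub Tank
```

Scrubbing without a clear in between can leave stale "Permanent errors" entries in the report even after the offending data is deleted.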
root@truenas[~]# zpool status -v
  pool: Tank
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub repaired 0B in 01:13:19 with 72 errors on Tue Jun 20 23:27:51 2023
config:

        NAME                                            STATE     READ WRITE CKSUM
        Tank                                            ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/e0b02732-0f6c-11ee-afa2-afa1770be631  ONLINE       0     0   144
            gptid/11bac542-ad95-11ed-8d1c-7df9cea98351  ONLINE       0     0   142
            gptid/11c0215d-ad95-11ed-8d1c-7df9cea98351  ONLINE       0     0   140
        logs
          gptid/111fa2ca-ad95-11ed-8d1c-7df9cea98351    ONLINE       0     0     0
        cache
          gptid/111d9ded-ad95-11ed-8d1c-7df9cea98351    ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        Tank/Data/windowsfiles:<0x1>

  pool: boot-pool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:04 with 0 errors on Tue Jun 20 03:45:04 2023
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          da0p2     ONLINE       0     0     0

errors: No known data errors
root@truenas[~]#
In the UI it still says unhealthy.

The GUI is probably parsing the output of zpool status -v, and if it hits any instance of "Permanent errors" or a non-zero number in the READ/WRITE/CKSUM columns, it interprets this as "UNHEALTHY". (Which is what you see on the GUI's Dashboard or Pools page.)

Tank/Data/windowsfiles:<0x1> suggests metadata corruption in the dataset itself, not in any specific file.
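Because the corruption is in the dataset's own metadata, deleting individual files won't clear it; the usual way out is to copy the data off, recreate the dataset, and copy it back. A rough sketch only, with hypothetical mountpoints and a hypothetical rescue path; verify your copy before destroying anything:

```shell
# Copy everything readable off the damaged dataset to a rescue location.
rsync -a /mnt/Tank/Data/windowsfiles/ /mnt/Tank/Data/windowsfiles-rescue/

# Destroy the damaged dataset. Irreversible - check the rescue copy first!
zfs destroy -r Tank/Data/windowsfiles

# Recreate the dataset and move the rescued data back.
zfs create Tank/Data/windowsfiles
rsync -a /mnt/Tank/Data/windowsfiles-rescue/ /mnt/Tank/Data/windowsfiles/

# Finally clear the old errors and re-verify with a scrub.
zpool clear Tank
zpool scrub Tank
```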