zpool status -v
Please show the output of zpool status -v
zpool status -v

  pool: Dozer
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        Dozer                                           ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/fde05a88-4251-11e7-be70-0cc47ae3a3b2  ONLINE       0     0     0
            gptid/fe94dfc5-4251-11e7-be70-0cc47ae3a3b2  ONLINE       0     0     0

errors: No known data errors

  pool: Tank
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Mon May 15 04:00:39 2017
config:

        NAME                                            STATE     READ WRITE CKSUM
        Tank                                            ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/2801b587-2e95-11e7-bd4a-0cc47ae3a3b2  ONLINE       0     0     0
            gptid/28fdb264-2e95-11e7-bd4a-0cc47ae3a3b2  ONLINE       0     0     0
            gptid/4f25b9d4-346e-11e7-ad5c-0cc47ae3a3b2  ONLINE       0     0     0
            gptid/2af94c7b-2e95-11e7-bd4a-0cc47ae3a3b2  ONLINE       0     0     0
            gptid/2bfbdcf5-2e95-11e7-bd4a-0cc47ae3a3b2  ONLINE       0     0     0
            gptid/2cfce92f-2e95-11e7-bd4a-0cc47ae3a3b2  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 0 in 0h0m with 4 errors on Sat May 27 03:45:04 2017
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     4
          ada0p2      ONLINE       0     0     8

errors: Permanent errors have been detected in the following files:

        //usr/local/lib/python2.7/site-packages/dns/rdtypes/IN/__init__.pyc
        //usr/local/lib/python2.7/site-packages/dns/rdtypes/ANY/__init__.pyc
        //usr/local/lib/python2.7/site-packages/dns/rdtypes/ANY/SOA.pyc
        freenas-boot/ROOT/Initial-Install:/data/freenas-v1.db
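(As an aside: the gptid/... rows in that output are GPT partition labels, not raw device nodes. On the FreeBSD base that FreeNAS runs on, you can map them back to devices such as ada0p2 with a general FreeBSD command, nothing specific to this thread:

glabel status)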
You should definitely save your config and restore it to a fresh install. It should all come back.
The bad news is your SSD lost data. Don't know why, but since there was no redundancy ZFS couldn't fix it.
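If you'd rather grab the config from the shell than through the GUI's save-config option, it is the SQLite database that the scrub output flagged above; a minimal sketch, with the destination path being just an example:

# Copy the FreeNAS config database somewhere off the failing boot pool.
# /data/freenas-v1.db is the path from the error output above;
# the /mnt/Tank destination is an assumption.
cp /data/freenas-v1.db /mnt/Tank/freenas-v1.db.backup

Note that the file the scrub flagged lives in the Initial-Install boot environment (freenas-boot/ROOT/Initial-Install:...), so the copy in the currently running boot environment may well still be intact.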
We're saying a fresh install to the same SSD, right? Or do I need to get a new one? (For what it's worth, I now plan to get a second SSD to mirror my boot drive.)
Also, do I need to delete the corrupted files? Shouldn't the fresh install overwrite everything and clear everything that way?
You've had a bumpy road so far, but how's the overall experience? Are you happy with the hardware choices? How is the processor handling Plex?
Hardware is doing its job. No complaints beyond what is in this thread. I haven't tried to really tax the processor. I've watched a couple of minutes of a movie I put in to test it out and confirm it would work.
########## ZPool status report for Dozer ##########

  pool: Dozer
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Thu Jun 15 04:00:01 2017
config:

        NAME                                            STATE     READ WRITE CKSUM
        Dozer                                           ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/fde05a88-4251-11e7-be70-0cc47ae3a3b2  ONLINE       0     0     0
            gptid/fe94dfc5-4251-11e7-be70-0cc47ae3a3b2  ONLINE       0     0     0

errors: No known data errors

########## ZPool status report for Tank ##########

  pool: Tank
 state: ONLINE
  scan: scrub repaired 0 in 1h0m with 0 errors on Thu Jun 15 05:00:12 2017
config:

        NAME                                            STATE     READ WRITE CKSUM
        Tank                                            ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/2801b587-2e95-11e7-bd4a-0cc47ae3a3b2  ONLINE       0     0     0
            gptid/28fdb264-2e95-11e7-bd4a-0cc47ae3a3b2  ONLINE       0     0     0
            gptid/4f25b9d4-346e-11e7-ad5c-0cc47ae3a3b2  ONLINE       0     0     0
            gptid/2af94c7b-2e95-11e7-bd4a-0cc47ae3a3b2  ONLINE       0     0     0
            gptid/2bfbdcf5-2e95-11e7-bd4a-0cc47ae3a3b2  ONLINE       0     0     0
            gptid/2cfce92f-2e95-11e7-bd4a-0cc47ae3a3b2  ONLINE       0     0     0

errors: No known data errors
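Those banner lines suggest this report came from a scheduled script; a minimal sketch of one way to produce output in that shape (the ########## banner and pool names are copied from the report above, everything else is an assumption):

#!/bin/sh
# Hedged sketch: print a status report per pool, shaped like the report above.
for pool in Dozer Tank; do
    echo "########## ZPool status report for ${pool} ##########"
    zpool status -v "${pool}"
done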
Is there a simple command to check the last time a scrub was run? And is there one to kick off a scrub manually?
man zpool describes the following commands: zpool history Dozer and zpool scrub Dozer.

The zpool history command will produce a large amount of output. You can pare it down to just the entries with "scrub" with zpool history Dozer | grep scrub.

zpool scrub Dozer will start a scrub of the pool Dozer.
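Pulled together, the commands from that answer (pool name Dozer, as used in this thread):

# Full pool history -- this can be very long
zpool history Dozer

# Just the scrub-related entries
zpool history Dozer | grep scrub

# Start a scrub manually
zpool scrub Dozer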
Is there a simple command to check the last time a scrub was run?
A much simpler way than grepping through zpool history would be simply zpool status Dozer. That shows that a scrub ran very quickly. How much data do you have on Dozer?

How much data do you have on Dozer?
Very little. A couple of test files to confirm the share worked, and that's about it.
[root@Rand ~]# zpool status Dozer
  pool: Dozer
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Thu Jun 15 04:00:01 2017
config:

        NAME                                            STATE     READ WRITE CKSUM
        Dozer                                           ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/fde05a88-4251-11e7-be70-0cc47ae3a3b2  ONLINE       0     0     0
            gptid/fe94dfc5-4251-11e7-be70-0cc47ae3a3b2  ONLINE       0     0     0

errors: No known data errors
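The scan: line in that output is where the last scrub's finish time and duration show up; if you only want that line, one hedged one-liner is:

zpool status Dozer | grep scan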
Could that be the reason the scrub shows 0h 0m, because the scrub took less than 1 minute?
I can't say for sure, but it would make sense to me.

Ah! I didn't realize there was practically no data on it. Yes, that would be the reason for such a fast scrub.
Makes sense (I know, I know...simple logic).