Repairing data corruption - restore from backups or another method?

Status
Not open for further replies.

cchayre

Dabbler
Joined
Jul 11, 2012
Messages
15
I just finished migrating a zpool from one NAS to a fresh build, and a scrub on the new system came back with data corruption. My inclination is that the corruption existed prior to the move---shame on me for not running a scrub on the old NAS first. Does anyone have a recommendation for resolving this in a safe, reliable, long-term way?

I have at least 2-3 good copies of all files in question (those with permanent errors, as shown in the output below). Would it be enough to do an rsync with checksumming to overwrite the files in question, or should I be looking at something more drastic, e.g. blowing away the zpool and starting from scratch?

  pool: grandcentral
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 92.4M in 4h26m with 239 errors on Wed Jan 29 03:33:19 2014
config:

        NAME            STATE     READ WRITE CKSUM
        grandcentral    ONLINE       0     0   241
          raidz1-0      ONLINE       0     0   482
            ada0        ONLINE       0     0   693
            ada1        ONLINE       0     0   410
            ada3        ONLINE       0     0   713
            ada2        ONLINE       0     0   403

errors: Permanent errors have been detected in the following files:

/mnt/grandcentral/Old Holding Tank/Cutting Room Floor/New Camera pt 2/00002.MTS
/mnt/grandcentral/Old Holding Tank/Cutting Room Floor/New Camera pt 2/00017.MTS
/mnt/grandcentral/Old Holding Tank/Cutting Room Floor/New Camera pt 2/00030.MTS
/mnt/grandcentral/Old Holding Tank/Cutting Room Floor/New Camera pt 2/00032.MTS
/mnt/grandcentral/Old Holding Tank/Cutting Room Floor/New Camera pt 2/00034.MTS
<output truncated>
 

warri

Guru
Joined
Jun 6, 2011
Messages
1,193
All your drives seem to have checksum errors. Did you test the RAM of the new build prior to the migration? If not, do that immediately and do not initiate another scrub or use the volume at all. When did you run the last scrub on the old system?

Which FreeNAS version are you using and what is the hardware of the NAS? On which version did you create the pool originally (doesn't look like a recent version, because no GPTIDs are used), or did you create the volume manually from CLI?

Also, please post the output of smartctl -a -q noserial /dev/adaX (X from 0 to 3) in [code] tags to preserve the formatting.
 