Pool issue, degraded mirror.

Gilus

Cadet
Joined
Sep 16, 2014
Messages
2
Hello Everyone! First of all - thanks for reading this ;-)

I'm dealing with a degraded mirror. My setup is really simple - two hard drives in a mirror. Unfortunately, one of them burned out (a complete failure). I replaced it with a brand new drive and proceeded with resilvering... and unfortunately ended up with a degraded pool.
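
(The replacement itself boils down to something like the following - the identifiers are the ones visible in the status output below:)
Code:
zpool replace hdd0 17272646910272060976 gptid/00969bb4-3b92-11e4-93e4-e840f23d7670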

This is what zpool status shows (yes, yet another scrub is running as I write this):
Code:
 pool: hdd0
state: DEGRADED
status: One or more devices has experienced an error resulting in data
    corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
    entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
  scan: scrub in progress since Tue Sep 16 20:08:06 2014
        51.8G scanned out of 1.63T at 91.4M/s, 5h2m to go
        0 repaired, 3.10% done
config:

    NAME                                              STATE     READ WRITE CKSUM
    hdd0                                              DEGRADED     0     0     0
      mirror-0                                        DEGRADED     0     0     0
        gptid/3177f6ba-3920-11e4-a7d0-e840f23d7670    ONLINE       0     0     0
        replacing-1                                   DEGRADED     0     0     0
          17272646910272060976                        UNAVAIL      0     0     0  was /dev/gptid/856457d2-7e92-11e2-8673-e840f23d7670
          gptid/00969bb4-3b92-11e4-93e4-e840f23d7670  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        hdd0/media:<0x100e2>


First of all, mirror-0 is in DEGRADED state, and replacing-1 is DEGRADED as well.
I'm not able to detach 17272646910272060976 (I get a "no valid replicas" error). How do I get rid of this unavailable drive?
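
For reference, the attempt looks roughly like this (I'm reconstructing the exact error wording from memory):
Code:
# zpool detach hdd0 17272646910272060976
cannot detach 17272646910272060976: no valid replicas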

Scrubbing does not fix the permanent error (I have already run it a couple of times, and each scrub pass takes several hours). The permanent error still exists even though I deleted the broken file; now I only see the inode reference. How do I clear this hdd0/media:<0x100e2> error?
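
Is the correct sequence after deleting the broken file supposed to be something like this?
Code:
zpool clear hdd0
zpool scrub hdd0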

What should I do?
I'm running FreeNAS 8.3 (yes, quite old - I've never needed anything more than the NFS service).

I'm desperate - I have checked and tried every solution I could find on the forums.

Should I rebuild everything from scratch? Argh!
Thanks for any support!
 

dlavigne

Guest
Did the resilver and scrub actually finish? If so, did that improve the output of zpool status? Typically "no valid replicas" indicates that there is no redundancy, which is weird...
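
When it's done, the verbose status output will list any files (or inode numbers) that still have permanent errors:
Code:
zpool status -v hdd0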
 

Gilus

Cadet
Joined
Sep 16, 2014
Messages
2
Hi! Thank you very much for the answer.

Scrub - it finished a couple of times. Each pass reduced the number of permanent errors detected. Unfortunately, I cannot get rid of the last one, hdd0/media:<0x100e2>.
The last scrub finished after 12h and repaired nothing.

No valid replicas - I see this error whenever I try to detach 17272646910272060976.

Is there any way to force the system to do the replace once again? This replacing-1 is still present... The disk that was supposed to replace the faulty one is already in the pool, resilvered and online...
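
In other words, I'm hoping for something like this (I'm guessing at the syntax - the long number is the dead disk's GUID from the status output above):
Code:
# detach the ghost entry of the dead disk...
zpool detach hdd0 17272646910272060976
# ...or force the replacement to finish onto the new disk
zpool replace -f hdd0 17272646910272060976 gptid/00969bb4-3b92-11e4-93e4-e840f23d7670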

I'm stuck.
It's not critical data, but it makes me sad to think that I'll need to move this 1.6TB of data somewhere else, destroy the pool, re-create it, and put the data back.
It was just a simple mirror ;-)))

Ech.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
See those permanent errors? You already have corruption in your pool. I wouldn't be the least bit surprised if that corruption is directly related to the problem you are currently having.

Unfortunately, this means two things:

1. At some point in the past you did a disk replacement and there wasn't enough redundancy to reconstruct the missing data.
2. The only good long-term fix is to destroy the pool and recreate it - a rough sketch of that workflow is below.
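
This is only a sketch: "backup" stands in for wherever you can receive the data, and the device names in zpool create are placeholders for your real disks (on FreeNAS you'd normally recreate the pool through the GUI).
Code:
# snapshot everything and copy it off-pool ("backup" is an example target)
zfs snapshot -r hdd0@migrate
zfs send -R hdd0@migrate | zfs receive -F backup/hdd0

# destroy and recreate the mirror (placeholder device names)
zpool destroy hdd0
zpool create hdd0 mirror /dev/ada0p2 /dev/ada1p2

# copy the data back
zfs send -R backup/hdd0@migrate | zfs receive -F hdd0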
 
Status
Not open for further replies.