Please help! Started up FreeNAS, suddenly volume storage (ZFS) status unknown?!


purduephotog

Explorer
Joined
Jan 14, 2013
Messages
73
So... how many drives did you pull before it 'came back'? Was it just after the first one? Can you enumerate your steps using the history command?
 

HHawk

Contributor
Joined
Jun 8, 2011
Messages
176
The first drive... However, I did a similar thing yesterday with another drive (the last disk), but that didn't do anything at all...

Code:
root@mfsbsd:/root # history
     1  10:03   zpool import
     2  10:03   zpool import storage
     3  10:04   zpool import -T 735242 storage
     4  10:07   zpool status
     5  10:07   zpool status -v
     6  10:10   zpool status -x
     7  10:15   zpool status -v
     8  10:15   zpool import storage
     9  10:15   zpool status
    10  10:16   zpool status -xv
    11  16:10   history
 

HHawk

Contributor
Joined
Jun 8, 2011
Messages
176
Apparently most of it, maybe even everything, is still intact...!

Great... I am currently very happy, however before I do something (stupid), what should I do now...? Can the ZFS pool be repaired and brought back into working order...? That way I wouldn't even lose my FreeNAS settings etc...?

Of course it would be best to back everything up now, however I do not have the spare space or hard disks to transfer it all... Still, since I can access it now, that means it can also be fixed, right?

Please provide me with some solutions or tell me what I should do next... Thanks...

paleoN or ProtoSD, what do you recommend doing now?
I don't want to leave the NAS on all the time now. Can the ZFS pool be repaired, so I can get FreeNAS working again as it was?

And how do I fix this message?

Code:
errors: Permanent errors have been detected in the following files:

        /rw/storage/Jail/plugins/var/log/messages


And what do I do with the unplugged disk? If I re-add it, the problem will be back again, right?

Please advise what to do. Thanks!
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
It sounds like great news, but I'd still stay calm and not do anything yet. Post the output of another "zpool status -v" and "camcontrol devlist".

If it were me: you've come too far to be hasty and make some mistake, so wait and get a couple of disks to copy your files off. I'm guessing you're still going to find some damaged files later.

You could do a scrub.

You could format the disk you removed and copy some data to that.

I'd feel better waiting to see what PaleoN thinks, but I'd be patient and wait till you can get a couple more disks and do stuff the SAFE way. THEN you'll also have a backup! :)

You could do an "ls -Ral" and direct the output to a text file on ANOTHER computer. Don't make any changes to files on your NAS yet! Then you'll have a list of your files, which I've always found helpful in situations like this.
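Something along these lines should do it (the hostname and mount point below are just placeholders for your setup; run it from the other machine so the listing is written there and not on your pool):

Code:
# run from the OTHER computer; adjust the host and mount point to yours
ssh root@your-nas-ip 'ls -Ral /storage' > nas-filelist.txt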


EDIT: ZFS also has its own history command: "zpool history"
 

HHawk

Contributor
Joined
Jun 8, 2011
Messages
176
Yeah exactly... I have been patient enough so far and I am really happy that at least most stuff seems to be safe (as far as I can tell).

As for the output of the given commands, look below:

Code:
root@mfsbsd:/root # zpool status -v
  pool: storage
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 0 in 3h50m with 0 errors on Sun Feb 24 03:13:57 2013
config:

        NAME                                            STATE     READ WRITE CKSUM
        storage                                         DEGRADED     0     0     2
          raidz2-0                                      DEGRADED     0     0     4
            gptid/19177fb9-25fa-11e2-9ab0-00151736994a  ONLINE       0     0     0
            gptid/19b5ec3a-25fa-11e2-9ab0-00151736994a  ONLINE       0     0     0
            gptid/3dc2f956-3de6-11e2-8af1-00151736994a  ONLINE       0     0     0
            gptid/1aefa3e9-25fa-11e2-9ab0-00151736994a  ONLINE       0     0     0
            5393521929904432319                         UNAVAIL      0     0     0  was /dev/gptid/1b8f2b64-25fa-11e2-9ab0-00151736994a
            gptid/1c2d6a74-25fa-11e2-9ab0-00151736994a  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        /rw/storage/Jail/plugins/var/log/messages


Code:
root@mfsbsd:/root # camcontrol devlist
<ATA WDC WD20EARX-00P AB51>        at scbus0 target 0 lun 0 (pass0,da0)
<ATA WDC WD20EARX-00P AB51>        at scbus0 target 1 lun 0 (pass1,da1)
<ATA WDC WD20EARX-00P AB51>        at scbus0 target 3 lun 0 (pass2,da2)
<ATA WDC WD20EARX-00P AB51>        at scbus0 target 4 lun 0 (pass3,da3)
<ATA WDC WD20EARX-008 AB51>        at scbus0 target 5 lun 0 (pass4,da4)
<SanDisk Extreme 0001>             at scbus11 target 0 lun 0 (da5,pass5)


I also did an "ls -Ral" on storage, and after displaying several files and directories it rebooted the machine?!
Hopefully that didn't do any damage... :S

- - - Updated - - -

Uhmz...

Now I am getting this after the reboot?!

Code:
root@mfsbsd:/root # zpool import storage
cannot import 'storage': I/O error
        Recovery is possible, but will result in some data loss.
        Returning the pool to its state as of Sun Mar 24 11:13:56 2013
        should correct the problem.  Approximately 496 minutes of data
        must be discarded, irreversibly.  Recovery can be attempted
        by executing 'zpool import -F storage'.  A scrub of the pool
        is strongly recommended after recovery.


Now what...? :(
Or is this the data which was already gone anyways?

- - - Updated - - -

I will wait for paleoN and see what he advises to do now.
I guess I could do the "zpool import -F storage", but I will first wait till he returns.
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
Yeah exactly... I have been patient enough so far and I am really happy that at least most stuff seems to be safe (as far as I can tell).

I also did an "ls -Ral" on storage, and after displaying several files and directories it rebooted the machine?!
Hopefully that didn't do any damage... :S

*FUCK*.... there's that bloody I/O error AGAIN..... there's some hardware problem SOMEWHERE!

I'd take that new controller and all the disks you've got connected and put them in another system. If a simple "ls" is causing your system to crash, trying to copy files isn't going to go very smoothly either.

Wait for PaleoN just so we have some consensus.
 

HHawk

Contributor
Joined
Jun 8, 2011
Messages
176
*FUCK*.... there's that bloody I/O error AGAIN..... there's some hardware problem SOMEWHERE!

I'd take that new controller and all the disks you've got connected and put them in another system. If a simple "ls" is causing your system to crash, trying to copy files isn't going to go very smoothly either.

Wait for PaleoN just so we have some consensus.

Yeah well, I/O error, but now it says it can be repaired and "only" 496 minutes of data will be lost.
Maybe this data was "already" lost, but it couldn't be calculated or displayed because of the other problems with the ZFS pool...?

And maybe the "ls -Ral" command triggered a reboot because it wanted to access / read files which were destroyed or unavailable. My guess is that it rebooted at 30% - 40% of the files being displayed. I can see it rebooted at a movie I downloaded. I don't know how ZFS stores files, but I reckon it's written randomly, right?

In regards to the 496 minutes of data lost: how should I interpret that? I think it's normal for ZFS to show this amount in minutes, but what does that mean in MB or GB? I cannot put a finger on that...

Also it says it wants to, and I quote:

"Returning the pool to its state as of Sun Mar 24 11:13:56 2013"

That's the state AFTER I could finally access the ZFS pool called "storage". But after that I didn't add, change or remove files. So how can stuff be lost, if I didn't change, add or delete files after that time? Weird...
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
Can he mount the pool read-only and attempt selective data recovery? I.e., copying off pictures and stuff?

If the pool is mounted read-only and attempting a file copy crashes the system, I assume nothing on the pool can change, so no harm done?

edit:

Just noticed the 496 minutes of suggested rollback is 'after' the initial problem accessing the pool.

This implies the pool has changed after it was mounted for recovery.

Would it have been better to have been trying readonly mounts exclusively during attempted recovery? Just to ensure nothing got changed on a potentially damaged pool?
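Something like this should do a read-only import, if I'm not mistaken (pool name taken from the earlier output):

Code:
# read-only import attempt -- ZFS shouldn't write anything to the pool this way
zpool import -o readonly=on storage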
 

HHawk

Contributor
Joined
Jun 8, 2011
Messages
176
Just noticed the 496 minutes of suggested rollback is 'after' the initial problem accessing the pool.

This implies the pool has changed after it was mounted for recovery.

Would it have been better to have been trying readonly mounts exclusively during attempted recovery? Just to ensure nothing got changed on a potentially damaged pool?

Well that's the weird part... I didn't add, remove or change anything. The only thing I did was download a few files to see if they could still be read or opened. That shouldn't have affected the pool in any way, right? :confused:
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
Yeah well, I/O error, but now it says it can be repaired and "only" 496 minutes of data will be lost.
Maybe this data was "already" lost, but it couldn't be calculated or displayed because of the other problems with the ZFS pool...?

And maybe the "ls -Ral" command triggered a reboot because it wanted to access / read files which were destroyed or unavailable. My guess is that it rebooted at 30% - 40% of the files being displayed. I can see it rebooted at a movie I downloaded. I don't know how ZFS stores files, but I reckon it's written randomly, right?

In regards to the 496 minutes of data lost: how should I interpret that? I think it's normal for ZFS to show this amount in minutes, but what does that mean in MB or GB? I cannot put a finger on that...

Also it says it wants to, and I quote:

"Returning the pool to its state as of Sun Mar 24 11:13:56 2013"

That's the state AFTER I could finally access the ZFS pool called "storage". But after that I didn't add, change or remove files. So how can stuff be lost, if I didn't change, add or delete files after that time? Weird...

I'm not sure about the 496 minutes. The amount of data depends on how much was written. I think it has to do with the -T import and rolling back transactions to try and get the pool to mount. Losing 496 minutes is better than losing everything, and I don't think it's as important as figuring out what's causing the I/O error. It seems possible it gets to a certain point, maybe the same point that the "ls" got to, and there's some error. I realize you didn't change anything. I don't think the 496 minutes of data is anything we have control over, so you'll have to take what you can get, but until you figure out what's causing the I/O error, I wouldn't trust your system.

I think Titan's suggestion about mounting read-only is a good one.

I also think I'd do what the error says and try "zpool import -F storage" and then do a scrub, then you can remount it read-only and try copying stuff when you have some disks to copy to.
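Roughly this sequence, if I'm thinking about it right (pool name as before, and only once you have somewhere to copy the data to):

Code:
zpool import -F storage                 # roll back transactions so the pool imports
zpool scrub storage                     # then scrub
zpool status -v storage                 # keep an eye on scrub progress and errors
zpool export storage                    # once the scrub is done, export...
zpool import -o readonly=on storage     # ...and re-import read-only for copying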

I don't know PaleoN's schedule and I imagine he has other things he wants to do since it is the weekend.
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
I also think I'd do what the error says and try "zpool import -F storage" and then do a scrub, then you can remount it read-only and try copying stuff when you have some disks to copy to.

If it were me, I'd hold off on doing a scrub.

I thought it was inadvisable to scrub a pool that is having problems?

I agree the 496 minute rollback is probably minor. But I wouldn't be scrubbing it until I had copies of my data, or if a scrub was absolutely required in order to get data.

Honestly, in this situation, if I had good copies of my data, I'd probably recreate the pool from scratch just to be sure. After copying everything off, and before recreating the pool, I'd probably do some surface verification of the drives. As in a dd wipe, a dd read, and another SMART long test of each drive. Might be a bit overkill, but it's something I would do.
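Per drive, that would be something like the following (da0 is just an example device here, and the first dd of course destroys everything on that disk):

Code:
dd if=/dev/zero of=/dev/da0 bs=1m     # full write pass (wipes all data on da0!)
dd if=/dev/da0 of=/dev/null bs=1m     # full read pass
smartctl -t long /dev/da0             # start a SMART extended self-test
smartctl -a /dev/da0                  # check the results once the test has finished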
 

HHawk

Contributor
Joined
Jun 8, 2011
Messages
176
Okay done that.

Here is the output:

Code:
root@mfsbsd:/root # zpool import
   pool: storage
     id: 17472259698871586545
  state: DEGRADED
 status: One or more devices are missing from the system.
 action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
   see: http://illumos.org/msg/ZFS-8000-2Q
 config:

        storage                                         DEGRADED
          raidz2-0                                      DEGRADED
            gptid/19177fb9-25fa-11e2-9ab0-00151736994a  ONLINE
            gptid/19b5ec3a-25fa-11e2-9ab0-00151736994a  ONLINE
            gptid/3dc2f956-3de6-11e2-8af1-00151736994a  ONLINE
            gptid/1aefa3e9-25fa-11e2-9ab0-00151736994a  ONLINE
            5393521929904432319                         UNAVAIL  cannot open
            gptid/1c2d6a74-25fa-11e2-9ab0-00151736994a  ONLINE
root@mfsbsd:/root # zpool import storage
cannot import 'storage': I/O error
        Recovery is possible, but will result in some data loss.
        Returning the pool to its state as of Sun Mar 24 11:13:56 2013
        should correct the problem.  Approximately 496 minutes of data
        must be discarded, irreversibly.  Recovery can be attempted
        by executing 'zpool import -F storage'.  A scrub of the pool
        is strongly recommended after recovery.
root@mfsbsd:/root # zpool import -F storage
Pool storage returned to its state as of Sun Mar 24 11:13:56 2013.
Discarded approximately 496 minutes of transactions.
root@mfsbsd:/root # zpool status
  pool: storage
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 0 in 3h50m with 0 errors on Sun Feb 24 03:13:57 2013
config:

        NAME                                            STATE     READ WRITE CKSUM
        storage                                         DEGRADED     0     0     0
          raidz2-0                                      DEGRADED     0     0     0
            gptid/19177fb9-25fa-11e2-9ab0-00151736994a  ONLINE       0     0     0
            gptid/19b5ec3a-25fa-11e2-9ab0-00151736994a  ONLINE       0     0     0
            gptid/3dc2f956-3de6-11e2-8af1-00151736994a  ONLINE       0     0     0
            gptid/1aefa3e9-25fa-11e2-9ab0-00151736994a  ONLINE       0     0     0
            5393521929904432319                         UNAVAIL      0     0     0  was /dev/gptid/1b8f2b64-25fa-11e2-9ab0-00151736994a
            gptid/1c2d6a74-25fa-11e2-9ab0-00151736994a  ONLINE       0     0     0

errors: 1 data errors, use '-v' for a list
root@mfsbsd:/root # zpool status -v
  pool: storage
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 0 in 3h50m with 0 errors on Sun Feb 24 03:13:57 2013
config:

        NAME                                            STATE     READ WRITE CKSUM
        storage                                         DEGRADED     0     0     0
          raidz2-0                                      DEGRADED     0     0     0
            gptid/19177fb9-25fa-11e2-9ab0-00151736994a  ONLINE       0     0     0
            gptid/19b5ec3a-25fa-11e2-9ab0-00151736994a  ONLINE       0     0     0
            gptid/3dc2f956-3de6-11e2-8af1-00151736994a  ONLINE       0     0     0
            gptid/1aefa3e9-25fa-11e2-9ab0-00151736994a  ONLINE       0     0     0
            5393521929904432319                         UNAVAIL      0     0     0  was /dev/gptid/1b8f2b64-25fa-11e2-9ab0-00151736994a
            gptid/1c2d6a74-25fa-11e2-9ab0-00151736994a  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        /rw/storage/Jail/plugins/var/log/messages
root@mfsbsd:/root # 


One thing has changed in comparison to before: the "CKSUM" counts have dropped to 0 (zero), while they showed a nonzero number before.
I guess this is good, right?

I will now do a scrub.

- - - Updated - - -

Uhmz... I just read titan_rw's post about holding off the scrub... :s
It's now running... :/

- - - Updated - - -

And a new reboot during scrub. Not good. :S
 

HHawk

Contributor
Joined
Jun 8, 2011
Messages
176
Ugh...

Now it is getting worse. Whenever I try to import 'storage' it gives a kernel panic and causes a reboot.
Just when I was thinking things were getting better. Pfft...
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
If it were me, I'd hold off on doing a scrub.

I thought it was inadvisable to scrub a pool that is having problems?

I agree the 496 minute rollback is probably minor. But I wouldn't be scrubbing it until I had copies of my data, or if a scrub was absolutely required in order to get data.

Honestly, in this situation, if I had good copies of my data, I'd probably recreate the pool from scratch just to be sure. After copying everything off, and before recreating the pool, I'd probably do some surface verification of the drives. As in a dd wipe, a dd read, and another SMART long test of each drive. Might be a bit overkill, but it's something I would do.

I think in this case a scrub is needed.

I agree about recreating the pool from scratch, but not until the I/O error is identified or he could be right back in the same situation.

I would get some more disks and start trying to rsync my data off, but I'd expect it to crash like it did with the "ls", and I would redo the import -F and scrub after each crash.

Right now I feel like there's pressure to move ahead when he really needs to wait for extra disks to copy data to.

- - - Updated - - -

Uhmz... I just read titan_rw's post about holding off the scrub... :s
It's now running... :/

- - - Updated - - -

And a new reboot during scrub. Not good. :S

Yeah, that's not good, but I suspect it has to do with that I/O error. I would just stop and wait for some disks to copy stuff to, AND to see if PaleoN has any ideas, though at this point he probably doesn't want to help since you didn't wait.... I don't know.
 

HHawk

Contributor
Joined
Jun 8, 2011
Messages
176
Well, before I go to sleep, one last post:

Code:
root@mfsbsd:/root # zpool status -v
  pool: storage
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub in progress since Sun Mar 24 22:07:34 2013
        33.0G scanned out of 3.74T at 352M/s, 3h4m to go
        0 repaired, 0.86% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        storage                                         DEGRADED     0     0     8
          raidz2-0                                      DEGRADED     0     0    32
            gptid/19177fb9-25fa-11e2-9ab0-00151736994a  ONLINE       0     0     0
            gptid/19b5ec3a-25fa-11e2-9ab0-00151736994a  ONLINE       0     0     0
            gptid/3dc2f956-3de6-11e2-8af1-00151736994a  ONLINE       0     0     0
            gptid/1aefa3e9-25fa-11e2-9ab0-00151736994a  ONLINE       0     0     0
            5393521929904432319                         UNAVAIL      0     0     0  was /dev/gptid/1b8f2b64-25fa-11e2-9ab0-00151736994a
            gptid/1c2d6a74-25fa-11e2-9ab0-00151736994a  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        storage:<0x0>
        /rw/storage/Jail/plugins/var/log/messages
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
Yeah, that's not good, but I suspect it has to do with that I/O error. I would just stop and wait for some disks to copy stuff to, AND to see if PaleoN has any ideas, though at this point he probably doesn't want to help since you didn't wait.... I don't know.
Or I occasionally do other things and I was also fighting a PoS new cable modem today. I only looked now.

Now I am running / trying the commands without 1 disk (at a time).
This is not what I asked you to do. You were only supposed to try a normal zpool import with a missing disk. Still, I'm ecstatic you appear to have made some progress.

HHawk, you should copy off only the data you would like to save from the pool. The rest of it you can leave where it is. I'd suggest going for certain pictures first if possible. Now is the time to slow down. If that means you wait a couple of days while you buy some backup drives, so be it. We finally seem to be getting somewhere. Let's not throw it away.

I think in this case a scrub is needed.
Quite possibly required, but as titan_rw suggested, I'd attempt a read-only import first. Try to see what we can get first and then scrub. This may be an upgraded pool, which means the failmode property is likely not set to 'continue', which would be better.
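Something along these lines, assuming the read-only import works (pool name as before):

Code:
zpool import -o readonly=on storage    # read-only import attempt first
zpool get failmode,version storage     # then check the failmode (and pool version)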

Now it is getting worse. Whenever I try to import 'storage' it gives a kernel panic and causes a reboot.
Essentially, we can always make it worse. Did you happen to record the kernel panic? While I certainly can't do anything directly with it, it very well may provide some useful information. Write down or take pictures of all such occurrences.

there's that bloody I/O error AGAIN..... there's some hardware problem SOMEWHERE!
Assuming the message isn't a red herring, which I'm inclined to believe it's not. Perhaps the PSU isn't sending out the correct voltages all the time to all the drives. Usually such problems are more overt, but it'd be nice to rule out everything except the disks themselves.

and I would redo the import -F and scrub after each crash.
And don't do this. At least not blindly. Though some -F imports may be required.

If the pool is still up after that scrub, leave it alone and leave it on. If not, record the error, shut down and reconnect the disconnected drive before trying any further imports.
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
If the pool is still up after that scrub, leave it alone and leave it on. If not, record the error, shut down and reconnect the disconnected drive before trying any further imports.

Agreed. Other than the risk of more unexpected power outages (and PaleoN has a point about your PSU), the amount of money you'll spend on electricity vs. getting your data back is negligible.

When copying your data off I would use rsync with the logging option. This will help us figure out which file it's getting to before it crashes (I'm sure it will if 'ls' made it crash), and it can be excluded on the next try. So something like this:

rsync -av --partial --log-file=/some-place-not-on-your-pool.log source-directory destination-directory

After it crashes and you do whatever PaleoN suggests, look at that log file and get the path/name and exclude it by adding this to the command above for each file. If it gets long, we'll figure out another way:

--exclude 'dir1/dir2/file3.txt'
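So the whole thing would look roughly like this (the mount point, log location and destination are just placeholders for wherever your pool is mounted, a spot off the pool, and your backup disk):

Code:
rsync -av --partial \
    --log-file=/mnt/usb/rsync.log \
    --exclude 'dir1/dir2/file3.txt' \
    /storage/ /mnt/backupdisk/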
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
WOW. Pretty amazing. Watching to see how this goes! I was somewhat convinced way back in the beginning that one disk was having issues (although the disk may not have been the problem, but merely a symptom) and that removing the "bad" disk may have fixed the issue.

Definitely get yourself some backup storage space :P
 

HHawk

Contributor
Joined
Jun 8, 2011
Messages
176
If the pool is still up after that scrub, leave it alone and leave it on. If not, record the error, shut down and reconnect the disconnected drive before trying any further imports.

Nah, the pool wasn't up anymore.

So I shut down this morning, reconnected the drive and did the following:

Code:
root@mfsbsd:/root # zpool status
no pools available
root@mfsbsd:/root # zpool import
   pool: storage
     id: 17472259698871586545
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        storage                                         ONLINE
          raidz2-0                                      ONLINE
            gptid/19177fb9-25fa-11e2-9ab0-00151736994a  ONLINE
            gptid/19b5ec3a-25fa-11e2-9ab0-00151736994a  ONLINE
            gptid/3dc2f956-3de6-11e2-8af1-00151736994a  ONLINE
            gptid/1aefa3e9-25fa-11e2-9ab0-00151736994a  ONLINE
            gptid/1b8f2b64-25fa-11e2-9ab0-00151736994a  ONLINE
            gptid/1c2d6a74-25fa-11e2-9ab0-00151736994a  ONLINE
root@mfsbsd:/root # zpool import storage


After that last command it causes a kernel panic and reboots the NAS. :(
I am now at work, so I cannot physically make any changes (like switching off the NAS and re-disconnecting the first drive).

So what should I do now?

On a sidenote: I did back up the important pictures, however I would like to rescue more data (the more the better; of course I know some stuff is gone though).

- - - Updated - - -

I know I am a bad boy, but I tried the following command: zpool import -T 735242 storage

This is the result:

Code:
root@mfsbsd:/root # zpool import -T 735242 storage
Pool storage returned to its state as of Fri Mar 15 02:03:31 2013.
root@mfsbsd:/root # zpool status
  pool: storage
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 0 in 3h50m with 0 errors on Sun Feb 24 03:13:57 2013
config:

        NAME                                            STATE     READ WRITE CKSUM
        storage                                         ONLINE       0     0     2
          raidz2-0                                      ONLINE       0     0     4
            gptid/19177fb9-25fa-11e2-9ab0-00151736994a  ONLINE       0     0     0
            gptid/19b5ec3a-25fa-11e2-9ab0-00151736994a  ONLINE       0     0     0
            gptid/3dc2f956-3de6-11e2-8af1-00151736994a  ONLINE       0     0     0
            gptid/1aefa3e9-25fa-11e2-9ab0-00151736994a  ONLINE       0     0     0
            gptid/1b8f2b64-25fa-11e2-9ab0-00151736994a  ONLINE       0     0     0
            gptid/1c2d6a74-25fa-11e2-9ab0-00151736994a  ONLINE       0     0     0

errors: 1 data errors, use '-v' for a list


Running zpool status -v shows the following:
Code:
root@mfsbsd:/root # zpool status -v
  pool: storage
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 0 in 3h50m with 0 errors on Sun Feb 24 03:13:57 2013
config:

        NAME                                            STATE     READ WRITE CKSUM
        storage                                         ONLINE       0     0     2
          raidz2-0                                      ONLINE       0     0     4
            gptid/19177fb9-25fa-11e2-9ab0-00151736994a  ONLINE       0     0     0
            gptid/19b5ec3a-25fa-11e2-9ab0-00151736994a  ONLINE       0     0     0
            gptid/3dc2f956-3de6-11e2-8af1-00151736994a  ONLINE       0     0     0
            gptid/1aefa3e9-25fa-11e2-9ab0-00151736994a  ONLINE       0     0     0
            gptid/1b8f2b64-25fa-11e2-9ab0-00151736994a  ONLINE       0     0     0
            gptid/1c2d6a74-25fa-11e2-9ab0-00151736994a  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        /rw/storage/Jail/plugins/var/log/messages


Now I will wait till I get some feedback from paleoN, however I will try downloading some files first.
No harm in that, right?

On a sidenote: we are making progress here, right?
Because yesterday we had the pool "online" with 5 out of the 6 disks, and now it's online with all 6 disks.

I don't know if this is positive, but it does feel like something positive.
 

HHawk

Contributor
Joined
Jun 8, 2011
Messages
176
Something I am currently considering.

To make sure I can save / rescue / salvage stuff (and thus before purchasing several new hard disks): is there any way to really check whether I can rescue things without the NAS going into a new reboot, like yesterday with ProtoSD's "ls -Ral" command?

I also ran the command zpool history (since that doesn't do any harm). The output can be found on Pastebin here.
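For anyone wanting to do the same, something like this captures it to a file off the pool (the path is just an example):

Code:
zpool history storage > /tmp/zpool-history.txt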
 