zpool status: no pools available


mlanner

Dabbler
Joined
Jun 11, 2011
Messages
23
Hi,

I had a disk that was throwing a bunch of SMART errors, so I figured I'd replace it. I shut down my NAS and popped in a new disk. Unfortunately, I got a bit confused as I did that and added the new disk as a stripe (a new single-disk vdev) instead of replacing the disk I had taken out.

In an attempt to get things "back to normal" I put my original disk back in. After doing so, I'm still unable to import the pool. FreeNAS sees my disks and isn't reporting anything wrong with them. As you can see below, it shows my RAIDZ2-2TB pool as UNAVAIL with missing devices.

I've run a bunch of zpool commands to try to figure out how to get it back, but to no avail. If anyone has any suggestions, I'd greatly appreciate it.

Build: FreeNAS-9.2.0-RELEASE-x64 with 8GB RAM.

Here is the output from some of the commands:

Code:
[root@filer01] ~# zpool import
  pool: RAIDZ2-2TB
    id: 10263045930313125735
  state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
    devices and try again.
  see: http://illumos.org/msg/ZFS-8000-6X
config:
 
    RAIDZ2-2TB                                      UNAVAIL  missing device
      raidz2-0                                      ONLINE
        gptid/4f5caad6-f186-11e2-a755-c8cbb8c7c733  ONLINE
        gptid/4fba2cc0-f186-11e2-a755-c8cbb8c7c733  ONLINE
        gptid/501812fa-f186-11e2-a755-c8cbb8c7c733  ONLINE
        gptid/50743a3c-f186-11e2-a755-c8cbb8c7c733  ONLINE
 
    Additional devices are known to be part of this pool, though their
    exact configuration cannot be determined.


Code:
[root@filer01] ~# camcontrol devlist
<WDC WD1003FBYX-01Y7B1 01.01V02>  at scbus0 target 0 lun 0 (ada0,pass0)
<WDC WD1003FBYX-01Y7B1 01.01V02>  at scbus1 target 0 lun 0 (ada1,pass1)
<WDC WD1003FBYX-01Y7B1 01.01V02>  at scbus2 target 0 lun 0 (ada2,pass2)
<WDC WD1003FBYX-01Y7B1 01.01V02>  at scbus3 target 0 lun 0 (ada3,pass3)
<SanDisk Cruzer Fit 1.22>          at scbus7 target 0 lun 0 (da0,pass4)


Code:
[root@filer01] ~# gpart show
=>        34  1953525101  ada0  GPT  (931G)
          34          94        - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  1949330703    2  freebsd-zfs  (929G)
 
=>        34  1953525101  ada1  GPT  (931G)
          34          94        - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  1949330703    2  freebsd-zfs  (929G)
 
=>        34  1953525101  ada2  GPT  (931G)
          34          94        - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  1949330703    2  freebsd-zfs  (929G)
 
=>        34  1953525101  ada3  GPT  (931G)
          34          94        - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  1949330703    2  freebsd-zfs  (929G)
 
=>      63  15633345  da0  MBR  (7.5G)
        63  3590433    1  freebsd  [active]  (1.7G)
  3590496        63      - free -  (31k)
  3590559  3590433    2  freebsd  (1.7G)
  7180992      3024    3  freebsd  (1.5M)
  7184016    41328    4  freebsd  (20M)
  7225344  8408064      - free -  (4.0G)
 
=>      0  3590433  da0s1  BSD  (1.7G)
        0      16        - free -  (8.0k)
      16  1930241      1  !0  (942M)
  1930257  1660176        - free -  (810M)
 
=>      0  3590433  da0s2  BSD  (1.7G)
        0      16        - free -  (8.0k)
      16  1930241      1  !0  (942M)
  1930257  1660176        - free -  (810M)


Code:
[root@filer01] ~# glabel status
                                      Name  Status  Components
gptid/4fba2cc0-f186-11e2-a755-c8cbb8c7c733    N/A  ada0p2
gptid/501812fa-f186-11e2-a755-c8cbb8c7c733    N/A  ada1p2
gptid/50743a3c-f186-11e2-a755-c8cbb8c7c733    N/A  ada2p2
gptid/4f5caad6-f186-11e2-a755-c8cbb8c7c733    N/A  ada3p2
                            ufs/FreeNASs3    N/A  da0s3
                            ufs/FreeNASs4    N/A  da0s4
                            ufs/FreeNASs1a    N/A  da0s1a
                    ufsid/521c684590455604    N/A  da0s2a
                            ufs/FreeNASs2a    N/A  da0s2a


Code:
[root@filer01] ~# dmesg | grep ada0
ada0 at ahcich0 bus 0 scbus0 target 0 lun 0
ada0: <WDC WD1003FBYX-01Y7B1 01.01V02> ATA-8 SATA 2.x device
ada0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada0: Command Queueing enabled
ada0: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C)
ada0: Previously was known as ad4
GEOM_ELI: Device ada0p1.eli created.
 
[root@filer01] ~# dmesg | grep ada1
ada1 at ahcich1 bus 0 scbus1 target 0 lun 0
ada1: <WDC WD1003FBYX-01Y7B1 01.01V02> ATA-8 SATA 2.x device
ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada1: Command Queueing enabled
ada1: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C)
ada1: Previously was known as ad6
GEOM_ELI: Device ada1p1.eli created.
 
[root@filer01] ~# dmesg | grep ada2
ada2 at ahcich2 bus 0 scbus2 target 0 lun 0
ada2: <WDC WD1003FBYX-01Y7B1 01.01V02> ATA-8 SATA 2.x device
ada2: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada2: Command Queueing enabled
ada2: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C)
ada2: Previously was known as ad8
GEOM_ELI: Device ada2p1.eli created.
 
[root@filer01] ~# dmesg | grep ada3
ada3 at ahcich3 bus 0 scbus3 target 0 lun 0
ada3: <WDC WD1003FBYX-01Y7B1 01.01V02> ATA-8 SATA 2.x device
ada3: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada3: Command Queueing enabled
ada3: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C)
ada3: Previously was known as ad10
GEOM_ELI: Device ada3p1.eli created.


Thanks in advance!
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
The striped disk you added became a critical part of the pool. The pool will not import without that drive.
In an attempt to get things "back to normal" I put my original disk back in. After doing so, I'm still unable to import the pool. FreeNAS sees my disks and isn't reporting anything wrong with them. As you can see below, it shows my RAIDZ2-2TB pool as UNAVAIL with missing devices.
The missing device is the striped disk. You need to plug it back in for the pool to import. Unfortunately, it is not possible to remove it from the pool. If you want to keep redundancy in your pool, you need to replace the failed disk and add another drive to mirror the single-drive vdev (this needs to be done via the CLI; a rough sketch is below). Or you can back up the pool, destroy it, and create a new one.
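For reference, the CLI side of that would look roughly like this once the pool imports again. The device name ada5 and the gptid placeholders are just examples; use the real labels shown by glabel status on your system.
Code:
# Partition the extra disk the usual FreeNAS way (example device ada5):
gpart create -s gpt ada5
gpart add -t freebsd-swap -s 2g -i 1 ada5
gpart add -t freebsd-zfs -i 2 ada5
# Then mirror the accidentally-striped vdev by attaching the new zfs partition
# to it (substitute the real gptid labels from glabel status):
zpool attach RAIDZ2-2TB gptid/<striped-disk-p2> gptid/<new-disk-p2>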
 

mlanner

Dabbler
Joined
Jun 11, 2011
Messages
23
Dusan,

Thanks for the analysis. Unfortunately I only have four disk slots in my server (HP MicroServer). I guess I'll have to find a SATA controller to pop in there to be able to use all five disks. Hopefully that will work. I have a spare NAS that I'm using right now. (This was just a backup target, so not much would really be lost.) Still, it would be nice to be able to restore it all, if possible.

Thanks for the help. I'll update the thread if I'm able to get it back or if I have specific questions about commands needed.
 

mlanner

Dabbler
Joined
Jun 11, 2011
Messages
23
Hmm ... I took the old, failing disk out and plugged the striped disk into the system. As expected, the RAIDZ2 now shows up as DEGRADED, which is fine, as it's effectively operating with RAIDZ1-level redundancy. Correct? However, the disk that I inadvertently added as a striped disk is now showing as "UNAVAIL cannot open".

Here's the output of zpool import with the new, striped drive inserted:
Code:
[root@filer01] ~# zpool import
  pool: RAIDZ2-2TB
    id: 10263045930313125735
  state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
    devices and try again.
  see: http://illumos.org/msg/ZFS-8000-6X
config:
 
    RAIDZ2-2TB                                      UNAVAIL  missing device
      raidz2-0                                      DEGRADED
        15840637288563004401                        UNAVAIL  cannot open
        gptid/4fba2cc0-f186-11e2-a755-c8cbb8c7c733  ONLINE
        gptid/501812fa-f186-11e2-a755-c8cbb8c7c733  ONLINE
        gptid/50743a3c-f186-11e2-a755-c8cbb8c7c733  ONLINE
 
    Additional devices are known to be part of this pool, though their
    exact configuration cannot be determined.


What commands would I need to run to bring the new, striped disk online so that I can actually import the pool again? Or do I still need to put the 5th (old) original disk back into the array?

Again, thanks in advance for any suggestions and/or instructions.
 

mlanner

Dabbler
Joined
Jun 11, 2011
Messages
23
I now have all 5 disks installed and recognized by the system. Still no luck importing the pool, though. :(

Latest output:
Code:
[root@filer01] ~# zpool import
  pool: RAIDZ2-2TB
    id: 10263045930313125735
  state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
    devices and try again.
  see: http://illumos.org/msg/ZFS-8000-6X
config:
 
    RAIDZ2-2TB                                      UNAVAIL  missing device
      raidz2-0                                      ONLINE
        gptid/4f5caad6-f186-11e2-a755-c8cbb8c7c733  ONLINE
        gptid/4fba2cc0-f186-11e2-a755-c8cbb8c7c733  ONLINE
        gptid/501812fa-f186-11e2-a755-c8cbb8c7c733  ONLINE
        gptid/50743a3c-f186-11e2-a755-c8cbb8c7c733  ONLINE
 
    Additional devices are known to be part of this pool, though their
    exact configuration cannot be determined.
[root@filer01] ~# zpool import -f RAIDZ2-2TB
cannot import 'RAIDZ2-2TB': one or more devices is currently unavailable
[root@filer01] ~# glabel status
                                      Name  Status  Components
gptid/4fba2cc0-f186-11e2-a755-c8cbb8c7c733    N/A  ada0p2
gptid/501812fa-f186-11e2-a755-c8cbb8c7c733    N/A  ada1p2
gptid/50743a3c-f186-11e2-a755-c8cbb8c7c733    N/A  ada2p2
gptid/b2acb767-8bcc-11e3-9747-c8cbb8c7c733    N/A  ada3p2
gptid/4f4cd87d-f186-11e2-a755-c8cbb8c7c733    N/A  ada4p1
gptid/4f5caad6-f186-11e2-a755-c8cbb8c7c733    N/A  ada4p2
                            ufs/FreeNASs3    N/A  da0s3
                            ufs/FreeNASs4    N/A  da0s4
                            ufs/FreeNASs1a    N/A  da0s1a
                    ufsid/521c684590455604    N/A  da0s2a
                            ufs/FreeNASs2a    N/A  da0s2a
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
Hmm ... I took the old, failing disk out and plugged the striped disk into the system. As expected, the RAIDZ2 now shows up as DEGRADED, which is fine, as it's effectively operating with RAIDZ1-level redundancy. Correct?
Yes.
However, the disk that I inadvertently added as a striped disk is now showing as "UNAVAIL cannot open".
No, the UNAVAIL disk is the failed one you removed. The inadvertently added disk is being referenced by this: "Additional devices are known to be part of this pool, though their exact configuration cannot be determined."
What commands would I need to run to bring the new, striped disk online so that I can actually import the pool again? Or do I still need to put the 5th (old) original disk back into the array?
It should just work; the old disk does not need to be present.
Can you please post output of:
zdb -l /dev/ada0p2
zdb -l /dev/ada3p2
 

mlanner

Dabbler
Joined
Jun 11, 2011
Messages
23
Thanks Dusan.

Can you please post output of:
zdb -l /dev/ada0p2
zdb -l /dev/ada3p2

Sure, here it is:
Code:
[root@filer01] ~# zdb -l /dev/ada0p2
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 5000
    name: 'RAIDZ2-2TB'
    state: 1
    txg: 3373858
    pool_guid: 10263045930313125735
    hostid: 2070241634
    hostname: 'filer01.example.com'
    top_guid: 16019072985416527237
    guid: 8421149948470126193
    vdev_children: 2
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 16019072985416527237
        nparity: 2
        metaslab_array: 34
        metaslab_shift: 35
        ashift: 9
        asize: 3992209850368
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 15840637288563004401
            path: '/dev/gptid/4f5caad6-f186-11e2-a755-c8cbb8c7c733'
            phys_path: '/dev/gptid/4f5caad6-f186-11e2-a755-c8cbb8c7c733'
            whole_disk: 1
            not_present: 1
            DTL: 188
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 8421149948470126193
            path: '/dev/gptid/4fba2cc0-f186-11e2-a755-c8cbb8c7c733'
            phys_path: '/dev/gptid/4fba2cc0-f186-11e2-a755-c8cbb8c7c733'
            whole_disk: 1
            create_txg: 4
        children[2]:
            type: 'disk'
            id: 2
            guid: 10724922771076217874
            path: '/dev/gptid/501812fa-f186-11e2-a755-c8cbb8c7c733'
            phys_path: '/dev/gptid/501812fa-f186-11e2-a755-c8cbb8c7c733'
            whole_disk: 1
            create_txg: 4
        children[3]:
            type: 'disk'
            id: 3
            guid: 10202858728757542620
            path: '/dev/gptid/50743a3c-f186-11e2-a755-c8cbb8c7c733'
            phys_path: '/dev/gptid/50743a3c-f186-11e2-a755-c8cbb8c7c733'
            whole_disk: 1
            create_txg: 4
    features_for_read:
--------------------------------------------
LABEL 1
--------------------------------------------
    version: 5000
    name: 'RAIDZ2-2TB'
    state: 1
    txg: 3373858
    pool_guid: 10263045930313125735
    hostid: 2070241634
    hostname: 'filer01.example.com'
    top_guid: 16019072985416527237
    guid: 8421149948470126193
    vdev_children: 2
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 16019072985416527237
        nparity: 2
        metaslab_array: 34
        metaslab_shift: 35
        ashift: 9
        asize: 3992209850368
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 15840637288563004401
            path: '/dev/gptid/4f5caad6-f186-11e2-a755-c8cbb8c7c733'
            phys_path: '/dev/gptid/4f5caad6-f186-11e2-a755-c8cbb8c7c733'
            whole_disk: 1
            not_present: 1
            DTL: 188
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 8421149948470126193
            path: '/dev/gptid/4fba2cc0-f186-11e2-a755-c8cbb8c7c733'
            phys_path: '/dev/gptid/4fba2cc0-f186-11e2-a755-c8cbb8c7c733'
            whole_disk: 1
            create_txg: 4
        children[2]:
            type: 'disk'
            id: 2
            guid: 10724922771076217874
            path: '/dev/gptid/501812fa-f186-11e2-a755-c8cbb8c7c733'
            phys_path: '/dev/gptid/501812fa-f186-11e2-a755-c8cbb8c7c733'
            whole_disk: 1
            create_txg: 4
        children[3]:
            type: 'disk'
            id: 3
            guid: 10202858728757542620
            path: '/dev/gptid/50743a3c-f186-11e2-a755-c8cbb8c7c733'
            phys_path: '/dev/gptid/50743a3c-f186-11e2-a755-c8cbb8c7c733'
            whole_disk: 1
            create_txg: 4
    features_for_read:
--------------------------------------------
LABEL 2
--------------------------------------------
    version: 5000
    name: 'RAIDZ2-2TB'
    state: 1
    txg: 3373858
    pool_guid: 10263045930313125735
    hostid: 2070241634
    hostname: 'filer01.example.com'
    top_guid: 16019072985416527237
    guid: 8421149948470126193
    vdev_children: 2
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 16019072985416527237
        nparity: 2
        metaslab_array: 34
        metaslab_shift: 35
        ashift: 9
        asize: 3992209850368
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 15840637288563004401
            path: '/dev/gptid/4f5caad6-f186-11e2-a755-c8cbb8c7c733'
            phys_path: '/dev/gptid/4f5caad6-f186-11e2-a755-c8cbb8c7c733'
            whole_disk: 1
            not_present: 1
            DTL: 188
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 8421149948470126193
            path: '/dev/gptid/4fba2cc0-f186-11e2-a755-c8cbb8c7c733'
            phys_path: '/dev/gptid/4fba2cc0-f186-11e2-a755-c8cbb8c7c733'
            whole_disk: 1
            create_txg: 4
        children[2]:
            type: 'disk'
            id: 2
            guid: 10724922771076217874
            path: '/dev/gptid/501812fa-f186-11e2-a755-c8cbb8c7c733'
            phys_path: '/dev/gptid/501812fa-f186-11e2-a755-c8cbb8c7c733'
            whole_disk: 1
            create_txg: 4
        children[3]:
            type: 'disk'
            id: 3
            guid: 10202858728757542620
            path: '/dev/gptid/50743a3c-f186-11e2-a755-c8cbb8c7c733'
            phys_path: '/dev/gptid/50743a3c-f186-11e2-a755-c8cbb8c7c733'
            whole_disk: 1
            create_txg: 4
    features_for_read:
--------------------------------------------
LABEL 3
--------------------------------------------
    version: 5000
    name: 'RAIDZ2-2TB'
    state: 1
    txg: 3373858
    pool_guid: 10263045930313125735
    hostid: 2070241634
    hostname: 'filer01.example.com'
    top_guid: 16019072985416527237
    guid: 8421149948470126193
    vdev_children: 2
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 16019072985416527237
        nparity: 2
        metaslab_array: 34
        metaslab_shift: 35
        ashift: 9
        asize: 3992209850368
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 15840637288563004401
            path: '/dev/gptid/4f5caad6-f186-11e2-a755-c8cbb8c7c733'
            phys_path: '/dev/gptid/4f5caad6-f186-11e2-a755-c8cbb8c7c733'
            whole_disk: 1
            not_present: 1
            DTL: 188
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 8421149948470126193
            path: '/dev/gptid/4fba2cc0-f186-11e2-a755-c8cbb8c7c733'
            phys_path: '/dev/gptid/4fba2cc0-f186-11e2-a755-c8cbb8c7c733'
            whole_disk: 1
            create_txg: 4
        children[2]:
            type: 'disk'
            id: 2
            guid: 10724922771076217874
            path: '/dev/gptid/501812fa-f186-11e2-a755-c8cbb8c7c733'
            phys_path: '/dev/gptid/501812fa-f186-11e2-a755-c8cbb8c7c733'
            whole_disk: 1
            create_txg: 4
        children[3]:
            type: 'disk'
            id: 3
            guid: 10202858728757542620
            path: '/dev/gptid/50743a3c-f186-11e2-a755-c8cbb8c7c733'
            phys_path: '/dev/gptid/50743a3c-f186-11e2-a755-c8cbb8c7c733'
            whole_disk: 1
            create_txg: 4
    features_for_read:
[root@filer01] ~# zdb -l /dev/ada3p2
--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
failed to unpack label 2
--------------------------------------------
LABEL 3
--------------------------------------------
failed to unpack label 3
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
Can you confirm that ada3 is the drive you striped in? (you can check the serial number on the View Disks screen)
What exactly did you do after you realized the mistake?
Please post output of "gpart list /dev/ada3".
 

mlanner

Dabbler
Joined
Jun 11, 2011
Messages
23
Yes, ada3 is the disk that got striped in. ada4 is the "bad" disk with SMART errors.

As I recall, when I realized the mistake I detached the pool and tried to put the "bad" disk back into the system. After that I've not been able to import the pool, no matter which disks are in or out of the system.

Code:
# gpart list /dev/ada3
gpart: No such geom: /dev/ada3
 

mlanner

Dabbler
Joined
Jun 11, 2011
Messages
23
Ah, sorry, this command please:
gpart list ada3

Code:
[root@filer01] ~# gpart list ada3
Geom name: ada3
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 1953525134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada3p1
  Mediasize: 2147483648 (2.0G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 65536
  Mode: r1w1e1
  rawuuid: b2a13966-8bcc-11e3-9747-c8cbb8c7c733
  rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 2147483648
  offset: 65536
  type: freebsd-swap
  index: 1
  end: 4194431
  start: 128
2. Name: ada3p2
  Mediasize: 998057319936 (929G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 2147549184
  Mode: r0w0e0
  rawuuid: b2acb767-8bcc-11e3-9747-c8cbb8c7c733
  rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 998057319936
  offset: 2147549184
  type: freebsd-zfs
  index: 2
  end: 1953525134
  start: 4194432
Consumers:
1. Name: ada3
  Mediasize: 1000204886016 (931G)
  Sectorsize: 512
  Mode: r1w1e2
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
Hmm, ada3 has the proper partitions, but the ZFS partition doesn't contain any vdev labels, so zpool import doesn't recognize it as a ZFS device. This means the pool is not importable and you should restore from a backup. Some people have managed to salvage data in this situation, but it involves low-level disk editing, as you need to forge an "acceptable" vdev label: Recover data from a pool that cannot be imported...
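If you want to double-check before writing the disk off, a read-only peek at the start of ada3p2 will tell you whether the first label region is completely zeroed or merely damaged (labels 0 and 1 occupy the first 512 KiB of the partition). This is only a diagnostic; it changes nothing on disk.
Code:
# Dump the first 512 KiB of ada3p2 (vdev labels 0 and 1) and inspect it;
# hexdump collapses runs of zeroes, so an (almost) empty dump means the
# labels were overwritten rather than just corrupted.
dd if=/dev/ada3p2 bs=256k count=2 2>/dev/null | hexdump -C | less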
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yikes! That's a mess!
 

mlanner

Dabbler
Joined
Jun 11, 2011
Messages
23
Dusan,

Thanks for verifying. I'll give that a read to see what I can possibly achieve. Like I said, it was mostly backups of data on there, and while it would be great to get it back, if I can't, it's not the end of the world.

cyberjock: Yes, it surely is. :S
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I do offer recovery services, but I can guarantee you it's not worth the money for backups. I'd just redo the pool and let the backups begin populating the server. That is, after all, why we have backups!
 

mlanner

Dabbler
Joined
Jun 11, 2011
Messages
23
cyberjock,

Yes, indeed. Not that it really matters in the long run, but now I wish I had had offsite backups of the backups on that NAS. Oh well ...
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
By the way, when I said it's not worth the money for backups, I meant it wasn't worth the money to try to recover your backups. Backups are definitely a good investment if your data is worth it!
 

mlanner

Dabbler
Joined
Jun 11, 2011
Messages
23
Yup, I understood that. Still, it might be worth my time to see if it's possible. I might learn something.
 

mlanner

Dabbler
Joined
Jun 11, 2011
Messages
23
So, I've been trying to understand the vdev label a bit better. From a few ZFS sources I've read:
Four copies of the vdev label are written to each physical vdev within a ZFS storage pool. Aside from the small time frame during label update (described below), these four labels are identical and any copy can be used to access and verify the contents of the pool. When a device is added to the pool, ZFS places two labels at the front of the device and two labels at the back of the device.

Maybe a dumb question, but does that mean all four of them were wiped out on my ada3 disk? If not, perhaps I could restore the label using one of the remaining copies. Or has ZFS already tried to read those and failed?
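From what I've read since, zdb -l already tries all four copies (labels 0 and 1 at the front of the partition, 2 and 3 at the end), so the "failed to unpack label 0-3" output above means ZFS has already checked each one and found nothing usable. For my own notes, here's a rough sketch, based on my understanding of the on-disk format, of where the four 256 KiB labels would sit on ada3p2 (partition size taken from the gpart list output earlier in the thread):
Code:
# Each vdev label is 256 KiB. Labels 0/1 sit at the start of the device and
# labels 2/3 at the end, with the device size aligned down to a 256 KiB boundary.
S=998057319936                      # size of ada3p2 in bytes (from gpart list ada3)
A=$(( (S / 262144) * 262144 ))      # align down to 256 KiB
echo "L0=0  L1=262144  L2=$((A - 524288))  L3=$((A - 262144))"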
 