ZFS dead drive, confused about pool status


jrodder

Dabbler
Joined
Nov 10, 2011
Messages
28
So I had a spare storage system that I really only used on my DMZ, to swap data to for machines I was working on that weren't allowed on my main network. Basically an ancient Dell with 4 spare drives I had lying around. I did this a few years ago, so I have forgotten exactly how I had it set up, but I *think* what I did was 3 drives in ZFS RAIDZ1 (250GB apiece), and a 4th drive that was 750GB set up as a mirror of the vdev. One of the drives is completely dead, the one I thought was the mirror. However, upon booting the system I was getting "status unavailable" for everything. I decided to try to detach the zpool and reattach it, and now I am even more confused. Let me paste what I can see now and maybe someone can help me shed light on it?

Code:
[root@DMZNAS] ~# zpool status
no pools available
[root@DMZNAS] ~# zpool import
  pool: TANK
    id: 1936822090422793928
  state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
        devices and try again.
  see: http://www.sun.com/msg/ZFS-8000-6X
config:
 
        TANK        UNAVAIL  missing device
          raidz1-0  ONLINE
            ada0p2  ONLINE
            ada1p2  ONLINE
            ada2p2  ONLINE
 
        Additional devices are known to be part of this pool, though their
        exact configuration cannot be determined.
[root@DMZNAS] ~# zdb -e TANK
 
Configuration for import:
        vdev_children: 2
        version: 28
        pool_guid: 1936822090422793928
        name: 'TANK'
        state: 0
        hostid: 1375593374
        hostname: 'DMZNAS.local'
        vdev_tree:
            type: 'root'
            id: 0
            guid: 1936822090422793928
            children[0]:
                type: 'raidz'
                id: 0
                guid: 1201893195064143088
                nparity: 1
                metaslab_array: 23
                metaslab_shift: 32
                ashift: 9
                asize: 593691672576
                is_log: 0
                children[0]:
                    type: 'disk'
                    id: 0
                    guid: 1110348436019
                    phys_path: '/dev/ada0p2'
                    whole_disk: 0
                    DTL: 34
                    path: '/dev/dsk/ada0p2'
                children[1]:
                    type: 'disk'
                    id: 1
                    guid: 18009700714485737566
                    phys_path: '/dev/ada1p2'
                    whole_disk: 0
                    DTL: 33
                    path: '/dev/dsk/ada1p2'
                children[2]:
                    type: 'disk'
                    id: 2
                    guid: 7830532914506800351
                    phys_path: '/dev/ada3p2'
                    whole_disk: 0
                    DTL: 32
                    path: '/dev/dsk/ada2p2'
            children[1]:
                type: 'missing'
                id: 1
                guid: 0
zdb: can't open 'TANK': File exists
[root@DMZNAS] ~#
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
It looks like your zpool consists of 3 disks in raidz1 striped with another disk. That 'other' disk is missing, hence the inability to import the pool.

You'll have to attach the missing disk, or if it is attached, figure out why it's not being recognized.

Definitely not a recommended zpool configuration.
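For what it's worth, a layout like that usually comes from 'zpool add' rather than 'zpool attach'. Rough sketch of the difference (device names here are just examples, not necessarily your actual setup):

Code:
# Creates a 3-disk raidz1 vdev
zpool create TANK raidz1 ada0p2 ada1p2 ada2p2

# Adds the 4th disk as a SECOND vdev. The pool is now striped across both
# vdevs, and losing either vdev loses the whole pool. zpool normally warns
# about the mismatched replication level here and needs -f to go through.
zpool add -f TANK ada3p2

# 'zpool attach' is what creates mirrors, but it only attaches to an existing
# single-disk or mirror vdev. You can't attach a disk to mirror a raidz vdev.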
 

jrodder

Dabbler
Joined
Nov 10, 2011
Messages
28
I guess I was thinking that I would have a copy of all my data on the "other" disk, and that way if I completely lost the raidz1 I would still have everything on the 4th disk. Looks like I thought wrong. That other disk is dead-dead-dead. I guess what confused me is that raidz1-0 is showing ONLINE. Maybe in the future I should go with something like a RAIDZ1, plus a separate UFS volume with an rsync task to keep the data mirrored (rough sketch at the end of this post)?

*or*

Would there be some way to edit the zpool configuration to disregard that stripe, since the 3 disks in the raidz1 vdev are OK?
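Something along these lines is what I was picturing for the rsync idea (paths are just examples, assuming the pool is mounted at /mnt/TANK and the UFS disk at /mnt/backup):

Code:
# Hypothetical nightly one-way copy from the pool onto the UFS disk
rsync -a --delete /mnt/TANK/ /mnt/backup/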
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
The raidz1 and the other disk were striped together, not mirrored.

If the 4th disk is dead, then all the data is gone. There is no partial recovery. The key point about zpool configs: lose any vdev in the pool, and the entire pool is lost. You had two vdevs: one with redundancy (3 drives in raidz1), and one vdev of a single disk, i.e. NO redundancy. Therefore, lose the single disk, and the entire pool dies.

If you wanted the single disk to be a backup of the 3 raidz1 disks, using a second pool and ZFS replication would be one way to do it. Then losing the single disk wouldn't affect the other 3, and losing 2 out of 3 of the raidz1 disks wouldn't affect the single disk.
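Roughly what I mean, if the 750GB disk were made its own pool (pool and snapshot names are just examples):

Code:
# Single-disk pool used purely as a backup target
zpool create BACKUP ada3p2

# Snapshot the main pool and replicate it into the backup pool
zfs snapshot -r TANK@backup1
zfs send -R TANK@backup1 | zfs receive -F BACKUP/TANK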
 

jrodder

Dabbler
Joined
Nov 10, 2011
Messages
28
Thanks for clearing that up. I thought what I had configured was a mirror of the redundant vdev onto a single disk, with neither depending on the other for the pool to survive. Live and learn; better here than on my production machine. For that one, I have 2 separate FreeNAS servers with ZFS replication between them.
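(For the record, the replication between the two boxes boils down to snapshots piped over ssh. The hostname and pool names below are made up:)

Code:
zfs snapshot -r TANK@auto1
zfs send -R TANK@auto1 | ssh othernas zfs receive -F BACKUPPOOL/TANK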
 