Pool offline after disk replace

Giblet

Cadet
Joined
Mar 23, 2020
Messages
7
So, I have a FreeNAS pool (FreeNAS-11.2-U7) consisting of 3 vdevs.
In vdev 1 (raidz1) a drive failed.
I offlined the drive, replaced it with a new drive and went to the GUI to "replace" the offline drive.
When I arrived at the GUI, the other two drives in the vdev were faulted.
I suspect the drives haven't actually failed and that software is the culprit.
Could someone please help me try to recover the vdev and zpool?

Currently it looks like:

Code:
root@freenas[~]# zpool status
  pool: Storage_vol
 state: UNAVAIL
status: One or more devices are faulted in response to persistent errors.  There are insufficient replicas for the pool to
        continue functioning.
action: Destroy and re-create the pool from a backup source.  Manually marking the device
        repaired using 'zpool clear' may allow some data to be recovered.
  scan: resilvered 0 in 0 days 00:06:12 with 8519722 errors on Mon Mar 23 17:37:48 2020
config:

        NAME                                              STATE     READ WRITE CKSUM
        Storage_vol                                       UNAVAIL      0     7     0
          raidz1-0                                        UNAVAIL      0    16     0
            gptid/06ab7f99-967f-11e8-82cc-000c2998b1b3    FAULTED      0    11     0  too many errors
            replacing-1                                   OFFLINE      0     0     0
              1402766699997302299                         OFFLINE     12   148    34  was /dev/gptid/07916e44-967f-11e8-82cc-000c2998b1b3
              gptid/bc8bcd4d-6d23-11ea-99a0-9c5c8e4f22ec  ONLINE       0     0     0
            gptid/08328bac-967f-11e8-82cc-000c2998b1b3    FAULTED      0    16     0  too many errors
          raidz1-1                                        ONLINE       0     0     0
            gptid/317fd2d7-6d14-11e9-81d5-000c2998b1b3    ONLINE       0     0     0
            gptid/33d3620b-6d14-11e9-81d5-000c2998b1b3    ONLINE       0     0     0
            gptid/35bf3546-6d14-11e9-81d5-000c2998b1b3    ONLINE       0     0     0
            gptid/379da7b5-6d14-11e9-81d5-000c2998b1b3    ONLINE       0     0     0
          raidz1-2                                        ONLINE       0     0     0
            gptid/7cee2bde-e159-11e9-832e-9c5c8e4f22ec    ONLINE       0     0     0
            gptid/7ddd1c63-e159-11e9-832e-9c5c8e4f22ec    ONLINE       0     0     0
            gptid/7eca5e4d-e159-11e9-832e-9c5c8e4f22ec    ONLINE       0     0     0
            gptid/7fba2e00-e159-11e9-832e-9c5c8e4f22ec    ONLINE       0     0     0
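For reference, a rough CLI equivalent of the GUI offline-and-replace steps described above would look like the following (the GUI also partitions the new disk before attaching it; the gptids are the failed member and its replacement as they appear in the status output):

Code:
# Take the failing member offline (label taken from the status output)
zpool offline Storage_vol gptid/07916e44-967f-11e8-82cc-000c2998b1b3
# Attach the new disk's data partition in its place; this starts the resilver
zpool replace Storage_vol gptid/07916e44-967f-11e8-82cc-000c2998b1b3 gptid/bc8bcd4d-6d23-11ea-99a0-9c5c8e4f22ec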
 

Giblet

Cadet
Joined
Mar 23, 2020
Messages
7
I think disks da0 and da2 for some reason got a different gptid or something, and now FreeNAS no longer recognises them as part of the vdev?
Is there someone who can please help me try to clean this mess up?
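One way to check that (assuming the suspect disks really are da0 and da2) would be to compare the gptid labels FreeBSD currently reports for those disks with the labels listed in the zpool status output above, for example:

Code:
# Map gptid labels to the devices/partitions that carry them
glabel status | grep gptid
# Show the partition names and raw UUIDs on one of the suspect disks (da0 as an example)
gpart list da0 | grep -E 'Name|rawuuid'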
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
You might first try a reboot; you could find that the system is able to mount the pool again (but I would use that opportunity to back up the data as quickly as possible, before the drives are faulted again).

You can share the output from SMART for those drives if you would like some help in determining the cause of the problems.
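A minimal sketch of gathering that information (the device names da0 and da2 are assumptions; match them to the gptids first):

Code:
# Full SMART report for each of the faulted drives
smartctl -a /dev/da0
smartctl -a /dev/da2
# After the reboot, check whether the pool imported and what state it is in
zpool status -v Storage_vol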
 

Giblet

Cadet
Joined
Mar 23, 2020
Messages
7
Thanks a bunch!
I didn't dare to reboot because it might break things further.
But it seems the other two disks have now been correctly identified as members of the vdev and the rebuild is in progress.
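For anyone following along, the rebuild (resilver) progress can be watched with a plain status query, and a scrub afterwards will re-verify the data (pool name taken from the output above):

Code:
# Shows resilver progress and lists any files with unrecoverable errors
zpool status -v Storage_vol
# Once the resilver completes, re-check all data on the pool
zpool scrub Storage_vol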
 