Corrupted Pool; Healthy Disks. Can I recover?

j_sh

Cadet
Joined
Jul 20, 2019
Messages
5
Hello, everyone. I've done a lot of searching of forums and other research, and I've found a lot of good information; but I just can't get to the bottom of this. Thanks in advance for any help you can provide!

Randomly, yesterday, my pool disappeared. I can't promise there wasn't a power blip, but I see no other signs of that with other appliances.
The disks are healthy: they show up in the system and report a healthy SMART status. Here is all the info I know to provide; glad to provide more if needed:

System info:
Code:
[root@freenas /]# uname -a
FreeBSD freenas.local 11.2-STABLE FreeBSD 11.2-STABLE #0 r325575+95cc58ca2a0(HEAD): Fri May 10 15:57:35 EDT 2019     root@mp20.tn.ixsystems.com:/freenas-releng/freenas/_BE/objs/freenas-releng/freenas/_BE/os/sys/FreeNAS.amd64  amd64


Pool status. Boot pool shows up. Storage pool does not:
Code:
[root@freenas /]# zfs list
NAME                                USED  AVAIL  REFER  MOUNTPOINT
freenas-boot                       2.03G   897G   176K  none
freenas-boot/ROOT                  2.02G   897G   136K  none
freenas-boot/ROOT/11.2-U4.1        2.02G   897G  1.01G  /
freenas-boot/ROOT/Initial-Install     8K   897G  1.01G  legacy
freenas-boot/ROOT/default           468K   897G  1.01G  legacy


Code:
[root@freenas /]# zpool status
  pool: freenas-boot
state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:12 with 0 errors on Fri Jul 19 03:45:32 2019
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      ada0p2    ONLINE       0     0     0

errors: No known data errors


Import shows my storage pool, but…
Code:
[root@freenas /]# zpool import
   pool: storage-pool
     id: 17899316521172327458
  state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
    devices and try again.
   see: http://illumos.org/msg/ZFS-8000-3C
config:

    storage-pool                                        UNAVAIL  insufficient replicas
      gptid/7851d72e-348e-11e9-9882-60a44ccd2fee.eli  ONLINE
      8531039083316170755                             UNAVAIL  cannot open


Check DB for pool:
Code:
[root@freenas /]# sqlite3 /data/freenas-v1.db "select * from storage_volume"
1|storage-pool|17899316521172327458|1|a46abcba-d24d-4bc6-9c81-b1537224770e


Disks are all there:
Code:
[root@freenas /]# gpart show
=>        40  1953525088  ada0  GPT  (932G)
          40      532480     1  efi  (260M)
      532520  1952972800     2  freebsd-zfs  (931G)
  1953505320       19808        - free -  (9.7M)

=>        40  7814037088  ada1  GPT  (3.6T)
          40          88        - free -  (44K)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  7809842696     2  freebsd-zfs  (3.6T)

=>        40  7814037088  ada2  GPT  (3.6T)
          40          88        - free -  (44K)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  7809842696     2  freebsd-zfs  (3.6T)


Code:
[root@freenas /]# camcontrol devlist
<ST1000DX001-1CM162 CC43>          at scbus2 target 0 lun 0 (pass0,ada0)
<ASUS DRW-24B3ST   i 1.00>         at scbus3 target 0 lun 0 (pass1,cd0)
<ST4000DM004-2CV104 0001>          at scbus4 target 0 lun 0 (pass2,ada1)
<ST4000DM004-2CV104 0001>          at scbus5 target 0 lun 0 (pass3,ada2)


I don't know much about ZFS labels, but this certainly isn't right:
Code:
[root@freenas /]# zdb -l /dev/ada1
------------------------------------
LABEL 0
------------------------------------
failed to unpack label 0
------------------------------------
LABEL 1
------------------------------------
failed to unpack label 1
------------------------------------
LABEL 2
------------------------------------
failed to unpack label 2
------------------------------------
LABEL 3
------------------------------------
failed to unpack label 3

[root@freenas /]# zdb -l /dev/ada2
------------------------------------
LABEL 0
------------------------------------
failed to unpack label 0
------------------------------------
LABEL 1
------------------------------------
failed to unpack label 1
------------------------------------
LABEL 2
------------------------------------
failed to unpack label 2
------------------------------------
LABEL 3
------------------------------------
failed to unpack label 3


That being said, even my boot pool disk looks the same:
Code:
[root@freenas /]# zdb -l /dev/ada0
------------------------------------
LABEL 0
------------------------------------
failed to unpack label 0
------------------------------------
LABEL 1
------------------------------------
failed to unpack label 1
------------------------------------
LABEL 2
------------------------------------
failed to unpack label 2
------------------------------------
LABEL 3
------------------------------------
failed to unpack label 3


I'll spare you the smart info details, but all disks passed.

For what it's worth, in the web interface, ada1 is associated with "storage-pool" but ada2 says "unused".

Any ideas on how to recover the pool?

THANKS!
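Update (my best guess from further reading; unverified): zdb -l reads labels from the exact device you point it at, and ZFS keeps its labels inside the freebsd-zfs partition, not at the start of the whole disk. That would explain why even the healthy boot disk "fails" above. The partition (or, for this GELI-encrypted pool, the attached .eli device) would be the meaningful target:
Code:
zdb -l /dev/ada0p2                                          # boot pool's freebsd-zfs partition
zdb -l /dev/gptid/7851d72e-348e-11e9-9882-60a44ccd2fee.eli  # data partition, after GELI attach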
 
PhiloEpisteme

Joined
Oct 18, 2018
Messages
969
Code:
[root@freenas /]# zpool import
   pool: storage-pool
     id: 17899316521172327458
  state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
    devices and try again.
   see: http://illumos.org/msg/ZFS-8000-3C
config:

    storage-pool                                        UNAVAIL  insufficient replicas
      gptid/7851d72e-348e-11e9-9882-60a44ccd2fee.eli  ONLINE
      8531039083316170755                             UNAVAIL  cannot open

It looks like your storage pool is composed of two striped vdevs, each consisting of a single disk. Remember: pools are made of vdevs, and vdevs are made of disks. If you lose a single vdev in your pool, you lose the pool, which is why redundancy within vdevs is so important. If I am reading the above correctly, you had a pool with 2 vdevs, each composed of a single disk. The system was unable to access one of those disks, and so you lost your pool.
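If it helps, the "lose one vdev, lose the pool" rule can be sketched in a few lines of plain shell (a toy model, not real ZFS; each argument marks a top-level vdev as 1 = healthy or 0 = failed):

```shell
# Toy model of "lose one vdev, lose the pool" (plain shell, not real ZFS).
# Each argument marks a top-level vdev: 1 = healthy, 0 = failed.
# The pool survives only if every vdev survives; striping adds capacity, not redundancy.
pool_state() {
    for v in "$@"; do
        if [ "$v" -eq 0 ]; then
            echo "UNAVAIL (insufficient replicas)"
            return
        fi
    done
    echo "ONLINE"
}

pool_state 1 1   # two healthy single-disk vdevs
pool_state 1 0   # one vdev lost -> the whole pool is lost, as happened here
```

With redundancy inside a vdev (a mirror), a single disk failure would not take the vdev down in the first place.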

I'll spare you the smart info details, but all disks passed.
Actually, SMART readouts of your drives via smartctl -a /dev/ada{x} would be very helpful. Feel free to wrap them in code tags or attach them to make them easier to read.

For what it's worth, in the web interface, ada1 is associated with "storage-pool" but ada2 says "unused".
This is unsurprising. From the printout of zpool import you see that one of the disks in storage-pool is unavailable.



There may be hope for you. If a cable is loose or something similar, you may be able to get that disk back online and recover your vdev, and thus your pool. If you do, you should immediately begin planning to migrate your data off of that pool and rebuild it. The best way to do that depends on your system somewhat. Could you post your exact full system specs, including hardware? Look to my signature for the kind of detail that would be helpful. It is totally fine to provide too much information rather than not enough. :)



Anyway, check for a loose cable first, then check the SMART readouts. If ada2 shows up after you check the cables, run a long SMART test on it and then post that output.
 

j_sh
Thanks, @PhiloEpisteme! Very good info.

I've checked / swapped cables, SATA ports, etc. Same result. Doesn't seem like a hardware problem, especially considering the OS sees the physical disk just fine.

It looks like your storage pool is composed of two striped vdevs, each consisting of a single disk.

How were you able to come to this conclusion based on the output of the import command?

I will admit, though I'm a seasoned *nix admin, this was my first FreeBSD/ZFS experience. I thought I had created a mirror; clearly, that's not the case. Ha! I'm anxious to learn more about the types of pools and how to manage them.

Anyway, to me, it seems the OS sees the disk, but ZFS doesn't recognize it as part of the pool. Am I missing something? You said "if ada2 shows up", but it never didn't show up, other than as part of the pool.

Thanks, again!

Here are the smart details, as requested:
Code:
[root@freenas /]# smartctl -a /dev/ada1
smartctl 6.6 2017-11-05 r4594 [FreeBSD 11.2-STABLE amd64] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Barracuda 3.5
Device Model:     ST4000DM004-2CV104
Serial Number:    ZFN1S9ML
LU WWN Device Id: 5 000c50 0b41fe252
Firmware Version: 0001
User Capacity:    4,000,787,030,016 bytes [4.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5425 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-3 T13/2161-D revision 5
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sat Jul 20 07:57:25 2019 PDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00)    Offline data collection activity
                    was never started.
                    Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0)    The previous self-test routine completed
                    without error or no self-test has ever
                    been run.
Total time to complete Offline
data collection:         (    0) seconds.
Offline data collection
capabilities:              (0x73) SMART execute Offline immediate.
                    Auto Offline data collection on/off support.
                    Suspend Offline collection upon new
                    command.
                    No Offline surface scan supported.
                    Self-test supported.
                    Conveyance Self-test supported.
                    Selective Self-test supported.
SMART capabilities:            (0x0003)    Saves SMART data before entering
                    power-saving mode.
                    Supports SMART auto save timer.
Error logging capability:        (0x01)    Error logging supported.
                    General Purpose Logging supported.
Short self-test routine
recommended polling time:      (   1) minutes.
Extended self-test routine
recommended polling time:      ( 473) minutes.
Conveyance self-test routine
recommended polling time:      (   2) minutes.
SCT capabilities:            (0x30a5)    SCT Status supported.
                    SCT Data Table supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   078   064   006    Pre-fail  Always       -       60632786
  3 Spin_Up_Time            0x0003   097   097   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       7
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   087   060   045    Pre-fail  Always       -       498465457
  9 Power_On_Hours          0x0032   096   096   000    Old_age   Always       -       3619 (226 96 0)
10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       7
183 Runtime_Bad_Block       0x0032   100   100   000    Old_age   Always       -       0
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0 0 0
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   069   064   040    Old_age   Always       -       31 (Min/Max 29/31)
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       149
193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       156
194 Temperature_Celsius     0x0022   031   040   000    Old_age   Always       -       31 (0 22 0 0 0)
195 Hardware_ECC_Recovered  0x001a   078   064   000    Old_age   Always       -       60632786
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       3612h+47m+12.551s
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       5641435307
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       8329146152

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 1
SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

[root@freenas /]# smartctl -a /dev/ada2
smartctl 6.6 2017-11-05 r4594 [FreeBSD 11.2-STABLE amd64] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Barracuda 3.5
Device Model:     ST4000DM004-2CV104
Serial Number:    ZFN1REFT
LU WWN Device Id: 5 000c50 0b41a8a48
Firmware Version: 0001
User Capacity:    4,000,787,030,016 bytes [4.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5425 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-3 T13/2161-D revision 5
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sat Jul 20 07:58:12 2019 PDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00)    Offline data collection activity
                    was never started.
                    Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0)    The previous self-test routine completed
                    without error or no self-test has ever
                    been run.
Total time to complete Offline
data collection:         (    0) seconds.
Offline data collection
capabilities:              (0x73) SMART execute Offline immediate.
                    Auto Offline data collection on/off support.
                    Suspend Offline collection upon new
                    command.
                    No Offline surface scan supported.
                    Self-test supported.
                    Conveyance Self-test supported.
                    Selective Self-test supported.
SMART capabilities:            (0x0003)    Saves SMART data before entering
                    power-saving mode.
                    Supports SMART auto save timer.
Error logging capability:        (0x01)    Error logging supported.
                    General Purpose Logging supported.
Short self-test routine
recommended polling time:      (   1) minutes.
Extended self-test routine
recommended polling time:      ( 493) minutes.
Conveyance self-test routine
recommended polling time:      (   2) minutes.
SCT capabilities:            (0x30a5)    SCT Status supported.
                    SCT Data Table supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   047   047   006    Pre-fail  Always       -       177585040
  3 Spin_Up_Time            0x0003   097   097   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       7
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   087   060   045    Pre-fail  Always       -       477452184
  9 Power_On_Hours          0x0032   096   096   000    Old_age   Always       -       3619 (6 219 0)
10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       7
183 Runtime_Bad_Block       0x0032   100   100   000    Old_age   Always       -       0
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   097   000    Old_age   Always       -       8 8 8
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   069   064   040    Old_age   Always       -       31 (Min/Max 29/31)
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       146
193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       156
194 Temperature_Celsius     0x0022   031   040   000    Old_age   Always       -       31 (0 22 0 0 0)
195 Hardware_ECC_Recovered  0x001a   082   064   000    Old_age   Always       -       177585040
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       3612h+27m+50.312s
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       5548990843
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       8241733319

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 1
SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
 
PhiloEpisteme
How were you able to come to this conclusion based on the output of the import command?
The output of zpool import suggests that you used striped vdevs rather than a mirror. You can tell from the indentation and labeling of the disks: because the disks sit at the same indent level and are not nested under a vdev name, each one is its own single-disk, striped vdev. Here is an example of zpool import on a 2-disk pool composed of one mirror vdev; note the difference.

Code:
freenas# zpool import
   pool: test
     id: 15627419264373463030
  state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        test                                            ONLINE
          mirror-0                                      ONLINE
            gptid/46688785-ab0c-11e9-b432-9c5c8ebc9d87  ONLINE
            gptid/48444ac2-ab0c-11e9-b432-9c5c8ebc9d87  ONLINE


Anyway, to me, it seems the OS sees the disk, but ZFS doesn't recognize it as part of the pool. Am I missing something? You said "if ada2 shows up", but it never didn't show up, other than as part of the pool.
This may be good news, or it may mean that the on-disk ZFS metadata required to attach that disk to its vdev (and thus the pool) has been damaged.

Here are the smart details, as requested:
Can you provide that same output for /dev/ada2, the disk that is having issues? Edit: never mind; I see you did provide the output.
 
PhiloEpisteme
SMART Self-test log structure revision number 1 No self-tests have been logged. [To run self-tests, use: smartctl -t]
It looks like you're not running SMART tests on your drives. Go ahead and run smartctl -t long /dev/ada2. That test will take a while to complete; once it is done, post the same output again for that drive.

Also, check out the User Guide on setting up automatic SMART tests.
 

j_sh
It looks like you're not running SMART tests on your drives. Go ahead and run smartctl -t long /dev/ada2. That test will take a while to complete; once it is done, post the same output again for that drive.

Great info, again. Thanks! I'll get back to you in approximately 493 minutes. ;)

I've read through some great info this morning. Really learning a lot about ZFS. I have a CE/CS background; reading through the details of how all this works is fun. Very impressed!

Can you explain what's going on with my "labels" in my original post?
 

j_sh
Go ahead and run smartctl -t long /dev/ada2. That test will take a while to complete; once it is done, post the same output again for that drive.

Here are the results. Looks like the drive is fine. I'm really scratching my head as to how this happened. :rolleyes:

Code:

[root@freenas /]# smartctl -a /dev/ada2
smartctl 6.6 2017-11-05 r4594 [FreeBSD 11.2-STABLE amd64] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Barracuda 3.5
Device Model:     ST4000DM004-2CV104
Serial Number:    ZFN1REFT
LU WWN Device Id: 5 000c50 0b41a8a48
Firmware Version: 0001
User Capacity:    4,000,787,030,016 bytes [4.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5425 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-3 T13/2161-D revision 5
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sun Jul 21 11:07:47 2019 PDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00)    Offline data collection activity
                    was never started.
                    Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0)    The previous self-test routine completed
                    without error or no self-test has ever 
                    been run.
Total time to complete Offline 
data collection:         (    0) seconds.
Offline data collection
capabilities:              (0x73) SMART execute Offline immediate.
                    Auto Offline data collection on/off support.
                    Suspend Offline collection upon new
                    command.
                    No Offline surface scan supported.
                    Self-test supported.
                    Conveyance Self-test supported.
                    Selective Self-test supported.
SMART capabilities:            (0x0003)    Saves SMART data before entering
                    power-saving mode.
                    Supports SMART auto save timer.
Error logging capability:        (0x01)    Error logging supported.
                    General Purpose Logging supported.
Short self-test routine 
recommended polling time:      (   1) minutes.
Extended self-test routine
recommended polling time:      ( 493) minutes.
Conveyance self-test routine
recommended polling time:      (   2) minutes.
SCT capabilities:            (0x30a5)    SCT Status supported.
                    SCT Data Table supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   047   047   006    Pre-fail  Always       -       177585040
  3 Spin_Up_Time            0x0003   097   097   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       7
  5 Reallocated_Sector_Ct   0x0033   095   095   010    Pre-fail  Always       -       14256
  7 Seek_Error_Rate         0x000f   087   060   045    Pre-fail  Always       -       479785220
  9 Power_On_Hours          0x0032   096   096   000    Old_age   Always       -       3646 (227 201 0)
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       7
183 Runtime_Bad_Block       0x0032   100   100   000    Old_age   Always       -       0
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   097   000    Old_age   Always       -       8 8 8
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   070   064   040    Old_age   Always       -       30 (Min/Max 29/34)
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       147
193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       157
194 Temperature_Celsius     0x0022   030   040   000    Old_age   Always       -       30 (0 22 0 0 0)
195 Hardware_ECC_Recovered  0x001a   082   064   000    Old_age   Always       -       177585040
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       3639h+36m+59.732s
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       5548990843
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       8241733319

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%      3629         -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
 
PhiloEpisteme
The SMART test saying there are no errors is misleading; it does not necessarily mean there are no issues with the drive.

The following line suggests your drive may be completely hosed.

Code:
  5 Reallocated_Sector_Ct   0x0033   095   095   010    Pre-fail  Always       -       14256


If you have backups, it may be time to get more drives and restore from backup. In the future, if you use mirrored or RAIDZ1/2/3 vdevs you'll be more resilient to these kinds of failures. Before you buy drives and set up your next pool, feel free to create a new thread and folks will be happy to help you pick a good setup within your budget to fit your use case.
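A PASSED verdict is easy to over-trust. One rough guard, sketched with plain awk (the sample line is copied from the smartctl output earlier in the thread; on a live system you would pipe in smartctl -a /dev/ada2 instead):

```shell
# Flag a nonzero raw Reallocated_Sector_Ct, which an overall "PASSED" can hide.
echo '  5 Reallocated_Sector_Ct   0x0033   095   095   010    Pre-fail  Always       -       14256' |
awk '$2 == "Reallocated_Sector_Ct" {
    if ($NF + 0 > 0)
        print "WARNING: " $NF " reallocated sectors -- plan to replace this drive"
    else
        print "OK: no reallocated sectors"
}'
```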

If I'm wrong and this drive is somehow recoverable hopefully someone will correct me. :)
 

j_sh
Good catch, @PhiloEpisteme! It's very misleading that a nonzero reallocated-sector count isn't reported as a potential failure.

Time to rebuild… I can now remember making a conscious decision to go with a basic stripe rather than redundancy. Most of this data is non-critical; the bits that are critical are backed up periodically. Minor losses, but no big deal.

I'm going to order two new drives and go with RAIDZ this time.

I realize I could open a can of worms here, so I will intentionally just ask your quick opinion on drives to use. This is mostly media, accessed by several users simultaneously. I've always used 4TB/5400RPM/6gbps with no issues (until now; ha!).
 
PhiloEpisteme
Good catch, @PhiloEpisteme! Very misleading that anything above zero isn't reported as a potential failure.
It certainly could be more clear. :)

I realize I could open a can of worms here, so I will intentionally just ask your quick opinion on drives to use. This is mostly media, accessed by several users simultaneously. I've always used 4TB/5400RPM/6gbps with no issues (until now; ha!).
Those sound fine. I _think_ many folks recommend RAIDZ2 at 6 drives per vdev for a good balance of performance and usable space. It is worth looking up how many drives to use for the various vdev types so you don't waste space. I use RAIDZ2 at 6 drives for the added security. You'll note too that I use an on-site backup, and I have off-site backups as well. I'd really rather not lose my data.
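For rough sizing, here's my own back-of-the-envelope arithmetic (not from any official guidance): a RAIDZ-p vdev of n disks gives roughly (n - p) disks' worth of usable space, before metadata and padding overhead.

```shell
# Approximate usable capacity of a single RAIDZ-p vdev: (n - p) * disk size.
# n and size_tb below match the 6-drive, 4 TB layout discussed in the thread.
n=6
size_tb=4
for p in 1 2 3; do
    echo "RAIDZ$p, $n x ${size_tb} TB: ~$(( (n - p) * size_tb )) TB usable"
done
```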

Def worth opening the can of worms though. Do some reading through the documentation and forums and feel free to post. Lots of folks with more experience than me have better advice.
 