Is the disk dying (ATA error count increased from 0 to 1)?

AgentMC · Cadet · Joined Mar 13, 2020 · Messages: 9
(I read other related topics but still want the community to validate my assumption)

Hi all,

I have two Toshiba drives in a mirror that have been running for about two years. After last night's scrub (I run them weekly), I got this:
New alerts:
* Device: /dev/ada0, ATA error count increased from 0 to 1.

The scrub itself completed without an error.

/dev/ada1, by contrast, has a completely clean SMART log: zero errors.

Here's the smartctl -a /dev/ada0 log:


Code:
smartctl 7.0 2018-12-30 r4883 [FreeBSD 11.3-RELEASE-p5 amd64] (local build)
Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Toshiba L200 (SMR)
Device Model:     TOSHIBA HDWL120
Serial Number:    10JBPI0JT
LU WWN Device Id: 5 000039 9c26857b2
Firmware Version: JT000A
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5400 rpm
Form Factor:      2.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-3 T13/2161-D revision 5
SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sun Feb 20 13:01:49 2022 GMT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00)    Offline data collection activity
                    was never started.
                    Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0)    The previous self-test routine completed
                    without error or no self-test has ever
                    been run.
Total time to complete Offline
data collection:         (  120) seconds.
Offline data collection
capabilities:              (0x5b) SMART execute Offline immediate.
                    Auto Offline data collection on/off support.
                    Suspend Offline collection upon new
                    command.
                    Offline surface scan supported.
                    Self-test supported.
                    No Conveyance Self-test supported.
                    Selective Self-test supported.
SMART capabilities:            (0x0003)    Saves SMART data before entering
                    power-saving mode.
                    Supports SMART auto save timer.
Error logging capability:        (0x01)    Error logging supported.
                    General Purpose Logging supported.
Short self-test routine
recommended polling time:      (   2) minutes.
Extended self-test routine
recommended polling time:      ( 336) minutes.
SCT capabilities:            (0x003d)    SCT Status supported.
                    SCT Error Recovery Control supported.
                    SCT Feature Control supported.
                    SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   100   100   050    Pre-fail  Always       -       0
  2 Throughput_Performance  0x0005   100   100   050    Pre-fail  Offline      -       0
  3 Spin_Up_Time            0x0027   100   100   001    Pre-fail  Always       -       1579
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       27
  5 Reallocated_Sector_Ct   0x0033   100   100   050    Pre-fail  Always       -       22            <-------!!!
  7 Seek_Error_Rate         0x000b   100   100   050    Pre-fail  Always       -       0
  8 Seek_Time_Performance   0x0005   100   100   050    Pre-fail  Offline      -       0
  9 Power_On_Hours          0x0032   058   058   000    Old_age   Always       -       17019
 10 Spin_Retry_Count        0x0033   100   100   030    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       26
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       6
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       5
193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       9386
194 Temperature_Celsius     0x0022   100   100   000    Old_age   Always       -       38 (Min/Max 16/68)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       22            <-------!!!
197 Current_Pending_Sector  0x0032   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
220 Disk_Shift              0x0002   100   100   000    Old_age   Always       -       0
222 Loaded_Hours            0x0032   058   058   000    Old_age   Always       -       17000
223 Load_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
224 Load_Friction           0x0022   100   100   000    Old_age   Always       -       0
226 Load-in_Time            0x0026   100   100   000    Old_age   Always       -       266
240 Head_Flying_Hours       0x0001   100   100   001    Pre-fail  Offline      -       0

SMART Error Log Version: 1
ATA Error Count: 3                                                                                    <-------!!!
    CR = Command Register [HEX]
    FR = Features Register [HEX]
    SC = Sector Count Register [HEX]
    SN = Sector Number Register [HEX]
    CL = Cylinder Low Register [HEX]
    CH = Cylinder High Register [HEX]
    DH = Device/Head Register [HEX]
    DC = Device Command Register [HEX]
    ER = Error register [HEX]
    ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 3 occurred at disk power-on lifetime: 17011 hours (708 days + 19 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 41 f8 f8 16 ab 40  Error: UNC at LBA = 0x00ab16f8 = 11212536

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 00 f8 90 16 ab 40 00  20d+04:55:10.169  READ FPDMA QUEUED
  61 08 f0 f8 7b 16 40 00  20d+04:55:10.169  WRITE FPDMA QUEUED
  61 08 e8 68 17 b3 40 00  20d+04:55:10.164  WRITE FPDMA QUEUED
  61 10 e0 b0 13 6b 40 00  20d+04:55:10.164  WRITE FPDMA QUEUED
  61 30 d8 90 dd 56 40 00  20d+04:55:10.164  WRITE FPDMA QUEUED

Error 2 occurred at disk power-on lifetime: 17010 hours (708 days + 18 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 41 e8 38 e6 61 40  Error: UNC at LBA = 0x0061e638 = 6415928

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 00 f8 90 e7 61 40 00  20d+04:40:45.908  READ FPDMA QUEUED
  60 00 f0 90 e6 61 40 00  20d+04:40:45.908  READ FPDMA QUEUED
  60 00 e8 90 e5 61 40 00  20d+04:40:45.908  READ FPDMA QUEUED
  60 00 e0 90 e4 61 40 00  20d+04:40:45.908  READ FPDMA QUEUED
  60 00 d8 90 e3 61 40 00  20d+04:40:45.908  READ FPDMA QUEUED

Error 1 occurred at disk power-on lifetime: 17010 hours (708 days + 18 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 41 c8 60 e0 19 40  Error: UNC at LBA = 0x0019e060 = 1695840

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 00 f8 90 e4 19 40 00  20d+04:23:45.980  READ FPDMA QUEUED
  60 00 f0 90 e3 19 40 00  20d+04:23:45.980  READ FPDMA QUEUED
  60 00 e8 90 e2 19 40 00  20d+04:23:45.980  READ FPDMA QUEUED
  61 08 e0 90 21 14 40 00  20d+04:23:45.980  WRITE FPDMA QUEUED
  61 08 d8 98 21 14 40 00  20d+04:23:45.980  WRITE FPDMA QUEUED

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.



You can see that all three errors occurred within the span of a single scrub.
I have now set up regular short SMART self-tests to keep monitoring the state of the drives (sketch below).

My assumption is that, since this is the first time I am seeing this, I'll keep an eye on the reallocated sectors and any further errors, but for now no action is necessary, at least until the normalized SMART values for those reallocation attributes start to drop. What do you think, guys?
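For reference, here's roughly how I'm running and checking the short tests from the shell. Treat it as a sketch only: the device name, schedule, and smartctl path are just examples, and the TrueNAS web UI's S.M.A.R.T. test tasks are the usual way to schedule these.

Code:
# Kick off a short self-test by hand (about 2 minutes on this drive):
smartctl -t short /dev/ada0

# Review the self-test log afterwards:
smartctl -l selftest /dev/ada0

# Example root crontab entry to repeat the short test nightly at 03:00:
0 3 * * * /usr/local/sbin/smartctl -t short /dev/ada0 > /dev/null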
 

ChrisRJ · Wizard · Joined Oct 23, 2020 · Messages: 1,919
There is a high chance that the disk is dying. For many years, disks have had a lot of protection mechanisms in place to avoid data loss; for example, they internally re-map bad sectors without anything showing up to the outside. When issues do become visible, it usually means that the internal pool of spare sectors for re-mapping is exhausted. In other words: there has been a problem for quite some time, it was just concealed.

If it were my disk, I would consider it dead and get a replacement immediately. And of course ensure that I have a current backup.
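If you want to keep an eye on those counters directly, something like this works (device name assumed; adjust to your own):

Code:
# Raw and normalized values of the remapping-related attributes;
# rising raw values on 5/196/197 mean the drive keeps finding bad sectors:
smartctl -A /dev/ada0 | egrep 'Reallocated_Sector_Ct|Reallocated_Event_Count|Current_Pending_Sector|Offline_Uncorrectable'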
 
Joined Oct 22, 2019 · Messages: 3,641
What @ChrisRJ said, but also...

The scrub itself completed without an error.
This doesn't mean you're in the safe zone hardware-wise. ZFS is just doing its thing: verifying checksums and automatically repairing what it can based on your vdev layout.

However, you might be in a danger zone (very likely, even) if your vdev is compromised because one of your drives is showing early signs of failing beyond what it can feasibly self-repair. Once that happens, your pool is essentially operating atop a "striped" vdev with no redundancy. Your data might be safe in the meantime, but it's on precarious ground.

Both of these combinations are entirely possible:
  • SMART passes its internal tests on all drives, yet a ZFS scrub still finds corruption in some records
  • SMART fails its internal tests on one or more drives, yet a ZFS scrub finds no corruption at all
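To see the ZFS side of that picture, something along these lines helps (the pool name is just an example; substitute your own):

Code:
# Per-device READ/WRITE/CKSUM error counters, scrub results,
# and any files with permanent errors:
zpool status -v tank

# Once the cause is understood and fixed, reset the counters
# so any new errors stand out:
zpool clear tank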
 

AgentMC · Cadet · Joined Mar 13, 2020 · Messages: 9
If it were my disk, I would consider it dead and get a replacement immediately. And of course ensure that I have a current backup.
Okay, very good suggestion. Today (two days later) the reallocation count has increased to 66.
 
Joined Oct 22, 2019 · Messages: 3,641
Okay, very good suggestion. Today (two days later) the reallocation count has increased to 66.
I would replace the drive and resilver ASAP. (Not to mention make sure I have an up-to-date backup of everything.)

Slow and steady: do the replacement and resilvering process step by step, making sure you know exactly which drive you're dealing with.
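Roughly, the command-line version looks like this. It's only a sketch: the pool and device names are placeholders, you should use the exact vdev labels shown by zpool status, and the TrueNAS web UI's replace workflow does the same thing with fewer chances of grabbing the wrong disk.

Code:
# 1. Confirm which physical disk ada0 is, by serial number:
smartctl -i /dev/ada0 | grep Serial

# 2. Take the failing disk offline (pool/device names are examples):
zpool offline tank ada0

# 3. Swap in the new disk, then start the resilver onto it:
zpool replace tank ada0 ada2

# 4. Watch the resilver progress until it completes:
zpool status tank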
 

Etorix · Wizard · Joined Dec 30, 2020 · Messages: 2,134
Okay, very good suggestion. Today (two days later) the reallocation count has increased to 66.
The increasing counter is definitive evidence that the drive is unhealthy.
 