Drive write errors - fixed after cable swap? (I am suspicious and want to make sure)

jolness1

Dabbler
Joined
May 21, 2020
Messages
29
Hey all!

I got an alert from my TrueNAS scale box that one of the drives had two write errors.
Of course I panicked a bit. I do have 2 spare drives (running a 5-drive array with raidz2, so I figure I can be ready immediately for a failure) and I have both a scheduled onsite and an offsite backup, but if I can avoid data loss, even better. I have it configured to run a SMART short test Monday-Saturday and then a long test weekly on Sunday. All of the values look fine (besides just now noticing my Power_On_Hours and the LifeTime column in the SMART self-test log don't match... I am unsure if that is typical or if I bought drives years ago that were more used than I thought? I swear they had next to 0 hours on them, but it's hard to know now). (Edit: the total number of hours lines up nearly with my roughly 3 years of usage, so I think they were new. I was pretty sure, but the mismatch concerned me.)
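For what it's worth, here's the quick sanity check I did on the Power_On_Hours raw value; it just assumes the drive has been powered on more or less 24/7 since purchase:

```shell
# Convert the raw value of SMART attribute 9 (Power_On_Hours) into years,
# assuming continuous 24/7 uptime since the drive was installed.
power_on_hours=28928   # raw value from the smartctl output below
years=$(awk -v h="$power_on_hours" 'BEGIN { printf "%.1f", h / 24 / 365.25 }')
echo "$power_on_hours hours is roughly $years years of 24/7 uptime"
```

That comes out to about 3.3 years, which is why I'm now fairly confident the drives were new when I bought them.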

So I shut down the machine after going through a troubleshooting guide I found here, and noticed the cable from my HBA was seated but had more wiggle than I would like, so I swapped it for another cable. On boot, the error was gone.

I have a couple of questions (and am also open to any and all insight about this):
1) Is it the scrub that would detect these errors, or the SMART test? (The error came about 45 minutes after both are scheduled to start.)
2) Does restarting normally make them go away? Would swapping to a new cable? It seems like it shouldn't, but I will admit I have no clue.
3) What can I check to be sure the issue was actually a fluke?

I am running TrueNAS-SCALE-22.12.4.2 and have been using this same build for 3-ish years now without ever having an issue like this. I believe I updated TrueNAS last week, after the previous Sunday night scrub/long test, so I'm wondering if that is the cause? Doubtful, but just spitballing as I go here.

Apologies for a bit of a rant; I did my best to organize my thoughts but am still a little panicked. I shouldn't be, since I have been cautious about data protection, but I just want to avoid any sort of unnecessary stress. Thanks in advance!

Edit: I ran "zpool status" and noticed a scrub hasn't run since October 15th; the threshold is 35 days, so I am guessing it wasn't a scrub that turned this up. Running one manually to check now.
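To double-check that threshold math (the "today" date here is only an assumed example for illustration, since I'm just checking whether we're still inside the 35-day window; GNU date assumed):

```shell
# Days elapsed since the last completed scrub, compared to the 35-day
# scrub-schedule threshold. Dates are hard-coded for illustration.
last_scrub="2023-10-15"   # finish date from the zpool status output below
today="2023-11-06"        # assumed example date, not the actual posting date
days=$(( ( $(date -d "$today" +%s) - $(date -d "$last_scrub" +%s) ) / 86400 ))
echo "$days days since the last scrub (threshold: 35)"
```

Well under 35 days, so the alert can't have come from a scheduled scrub.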

zpool status output:
Code:
zpool status
  pool: Tank
 state: ONLINE
  scan: scrub repaired 0B in 01:59:05 with 0 errors on Sun Oct 15 01:59:07 2023
config:

    NAME                                      STATE     READ WRITE CKSUM
    Tank                                      ONLINE       0     0     0
      raidz2-0                                ONLINE       0     0     0
        088ae6a4-bf0b-11ea-94a7-0cc47a7040ac  ONLINE       0     0     0
        097971df-bf0b-11ea-94a7-0cc47a7040ac  ONLINE       0     0     0
        0990b440-bf0b-11ea-94a7-0cc47a7040ac  ONLINE       0     0     0
        09a9b329-bf0b-11ea-94a7-0cc47a7040ac  ONLINE       0     0     0
        09c25c50-bf0b-11ea-94a7-0cc47a7040ac  ONLINE       0     0     0

errors: No known data errors


Here is the smart output:
Code:
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   100   100   016    Pre-fail  Always       -       0
  2 Throughput_Performance  0x0005   130   130   054    Pre-fail  Offline      -       100
  3 Spin_Up_Time            0x0007   167   167   024    Pre-fail  Always       -       409 (Average 380)
  4 Start_Stop_Count        0x0012   100   100   000    Old_age   Always       -       50
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000b   100   100   067    Pre-fail  Always       -       0
  8 Seek_Time_Performance   0x0005   128   128   020    Pre-fail  Offline      -       18
  9 Power_On_Hours          0x0012   096   096   000    Old_age   Always       -       28928
 10 Spin_Retry_Count        0x0013   100   100   060    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       50
 22 Helium_Level            0x0023   100   100   025    Pre-fail  Always       -       100
192 Power-Off_Retract_Count 0x0032   099   099   000    Old_age   Always       -       1212
193 Load_Cycle_Count        0x0012   099   099   000    Old_age   Always       -       1212
194 Temperature_Celsius     0x0002   206   206   000    Old_age   Always       -       29 (Min/Max 17/40)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0022   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0008   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x000a   200   200   000    Old_age   Always       -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%     12038         -
# 2  Extended offline    Completed without error       00%     12035         -
# 3  Short offline       Completed without error       00%     11990         -
# 4  Short offline       Completed without error       00%     11966         -
# 5  Short offline       Completed without error       00%     11942         -
# 6  Short offline       Completed without error       00%     11918         -
# 7  Short offline       Completed without error       00%     11894         -
# 8  Short offline       Completed without error       00%     11870         -
# 9  Extended offline    Completed without error       00%     11867         -
#10  Short offline       Completed without error       00%     11822         -
#11  Short offline       Completed without error       00%     11798         -
#12  Short offline       Completed without error       00%     11774         -
#13  Short offline       Completed without error       00%     11750         -
#14  Short offline       Completed without error       00%     11726         -
#15  Short offline       Completed without error       00%     11702         -
#16  Extended offline    Completed without error       00%     11698         -
#17  Short offline       Completed without error       00%     11653         -
#18  Short offline       Completed without error       00%     11629         -
#19  Short offline       Completed without error       00%     11605         -
#20  Short offline       Completed without error       00%     11581         -
#21  Short offline       Completed without error       00%     11557         -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
ZFS won't necessarily know about SMART errors, and SMART won't necessarily know about ZFS errors. The errors could also come just from doing I/O, so "neither" is a valid answer as well. Heck, they can come from bad memory too.

Nothing wrong with doing a scrub. You can have drive errors but no ZFS errors (still important to look into), but when you get ZFS errors they should definitely be looked into, as they are coming from somewhere. And yes, swapping a cable (or re-seating it) can make them stop.

SMART shows no bad sectors or other errors; that's good.

If the scrub goes well, then you're good to go. I wouldn't worry about it at that point. The scrub will check your data.
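If you want something to keep an eye on while it runs, a sketch like this flags any nonzero READ/WRITE/CKSUM counter in `zpool status` output. The sample output and the `sda` device name are made up for illustration; in practice you'd pipe in the real `zpool status Tank`:

```shell
# Stand-in for `zpool status Tank` output, so the snippet is self-contained.
zpool_status_sample() {
cat <<'EOF'
        NAME        STATE     READ WRITE CKSUM
        Tank        ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            sda     ONLINE       0     2     0
EOF
}

# Print only the device lines where READ + WRITE + CKSUM is nonzero.
# The header line is skipped because its second field is "STATE".
zpool_status_sample |
  awk '$2 ~ /ONLINE|DEGRADED|FAULTED/ && ($3 + $4 + $5) > 0 {
         print $1, "READ=" $3, "WRITE=" $4, "CKSUM=" $5
       }'
```

No output means no counters have ticked up since the last reboot or `zpool clear`.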
 

jolness1

Dabbler
Joined
May 21, 2020
Messages
29
ZFS won't necessarily know about SMART errors, and SMART won't necessarily know about ZFS errors. The errors could also come just from doing I/O, so "neither" is a valid answer as well. Heck, they can come from bad memory too.

Nothing wrong with doing a scrub. You can have drive errors but no ZFS errors (still important to look into), but when you get ZFS errors they should definitely be looked into, as they are coming from somewhere. And yes, swapping a cable (or re-seating it) can make them stop.

SMART shows no bad sectors or other errors; that's good.

If the scrub goes well, then you're good to go. I wouldn't worry about it at that point. The scrub will check your data.

This is roughly the understanding I had, but I am thankful for some clarification. I have done a bit more reading since, and I think it was a loose cable; I haven't had any errors since swapping it, but I panicked and figured I would lean on the community here a bit. So far the scrub has turned up no errors, so I think I have my answer, but the side panels are still off my case in case I have to go back in.

Appreciate the response! I am feeling less panicked now than I was earlier. If the scrub comes back clean I will just write it off as the cable, and if it pops up again on the same (or even a different) drive I will look into it more. Thanks again.
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
Let me just say I'm glad to see someone take errors seriously for once! We get posters who come here and say they've had drive errors for the last 6 months, ignored them, and then wonder why they lost their pool.
 

jolness1

Dabbler
Joined
May 21, 2020
Messages
29
Let me just say I'm glad to see someone take errors seriously for once! We get posters who come here and say they've had drive errors for the last 6 months, ignored them, and then wonder why they lost their pool.
As a software engineer, I figure that if someone programmed it to throw an error, there is a reason; from personal experience, users often ignore them. I would rather investigate a false alarm than have the array explode and deal with that mess.
 