
Stripe volume suddenly degraded - how is this even possible?

n8lbv

Newbie
Joined
Sep 12, 2017
Messages
58
I have a RAID0 volume two drives striped for speed and space.
I am suddenly getting alerts that the volume is "degraded".

One disk shows online and the other disk shows "degraded".

I don't get it.
A RAID0 can't be "degraded" it's either good or it's gone.

The disk itself does not show any problems or SMART errors/issues.

And of course the volume works just fine.

Any thoughts on what/why this is and how do I fix it?

I don't care about the data on the volume.
It is temporary scratchpad storage space for nothing important.

Thanks!

root@freenas[~]# smartctl -a /dev/ada0
smartctl 7.0 2018-12-30 r4883 [FreeBSD 11.3-RELEASE-p5 amd64] (local build)
Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model: ST2000DM008-2FR102
Serial Number: ********
LU WWN Device Id: 5 000c50 0c3ef6800
Firmware Version: 0001
User Capacity: 2,000,398,934,016 bytes [2.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 7200 rpm
Form Factor: 3.5 inches
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: ACS-3 T13/2161-D revision 5
SATA Version is: SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is: Fri Mar 13 11:16:16 2020 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 0) seconds.
Offline data collection
capabilities: (0x73) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
No Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 1) minutes.
Extended self-test routine
recommended polling time: ( 199) minutes.
Conveyance self-test routine
recommended polling time: ( 2) minutes.
SCT capabilities: (0x30a5) SCT Status supported.
SCT Data Table supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000f 082 064 006 Pre-fail Always - 172023192
3 Spin_Up_Time 0x0003 099 099 000 Pre-fail Always - 0
4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 6
5 Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 0
7 Seek_Error_Rate 0x000f 072 061 045 Pre-fail Always - 15627525
9 Power_On_Hours 0x0032 099 099 000 Old_age Always - 1454 (85 185 0)
10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0
12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 6
183 Runtime_Bad_Block 0x0032 100 100 000 Old_age Always - 0
184 End-to-End_Error 0x0032 100 100 099 Old_age Always - 0
187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0
188 Command_Timeout 0x0032 100 100 000 Old_age Always - 0
189 High_Fly_Writes 0x003a 100 100 000 Old_age Always - 0
190 Airflow_Temperature_Cel 0x0022 068 050 040 Old_age Always - 32 (Min/Max 23/36)
191 G-Sense_Error_Rate 0x0032 100 100 000 Old_age Always - 0
192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 54
193 Load_Cycle_Count 0x0032 100 100 000 Old_age Always - 235
194 Temperature_Celsius 0x0022 032 050 000 Old_age Always - 32 (0 6 0 0 0)
195 Hardware_ECC_Recovered 0x001a 082 064 000 Old_age Always - 172023192
197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0
240 Head_Flying_Hours 0x0000 100 253 000 Old_age Offline - 1417 (24 242 0)
241 Total_LBAs_Written 0x0000 100 253 000 Old_age Offline - 15673151314
242 Total_LBAs_Read 0x0000 100 253 000 Old_age Offline - 8239426854

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed without error 00% 1220 -
# 2 Extended offline Completed without error 00% 22 -
# 3 Short offline Completed without error 00% 16 -

SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
 

danb35

FreeNAS Wizard
Joined
Aug 16, 2011
Messages
10,890
A RAID0 can't be "degraded" it's either good or it's gone.
You don't have a RAID0, you have a ZFS stripe. And data in there can certainly be corrupted without a hard disk failure. zpool status -v may give you more information.
 

n8lbv

Newbie
Joined
Sep 12, 2017
Messages
58
Thanks,
I have never experienced this before unless a drive was bad or failing.
The drives look and test good.

I have no idea where the corruption is, what caused it, or how to recover from it other than rebuilding the pool from scratch.
And it appears to be working fine.
 

n8lbv

Newbie
Joined
Sep 12, 2017
Messages
58
I'm not any further ahead.
I don't know how the error occurred, how to fix it, or how to properly deal with it.
Meanwhile both drives test fine and all of the data appears to be intact and accessible.


zpool status Seagate22
pool: Seagate22
state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
see: http://illumos.org/msg/ZFS-8000-9P
scan: scrub repaired 0 in 0 days 00:51:29 with 0 errors on Sat Mar 14 11:08:59 2020
config:

NAME STATE READ WRITE CKSUM
Seagate22 DEGRADED 0 0 40
gptid/a04359cf-389c-11ea-9644-001d092ae49d DEGRADED 0 0 80 too many errors
gptid/a1d3437c-389c-11ea-9644-001d092ae49d ONLINE 0 0 0

errors: No known data errors
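For anyone hitting the same thing: the CKSUM column is the tell here. A quick way to pull out just the devices with checksum errors (a sketch that feeds a captured copy of the status output above through awk; in practice you would pipe `zpool status Seagate22` in directly):

```shell
#!/bin/sh
# List pool members with a non-zero CKSUM count.
# Sketch: uses a captured copy of the 'zpool status' output above;
# normally you would pipe 'zpool status Seagate22' straight into awk.
status_output='NAME                                         STATE     READ WRITE CKSUM
Seagate22                                    DEGRADED     0     0    40
  gptid/a04359cf-389c-11ea-9644-001d092ae49d DEGRADED     0     0    80  too many errors
  gptid/a1d3437c-389c-11ea-9644-001d092ae49d ONLINE       0     0     0'

# Columns: NAME STATE READ WRITE CKSUM; print any device line with CKSUM > 0.
printf '%s\n' "$status_output" | awk '$1 ~ /^gptid/ && $5 + 0 > 0 {print $1, "CKSUM=" $5}'
```

Read errors and write errors point at the drive or the link; checksum errors with zero read/write errors mean the drive returned data that failed ZFS's checksum, which is why SMART can still look perfectly clean.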
 

n8lbv

Newbie
Joined
Sep 12, 2017
Messages
58
I figured it out myself.
People are always quick to point out stuff like "It's not RAID0"
Or "you shouldn't be using a stripe you're going to die!"
SMH.

I simply did a ZFS clear and scrub of the volume.

Still didn't learn anything.
No idea how the problem developed or what caused it.
The drives themselves are healthy, the server is healthy, and there were no power or other interruptions I'm aware of that would have
caused an "error" on only one drive of the array.

Cheers.
 

sretalla

FreeNAS Expert
Joined
Jan 1, 2016
Messages
1,916
No idea on how the problem developed or what caused it.
The errors you were showing were Checksum, so this can indicate cabling or connectivity to that disk. If you came up clean on a scrub, looks like no permanent issues.

I would suggest trying to re-seat the cable to that disk and having a think about power to the system (how old is your power supply?). Also think about anything that could have caused a bump sufficient to move the cable into a poor connection temporarily.
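The cable theory is easy to sanity-check against SMART: UDMA_CRC_Error_Count increments on transfer errors between the drive and the controller, which is the classic signature of a marginal SATA cable. A sketch that pulls that attribute out (here run against a line captured from the smartctl output earlier in the thread; normally you would pipe `smartctl -A /dev/ada0` in directly):

```shell
#!/bin/sh
# Extract the raw UDMA CRC error count; non-zero values point at cabling.
# Sketch: uses an attribute line captured from the smartctl output earlier
# in this thread; in practice, pipe 'smartctl -A /dev/ada0' in instead.
smart_line='199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0'

crc=$(printf '%s\n' "$smart_line" | awk '$2 == "UDMA_CRC_Error_Count" {print $NF}')
if [ "$crc" -eq 0 ]; then
    echo "UDMA CRC errors: $crc (SATA link looks clean)"
else
    echo "UDMA CRC errors: $crc (re-seat or replace the SATA cable)"
fi
```

Here the count is 0, which fits the "not a cabling issue as far as I can tell" conclusion, though a flaky power connection would not show up in this counter.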

Of course all of that could just be cosmic radiation that targeted your house... did you upset the sun gods? lol.
 

n8lbv

Newbie
Joined
Sep 12, 2017
Messages
58
Thanks!!
Yep, checked all of that of course.
Not a cabling issue far as I can tell.
Definitely a neutrino event!
Yes!
 

kdragon75

FreeNAS Expert
Joined
Aug 7, 2016
Messages
2,449
Thanks!!
Yep, checked all of that of course.
Not a cabling issue far as I can tell.
Definitely a neutrino event!
Yes!
Neutrinos would not affect your pool. That's literally why they're so hard to detect. It would be far more likely to be gamma rays or X-rays.
 

pschatz100

FreeNAS Guru
Joined
Mar 30, 2014
Messages
860
A couple of years ago, I had an intermittent disk problem that was impossible to diagnose. I replugged SATA cables, moved the disks around in the case, even replaced the power supply. Just like your situation, the disks checked OK. Nothing helped and I began to think I had a motherboard problem. Out of sheer frustration, I decided to replace every SATA and power cable and Bingo!, the problem went away. Turns out my problem was caused by a bad molex to SATA power cable adapter. I don't know why it failed because it is just a cable adapter - but after replacing that part, my problem went away.
 