alterfritz72
Dabbler
- Joined: Dec 6, 2018
- Messages: 11
Hi,
quite new to FreeNAS here. I built a FreeNAS server on commodity hardware; the dmesg output is attached.
I built a pool of 3x10 disks (three 10-disk RAIDZ2 vdevs).
I was copying some data overnight and all seemed fine. In the morning the pool was gone, and there were lots of errors in /var/log/messages.
It seems to me that the disks are OK: first, they are quite new, and second, I removed one and ran SMART tests on it in a Windows PC as well. There are no permanent errors on those disks, such as bad blocks.
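On the FreeNAS box itself, the checks I ran were along these lines (just a sketch; ada7 stands in for whichever disk is being tested):

smartctl -t short /dev/ada7      # queue a short SMART self-test
smartctl -l selftest /dev/ada7   # read the self-test log a few minutes later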
However, every time I try a zpool import, it hangs like this:
Oct 27 10:48:23 freenas1 nfsd: can't register svc name
Oct 27 10:48:23 freenas1 ntpd[1370]: restrict: ignoring line 10, address/host '0.freebsd.pool.ntp.org' unusable.
Oct 27 10:48:23 freenas1 ntpd[1370]: restrict: ignoring line 11, address/host '1.freebsd.pool.ntp.org' unusable.
Oct 27 10:48:23 freenas1 ntpd[1370]: restrict: ignoring line 12, address/host '2.freebsd.pool.ntp.org' unusable.
Oct 27 10:48:24 freenas1 kernel: re0: link state changed to UP
Oct 27 10:48:24 freenas1 kernel: re0: link state changed to UP
Oct 27 10:48:24 freenas1 nfsrv_cache_session: no session
**** here I start the import from another shell ****
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=3741448112455685757
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=4664470457175269676
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=8685132275913437468
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=8231545775239736065
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=4194145620947548577
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=6395447668334150041
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=3247671384774872601
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=16912913949041146380
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=4529362423629436642
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=12884665542282722123
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=12366843741750379281
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=18388331534813110478
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=17314746444696464833
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=2004171900423069180
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=1756946791324882113
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=13520873478109463645
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=8800604598952879030
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=9131285662896745831
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=4922909577835814258
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=12937972062917121217
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=12259229670344092347
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=4932022763362024582
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=14492331186460395707
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=5139867809909229828
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=3782388242098230347
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=14154430933037967269
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=13595744289878905008
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=4950188485396152737
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=14828035688722699969
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=3643098206646759999
Oct 27 11:01:23 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=3741448112455685757
Oct 27 11:01:31 freenas1 ahcich3: Timeout on slot 7 port 0
Oct 27 11:01:31 freenas1 ahcich3: is 00000000 cs 00000080 ss 00000000 rs 00000080 tfd 58 serr 00000000 cmd 0004c717
Oct 27 11:01:31 freenas1 ahcich3: Error while READ LOG EXT
Oct 27 11:01:31 freenas1 (ada7:ahcich3:0:0:0): WRITE_FPDMA_QUEUED. ACB: 61 10 90 02 40 40 00 00 00 00 00 00
Oct 27 11:01:31 freenas1 (ada7:ahcich3:0:0:0): CAM status: ATA Status Error
Oct 27 11:01:31 freenas1 (ada7:ahcich3:0:0:0): ATA status: 00 ()
Oct 27 11:01:31 freenas1 (ada7:ahcich3:0:0:0): RES: 00 00 00 00 00 00 00 00 00 00 00
Oct 27 11:01:31 freenas1 (ada7:ahcich3:0:0:0): Retrying command
Oct 27 11:01:31 freenas1 (ada7:ahcich3:0:0:0): READ_FPDMA_QUEUED. ACB: 60 01 86 be c0 40 d1 01 00 00 00 00
Oct 27 11:01:31 freenas1 (ada7:ahcich3:0:0:0): CAM status: ATA Status Error
Oct 27 11:01:31 freenas1 (ada7:ahcich3:0:0:0): ATA status: 00 ()
Oct 27 11:01:31 freenas1 (ada7:ahcich3:0:0:0): RES: 00 00 00 00 00 00 00 00 00 00 00
Oct 27 11:01:31 freenas1 (ada7:ahcich3:0:0:0): Retrying command
Oct 27 11:01:31 freenas1 (ada7:ahcich3:0:0:0): WRITE_FPDMA_QUEUED. ACB: 61 10 90 ba c0 40 d1 01 00 00 00 00
Oct 27 11:01:31 freenas1 (ada7:ahcich3:0:0:0): CAM status: ATA Status Error
Oct 27 11:01:31 freenas1 (ada7:ahcich3:0:0:0): ATA status: 00 ()
Oct 27 11:01:31 freenas1 (ada7:ahcich3:0:0:0): RES: 00 00 00 00 00 00 00 00 00 00 00
Oct 27 11:01:31 freenas1 (ada7:ahcich3:0:0:0): Retrying command
Oct 27 11:01:31 freenas1 (ada7:ahcich3:0:0:0): WRITE_FPDMA_QUEUED. ACB: 61 10 90 bc c0 40 d1 01 00 00 00 00
Oct 27 11:01:31 freenas1 (ada7:ahcich3:0:0:0): CAM status: ATA Status Error
Oct 27 11:01:31 freenas1 (ada7:ahcich3:0:0:0): ATA status: 00 ()
Oct 27 11:01:31 freenas1 (ada7:ahcich3:0:0:0): RES: 00 00 00 00 00 00 00 00 00 00 00
Oct 27 11:01:31 freenas1 (ada7:ahcich3:0:0:0): Retrying command
Oct 27 11:01:31 freenas1 (aprobe0:ahcich3:0:0:0): SETFEATURES SET TRANSFER MODE. ACB: ef 03 00 00 00 40 00 00 00 00 46 00
Oct 27 11:01:31 freenas1 (aprobe0:ahcich3:0:0:0): CAM status: ATA Status Error
Oct 27 11:01:31 freenas1 (aprobe0:ahcich3:0:0:0): ATA status: 61 (DRDY DF ERR), error: 04 (ABRT )
Oct 27 11:01:31 freenas1 (aprobe0:ahcich3:0:0:0): RES: 61 04 00 00 00 40 00 00 00 46 00
Oct 27 11:01:31 freenas1 (aprobe0:ahcich3:0:0:0): Retrying command
Oct 27 11:01:31 freenas1 (aprobe0:ahcich3:0:0:0): SETFEATURES SET TRANSFER MODE. ACB: ef 03 00 00 00 40 00 00 00 00 46 00
Oct 27 11:01:31 freenas1 (aprobe0:ahcich3:0:0:0): CAM status: ATA Status Error
Oct 27 11:01:31 freenas1 (aprobe0:ahcich3:0:0:0): ATA status: 61 (DRDY DF ERR), error: 04 (ABRT )
Oct 27 11:01:31 freenas1 (aprobe0:ahcich3:0:0:0): RES: 61 04 00 00 00 40 00 00 00 46 00
Oct 27 11:01:31 freenas1 (aprobe0:ahcich3:0:0:0): Error 5, Retries exhausted
Oct 27 11:01:31 freenas1 ada7 at ahcich3 bus 0 scbus3 target 0 lun 0
Oct 27 11:01:31 freenas1 ada7: <WDC WD40EFAX-68JH4N0 82.00A82> s/n WD-WX32D203U3JT detached
Oct 27 11:01:31 freenas1 (ada7:ahcich3:0:0:0): SETFEATURES ENABLE RCACHE. ACB: ef aa 00 00 00 40 00 00 00 00 00 00
Oct 27 11:01:31 freenas1 (ada7:ahcich3:0:0:0): CAM status: ATA Status Error
Oct 27 11:01:31 freenas1 (ada7:ahcich3:0:0:0): ATA status: 61 (DRDY DF ERR), error: 04 (ABRT )
Oct 27 11:01:31 freenas1 (ada7:ahcich3:0:0:0): RES: 61 04 00 00 00 40 00 00 00 00 00
Oct 27 11:01:31 freenas1 (ada7:ahcich3:0:0:0): Error 5, Periph was invalidated
Oct 27 11:01:31 freenas1 (ada7:ahcich3:0:0:0): SETFEATURES ENABLE WCACHE. ACB: ef 02 00 00 00 40 00 00 00 00 00 00
Oct 27 11:01:31 freenas1 (ada7:ahcich3:0:0:0): CAM status: ATA Status Error
Oct 27 11:01:31 freenas1 (ada7:ahcich3:0:0:0): ATA status: 61 (DRDY DF ERR), error: 04 (ABRT )
Oct 27 11:01:31 freenas1 (ada7:ahcich3:0:0:0): RES: 61 04 00 00 00 40 00 00 00 00 00
Oct 27 11:01:31 freenas1 (ada7:ahcich3:0:0:0): Error 5, Periph was invalidated
Oct 27 11:01:39 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=3741448112455685757
Oct 27 11:01:39 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=4664470457175269676
Oct 27 11:01:39 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=8685132275913437468
Oct 27 11:01:39 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=8231545775239736065
Oct 27 11:01:39 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=4194145620947548577
Oct 27 11:01:39 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=6395447668334150041
Oct 27 11:01:39 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=3247671384774872601
Oct 27 11:01:39 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=16912913949041146380
Oct 27 11:01:39 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=4529362423629436642
Oct 27 11:01:39 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=12884665542282722123
Oct 27 11:01:39 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=12366843741750379281
Oct 27 11:01:39 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=18388331534813110478
Oct 27 11:01:39 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=18388331534813110478
Oct 27 11:01:39 freenas1 ZFS: vdev state changed, pool_guid=6764645390058751664 vdev_guid=17314746444696464833
Oct 27 11:01:39 freenas1 ahcich3: Timeout on slot 17 port 0
Oct 27 11:01:39 freenas1 ahcich3: is 00000000 cs 00020000 ss 00000000 rs 00020000 tfd 58 serr 00000000 cmd 0004d117
Oct 27 11:01:39 freenas1 ahcich3: Error while READ LOG EXT
Oct 27 11:01:39 freenas1 (ada7:ahcich3:0:0:0): WRITE_FPDMA_QUEUED. ACB: 61 10 90 02 40 40 00 00 00 00 00 00
Oct 27 11:01:39 freenas1 (ada7:ahcich3:0:0:0): CAM status: ATA Status Error
Oct 27 11:01:39 freenas1 (ada7:ahcich3:0:0:0): ATA status: 00 ()
Oct 27 11:01:39 freenas1 (ada7:ahcich3:0:0:0): RES: 00 00 00 00 00 00 00 00 00 00 00
Oct 27 11:01:39 freenas1 (ada7:ahcich3:0:0:0): Error 5, Periph was invalidated
Oct 27 11:01:39 freenas1 (ada7:ahcich3:0:0:0): READ_FPDMA_QUEUED. ACB: 60 01 86 be c0 40 d1 01 00 00 00 00
Oct 27 11:01:39 freenas1 (ada7:ahcich3:0:0:0): CAM status: ATA Status Error
Oct 27 11:01:39 freenas1 (ada7:ahcich3:0:0:0): ATA status: 00 ()
Oct 27 11:01:39 freenas1 (ada7:ahcich3:0:0:0): RES: 00 00 00 00 00 00 00 00 00 00 00
Oct 27 11:01:39 freenas1 (ada7:ahcich3:0:0:0): Error 5, Periph was invalidated
Oct 27 11:01:39 freenas1 (ada7:ahcich3:0:0:0): WRITE_FPDMA_QUEUED. ACB: 61 10 90 ba c0 40 d1 01 00 00 00 00
Oct 27 11:01:39 freenas1 (ada7:ahcich3:0:0:0): CAM status: ATA Status Error
Oct 27 11:01:39 freenas1 (ada7:ahcich3:0:0:0): ATA status: 00 ()
Oct 27 11:01:39 freenas1 (ada7:ahcich3:0:0:0): RES: 00 00 00 00 00 00 00 00 00 00 00
Oct 27 11:01:39 freenas1 (ada7:ahcich3:0:0:0): Error 5, Periph was invalidated
Oct 27 11:01:39 freenas1 (ada7:ahcich3:0:0:0): WRITE_FPDMA_QUEUED. ACB: 61 10 90 bc c0 40 d1 01 00 00 00 00
Oct 27 11:01:39 freenas1 (ada7:ahcich3:0:0:0): CAM status: ATA Status Error
Oct 27 11:01:39 freenas1 (ada7:ahcich3:0:0:0): ATA status: 00 ()
Oct 27 11:01:39 freenas1 (ada7:ahcich3:0:0:0): RES: 00 00 00 00 00 00 00 00 00 00 00
Oct 27 11:01:39 freenas1 (ada7:ahcich3:0:0:0): Error 5, Periph was invalidated
Oct 27 11:01:39 freenas1 g_access(952): provider ada7 has error 6 set
Oct 27 11:01:39 freenas1 g_access(952): provider ada7 has error 6 set
Oct 27 11:01:39 freenas1 g_access(952): provider ada7 has error 6 set
Oct 27 11:01:39 freenas1 (ada27:ata9:0:0:0): WRITE_DMA. ACB: ca 00 90 02 40 40 00 00 00 00 10 00
Oct 27 11:01:39 freenas1 (ada27:ata9:0:0:0): CAM status: ATA Status Error
Oct 27 11:01:39 freenas1 (ada27:ata9:0:0:0): ATA status: 61 (DRDY DF ERR), error: 04 (ABRT )
Oct 27 11:01:39 freenas1 (ada27:ata9:0:0:0): RES: 61 04 90 02 40 00 00 00 00 10 00
Oct 27 11:01:39 freenas1 (ada27:ata9:0:0:0): Retrying command
Oct 27 11:01:39 freenas1 (ada27:ata9:0:0:0): WRITE_DMA. ACB: ca 00 90 02 40 40 00 00 00 00 10 00
Oct 27 11:01:39 freenas1 (ada27:ata9:0:0:0): CAM status: ATA Status Error
Oct 27 11:01:39 freenas1 (ada27:ata9:0:0:0): ATA status: 61 (DRDY DF ERR), error: 04 (ABRT )
Oct 27 11:01:39 freenas1 (ada27:ata9:0:0:0): RES: 61 04 90 02 40 00 00 00 00 10 00
Oct 27 11:01:39 freenas1 (ada27:ata9:0:0:0): Retrying command
Oct 27 11:01:39 freenas1 (ada27:ata9:0:0:0): WRITE_DMA. ACB: ca 00 90 02 40 40 00 00 00 00 10 00
Oct 27 11:01:39 freenas1 (ada27:ata9:0:0:0): CAM status: ATA Status Error
Oct 27 11:01:39 freenas1 (ada27:ata9:0:0:0): ATA status: 61 (DRDY DF ERR), error: 04 (ABRT )
Oct 27 11:01:39 freenas1 (ada27:ata9:0:0:0): RES: 61 04 90 02 40 00 00 00 00 10 00
Oct 27 11:01:39 freenas1 (ada27:ata9:0:0:0): Retrying command
Oct 27 11:01:39 freenas1 (ada27:ata9:0:0:0): WRITE_DMA. ACB: ca 00 90 02 40 40 00 00 00 00 10 00
Oct 27 11:01:39 freenas1 (ada27:ata9:0:0:0): CAM status: ATA Status Error
Oct 27 11:01:39 freenas1 (ada27:ata9:0:0:0): ATA status: 61 (DRDY DF ERR), error: 04 (ABRT )
Oct 27 11:01:39 freenas1 (ada27:ata9:0:0:0): RES: 61 04 90 02 40 00 00 00 00 10 00
Oct 27 11:01:39 freenas1 (ada27:ata9:0:0:0): Retrying command
Oct 27 11:01:39 freenas1 (ada27:ata9:0:0:0): WRITE_DMA. ACB: ca 00 90 02 40 40 00 00 00 00 10 00
Oct 27 11:01:39 freenas1 (ada27:ata9:0:0:0): CAM status: ATA Status Error
Oct 27 11:01:39 freenas1 (ada27:ata9:0:0:0): ATA status: 61 (DRDY DF ERR), error: 04 (ABRT )
Oct 27 11:01:39 freenas1 (ada27:ata9:0:0:0): RES: 61 04 90 02 40 00 00 00 00 10 00
Oct 27 11:01:39 freenas1 (ada27:ata9:0:0:0): Error 5, Retries exhausted
This then goes on forever; the pool never comes back online, not even in a degraded state.
What I do not understand: if the disks were dead from a physical defect I would understand, but if they are OK and it is just the data that is bad, how could that happen? Again, it seems to me the disks are physically OK.
And is there a way to fix this, to somehow identify the bad data and zero it, so the pool comes online again, maybe in a degraded state?
I also tried the following (the commands are sketched below):
1.) get a list of all disks / ada identifiers
2.) start a zpool import, identify the disks it complains about, unplug their SATA cables, reboot...
3.) do the same for the next disk, starting again at 1.)
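Roughly, that loop boils down to these commands (a sketch; <poolname> is a placeholder, since I am quoting from memory):

camcontrol devlist           # list all attached disks with their adaX identifiers
zpool import                 # list importable pools and the state of their devices, without importing
zpool import -f <poolname>   # the actual import attempt, run from a second shell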
I also collected SMART data for all drives, also attached. In this smarttest.log you can see that those strange SMART errors have been recorded on a few disks, like ada7:
##########
##########
##########
*******
* ada7*
*******
smartctl 7.0 2018-12-30 r4883 [FreeBSD 11.3-RELEASE-p14 amd64] (local build)
Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: Western Digital Red (SMR)
Device Model: WDC WD40EFAX-68JH4N0
Serial Number: WD-WX32D203U3JT
LU WWN Device Id: 5 0014ee 21272c9f7
Firmware Version: 82.00A82
User Capacity: 4,000,787,030,016 bytes [4.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5400 rpm
Form Factor: 3.5 inches
Device is: In smartctl database [for details use: -P show]
ATA Version is: ACS-3 T13/2161-D revision 5
SATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Tue Oct 27 15:19:41 2020 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: (36060) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 517) minutes.
Conveyance self-test routine
recommended polling time: ( 2) minutes.
SCT capabilities: (0x3039) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0
3 Spin_Up_Time 0x0027 198 195 021 Pre-fail Always - 3091
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 434
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0
9 Power_On_Hours 0x0032 098 098 000 Old_age Always - 1668
10 Spin_Retry_Count 0x0032 100 100 000 Old_age Always - 0
11 Calibration_Retry_Count 0x0032 100 100 000 Old_age Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 417
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 325
193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 125
194 Temperature_Celsius 0x0022 115 090 000 Old_age Always - 32
196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0030 100 253 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0
200 Multi_Zone_Error_Rate 0x0008 100 253 000 Old_age Offline - 0
SMART Error Log Version: 1
ATA Error Count: 30 (device log contains only the most recent five errors)
CR = Command Register [HEX]
FR = Features Register [HEX]
SC = Sector Count Register [HEX]
SN = Sector Number Register [HEX]
CL = Cylinder Low Register [HEX]
CH = Cylinder High Register [HEX]
DH = Device/Head Register [HEX]
DC = Device Command Register [HEX]
ER = Error register [HEX]
ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.
Error 30 occurred at disk power-on lifetime: 1668 hours (69 days + 12 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
04 61 46 00 00 00 40 Device Fault; Error: ABRT
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
ef 03 46 00 00 00 40 08 00:08:53.086 SET FEATURES [Set transfer mode]
ef 03 46 00 00 00 40 08 00:08:53.086 SET FEATURES [Set transfer mode]
ec 00 00 00 00 00 40 08 00:08:53.085 IDENTIFY DEVICE
Error 29 occurred at disk power-on lifetime: 1668 hours (69 days + 12 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
04 61 46 00 00 00 40 Device Fault; Error: ABRT
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
ef 03 46 00 00 00 40 08 00:08:53.086 SET FEATURES [Set transfer mode]
ec 00 00 00 00 00 40 08 00:08:53.085 IDENTIFY DEVICE
Error 28 occurred at disk power-on lifetime: 1668 hours (69 days + 12 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
04 61 00 00 00 00 40 Device Fault; Error: ABRT
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
ef 02 00 00 00 00 40 08 00:08:45.004 SET FEATURES [Enable write cache]
ef aa 00 00 00 00 40 08 00:08:45.003 SET FEATURES [Enable read look-ahead]
ef 03 46 00 00 00 40 08 00:08:45.003 SET FEATURES [Set transfer mode]
ef 03 46 00 00 00 40 08 00:08:45.003 SET FEATURES [Set transfer mode]
ec 00 00 00 00 00 40 08 00:08:45.002 IDENTIFY DEVICE
Error 27 occurred at disk power-on lifetime: 1668 hours (69 days + 12 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
04 61 00 00 00 00 40 Device Fault; Error: ABRT
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
ef aa 00 00 00 00 40 08 00:08:45.003 SET FEATURES [Enable read look-ahead]
ef 03 46 00 00 00 40 08 00:08:45.003 SET FEATURES [Set transfer mode]
ef 03 46 00 00 00 40 08 00:08:45.003 SET FEATURES [Set transfer mode]
ec 00 00 00 00 00 40 08 00:08:45.002 IDENTIFY DEVICE
Error 26 occurred at disk power-on lifetime: 1668 hours (69 days + 12 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
04 61 46 00 00 00 40 Device Fault; Error: ABRT
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
ef 03 46 00 00 00 40 08 00:08:45.003 SET FEATURES [Set transfer mode]
ef 03 46 00 00 00 40 08 00:08:45.003 SET FEATURES [Set transfer mode]
ec 00 00 00 00 00 40 08 00:08:45.002 IDENTIFY DEVICE
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed without error 00% 1646 -
# 2 Short offline Completed without error 00% 1646 -
# 3 Short offline Completed without error 00% 1646 -
# 4 Short offline Completed without error 00% 1645 -
# 5 Short offline Completed without error 00% 1623 -
# 6 Short offline Completed without error 00% 1622 -
# 7 Short offline Completed without error 00% 1598 -
# 8 Short offline Completed without error 00% 1574 -
# 9 Short offline Completed without error 00% 1550 -
#10 Short offline Completed without error 00% 1526 -
#11 Short offline Completed without error 00% 1502 -
#12 Short offline Completed without error 00% 1478 -
#13 Short offline Completed without error 00% 1454 -
#14 Short offline Completed without error 00% 1430 -
#15 Short offline Completed without error 00% 1410 -
#16 Short offline Completed without error 00% 1385 -
#17 Short offline Completed without error 00% 1362 -
#18 Short offline Completed without error 00% 1361 -
#19 Short offline Completed without error 00% 1337 -
#20 Short offline Completed without error 00% 1313 -
#21 Short offline Completed without error 00% 1293 -
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
##########
##########
##########
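For completeness, the smarttest.log above was generated with something along these lines (a sketch; on FreeBSD, sysctl kern.disks lists the disk devices):

for d in $(sysctl -n kern.disks); do
  echo "* $d *" >> smarttest.log
  smartctl -a /dev/$d >> smarttest.log
done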
So, in summary, I think:
1.) the disks are physically OK, but somehow the data got corrupted.
2.) I guess that since more than 2 disks in one vdev were affected, the data is gone, correct?
3.) if there had been any way to recover, even in a degraded state, ZFS would have done so when I executed zpool import.
Before I delete everything and start over: can someone with more experience tell me if there is any way to save the data, even partially? I read somewhere that you can detect the parts that are bad on each disk and zero them out. Is something like that possible? I list the recovery options I found below, in case someone can comment on whether they are safe to try.
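These are the options I found in the zpool(8) man page; this is my understanding only, I have not dared to run the destructive ones yet:

zpool import -o readonly=on -f <poolname>   # read-only import, to try copying data off without writing anything
zpool import -F -n <poolname>               # dry run: reports whether discarding the last few transactions would make the pool importable
zpool import -F <poolname>                  # recovery-mode import, actually discards those transactions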
Many thanks
Joerg