Slow Read/Write performance after running ZFS Scrub

Status
Not open for further replies.

doverosx

Dabbler
Joined
Jan 4, 2013
Messages
33
Hello,

I had an alert show up in the FreeNAS UI, so I took the liberty of doing some digging. The UI indicated no issues apart from a dialog reporting that the ZFS volume status was UNKNOWN, even though the status under Storage said ONLINE. Anyway, I ran the zpool status command in the shell and got this:

Code:
 The volume Backup (ZFS) status is UNKNOWN: One or more devices has experienced an error resulting in data corruption. Applications may be affected. Restore the file in question if possible. Otherwise restore the entire pool from backup.
 
 pool: Backup                                                                  
 state: ONLINE                                                                  
status: One or more devices has experienced an error resulting in data          
        corruption.  Applications may be affected.                              
action: Restore the file in question if possible.  Otherwise restore the        
        entire pool from backup.                                                
   see: http://www.sun.com/msg/ZFS-8000-8A                                      
  scan: none requested                                                          
config:                                                                         
                                                                                
        NAME                                          STATE     READ WRITE CKSUM
        Backup                                        ONLINE       1     0     0
          gptid/6015bc9b-48ba-11e2-9622-0017314048ae  ONLINE       0     0     0
          gptid/60806027-48ba-11e2-9622-0017314048ae  ONLINE       0     0     0
          gptid/60e8525f-48ba-11e2-9622-0017314048ae  ONLINE       0     0     0
          gptid/6151e1af-48ba-11e2-9622-0017314048ae  ONLINE       0     0     0
          gptid/61b2f20a-48ba-11e2-9622-0017314048ae  ONLINE       1     0     0
                                                                                
errors: Permanent errors have been detected in the following files:             
                                                                                
        Backup/Backup:<0x6465e>


I figured there was no issue there, so let's run SMART... and of course I got no errors. So I went ahead and scrubbed the data volume. Before this I was getting 50-ish MB/s, and now I'm getting 18 MB/s.

When I ran the scrub, I waited until the system load was back down to basically nothing, then restarted the machine. I waited for the load to drop again and started using the system. Performance is pretty sad now for a ZFS stripe.
 

Stephens

Patron
Joined
Jun 19, 2012
Messages
496
Backup/Backup:<0x6465e>

I've been wondering what the hex filenames are. I know what that'd mean under Windows, but not ZFS. I did some googling and came across Pool error in a hex filename. Specifically, this post.

zpool status shows the current errors and the last errors, so you have to clear twice to make them go away completely. After you destroyed the data, ZFS knew there was bad data, but could no longer tie it to a file name, hence the hex addresses.

Basically, the hex filenames are just like in Windows... ZFS knows there is allocated data, but it can no longer tie it back to a filename.

OK, so that's my curiosity somewhat sated. On to your issue... I wonder what you mean by there were "no" errors on your SMART test. I'd need to see the results. It's actually pretty unusual to have drives with NO errors (as opposed to tolerable ones). And why did you need to scrub in the first place? It actually sounds like you have a marginal or failing drive, and that's why you're seeing the speed you're seeing. There's no "slow down my disks" option for zfs scrub.
 

doverosx


lol, no problem. Maybe I should add that as a feature request in FreeNAS ;). I'll get that SMART result set.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Disk gptid/61b2f20a-48ba-11e2-9622-0017314048ae had a read error. The fact that it had one is cause to be wary and to monitor the drive. Your zpool name "Backup" implies you know you have a RAID0 and only intend to use it for backups. If that disk fails, everything in your zpool will be lost. Not a big deal for backups unless you expect to need them very soon. Honestly, I'd consider doing a Z1 for backups. I know backups aren't critical, but with Z1 you don't lose everything on a single disk failure, so recovering from a failed disk is painless without spending a lot of money on redundancy disks.
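For comparison, the tradeoff is simple to sketch. Assuming five 1 TB disks like the pool above (a rough capacity estimate, ignoring ZFS overhead):

```python
def usable_tb(disks: int, disk_tb: float, parity: int) -> float:
    """Rough usable capacity: each parity disk's worth of space goes to redundancy."""
    return (disks - parity) * disk_tb

stripe = usable_tb(5, 1.0, parity=0)  # RAID0 stripe: all space, no redundancy
raidz1 = usable_tb(5, 1.0, parity=1)  # RAID-Z1: one disk's worth used for parity

print(stripe)  # 5.0 -> any single disk failure loses the whole pool
print(raidz1)  # 4.0 -> pool survives one disk failure
```

So RAID-Z1 costs one disk's worth of space here in exchange for surviving exactly one failed drive.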

You should run a long test of the disk in question. The typical SMART short test does not do a full disk surface scan.

You can clear the error with "zpool clear Backup". I would definitely monitor that disk for a while. It could be a fluke, or a sign of a disk that will soon fail.
 

doverosx

Very good advice guys! Thank you very much.

I do get this from my Seagate LP (one of my newer drives too!)
Code:
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed: read failure       90%      7077         819909042
# 2  Short offline       Completed: read failure       90%      7069         819909042
# 3  Short offline       Completed: read failure       90%      7069         819909042
# 4  Short offline       Completed: read failure       90%      7069         819909042

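Worth noting: all four failures in that log report the same LBA_of_first_error, which points at one bad spot on the platter rather than scattered failures. A minimal sketch of pulling those LBAs out of a pasted smartctl self-test log (the `failing_lbas` helper is hypothetical, not a smartmontools API):

```python
import re

def failing_lbas(log_text: str) -> list:
    """Extract LBA_of_first_error values from smartctl self-test log lines."""
    lbas = []
    for line in log_text.splitlines():
        # Matches lines like: "# 1  Short offline  Completed: read failure  90%  7077  819909042"
        m = re.match(r"#\s*\d+\s+.*read failure\s+\d+%\s+\d+\s+(\d+)", line)
        if m:
            lbas.append(int(m.group(1)))
    return lbas

log = """\
# 1  Short offline       Completed: read failure       90%      7077         819909042
# 2  Short offline       Completed: read failure       90%      7069         819909042
"""
print(failing_lbas(log))                  # [819909042, 819909042]
print(len(set(failing_lbas(log))) == 1)   # True: every failure hits the same LBA
```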

Regardless, the alert is gone now after the scrubbing, cyberjock, so I'll prepare for some file loss. This could explain why I couldn't access a particular file when trying to restore data recently; there was a kernel panic every time I'd try to copy it over. That hex file name also baffled me, but I found those very same posts ;).

I'll be looking at going to JBOD, I think. I just don't want to invest in the RAM and hard disks that are required to 'properly' run RAIDZ setups just yet.
 

cyberjock

I'm not sure what kind of backups you keep. I used Acronis True Image before, but currently use O&O DiskImage. I know that with both of those programs, if the image file is corrupt in any way, the restore will fail. It really sucks when you think you have a good backup, go to do a restore, and your backup won't work.

Not sure what your system specs are, but if performance isn't a major pressing issue (and it shouldn't be for typical backups), you could probably get by with 8GB of RAM. Just don't go creating an extravagant jail of apps and you are probably fine. Of course, if you have less than 6GB, I wouldn't try it unless you are willing to spend quite some time getting familiar with ZFS tweaks and how to get it to work well with less RAM.
 

doverosx

Output after the SMART long test:
Code:
[root@freenas] ~# smartctl -a /dev/ada1
smartctl 5.43 2012-06-30 r3573 [FreeBSD 8.3-RELEASE-p5 amd64] (local build)
Copyright (C) 2002-12 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Barracuda LP
Device Model:     ST31000520AS
Serial Number:    9VX0G6KV
LU WWN Device Id: 5 000c50 01a1df2ed
Firmware Version: CC32
User Capacity:    1,000,204,886,016 bytes [1.00 TB]
Sector Size:      512 bytes logical/physical
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   8
ATA Standard is:  ATA-8-ACS revision 4
Local Time is:    Tue Feb  5 18:26:12 2013 EST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
                                        was never started.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      ( 121) The previous self-test completed having
                                        the read element of the test failed.
Total time to complete Offline
data collection:                (  633) seconds.
Offline data collection
capabilities:                    (0x73) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        No Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   1) minutes.
Extended self-test routine
recommended polling time:        ( 224) minutes.
Conveyance self-test routine
recommended polling time:        (   2) minutes.
SCT capabilities:              (0x103f) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   117   099   006    Pre-fail  Always       -       131380799
  3 Spin_Up_Time            0x0003   095   095   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   098   098   020    Old_age   Always       -       2061
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   072   060   030    Pre-fail  Always       -       15001529
  9 Power_On_Hours          0x0032   092   092   000    Old_age   Always       -       7079
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       114
183 Runtime_Bad_Block       0x0032   100   100   000    Old_age   Always       -       0
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   050   050   000    Old_age   Always       -       50
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   056   051   045    Old_age   Always       -       44 (Min/Max 32/45)
194 Temperature_Celsius     0x0022   044   049   000    Old_age   Always       -       44 (0 13 0 0 0)
195 Hardware_ECC_Recovered  0x001a   050   032   000    Old_age   Always       -       131380799
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       5
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       5
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       227989748980029
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       1585284945
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       922471111

SMART Error Log Version: 1
ATA Error Count: 46 (device log contains only the most recent five errors)
        CR = Command Register [HEX]
        FR = Features Register [HEX]
        SC = Sector Count Register [HEX]
        SN = Sector Number Register [HEX]
        CL = Cylinder Low Register [HEX]
        CH = Cylinder High Register [HEX]
        DH = Device/Head Register [HEX]
        DC = Device Command Register [HEX]
        ER = Error register [HEX]
        ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 46 occurred at disk power-on lifetime: 6894 hours (287 days + 6 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  25 00 00 ff ff ff 4f 00      01:12:55.801  READ DMA EXT
  25 00 00 ff ff ff 4f 00      01:12:52.078  READ DMA EXT
  25 00 00 ff ff ff 4f 00      01:12:48.232  READ DMA EXT
  25 00 00 ff ff ff 4f 00      01:12:44.499  READ DMA EXT
  25 00 00 ff ff ff 4f 00      01:12:40.787  READ DMA EXT

Error 45 occurred at disk power-on lifetime: 6894 hours (287 days + 6 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  25 00 00 ff ff ff 4f 00      01:12:52.078  READ DMA EXT
  25 00 00 ff ff ff 4f 00      01:12:48.232  READ DMA EXT
  25 00 00 ff ff ff 4f 00      01:12:44.499  READ DMA EXT
  25 00 00 ff ff ff 4f 00      01:12:40.787  READ DMA EXT
  35 00 01 ff ff ff 4f 00      01:12:40.786  WRITE DMA EXT

Error 44 occurred at disk power-on lifetime: 6894 hours (287 days + 6 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  25 00 00 ff ff ff 4f 00      01:12:48.232  READ DMA EXT
  25 00 00 ff ff ff 4f 00      01:12:44.499  READ DMA EXT
  25 00 00 ff ff ff 4f 00      01:12:40.787  READ DMA EXT
  35 00 01 ff ff ff 4f 00      01:12:40.786  WRITE DMA EXT
  35 00 09 ff ff ff 4f 00      01:12:40.786  WRITE DMA EXT

Error 43 occurred at disk power-on lifetime: 6894 hours (287 days + 6 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  25 00 00 ff ff ff 4f 00      01:12:44.499  READ DMA EXT
  25 00 00 ff ff ff 4f 00      01:12:40.787  READ DMA EXT
  35 00 01 ff ff ff 4f 00      01:12:40.786  WRITE DMA EXT
  35 00 09 ff ff ff 4f 00      01:12:40.786  WRITE DMA EXT
  35 00 06 ff ff ff 4f 00      01:12:40.786  WRITE DMA EXT

Error 42 occurred at disk power-on lifetime: 6894 hours (287 days + 6 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  25 00 00 ff ff ff 4f 00      01:12:40.787  READ DMA EXT
  35 00 01 ff ff ff 4f 00      01:12:40.786  WRITE DMA EXT
  35 00 09 ff ff ff 4f 00      01:12:40.786  WRITE DMA EXT
  35 00 06 ff ff ff 4f 00      01:12:40.786  WRITE DMA EXT
  35 00 04 ff ff ff 4f 00      01:12:40.786  WRITE DMA EXT

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed: read failure       90%      7077         819909042
# 2  Short offline       Completed: read failure       90%      7077         819909042
# 3  Short offline       Completed: read failure       90%      7069         819909042
# 4  Short offline       Completed: read failure       90%      7069         819909042
# 5  Short offline       Completed: read failure       90%      7069         819909042

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.


I keep per-file backups for the reason you posted; there's nothing worse than not being able to restore. It makes all the fruits of your labour worthless, doesn't it? The system is using DDR RAM and I don't want to keep dropping money into it to go above 4GB; the next money spent will be on storage or a subsystem redo (mobo, CPU, etc.).
 

cyberjock

You may be able to get away with 4GB. I know a lot of people do, especially Atom users. Of course, with <6GB, prefetch will be disabled (so read speeds will see a significant drop).

Even with 4GB I'd probably go ZFS, to be honest, and accept whatever speeds you get. I'm sure the speeds won't be so poor that you'll be disappointed for backups.

This..

Code:
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       5


is a bad thing. That's usually an indicator of a failing disk.

I would definitely look at getting a new disk very soon. That disk is on its last legs. In fact, the daily email that I have customised gives me that parameter for all of my drives nightly, along with the temps. Those two will tell you how healthy a hard disk is.
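A sketch of the kind of check such a nightly report boils down to, assuming the attribute-table layout in the smartctl output above (the `pending_sectors` helper is illustrative, not a smartmontools API):

```python
def pending_sectors(smart_output: str) -> int:
    """Raw value of attribute 197 (Current_Pending_Sector) from `smartctl -a` text."""
    for line in smart_output.splitlines():
        fields = line.split()
        # Attribute rows start with the numeric ID; RAW_VALUE is the last column.
        if len(fields) >= 10 and fields[0] == "197":
            return int(fields[-1])
    return -1  # attribute not found in this output

sample = "197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       5"
print(pending_sectors(sample))  # 5 -> five sectors the drive can't read and wants to remap
```

Anything above zero here, trending upward, is the red flag being described.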
 

doverosx

Your thinking is along the same lines as mine ;). I think it is a much better idea to get an alert or error indicating file corruption than to fail silently (as in "simple" RAID0, 1, or 1+0).
 

cyberjock

I just edited my post above, but you posted already; see the addition there about Current_Pending_Sector being a bad sign.
 

doverosx

Is it bad because the threshold is 0...and the value is 100? :mad:
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I ignore the threshold and value stuff. Depending on the HD manufacturer, those can be AFU, not work right, or count backwards.

I just look at the actual raw value. In your case, 5. A drive in perfect condition should be at zero. Once you get to 1 or 2, that's an indication that the drive should be monitored closely. Doing a scrub can often fix that problem. Basically, that number means you have 5 sectors that the drive knows are bad and will relocate when you next write to them. There are instructions in the forum somewhere to identify and write to those 5 sectors to get that number back to zero. It's important to watch that number: if it's going up, that's a bad thing; if it stays steady (ideally at zero), then hard drive health isn't perfect, but it's not getting worse. In your case, the fact that you have a value for 197, along with the rest of the SMART report, is an indicator that the drive is on its way out.
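The "write to the bad sectors" trick comes down to translating the failing LBA from the self-test log into an offset for dd. A hedged sketch of that arithmetic, using the LBA and 512-byte sector size from the smartctl output above (the dd command string is illustrative only, and writing it against a live disk destroys whatever data is at that spot):

```python
# The self-test log reported LBA_of_first_error = 819909042 on a drive
# with 512-byte logical/physical sectors; writing that sector forces the
# drive to remap it from its spare pool.
lba = 819909042
sector_size = 512

byte_offset = lba * sector_size
print(byte_offset)  # 419793429504

# dd addresses the disk in units of bs, so with bs equal to the sector
# size, the seek count is simply the LBA itself.
cmd = f"dd if=/dev/zero of=/dev/ada1 bs={sector_size} seek={lba} count=1"
print(cmd)
```

After writing the sector, re-run the SMART attributes and check whether Current_Pending_Sector drops and Reallocated_Sector_Ct rises.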

I'd see if it is in warranty or not. If it is, you likely won't be able to RMA it, since the HD manufacturer's tools will probably say the drive is fine (but it's really not... what BS). But if you get an HD write/read/compare utility and do some tests, you may see the drive completely fail, and then you can RMA it. In any case, I wouldn't trust it with data anymore.
 