Hey Folks,
I had a video card hooked up to my FreeNAS box for testing. I recently rebuilt my 3-disk RAIDZ1 array as a 6-disk RAIDZ2 array following some recommendations here, and I was also trying to track down another hardware issue. Anyway, I noticed some temperature warnings coming up on my console. I've taken the video card out and put the fans back in the case, and tomorrow I'll space the drives more evenly and add another fan to improve airflow, but I'm concerned that I may have damaged my new hard drives. The following is the output of smartctl -A on one of my drives (/dev/ada5).
Code:
[root@freenas] ~# smartctl -A /dev/ada5
smartctl 5.43 2012-06-30 r3573 [FreeBSD 8.3-RELEASE-p5 amd64] (local build)
Copyright (C) 2002-12 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   114   100   006    Pre-fail  Always       -       76825240
  3 Spin_Up_Time            0x0003   098   098   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       3
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   100   253   030    Pre-fail  Always       -       221583
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       80
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       3
183 Runtime_Bad_Block       0x0032   100   100   000    Old_age   Always       -       0
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   051   042   045    Old_age   Always   In_the_past 49 (0 52 50 47 0)
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       2
193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       5
194 Temperature_Celsius     0x0022   049   058   000    Old_age   Always       -       49 (0 26 0 0 0)
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       92968862089296
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       1686666804
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       740888271
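For what it's worth, while I sort out the airflow I've been spot-checking temperatures with a quick loop like the one below (a rough sketch in plain sh; I'm assuming the six data drives show up as ada0 through ada5):

Code:
# Print the SMART temperature attributes for each drive
# (assumes the drives are ada0 through ada5 -- adjust to match your device list)
for d in ada0 ada1 ada2 ada3 ada4 ada5; do
    echo "=== /dev/$d ==="
    smartctl -A /dev/$d | grep -i temperature
done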
It looks like the drives have all been over the temperature threshold at some point. I'm running a scrub now, and this is the output of zpool status:
Code:
[root@freenas] /var/log# zpool status
  pool: volume
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
  scan: scrub in progress since Wed Mar 13 16:05:31 2013
        2.36T scanned out of 4.24T at 498M/s, 1h5m to go
        1.47M repaired, 55.77% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        volume                                          ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/6a444eda-899d-11e2-823d-50e549b3a1da  ONLINE       0     0    11  (repairing)
            gptid/6ab74a08-899d-11e2-823d-50e549b3a1da  ONLINE       0     0     9  (repairing)
            gptid/6b391d29-899d-11e2-823d-50e549b3a1da  ONLINE       0     0     7  (repairing)
            gptid/6b97f34d-899d-11e2-823d-50e549b3a1da  ONLINE       0     0     5  (repairing)
            gptid/6be28d09-899d-11e2-823d-50e549b3a1da  ONLINE       0     0     6  (repairing)
            gptid/6c5396b1-899d-11e2-823d-50e549b3a1da  ONLINE       0     0     9  (repairing)
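Once the scrub finishes, my plan (going by the action text above) is to clear the error counters and re-check the pool, assuming none of the drives actually needs to be replaced:

Code:
# After the scrub completes: clear the checksum error counters on the pool,
# then confirm it still reports ONLINE with no new errors
zpool clear volume
zpool status volume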
Can anyone make suggestions?
Thanks!