BUILD C226, i3, 10 drive build

Status
Not open for further replies.

indy

Patron
Joined
Dec 28, 2013
Messages
287
[Attachments: hdd_fan-1.jpg, hdd_fan-2.jpg]

This is the Lian-Li EX-23NB HDD adapter before and after modifying it.
In its default configuration the fan actually had to run at 12V for the disks to stay under 40°C; 7V would not do it.

My Dremel made short work of it though and it got a Noctua PWM fan as a bonus.
It still runs at 12V, however the hard disks stay well under 40°C with the fan throttled by Noctua's 'low noise adapter'.
 

indy

Patron
Joined
Dec 28, 2013
Messages
287
[Attachments: nas_3-1.jpg, nas_3-2.jpg, nas_3-3.jpg]

This is it ;)
Thanks to everyone that helped me get to a build that I am very happy with!
 

indy

Patron
Joined
Dec 28, 2013
Messages
287
A few benchmarks I did with the WD Reds a while back.
The file system is ZFS.

3x Intel port, stripe
Code:
[root@freenas] /mnt/test2# dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 309.720594 secs (346680797 bytes/sec)
[root@freenas] /mnt/test2# dd if=tmp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 311.154500 secs (345083174 bytes/sec)



3x Lsi port, stripe
Code:
[root@freenas] /mnt/test1# dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 303.553565 secs (353724004 bytes/sec)
[root@freenas] /mnt/test1# dd if=tmp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 306.963624 secs (349794484 bytes/sec)



5x Lsi port, stripe
Code:
[root@freenas] /mnt/test3# dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 187.659917 secs (572174304 bytes/sec)
[root@freenas] /mnt/test3# dd if=tmp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 203.423305 secs (527836191 bytes/sec)



8x Lsi port, 3x Intel port, raidz3
Code:
[root@freenas] /mnt/vol1/benchmark# dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 157.238142 secs (682876184 bytes/sec)
[root@freenas] /mnt/vol1/benchmark# dd if=tmp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 129.556497 secs (828782693 bytes/sec)
 

indy

Patron
Joined
Dec 28, 2013
Messages
287
So I had my first little scare:
Suddenly my CIFS shares were no longer accessible and apparently the server had rebooted without a proper shutdown, locking the main pool.
There was no clue in the system log as to what happened, and the server should not auto-restart after a power failure either.

Anyway, everything seems fine and I used the chance to update from 9.2.1.2 to 9.2.1.4.1.
 

indy

Patron
Joined
Dec 28, 2013
Messages
287
Code:
[root@freenas] ~# zpool status
  pool: tank
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: scrub repaired 16K in 0h0m with 0 errors on Thu May  1 00:00:41 2014
config:

        NAME                                          STATE     READ WRITE CKSUM
        tank                                          ONLINE       0     0     0
          gptid/3fd35755-a16f-11e3-bbce-002590f062ca  ONLINE       0     0     1


Seems like the SSD with syslog and the jails on it is starting to fail.
It was already worn out when I built the server, however.
I think I will let it run its course, since I am not too worried about losing my jails, and I kind of want to see what happens.
Pretty awesome though how ZFS is able to deal with these kinds of errors.
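For anyone following along, a sketch of the recovery steps the status output itself suggests (pool name 'tank' as above; run at your own discretion):

```shell
zpool scrub tank    # re-verify all data after the checksum error
zpool status tank   # wait for the scrub to finish, check for new errors
zpool clear tank    # reset the READ/WRITE/CKSUM error counters
```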
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Just curious indy, can you post the SMART data for your drive? I'm curious to see what your SSD values are. ;)
 

indy

Patron
Joined
Dec 28, 2013
Messages
287
Sure, leave a comment on it though ;)

Code:
[root@freenas] ~# smartctl -a /dev/ada0
smartctl 6.2 2013-07-26 r3841 [FreeBSD 9.2-RELEASE-p4 amd64] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Indilinx Barefoot based SSDs
Device Model:     STT_FTM64GX25H
Serial Number:    P612102-MIBY-208A016
Firmware Version: 1916
User Capacity:    64,023,257,088 bytes [64.0 GB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    Solid State Device
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS (minor revision not indicated)
Local Time is:    Thu May  1 10:44:26 2014 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
                                        was never started.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      ( 240) Self-test routine in progress...
                                        00% of test remaining.
Total time to complete Offline
data collection:                (    0) seconds.
Offline data collection
capabilities:                    (0x1d) SMART execute Offline immediate.
                                        No Auto Offline data collection support.
                                        Abort Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        No Conveyance Self-test supported.
                                        No Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x00) Error logging NOT supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   0) minutes.
Extended self-test routine
recommended polling time:        (   0) minutes.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x0000   ---   ---   ---    Old_age   Offline      -       5
  9 Power_On_Hours          0x0000   ---   ---   ---    Old_age   Offline      -       14736
 12 Power_Cycle_Count       0x0000   ---   ---   ---    Old_age   Offline      -       1544
184 Initial_Bad_Block_Count 0x0000   ---   ---   ---    Old_age   Offline      -       22
195 Program_Failure_Blk_Ct  0x0000   ---   ---   ---    Old_age   Offline      -       0
196 Erase_Failure_Blk_Ct    0x0000   ---   ---   ---    Old_age   Offline      -       1
197 Read_Failure_Blk_Ct     0x0000   ---   ---   ---    Old_age   Offline      -       0
198 Read_Sectors_Tot_Ct     0x0000   ---   ---   ---    Old_age   Offline      -       24281577582
199 Write_Sectors_Tot_Ct    0x0000   ---   ---   ---    Old_age   Offline      -       17843796809
200 Read_Commands_Tot_Ct    0x0000   ---   ---   ---    Old_age   Offline      -       617641059
201 Write_Commands_Tot_Ct   0x0000   ---   ---   ---    Old_age   Offline      -       375991652
202 Error_Bits_Flash_Tot_Ct 0x0000   ---   ---   ---    Old_age   Offline      -       457953860
203 Corr_Read_Errors_Tot_Ct 0x0000   ---   ---   ---    Old_age   Offline      -       259956158
204 Bad_Block_Full_Flag     0x0000   ---   ---   ---    Old_age   Offline      -       0
205 Max_PE_Count_Spec       0x0000   ---   ---   ---    Old_age   Offline      -       10000
206 Min_Erase_Count         0x0000   ---   ---   ---    Old_age   Offline      -       584
207 Max_Erase_Count         0x0000   ---   ---   ---    Old_age   Offline      -       13433
208 Average_Erase_Count     0x0000   ---   ---   ---    Old_age   Offline      -       12618
209 Remaining_Lifetime_Perc 0x0000   ---   ---   ---    Old_age   Offline      -       99
211 SATA_Error_Ct_CRC       0x0000   ---   ---   ---    Old_age   Offline      -       0
212 SATA_Error_Ct_Handshake 0x0000   ---   ---   ---    Old_age   Offline      -       0
213 Indilinx_Internal       0x0000   ---   ---   ---    Old_age   Offline      -       0

Warning! SMART ATA Error Log Structure error: invalid SMART checksum.
SMART Error Log Version: 1
No Errors Logged

Warning! SMART Self-Test Log Structure error: invalid SMART checksum.
SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]


Selective Self-tests/Logging not supported
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Well, if you look at the drive, it's got a min/max/avg of 584/13433/12618 erase cycles. But parameter 205 appears to say the drive is rated for just 10000. So you're basically using a drive that is beyond its expected lifespan.

I'd say that, just based on the very low 584 number, TRIM definitely isn't supported on FreeNAS (assuming that drive supports TRIM at all).

What I find particularly interesting is that 209 says the drive has 99% of its life remaining. Where the hell is it getting that number from? LOL.
 

indy

Patron
Joined
Dec 28, 2013
Messages
287
"So you're basically using a drive that is beyond its expected lifespan."
Yeah, I thought as much.
Thanks for the interpretation!
 

indy

Patron
Joined
Dec 28, 2013
Messages
287
Ok... the second-to-last scrub repaired 80k and the last scrub 144k, about a week apart.
I did not expect the drive to be failing that fast.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Oh yeah, that's standard for both SSD as well as platter drives. When they 'start' going bad they go downhill like a rock.
 