CIFS Copy and ARC Cache Demo Video (FreeNAS 9.2.1.3)


RchGrav

Dabbler
Joined
Feb 21, 2014
Messages
36
(I wish I had titled my thread: "I'm building a high-performance home NAS out of older, used, and inexpensive parts; follow along for tips, because I'm trying to do it the right way and on the cheap.")

Hi Guys!!

First Post... no wait.. Second Post.. First post was here... http://tinyurl.com/hi-there-freenas-crew

Anyway, I just wanted to share a little video with everyone. I'm pretty sure I'm on the right track getting this thing tuned up and going fast.. but I still feel like I have a bit of work to do...

Sorry about the freeware screen capture software I was using, admittedly it was a little iffy at times..

I could probably start to write up a "Dummies Guide to Tuning FreeNAS Network Performance" with some of the things I learned digging into this over the past couple of days, or at least offer a few tips I couldn't find within the confines of this forum. (It's possible I just suck at searching.)

I do have a few questions for someone really "in the know," because there are a few "anomalies" I haven't figured out yet.

And now on to the show...
View: http://youtu.be/05NMoBstL24


Rich
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Howdy Rich.

Hell of a system you've got there. Was that pool backed by the Velociraptors or the WD SEs? I didn't watch the entire video in deep detail, but just curious on your pool config.

I'm guessing the "anomalies" you're hinting at are related to the stutter/chug you're seeing on the writes?
 

RchGrav

Dabbler
Joined
Feb 21, 2014
Messages
36
HoneyBadger said:
Howdy Rich.

Hell of a system you've got there. Was that pool backed by the Velociraptors or the WD SEs? I didn't watch the entire video in deep detail, but just curious on your pool config.

I'm guessing the "anomalies" you're hinting at are related to the stutter/chug you're seeing on the writes?

The biggest anomaly that was bugging me: after I had achieved the 950MB/s, I tried to replicate it following a reboot and after re-saturating my ARC cache, but no matter what I did I was well below 1/3 of that speed. Oddly, when I went into the FreeNAS GUI to check whether my MTU was still set at 9000 on that adapter, I saw that it was, but I reapplied the settings anyway... and boom, I was hitting the 950MB/s transfer speeds again. I don't know if it was the MTU setting itself, or whether something in the backend script that reapplies the settings to the card caused it to kick in again. This deserves some more investigation on my part to prove it is actually happening, but if I can replicate the effect consistently, it's something the devs should be very interested in looking into.
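For anyone who wants to poke at the same thing, this is roughly how I plan to isolate it from the shell after the next reboot (ix0 is just an example interface name, yours may differ):

# Check what the interface actually came up with after boot
ifconfig ix0

# Re-apply the same MTU by hand instead of through the GUI
ifconfig ix0 mtu 9000 up

If the speed only comes back after the GUI re-apply and not after the manual ifconfig, the fix is probably in whatever else the GUI script touches, not the MTU value itself.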

I actually didn't have the pool with the Velociraptors running in that video. I was just THRILLED to overcome the limitations I was hitting attempting to saturate the 10GbE link with CIFS. That was just the WD SE drives (which in my mind are just WD Blacks with time-limited error recovery enabled).
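If anyone wants to check the TLER side of that claim on their own drives, smartctl can read (and on most drives set) the SCT error recovery timers; /dev/ada0 below is just an example device name:

# Show the current SCT Error Recovery Control (TLER) timers, in tenths of a second
smartctl -l scterc /dev/ada0

# Set read and write recovery to 7.0 seconds, if the firmware allows it
smartctl -l scterc,70,70 /dev/ada0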

Apparently that L5640 overclocks incredibly well, ON AIR; it was really nothing to take it all the way to 3.15GHz (actually even higher, I had it at 3.5+ for a bit) without changing any voltages or making any dangerous tweaks. I'm not big on overclocking, so I dropped it back down to stock when I saw that increasing the clock speed wasn't showing ANY improvement in transfer speed, whether from ARC or from the drives directly...

Initially I could not figure out how to exceed 173MB/s... It dawned on me later that the order in which I installed the cards in the NAS really mattered... A LOT.

My first VERY important tweak involved moving the Intel x520 into one of the top two PCIe slots. I downloaded a PDF of the Sabertooth X58 manual and saw that the slot my controller was sitting in was actually PCIe 1.0 spec, not PCIe 2.0. This got me to a maximum of 278MB/s, but nowhere near the 950+MB/s you can see towards the end of the video.
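For anyone checking their own layout, FreeBSD will tell you what link each card actually negotiated (the NIC shows up as something like ix0@pci0:...):

# List PCI devices with their capabilities; the "PCI-Express" line on the NIC/HBA
# reports the negotiated link speed and width, which is what actually matters here
pciconf -lvc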

The X58 chipset doesn't have onboard video, so instead of wasting the fastest PCIe slot on video, I decided it was prudent to put my video card into the tiny PCIe x1 slot on the board, using a cable that I Dremeled out to fit the card. I also had to short pin A1 to pin B17 to get the card to work in the x1 slot (I did this on the cable, not the card), because if you think about the purpose of a NAS, video isn't something we need a lot of bandwidth for. ;-)

Hardware Rule #1 - Put your controllers (Ethernet and SAS) in the fastest x16 slots on the board. In this early pic you can see my Ethernet controller is located in the brown PCIe slot... (i.e. not the best place.)

[Image: 112ekh1.jpg]


Above: If you look to the right of that brown slot you will see a small blue board; this is an embedded 2GB flash drive connected to a USB header on the board, and it's the "firmware" boot device holding the FreeNAS image.

[Image: mw2vjk.jpg]


Above: This edge connector was modified to fit a full-length PCIe video card and relocate it into the slowest slot on the board. You can see one side of the short jumper wire connected to pin B17; it leads around to the pin just beyond your view on the opposite side to the left (pin A1). This activates PCIe x1 presence detect and allows any PCIe video card to function quite nicely as a text console.

Most people wouldn't need to go to this extreme in terms of modding stuff. I did it because of the limited availability of full-length PCIe slots, and because I have a second HBA available..

Advice: Install your cards in a way that reflects what the system is for... in this case Network Attached Storage... I have learned that this is especially true for the way FreeBSD and its kernel work, more than you would expect based on experience with other OSes.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
If you can replicate it then absolutely open a bug and have the devs look into it. Maybe reboot, then check the results of an ifconfig -a before and after applying the MTU via GUI again and see what else (if anything) changes.
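Something along these lines would capture it (file names are just examples):

# Right after the reboot, before touching the GUI
ifconfig -a > /tmp/ifconfig.before

# ...re-apply the interface settings in the GUI, then:
ifconfig -a > /tmp/ifconfig.after

# Anything the GUI changed besides the MTU will show up here
diff -u /tmp/ifconfig.before /tmp/ifconfig.after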

PCIe bus limits aren't typically a problem for most users since they only use 1Gbps links, which can be easily handled by even a single PCIe 1.0 x1 lane, but when you start reaching for serious performance like this, it absolutely has an impact, as you found out! It's not just 1.0 vs 2.0, but also the difference between physical slot size and electrical slot configuration, as well as whether those lanes are connected to the CPU socket or to the chipset.
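Rough napkin math on why the slot matters so much (real-world numbers land a bit lower once you account for protocol overhead):

1GbE  = ~125 MB/s  -> fits comfortably in one PCIe 1.0 x1 lane (~250 MB/s)
10GbE = ~1250 MB/s -> wants roughly PCIe 1.0 x8 or PCIe 2.0 x4 (~2 GB/s);
                      a 1.0 x4 slot (~1 GB/s) is already a bottleneck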

What device are you copying to on the other end though that can sustain 1GB/s writes? RAID0 SSD? RAM drive?
 

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358
RchGrav said:
That was just the WD SE drives (which in my mind are just WD Blacks with time-limited error recovery enabled).
The SE drives have a URE rating of 1E15 vs the Black URE rating of 1E14. The RE drives go up to 1E16.
 

RchGrav

Dabbler
Joined
Feb 21, 2014
Messages
36
In my case, the SINGLE BIGGEST IMPROVEMENT to CIFS transfer speeds was not found on the FreeNAS box itself... but rather in the settings for the 10GbE card installed on my desktop (Intel x520-DA1).

[Image: 2wgb036.jpg]


I'm configured as a direct connection (SFP+ is an auto-crossover topology) between my desktop and the NAS using an SFP+ direct attach cable. 10GbE switches are HELLA expensive, and my main purpose was fast access from MY desktop... (Screw everyone else in my household... no, but seriously, no one else has 10 gig here. This is a HOME RIG, and I wanna copy my ISOs and videos quickly.)

The secondary 10GbE port on the controller installed in the FreeNAS box (Intel x520-DA2) is currently populated with a 1000BASE-T module.

[Image: 1538935.jpg - Avago ABCU-5710RZ]


(I have 5 of these extra; if anyone needs one, hit me up. They are tested on Intel SFP+ cards and work quite nicely for that second port on an x520-DA2.)
I figure if I ever do get a 10GbE switch I'll yank that little guy out and get another SFP+ direct attach cable. 10GbE to the home switch...
 

RchGrav

Dabbler
Joined
Feb 21, 2014
Messages
36
I'm planning to POSSIBLY use a few of the Intel DC S3500 240GBs (500MB/s read, 260MB/s write) as L2ARC and/or ZIL. I want to revisit the caveats on this in terms of how it could affect the reliability of the array, and also the reversibility of the implementation.
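The nice part is that both are easy to back out of if they don't help; something along these lines, with the pool and device names just placeholders:

# Add an L2ARC (cache) device and a dedicated log (SLOG) device to an existing pool
zpool add tank cache da10
zpool add tank log da11

# Both can be removed again without touching the data vdevs
zpool remove tank da10
zpool remove tank da11

Losing an L2ARC device is harmless (it only holds copies of data), so the reliability question is really about the dedicated log device.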
I couldn't resist plugging in 8 of these for a day in RAID-0 as an experiment... it achieved 2.1GB/s write and 3.5GB/s read. Interestingly, they didn't show any improvement in IOPS, even on the QD32 4K tests in CrystalDiskMark. While there are a ton more drives able to service requests, the controller itself was also working pretty hard. The IOPS on SSDs are already so much higher than even the fastest HDD, and if there was an improvement in IOPS to be seen, it's well above the queue depth of the benchmark. The controller was an LSI 9207-8i with temporary IR firmware. (This test was more about getting to know the 240GB Intel DC S3500 SSDs themselves...)


[Image: vo6f53.jpg]

Don't deploy SSDs like this; it was only a temporary test rig so they weren't flopping around getting scratched. Even though these drives are rated to 70 degrees C and SSDs are usually only warm to the touch, the drives in the middle would experience some serious heat over time. This brick felt a bit hot after some benching in an 8-drive stripe and was promptly taken apart after I had a bit of fun.
 

RchGrav

Dabbler
Joined
Feb 21, 2014
Messages
36
ser_rhaegar said:
The SE drives have a URE rating of 1E15 vs the Black URE rating of 1E14. The RE drives go up to 1E16.


That's good, so WD is specifying these drives to hit only 1/10 the errors of a WD Black. I still wonder how much these ratings have to do with the intended usage scenario, in combination with the firmware and the way the drive has been programmed to respond to bit errors.
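Just to put rough numbers on what that spec difference means (taking the rating at face value as a uniform rate, and using a 4TB drive as an example size):

Reading a full 4TB drive = 4 x 10^12 bytes x 8 = 3.2 x 10^13 bits
At 1 error per 10^14 bits (Black): ~0.32 expected unrecoverable read errors per full pass
At 1 error per 10^15 bits (SE):    ~0.03 expected unrecoverable read errors per full pass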

Regarding the RE drives... are you referring to the newer SAS edition of the RE drives, or the original SATA edition RE drives? It seems WD dropped the SATA RE, introduced the SE (which I assumed was the replacement for the old RE models), and reintroduced a higher-end RE as a SAS-only drive.

Never mind, it WAS the RE4 drives that became the SE... I've just always called them the WD RE drives since they were released...

The UBER (URE rating) on the RE4 is the same as on the SE today...
 
