BUILD Home ECC build

Status
Not open for further replies.

KMR

Contributor
Joined
Dec 3, 2012
Messages
199
Currently I have all of my fans flowing in one direction. I have heard of people placing the fans on one side blowing in one direction and the fans on the other side blowing in the opposite direction. I may buy four 5.25" hard drive coolers to place between my drives, but has anyone had any success with alternating flow directions?

Four hard drive coolers in my case would be arranged something like this:
[HDD1] [HDD2]
[FAN1] [FAN2]
[HDD3] [HDD4]
[FAN3] [FAN4]
[HDD5] [HDD6]
The only potential issue I see here is that HDDs 3 & 4 will receive twice the cooling of the other four because they have fans both above and below them.
 

KMR

Contributor
Joined
Dec 3, 2012
Messages
199
So I enabled Auto-tune because my performance had dropped to about 40 - 50 MB/s from the 80 - 90 MB/s the old build had. Now when I start a large file transfer it will start at 100 MB/s or so and then drop off to 40 - 50 MB/s, where it stays. Are there any suggestions to improve performance?
 

Stephens

Patron
Joined
Jun 19, 2012
Messages
496
There are threads on the forum about how to go about optimizing your system. Search for some common terms that might describe your problem. Some things to consider as sources of the problem:

- Pool slow? Use dd to test raw pool throughput.
- Networking issues? Use iperf on the NAS and iperf/jperf on the client to test. The problem could be at any point along the data path (rough examples below).
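
For what it's worth, here is a rough sketch of what those tests look like; the dataset path and address are just placeholders, so adjust them for your own setup:

Code:
# pool throughput, run from the FreeNAS shell
# (zeros compress very well, so disable compression on the test dataset first)
dd if=/dev/zero of=/mnt/volume/testfile bs=1m count=10000    # sequential write
dd if=/mnt/volume/testfile of=/dev/null bs=1m                # sequential read

# raw network throughput
iperf -s                        # on the NAS
iperf -c 192.168.1.100 -t 30    # on the client, pointed at the NAS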

To really address your issues, you have to test things, not just say "it was working before". There may be some tuning you can do, but tuning without knowing the problem is a crapshoot.
 

KMR

Contributor
Joined
Dec 3, 2012
Messages
199
Iperf results:
53 - 54 MB/s.

DD results:
[root@freenas] /mnt/volume# dd if=/dev/zero of=testfile bs=1024 count=50000
50000+0 records in
50000+0 records out
51200000 bytes transferred in 0.297915 secs (171861112 bytes/sec)
[root@freenas] /mnt/volume# dd if=testfile of=/dev/zero bs=1024 count=50000
50000+0 records in
50000+0 records out
51200000 bytes transferred in 0.122954 secs (416415455 bytes/sec)

Time for some more reading.

Update:
This motherboard has two Intel NICs built in, so I swapped the cable to the other interface but got the same results.
I also swapped the cables to make sure it wasn't something simple, and I removed the switch from the equation by running a cable directly from the server to the desktop. Same results. FWIW the switch is a Netgear GS108E.

Update 2:
After adjusting the TCP window size in jperf from the iperf default (64 kbyte?) to 1024, the throughput jumped right up to 112 MB/s. Is there a variable I can adjust for this?

Also, the current sysctls are:
kern.ipc.maxsockbuf = 2097152
net.inet.tcp.delayed_ack = 0
net.inet.tcp.recvbuf_max = 2097152
net.inet.tcp.sendbuf_max = 2097152
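
(Noting for my own reference: these can be checked or changed temporarily from the shell before committing anything in the GUI; the value below is just an example.)

Code:
sysctl kern.ipc.maxsockbuf net.inet.tcp.recvbuf_max net.inet.tcp.sendbuf_max    # show current values
sysctl net.inet.tcp.recvbuf_max=4194304    # try a value temporarily; reverts on reboot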
 

Stephens

Patron
Joined
Jun 19, 2012
Messages
496
After adjusting the TCP window size in jperf from the iperf default (64 kbyte?) to 1024, the throughput jumped right up to 112 MB/s. Is there a variable I can adjust for this?

A similar thing happened for me when I first tried jperf, so I had to adjust the TCP window size from the default 56k to (I believe it was) 64k. Then I could max out my connection. That led me to customizing my CIFS buffers, similar to what you've done above.

CIFS auxiliary parameters:
Code:
socket options = TCP_NODELAY IPTOS_LOWDELAY SO_KEEPALIVE SO_RCVBUF=131072 SO_SNDBUF=131072


I've been told some of these are defaults now and don't need to be specified, but I'm not using the latest versions of FreeNAS so they're necessary for that particular machine. That does NOT mean you should be using my options. First we just needed to make sure your network could support gigabit line speed. You've shown it can under the right circumstances. Now you have to dig a little deeper. This seems to have strayed outside the realm of "ECC build" though.
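
If you do experiment with auxiliary parameters, testparm (it ships with Samba, so it should already be on the FreeNAS box) will show what Samba is actually running with, defaults included. Something like:

Code:
testparm -sv | grep -i "socket options"

That at least tells you whether a value you set is being picked up or ignored.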
 

KMR

Contributor
Joined
Dec 3, 2012
Messages
199
Maybe a mod can chime in on the last point; I was hoping to document the trials of my build, but if I should start another thread for the separate issues then I will.

I will test those options out tomorrow to see if they help. I am assuming this is just a setting that needs to be tweaked, since there's a decent switch and Intel NICs on both ends.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Here's your mod chiming in (for what I'm worth)...

Those settings from Stephens were good settings back in 8.0.4, but with the Samba updates in FreeNAS 8.2 the defaults have changed. Those settings are now the defaults except for the buffer size, which I think defaults to 256k now, so if you used those settings you might actually be limiting yourself below the new defaults.

I know increasing the TCP window size can matter (but at the expense of CPU resources), but I'm not sure how you change it exactly. The TCP window size should be a system-wide parameter (if I remember correctly), so it won't go into Samba. For most people, changing the TCP window size is like trying to "overclock" your network settings. It isn't a real overclock because you still can't exceed 1Gb/sec, but it can help alleviate some bottlenecks. There are diminishing returns, so setting it absurdly high will use a lot of CPU resources but provide exponentially decreasing benefits.
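
For reference, the knobs people usually point at for this on FreeBSD are sysctls, not Samba settings. Treat the values below purely as a sketch, not as recommendations:

Code:
net.inet.tcp.rfc1323=1            # window scaling/timestamps, needed for windows over 64k
net.inet.tcp.sendspace=65536      # default per-socket send buffer
net.inet.tcp.recvspace=65536      # default per-socket receive buffer
net.inet.tcp.sendbuf_max=2097152  # upper limit for send-buffer auto-sizing
net.inet.tcp.recvbuf_max=2097152  # upper limit for receive-buffer auto-sizing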

The only "tweaks" I have done was autotune was turned on(then reboot). It will add 3 sysctls that can help network performance in certain circumstances and with certain cards. I always recommend autotune be turned on if you have >12GB of RAM, which you do.

Based on your iperf results I'd bet you are using a Realtek NIC. Intel NICs can give you a significant performance increase and are cheap. If you want to spend enough time on tweaking (don't blindly start adding stuff... read up and understand what the settings do and how they can help and/or hurt you) you should be able to get at least 750Mbit/sec from your NIC. I can't provide any guidance with this because I found it easier to just buy a $25 Intel NIC and be done with the problem. If you search the forum, whenever people say they have network performance issues the default answer is "buy an Intel NIC." It's the easiest answer to the problem unless you have a crappy CPU or insufficient RAM. You aren't in that category, since you have a more powerful system than I do and I can saturate both of my Intel Gb ports simultaneously!
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
Here's your mod chiming in (for what I'm worth)...

I think he was referring to splitting the thread since it's kind of gone off topic. It might be better to start a new thread, still post your specs in the new thread, and then add a reference to this thread. You could also post a link at the end of this thread pointing to the new one.
 

KMR

Contributor
Joined
Dec 3, 2012
Messages
199
Based on your iperf results I'd bet you are using a Realtek NIC. Intel NICs can give you a significant performance increase and are cheap. If you want to spend enough time on tweaking (don't blindly start adding stuff... read up and understand what the settings do and how they can help and/or hurt you) you should be able to get at least 750Mbit/sec from your NIC. I can't provide any guidance with this because I found it easier to just buy a $25 Intel NIC and be done with the problem. If you search the forum, whenever people say they have network performance issues the default answer is "buy an Intel NIC." It's the easiest answer to the problem unless you have a crappy CPU or insufficient RAM. You aren't in that category, since you have a more powerful system than I do and I can saturate both of my Intel Gb ports simultaneously!

The Supermicro board I am using comes with Intel® 82579LM and 82574L NICs. The desktop PC I am using is based on an Intel board, so it also uses an Intel NIC. The iperf test with the default TCP window size yields 467Mb/s, and with a 1024 window size it yields 941Mb/s. I have enabled Autotune, which put most of those sysctl settings in place. I will start a new thread and re-organize the info in this thread.
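
In case anyone wants to repeat the comparison with plain iperf instead of jperf, the window is set on the client side with -w; the address here is just a placeholder:

Code:
iperf -c 192.168.1.100 -w 64k     # roughly the default window
iperf -c 192.168.1.100 -w 1024k   # the larger window used above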

Does anyone know what the TCP window size setting is called? net.inet.tcp.***X?
 
