10GbE Direct Connection Tuning - TCP Window Size


orionsbelt0

Cadet
Joined
Jun 21, 2016
Messages
6
So I got my server in yesterday and have been tweaking and playing with it since then, working on tests and configurations.

Real world, I am getting about 200MB/s upload to the server and 300MB/s download from the server, using a RAM disk on the client.

When I run iPerf in the standard configuration, these are my results. (Running "iperf -sD" on the server and "bin/iperf.exe -c 192.168.0.10 -P 1 -i 1 -p 5001 -f g -t 10" on the client.)
[240] local 192.168.0.11 port 60552 connected with 192.168.0.10 port 5001

[ ID] Interval Transfer Bandwidth
[240] 0.0- 1.0 sec 0.18 GBytes 1.58 Gbits/sec

HOWEVER

When I run with a 512k TCP window size on the server, I get full speed. (Running "iperf -sD -w 512k" on the server and "bin/iperf.exe -c 192.168.0.10 -P 1 -i 1 -p 5001 -f g -t 10" on the client.)

[244] local 192.168.0.11 port 60557 connected with 192.168.0.10 port 5001
[ ID] Interval Transfer Bandwidth
[244] 0.0- 1.0 sec 1.15 GBytes 9.89 Gbits/sec

With this knowledge, what tweaks do I need to make? I cannot seem to find how to change the TCP window size on FreeNAS, or even whether this is a good idea....

Thanks!!

Server Specs
4U Supermicro SMC#SSG-6048R-E1CR24L
X10DRH-iT
E5-2637 V4 (One for now)
64GB ECC RAM
6 x 6TB Toshiba Enterprise Grade Drives in RAIDZ2

Client Specs
i7-4790K @ 4GHz
GTX970
Intel X540 T2 NIC (Might be fake....)
32GB RAM
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
iperf runs by default with a small window size; this is an example of a tool dating from a time when that might possibly have been reasonable, but no longer is. You are expected to set reasonable parameters for iperf. Running iperf without proper parameters yields no knowledge, only misinformation.
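
For example, set an explicit window on both ends. The 512k below is only an example, taken from your own test above where it saturates the link:

On the server: iperf -s -w 512k
On the client: bin/iperf.exe -c 192.168.0.10 -w 512k -P 1 -i 1 -p 5001 -f g -t 10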

FreeNAS is already well-tuned to deliver good speeds and there is no reason for you to believe that the naive default window size requested by iperf has any relationship to what FreeNAS uses. In general, you want the system to use the existing autoscaling to automatically consume a reasonable amount of buffering based on the observed characteristics.
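
If you want to see what that autoscaling has to work with, the relevant FreeBSD sysctls can be read from the FreeNAS shell (names from the FreeBSD 9/10 era; verify them against your build):

sysctl net.inet.tcp.sendbuf_auto net.inet.tcp.recvbuf_auto
sysctl net.inet.tcp.sendbuf_max net.inet.tcp.recvbuf_max
sysctl kern.ipc.maxsockbuf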

If your hardware is somewhat slow, there may be some extra advantage to trading memory for buffering, as this can help the system queue up larger chunks of data and keep things moving. That doesn't really describe your system, and doing this robs you of some ARC space, which may have the side effect of making things slower. But you can try it, even on fairly fast hardware like yours. See

https://forums.freenas.org/index.ph...rdware-smb-cifs-bottleneck.42641/#post-277350

If that doesn't make things substantially faster, UNDO THE SETTINGS because they WILL rob you of memory that can be better used for other things.
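
For shape only, not for copying: the tunables in that post are plain sysctls entered under System -> Tunables, along these lines (the values here are illustrative, not the ones from the post):

kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216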
 

orionsbelt0

Cadet
Joined
Jun 21, 2016
Messages
6
Ha! It works!! I am getting about 700MB/s upload to the server, but still only about 300MB/s download from the server. I see the same pattern in iperf going the other way: 2.2GBytes/sec with the default window, about 6.5GBytes/sec when I increase the TCP window size to 512KBytes, and full wire speed when I raise it to 1024.

I tried a few tools that tweak the TCP registry settings, but none of them seem to work. I also read that you cannot tweak these values much because of the way window scaling works in Windows.
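
For reference, what I was checking was the auto-tuning state with the stock netsh commands (I am not certain the registry tools touch the same settings):

netsh interface tcp show global
netsh interface tcp set global autotuninglevel=normal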

Any thoughts? Any ideas on what I need to tweak on the Windows side?

Thanks!!!!!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Be aware that if your system is having to retrieve data from the pool, the 300MB/sec could actually be reasonable. It isn't unusual for ZFS to write faster than it reads, because writes are cached in memory and then pushed out asynchronously as a monster transaction group, while reads have a tendency to be synchronous when the data isn't already in ARC. Hard to say.
 

orionsbelt0

Cadet
Joined
Jun 21, 2016
Messages
6
Oh yes, 300MB/s is quite fast for pool retrieval, and I am happy with the speeds I have achieved. I did get sustained write speeds of 500-600MB/s for a 250GB transfer from my client SSD to the pool, far larger than the server's RAM.

However, ideally I am trying to identify my bottleneck, both to know where it is and to get optimal performance. Also, I was doing read tests of the same 10GB of large video files for over an hour, so I suspect they would be in the ARC; no way to know for sure, though.

Based on my iperf testing after tuning the settings you listed on the FreeNAS box:
Client >>> to >>> Server (FreeNAS): 6.5 GBytes/sec, iperf default settings
Client <<< from <<< Server (FreeNAS): 2.2 GBytes/sec, iperf default settings
Client <<< from <<< Server (FreeNAS): 9.8 GBytes/sec, with the TCP window size on the client (acting as iperf server) raised to 1024
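
(The reverse test was just the roles swapped, roughly "bin/iperf.exe -s -w 1024k" on the client and "iperf -c 192.168.0.11 -P 1 -i 1 -p 5001 -f g -t 10" on the FreeNAS box, assuming the 1024 above means KBytes.)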

This leads me to believe that the network connection between the machines is already optimized for "upload" to the server, but that some more tweaks could be done on the receiving side of transfers from the server.

My goal ideally is to not have the network be a bottleneck for any future upgrades I want to make.

Thanks!!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Things are in the ARC when they're reading at 10x or more the speed you'd expect the physical hardware to be capable of. You can try to coerce a file into ARC and verify from the command line that it's there.
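
Something like this, where the path is whatever file you've been testing with:

dd if=/mnt/tank/yourfile of=/dev/null bs=1m
dd if=/mnt/tank/yourfile of=/dev/null bs=1m

The first pass pulls from the pool; if the file fit in ARC, the second pass should run at memory speed.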
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
What network fabric are you using to get 6-10 "GBytes/sec"? 100GbE?

Or maybe you mean 6.5 Gigabits/second ;)

You can test your pool speed without the network in the path by using dd to transfer to /dev/null locally, over an ssh session.
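
Roughly, from the client (the pool path is just an example):

ssh root@192.168.0.10 "dd if=/mnt/tank/testfile of=/dev/null bs=1m"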
 