Network slowdown


HAL 9000

Dabbler
Joined
Jan 27, 2012
Messages
42
I'm trying to tune my small NAS based on an Atom D525 mobo, 4 GB RAM, 4x2TB RAID-Z1 disks.
Network speed measured with iperf or dd|nc is over 100MB/s in both directions; ZFS pool read is 260MB/s, write 160MB/s.
Everything seems perfect, but when I try to simultaneously read from disk and send over the LAN, network speed drops to ~25MB/s.
At first I thought it was a PCIe problem, but it is not: when iperf or dd|nc is sending data over the network, even a simple dd if=/dev/zero of=/dev/null slows the LAN to 25% of its speed.
In both cases (disk read or zero->null copy) the CPU is about 55% busy, with an interrupt rate shown by top of about 7-8%.
When the NAS is receiving data from the network, running dd from disk or /dev/zero does not cause such a big slowdown: LAN speed drops from 100MB/s to 85MB/s.
In real use this phenomenon shows up as much faster write than read over NFS or SMB (70MB/s write, 30MB/s read).
Tests were made with FreeNAS 8.0.4-BETA1 and FreeNAS 0.7.5.9496 (which is even worse than v8).
Anyone have an idea what's going on? Help!
 

xbmcg

Explorer
Joined
Feb 6, 2012
Messages
79
Write is much faster because the NAS reads everything as fast as it can into RAM cache and writes it out later. If you want to check the real throughput, you must send traffic that exceeds your 4 GB RAM by some factor (e.g. send some files larger than 12 GB and watch the transfer rate drop towards the end of the transfer). Windows 7 is even much better at CIFS than the Unix Samba implementation in regards to optimization for speed. I think you can also force write-through to eliminate the write caches, if you want, by setting some parameters for the file system and on the disk controller drivers...
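For example, something like this on the client rules the RAM cache out (just a sketch; /mnt/nas stands for wherever the share is mounted):
Code:
# write ~12 GB to the mounted share (3x the 4 GB NAS RAM)
dd if=/dev/zero of=/mnt/nas/test.dat bs=1M count=12288
# read it back and compare the rates
dd if=/mnt/nas/test.dat of=/dev/null bs=1M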

When reading, the NAS has to fetch the data from disk. Even though the striped RAID-Z eliminates head-seek waits by reading the disks simultaneously and has some optimization (read-ahead cache), it has to wait for the hardware to be ready before it can send the data; this slows things down a little, so the cache does not count in the first place. I think subsequent reads of the same file will show significantly improved performance if the data is still in the caches...
 

HAL 9000

Dabbler
Joined
Jan 27, 2012
Messages
42
Write is much faster because the NAS reads everything as fast as it can into RAM cache and writes it out later.

It is not related to cache. The same slowdown happens on a pure TCP transfer with no disk write (iperf or dd|nc).
When testing disk performance I used files several times bigger than RAM.

If you want to check the real throughput, you must send traffic that exceeds your 4 GB RAM by some factor

This is exactly what I did.

Windows 7 is even much better at CIFS than the Unix Samba implementation in regards to optimization for speed.

The problem occurs even when benchmarking a bare TCP transfer. The CIFS/NFS speed drop is only a consequence of this.

I think you can also force write-through to eliminate the write caches, if you want, by setting some parameters for the file system and on the disk controller drivers...

The slowdown I'm seeing is NOT related to the disks. It also happens when transferring data from /dev/zero to /dev/null over the LAN.
Seems like some hardware/OS limitation or a NIC driver problem (it is a Realtek RTL8111E).
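In case it helps anyone to reproduce: on FreeBSD/FreeNAS something like this should show whether the NIC or its interrupt load is the problem (a sketch; re0 is assumed to be the interface name the Realtek re(4) driver creates):
Code:
# per-device interrupt counts and rates
vmstat -i
# live per-second traffic and error counters on the NIC
netstat -w 1 -I re0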
 

sjieke

Contributor
Joined
Jun 7, 2011
Messages
125
Hi,

I also have an Atom D525 mobo with the RTL8111E NIC. I don't see the slowdowns you mention. I get read speeds up to 85MB/s and write speeds around 70MB/s (90+MB/s write if the data is smaller than the RAM).
iperf gives me results around 800 Mbits/second in both directions if I use the -w option to increase the TCP window size (socket buffer size).
If I don't increase the TCP window size, speed slows down to 300-400 Mbits/second in one direction (can't remember which one).
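Something like this is what I mean (the 256k value is just an example to experiment with):
Code:
Server: iperf -s -w 256k
Client: iperf -c <server_IP> -w 256k -t 30 -i 1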

Maybe this gives you some inspiration to look further.
The speeds mentioned are from memory, but if you want I can run some tests and post the results here for your reference.
 

HAL 9000

Dabbler
Joined
Jan 27, 2012
Messages
42
Maybe this gives you some inspiration to look further.
The speeds mentioned are from memory, but if you want I can run some tests and post the results here for your reference.

I have almost the same config as yours - same mobo and even case :smile:

Could you please run this test:

YourPC: iperf -s -w <size>
FreeNAS: iperf -c <YourPC_IP> -w <size> -t 100 -i 1

and while the above is running, execute this simultaneously in another shell:

FreeNAS: dd if=/dev/zero of=/dev/null bs=1024k count=10000

and see how the transfer speed behaves? (In my case it drops from 100MB/s to 25MB/s.)

Which version of FreeNAS are you using?
Have you made any changes in BIOS settings or FreeNAS tuning?

Thanks!
 

sjieke

Contributor
Joined
Jun 7, 2011
Messages
125
I will run the tests when I get home (in about 4 hours).

FreeNAS: dd if=/dev/zero of=/dev/null bs=1024k count=10000
I will do the above, but besides stressing the CPU I am wondering what the reason for this extra command is. All it will do is copy zeros (from /dev/zero) to nothing (/dev/null).

I'm currently running FreeNAS 8.0.3 p1, multimedia release.
No changes in BIOS.
The only tuning I did was enabling prefetch in loader.conf and enabling SMB2 in CIFS.
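Roughly like this, from memory, so treat it as a sketch (the SMB2 line goes into the CIFS auxiliary parameters and needs a Samba version that supports SMB2):
Code:
# /boot/loader.conf: make sure ZFS prefetch is not disabled
vfs.zfs.prefetch_disable="0"

# CIFS auxiliary parameters (smb.conf)
max protocol = SMB2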
 

HAL 9000

Dabbler
Joined
Jan 27, 2012
Messages
42
I will do the above, but besides stressing the CPU I am wondering what the reason for this extra command is.

Exactly as you said: a small CPU load (25% on my system), but it kills network throughput...
The same happens when copying from disk to /dev/null or when reading files over the LAN.
 

sjieke

Contributor
Joined
Jun 7, 2011
Messages
125
On my PC:
Code:
joachim@Speedy ~ $ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.0.100 port 5001 connected with 192.168.0.2 port 63522
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-22.4 sec  2.04 GBytes   783 Mbits/sec
^Cjoachim@Speedy ~ $ 

On the FreeNAS box, session 1 (using ssh):
Code:
[joachim@crashbox ~]$ iperf -c 192.168.0.100 -t 100 -i 1
------------------------------------------------------------
Client connecting to 192.168.0.100, TCP port 5001
TCP window size: 32.5 KByte (default)
------------------------------------------------------------
[  3] local 192.168.0.2 port 63522 connected with 192.168.0.100 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec  93.8 MBytes   786 Mbits/sec
[  3]  1.0- 2.0 sec  94.6 MBytes   794 Mbits/sec
[  3]  2.0- 3.0 sec  95.0 MBytes   797 Mbits/sec
[  3]  3.0- 4.0 sec  95.0 MBytes   797 Mbits/sec
[  3]  4.0- 5.0 sec  95.0 MBytes   797 Mbits/sec
[  3]  5.0- 6.0 sec  94.6 MBytes   794 Mbits/sec
[  3]  6.0- 7.0 sec  92.5 MBytes   776 Mbits/sec
[  3]  7.0- 8.0 sec  92.9 MBytes   779 Mbits/sec
[  3]  8.0- 9.0 sec  92.2 MBytes   774 Mbits/sec
[  3]  9.0-10.0 sec  93.2 MBytes   782 Mbits/sec
[  3] 10.0-11.0 sec  93.1 MBytes   781 Mbits/sec
[  3] 11.0-12.0 sec  87.9 MBytes   737 Mbits/sec
[  3] 12.0-13.0 sec  83.4 MBytes   699 Mbits/sec
[  3] 13.0-14.0 sec  91.0 MBytes   763 Mbits/sec
[  3] 14.0-15.0 sec  94.9 MBytes   796 Mbits/sec
[  3] 15.0-16.0 sec  95.0 MBytes   797 Mbits/sec
[  3] 16.0-17.0 sec  94.8 MBytes   795 Mbits/sec
[  3] 17.0-18.0 sec  95.0 MBytes   797 Mbits/sec
[  3] 18.0-19.0 sec  95.0 MBytes   797 Mbits/sec
[  3] 19.0-20.0 sec  95.0 MBytes   797 Mbits/sec
[  3] 20.0-21.0 sec  95.0 MBytes   797 Mbits/sec
[  3] 21.0-22.0 sec  95.0 MBytes   797 Mbits/sec
^C[  3]  0.0-22.4 sec  2.04 GBytes   783 Mbits/sec
[joachim@crashbox ~]$ 

On the FreeNAS box, session 2 (using ssh):
Code:
[joachim@crashbox ~]$ dd if=/dev/zero of=/dev/null bs=1024k count=10000
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 6.700370 secs (1564952366 bytes/sec)
[joachim@crashbox ~]$ 


You can see a small drop, from 95.0 MBytes/sec to about 83 MBytes/sec, during the dd command.

Some more tests (mounted using autofs and CIFS):
Code:
joachim@Speedy ~ $ dd if=/dev/zero of=/mnt/crashbox/tank_cifs/Home/joachim/temp.dat bs=2M count=6k
6144+0 records in
6144+0 records out
12884901888 bytes (13 GB) copied, 168.685 s, 76.4 MB/s
joachim@Speedy ~ $ dd of=/dev/null if=/mnt/crashbox/tank_cifs/Home/joachim/temp.dat bs=2M
6144+0 records in
6144+0 records out
12884901888 bytes (13 GB) copied, 160.853 s, 80.1 MB/s
joachim@Speedy ~ $ 


So 76.4MB/s write and 80.1MB/s read.

Note that my network consists of Cat5e cables.
My client is a Sabayon Linux KDE install (Sabayon is derived from Gentoo).
 

HAL 9000

Dabbler
Joined
Jan 27, 2012
Messages
42
You can see a small drop, from 95.0 MBytes/sec to about 83 MBytes/sec, during the dd command.

Thanks. Lucky you...
During the same test my LAN speed drops from over 100MB/s to a very unstable 10-40 MB/s.
Like this:
Code:
[  3] 32.0-33.0 sec   110 MBytes   919 Mbits/sec
[  3] 33.0-34.0 sec   109 MBytes   918 Mbits/sec
[  3] 34.0-35.0 sec   110 MBytes   919 Mbits/sec
[  3] 35.0-36.0 sec   110 MBytes   919 Mbits/sec
[  3] 36.0-37.0 sec   110 MBytes   922 Mbits/sec
[  3] 37.0-38.0 sec   109 MBytes   911 Mbits/sec
[  3] 38.0-39.0 sec  40.9 MBytes   343 Mbits/sec
[  3] 39.0-40.0 sec  16.5 MBytes   138 Mbits/sec
[  3] 40.0-41.0 sec  19.2 MBytes   161 Mbits/sec
[  3] 41.0-42.0 sec  22.4 MBytes   188 Mbits/sec
[  3] 42.0-43.0 sec  18.1 MBytes   152 Mbits/sec
[  3] 43.0-44.0 sec  10.9 MBytes  91.2 Mbits/sec


WTF?!?
 

sjieke

Contributor
Joined
Jun 7, 2011
Messages
125
That is indeed very strange. How fast does the dd command run (on the FreeNAS box) during that test? Mine finishes in 6-7 seconds.

I must say that iperf uses a lot of CPU processing power on my machines...

Do dd commands from a client to the FreeNAS box also drop in speed (even if it is only one connection)? Maybe it's just something iperf-related...
You could run the same commands as I have and see if you get the same speeds...
 

HAL 9000

Dabbler
Joined
Jan 27, 2012
Messages
42
That is indeed very strange. How fast does the dd command run (on the FreeNAS box) during that test? Mine finishes in 6-7 seconds.

A bit faster: 5.404507 secs

Do dd commands from a client to the FreeNAS box also drop in speed (even if it is only one connection)? Maybe it's just something iperf-related...

When transferring data from Ubuntu to FreeNAS using:
Code:
FreeNAS: nc -l 0 2000 >/dev/null
Ubuntu:  dd if=/dev/zero bs=1024k count=10000 | nc <FreeNAS_IP> 2000

the speed drop is like yours (from 100MB/s to 85MB/s).

When transferring from FreeNAS to Ubuntu, the speed drops to 25MB/s on average (lowest 10MB/s).
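(For the reverse direction I simply mirror the commands; depending on the netcat variant the listener may need -p:)
Code:
Ubuntu:  nc -l 2000 >/dev/null
FreeNAS: dd if=/dev/zero bs=1024k count=10000 | nc <Ubuntu_IP> 2000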

So this is not iperf related.
 

HAL 9000

Dabbler
Joined
Jan 27, 2012
Messages
42
I've made some tests using iperf over UDP (iperf -b 1000m).
No slowdown...
It seems the problem is TCP-related.
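For reference, the UDP test looked like this (a sketch; iperf reports packet loss and jitter on the server side):
Code:
Ubuntu:  iperf -s -u
FreeNAS: iperf -c <Ubuntu_IP> -u -b 1000m -t 30 -i 1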
 

sjieke

Contributor
Joined
Jun 7, 2011
Messages
125
I'm still suspecting the TCP window size used by client and/or server.
Could you post the results of the following iperf tests (only the iperf tests, no stressing the CPU or anything)?

Test 1:
On FreeNAS box: iperf -s
On client: iperf -c <freenas IP> -r -i 1

Test 2:
On client: iperf -s
On FreeNAS box: iperf -c <client IP> -r -i 1

Post the results including the iperf header info with the TCP window size used.
For your information, the '-r' option does a bi-directional bandwidth measurement.
 

HAL 9000

Dabbler
Joined
Jan 27, 2012
Messages
42
I'm still suspecting the TCP window size used by client and/or server.
Could you post the results of the following iperf tests (only the iperf tests, no stressing the CPU or anything)?

I have no access to my NAS box at the moment. Just from memory:
Ubuntu->NAS: 900 Mbit/s
NAS->Ubuntu: 950 Mbit/s
The TCP window on Ubuntu is something like 16k for write and 80k for read.
On FreeNAS the window size displayed by iperf is 32k/64k AFAIR.

I will check the exact numbers later. I also suspect the TCP window size (or autotuning) as the culprit,
especially since the UDP tests showed no slowdown under CPU load.
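When I'm back at the box I will also dump the current buffer settings; something like this should show whether autotuning is active (FreeBSD sysctls on the NAS, Linux ones on Ubuntu):
Code:
FreeNAS: sysctl net.inet.tcp.sendspace net.inet.tcp.recvspace
FreeNAS: sysctl net.inet.tcp.sendbuf_auto net.inet.tcp.recvbuf_auto
Ubuntu:  sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem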
 

HAL 9000

Dabbler
Joined
Jan 27, 2012
Messages
42
I'm still suspecting the TCP window size used by client and/or server.
Could you post the results of the following iperf tests (only the iperf tests, no stressing the CPU or anything)?

You can call yourself "sjieke the TCP detective" ;-)

My iperf results using the default TCP window size (192.168.1.2 is FreeNAS, 192.168.1.6 is Ubuntu), without CPU stress:

Code:
Ubuntu: iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------

FreeNAS: iperf -c 192.168.1.6 -i 1
------------------------------------------------------------
Client connecting to 192.168.1.6, TCP port 5001
TCP window size: 36.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.2 port 20698 connected with 192.168.1.6 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec   108 MBytes   906 Mbits/sec
[  3]  1.0- 2.0 sec   108 MBytes   906 Mbits/sec
[  3]  2.0- 3.0 sec   109 MBytes   915 Mbits/sec
[  3]  3.0- 4.0 sec   109 MBytes   916 Mbits/sec
[  3]  4.0- 5.0 sec   109 MBytes   916 Mbits/sec
[  3]  5.0- 6.0 sec   110 MBytes   921 Mbits/sec
[  3]  6.0- 7.0 sec   108 MBytes   906 Mbits/sec
[  3]  7.0- 8.0 sec   108 MBytes   905 Mbits/sec
[  3]  8.0- 9.0 sec   109 MBytes   912 Mbits/sec
[  3]  9.0-10.0 sec   107 MBytes   901 Mbits/sec
[  3]  0.0-10.0 sec  1.06 GBytes   911 Mbits/sec


In the opposite direction:

Code:
FreeNAS: iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------

Ubuntu: iperf -c 192.168.1.2 -i 1
------------------------------------------------------------
Client connecting to 192.168.1.2, TCP port 5001
TCP window size: 25.4 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.6 port 39464 connected with 192.168.1.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec   118 MBytes   989 Mbits/sec
[  3]  1.0- 2.0 sec   117 MBytes   984 Mbits/sec
[  3]  2.0- 3.0 sec   117 MBytes   979 Mbits/sec
[  3]  3.0- 4.0 sec   117 MBytes   980 Mbits/sec
[  3]  4.0- 5.0 sec   118 MBytes   988 Mbits/sec
[  3]  5.0- 6.0 sec   118 MBytes   986 Mbits/sec
[  3]  6.0- 7.0 sec   118 MBytes   988 Mbits/sec
[  3]  7.0- 8.0 sec   118 MBytes   988 Mbits/sec
[  3]  8.0- 9.0 sec   117 MBytes   981 Mbits/sec
[  3]  9.0-10.0 sec   118 MBytes   988 Mbits/sec
[  3]  0.0-10.0 sec  1.15 GBytes   985 Mbits/sec


And with the stupid dd if=/dev/zero of=/dev/null running on FreeNAS:

Code:
Ubuntu: iperf -c 192.168.1.2 -i 1
------------------------------------------------------------
Client connecting to 192.168.1.2, TCP port 5001
TCP window size: 25.4 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.6 port 39468 connected with 192.168.1.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec  91.5 MBytes   768 Mbits/sec
[  3]  1.0- 2.0 sec  90.1 MBytes   756 Mbits/sec
[  3]  2.0- 3.0 sec  90.1 MBytes   756 Mbits/sec
[  3]  3.0- 4.0 sec  89.9 MBytes   754 Mbits/sec
[  3]  4.0- 5.0 sec  90.0 MBytes   755 Mbits/sec
[  3]  5.0- 6.0 sec  90.0 MBytes   755 Mbits/sec
[  3]  6.0- 7.0 sec  93.4 MBytes   783 Mbits/sec
[  3]  7.0- 8.0 sec  89.5 MBytes   751 Mbits/sec
[  3]  8.0- 9.0 sec  89.8 MBytes   753 Mbits/sec
[  3]  9.0-10.0 sec  81.4 MBytes   683 Mbits/sec
[  3]  0.0-10.0 sec   896 MBytes   751 Mbits/sec

FreeNAS: iperf -c 192.168.1.6 -i 1
------------------------------------------------------------
Client connecting to 192.168.1.6, TCP port 5001
TCP window size: 36.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.2 port 27886 connected with 192.168.1.6 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec  18.2 MBytes   153 Mbits/sec
[  3]  1.0- 2.0 sec  18.4 MBytes   154 Mbits/sec
[  3]  2.0- 3.0 sec  16.4 MBytes   137 Mbits/sec
[  3]  3.0- 4.0 sec  12.2 MBytes   103 Mbits/sec
[  3]  4.0- 5.0 sec  13.5 MBytes   113 Mbits/sec
[  3]  5.0- 6.0 sec  30.6 MBytes   257 Mbits/sec
[  3]  6.0- 7.0 sec  16.5 MBytes   138 Mbits/sec
[  3]  7.0- 8.0 sec  32.6 MBytes   274 Mbits/sec
[  3]  8.0- 9.0 sec  27.1 MBytes   228 Mbits/sec
[  3]  9.0-10.0 sec  19.2 MBytes   161 Mbits/sec
[  3]  0.0-10.0 sec   205 MBytes   172 Mbits/sec


A little CPU load caused a small slowdown in the Ubuntu->FreeNAS transfer but killed the FreeNAS->Ubuntu speed.

And after setting a 64k window size on FreeNAS (Ubuntu still on defaults):

Code:
Ubuntu: iperf -c 192.168.1.2 -i 1
------------------------------------------------------------
Client connecting to 192.168.1.2, TCP port 5001
TCP window size: 25.4 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.6 port 39575 connected with 192.168.1.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec  94.2 MBytes   791 Mbits/sec
[  3]  1.0- 2.0 sec  89.6 MBytes   752 Mbits/sec
[  3]  2.0- 3.0 sec  89.2 MBytes   749 Mbits/sec
[  3]  3.0- 4.0 sec  89.5 MBytes   751 Mbits/sec
[  3]  4.0- 5.0 sec  90.6 MBytes   760 Mbits/sec
[  3]  5.0- 6.0 sec  96.6 MBytes   811 Mbits/sec
[  3]  6.0- 7.0 sec  89.6 MBytes   752 Mbits/sec
[  3]  7.0- 8.0 sec  89.0 MBytes   747 Mbits/sec
[  3]  8.0- 9.0 sec  89.5 MBytes   751 Mbits/sec
[  3]  9.0-10.0 sec  89.5 MBytes   751 Mbits/sec
[  3]  0.0-10.0 sec   908 MBytes   761 Mbits/sec

FreeNAS: iperf -c 192.168.1.6 -i 1 -w 64k
------------------------------------------------------------
Client connecting to 192.168.1.6, TCP port 5001
TCP window size: 66.0 KByte (WARNING: requested 64.0 KByte)
------------------------------------------------------------
[  3] local 192.168.1.2 port 58736 connected with 192.168.1.6 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec   105 MBytes   881 Mbits/sec
[  3]  1.0- 2.0 sec   108 MBytes   908 Mbits/sec
[  3]  2.0- 3.0 sec  76.6 MBytes   643 Mbits/sec
[  3]  3.0- 4.0 sec  63.8 MBytes   535 Mbits/sec
[  3]  4.0- 5.0 sec   108 MBytes   902 Mbits/sec
[  3]  5.0- 6.0 sec   106 MBytes   893 Mbits/sec
[  3]  6.0- 7.0 sec   108 MBytes   906 Mbits/sec
[  3]  7.0- 8.0 sec  60.1 MBytes   504 Mbits/sec
[  3]  8.0- 9.0 sec  83.4 MBytes   699 Mbits/sec
[  3]  9.0-10.0 sec  69.0 MBytes   579 Mbits/sec
[  3]  0.0-10.0 sec   888 MBytes   745 Mbits/sec


As you can see there is some throughput fluctuation, but on average it is over 4x better :smile:

Changing the TCP window size on the Ubuntu box does not make any difference to LAN speed.
Setting the TCP window size on FreeNAS smaller or bigger than 64k does not improve the results.
64k seems to be the optimal TCP window in these tests, although the TCP window calculated theoretically (based on BDP) should be much smaller.
Probably the TCP window size interacts not only with LAN bandwidth and latency but with CPU load as well.
Maybe I should turn off TCP window autotuning and set 64k permanently?
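If I go that route, I guess it would look something like this (a sketch; on FreeNAS these would normally be added as sysctls in the GUI so they survive a reboot):
Code:
# disable TCP send/receive buffer autotuning
net.inet.tcp.sendbuf_auto=0
net.inet.tcp.recvbuf_auto=0
# pin the default socket buffers at 64k
net.inet.tcp.sendspace=65536
net.inet.tcp.recvspace=65536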

Thanks for your help!!!
 

sjieke

Contributor
Joined
Jun 7, 2011
Messages
125
I won't have time until tomorrow, but I will post some of my results. From memory, I also had bad results in one direction without specifying the TCP window size.
I then also played with TCP settings and such. I managed to get great iperf results, but actual transfers got slower. So I removed all my 'TCP tuning' and kept the defaults. I then tested real transfers using NFS and CIFS, with the results you can see in one of my previous posts.
I think the transfer protocols may have their own way of influencing TCP window sizes, which could explain the results I get.

So I would use iperf (if needed with a specified window size) to test whether you have network problems like a bad cable, a bad switch, a bad NIC,...
and some simple dd commands on mounted shares to test the throughput. And in the meanwhile stress the CPU if you want to...

To mount my shares (both NFS and CIFS for testing) I use 'autofs'. I do set some extra parameters during the mount related to the transfer sizes:
* rsize: default network read size / maximum number of bytes in each network READ request
* wsize: default network write size / maximum number of bytes per network WRITE request

I can't remember the actual settings off the top of my head, but maybe you can play with those.
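As an example, a plain NFS mount with those parameters would look something like this on the client (the values and the server path are only illustrative starting points):
Code:
mount -t nfs -o rsize=65536,wsize=65536,tcp crashbox:/mnt/tank /mnt/nas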
 

HAL 9000

Dabbler
Joined
Jan 27, 2012
Messages
42
I then also played with TCP settings and such. I managed to get great iperf results, but actual transfers got slower. So I removed all my 'TCP tuning' and kept the defaults. I then tested real transfers using NFS and CIFS, with the results you can see in one of my previous posts.

I started with "real world" tests using NFS and Samba, only to find that the read speed from the NAS was far below expectations.
To isolate the problem I benchmarked local ZFS speed, which turned out to be good, and measured LAN speed (iperf and dd|nc), which is great.
But these two things running together result in a terrible loss of network throughput (especially in the NAS->client direction).
NFS and Samba performance exactly reflects these tests: they can only perform worse, never better.
 

sjieke

Contributor
Joined
Jun 7, 2011
Messages
125
I'm sorry, but I don't have any more input that could help you at the moment.
If you want me to run some tests, post some settings, configs,... or the like, just ask for it :)
If you do find something or make any progress, please keep posting, as I'm interested in what might be the cause of this odd behaviour. Maybe I could even benefit from possible tunings you find, as we have the same hardware :)
 

HAL 9000

Dabbler
Joined
Jan 27, 2012
Messages
42
Changing the TCP window size improved iperf performance, but when transferring files from the ZFS pool over the network something is still slowing things down.
While copying files over the network (using dd|nc) the speed is about 70MB/s; CPU load is 40% and the ZFS pool can do ~250MB/s easily.
The network is not saturated, but if I run a second copying process (dd|nc from /dev/zero) LAN speed rises to over 105MB/s at the cost of a few % of CPU load.
So the CPU is 40% busy, the disks are working at 25% of their speed, and there is still an unused network throughput reserve.
Where is the bottleneck?
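Maybe watching the per-thread load while the copy runs would show it; on FreeBSD something like this should reveal whether a single thread is pegged at 25% (i.e. one of the 4 hardware threads maxed out):
Code:
# -S: include system processes, -H: show individual threads
top -SH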
 

sjieke

Contributor
Joined
Jun 7, 2011
Messages
125
A possible explanation that comes to my mind:

You have a dual-core CPU with hyperthreading (enabled or disabled?). Let's assume hyperthreading is enabled (the default on your board); then you have 2 threads per core, so a total of 4 threads, each capable of using 25% of the CPU.
If the process handling your transfer uses 2 threads, it would be able to use a maximum of 50% of your CPU. Taking into account some overhead for reading from disks, syncing the threads etc., the 40% seems reasonable to me.

So I think the CPU is your bottleneck now, as I am seeing similar speed results when reading a file (depending on the protocol).

Starting the second copying process results in new threads being used, so in this case the LAN becomes the bottleneck.

Multicore and hyperthreading prevent a single process from hogging the CPU, so other processes have a free core/thread to run on.
If your FreeNAS system doesn't need to perform multiple tasks at the same time (streaming movies, doing backups, downloading stuff,...) you could try turning off hyperthreading in the BIOS. This could give a little more processing power to your copying process, at the cost of a slowdown if you need to perform multiple tasks at once.

I did some tests in the past with hyperthreading disabled. It only made a significant difference with FTP, boosting the speed from ~70MB/s to almost 90MB/s (numbers are from memory). But since I do multiple tasks at once (streaming a movie for the kids, while ripping new DVDs to the NAS, while listening to music and also working on some files) and use NFS and CIFS on the local network and FTP only for remote access, I enabled hyperthreading again.

I also found some strange behaviour. Copying files to the NAS using the Dolphin file manager gives me speeds of ~20MB/s. Copying the same set of files to the same mounted share on the NAS using a simple command-line 'cp' gives me 60-70 MB/s. So the client makes a big difference. Any idea what the issue could be?

What settings did you change for the TCP window size? Maybe this can solve my Dolphin issue.
 