How to boost network speed?

Status
Not open for further replies.

Nik

Explorer
Joined
May 28, 2011
Messages
50
Hi guys

I'm struggling with some performance problems which I think are related to the network.

This is the speed of my RAIDZ:
nas# dd if=/dev/zero of=/mnt/tank/polycom/testfile bs=8192k count=1000
1000+0 records in
1000+0 records out
8388608000 bytes transferred in 58.916148 secs (142382153 bytes/sec)

So I get around 140 MB/s locally.

And this is the speed from my MacBook Pro 2011 to my FreeNAS RAIDZ using AFP (SMB is a tiny bit slower):
niks-macbook-lan:~ nisele$ dd if=/dev/zero of=/Volumes/polycom-4/testfile bs=8192k count=1000
1000+0 records in
1000+0 records out
8388608000 bytes transferred in 245.361209 secs (34188811 bytes/sec)

I get around 30-34 MB/s.

I believe that over a Gigabit network adapter I should easily be able to get transfer speeds beyond 100 MB/s. Or is that impossible?
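As a sanity check on what gigabit can deliver, here is a back-of-the-envelope calculation (a sketch; the ~6% figure is a typical estimate for Ethernet + IP + TCP framing overhead, not an exact value):

```shell
# Gigabit Ethernet signals at 1000 Mbit/s on the wire. Dividing by
# 8 bits/byte gives the raw byte rate; subtracting roughly 6% framing
# overhead leaves the realistic payload ceiling.
raw_mbps=$((1000 / 8))
payload_mbps=$(awk 'BEGIN { printf "%d", 125 * 0.94 }')
echo "raw: ${raw_mbps} MB/s, realistic payload: ~${payload_mbps} MB/s"
```

So anything in the 110-118 MB/s range is already close to the practical maximum of a single gigabit link.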

Any suggestions how I can boost the network speed?

Btw., this was tested over a direct link using a Cat6 cable.

Cheers,
Nik
 
Joined
May 27, 2011
Messages
566
Samba is not that fast. My raidz2 can read and write at about 250 MB/s, well above gigabit. I can get about 30-60 MB/s with Samba and 90-100 MB/s with simple FTP.

The realistic maximum for gigabit is about 100 MB/s, so you'll never get beyond that. 30-40 is a bit low, though, so you may have other limiting factors. While transferring a (very) large file, log into the system with SSH and run the 'top' command; it will list the processes using the most CPU time. Look for smbd. Let me know what it's running at and how many logical cores your CPU has. Your processor may not be up to par.
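A non-interactive way to capture the same information (a sketch; smbd will only show up while a transfer is actually running):

```shell
# Snapshot per-process CPU usage instead of watching the interactive
# 'top' display; the pcpu keyword works on both FreeBSD and Linux ps.
ps axo pid,pcpu,comm | sort -k2 -rn | head -5
# Count logical cores: hw.ncpu on FreeBSD, nproc on Linux.
sysctl -n hw.ncpu 2>/dev/null || nproc
```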


Also make sure you have 'Large RW support' enabled, as well as 'Send files with sendfile(2)' and 'Enable AIO'; they can give you a bit more performance with Samba.
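For reference, those GUI checkboxes roughly correspond to the following smb.conf parameters (a sketch based on Samba 3 option names; the exact mapping FreeNAS generates may differ):

```ini
[global]
    # "Large RW support": allow 64 KB SMB read/write requests
    large readwrite = yes
    # "Send files with sendfile(2)": zero-copy file sends
    use sendfile = yes
    # "Enable AIO": use asynchronous I/O above these request sizes (bytes)
    aio read size = 4096
    aio write size = 4096
```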

EDIT: the 30-60 MB/s for Samba was what I was getting with my AMD 4450e system; now I can get 90-100 MB/s with Samba.
 

Nik

Explorer
Joined
May 28, 2011
Messages
50
I tested again with Samba and FTP. Samba speed doesn't go over 35 MB/s even with 'Large RW support', 'Send files with sendfile(2)' and 'Enable AIO' checked. The smbd process sits at 50-60% in 'top' during a transfer (using the one from the GUI).
FTP is around 30-32 MB/s read and 40 MB/s (average) write, but jumps to 50-52 MB/s at the beginning of a transfer, with proftpd at around 80-85% and ntpd at up to 70-75%. Not really what I would expect.
Tested on a Macbook Pro - Early 2011

CPU is an Intel Atom 330 dual core with hyperthreading at 1.6 GHz.

Any ideas?
 
Joined
May 27, 2011
Messages
566
Samba is single threaded, so the 50% utilization you are seeing is your CPU running as fast as it can. You'll need a faster CPU to get any more. I'd bet the FTP is a similar situation: your CPU is the limiting factor.

For reference, I have an E7400 overclocked to 3.5 GHz; when I download at 90-100 MB/s, smbd sits at 9%. With FTP at 95 MB/s, proftpd is at 22%.
 

rwsheldon

Cadet
Joined
May 29, 2011
Messages
1
What I have also found makes a difference is the duplex setting (full or auto) and the speed setting (10/100/1000 or auto). Depending on the switch and the NIC you will get different results. At work, with Cisco switches set to 100 Mb full duplex and workstations on 3COM cards set to full duplex/auto, transfers only reach about 20% of the bandwidth. It sucks. We changed the NIC setting to full duplex 100 Mb and jumped up to 80%. At home I discovered the same issue with 3COM/Linksys/Belkin/Zonet switches. If you have a 100 Mb connection you're not going to go any faster unless you aggregate/trunk links together. matthewowen01 has a good real-world example, one I would heed.
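On the FreeNAS (FreeBSD) side, the negotiated media can be checked and, if necessary, forced from the shell (a sketch; 'em0' is a placeholder for your actual interface name, and the switch port must be configured to match):

```shell
# Show what speed/duplex the NIC actually negotiated:
ifconfig em0 | grep media
# Force gigabit full duplex if auto-negotiation picks the wrong mode:
ifconfig em0 media 1000baseT mediaopt full-duplex
```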
 

Nik

Explorer
Joined
May 28, 2011
Messages
50
matthewowen01 - Good point. Sounds like I need some more horsepower. I just wonder how anybody can max out a 10GbE connection; it should be hard to find a CPU capable of providing that performance with a single core. Vendors like QNAP seem not to use Samba then, because their devices manage >100 MB/s with an Intel Atom CPU.

rwsheldon - thanks for your suggestions, but that's definitely not my problem. I tested with a direct connection, with auto settings and 1000/full on both ends.
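One way to rule the network layer in or out entirely is to measure raw TCP throughput with iperf (a sketch; assumes iperf is installed on both machines, and 192.168.1.5 is a placeholder for the NAS address):

```shell
# On the FreeNAS box, start a listener:
iperf -s
# On the MacBook, run a 30-second test against it:
iperf -c 192.168.1.5 -t 30
# If iperf reports ~900 Mbit/s but AFP/SMB stay at ~35 MB/s, the wire
# is fine and the bottleneck is protocol/CPU overhead.
```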
 

Nik

Explorer
Joined
May 28, 2011
Messages
50
Does somebody know why the ntpd process causes a high CPU load (between 40-50%) during AFP and SMB transfers?

Processes.jpg
 
Joined
May 27, 2011
Messages
566
matthewowen01 - Good point. Sounds like I need some more horsepower. I just wonder how anybody can max out a 10GbE connection; it should be hard to find a CPU capable of providing that performance with a single core.....

no one uses samba for a 10GbE connection.
 

esamett

Patron
Joined
May 28, 2011
Messages
345
If two users are using Samba, do they share the same thread, or can they use different threads on different CPU cores?
 

esamett

Patron
Joined
May 28, 2011
Messages
345
Did it raise CPU usage above the one-core value, e.g. 50% for a dual core?
 
Joined
May 27, 2011
Messages
566
Did it raise CPU usage above the one-core value, e.g. 50% for a dual core?
No, my system does not have a CPU bottleneck; they ran at about 10% each while filling my gigabit pipe. But each one is a separate process, so they can run on different cores just fine.
 

Nik

Explorer
Joined
May 28, 2011
Messages
50
No, my system does not have a CPU bottleneck; they ran at about 10% each while filling my gigabit pipe. But each one is a separate process, so they can run on different cores just fine.

How fast per thread, 60 MB/s too? What hardware do you have (I know the CPU, but how much RAM, and which chipset and network card)?
 
Joined
May 27, 2011
Messages
566
I have 2 gigabit NICs; I can have 4 users, 2 reading and 2 writing, at 100 MB/s each. CPU is around 50%.
 

ixdwhite

Guest
Does somebody know why the ntpd process causes a high CPU load (between 40-50%) during AFP and SMB transfers?
I've seen this when testing NFS-over-UDP mounts between machines. I'm not sure if it's a kernel bug or just a byproduct of how UDP listeners work. I need to chase this again, since it does cause performance problems. On the plus side, CIFS and NFS now use TCP pretty much exclusively, and TCP won't trigger this.

I thought AFP uses TCP nowadays, though.
 

mrcola

Cadet
Joined
Jun 8, 2011
Messages
5
Hi guys, you may get better performance if you turn jumbo frames on, but you will need to enable them everywhere in your network, and some NICs don't support jumbo frames.
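On the FreeBSD side, enabling and then verifying jumbo frames might look like this (a sketch; 'em0' and the target IP are placeholders, and every NIC and switch in the path must support an MTU of 9000):

```shell
# Raise the interface MTU; mismatched MTUs cause silent drops:
ifconfig em0 mtu 9000
# Verify end to end with an unfragmentable ping (FreeBSD flags):
# 8972 data bytes + 8 ICMP + 20 IP header bytes = 9000.
ping -D -s 8972 192.168.1.100
```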
 

nanobyte

Cadet
Joined
Jul 1, 2011
Messages
1
I have 2 gigabit NICs; I can have 4 users, 2 reading and 2 writing, at 100 MB/s each. CPU is around 50%.

Would you please post the full specs of the FreeNAS box that can pull 1 Gb/s of data? If possible, which router/switch do you use at home?
 

mirkoj

Guest
Hey guys, I could use some help as well.
I just set up FreeNAS 8 on my server/storage.
I had an older version, but due to a combination of catastrophic events I've lost everything.
But that is not the point. With the old setup I easily had speeds of 100-150 MB/s.
But now nothing goes over 10 MB/s! 8-10 MB/s is really low.
Are there any settings, like we had in the old FreeNAS, to get rid of those spikes in transfer and keep a constant, normal transfer speed?
Configuration:
15x 1 TB hard drives on a RAID card, set up as JBOD and all of them in one 12.6 TB raidz2 volume. The server is a ProLiant with a quad-core 2.0 GHz Xeon and 4 GB RAM.
This is really, really slow, and I was wondering if I missed something in the setup?
Also, everything is on: large RW support, AIO, sendfile(2)...
Help please?
 

ptmixer

Dabbler
Joined
Nov 2, 2011
Messages
12
I've seen this when testing NFS-over-UDP mounts between machines. I'm not sure if it's a kernel bug or just a byproduct of how UDP listeners work. I need to chase this again, since it does cause performance problems. On the plus side, CIFS and NFS now use TCP pretty much exclusively, and TCP won't trigger this.

I thought AFP uses TCP nowadays, though.

+1 for an investigation on this, please. I'm currently moving a bunch of data over AFP to my newly set up FreeNAS 8.0.2-RELEASE box. Due to a bug (in FreeNAS 8, I believe) that prevents FreeNAS from operating correctly as a virtual machine in VMware ESXi with more than one virtual CPU allocated, I am a bit CPU-challenged. While this transfer is going on, looking at top, I see afpd at about 17% WCPU and ntpd at about 15%. Both vary, and afpd usually stays ahead by a few percent, but I have to assume I would see performance increases if ntpd could be brought under control (isn't that the network time daemon... why?).

The transfer also gets interrupted: watching the network graph on the source machine, it will sustain about 42 megabytes per second (best case), but sometimes the flow stops completely for up to 3 seconds or so, then resumes. Still, after all the crashing with 2 vCPUs, I'm happy that it is finally moving along.

Thanks.
 