Speed (transfer rate) issue with CIFS


ellupu

Dabbler
Joined
Mar 26, 2014
Messages
13
Hey there,
I'm new to FreeNAS. I got myself an HP Microserver N40L and added extra RAM as described in the guide. I finally managed to create a CIFS share and was able to transfer files to a shared dataset, but the transfer rate is a bit disappointing.

I've tested it with a 1.3GiB rar file on my local network.
UP: somewhere between 23-42MiB/s
DOWN (to desktop): ~53MiB/s

I've also deactivated LZ4 compression for testing purposes, but that changed nothing. I'm using a Cisco Small Business 8-port gigabit switch between them. I hope someone can help me. Thanks in advance!
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Hi there.

I know the HP has a Broadcom NIC, but what's in your desktop? Realtek and Broadcom are, generally speaking, not as high-performing as Intel. 50% throughput is low even for that, though.

Have you tried swapping cables just to test if you have a bad one somewhere?
 

ellupu

Dabbler
Joined
Mar 26, 2014
Messages
13
My desktop is a Shuttle XPC with a generic Marvell Yukon Ethernet controller. I've swapped the cable between my desktop and the switch (to a Cat 5e), still no difference. The cable between the HP and the switch is a new Cat 6, but I tested with a different Cat 5e cable just in case - still bad performance (no difference in UP and DOWN rates).
 

ellupu

Dabbler
Joined
Mar 26, 2014
Messages
13
I'm using Windows 7 (German), so hopefully this is still useful. Here's what I got from ipconfig /all:

"Generischer Marvell Yukon 88E8001/8003/8010-basierter Ethernet-Controller"
It's from Microsoft, Driver-version 11.0.5.3
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
It may just be a combination of slow controllers on both ends. I know that the Realtek/Broadcom cards like to lean on the CPU for their horsepower, and the E-350 there might be a bit short.

But let's use iperf for network testing, and dd for local disk testing.

If you don't have a Linux/BSD/etc machine to run it, the Windows iperf client can be found here:
http://code.google.com/p/iperf-cygwin/downloads/list

From the FreeNAS shell run:
iperf -s

From a client machine:
iperf -c your.freenas.ip.address -i 10 -t 60
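
If you want to check the opposite direction as well (CIFS uploads and downloads can behave differently), iperf's -r option runs the same test both ways, one after the other:

iperf -c your.freenas.ip.address -i 10 -t 60 -r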

For dd, from the FreeNAS shell navigate to your shared folder and do:

dd if=/dev/zero of=testfile bs=1M count=1000

Assuming you still have compression=off, this will give you a nice 1G file of zeroes and the raw write rate of your (single-disk) pool. Then do:

dd if=testfile of=/dev/null

And there's your raw read speed.
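
One caveat on the read test: without a bs= argument, dd falls back to 512-byte blocks, which can understate the disk's speed, and if the whole file is still sitting in the ARC (ZFS's RAM cache) you'll measure memory rather than disk. A bigger block size helps, and a test file larger than your RAM keeps the cache honest:

dd if=testfile of=/dev/null bs=1M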
 

ellupu

Dabbler
Joined
Mar 26, 2014
Messages
13
Okay, here's what I got from running iperf.
Code:
------------------------------------------------------------
Client connecting to 192.168.100.101, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.100.2 port 50355 connected with 192.168.100.101 port 5001
[ ID] Interval      Transfer    Bandwidth
[  3]  0.0-10.0 sec  384 MBytes  322 Mbits/sec
[  3] 10.0-20.0 sec  387 MBytes  325 Mbits/sec
[  3] 20.0-30.0 sec  388 MBytes  325 Mbits/sec
[  3] 30.0-40.0 sec  387 MBytes  325 Mbits/sec
[  3] 40.0-50.0 sec  385 MBytes  323 Mbits/sec
[  3] 50.0-60.0 sec  386 MBytes  324 Mbits/sec
[  3]  0.0-60.0 sec  2.26 GBytes  324 Mbits/sec


By the way, I've also tested a notebook with an Intel chipset and, as you predicted, the performance was better. With transfers between both machines I reached transfer rates between 65-75MB/s, but realistically it should be between 90 and 110MB/s, or am I wrong?

Unfortunately I feel like such a noob - I can't interpret the raw speed. Here is what I got:

Code:
[root@freenas /mnt/HDD01/Apps]# dd if=/dev/zero of=testfile bs=1M count=1000
1000+0 records in                                                         
1000+0 records out                                                         
1048576000 bytes transferred in 4.763739 secs (220116168 bytes/sec)       
[root@freenas /mnt/HDD01/Apps]# dd if=testfile of=/dev/null               
2048000+0 records in                                                       
2048000+0 records out                                                     
1048576000 bytes transferred in 11.576156 secs (90580674 bytes/sec)  
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Your iperf numbers are low - you're only getting a third of the theoretical max of gigabit. Sounds like it's your Shuttle then; either the system or the Marvell NIC isn't fast enough to keep up.

Realistically with the overhead of CIFS or any other filesharing protocol you will probably only see 90-100MB/s maximum, and you likely won't get that from just a single drive.

Looking at your dd results (check the last line, the numbers in brackets, divide by 1048576 to get MB/s) you're getting about 210MB/s write and only 86MB/s read. Did you leave LZ4 compression on for this test? If you did, it will squash those zeroes down to nothing.
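
Worked out from your output:

Code:
220116168 bytes/sec / 1048576 = ~210 MB/s  (write)
 90580674 bytes/sec / 1048576 =  ~86 MB/s  (read)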
 

ellupu

Dabbler
Joined
Mar 26, 2014
Messages
13
Compression was off during this test. Okay... I'm shocked... I never thought a 1Gbit Ethernet port would only deliver 300Mbit effectively. About the dd results: are those write and read speeds directly from the hard drive in my Microserver? Because I can't believe there's such a gap between them.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
It shouldn't. I can push ~70MB/s to mine and pull about ~90MB/s from it. Mind you, both my desktop and my FreeNAS box use Intel NICs. If I set up an all-SSD test pool out of those SLOG-for-giggles disks I get over 100MB/s both ways. Now mind you, that's "from SSD, to SSD, large files, sequential, with nothing else going on." Benchmark-racing, if you will.

The dd tests are indeed local to your Microserver. The gap isn't what concerns me as much as the fact that you're supposedly writing at 200MB/s. I still think compression is on somewhere.
 

ellupu

Dabbler
Joined
Mar 26, 2014
Messages
13
I tried updating my driver with the original one from Marvell, which produced weird results. With Microsoft's driver the transfer started at 90MB/s and dropped to 30-40MB/s; Marvell's original driver started the transfer at 3.5MB/s and stayed there for about 15 seconds before climbing to 25-30MB/s... so perhaps it's the fault of the Shuttle (a BIOS setting!?) or Marvell's port.

Here is a screenshot of my compression setup:
[Screenshot: dataset compression setting]


Anyway, thanks for your effort HoneyBadger - even if I didn't solve my problem, you helped me a lot. For now I'll live with my transfer rate - it's still better than the 9-11MB/s I got over 100Mbit LAN. :smile:
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
BTW, you might want to consider putting an Intel Pro/1000 NIC in your Shuttle. An OEM version only costs about $30 USD. I had one in my old XPC which ran Server 2003, before I replaced the hardware and migrated to FreeNAS.
 

ellupu

Dabbler
Joined
Mar 26, 2014
Messages
13
@gpsguy I might consider buying such a card; $30 isn't that much and it seems like a good investment - I never thought I'd buy a dedicated NIC again. :smile:
I'm just wondering whether it's enough to buy such a card for my Shuttle only, or whether I also need one for my HP - and if so, do I need a driver to get the card working?
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
With the Intel Pro/1000s, drivers for FreeNAS and Windows are included with the OS. The Broadcom NIC in your N40L isn't bad - Realtek and Marvell are the ones generally considered bad. Given the problems you had with the Marvell drivers, I'd certainly replace it.

That being said, don't expect miracles from your N40L. 40-50MB/s is about par for the course. CIFS is single-threaded and likes a fast CPU. Ours are only 1.5GHz, HoneyBadger's Xeon is 2.66GHz. Since HoneyBadger mentioned an SSD-to-SSD test, I presume he probably has one in his desktop. My desktops have spinning rust, and my performance is in the 40s.
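
One thing worth checking (just a hunch, I can't see your box): because CIFS is single-threaded, a dual-core CPU can show ~50% total usage while one core is actually pegged at 100%. From the FreeNAS shell, top can break it down per core:

top -P

Watch whether a single core sits near 100% during a transfer.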
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I second the idea of replacing the desktop's Marvell NIC, if only to sort out the driver issues. You can probably pick one up secondhand for less than the $30 mentioned if you really need to pinch pennies.

gpsguy said: That being said, don't expect miracles from your N40L. 40-50MB/s is about par for the course. CIFS is single-threaded and likes a fast CPU. Ours are only 1.5GHz, HoneyBadger's Xeon is 2.66GHz. Since HoneyBadger mentioned an SSD-to-SSD test, I presume he probably has one in his desktop. My desktops have spinning rust, and my performance is in the 40s.

Two, actually, and yes, it was to an SSD in my desktop. Another user set up a RAMdrive to assist in theoretical-max benchmark testing, so that would be a good way to ensure your client machine's disk isn't a factor if you're trying to test, but bear in mind you'll never see that kind of performance in the real world.
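
If you'd rather do the same thing server-side, here's a rough sketch using FreeBSD's memory disks (the pool name "ramtest" is just an example, not anything from this thread) - it builds a throwaway pool on a RAM-backed device:

Code:
mdconfig -a -t swap -s 2g                      # creates a RAM-backed disk; prints the device name, e.g. md0
zpool create -O compression=off ramtest md0    # throwaway pool on the memory disk, compression off for honest numbers
dd if=/dev/zero of=/ramtest/testfile bs=1M count=1000
dd if=/ramtest/testfile of=/dev/null bs=1M
zpool destroy ramtest                          # clean up when done
mdconfig -d -u 0                               # detach md0

Share /ramtest over CIFS and both the server's disks and the client's disk are out of the picture.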
 

ellupu

Dabbler
Joined
Mar 26, 2014
Messages
13
I'm already looking for some secondhand cards - with a bit of luck I'll get one this week. My next PC will definitely have an Intel NIC on it. :) I've read that CIFS is CPU-intensive, so I monitored the CPU usage of my N40L during a transfer - the maximum was somewhere between 40-50%. I wonder how people claim to get about 100MB/s with the N40L...
 
Status
Not open for further replies.