Looking to improve 70MB/s read speed over SMB/CIFS

Status
Not open for further replies.

palmboy5

Explorer
Joined
May 28, 2012
Messages
60
I'm generally not doing any copying to or from the drives on my desktop; the server shares are mapped and I treat them like local drives. That's why I want throughput from the server to be optimal, maxing out the gigabit Ethernet. I run things like VMs stored directly on the server. My primary benchmark is CrystalDiskMark, which doesn't touch the desktop drives at all; it only tests the drive I select (one of the mapped server shares). In any usage scenario, all that should really be happening is that whatever I'm loading from the server is loaded straight into RAM - the local drives are irrelevant.
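For reference, the mapping itself is nothing exotic, just a standard persistent Windows drive mapping along these lines (the server and share names here are placeholders):

net use Z: \\FREENAS\share /persistent:yes

CrystalDiskMark is then pointed at Z: like any other drive.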

Can you try and compare iperf -c on a Windows machine vs on a *nix VM on the same machine (imitate my situation)? I doubt my slower iperf on Windows is an isolated case. I tried iperf on a Windows 7 VM on the same host as the Ubuntu VM, and the Windows 7 VM performs just as poorly as the host Windows 7. The Ubuntu VM still beats either Windows 7 when doing iperf -c.
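To spell out the comparison, both clients run the same thing against the FreeNAS box acting as the iperf server (the 192.168.1.10 address is just a placeholder for the server's IP):

On the FreeNAS server: iperf -s
From the Windows 7 host: iperf -c 192.168.1.10 -t 30 -i 5
From the Ubuntu VM on that same host: iperf -c 192.168.1.10 -t 30 -i 5

Only the Windows host run comes in slow; the Ubuntu VM run does not.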
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
OK, I tested my FreeNAS VM against the host OS it's running on (Server 2008 R2).

With the Server 2008 R2 host as the iperf server and FreeNAS as the client, I hit a ceiling of 350Mb/sec. I can tell you that I can do over 90MB/sec when accessing the FreeNAS VM shares and over 100MB/sec when accessing the Windows shares (350Mb/sec is only about 44MB/sec, while 90MB/sec is over 700Mb/sec, so the iperf number can't reflect what the link actually does). So that kind of proves that something is bogus with how we're performing the test (are we even slightly surprised? LOL).

I was unable to use FreeNAS as the server and Server 2008 R2 as the client on the default port (no clue why). It would connect on port 5001, but then nothing would happen. After changing the port to 5005 it worked fine. My performance peaked at about 310Mb/sec, slightly slower than the other way around. This COULD be related to the firewall rules: all outgoing packets are passed without filtering, but incoming packets are filtered. This is just a guess though.
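For anyone wanting to reproduce the port change, iperf 2.x takes -p on both ends (the address below is a placeholder for the FreeNAS VM's IP):

On the FreeNAS VM: iperf -s -p 5005
On the Server 2008 R2 host: iperf -c 192.168.1.20 -p 5005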

Strangely, I can map a fileshare from the FreeNAS VM in Server and copy data at about 50MB/sec. This isn't a very good test though because the same physical drives are used for reading and writing.
 

palmboy5

Explorer
Joined
May 28, 2012
Messages
60
Did you try different window sizes? Also, with results like that, so much for this huh?
iperf has virtually been an industry standard for testing network bandwidth for years.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Did you try different window sizes? Also, with results like that, so much for this huh?

I left it at the default because, from my reading, the default window size of 64KB is typical for most OSes.
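If someone does want to sweep window sizes anyway, iperf 2.x takes -w on the client; something like the following would do it (the server address is a placeholder, and the OS may clamp or round whatever value you ask for):

iperf -c 192.168.1.10 -w 64K
iperf -c 192.168.1.10 -w 128K
iperf -c 192.168.1.10 -w 256K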

As for it being an industry standard, we are so far outside the intent of the test that we're playing around with stuff that doesn't matter. Big picture: we're transferring data from one virtualized NIC to another NIC's driver. That's all. We're not really exercising any real NIC hardware at all, so the test is pointless. I'd say we're somewhat crazy for even running these tests against a virtualized NIC. Quite simply, if you aren't able to achieve maximum theoretical speeds through a virtualized NIC, then the issue is almost certainly a hardware limitation.

iperf was never intended to be run against VMs with the results taken as diagnostic. It can give you a maximum speed, sure. But if you are virtualizing, how do you know the issue isn't with the host's hardware, drivers, or load, or with the virtualized hardware, drivers, etc.? Using a VM makes troubleshooting FAR more difficult than running on real hardware. Nobody in their right mind would use a VM for troubleshooting without a need that MUST be filled by a virtual machine, and in your case I can't think of why a VM would be "needed" to troubleshoot an issue between two different machines.

Plus, my Intel NICs handle checksum offloading. If you are running from a VM, you are obviously bypassing that potential benefit, and you are potentially giving up others such as interrupt moderation, large send offload, and a whole host of other features. I'm willing to bet some of these are either not implemented or poorly implemented in crappy NICs like Realtek. For all we know, the reason your VM seems to perform differently is that there is no checksum offloading because you are virtualizing.
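As a side note, on the FreeNAS side you can at least see which offloads the driver has negotiated (assuming the interface is em0; substitute whatever yours is actually called):

ifconfig em0

Look for RXCSUM, TXCSUM, TSO4 and so on in the options line. For testing, they can be toggled off, for example:

ifconfig em0 -txcsum -rxcsum -tso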

We really should completely disregard any and all tests that involve running iperf from any kind of VM. Period. The test, in my opinion, while interesting, is completely invalid for our purpose unless your plan is to transfer the data from the VM to the FreeNAS server. In that case, your issue wouldn't necessarily be with the NIC alone but with the NIC, the host's hardware+OS+overhead, the virtualized hardware+OS+overhead, and any performance-enhancing features you lost because you chose to virtualize. iperf in a virtualized environment will give you an indicator of what the theoretical limit should be, but if that limit isn't reached, you have a very large amount of optimizing to identify and correct (if it is even possible to correct).
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Honestly, I've wondered if the reason nobody else is responding is that we've been dismissed as idiots for even discussing iperf results from VMs. There are threads where I think "omg that person is an idiot" and I don't respond because I don't want to spend hours trying to fix a problem the other guy likely won't even understand. There was one a week ago where I simply unsubscribed because it was clear he had zero clue what he was doing, yet he expected the forum to help him with anything and everything, even stuff that wasn't FreeNAS-related.

As an aside from my previous post, would you EVER run benchmarks on your virtual machine and then complain about it? I've never even heard of anyone benchmarking a virtual machine.

Really, you should implement the recommendations that have already been made over the other 6 pages (!?) of this thread, then come back and post the results, good or bad. Then go from there.
 

palmboy5

Explorer
Joined
May 28, 2012
Messages
60
You keep saying VM testing is irrelevant because it's expected to be slower in the first place. I agree that should be the case, but my whole point is that it is not slower. The only reason I continue to do iperf testing with Ubuntu in a VM is that, even with all the disadvantages of running in a VM, it still sends to the other machine (the server) faster than the host does. I am NOT sending between the host and the VM like you did. Again, the tests are as mentioned before:
1.) iperf client -> Windows 7 -> NIC -> Switch -> NIC -> FreeNAS -> iperf server
2.) iperf client -> Ubuntu 11.04 -> VMware Player -> Windows 7 -> NIC -> Switch -> NIC -> FreeNAS -> iperf server

Test #2 is the one that is faster than Test #1.

This has nothing to do with the original problem and almost everything to do with my accusation that the Windows port of iperf is horrible.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I never said it is expected to be slower. I said it is expected to be more complicated, and that the results are harder to use for diagnosis. I also said that those potential benefits may not be available because of the VM. Additionally, I said that I think those benefits are often poorly implemented or not implemented at all in crappy NICs. Please don't put words in my mouth and say I claimed it was slower. :)

Slower or faster, it doesn't matter. The VM really throws everything for a loop because neither you nor I likely has the technical knowledge of how data is processed in a VM before being handed to real hardware for output onto your LAN. Put simply, VMs add a whole new layer of potential complications, which makes them unreliable for troubleshooting. PERIOD. As I said before, there's zero way for you or me to know whether this odd anomaly is a product of poor drivers, broken hardware, or whatnot when you are adding a whole second OS, a second driver, and a second level of overhead. It's entirely possible that the VM is faster only because the checksum calculations in the VM's NIC driver are better than the checksum calcs performed by the Windows hardware NIC driver. But there's zero way to ever know.
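If you want to at least peek at the Windows 7 side, the global TCP settings related to offloading can be dumped with:

netsh int tcp show global

The per-adapter checksum offload switches themselves live under the NIC's Advanced properties in Device Manager; as far as I know Windows 7 has no built-in command line for those.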

Short and simple, we need to quit wasting our time with the VM tests. They tell us zero about the real situation. It's an anomaly that could only be summed up as "Ubuntu MIGHT work better as a host OS". But it's a HUGE "might", and it requires a whole lot of somewhat unhealthy assumptions because it's a VM.

I don't buy for a second that the Windows port of iperf is horrible when it has been an industry standard for years. Heck, the last stable version is from 2010! If it were as poor as you think it is based on a VM test (which I've tried to explain is stupid for us to even have discussed), it wouldn't have shipped with almost every networking-oriented FreeBSD and Linux variant for years. I see it as you failing to understand the whole extra layer of complexity a VM adds and how it interacts with the real hardware.

But if you want to continue discussing the whole VM thing, you can wait for someone else. I can't explain it any better than I already have, and there's a reason nobody else has chimed in. Frankly, I feel like I'm wasting my time with this discussion because I don't think I can explain it any better than to say "we're stupid idiots for using iperf in a VM and expecting the results to actually matter".
 