HP N40L + ESXi 5.1 + FreeNAS 8.3 ==> Slow gigabit network performance

Status
Not open for further replies.

glipman

Dabbler
Joined
Oct 31, 2012
Messages
21
I have installed ESXi 5.1 on an HP N40L MicroServer and run FreeNAS 8.3 as a guest VM inside it.

This works fine, but I only get about 30-40 MByte/s read and write from my desktop PC. At first I thought this was normal because of the extra virtualization layer and because the HP N40L is not exactly the fastest machine.
However: when I install a virtual Ubuntu Server, I can read and write at 60-70 MByte/s. So clearly ESXi, the hardware, and my cabling are not the problem.

Next I started testing the network speed with iperf, and that explains it all:

Code:
Ubuntu Server (1GB memory assigned):
iperf -c 192.168.5.122 -w 256K -i 1 -f M
[108]  0.0-10.0 sec  1093 MBytes   109 MBytes/sec

FreeNAS Server (4GB memory assigned):
iperf -c 192.168.5.121 -w 256K -i 1 -f M
[108]  0.0-10.0 sec   422 MBytes  42.0 MBytes/sec


The Ubuntu server can saturate my gigabit network; FreeNAS stops at about 40% of that.

Note: I also tested NAS4Free, a plain FreeBSD installation, and ZFSguru, and they are all equally slow. A Windows XP client is as fast as Ubuntu.

Why is FreeNAS so limited? Is this just because I must use the e1000 driver instead of VMXNET3? Would VMXNET3 be three times as fast?
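For anyone who wants to test the VMXNET3 theory: with the VM powered off, the virtual NIC type can be changed by editing the VM's .vmx file (a sketch; `ethernet0` is the usual name of the first adapter, but check your own config). Note I don't know offhand whether FreeNAS 8.3 ships a vmxnet3 driver in-guest; on plain FreeBSD of that era it typically comes from the VMware Tools / open-vm-tools port, so the driver side would need sorting out first.

```
ethernet0.virtualDev = "vmxnet3"
```

The default for the adapter on FreeBSD guests is `"e1000"`; the same change can also be made by removing and re-adding the NIC as "VMXNET 3" in the vSphere Client.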
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'm sorry, but comparing Ubuntu to FreeNAS isn't a good test of hardware performance. That's like saying a machine that runs XP well must also run Ubuntu well. Driver performance, kernel design, and a lot of other factors play a large part. What does interest me is that NAS4Free and FreeBSD are equally slow, which means the problem is NOT with FreeNAS but is somehow related to FreeBSD and the underlying OS. It may not be as optimized for ESXi as Ubuntu is.

What I would do is run FreeNAS, FreeBSD, or NAS4Free without virtualization and see how performance compares. I'll never understand why people complain when they virtualize without a hugely overpowered machine. A small delay in processing due to virtualization can have a cascading effect on performance.
 

glipman

Dabbler
Joined
Oct 31, 2012
Messages
21
I understand your point, but I believe there are other people running my combination, and I wonder what their performance is like.

Perhaps I am doing something wrong and it is easy to bring the FreeNAS performance up to the Ubuntu level.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
There are some who run FreeNAS in VMs, but VMs aren't really recommended because you have to maintain tight control of what is going on in the virtualization layer. That machine isn't very powerful, so I'm not sure I'd expect someone else with the same hardware to be running it under ESXi.

Have you tried my suggestions in the previous post, or is that not good enough because you insist on using a VM? If you insist on using a VM, then I'd say your performance is about what you should expect, from what I've seen.
 

survive

Behold the Wumpus
Moderator
Joined
May 28, 2011
Messages
875
Hi glipman,

I'm not sure which VMware Tools FreeNAS supports natively, but I would certainly try one of the VMXNET drivers. The e1000 NIC in ESXi is completely emulated, and on the N40L I would think the emulation itself uses a lot of CPU. I just moved a pfSense box from real hardware to ESXi, and switching over to VMXNET3 NICs made a huge difference.

Personally, I think you really ought to ask yourself whether virtualizing this system is worth it. If you give FreeNAS 4 GB and figure in another 2 GB for ESXi itself, I can't see it being worthwhile to deal with all the extra overhead... what else are you really going to be able to run on it?

-Will
 

glipman

Dabbler
Joined
Oct 31, 2012
Messages
21
When running FreeNAS from a live CD on bare metal I get around 550 Mbit/s. Better, but still a lot less than virtualized Ubuntu.
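Since the thread mixes MByte/s (iperf was run with `-f M`) and Mbit/s, a quick conversion sketch for comparing the numbers (1 Byte = 8 bits):

```shell
# Convert between the units used in this thread.
mbit=550                         # bare-metal FreeNAS result above
mbyte=$(( mbit / 8 ))            # integer division: roughly 68 MByte/s
echo "${mbit} Mbit/s is roughly ${mbyte} MByte/s"

ubuntu_mbyte=109                 # virtualized Ubuntu iperf result earlier
echo "${ubuntu_mbyte} MByte/s is $(( ubuntu_mbyte * 8 )) Mbit/s"
```

So bare-metal FreeNAS at 550 Mbit/s (~68 MByte/s) is still well short of virtualized Ubuntu's 109 MByte/s (~872 Mbit/s).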

@survive: this is a home setup, mostly experimental. I play a lot with different OSes, and FreeNAS is just one of them. As I am the only user of this ESXi server, usually only one virtual machine at a time will be under load, and the N40L can handle that fine.
Of course, saturating a gigabit network is not really necessary at home, but it frustrates me to see another OS achieve speeds that FreeNAS seemingly cannot. I want to use FreeNAS for its ZFS.

But as noobsauce80 pointed out: since I see the same results on plain FreeBSD and other BSD-based systems, this problem is not FreeNAS-related but FreeBSD-related. I will start reading BSD forums.
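Before writing FreeBSD off entirely, it may be worth trying its usual TCP buffer tuning knobs. The fragment below is a sketch with illustrative values, not settings taken from this thread; test each change separately and measure with iperf after every one.

```
# /etc/sysctl.conf -- illustrative FreeBSD TCP tuning (values are examples)
kern.ipc.maxsockbuf=2097152        # allow larger socket buffers overall
net.inet.tcp.sendbuf_max=2097152   # raise auto-tuning ceiling for send buffers
net.inet.tcp.recvbuf_max=2097152   # raise auto-tuning ceiling for receive buffers
```

Settings take effect at boot, or immediately via `sysctl` from the shell.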
 

evkruining

Cadet
Joined
Nov 5, 2012
Messages
3
I've been playing with my new N40L box over the weekend too. I'm running ESXi 5.1 with the latest FreeNAS in a VM, and I'm seeing abysmal performance. If I'm lucky, I get about 14 MB/s throughput. My FreeNAS VM has a dedicated disk, configured in ESXi as passthrough, so you would expect it not to be much affected by the virtualization layer.

When I run FreeNAS natively on the N40L (booted off the internal USB port), I get proper transfer speeds of about 80 MB/s.

I have not yet tried switching the NIC to VMXNET3. I will do that shortly, once I'm done tearing down the W2K8R2/Hyper-V setup I'm running now, to see if that makes a difference. (Update: it didn't.) I couldn't get FreeNAS running properly in Hyper-V anyway. Besides, I don't like running Windows, let alone Hyper-V. I would really like to get this ESXi setup sorted, so any hints and tips are much appreciated.
 

evkruining

Cadet
Joined
Nov 5, 2012
Messages
3
Got it sorted!

Setting Write Caching to Enabled in the N40L BIOS makes all the difference. The default setting is Disabled. Writing to a CIFS share I now get a decent 40 MB/s; reading is about 65 MB/s. AFP is even better: 60 MB/s writing and 70 MB/s reading.

Still not saturating my gigabit network, but I'm pleased with the improved performance nonetheless.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
If you intend to keep the write cache enabled, I highly recommend you invest in a UPS. Even with ZFS, using a write cache can have serious consequences if there is no battery power to allow a proper shutdown that flushes the caches. There's a reason high-end RAID controllers come with their own battery backup.
 

evkruining

Cadet
Joined
Nov 5, 2012
Messages
3
If you intend to keep the write cache setting enabled I highly recommend you invest in a UPS.

Good point! I just ordered one of those Smart-UPS boxes. Let's see if and how I can make it work with ESXi. It would be cool if it shuts down my running VMs and then gracefully shuts down ESXi itself. How hard can it be? :)
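ESXi itself can't easily run a UPS daemon, so a common pattern (sketched here; all values are illustrative, not from this thread) is to run apcupsd inside one of the guests, attach the UPS's USB cable to that VM, and have apcupsd's shutdown hook tell the host to power down. With VMware Tools installed in the guests, the host shutdown can then stop the VMs gracefully.

```
# /etc/apcupsd/apcupsd.conf fragment in a helper VM -- illustrative values
UPSCABLE usb
UPSTYPE usb
BATTERYLEVEL 20      # begin shutdown when 20% battery remains
MINUTES 5            # ...or when 5 minutes of runtime remain
```

From there, the `apccontrol` `doshutdown` hook could, for example, SSH to the ESXi host and invoke its shutdown; the exact command varies by ESXi version (e.g. `esxcli system shutdown poweroff` on 5.x generally requires maintenance mode first), so treat this as a starting point rather than a recipe.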
 