FreeNAS from bare-metal PC hardware to ESXi server hardware

Terydan

Dabbler
Joined
Jan 19, 2017
Messages
31
I have had for a while now an install of FreeNAS 11.2 (most recently U4) on a re-purposed Intel i7-4790K with 16 GB of RAM: 3x mixed-size WD Purple hard drives in RAIDZ (started with what I have and will slowly upgrade drives to matching sizes), 1x 1TB Purple hard drive standalone for misc non-important storage, and 1x 1TB Purple hard drive standalone for surveillance recording. I ran 2 VMs - Win10 Pro running Milestone XProtect VMS surveillance software, and Ubuntu 16.04 LTS server hosting Docker containers for my HomeAssistant automation system and NGINX - and 1 iocage jail for Plex. Everything worked great except my Win10 VM was a bit laggy and required regular reboots because the resources available to it were a little anemic. I had no issues with my Ubuntu VM or my Plex jail; they just worked. I researched my Win10 VM and it seems to be widely accepted that bhyve isn't the best hypervisor for Win10 when there is heavy disk I/O, due to its poor virtual disk drivers, so I started looking into ESXi as a bare-metal host.

I picked up a Dell R720 server for $350 on eBay with dual E5-2640 2.5GHz 8-core processors, 48GB of RAM, an H310 mini HBA flashed to IT mode, and a network card with 4 Gigabit ports. I installed a 500GB Samsung 970 EVO Plus NVMe on a PCIe 3.0 carrier card and used it as my primary datastore for the VMs. I am booting ESXi off a 16GB SanDisk Cruzer drive on the internal USB. I used the VMware Converter software to mirror my Ubuntu and Win10 VMs over to ESXi and disabled them in FreeNAS with no issues. I then installed FreeNAS 11.2 U5 as a new VM with the H310 passed through to FreeNAS, which booted up fine. I backed up my configuration, powered down my old FreeNAS machine, and moved the hard drives over to the hot-swap bays connected to my H310 HBA. I restored my FreeNAS configuration and rebooted the FreeNAS VM. It booted fine and found all my drives. I had to open the VM console in ESXi and reset the FreeNAS network settings to a static IP since it didn't recognize the new network hardware. Now that I could enter the web GUI via IP address I checked it out and it sees all of my drives and my SMB shares are accessible. So far so good.
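
In case it helps anyone doing the same migration: the console reset is just the "Configure Network Interfaces" / "Configure Default Route" options in the FreeNAS setup menu, which boils down to roughly the following (interface name and addresses here are placeholders for my network):

Code:
# Placeholder interface and addresses; the console menu writes the same settings
# into the config database so they persist, while a raw ifconfig would not.
ifconfig em0 inet 192.168.1.50 netmask 255.255.255.0
route add default 192.168.1.1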

Now that the specs and introduction have been explained, down to the nitty-gritty of my concerns. Some of my dashboard cards like Network Info, CPU Usage, CPU Temperature, and Memory Usage are all blank and have a blue light flashing left to right at the top of the card like it's "thinking". The NIC address and name are shown for Network, but no bandwidth. My Plex jail wouldn't connect via DHCP (I have a reservation set up in my DHCP server) and I had to change it to static. Now that Plex is working I keep getting disconnected on my clients with an error message that my "connection to the server is not fast enough", which I had never seen with my old hardware. I even added a second network connection to my 4-port NIC and configured it to load balance in the host, but no change.
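
For reference, pinning the jail to a static address from the FreeNAS shell went roughly like this (the jail name and addresses below are just placeholders for my setup):

Code:
# Placeholder jail name and addresses; adjust to your LAN
iocage stop plex
iocage set dhcp=off plex
iocage set ip4_addr="vnet0|192.168.1.60/24" plex
iocage set defaultrouter=192.168.1.1 plex
iocage start plex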

I have read through hundreds of posts now about setting the virtualization options in the BIOS, setting the vSwitch settings in ESXi, and configuring jail settings, but I can't find a solution to my primary concern, the Plex network issues, or to the dashboard problems, which may be related. I tried creating Plex in my Ubuntu VM and connecting via NFS shares to FreeNAS, but the NFS shares keep disconnecting, which leads me back to network issues in FreeNAS. My Ubuntu VM has no network issues other than staying connected to the NFS share.
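
One thing I'll note for anyone trying the same workaround: a hard, TCP, NFSv3 mount on the Ubuntu side should at least retry through short drops instead of erroring out the client. Something along these lines (the FreeNAS IP, dataset path, and mount point are placeholders):

Code:
# Placeholder address/paths; "hard" makes the client retry I/O instead of failing it
sudo mount -t nfs -o vers=3,proto=tcp,hard,timeo=600,retrans=5 \
    192.168.1.50:/mnt/tank/media /mnt/media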

Is there a problem with the R720 and compatibility with nested virtualization (a jail in FreeNAS in ESXi)? I have read many posts from people using the R710. I chose the R720 for the newer hardware and, hopefully, more compatibility options for NVMe and such.

Also, I'm hoping to upgrade my boot device to a PCI carrier with dual M.2 SATA drives in a RAID1 (mirror) array via the card's onboard RAID controller for redundancy, and use the leftover space as a datastore for VM images and files (not for booting VMs, though). Any input on this idea?
 

Attachments

  • FreeNAS_1.png (253.8 KB)
  • FreeNAS_2.png (175.7 KB)

dir_d

Explorer
Joined
Nov 9, 2013
Messages
55
Can't help you much, but to ease your mind about the R720: there are plenty of people that use the R720. Personally I'm using an R720xd with bare-metal FreeNAS and an Ubuntu VM with Docker.
 

Terydan

Dabbler
Joined
Jan 19, 2017
Messages
31
If it wasn't for my Windows VM and the potential of me making more Windows VMs for game servers and such, I probably would have gone with bare-metal FreeNAS. I can deal with the lack of monitors in the FreeNAS dashboard other than my OCD, but the network drops are killing me and I can't find any precedent for them online. I even passed one of my NIC ports all the way through to the FreeNAS VM so it wasn't using the host virtual switch, and that didn't help either (actually it became more problematic and would barely connect).
 

Terydan

Dabbler
Joined
Jan 19, 2017
Messages
31
Can't help you much, but to ease your mind about the R720: there are plenty of people that use the R720. Personally I'm using an R720xd with bare-metal FreeNAS and an Ubuntu VM with Docker.

I think I have narrowed my networking drops down to the integrated 4-port Broadcom NetXtreme BCM5720. I know Broadcom is considered a little taboo with FreeNAS based on my Google searches, but I was thinking it should be fine since FreeNAS is virtualized and the NIC appears as a VMXNET3 adapter to the VM. Now I am thinking the general poor quality (per the consensus of multiple Google searches) is bleeding through the host to the VM. I was curious if you are using the Broadcom NIC or if you changed the card. I'm looking at replacing it with a 4-port integrated Intel i350 Gigabit NIC from eBay for about $15.
 
Joined
Dec 29, 2014
Messages
1,135
The i350 is a good choice as it is very well supported in FreeBSD/FreeNAS.
 

Terydan

Dabbler
Joined
Jan 19, 2017
Messages
31
I replaced the network card with a 4-port Intel i350 PCIe card. I rebooted everything and it all came back up no problem, but unfortunately I'm still having disconnect issues. I even created a second vSwitch dedicated to all my storage traffic, used only by the VMs and FreeNAS, so that the FreeNAS management interface (also the interface used to access Plex) is not as saturated. Still no luck. I can't seem to go more than about 20-30 minutes without a server disconnect. The Plex server becomes unavailable for about a minute and then comes back up and is available again.
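
For reference, the storage vSwitch was set up along these lines (the vSwitch, uplink, and port group names are placeholders; the host web UI does the same thing under Networking):

Code:
# Placeholder names; equivalent to Networking > Virtual switches in the ESXi host UI
esxcli network vswitch standard add -v vSwitchStorage
esxcli network vswitch standard uplink add -v vSwitchStorage -u vmnic1
esxcli network vswitch standard portgroup add -v vSwitchStorage -p Storage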
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
The BCM5720 isn't expected to work that well with bare metal FreeNAS. This has absolutely zero to do with FreeNAS as a VM, as ESXi hands FreeNAS what appears to be an honest-to-goodness Intel ethernet interface. (By the way, make sure you're using E1000 or E1000e and not a VMX interface, if you're having problems. VMX is kinda sucksville in various ways.) Do not confuse yourself by wondering what the compatibility of a Broadcom interface is with FreeNAS as a VM. It should be *exactly* the same as it is with any other VM. The whole host will work fine or the whole host will have problems.

Most Dell server-grade hardware is well-supported in ESXi, so make sure you are running the current firmware for the network card and make sure that you are using the driver specified in the VMware compatibility guide. Sometimes the driver you need to be using for a network card with ESXi is not the one that comes with ESXi.
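
If you want to see what you're actually running, something like this from the ESXi shell lists the NICs and shows the driver and firmware for each (vmnic0 is just an example name):

Code:
# List all physical NICs, then show driver/firmware details for one of them
esxcli network nic list
esxcli network nic get -n vmnic0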
 

Terydan

Dabbler
Joined
Jan 19, 2017
Messages
31
The BCM5720 isn't expected to work that well with bare metal FreeNAS. This has absolutely zero to do with FreeNAS as a VM, as ESXi hands FreeNAS what appears to be an honest-to-goodness Intel ethernet interface. (By the way, make sure you're using E1000 or E1000e and not a VMX interface, if you're having problems. VMX is kinda sucksville in various ways.) Do not confuse yourself by wondering what the compatibility of a Broadcom interface is with FreeNAS as a VM. It should be *exactly* the same as it is with any other VM. The whole host will work fine or the whole host will have problems.

Most Dell server-grade hardware is well-supported in ESXi, so make sure you are running the current firmware for the network card and make sure that you are using the driver specified in the VMware compatibility guide. Sometimes the driver you need to be using for a network card with ESXi is not the one that comes with ESXi.
Thank you for the thorough info. I thought that was the way ESXi would manage the card and that the VMs wouldn't know any better, but I didn't want to take a chance that FreeNAS might somehow detect the Broadcom, and there was a lot of hate for the Broadcom. Luckily the Intel card was only $15, and someone had damaged/bent outward the metal shield around the ports of the old card, so I feel a little better anywho. And I always have the option to switch to bare metal now if I needed/wanted to. I did update all the drivers before putting on ESXi, but I didn't review the recommended drivers for ESXi (got caught up in the excitement of getting it all running). I did not, however, update the Intel card drivers (DOH!). I will review the recommended ESXi drivers and update as necessary. Also, I thought I remembered reading in one thread that the vmx network card was recommended for performance/compatibility, but I will swap to E1000 and try it out.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Thank you for the thorough info. I thought that was the way ESXi would manage the card and that the VMs wouldn't know any better, but I didn't want to take a chance that FreeNAS might somehow detect the Broadcom, and there was a lot of hate for the Broadcom. Luckily the Intel card was only $15, and someone had damaged/bent outward the metal shield around the ports of the old card, so I feel a little better anywho. And I always have the option to switch to bare metal now if I needed/wanted to. I did update all the drivers before putting on ESXi, but I didn't review the recommended drivers for ESXi (got caught up in the excitement of getting it all running). I did not, however, update the Intel card drivers (DOH!). I will review the recommended ESXi drivers and update as necessary. Also, I thought I remembered reading in one thread that the vmx network card was recommended for performance/compatibility, but I will swap to E1000 and try it out.

No, VMX may have better performance in some scenarios, but it has some issues as well, especially if you use several of them on a VM.

http://matt.dinham.net/interface-ordering-vmware/

etc.

With physical hardware, it is pretty easy to set up topologies such as a single physical ethernet with multiple (easily dozens) of vlan interfaces and have this perform very well. With VMware, you can bring a trunk interface to a single ethernet on a VM and break vlans off of that, but performance is teh sux0rz under heavy network load because every packet on every vlan is being presented to the guest OS. So VMware's remediation for that is to tell you to create multiple network interfaces and let the vSwitch (high performance networking code) sort it out, but then they can't get persistent interface ordering right with vmxnet3, and you're limited to only 10 interfaces anyways. It's a little like beating your head with a brick, at least for those of us making virtualized networking infrastructure.
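
For illustration, the guest-side version of that trunk-plus-vlans layout looks something like this on FreeBSD (the interface name, VLAN IDs, and addresses are made up):

Code:
# Hypothetical guest NIC (em0) carrying the trunk; one vlan child per VLAN ID
ifconfig vlan10 create vlan 10 vlandev em0
ifconfig vlan10 inet 192.168.10.5/24
ifconfig vlan20 create vlan 20 vlandev em0
ifconfig vlan20 inet 192.168.20.5/24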

So first step anytime you have mysterious VM networking problems should be to switch to E1000, which works swimmingly well in pretty much every situation I've run across over the years. Some people have a mistaken idea that it is limited to gigE performance on the hypervisor. It isn't limited to a gigabit. There are even cases where it is faster than vmxnet3.
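
For the record, switching an existing VM is usually just a matter of removing the NIC and re-adding it as E1000 in the VM settings; in the VM's .vmx it ends up as a line roughly like this (the adapter number is an example):

Code:
# Example .vmx entry for the first virtual NIC after switching it to E1000
ethernet0.virtualDev = "e1000"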
 

Terydan

Dabbler
Joined
Jan 19, 2017
Messages
31
Awesome information. Thank you so much. I am hoping to try and make these changes when I am home tonight and see how it works over the next couple days.
 

Terydan

Dabbler
Joined
Jan 19, 2017
Messages
31
I updated my igbn driver in ESXi to the recommended driver for my new Intel NIC. I changed the network interfaces for my FreeNAS VM to E1000e and reconfigured those interfaces in FreeNAS. Everything in regards to accessing FreeNAS and the shares is working. But now Plex becomes completely unresponsive. It works at first for about 30-60 minutes, but then I can no longer access the Plex server at all. I have to reboot the jail and things start working again for about 30-60 min. I checked my vSwitch settings and, per other posts, I have promiscuous mode enabled and forged transmits enabled.

Also, I don't know if it points to any conclusions, but since I put FreeNAS in a VM on ESXi I have never been able to get my jails to pull DHCP; I have to specify a static address in the jail.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
You probably shouldn't be using jails on a virtualized FreeNAS. Put the Plex in its own VM. Depending on how the jail is set up, you can have a variety of problems due to the default ESXi networking restrictions. You will want to make sure that Promiscuous, MAC Address Changes, and Forged Transmits are all set to Accept or you can run into more interesting issues.
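
If you'd rather set those from the ESXi shell than the UI, the equivalent is something like this (vSwitch0 is an example name):

Code:
# Example vSwitch name; check yours with "esxcli network vswitch standard list"
esxcli network vswitch standard policy security set -v vSwitch0 \
    --allow-promiscuous=true --allow-mac-change=true --allow-forged-transmits=true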
 

silverback

Contributor
Joined
Jun 26, 2016
Messages
134
Are you passing a NIC straight through to your FreeNAS VM? I have had no problem with jails running this way.
And yes, not all of the reporting in the FreeNAS VM functions properly.
 

Terydan

Dabbler
Joined
Jan 19, 2017
Messages
31
You probably shouldn't be using jails on a virtualized FreeNAS. Put the Plex in its own VM. Depending on how the jail is set up, you can have a variety of problems due to the default ESXi networking restrictions. You will want to make sure that Promiscuous, MAC Address Changes, and Forged Transmits are all set to Accept or you can run into more interesting issues.
I do have all those options set to "Accept" in the vSwitch. I had read some other posts where people's jails ran fine and pulled DHCP once those settings were set to "Accept". I like that the jail has direct access to the media files on the disk, and I like the minimal resource usage it requires vs a complete VM for a single program. I was potentially planning on adding some more jails like UniFi when I upgrade my network hardware, and maybe a couple others, but maybe I'll just add them to the Docker stack running on my Ubuntu VM instead. I'd like to keep the Plex VM option as a final resort, but I do respect your experience as it far surpasses my own and I will definitely keep that option open. I just have a small OCD complex that nags me saying "Everyone else can get it working, so I should be able to make it work too!" Until all my options are exhausted I just can't bring myself to completely abandon the idea, because then I feel like I failed.

Are you passing a NIC straight through to your FreeNAS VM? I have had no problem with jails running this way.
And yes, not all of the reporting in the FreeNAS VM functions properly.
I did try to pass a NIC straight through originally, but it was causing even more problems. That was also when I still had the Broadcom NetXtreme BCM5720 NIC in the system, though. I have not tried a dedicated NIC since putting in the new Intel one. I was starting with the options jgreco brought up, setting up the E1000e virtual network adapter, and reporting my findings. I am going on vacation for the weekend and getting ready for that tonight, so I may try the dedicated NIC option on Monday.

All in all though, this project has really damaged the WAF points that I gained from some of my other projects. Plex is probably the one thing she likes the most, and these hiccups aren't a joy for her. But unfortunately for me, I guess I'm a glutton for punishment, because my OCD to make things work the way I envisioned trumps her annoyance at not having it work sooner. LOL.
 

Terydan

Dabbler
Joined
Jan 19, 2017
Messages
31
I successfully passed through one of my NIC ports from my 4-port i350 card and configured it in FreeNAS. I removed the virtual NIC for the management interface, but left the virtual NIC for my storage interface. I went to my jail settings and set it to DHCP, and it was able to successfully pull an IP address from my DHCP server (the reserved IP I had specified in my DHCP server). I confirmed I can connect to Plex and view content; I will try it out for a couple of days to see if things have stabilized.
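
For anyone following along, the jail properties involved look roughly like this (the jail name is a placeholder; iocage needs vnet and bpf enabled for DHCP to work):

Code:
# Placeholder jail name; DHCP in an iocage jail requires vnet=on and bpf=yes
iocage set vnet=on bpf=yes dhcp=on plex
iocage restart plex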
 

Terydan

Dabbler
Joined
Jan 19, 2017
Messages
31
Well, just to post results, I have successfully been running things without issue since my last post. FreeNAS definitely doesn't like virtual NIC interfaces when it comes to jails. I'm just glad that individual ports of my NIC can be passed through instead of the entire card, like with an HBA. For anyone else doing FreeNAS on ESXi, pass through an Intel NIC to start and save yourself some trouble.
 