FreeNAS virtualized on ESXi or VMs in FreeNAS?

Evertb1

Guru
Joined
May 31, 2016
Messages
700
At this moment I am running FreeNAS (11.0-U3 Stable) as a file server and as backup for a couple of workstations. No jails or VMs. The hardware I am running it on is more than capable of doing that job:

SuperMicro X10SL7-F - onboard SAS2 controller flashed to IT mode
CPU: Intel Core i3-4130T (Edit: replaced with an Intel Xeon E3-1225 v3)

RAM: 2 x 8 GB Samsung ECC M391 B1G73QHO-YKO (Edit: added 2 x 8 GB Crucial EUDIMM 102472BD1160B)

Boot: 60 GB Corsair SSD

Edit: Added an Intel S3700 200 GB SSD (SLOG)

Storage: RAIDZ2, 5 x WD Red 2 TB

There is a Xeon E3-1220 v3 CPU on its way that should give me a bit more power, and I am thinking about buying some extra memory while I can still get it. The memory will only be bought if it is really needed.

I consider the server to be a production machine. It holds data that is important to me and my family, and at least part of that data needs to be available at all times, within reason. Of course, I have a decent backup plan in place.

For the last couple of weeks I have been thinking about adding some extra functionality to the server.
I would like to run a VM with Windows (10 or Server 2016 Essentials) and a VM with a Linux distro.

On the Windows VM I want to run StableBit CloudDrive to utilize my OneDrive storage as a cloud backup medium, J-River as a media server, and some development-related stuff. On the Linux distro I want to run a database server and a webserver for development purposes.

The way I see it, I can go about this in two ways. I could keep my current FreeNAS setup and run a couple of VMs within FreeNAS (after 11.1 is released, I think).
Or I could go with ESXi on my current hardware, virtualize FreeNAS, and run the Windows and Linux VMs alongside it. By the way, I am aware of the caveats related to virtualizing FreeNAS.

I know that several members of the forum have experience with FreeNAS on ESXi and/or with VMs on FreeNAS. What I would appreciate is some insight from members into why and how they chose ESXi or VMs within FreeNAS, and the likes and dislikes of those solutions. I am not asking for a manual on how to do it; I would just like your help with making an informed decision. It would be a bonus if it helps me use my hardware in the most efficient way. Thanks in advance.
 
Last edited:

Allan Wilmath

Explorer
Joined
Nov 26, 2015
Messages
99
I think if you have the hardware to support using ESXi and running FreeNAS virtualized, then the answer is always yes. I am running the Docker Plex container in FreeNAS Corral and will not be changing until I find a better Docker solution.

I have been running FreeNAS virtualized since 9.3 with no issues and highly recommend it. You do want a lot of RAM; I have 32 GB, with 24 GB reserved for FreeNAS. The one thing I would point out is that you want to be careful not to overcommit virtual CPU cores to the VMs. Give VMs only the bare minimum and add cores only if you need to.
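If it helps, these are roughly the lines that reservation corresponds to in the VM's .vmx file. Just a sketch mirroring my 24 GB setup; the annotations after the # are mine, not part of the file, and the supported way to change this is the vSphere client rather than hand-editing. Worth knowing: a VM that uses PCI passthrough must have its full memory reserved anyway.

    memSize = "24576"        # RAM given to the guest, in MB
    sched.mem.min = "24576"  # reserve all of it (passthrough requires this)
    sched.mem.pin = "TRUE"   # pin the reservation so it is never ballooned or swapped
    numvcpus = "2"           # start small; add vCPUs only when clearly needed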
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
I am leaning toward ESXi already, so there is another 16 GB of ECC memory in my near future. This weekend I will use my lab computer to start a test setup. It has an i5-4670K CPU and 32 GB of memory; a fair playground to experiment with, I think. @Stux Thanks for the link to your setup. I must admit that there are some things in there that worry me a bit (like the SLOG). I just hope that I still have a marriage left after I tally up the extra costs.
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
For some time I had my ESXi plans shelved. I was in India for my work, and I used the time to get my feet back on the ground and my head out of the clouds. I decided that I should not go overboard with the home server thing; buying everything I wanted would mean a fairly big bill, and I am a guy who likes to buy things new, not second hand.

However, lately I have convinced myself that second hand is not always bad. So I bought a second-hand CPU (80 euros) and, best of all, an Intel 200 GB S3700 SSD (85 euros) to be used as a SLOG. Together with 16 GB of new memory and a Samsung SSD (256 GB 840 Pro), I think I am good to go ahead with ESXi.
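For future reference, attaching the S3700 to the pool as a SLOG later boils down to one command. A sketch only: the pool name "tank" and the device "da6" are placeholders for my own names, and the FreeNAS GUI can do the same thing.

    camcontrol devlist        # identify the new SSD's device name first
    zpool add tank log da6    # attach it as a separate intent log (SLOG)
    zpool status tank         # it should now show up under a "logs" section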

I intend to put the OS SSD and the SLOG in a 2-bay swap cage so I can do the installation of ESXi in steps, putting the current FreeNAS boot drive back in the machine until I complete the switch from FreeNAS on bare metal to ESXi-hosted FreeNAS. This way, the time that FreeNAS is offline will be limited.

@Stux and @Allan Wilmath: You both stated that you reserved 24 GB of the available 32 GB of memory for FreeNAS. Is that needed? I have been running FreeNAS with 16 GB and it worked fine, and I have fairly modest-sized storage. Since I am going to run FreeNAS on ESXi, it will again be just a file server, and any other task will go to another VM (Windows and/or Linux). Would it not be more prudent for me to run FreeNAS on 16 GB and save the rest for ESXi and the other VMs? If you see a benefit in me going with 24 GB for FreeNAS, please let me know.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I use iSCSI, which has higher memory requirements, and I found 24 GB more performant. It will work fine with 16 GB, though.
 

Xelas

Explorer
Joined
Sep 10, 2013
Messages
97
I have that exact motherboard, and I've been running ESXi with FreeNAS as a VM on it for 2-3 years now (along with 2 x Ubuntu VMs, a Windows 10 VM, and an OpenVPN appliance). IMHO, 32 GB RAM is a must in this scenario. I used to run the VMs from the pool, but this was really hammering the pool, and a SLOG only helped a little (and that was with a server-optimized Samsung SSD as the SLOG). I mucked around with both iSCSI and NFS for hosting the VM drives. After wasting way too much time optimizing the setup and never getting very good performance (just acceptable), I ripped out the SLOG completely, added an SSD I had lying around as a separate datastore in ESXi, and installed the VM boot drive images onto that SSD. I have 5 VMs crammed into a 256 GB SSD, and they all work perfectly fine with no hiccups. For extra space, the VMs use space on my FreeNAS via mundane CIFS shares. Performance is now GREAT (VMs reboot completely in 10-15 seconds, run very fast, etc.), and my pool doesn't have to deal with VMs hammering on it while I'm trying to move files around or view media.

In a nutshell, my setup:
ESXi: boots from a USB drive. ESXi 6.5 is awesome in that you can manage everything via the built-in WebGUI. No more clunky client!
Datastores:
SSD1 (256 GB Samsung 830 Pro): Windows 10 (media management), Ubuntu 1 (various utilities, some media automation), Ubuntu 2 (dedicated to Crashplan), FreeNAS boot, and the OpenVPN VM disk images.
SSD2 (256 GB OEM LiteON SSD removed from a laptop I upgraded): dedicated "drive D:" for the Windows VM to host the Plex Media Server metadata, which is VERY IO-intensive and latency-sensitive. Having that folder on the pool was MURDERING pool performance, and the metadata set is 20+ GB and 250k files, which took up something like 40+ GB of disk space when it was on the pool due to space allocation.

Media (points to the Media pool on FreeNAS, shared via NFS; useful for hosting ISO files for the virtual DVD drives used to install/maintain the VMs. The Media pool also has a big "Installs" folder for everything on my network and my clients.)

The onboard Avago/LSI controller is passed through to the FreeNAS VM. It has 6 x 3 TB WD Red Pro drives attached (which will soon be replaced with 6 x 6 TB drives). That gives one pool with 3 datasets (Media, Data, and Backups).
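In case it helps anyone setting this up: you can identify which PCI device to toggle from the ESXi shell. A sketch; the grep pattern depends on how your controller reports itself, and the passthrough toggle itself lives in the web UI (Host -> Manage -> Hardware -> PCI Devices), followed by a host reboot.

    esxcli hardware pci list | grep -i -A 8 LSI   # find the SAS controller's PCI address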

I also have an NVidia graphics card lying around that I plan to install and pass through to the Windows VM to offload transcoding. I haven't gotten to that yet.

I have 12 GB of RAM allocated to FreeNAS and the rest to the VMs. I started off with 16 GB dedicated to FreeNAS, but I've found that 12 GB is perfectly OK as well in my setup, now that I've moved all the VM stuff out of the pool.

I made sure that the boot order in ESXi has FreeNAS coming up first, with a delay to make sure it boots completely before the other VMs start. That way, all the shared drives are "online" when the VMs come up.
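The same boot ordering can be scripted from the ESXi shell instead of clicking through the web UI. A sketch: the VM id 1 and the 180-second delay are examples, and the argument order for update_autostartentry is VM id, start action, start delay, start order, stop action, stop delay, heartbeat.

    vim-cmd vmsvc/getallvms                                 # find the FreeNAS VM's numeric id
    vim-cmd hostsvc/autostartmanager/enable_autostart true  # turn autostart on for the host
    vim-cmd hostsvc/autostartmanager/update_autostartentry 1 powerOn 180 1 systemDefault systemDefault systemDefault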
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
I have that exact motherboard, and I've been running ESXi with FreeNAS as a VM on it for 2-3 years now (along with 2 x Ubuntu VMs, a Windows 10 VM, and an OpenVPN appliance). IMHO, 32 GB RAM is a must in this scenario.

I can imagine that 32 GB is not exactly overkill. So that is what I got.
A bit off-topic, but what exactly did you use as an OpenVPN appliance? At the moment I use the built-in VPN of my Fritz!box, but I am not overly happy with it. As the ESXi server will be running 24/7 anyway, a solution there would be more practical.

I ripped out the SLOG completely
I am a bit surprised that the SLOG did not work out for you. I was under the impression that it is a pretty important part of the FreeNAS-on-ESXi setup. Maybe the move of your VMs out of the pool to a dedicated drive was your big win?

In a nutshell, my setup:
Your setup is very close to what I intend to do, so I have good hopes for my own success.

The onboard Avago/LSI controller is passed through to the FreeNAS VM.
I realized pretty early in the process that this is key to the whole setup of virtualized FreeNAS.

I also have an NVidia graphics card lying around that I plan to install and pass through to the Windows VM to offload transcoding. I haven't gotten to that yet.
I am still thinking about that one. I am very happy that pass-through of devices is possible, but music streaming is more important to me than video. Most of the time I play movies with my Blu-ray player (yeah I know, so old-fashioned) or with an HTPC that has an Ethernet connection to the network (and can access the media files on the FreeNAS storage), so no transcoding is needed there. In fact, all the devices in my media corner are connected through Ethernet. I do have a lot of music albums as FLAC files, though, and those will be transcoded to MP3s for the mobile devices.
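That FLAC-to-MP3 conversion is easy to batch from a shell in the Linux VM if the media server's own profiles ever fall short. A sketch: the path is an example, and it assumes ffmpeg with libmp3lame is installed.

    # convert every FLAC in the music share to VBR MP3 (roughly 190 kbps)
    for f in /mnt/tank/music/*.flac; do
        ffmpeg -i "$f" -codec:a libmp3lame -qscale:a 2 "${f%.flac}.mp3"
    done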
I made sure that the boot order in ESXi has FreeNAS coming up first, with a delay to make sure it boots completely before the other VMs start. That way, all the shared drives are "online" when the VMs come up.
Agreed. That makes sense.

Your input, together with the build report of @Stux and the contributions of other forum members, is more than sufficient information to go forward with this. Of course I cannot rubber-stamp your solutions, guys, but thanks.
 

Xelas

Explorer
Joined
Sep 10, 2013
Messages
97
I can imagine that 32 GB is not exactly overkill. So that is what I got.
A bit off-topic, but what exactly did you use as an OpenVPN appliance? At the moment I use the built-in VPN of my Fritz!box, but I am not overly happy with it. As the ESXi server will be running 24/7 anyway, a solution there would be more practical.


I am a bit surprised that the SLOG did not work out for you. I was under the impression that it is a pretty important part of the FreeNAS-on-ESXi setup. Maybe the move of your VMs out of the pool to a dedicated drive was your big win?
...

The SLOG wasn't totally worthless, but the benefit wasn't great. Moving the SSD out of the pool and hosting the VMs on it directly gave MUCH better performance, AND it has the additional benefit of not hammering the spinning-rust drives in the pool at all. There is less wear and tear on the drives, and a VM suddenly deciding to run an update does not impact the performance of the pool at all. Win-win.

For the VPN, I'm currently running a virtual appliance I downloaded from here:
https://openvpn.net/index.php/access-server/download-openvpn-as-vm.html

The GUI and the lack of any setup work made this a VERY quick and easy install, but the free version is limited to 2 concurrent connections. I plan to replace it at some point with another Ubuntu instance running OpenVPN set up from the open-source repo. There are dozens of walkthrough recipes on the internet you can use to help set it up if you are new to this; it's not THAT hard, but I just didn't have the time then. This was the lazy way out! I have a 100/100 Mb internet connection, and the OpenVPN appliance can max it out without even making a blip in CPU usage.

Creating a new VM in ESXi and installing Ubuntu on it takes about 15-20 minutes; it's re-acquainting myself with the OpenVPN config file, triple-checking everything, testing, etc. that soaks up another 2-3 hours.
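For the record, the config that soaks up all that triple-checking is not long. A sketch of a minimal routed setup: the subnets are examples for my LAN, and the certificate files assume you have already generated them with easy-rsa.

    # /etc/openvpn/server.conf
    port 1194
    proto udp
    dev tun
    ca ca.crt
    cert server.crt
    key server.key
    dh dh2048.pem
    # VPN subnet handed out to clients
    server 10.8.0.0 255.255.255.0
    # route the home LAN through the tunnel
    push "route 192.168.1.0 255.255.255.0"
    keepalive 10 120
    persist-key
    persist-tun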

Regarding transcoding, it really comes into play if you use Plex. I travel extensively for work, and I catch up on the occasional show while I'm on the road. Plex allows me to stream to my phone or laptop anywhere I go, and it can intelligently downgrade the video quality to maintain the stream; that is where the transcoding happens.
I've also used it quite a bit to pre-load a movie or two, or some show episodes, onto my phone before an airline flight. Downgrading the quality shrinks the media files considerably, which lets me download them for offline viewing while I'm traveling and saves storage space on the phone. Plex manages all of this pretty seamlessly. A longer/bigger movie can take 20-40 minutes to transcode with just the CPU; I'm hoping this can be sped up considerably with a GPU, but it is very low on my list of priorities, frankly. In my family we never have more than 3 streams running concurrently, and the CPU handles that fine.

I have no idea if Plex is even capable of transcoding music, and I doubt it can offload that to a GPU even if it does. I haven't tackled my extensive classical music archive yet. Plex is terrible at managing classical music (it's oriented around the concept of albums, which doesn't work at all for classical music), so I'm still looking for a good solution there.

Oh - another thing. That CPU won't work with the motherboard you picked. The board is a bit older (I got it 3+ years ago) and only supports v3 or v4 generation CPUs, not v5. The v3/v4 and v5 parts use different CPU sockets.

The spiritual successor to my board is this one:
https://www.supermicro.com/products/motherboard/Xeon/C236_C232/X11SSL-CF.cfm
If you get this board, don't get the Intel E3-1225 v5 CPU. The 1225 has built-in graphics, but the chipset on the X11SSL-CF is the Intel C232, which does not work with the integrated graphics, so you would be paying for something you can't use with this board. The E3-1220 v5 is the exact same CPU without the built-in graphics, and it is about $30-$40 cheaper.

I know this, because I made this mistake when I built my system 3-4 years ago.

However, if I were buying a new system today, I would get this:
https://www.supermicro.com/products/motherboard/Xeon/C236_C232/X11SSH-CTF.cfm
It has 2 x 10 Gb network interfaces (as opposed to the 2 x 1 Gb interfaces on the other one) and the better C236 chipset (as opposed to the C232 chipset in the one above). It also has an M.2 slot, which opens up the ability to install a VERY fast PCIe SSD such as the Samsung 960 Pro for datastores, or possibly even an L2ARC drive.
The main difference is that this board CAN take advantage of the integrated graphics in the E3-1225 v5 CPU because of the C236 chipset. The Intel graphics has built-in QuickSync, which can be used to offload video transcoding in hardware without requiring you to buy, install, keep powered, and keep drivers updated for another graphics card (if you ever decide you need transcoding).

This board is probably more expensive, but it will be much more future-proof. The 2 x 10 Gb NICs alone are worth the upgrade, IMHO. I expect to get at least another 3-4 years of life out of my system, so I would invest in as much longevity (and future options for usage you may not anticipate right now, such as video streaming) as I can.

==============

EDIT: Arghh. Ignore most of what I wrote about the CPU. The forum seems to have auto-generated a link to an Amazon listing, and the listing pointed to the E3-1225 v5 CPU. I just realized that you have the "v3" version of the CPU, so please ignore what I wrote about the motherboards. That said, it looks like you got the 1225 version, not the 1220, so my point about not being able to use the built-in graphics core still stands. The X10SL7-F uses the C222 chipset, which disables the built-in graphics on any CPU you'd use.
 
Last edited:

Evertb1

Guru
Joined
May 31, 2016
Messages
700
EDIT: Arghh. Ignore most of what I wrote about the CPU. The forum seems to have auto-generated a link to an Amazon listing, and the listing pointed to the E3-1225 v5 CPU. I just realized that you have the "v3" version of the CPU, so please ignore what I wrote about the motherboards. That said, it looks like you got the 1225 version, not the 1220, so my point about not being able to use the built-in graphics core still stands. The X10SL7-F uses the C222 chipset, which disables the built-in graphics on any CPU you'd use.
:) Yes, sometimes systems are too smart for their own good. Concerning the built-in graphics: I know I can't use them. But that CPU is running fine in my FreeNAS box, and I got it for a very good price (80 euros). It was pulled from a server that was not used very much and had in fact been powered down for some time (I know the owner). If you are looking for second-hand stuff at a nice price, you have to take what you can get. At this moment I don't need anything more, and I have no plans to upgrade in the near future. Besides that, my wife is starting to get a bit annoyed about my money pit of a server. Oh well.

Concerning transcoding of the FLACs: I don't intend to use Plex but Serviio, as I had great results with it in the past. The music will be made available via DLNA. If needed, I can write custom profiles for the transcoding.
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
It has been some time since I visited this thread. Again I have been working abroad, this time for almost 10 months, so there was not much time for playing around with ESXi. But I have been home since the beginning of this month, and the next job, in Japan, goes to a colleague; I am done with it for now. The past couple of weeks I have been experimenting with ESXi and am getting more excited about it every day. Again, thanks to @Stux and others for all the "how to" threads.

Also thanks to @joeschmuck for his how-to on using a UPS with a USB connection rather than a network module. I had no idea how to go about that, and his contribution, and that of others, was very useful.

This coming weekend I will migrate FreeNAS to ESXi on the existing hardware, with some additions: a small Samsung SSD as boot device (I will keep the old FreeNAS boot device in a safe place), a bigger Samsung SSD as the new home for my VMs, and an Intel S3700 SSD to expand my FreeNAS volume with a SLOG. I hope to execute the migration in such a way that it is completely transparent to the data consumers on the LAN. My server has a Supermicro motherboard with 2 "normal" NICs (LAN1 and LAN2) and the IPMI NIC (IPMI LAN). At the moment the server is connected to the network on LAN1; both the IPMI and LAN1 have a static IP address. My intention is to disconnect the server from LAN1 and connect it to LAN2 during the installation of ESXi, making sure that LAN2 receives a static IP address as well. As soon as ESXi is running, I will reconnect LAN1 and create a VM for FreeNAS. I want to assign the LAN1 NIC to the FreeNAS VM by PCIe passthrough, if that is possible. My hope is that the original IP address for FreeNAS will be preserved that way.
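In case it is useful to someone doing the same dance: giving the ESXi management interface its static address can be done from the host shell. A sketch only; vmk0 and vSwitch0 are the ESXi defaults, and the addresses are examples for my LAN.

    # set a static IP on the management vmkernel interface
    esxcli network ip interface ipv4 set -i vmk0 -t static -I 192.168.1.20 -N 255.255.255.0
    # check which physical NIC (vmnic) backs the management vSwitch
    esxcli network vswitch standard list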

If anybody thinks that I am missing the mark here completely, please feel free to enlighten me. I don't have a spare server with a multi-NIC motherboard to experiment with.
 

melloa

Wizard
Joined
May 22, 2016
Messages
1,749
As per your signature system, you are using a SAS controller in IT mode, so I would:

- Save the config
- Connect a boot SSD and a datastore SSD to the motherboard SATA ports
- Install ESXi and configure a vSwitch with both NICs
- Create a VM with the same FreeNAS version you are running
- Set your router to hand out the old IP to the new MAC
- Add the SAS controller to the VM in passthrough
- Boot, import the volume, import the config
- That's all there is to it (a rough sketch of the import step follows)
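Roughly, that import step from the new FreeNAS VM's shell. A sketch: "tank" is a placeholder pool name, and the config itself is restored through the GUI (System -> General -> Upload Config).

    camcontrol devlist    # confirm the passed-through disks are visible
    zpool import          # list pools available for import
    zpool import -f tank  # -f because the pool was last used by the bare-metal host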
 

ChrisReeve

Explorer
Joined
Feb 21, 2019
Messages
91
Hello, jumping in here, as I am also planning on migrating to ESXi. First of all, I don't want to spend 700 USD on an Essentials key. Will the free option be good enough for me?

My current setup runs FreeNAS with the Plex plugin and one VM running Ubuntu for game servers/TeamSpeak.

Specs are as follows:
MB: Supermicro X9SRL-F
CPU: Xeon E5-2650 v2 (8C/16T; might upgrade to 10 or 12 cores if necessary, they are all under $200, which is fine)
RAM: 64 GB DDR3 ECC (will dedicate at least 32 GB to FreeNAS, and I am able to upgrade to 128 GB for about $100)
Boot drive: SanDisk 16 GB USB2 stick
Jails/VM drive: Intel 180 GB SATA SSD
Main (and only) pool: 10 x 10 TB WD Reds in one large encrypted RAIDZ2 pool, about 68 TB formatted space
Intel X540-T2 10GbE NIC
2 x LSI 9211i HBAs flashed to IT mode

I can spend a few hundred dollars on upgrades, but only if necessary. Max simultaneous Plex streams is 4-5 (usually only 1-3), and the game servers I am talking about peak at about 5-6 players at one time. I would like to be able to have up to 10 players in CS:GO or Garry's Mod (only one server at a time).

It might be overkill, but my main goal is to be able to have 10 players on a CS:GO server, the same 10 players on TeamSpeak 3, and 2-3 Plex streams, all at the same time with no issues. I can upgrade the RAM, VM storage, and CPU if necessary.

Right now everything sort of works fine, but I don't like running Plex as a plugin in FreeNAS, and I am not satisfied with the VM experience in FreeNAS, among other reasons because of the lack of Win10 support with any kind of performance.

My plan would be as follows:
Buy new mirrored boot drives that double as the datastore; I am planning on 2 x 250/500 GB SSDs. Can I install both ESXi and the VMs on the same datastore?
Then I'll pass both the 10GbE NIC and the two HBA cards through to the FreeNAS VM, to import my existing pool.
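A quick sanity check from inside the FreeNAS VM once the passthrough is in place would look something like this (a sketch; device names will differ on your system):

    dmesg | grep -i mps     # the 9211 HBAs use FreeBSD's mps driver
    camcontrol devlist      # all ten pool disks should be enumerated
    zpool import            # the existing pool should be offered for import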

When this is done, I'll install VM2, running Ubuntu for server hosting. I'm not sure what the best way to do Plex hosting is. Should I run 1 VM for Plex/torrenting and one VM for game server/TeamSpeak hosting, or just run all of this on one VM?

Will my E5-2650 v2 (8C/16T) be enough? Would I benefit from an upgrade to a 2695/2697 (12C/24T) CPU?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Can I install both ESXi and the VMs on the same datastore?
Yes

Should I run 1 VM for Plex/torrenting and one VM for game server/TeamSpeak hosting, or just run all of this on one VM?
A good way to think about this is to consider what kind of segregation you want in your snapshots/backups. If you would like to be able to make changes to Plex and roll Plex back without impact to the game server, then it makes sense to have separate VMs. If you don't see that as an issue, you can avoid the overhead of dedicating RAM and CPU resources to a second copy of the OS and just stack it all up in a single VM/OS.
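For what it's worth, that per-VM rollback is scriptable from the ESXi shell too. A sketch: the VM id 42 is a placeholder (see vim-cmd vmsvc/getallvms), and the trailing 0 0 means no memory snapshot and no quiescing.

    # snapshot the Plex VM before an upgrade
    vim-cmd vmsvc/snapshot.create 42 pre-upgrade "before Plex update" 0 0
    # list the VM's snapshots (and their ids) afterwards
    vim-cmd vmsvc/snapshot.get 42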

Will my E5-2650 v2 (8C/16T) be enough?
Probably. You'll need to get a sense of how much Plex transcoding you will do, but as a general rule, 8 cores is plenty.
 

ChrisReeve

Explorer
Joined
Feb 21, 2019
Messages
91
Thank you. I don't do a lot of transcoding; most streaming is direct play, or maybe direct stream of the video with the audio transcoded (which I assume is far less intensive than full video/audio transcoding). And I can always upgrade the CPU down the line if it becomes necessary.
 