
"Absolutely must virtualize FreeNAS!" ... a guide to not completely losing your data.

Status
Not open for further replies.

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
11,776
Thanks
3,018
#1
[---- 2018/02/27: This is still as relevant as ever. As PCIe-Passthru has matured, fewer problems are reported. I've updated some specific things known to be problematic ----]

[---- 2014/12/24: Note, there is another post discussing how to deploy a small FreeNAS VM instance for basic file sharing (small office, documents, scratch space). THIS post is aimed at people wanting to use FreeNAS to manage lots of storage space. ----]

You need to read "Please do not run FreeNAS in production as a Virtual Machine!" ... and then not read the remainder of this. You will be saner and safer for having stopped.

<the rest of this is intended as a starting point to be filled in further>

But there are some of you who insist on blindly charging forward. I'm among you, and there are others. So here's how you can successfully virtualize FreeNAS, less-dangerously, with a primary emphasis on being able to recover your data when something inevitably fscks up. And remember, something will inevitably fsck up, and then you have to figure out how to recover. Best to have thought about it ahead of time.

  1. Pick a virtualization platform that is suitable to the task. You want a bare metal, or "Type 1," hypervisor. Things like VirtualBox, VMware Fusion, VMware Workstation, etc. are not acceptable.

    VMware ESXi is suitable to the task.

    Hyper-V is not suitable for the task, as it is incompatible with FreeBSD at this time.

    I am not aware of specific issues that would prevent Xen from being suitable. There is some debate as to the suitability of KVM. You are in uncharted waters if you use these products.

  2. Pick a server platform with specific support for hardware virtualization with PCI-Passthrough. Most of Intel's Xeon family supports VT-d, and generally users have had good success with most recent Intel and Supermicro server grade boards. Other boards may claim to support PCI-Passthrough, but quite frankly it is an esoteric feature and the likelihood that a consumer or prosumer board manufacturer will have spent significant time on the feature is questionable. Pick a manufacturer whose support people don't think "server" means the guy who brings your food at the restaurant.

    You will actually want to carefully research compatibility prior to making a decision and prior to making a purchase. Once you've purchased a marginal board, you can spend a lot of time and effort trying to figure out the gremlins. This is not fun or productive. Pay particular attention to the reports of success or failure that other ESXi users have had with VT-d on your board of choice. Google is your friend.

    Older boards utilizing Supermicro X8* or Intel 5500/5600 CPUs and prior are expected to have significant issues, some of which are fairly intermittent and may not bite you for weeks or months. All of the boards that have been part of the forum's recommended hardware series seem to work very well for virtualization.

  3. Do NOT use VMware Raw Device Mapping. This is the crazy train to numerous problems and issues. You will reasonably expect that this ought to be a straightforward, sensible solution, but it isn't. The forums have seen too many users crying over their shattered and irretrievable bits. And yes, I know it "works great for you," which seems to be the way it goes for everyone until a mapping goes wrong somehow and the house of cards falls. Along the way, you've probably lost the ability to monitor SMART and other drive health indicators as well, so you may not see the iceberg dead ahead.

  4. DO use PCI-Passthrough for a decent SATA controller or HBA. We've used PCI-Passthrough with the onboard SAS/SATA controllers on mainboards, and as another option, LSI controllers usually pass through fine. Get a nice M1015 in IT mode if need be. Note that you may need to twiddle with the hw.pci.enable_msi/msix tunables to make interrupt storms stop. Some PCH AHCIs ("onboard SATA") and SCUs ("onboard SAS/SATA") work. Tylersburg does not work reliably. I've seen Patsburg and Cougar Point work fine on at least some Supermicro boards, but have had reports of trouble with the ASUS board. The Ivy Bridge CPU era is the approximate tipping point where things went from "lots of stuff does not work" toward "likely to work."

  5. Try to pick a board with em-based network interfaces. While not strictly necessary, the capability to have the same interfaces for both virtual and bare metal installs makes recovery easier. Much easier.

Now, here's the thing. What you want to do is use PCI-Passthrough for your storage, and create a virtual hardware platform that is very similar to your actual physical host... just smaller. So put FreeNAS on the bare metal, create your pool, and make sure that all works ... first! Then load ESXi. ESXi will want its own datastore, which cannot be on the PCI-Passthrough'd controller, so maybe add an M1015 in IR mode and a pair of disks for the local ESXi image and datastore (you have to store the FreeNAS VM somewhere, after all!). Create a FreeNAS VM and import the same configuration.
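The interrupt-storm workaround mentioned in item 4 is applied via loader tunables. A sketch of what that might look like in /boot/loader.conf (values illustrative, and only relevant to affected controller/hypervisor combinations; as noted later in this thread, FreeNAS 9.1RC2 and later should no longer need it):

```
# /boot/loader.conf -- disable MSI/MSI-X to quiet interrupt storms
# (illustrative; only for affected passthrough controllers)
hw.pci.enable_msi="0"
hw.pci.enable_msix="0"
```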

Now at this point, if ESXi were to blow up, you can still bring the FreeNAS back online with a USB key of FreeNAS, and a copy of your configuration. This is really the point I'm trying to make: this should be THE most important quality you look for in a virtualized FreeNAS, the ability to just stick in a USB key and get on with it all if there's a virtualization issue. Your data is still there, in a form that could easily be moved to another machine if need be, without any major complicating factors.

But, some warnings:

  1. Test, test, and then test some more. Do not assume that "it saw my disks on a PCI-Passthru'd controller" is sufficient proof that your PCI-Passthrough setup is stable. We often test even stuff we expect to work fine for weeks or months prior to releasing it for production.

  2. As tempting as it is to under-resource FreeNAS, do try to aggressively allocate resources to FreeNAS, both memory and CPU.

  3. Make sure your virtualization environment has reserved resources, specifically including all memory, for FreeNAS. There is absolutely no value to allowing your virtualization environment to swap the FreeNAS VM.

  4. Do not try to have the virtualization host mount the FreeNAS-in-a-VM for "extra VM storage". This won't work, or at least it won't work well, because when the virtualization host is booting, it most likely wants to mount all its datastores before it begins launching VMs. You could have it serve up VMs to other virtualization hosts, though, as long as you understand the dependencies. (This disappoints me too.)

    --update-- ESXi 5.5 appears to support rudimentary tiered dependencies, meaning you should be able to get ESXi to boot a FreeNAS VM first.

    Due to lack of time I have not tried this. If you do, report back how well (or if) it works.

  5. Test all the same things, like drive replacement and resilvering, that you would for a bare metal FreeNAS implementation.

  6. Have a formalized system for storing the current configuration automatically, preferably to the pool. Several forum members have offered scripts of varying complexity for this sort of thing. This makes restoration of service substantially easier.

  7. Since you lack a USB drive key, strongly consider having a second VM and 4GB disk configured and ready to go for upgrades and the like. It is completely awesome to be able to shut down one VM and bring up another a few moments later and restore service at the speed of an SSD datastore.
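As a sketch of warning 6, the script below copies a config database to a date-stamped file. The FreeNAS paths in the usage comment (/data/freenas-v1.db, /mnt/tank/configs) are assumptions you would adjust for your own system and pool layout:

```shell
#!/bin/sh
# Minimal sketch: copy a config database to a date-stamped file
# on the pool, so it survives loss of the boot device or VM.
# Usage: backup_config <source-db> <destination-dir>
backup_config() {
    src="$1"
    dest_dir="$2"
    stamp=$(date +%Y%m%d-%H%M%S)
    mkdir -p "$dest_dir" || return 1
    # Copy and print the resulting path on success.
    cp "$src" "$dest_dir/config-$stamp.db" && echo "$dest_dir/config-$stamp.db"
}

# On FreeNAS (paths assumed), run periodically from cron:
# backup_config /data/freenas-v1.db /mnt/tank/configs
```

Run it from cron so the copy happens automatically rather than relying on memory.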
If what is described herein doesn't suit you, please consider trying this option instead.
 
Joined
Mar 25, 2012
Messages
19,154
Thanks
1,854
#2
Re: "Absolutely must virtualize FreeNAS!" ... a guide to not completely losing your data

You just shot this forum in the foot. Now everyone's going to think it's okay to ignore one or more points and cry foul.... :(


Edit: Here's a tidbit for those virtualizing. ESXi 5 (not sure about any other version) has a very peculiar mapping of vCPUs to physical cores. This killed my server performance until I lowered the number of vCPUs in my machines.

I had assigned 4 cores to FreeNAS and 4 cores to a Linux VM on a 4-core (8-thread) system. The thought was that both could work hard without competing for resources. After all, both will pretty much never be heavily loaded at the same time, so this should be "smart". FreeNAS performed absolutely horribly even with the other VMs idle.

It turns out that a VM gets processing resources only when its total number of vCPUs is available (regardless of the VM's actual load). So even if FreeNAS is completely idle at about 0% CPU usage, the VM will NOT be given any timeslices unless 4 cores are free. I thought it would adjust dynamically, but it does not. So if you give one VM 5 cores and another 4 cores on the same 8-thread system, those 2 VMs cannot under any circumstances use the CPU at the same time. Big waste.

So the trick is to assign the minimum number of cores necessary to do the job. In my case, switching from 4 to 2 caused an increase in FreeNAS performance of almost 50%. Adjusting it to 3 increased it even further.

In my case, I now use 3 vCPUs for FreeNAS, 2 vCPUs for Linux, and all other VMs get a single core. Everyone wins, and performance is now much better for not over-allocating resources.
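Under the strict co-scheduling behavior described above, whether VMs can share the CPU reduces to simple arithmetic: a set of VMs runs concurrently only if their combined vCPU count fits in the host's hardware threads. A minimal sketch assuming that strict model (newer ESXi releases relax it):

```shell
#!/bin/sh
# Simplified model of strict co-scheduling: a set of VMs can run
# at the same time only if the sum of their vCPUs fits in the
# available hardware threads.
# Usage: can_run_together <threads> <vcpus-vm1> [<vcpus-vm2> ...]
can_run_together() {
    threads="$1"; shift
    total=0
    for vcpus in "$@"; do
        total=$((total + vcpus))
    done
    [ "$total" -le "$threads" ] && echo yes || echo no
}

can_run_together 8 5 4    # 5 + 4 = 9 > 8 threads -> prints "no"
can_run_together 8 3 2 1  # 3 + 2 + 1 = 6 <= 8    -> prints "yes"
```

This is exactly the 5-core/4-core example from the post: together they need 9 threads on an 8-thread host, so they can never run at the same time.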
 

jgreco

#3
Re: "Absolutely must virtualize FreeNAS!" ... a guide to not completely losing your data

You just shot this forum in the foot. Now everyone's going to think it's okay to ignore one or more points and cry foul.... :(
Yeah, oh well. It's the old story of safe-you-know-what vs. abstinence. Since they're going to do it anyway, it's better to have good information out there.
 

paleoN

FreeNAS Guru
Joined
Apr 22, 2012
Messages
1,403
Thanks
21
#4
Pick a virtualization platform that is suitable to the task. You want a bare metal, or "Type 1," hypervisor. Things like VirtualBox are not acceptable.
I see you didn't mention bhyve on an AMD processor. Which must mean it's a good choice.

But in all seriousness, the more correct information available, the better all around. This will give them the option of doing it properly.
 
Joined
Mar 25, 2012
#5
Re: "Absolutely must virtualize FreeNAS!" ... a guide to not completely losing your data

bhyve??? What is that an acronym for?
 

paleoN

#7
They did change how it's spelled, thankfully. The old spelling: BHyVe.
 
Joined
Jun 14, 2013
Messages
2
Thanks
0
#8
OK, I've read both posts, but I still have a question.

I want to test FreeNAS in a virtual machine on my laptop to learn it, without having to set it up on a physical network.

I've been searching online, but thus far have found solutions which need PCI passthrough. Will that apply on a laptop? I'm not sure.
 

titan_rw

FreeNAS Experienced
Joined
Sep 1, 2012
Messages
594
Thanks
67
#9
For testing purposes, you can disregard this entire thread. The assumption is that data integrity is not important in a testing environment.

This thread is for people who absolutely must virtualize a 'production' FreeNAS machine.
 

jgreco

#11
You don't need PCI passthrough, and you won't find a meaningfully usable laptop on which to use PCI passthrough (what are you going to pass through, the only disk controller?). For experimentation, yes, absolutely, have fun, use virtual disks; it's a bit more fragile than a hardware install, but plenty of opportunity to learn.
 
Joined
Jul 15, 2013
Messages
11
Thanks
0
#14
Okay, so I'm new to this and I want to do it right the first time.
Let me try to say what I understand.

You put FreeNAS on a USB drive, boot from it, and build your ZFS array with all your drives.
Then, on that zpool, you install ESXi. (How do you install ESXi over FreeNAS?)
With ESXi installed, you can unplug that FreeNAS USB drive and create a FreeNAS VM with *what drives?*
Then you fiddle with the FreeNAS VM.

This seems really weird to me and I think I got it all wrong.
Please enlighten me!
Thank you
 
Joined
Mar 25, 2012
#15
Nothing like what you are explaining.

You install ESXi on a hard drive (one that has nothing else on it).
You install FreeNAS on a USB stick (one that has nothing else on it).
You create a FreeNAS VM in ESXi and recover your config to it.

Then you do whatever you want to do at that point. Create a zpool or whatever.

There are many other ways to do the same thing, but in a different order. It's a matter of you understanding what to do and when. Failure to make these logical connections in your planning will result in lots of pain and misery later (as well as potential $ spent on replacing hardware that isn't compatible with your plans).

Based on your confusion, I'd say you are months away from having a firm grasp of what is going on between VMs and real hardware. You should just make a FreeNAS machine and use that.

If you have any other questions, you should make your own thread. The questions you asked are extremely elementary compared to the level of support this thread is intended to provide (no offense).
 
Joined
Jul 15, 2013
Messages
11
Thanks
0
#16
Well, it happens that I really love to learn new things. Where do you suggest I read about what is going on between VMs and hardware?
I recently ordered parts to make a FreeNAS machine, and I want other VMs (Windows Server/Ubuntu/dedicated game servers) besides FreeNAS that will use the FreeNAS zpool to run. Is there any docs about that?

I realize this is not on topic and we can continue this by PM if you wish.

Thank you for your help

My incoming build is
Xeon E3-1230v2 quad-core with Hyper-Threading, 3.3GHz (3.7 turbo)
Supermicro X9SCM-F
6x WD Red 2TB
Kingston 16GB ECC RAM (2x8)
Corsair CX500M
NZXT Source 210
 
Joined
Mar 25, 2012
#17
Well, it happens that I really love to learn new things. Where do you suggest I read about what is going on between VMs and hardware?
I recently ordered parts to make a FreeNAS machine, and I want other VMs (Windows Server/Ubuntu/dedicated game servers) besides FreeNAS that will use the FreeNAS zpool to run. Is there any docs about that?

I realize this is not on topic and we can continue this by PM if you wish.

Thank you for your help

My incoming build is
Xeon E3-1230v2 quad-core with Hyper-Threading, 3.3GHz (3.7 turbo)
Supermicro X9SCM-F
6x WD Red 2TB
Kingston 16GB ECC RAM (2x8)
Corsair CX500M
NZXT Source 210
There are no docs about it specifically. It's all about knowing how to apply various computer principles to achieve the desired objective without losing your data. Your first place to start would be to build a FreeNAS server and an ESXi server and use each one independently for a few months until you get a grasp of the fundamentals. You shouldn't have bought your hardware yet, as you have a ways to go before you are likely to be able to safely do what you are planning.

Please, as I asked before, start your own thread if you have questions. I hate having to hand out warnings and stuff. This section is for guides, not for questions, unless a question directly relates to a step in the guide.
 

jgreco

#18
I recently ordered parts to make a freenas machine and I want other VMs (windows server/ubuntu/dedicated game servers) besides freenas that will use the freenas zpool to run. Is there any docs about that?
Why yes, yes there is. There's a thread over in the Installation forum called "Absolutely must virtualize FreeNAS!" ... a guide to not completely losing your data. And had you read it, you would have come across the warnings I listed, specifically warning number 4.

Further, it specifically tells you that you need to read "Please do not run FreeNAS in production as a virtual machine" over in the N00bs forum, which indicates the problem with this in point number 6.

Now, the thing is, this isn't an absolute prohibition. I talk very specifically about production gear, because we rely on deterministic behaviours when designing complex systems, in order to assure things work well during events such as power loss recovery. For a lab setup, where maybe it doesn't matter if your virtualization host hangs for an hour during bootup, and then doesn't start its FreeNAS-hosted VMs because the datastore is unavailable, well, hey, have at it, just don't tell me I didn't warn you.

And if it seems like cyberjock and I are sounding maybe a little annoyed here, it may have something to do with the fact that you're posting questions indicating that you haven't read the available documentation in the very thread that provides the documentation you're asking for.

 

pbucher

FreeNAS Experienced
Joined
Oct 15, 2012
Messages
180
Thanks
21
#19
Just a quick FYI: messing around with MSI/MSI-X interrupts is no longer needed as of FreeNAS 9.1-RC2. You can safely remove tweaks to hw.pci.enable_msi/msix and/or their LSI-specific cousins.

The underlying issue has been fixed in FreeBSD, with the changes making it into the FreeNAS source base as of 9.1-RC2 (it will also be in FreeBSD 9.2). Longer term, the fix will need to be removed and/or tweaked if and when VMware ever updates some of the interrupt code they used from Linux.
 

jgreco

#20
I thought the "fix" was that they added it to some msi/msix quirks table. I'm not near a src tree so I can't actually look, but I seem to recall that device-specific sysctls or tunables were also now available.
 