Please do not run FreeNAS in production as a Virtual Machine!


jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
This +1. The point we are trying to make is that while there may be a guide for setting it up, there is not a guide for resolving problems that may crop up later. Adding a hypervisor squares the solution space. While it is definitely navigable, you (@messerchmidt, not @pirateghost) must have the tools to find your way back to a working setup without data loss.

Squares? You mean cubes, at least. And the paths that can lead to pain are many. That's why I've carefully documented both the pitfalls and the ways to be successful in these threads, but they are not a beginner's guide to virtualization. It's just like how most people don't belong messing around under the hood of a car: I can discuss how to hack in a turbocharger, but that does not mean that my discussion is suitable for the average driver to attempt the mod. It's intended for other people who routinely work under the hood and feel comfortable disassembling and reassembling an engine. Those people will find the guidance I've given useful, probably even obvious in hindsight once they hear the reasoning. Those people are likely to install the turbocharger and have it work great. And if it doesn't, they understand that it's their own fscking fault. But when some guy who has never even changed out a car battery gets in there and tries to install a turbocharger, then there's some/lots/total risk...
 

9C1 Newbee

Patron
Joined
Oct 9, 2012
Messages
485
GREAT analogy! And the funny thing is, I was just talking to my buddy who tunes power-adder (read: turbocharged) engine setups. He said the #1 thing that fsckers up an engine under boost is not having a proper fuel system. He said it is quite frustrating because people just insist on cutting corners in that area. They might even get a few passes down the drag strip, but it will end up biting them in the ass and costing WAY more in the long run. Same people, different hobby.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
The problem is simple though. People that virtualize are almost always choosing to do it to cut corners. They just don't like being told they can't cut certain corners, then they flip the heck out. :(
 

petr

Contributor
Joined
Jun 13, 2013
Messages
142
The problem is simple though. People that virtualize are almost always choosing to do it to cut corners. They just don't like being told they can't cut certain corners, then they flip the heck out. :(

I am starting to get the appeal of virtualisation. Originally I thought of it as a bit too much of a risk; however, with dedicated storage controllers passed via VT-d to the VM on a server-grade Supermicro board, is there actually much that could go wrong?

I do not need to run any demanding VMs, but it seems a bit tricky to run even the most basic VMs in VirtualBox with any degree of reliability (I am exploring the host I/O cache checkbox setting now; it seems to be running OK even though there was a scrub happening on the pool in the background).
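In case it helps anyone else, the same setting can be flipped from the command line; a minimal sketch, assuming a VM named "freenas-test" with a storage controller named "SATA":

# Enable the host I/O cache on the VM's SATA controller
# (equivalent to ticking the "Use Host I/O Cache" checkbox in the GUI)
VBoxManage storagectl "freenas-test" --name "SATA" --hostiocache on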

In any case - I've read the first post, and most of the points do not apply to my case; only the one about spotty VT-d implementations does. Is there any knowledge base of tested configurations for this?
 

Krdan

Cadet
Joined
Jan 15, 2015
Messages
2
Hello everyone, I'm a new FreeNAS noob. I read most of the thread and I have a few questions, if I may ask.

My setup first:
Dual-CPU motherboard, currently with a single Xeon E5620 installed
32GB ECC RAM
Hardware RAID controller with 512MB ECC RAM on board and a dedicated battery backup
4-disk RAID 10
Virtualization via CentOS (minimal installation with no other services) + VirtualBox

In my office (half a dozen PCs) we need a NAS, and we were planning to deploy a VM with FreeNAS to fulfil our needs. The recommended RAM and disk space seem to pose no issue, but I did not fully understand what can go wrong with a single virtual disk (apart from the 8GB boot disk) attached to FreeNAS on top of our RAID 10.

Should I use ZFS or UFS? Can one virtual disk be enough, or should I mirror two virtual disks? (Point 3 of the initial post seems to say this.)

As usual, if there is something in particular I should read, let me know where I can find documentation.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
I believe jgreco explains it nicely in the OP:

A single virtual drive provides no protection whatsoever against bitrot. ZFS will see the errors and go "AAAH! I DON'T HAVE REDUNDANCY!", because all the disks were abstracted away, even though the setup could have had redundancy at the ZFS level.

UFS support in FreeNAS is dead, so forget about it. For ZFS, you'll definitely want at least a mirror, meaning two virtual devices, each backed by fast, redundant storage. Or pass through a proper HBA and use ZFS as it was intended to be used.
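To make it concrete, here is roughly what the difference looks like at the pool level; a sketch with hypothetical device names, da1 and da2 standing in for the two virtual disks:

# One virtual disk: ZFS detects bitrot but has nothing to repair from
#   zpool create tank da1
# Two virtual disks as a mirror: ZFS can self-heal from the good copy
zpool create tank mirror da1 da2
zpool status tank   # should show mirror-0 with both devices ONLINE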

Frankly, with 32GB of RAM for the host system, virtualizing is a bad idea. With so little RAM, you're better off using FreeNAS on that bare metal and moving the other VMs somewhere else.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
In my office (half a dozen PCs) we need a NAS, and we were planning to deploy a VM with FreeNAS to fulfil our needs. The recommended RAM and disk space seem to pose no issue, but I did not fully understand what can go wrong with a single virtual disk (apart from the 8GB boot disk) attached to FreeNAS on top of our RAID 10.

Should I use ZFS or UFS? Can one virtual disk be enough, or should I mirror two virtual disks? (Point 3 of the initial post seems to say this.)

As usual, if there is something in particular I should read, let me know where I can find documentation.

This is for you

https://forums.freenas.org/index.ph...ative-for-those-seeking-virtualization.26095/
 

Krdan

Cadet
Joined
Jan 15, 2015
Messages
2
Thanks both for replies, I'll evaluate the solution better before moving to production.
 

9C1 Newbee

Patron
Joined
Oct 9, 2012
Messages
485
People just can't help themselves.
 

djk29a

Dabbler
Joined
Dec 13, 2013
Messages
13
It really needs to be repeated that running virtualization for a NAS is already a complication on top of something fairly complex on its own. You're basically multiplying out your risks by virtualizing, in both theory and in practice. So unless you truly grasp every aspect of your setup, why you're doing things, and exactly where the loose ends are between what you think should happen and what actually happens, you should assume you don't know squat. Hell, I have a VMware VCP, and I was a bit surprised to hear years ago that ZFS is anecdotally better off in virtualized mode rather than physical (physical is what's recommended for specific cases that need maximum hardware access while getting some virtualization benefits, such as with MS clustering).

The only setup I'll consider doing virtualization with for a ZFS NAS (even at home, which is what I'm building now from my pure physical FreeNAS setup) is one with a really solid VT-d setup and a controller passed off to the VM. This already destroys 80%+ of the reasons anyone I've seen professionally would virtualize anything (you can't vMotion it anymore, and you're now locking the ESXi host out of being as independent as it should be for handling transient workloads in compute clusters, among others). It's ludicrous because in almost all these situations you've paid for an expensive SAN. Below that kind of enterprise budget, you're generally in start-up budget territory, and not the kind of "start-up" that's having launch parties and getting 7-figure funding rounds every year (you're not a start-up anymore at that stage, IMO). Yes, there are mid-sized businesses with a few hundred employees too, but in my experience they tend to have really modest datacenter needs, just like start-ups. The budgets start exploding as management overhead goes up, I've found, not as the workforce itself gets bigger.

But I'll agree that you almost certainly should not be virtualizing FreeNAS if you're not capable of explaining the points in the OP.
Pffft... Like there's any other reason home users have hotswap bays!
Honestly, I want hotswap bays for availability and risk avoidance: every time I open up a case, mess with cables, and knock the other hard drives around in my usual clumsy way, I will probably cause more downtime. And downtime for my NAS means I have to do more work that I don't really feel like doing and that holds no actual interest for me, as well as getting yelled at by the wife for ruining movie night. I don't get fulfillment out of janitoring my computers, and it doesn't make me a better engineer either. And ask any married man if he'd pay a few hundred bucks to remove much of the chance of getting wife rage from his hobbies: the answer is "yes."
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
This already destroys 80%+ of the reasons anyone I've seen professionally would virtualize anything (you can't vMotion it anymore, and you're now locking the ESXi host out of being as independent as it should be for handling transient workloads in compute clusters, among others). It's ludicrous because in almost all these situations you've paid for an expensive SAN. Below that kind of enterprise budget, you're generally in start-up budget territory

Well, that there's the thing, the VAST majority of ESXi deployments are not multiple-host vSphere Enterprise deployments, but rather Free ESXi or maybe vSphere Essentials. Remember that a "deployment" may consist of a hundred hosts on a hot SAN but that still only counts as one deployment. ;-)

Virtualization isn't all about gaining vMotion. It's about saving on hardware, reducing wasted resources, etc. We used to burn a lot of watts and have racks full of gear, some of which did relatively little "busy work." I can run all the services we ran back in 2000 on two well-specced 1U E5 Xeons (though I need SSD datastores to get the necessary I/O), and that's a hell of a reduction in opex: a full rack is $1800/month, and suddenly 40U and 3200 watts is reduced to 2U and 500 watts. Being able to transparently shift loads around would be nice, but it'd require more gear, more space, and more watts, for no significant benefit. As a VMware certified guy, you may be focused on the finer points, where vMotion for load management and maintenance is indeed nice (and the other features are also nice), but as a company officer, while I'm aware of that, the bigger picture is the overall win in reduced capex and recurring opex that you can get in many environments just by getting a few sufficiently large hypervisors.

The problem is that for every vSphere professional out there, there are ten consultants/IT guys/hobbyists who have seen vSphere and maybe played with/used/deployed it enough to deploy it halfway competently, but who may not fully grasp all the finer points. When everything you've virtualized seems to work swimmingly well, the obvious thing to try is to virtualize FreeNAS, and until we started clamping down on the practice of just Doing Some Random Strategy And Expecting That It Is Obviously Correct Because It Works Today, we were seeing a lot of people come in with all-in-one box disasters and various other catastrop**ks that would begin unraveling the moment there was a hardware event and an operational deviation resulted. The worst of these seemed to be a parade of users who were hacking RDM to work with local SATA.

Now, it ought to be obvious that it's possible to virtualize FreeNAS, but for an appliance where the expectation is that it safely stores your data, the introduction of virtualization just adds a lot of new sharp edges you have to make sure you don't cut yourself on. Some of us do virtualize ... but it is a touchy thing. We expect that most people run FreeNAS to securely store their data, and doing so virtually requires some expertise with VMware, FreeNAS, and the underlying hardware. It's certainly possible to do - and I do it here - but not a beginner-level project.
 

djk29a

Dabbler
Joined
Dec 13, 2013
Messages
13
Not going to get too deep since we basically agree, but there are many, many reasons people choose to virtualize, and cost control is hardly ever uttered in C-level talks anymore because everyone's really flush with cash (again). Conversations have shifted mostly to "cloud cloud cloud" instead of "cloud cloud cost." As an example from 7+ years ago, the DoD specifically chose to virtualize not to consolidate but to "standardize" their environments. The DoD's massive cost overruns come from sheer mismanagement and a culture of ineffective management contributing to sprawl, and $400 million in licensing is a drop in the ocean of Pentagon money for a chance to save billions and billions in time lost to IT being so fragmented. Strangest looks I ever saw from anyone at VMware when they explained that one, but I think their million-dollar individual sales bonuses are hardly keeping them up at night (not an exaggeration).

The #1 loss with the pinned VM with passthrough is that your DR scenario gets messier (likely requiring third-party help now, on top of SAN-based WAN syncing), rather than the loss of vMotion, which is an HA thing. DR is something people don't really talk about on here, even though it's something everyone in storage professionally is kind of obsessed with. Even for home scenarios, if you have something like 20TB of stuff to do DR for, I'm not sure CrashPlan or Backblaze will suffice anymore.

I'm doing this at home because I know what I'm doing, I accept the risks, and I'm cheap but not completely stupid either, so I back up anything somewhat important elsewhere and use server-grade hardware to avoid headaches galore (its cost has dropped drastically for home users over the last several years). I was reminded of why I do it when I tried to fire up ESXi for a quick test on an Asus Z97I-Plus motherboard: nothing worked besides the boot-up itself. No storage, no network, no USB. My E3-1230 Intel S1200BTS based box? Even that is a compromise, because one of the NICs isn't supported on a stock ESXi 5.0 install and there's a bizarre USB input issue that keeps local USB input from working unless I use the onboard USB slot. In contrast, I've never, ever had to deal with this when working professionally.
 

newbstarr

Cadet
Joined
Feb 5, 2015
Messages
4
In my brief escapade with FreeNAS and NAS4Free, virtualization of the individual appliances worked fine. It was only ESXi, with storage accessed through methods not explicitly listed as supported, that did not work out. If you lose your host/controller/appliance head end, you still have your volume information on the disks and should be fine, assuming that case. ESXi's VT-d tech is still maturing, as are most implementations from my brief foray into the technology, and attempting to hack your way through unsupported configurations is always a mire of problems. As the good people here pointed out to me, RDM is not a particularly good solution for a lot of situations, and I proved that for myself; that is exactly what my testing showed. ESXi datastores are a somewhat messy layer to attempt to write data through.

If my thread was not locked, I would have put this next part there; maybe if you guys do, it will help you with some other hard heads who've backed themselves into a corner.

My 'solution' is this: if you want a hypervisor that can hook into some convoluted storage solution, in my case a SAN drawer without a filer, and provide that storage with some sort of protection, then instead of heading for a virtualized appliance dedicated to the task on ESXi, which might misleadingly seem like a good idea, just get an OS that can more easily do both.
I ended up running a CentOS substrate (on a side note, this wonderfully returned MPIO to my mix as well), which retained the use of ZFS and RAIDZ2 (RAID6) natively; it also has a nice mature hypervisor in KVM and can run some services off the hypervisor host to share your clustered storage, etc. Not quite a lightweight hypervisor situation, but it is a one-box solution. My 14-disk FC setup running ZFS is actually using less RAM (OK for my purposes) and with MPIO might even end up benching faster; numbers TBA, but first I've got to test the stability hard enough for long enough to be sure. So far, though, this appears to be a much better solution than running ESXi.
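For anyone following along, the pool side of this is nothing exotic; a rough sketch under ZFS on Linux with hypothetical device names (in practice, /dev/disk/by-id paths are safer, since they survive reboots):

# 14-disk RAIDZ2 pool: double parity, analogous to RAID6
zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm sdn
zpool status tank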

Thanks for all the help freenas forum.
 

homerjr43

Dabbler
Joined
Mar 24, 2015
Messages
16
I have read all of the forum guides and the warnings detailed throughout this thread, but I am still looking for guidance. I have a 12-bay Supermicro 2U server with an X8DTN+ and 2x L5520 Xeons. I also have 24GB of ECC RAM and an Intel M1505 flashed to IT mode. My plan is to follow the guide that allows FreeNAS to boot from bare metal or from ESXi with the same results. So, assuming I can set up PCI passthrough correctly, my data should be accessible and "safe" whether I boot directly from USB or via ESXi.

OK, assuming I get this set up properly, with ESXi running on a 32GB SSD and FreeNAS running on a USB key, can I safely install Windows on the same 32GB SSD just to run Windows Media Center/DVR/Kodi (XBMC) with media extender support?

Also, assuming the PCI passthrough is properly set up, can I move the hard drives to new hardware and still have access to my RAIDZ2? If using ESXi with passthrough, is there anything that makes ESXi required in the future?

Based on some more reading, I plan to pass the onboard SATA ports and the M1505 ports through to the FreeNAS VM. I would then add a small two-port SATA card to run the 32GB SSD for the ESXi install, plus a second SSD for my Windows install if needed. I also plan to run FreeNAS from one USB key, and numerous plugins/jails from another USB key.

Does this setup make sense? I also may buy another 12GB of RAM (triple channel) and give 8GB to the Windows VM. Thanks for the advice!
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,545
I have read all of the forum guides and the warnings detailed throughout this thread, but I am still looking for guidance. I have a 12-bay Supermicro 2U server with an X8DTN+ and 2x L5520 Xeons. I also have 24GB of ECC RAM and an Intel M1505 flashed to IT mode. My plan is to follow the guide that allows FreeNAS to boot from bare metal or from ESXi with the same results. So, assuming I can set up PCI passthrough correctly, my data should be accessible and "safe" whether I boot directly from USB or via ESXi.

OK, assuming I get this set up properly, with ESXi running on a 32GB SSD and FreeNAS running on a USB key, can I safely install Windows on the same 32GB SSD just to run Windows Media Center/DVR/Kodi (XBMC) with media extender support?

Also, assuming the PCI passthrough is properly set up, can I move the hard drives to new hardware and still have access to my RAIDZ2? If using ESXi with passthrough, is there anything that makes ESXi required in the future?

Based on some more reading, I plan to pass the onboard SATA ports and the M1505 ports through to the FreeNAS VM. I would then add a small two-port SATA card to run the 32GB SSD for the ESXi install, plus a second SSD for my Windows install if needed. I also plan to run FreeNAS from one USB key, and numerous plugins/jails from another USB key.

Does this setup make sense? I also may buy another 12GB of RAM (triple channel) and give 8GB to the Windows VM. Thanks for the advice!
Since we're ignoring large amounts of information / research / experience regarding VMs and hardware, I'll consult my magic 8-ball....

"outlook not so good"
 

djk29a

Dabbler
Joined
Dec 13, 2013
Messages
13
...Windows on the same 32GB SSD just to run Windows Media Center/DVR/Kodi (XBMC) with media extender support?

...can I move the hard drives to new hardware and still have access to my RAIDZ2? If using ESXi with passthrough, is there anything that makes ESXi required in the future?

Based on some more reading, I plan to pass the onboard SATA ports and the M1505 ports through to the FreeNAS VM. I would then add a small two-port SATA card to run the 32GB SSD for the ESXi install, plus a second SSD for my Windows install if needed. I also plan to run FreeNAS from one USB key, and numerous plugins/jails from another USB key.

Does this setup make sense?...
Sounds like you've got what's likely an unnecessarily complicated setup.

PCI passthrough means that the VM sees everything on the drives exactly as-is, just as if it were not virtualized. The hard drives that the RAIDZ is set up on will not be affected, no. Ideally you'd use the same HBA in another setup down the road, but it's not terribly necessary, because the layout would be discovered on import.
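The mechanics of the move are just an export and an import; a minimal sketch with a hypothetical pool name:

# On the old system, before pulling the drives (or just shut down cleanly)
zpool export tank
# On the new system: scan the attached disks for importable pools, then import
zpool import
zpool import tank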

I'm not sure why you'd try to separate your FreeNAS VM from your other ones by booting it off of USB instead of, say, an SSD. Booting from the SSD would avoid potential complications with your ESX server accidentally booting off of the FreeNAS USB stick instead of the SSD for ESX. I'm really not sure why you want a separate set of SATA ports on top of that, either.

Here's the setup I'm going to run for something similar to what you're trying to achieve:

* 240GB SSD formatted as VMFS, holding ESXi, my NAS OS VM, and any other critical VMs. This would be hooked up to the onboard SATA ports.
* M1115 PCI passthrough to the NAS OS VM. This will have all my mass storage.
* Maybe another SSD in the future for more VMs. This could be on the onboard SATA, or, if I feel like wasting even more money on this rather obsolete Sandy Bridge Xeon setup, I could spring for a PCIe-based SSD.

Expansion of VMs could easily be done by just adding another SSD and mapping VMDKs to the VMs. Even if cost is an issue, just getting a bigger SSD would be cheaper than trying to add more USB drives, IMO. If you want some logical separation between a FreeNAS jail and the OS, you can simply bind virtual disks to the VM and mount the jail partition as one of those virtual disks. It's easier to manage than some rat's nest of USB sticks hanging out of a server, if you ask me. If you're trying to put stuff in FreeNAS onto USB sticks because you're worried about wear on the SSD, I'd recommend stopping now and just getting a bigger or additional SSD. USB sticks are a lot more fragile than SSDs in practice, and who says you can't just have your FreeNAS build boot from the same exact image as a USB stick would, except off of a VMDK, thereby avoiding the write-wear issue?

Have you ever worked with ESX/ESXi before? I would highly, highly recommend learning how to work with ESX/ESXi well before even thinking about FreeNAS, if your goal is to virtualize FreeNAS.
 

homerjr43

Dabbler
Joined
Mar 24, 2015
Messages
16
Anodos, thanks for not helping :) I am just trying to choose the best option. Both jgreco and cyberjock run FreeNAS in a VM, and my hardware and plan match the guides provided. I am not blindly going the VM route; I have read the research and the experience of numerous others. With proper PCI passthrough, it appears that ZFS works well and that the data can be read without a VM layer. I come to the forum for advice, not snarky comments. I am just trying to determine what "specific" problems ESXi could cause.

Djk29a, thanks for your constructive comment. The reason for running FreeNAS off a USB key is that, in the event of any type of error or FreeNAS problem, I could use it to load FreeNAS on the bare metal without using ESXi. From the guide provided by jgreco, this is helpful because troubleshooting is much easier if FreeNAS is running on bare metal. Also, per the guides, running jails and plugins from another USB stick allows for better USB longevity, because the FreeNAS install is basically read-only. I might run Windows and ESXi from the same SSD, hence my question above. To answer your other questions, I plan to get my server up and running with 6 disks as a test server before going production. This is when I plan to really give ESXi a try and spend time understanding it. Also, I need the extra SATA ports because my FreeNAS VM will require all of the integrated Intel SATA ports and all of the ports on the M1505.

I understand that VMs and FreeNAS are a dirty subject to talk about; I am here to ask for assistance. I plan to have twelve 3TB drives in a RAIDZ2, so I am doing my research now before I get everything up and running. In the end, I may just run FreeNAS on the bare metal, but I would love the ability to use my dual Xeons, with 8 cores and 16 threads, for a little more than just my NAS. I have an HDHomeRun, which is a CableCARD tuner. If I can run Windows in a VM as my DVR, all TVs in my house can share the same DVR collection. Also, using media extenders on the Xbox 360 and Plex, I can simplify my 3 HTPC setups. In fact, if this works the way I want, I can get rid of all 3 HTPCs.

Thanks for the advice. If you are here to tell me not to use ESXi, could you please provide details as to why that are not already laid out in the three posted guides on the forums? Because based on the forum posts and advice from jgreco and cyberjock, this setup is possible and effective; you just have to know what you are doing.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
I also have 24GB of ECC RAM

RAM: Shy but possible

My plan is to follow the guide that allows FreeNAS to boot from bare metal or from ESXi with the same results.

Good

So, assuming I can set up PCI passthrough correctly,

"Assuming...." X8 boards seem to have less success with VT-d...

my data should be accessible and "safe" whether I boot directly from USB or via ESXi.

Safe strategy

OK, assuming I get this set up properly, with ESXi running on a 32GB SSD and FreeNAS running on a USB key, can I safely install Windows on the same 32GB SSD

Yes

just to run Windows Media Center/DVR/Kodi (XBMC) with media extender support?

... hears the Windows stuff as white noise and goes "uhhhh"

Also, assuming the PCI passthrough is properly set up, can I move the hard drives to new hardware and still have access to my RAIDZ2?

Yes

If using ESXi with passthrough, is there anything that makes ESXi required in the future?

No, except that you lose access to the Windows VM.

Based on some more reading, I plan to pass the onboard SATA ports and the M1505 ports through to the FreeNAS VM. I would then add a small two-port SATA card to run the 32GB SSD for the ESXi install, plus a second SSD for my Windows install if needed. I also plan to run FreeNAS from one USB key, and numerous plugins/jails from another USB key.

Mmmm, messy. Hard to say whether the added-on SATA card will work as desired to boot ESXi, and I don't know offhand whether the X8 SATA controller can be handed off with VT-d. You could lean slightly more on the virtualization layer here and put the FreeNAS boot image on the ESXi datastore. That's completely fine; you just need to keep a copy of the configuration in case you need to migrate to bare metal. Get a nice 60GB SSD and put ESXi, the Windows VM, and the FreeNAS boot image (an 8-16GB HDD vmdk) all on it. Trying to make lots of USB keys work with ESXi is creepy.
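On keeping a copy of the configuration: it's a single SQLite database on the boot device, so a periodic copy is all you need to rebuild on bare metal. A minimal sketch, hypothetical hostname:

# Pull the config database off the running FreeNAS instance
# (same data as System -> General -> Save Config in the GUI)
scp root@freenas.local:/data/freenas-v1.db ./freenas-config-$(date +%Y%m%d).db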
 

homerjr43

Dabbler
Joined
Mar 24, 2015
Messages
16
Wow, thanks so much for the great advice!! I truly appreciate it. I will look into X8 boards and VT-d, and I will look in the BIOS to determine if booting from a two-port SATA card is possible. From what I have read, it appears that the onboard Intel SATA should be PCI passthrough capable. I know it's small, but the 32GB SSD should be plenty, because the Windows VM will be under 10GB total, which still leaves 8-16GB for FreeNAS and at least 6GB for ESXi itself. As for the Windows white noise, I would love to get rid of it, but Windows is the only platform that supports the encryption needed for CableCARDs, which is very unfortunate.

Since you and cyberjock seem to be the gurus, can you give me some more advice or links regarding the possible dangers? What types of things should I test/break and try to fix before going production? Also, is it smart to have two FreeNAS installs in ESXi, so that you can upgrade one and still fall back to the old one if there is a problem? Finally, if I am patient and plan properly, is virtualized FreeNAS as bad as people make it out to be?

Thanks again!!!
 