
Please do not run FreeNAS in production as a Virtual Machine!

Status
Not open for further replies.

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
11,781
Thanks
3,036
#1
[---- 2014/12/24: Note, there is another post discussing how to deploy a small FreeNAS VM instance for basic file sharing (small office, documents, scratch space). THIS post is aimed somewhat more at people wanting to use FreeNAS to manage lots of storage space. ----]


FreeNAS is awesome. FreeNAS can and will run as a VM. That does not make it a good idea.

  1. FreeNAS is designed to run on bare metal, without any clever storage systems (UNIX/VMFS filesystem layers, RAID card caches, etc!) getting in the way. Think about this: ZFS is designed to implement the functionality of a RAID controller, except that its cache is your system's RAM and its processor is your system's CPU, both of which are probably a lot larger and faster than their equivalents on your hardware RAID controller!
  2. Without direct access to the hard drives, FreeNAS lacks the ability to read SMART data and identify other developing problems or storage failures.
  3. A lot of the power of FreeNAS comes from ZFS. Passing a single virtual disk to ZFS to be shared out via FreeNAS is relatively safe, except that ZFS will only be able to detect, not correct, any errors that are found, even if there is redundancy in the underlying storage.
  4. There is a great temptation to create multiple virtual disks on top of nonredundant datastores in order to gain "MOAR SPACE!!!". This is dangerous. Some specific issues to concern yourself with: the data is unretrievable without the hypervisor software; the hypervisor might be reordering data on the way out (which makes the pool at least temporarily inconsistent); and the hypervisor almost certainly handles device failures non-gracefully, resulting in problems ranging from a locked-up VM to an unbootable one, plus interesting challenges once you've replaced the failed device.
  5. Passing your hard disks to ZFS as RDM, to gain the benefits of ZFS *and* virtualization, seems like it would make sense. Except that the actual experience of FreeNAS users is that this works great right up until something bad happens, at which point usually more things go wrong, it becomes a nightmare to work out what has happened underneath the RDM layer, and in many instances users have lost their pool. VMware does not support using RDM in this manner, and relying on hacking up your VM config file to force it to happen is dangerous and risky.
  6. FreeNAS with hardware PCI passthrough of the storage controller (Intel VT-d) is a smart idea, as it actually addresses the three points above. However, PCI passthrough on most consumer and prosumer grade motherboards is unlikely to work reliably. VT-d for your storage controller is dangerous and risky to your pool. A few server manufacturers seem to have a handle on making this work correctly, but do NOT assume that your non-server-grade board will reliably support this (even if it appears to).
  7. Virtualization tempts people to under-resource a FreeNAS instance. FreeNAS can, and will, use as much RAM as you throw at it, for example. Making a 4GB FreeNAS VM may leave you 12GB for other VMs, but it places your FreeNAS at a dangerously low amount of RAM. 8GB is the floor, the minimum.
  8. The vast majority of wannabe-virtualizers seem to want to run FreeNAS in order to provide additional reliable VM storage. Great idea, except that virtualization software typically wants all of its datastores to be available prior to powering on VMs, which creates a bootstrap paradox. Put simply, this doesn't work, at least not without lots of manual intervention, timeouts during rebooting, and other headaches. (2013 note: ESXi 5.5 may offer a way around this.)
I'm pretty sure I'm forgetting a few. But the conclusion is this: it's perfectly fine to experiment with FreeNAS in a VM. However, if you run it in production, put your valuable data on it, and then something bad happens, and you absolutely positively must get your data back, there probably won't be a lot of help available from the forum. We've seen it happen again, and again, and again. Sigh.
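Points 2 and 3 above are easy to see for yourself from a shell on the box. The sketch below shows the checks a bare-metal install (or one with true PCI passthrough) can run; behind a virtual disk, smartctl typically talks to the hypervisor's fake device rather than the real drive. The device name `da0` and pool name `tank` are placeholders, not from this thread:

```shell
# Sketch only: 'da0' and 'tank' are placeholder names.
check_disk_and_pool() {
    dev=${1:-/dev/da0}       # physical disk device (FreeBSD-style name)
    pool=${2:-tank}          # ZFS pool name
    smartctl -H "$dev"       # overall SMART health verdict from the drive itself
    smartctl -A "$dev"       # attribute table: reallocated sectors, pending sectors, etc.
    zpool status -v "$pool"  # per-device read/write/checksum error counters
    zpool scrub "$pool"      # verify every block; repair needs real redundancy (point 3)
}
```

With a single virtual disk, the scrub can still *detect* corruption via checksums, but with no second copy to read from, ZFS has nothing to repair it with.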
 

pbucher

FreeNAS Experienced
Joined
Oct 15, 2012
Messages
180
Thanks
21
#2
Good post; I'll only disagree with the conclusion that you shouldn't virtualize FreeNAS.

Virtualizing FreeNAS under ESXi works great if you know how to set it up; you should use an LSI SAS card with VT-d (see here). Don't bother unless you have a minimum of 32GB of RAM, unless you are just hosting a few lightweight VMs. I've got FreeNAS running comfortably in a VM with 6GB of RAM for my house server, and for my heavy-duty servers at work I give them 20GB of RAM. For any kind of performance, give them 4 CPU cores too. My house server doesn't use a server-grade board (not yet), but a consumer board adds risk, so don't try to use anything cheap: google "ESXi whitebox" and find boards that people are using and have proven VT-d works on. The biggest risk is that, because ZFS caches disk writes in RAM, you really should go with a server-grade board with ECC RAM; if you are going to risk it with non-ECC RAM, then do a good burn-in using memtest to verify that your RAM is good, and don't use anything cheap.

On #6: you've got to have a drive to house the FreeNAS VM for ESXi, or just don't do it.

In short, unless you like tinkering with this stuff, just buy a cheap RAID controller that has battery-backed RAM cache and does RAID-5, or even better, buy one of these. Virtualized FN done right works really great; I've pushed many TBs of data to my virtualized FreeNAS boxes in the last 6 months, and my only downtime was to upgrade from 8.3.0-RC1 to 8.3.0 to 8.3.1.
 

9C1 Newbee

FreeNAS Experienced
Joined
Oct 9, 2012
Messages
482
Thanks
47
#3
You seem to be smart and to know what you are doing; I don't think you are the target audience. As for most of the users, myself included, NFW (no way) should we attempt what you are talking about. And he is saying that if you do it and hose all your data, don't expect much help from here. After that, we enjoy bashing the forum and complaining about how the free software sucks. This is a warning for newbs like me.
 

jgreco

#4
This is a warning for newbs like me.
Pretty much. I mostly virtualize FreeNAS too, but with a very specific formula that is intended to minimize the risks. I've thought about documenting that and the reasoning, but lots of experience suggests that people will pick and choose the convenient bits, and then I'll hear "but you said this works" when they've chosen to ignore something significant. Someone who has bothered to comprehensively read the forum will wind up seeing most of the truly important bits ... the less important but also maybe less obvious bits like "Make sure the NAS has all its RAM reserved so it doesn't vswap" aren't likely to actually kill you.
 
Joined
Mar 25, 2012
Messages
19,151
Thanks
1,857
#5
In no way, shape, or form am I looking to virtualize the FreeNAS host.
Then the warning from this thread doesn't apply to you. But, keep in mind that there are potential serious performance issues with using ESXi datastores on ZFS. Search the forums for additional information.
 
#6
There are plenty of ways to back up your data, but trying to back up iSCSI devices that are currently in use can be difficult. You should read up more on the options and ask any other questions you have in a new/different thread.
 

jgreco

#7
I wasn't planning on using ZFS since I think the physical box may only be able to support 2GB RAM. For ZFS, I've heard 4GB RAM is recommended.
With ZFS, 8GB of RAM is recommended - a change that has temporarily vanished from the docs because apparently there was a problem with the web server, but the text will be clarified as follows:

===RAM===

The best way to get the most out of your FreeNAS® system is to install as much RAM as possible. If your RAM is limited, consider using UFS until you can afford better hardware. FreeNAS® with ZFS typically requires a minimum of 8 GB of RAM in order to provide good performance and stability. The more RAM, the better the performance, and the [http://forums.freenas.org FreeNAS® Forums] provide anecdotal evidence from users on how much performance is gained by adding more RAM. For systems with large disk capacity (greater than 8 TB), a general rule of thumb is 1 GB of RAM for every 1 TB of storage. This [http://hardforum.com/showpost.php?p=1036865233&postcount=3 post] describes how RAM is used by ZFS.

It is possible to use ZFS on systems with less than 8 GB of RAM. However, FreeNAS® as distributed is configured to be suitable for systems meeting the sizing recommendations above. If you wish to use ZFS on a smaller memory system, some tuning will be necessary, and performance will be (likely substantially) reduced. ZFS will automatically disable pre-fetching (caching) on systems where it is not able to use at least 4 GB of memory just for ZFS cache and data structures. This [http://forums.freenas.org/showthrea...rough-FreeNAS-on-a-32bit-proc-with-low-memory post] describes many of the relevant tunables.
So anyway, if you're using FreeNAS with UFS/FFS, you're probably fine with less memory - I've had UFS/FFS systems with 512MB IIRC, and ZFS with 1GB (with lots of twiddling, and I wouldn't trust it for production). The admin web services might be slow - some of the comments in the middleware suggest they were written assuming several gigs of memory available, but that's what swap is for, eh.
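The sizing rule quoted above (an 8 GB floor, plus roughly 1 GB of RAM per 1 TB of storage once the pool is larger than 8 TB) is simple enough to sketch as arithmetic. The function name here is made up for illustration:

```shell
# Rule of thumb from the quoted docs: at least 8 GB of RAM, and about
# 1 GB per 1 TB of storage for pools larger than 8 TB.
recommended_ram_gb() {
    pool_tb=$1
    if [ "$pool_tb" -gt 8 ]; then
        echo "$pool_tb"
    else
        echo 8
    fi
}

recommended_ram_gb 4    # small pool: prints 8 (the floor)
recommended_ram_gb 24   # 24 TB pool: prints 24
```

It's a rule of thumb, not a law; workload (dedup especially) can push the real requirement far higher.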
 
Joined
May 28, 2013
Messages
4
Thanks
0
#8
I have one computer and plan to use Windows 7 as a Minecraft server, and also plan to use FreeNAS as well. I see this post does not recommend doing so, but is it doable? Any help or info would help us get it done correctly.

Thanks in advance,
 

jgreco

#9
Is there a good reason you cannot run your Minecraft server as a FreeBSD jail?
 
Joined
Jun 18, 2013
Messages
1
Thanks
0
#11
Today I read most of the topics about virtualizing FreeNAS and decided to wait and learn more. Is it also risky to use UFS in a FreeNAS virtual environment?

I was using Windows Server 2008 R2 as my e-mail server and network storage system. However, I have now decided to move the mail server to a Linux distro and the storage to FreeNAS on ESXi.

BTW, my CPU (Pentium G2020) has no VT-d support. Also, I will use the onboard RAID controller and configure it as RAID 1.
 

jgreco

#12
I would deem it less dangerous, primarily because what happens with ZFS is that people try Stupid Virtualization Tricks to get their physical storage allocated to virtual disks, then have ZFS merge that into a pool; then, when something goes wrong, the pool gets scrambled and they lose their data. Making a single datastore and sticking UFS on it is probably safer, and if something does go wrong, it will also be clearer what is at fault for your data loss.
 
Joined
May 13, 2012
Messages
29
Thanks
0
#13
"Without direct access to the hard drives, FreeNAS lacks the ability to read SMART data and identify other storage failures."

Is this still true when using PCI passthrough?
 
#14
"Without direct access to the hard drives, FreeNAS lacks the ability to read SMART data and identify other storage failures."

Is this still true when using PCI passthrough?
PCI passthrough is the only way that you can have access to that data.
 
#15
OK, so in other words: if I do use PCI passthrough, then FreeNAS will produce the SMART alerts?
 
#16
If you have them set up properly and your hardware is supported, yes.
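One quick way to confirm the setup actually worked is to ask smartmontools what it can see from inside the VM. A sketch, with placeholder device names (`da?` assumes FreeBSD-style naming):

```shell
# Run inside the FreeNAS VM after passthrough. If disks show up with their
# real vendor/model/serial rather than a virtual-disk identity, SMART
# polling and alerts should work.
verify_smart_visibility() {
    smartctl --scan             # list the devices smartmontools can address
    for dev in /dev/da?; do     # placeholder glob; adjust to your controller
        smartctl -i "$dev"      # identity block: model, serial, SMART support
    done
}
```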
 

jgreco

#17
With PCI passthrough, the controller in question is not available to the hypervisor as a potential storage controller, and is instead presented to a VM. The hypervisor is still involved in making sure interrupts, etc., are handled correctly, but from the point of view of the VM, the controller is directly attached to it. This means it is very nearly equivalent to the way the controller would work in a bare-metal FreeNAS install.

SMART essentially involves using the controller to chat with the disk drive's onboard systems. Theoretically, there's nothing preventing you from doing SMART in much more complicated situations, except that in many cases, either the manner in which SMART data is accessed has been made different (I believe some of the 3Ware RAID controllers), or that nobody has bothered to provide the extra structure needed to make SMART work. This isn't a virtualization issue, it's a general "somebody didn't bother to figure out how to handle the finer points" issue.

I've said for many years, "it's a detail-oriented business," and SMART is a detail. On ESXi, for example, if you were using RDM, there's no particularly good reason that basic SMART support couldn't have been built in, except that it is mostly a feature that VMware expects will be handled by your SAN. It is apparently possible, with the right hardware and the right incantations, to get SMART via RDM on ESXi 5.1, but it appears to be the exception rather than the rule. Since we've also seen RDM catastrophes, I'm not exactly anxious to try.
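For completeness, when SMART does work (bare metal or passthrough), scheduled self-tests are a one-liner in smartd's config. This is a generic smartmontools sketch, not the config FreeNAS generates (FreeNAS schedules these from its GUI), and `/dev/da0` is a placeholder:

```
# smartd.conf directive: monitor everything (-a), run a short self-test
# daily at 02:00 and a long test Saturdays at 03:00, mail warnings to root.
/dev/da0 -a -s (S/../.././02|L/../../6/03) -m root
```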
 
#18
With that in mind, which would be best for FreeNAS and ESXi?

1. Using the SATA ports on the motherboard with PCIe passthrough
2. A hardware RAID controller (e.g. RAID 0) with PCIe passthrough
 

jgreco

#19
Both seem to work fine, at least with the right hardware.
 
#20
To refer back to your point

"FreeNAS with hardware PCI passthrough of the storage controller (Intel VT-d) is a smart idea, as it actually addresses the three points above. However, PCI passthrough on most consumer and prosumer grade motherboards is unlikely to work reliably. VT-d for your storage controller is dangerous and risky to your pool. A few server manufacturers seem to have a handle on making this work correctly, but do NOT assume that your non-server-grade board will reliably support this (even if it appears to)."


Since I am using a SOHO motherboard (not server class) I would imagine the RAID controller would be what you'd suggest?
 