Please do not run FreeNAS in production as a Virtual Machine!


jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You misunderstand. The onboard controller and an add-on controller are approximately the same from the point of view I meant. The question is whether the mainboard supports it properly. It is a bit of an esoteric feature.
 

pbucher

Contributor
Joined
Oct 15, 2012
Messages
180
Since I am using a SOHO motherboard (not server class), I would imagine the RAID controller would be what you'd suggest?

I'd suggest going with the ZFS/FreeBSD gold standard of an LSI SAS/SATA controller; hardware RAID isn't what you want for a ZFS-based setup. Search the forums for more info on which cards work and how to make them work in pass-through mode.
 

JaimieV

Guru
Joined
Oct 12, 2012
Messages
742
Using a hardware RAID controller (unless set to "just give me normal ports, thanks" mode) is never a good idea with a ZFS system, whether virtualised or native.

/Edit: Aargh! Sorry, confused by the new UI into replying to an old post.
 

reb00tas

Dabbler
Joined
Dec 30, 2012
Messages
18
Raw device mapping is working fine on my ESXi host. I can take out my disks and load the pool into whatever system I want.

I use vmkfstools -z to map the disks :)

And I like that all my onboard controllers work when I use ESXi, but they do not if I install FreeNAS directly on the machine.
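For anyone wondering what that looks like, a local physical-mode RDM pointer is created from the ESXi shell roughly like this (the device identifier and datastore path are placeholders, not the real ones from my box):

# list the local disk device identifiers
ls /vmfs/devices/disks/
# create a physical-mode (pass-through) RDM pointer file for one disk
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_SERIAL /vmfs/volumes/datastore1/freenas/disk1-rdm.vmdk

The resulting disk1-rdm.vmdk then gets attached to the FreeNAS VM as an existing disk.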
 

pbucher

Contributor
Joined
Oct 15, 2012
Messages
180
Yes, if RDM is done right it does seem to work on ESXi currently. But you are seriously playing with fire here, not to mention that VMware specifically doesn't even support making RDMs for the most part anymore, and the feature is probably going to get removed in some future release.

That said, I've done it, and I don't do it any longer; not because it didn't work, but because it's seriously unsupported. It's best to simply pass the controller card through to the VM and use it that way.
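Once the controller is passed through, it's worth a quick sanity check from inside the FreeNAS VM that it really sees the hardware. Roughly like this, assuming one of the common LSI cards that use the FreeBSD mps driver (an M1015 in IT mode, for example):

# confirm the passed-through LSI controller attached
dmesg | grep -i mps
# confirm the disks behind it are visible
camcontrol devlist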
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Raw device mapping is working fine on my ESXi host. I can take out my disks and load the pool into whatever system I want.

I use vmkfstools -z to map the disks :)

And I like that all my onboard controllers work when I use ESXi, but they do not if I install FreeNAS directly on the machine.

Lots of people have set up RDM and it has worked. There are various problems, however. Just off the top of my head:

1. All SMART support is gone.
2. Proper read/write errors to/from the disk are gone.
3. RDM is not supported any longer in ESXi and will likely be removed in the future.
4. Troubleshooting issues that may be related to virtualizing are no longer possible.

The top two are very, very big no-nos. SMART support is crucial to identifying a failing disk. It's so valuable that SMART is supported in FreeNAS, with the ability to monitor the disks and email you if there are problems.

Many people have used RDM for weeks or months, then lost everything because things went bad. This thread was born because so many people had set up FreeNAS in a VM and lost everything because they did the exact things that this thread says not to do.

This thread was literally born out of the blood, sweat, and tears of prior admins who thought it worked fine... until the day it stopped working fine. In fact, if you do have a problem with your FreeNAS installation and you used RDM, you won't get much help, because most of us more experienced guys know that if it's not able to mount your zpool automatically, there is literally zero chance you'll ever see your data again. Plenty of threads can validate this very horrifying reality.
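To make points 1 and 2 concrete: with a controller passed through via VT-d, smartctl inside the FreeNAS VM talks to the real disks, while against an RDM or vmdk-backed disk SMART queries generally fail or return nothing useful. A rough illustration (the device name is just an example):

# with a passed-through controller this returns real health data and lets smartd alert you
smartctl -a /dev/da0
# kick off a short self-test on the same disk
smartctl -t short /dev/da0

Against a virtual disk, the same commands typically come back with no usable SMART data, which is exactly the visibility you lose.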
 

cheezehead

Dabbler
Joined
Oct 3, 2012
Messages
36
Can it be done? Yes.
Should it be done? It depends.

It depends on the hardware: if you're running in production, VT-d/IOMMU is the minimum needed to pass through the SAS controller.

Beyond the RAM requirements already mentioned, I would also stress two other points.
1) Give it a minimum of 2 vCPUs and make sure it always has 2 cores available... overbooking the processor will give you nightmares.
2) The e1000 adapter is very stable but is not 10Gb; the VMXNet3 adapter, while 10Gb, has all sorts of driver performance issues. Use multiple virtual e1000s on separate vSwitches and either use multipath iSCSI (round-robin) if available or quasi-load-balance the activity across them (rough sketch below).
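On an ESXi host consuming the FreeNAS iSCSI target over those multiple e1000 paths, the round-robin piece looks roughly like this; the adapter, vmkernel port, and device names are placeholders that need to be swapped for your own identifiers:

# bind one vmkernel port per vSwitch/e1000 path to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
# set the path selection policy for the FreeNAS-backed LUN to round-robin
esxcli storage nmp device set --device=naa.600000000000000000000000example --psp=VMW_PSP_RR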
 

cheezehead

Dabbler
Joined
Oct 3, 2012
Messages
36
RDM is a feature supported by VMware as of ESXi 5.1U1; what is not supported, but technically possible, are local RDMs. The feature is there generally for test beds and training scenarios. The supported RDM method is via VT-d or IOMMU and then passing an add-on storage controller directly to the VM.

http://kb.vmware.com/selfservice/mi...nguage=en_US&cmd=displayKC&externalId=1017530

As far as the "futures" go, I have not heard anything as of yet with regard to this. One can easily infer that the current 2TB vmdk limit will be removed with the next release of ESXi, which should raise it to around 32TB (per the open beta of VMware Workstation), thus removing the need to use an RDM in some scenarios.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Bah. I had a link that discussed some stuff for ESXi future builds (both a proposed 5.2/5.1 update and 6), but the page is gone.

But the page had said that RDM support was going to be removed, as VT-d provides the same function. I'll see if I can find someone else who has talked about it.
 

cheezehead

Dabbler
Joined
Oct 3, 2012
Messages
36
Currently, mapped FC/iSCSI/NFS with an RDM is the only "supported" configuration, and only for some clustering scenarios. Anything beyond that I cannot comment on (NDA). As history has shown with product releases at each VMworld, one could expect some product releases or public announcements on the 25th-29th of next month.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The supported RDM method is via VT-d or IOMMU and then passing an add-on storage controller directly to the VM.

That's not an "RDM method". It also has the particular downside of being poorly or entirely unsupported on lots of hardware, though one can hardly blame VMware for that, of course. I'm frequently first in line to say bad things about craptacular consumer grade hardware... especially given that a lot of the Supermicro server-grade boards are similarly priced.

One can easily infer that the current 2TB vmdk limit will be removed with the next release of ESXi, which should raise it to around 32TB (per the open beta of VMware Workstation), thus removing the need to use an RDM in some scenarios.

Yeah, looking forward to that train wreck. As it stands, users attempting virtualized installs typically get a bit hung up on how to proceed and wind up looking at this and the other virtualization thread, which hopefully convinces them to do the right thing instead of bludgeoning the VM config file into mapping the disks as RDM. With large vmdk support, I expect to see a lot more upset virtualization users who do the "obvious" thing, setting up a large vmdk on each of their disks and then RAIDZing them all together, and then when something goes wrong the bad things start to cascade. From a FreeNAS VM point of view, RDM and vmdk are both bad news; generally speaking, neither should be used to provide disks for ZFS to use in a pool.

But I still found the possibility of VMware discontinuing RDM an interesting tidbit, since I can see that there are numerous scenarios where it could be useful - just not in a FreeNAS context.
 

pbucher

Contributor
Joined
Oct 15, 2012
Messages
180
For VMware's stance on using RDM to string together any kind of "poor man's" SAN, see http://kb.vmware.com/selfservice/mi...nguage=en_US&cmd=displayKC&externalId=1017530 whose title (Creating Raw Device Mapping (RDM) is not supported for local storage) pretty much sums it up. RDM is meant to present FC- and iSCSI-attached LUNs to a VM for the purpose of creating an M$ cluster or something like that.

Also note there are 2 types of RDM mappings: Virtual Mode & Physical Mode. If you are going to play with fire, make sure you use Physical Mode; it does work and removes any sort of 2TB limit.
But I still found the possibility of VMware discontinuing RDM an interesting tidbit, since I can see that there are numerous scenarios where it could be useful - just not in a FreeNAS context.

I wasn't very clear; what I meant to say is that the hacks (that can be found) to make an RDM from local storage could very well be discontinued in the next major release. They are clearly unsupported by VMware, and with VT-d becoming widespread it's becoming a non-issue to build things like virtual SANs.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Mostly for the benefit of "everyone else" I offer the following thoughts in response:

For VMware's stance on using RDM to string together any kind of "poor man's" SAN, see http://kb.vmware.com/selfservice/mi...nguage=en_US&cmd=displayKC&externalId=1017530 whose title (Creating Raw Device Mapping (RDM) is not supported for local storage) pretty much sums it up. RDM is meant to present FC- and iSCSI-attached LUNs to a VM for the purpose of creating an M$ cluster or something like that.

Hey, like, totally awesome. I hadn't been able to find that link for months, because VMware uses such effin' oblique terms and vague handwaving, and I wasn't motivated enough to keep trying every possible combination of terms. I've updated the OP to include your link. Thanks.

Also note there are 2 types of RDM mappings: Virtual Mode & Physical Mode. If you are going to play with fire, make sure you use Physical Mode; it does work and removes any sort of 2TB limit.

For purposes of my warning, they're basically both bad. I would guess that there might actually be a use case for physical RDMs with FreeNAS in an actual SAN environment that was officially supported by VMware. But the question becomes: why?

1) If you have a ~$10,000+ SAN storage setup for VMware, it likely already offers a good data protection environment and a system for managing failed disks, spares, and replacements. You then add FreeNAS to the mix, basically transforming the hardware into an overly expensive HBA with unused features. Bleh.

2) A full, larger FreeNAS system might be:

Chassis - $1000
Mainboard/E5 CPU - $1000
M1015 - $100
128GB RAM - $1000
12x 4TB SATA - $1800

A highly redundant 30TB+ system for about $5000. Most of the people who seem to want to run a big FreeNAS have already figured that out. It seems to be mostly guys trying to make everything run on a single non-HCL lab box at home who are desperate to find a hack to "make it work".

I wasn't very clear; what I meant to say is that the hacks (that can be found) to make an RDM from local storage could very well be discontinued in the next major release. They are clearly unsupported by VMware, and with VT-d becoming widespread it's becoming a non-issue to build things like virtual SANs.

VT-d isn't widespread(*), unless you're following VMware's HCL and you aren't budgetarily constrained. So yes, you and I and guys like us probably only run across the odd quaint platform where VT-d isn't supported. But most of the users here are trying to recycle hardware, or they think they "know what to buy," or they're cost-constrained. I'm tired of explaining to people why they shouldn't buy an ASUS 1155 board with a Realtek NIC for $110. We've seen problems with VT-d on platforms that claim to support it but that aren't server boards. Basically, VT-d is a feature I only trust under certain conditions, and I use the term "trust" loosely.

I assume they're not removing RDM support entirely, then? Because that would be a loss. I don't pay enough attention to the VMware world, as you have probably guessed... I've always seen the potential for RDM to be useful in certain environments. I can't imagine that people would generally find the substitution of VT-d and a controller to be acceptable, since that'd screw with the number of VMs that could be hosted, vMotion, etc., so it makes a lot more sense if we're talking strictly about removing RDM local disk (non-)support rather than RDM in general.
 

pbucher

Contributor
Joined
Oct 15, 2012
Messages
180
I assume they're not removing RDM support entirely, then? Because that would be a loss. I don't pay enough attention to the VMware world, as you have probably guessed... I've always seen the potential for RDM to be useful in certain environments. I can't imagine that people would generally find the substitution of VT-d and a controller to be acceptable, since that'd screw with the number of VMs that could be hosted, vMotion, etc., so it makes a lot more sense if we're talking strictly about removing RDM local disk (non-)support rather than RDM in general.

Yes, just for local disks.

An idea for older server-grade hardware that folks want to recycle: I've done this with a Dell 2950, which doesn't support VT-d, because I wanted a ZFS SAN on it so I could use the snapshot replication feature for DR purposes. Set up a RAID 5/6 volume (or whatever makes sense for the setup, just something redundant) and install ESXi onto it. Make sure you have monitoring for the RAID volume set up so you can detect disk failures and replace drives and such, the same as you'd do for any ESXi server running local storage with data on it you care about (i.e. use a VMware-certified RAID controller, which can be had fairly cheaply). Make a FreeNAS VM with a big chunk of RAM and give it a standard 4GB vmdk to boot from and then a 2TB "data" vmdk for the ZFS pool. Create a ZFS volume using the single 2TB data vmdk; this gives you the advantage of ZFS scrubs to find data errors and the ability to make snapshots and replicate them to another box for disaster recovery purposes. I've been running a box like this for 9 months now without issue, and the performance is really good.
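(The snapshot replication piece that makes this worthwhile boils down to something like the following under the hood; FreeNAS drives it from the GUI, and the pool/dataset names and DR host here are made up for illustration.)

# take a new snapshot of the dataset backing the VM storage
zfs snapshot tank/vmstore@auto-20130801
# send it incrementally, relative to the previous snapshot, to the DR box
zfs send -i tank/vmstore@auto-20130731 tank/vmstore@auto-20130801 | ssh dr-host zfs receive -F tank/vmstore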

For folks who may read this thread for advice: don't try to use FreeNAS for any kind of local redundancy when you can't pass through a disk controller using VT-d in a virtual environment; let the local RAID controller handle that.


Just as a side note: the reason I play with virtualized FreeNAS in a production environment is that I've taken a pair of ESXi servers that have 10Gb Ethernet NICs to spare and run a patch cable between them, and now I've got an end-to-end 10Gb heavy-duty network backbone dedicated to SAN traffic for both ESXi servers. I can then do useful things like move the storage for running VMs between the SANs using storage vMotion during work hours without any downtime. Also, I can push ZFS replication of some very large snapshots across the backbone very quickly. If the price of data-center-quality 10Gb switches came down I'd jump on switching to physical boxes in a heartbeat, but the cost savings from not buying the extra server hardware and network switches are too much for me to ignore.

With all that said, I'm seriously thinking that I'd like to try a TrueNAS box as a head unit for my 2 big disk arrays. That way I could replicate between the arrays at disk I/O speed without the latency of the network and possibly not lose that much performance to my ESXi boxes, since ESXi benefits from caching and tends to do very random writes, and I rarely see any sustained writes above gigabit Ethernet speeds.
 

hpnas

Dabbler
Joined
May 13, 2012
Messages
29
Yes, just for local disks.

An idea for older server-grade hardware that folks want to recycle: I've done this with a Dell 2950, which doesn't support VT-d, because I wanted a ZFS SAN on it so I could use the snapshot replication feature for DR purposes. Set up a RAID 5/6 volume (or whatever makes sense for the setup, just something redundant) and install ESXi onto it. Make sure you have monitoring for the RAID volume set up so you can detect disk failures and replace drives and such, the same as you'd do for any ESXi server running local storage with data on it you care about (i.e. use a VMware-certified RAID controller, which can be had fairly cheaply). Make a FreeNAS VM with a big chunk of RAM and give it a standard 4GB vmdk to boot from and then a 2TB "data" vmdk for the ZFS pool. Create a ZFS volume using the single 2TB data vmdk; this gives you the advantage of ZFS scrubs to find data errors and the ability to make snapshots and replicate them to another box for disaster recovery purposes. I've been running a box like this for 9 months now without issue, and the performance is really good.

For folks who may read this thread for advice: don't try to use FreeNAS for any kind of local redundancy when you can't pass through a disk controller using VT-d in a virtual environment; let the local RAID controller handle that.


Just as a side note: the reason I play with virtualized FreeNAS in a production environment is that I've taken a pair of ESXi servers that have 10Gb Ethernet NICs to spare and run a patch cable between them, and now I've got an end-to-end 10Gb heavy-duty network backbone dedicated to SAN traffic for both ESXi servers. I can then do useful things like move the storage for running VMs between the SANs using storage vMotion during work hours without any downtime. Also, I can push ZFS replication of some very large snapshots across the backbone very quickly. If the price of data-center-quality 10Gb switches came down I'd jump on switching to physical boxes in a heartbeat, but the cost savings from not buying the extra server hardware and network switches are too much for me to ignore.

With all that said, I'm seriously thinking that I'd like to try a TrueNAS box as a head unit for my 2 big disk arrays. That way I could replicate between the arrays at disk I/O speed without the latency of the network and possibly not lose that much performance to my ESXi boxes, since ESXi benefits from caching and tends to do very random writes, and I rarely see any sustained writes above gigabit Ethernet speeds.


This sounds like a great idea. What kind of read and write performance do you see when using it with ESXi?
 

IanWorthington

Contributor
Joined
Sep 13, 2013
Messages
144
Just trying to get my head around all this.

What I would /like/ to have is a single box which runs both Linux and Windows under a hypervisor but also provides NAS functions, maybe limited to providing a ZFS filesystem to those Linux and Windows guests.

Does the advice not to run FreeNAS in a virtual machine apply to running it as dom0 under Xen?

i
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Just trying to get my head around all this.

What I would /like/ to have is a single box which runs both Linux and Windows under a hypervisor but also provides NAS functions, maybe limited to providing a ZFS filesystem to those Linux and Windows guests.

Does the advice not to run FreeNAS in a virtual machine apply to running it as dom0 under Xen?

i

I'm completely unfamiliar with the phrase "dom0", but based on what I've read in 10 minutes of Google searches about it, you can't run FreeNAS as dom0 at the same time as Xen, so that kind of defeats the purpose.

The hypervisor is your dom0, while all of your VMs run on the hypervisor.

So I think your answer is "no".

If you've got a heartfelt demand for FreeNAS as a VM, read http://forums.freenas.org/threads/a...ide-to-not-completely-losing-your-data.12714/. But be warned: you are instantly asking for a significant price increase for your server as a whole, and you are putting your data at more risk (and significantly more risk if you even think about ignoring one or more bullets in the link I just provided).

And to be honest (no offense), if you are as confused about dom0 as I think you are, you don't have the necessary experience to get this right.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You can, of course, do as you wish. The initial message in this thread is a brief summary of the various common forms of train wreck that virtualizers run into. Another sticky, elsewhere, expands on what might be a safe-ish technique to virtualize. Several of us virtualize FreeNAS with no problems at all, but basically that is because we avoid the pitfalls outlined in the initial message here.

I haven't seen Xen in some time and I had no idea that you could run something besides a Linux service console as dom0 in it.

http://forums.freenas.org/threads/a...ide-to-not-completely-losing-your-data.12714/

The points in that message are specifically designed to help you avoid a total data-loss event through the common problems users have experienced.

I am not aware of anyone having successfully virtualized FreeNAS on Xen along the lines outlined in that document. I am not aware of any reason it wouldn't be possible, but you would be in lonely waters. I do think you'd be safe if you followed those guidelines, with Xen in place of ESXi.

I expect that Xen might have difficulty mounting filesystems if the FreeNAS VM isn't yet started. ESXi has this problem too. This means that it is difficult to get a system that provides ZFS-backed storage to its own hypervisor. It is quite likely it won't boot up and start all VMs correctly without manual intervention.
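On the ESXi side, the manual intervention usually amounts to waiting for the FreeNAS VM to finish booting, checking that the ZFS-backed datastore has come back, and then powering on the VMs that depend on it. Roughly, with the vmid being whatever your VM list shows:

# check whether the NFS datastore served by the FreeNAS VM is back online
esxcli storage nfs list
# list registered VMs and power on the ones living on that datastore
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/power.on <vmid>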
 

IanWorthington

Contributor
Joined
Sep 13, 2013
Messages
144
Xen is a type 1 hypervisor that leverages the facilities of a dom0 host. I thought that might have overcome the objections to FreeNAS in a VM.

But you're quite right: http://wiki.xen.org/wiki/Dom0_Kernels_for_Xen does not list FreeBSD as a host, though the list is stated to be "incomplete".

I had read your link previously, and I certainly have no intention of going against the advice here not to virtualize FreeNAS.

Maybe this is outside the scope of this forum, but do you have any thoughts about using a supported Linux as a Xen dom0 to provide ZFS facilities (using linux-zfs) to guests?

And no offense taken: you're right, I certainly don't have the necessary experience to get this right. But I can learn :)

i
 