Please do not run FreeNAS in production as a Virtual Machine!


djk29a

Dabbler
Joined
Dec 13, 2013
Messages
13
I'm fundamentally asking what's taking up all the onboard SATA ports on a motherboard that already has a substantial number of them, such that you need to buy still more. During development you should be setting your data drives up on the M1015, which leaves the onboard ports free: one for the boot SSD and five to spare. Why put everything on the onboard SATA first when it has nothing to do with your final configuration? Just attach your local SSD-based storage to the onboard ports and move on. Overcomplicating a setup is one of the fastest ways to create an unreliable system, regardless of your budget.

While I can understand why someone might want to be able to switch between ESXi-based FreeNAS and a standard physical FreeNAS as an emergency recovery measure, I would typically run FreeNAS in a VM off of local (non-USB) storage - it incurs some writes for things like logging, after all - and perhaps boot ESXi itself from a USB drive. If I have an emergency, I'll pray I have a FreeNAS configuration backup, burn a new USB drive (or PXE boot, even), then boot and reconfigure the new FreeNAS instance while hoping the IP configuration works out (it almost certainly won't: your ESXi host's IP will be bound to the address your FreeNAS VM used to have, and lots of things will probably break). If anything should be backed up, it's that FreeNAS VM and the data on that RAIDZ. ESXi hosts are supposed to be expendable enough that if the hardware fails, you can immediately run that host's VMs on another host - an option that PCI passthrough takes away, because it binds the VM to that specific host. And if your compute hardware fails, you'll have to physically move the hard disks attached to the M1015 over to your replacement hardware anyway.
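
Having a current configuration backup is what makes that recovery story a ten-minute job instead of an afternoon of guesswork. Here's a minimal sketch of automating it, assuming a FreeNAS 9.x box (where the configuration lives in the SQLite database /data/freenas-v1.db) with root SSH access enabled; the host address and backup directory are hypothetical:

```python
#!/usr/bin/env python3
"""Pull a dated copy of the FreeNAS configuration database over SSH."""
import datetime
import subprocess

FREENAS_HOST = "root@192.168.1.50"   # hypothetical NAS address
CONFIG_DB = "/data/freenas-v1.db"    # config DB location on FreeNAS 9.x
BACKUP_DIR = "/srv/backups/freenas"  # hypothetical local backup directory

def backup_config() -> str:
    """Copy the config DB to a timestamped local file and return its path."""
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = f"{BACKUP_DIR}/freenas-v1-{stamp}.db"
    subprocess.run(["scp", f"{FREENAS_HOST}:{CONFIG_DB}", dest], check=True)
    return dest

if __name__ == "__main__":
    print("Saved config backup to", backup_config())
```

Run it from cron on a machine that is not the NAS, for obvious reasons.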

To gloss over a lot of issues with just the virtualization layer: ESXi can cause all sorts of problems if you try to be clever, and worse, you can make things terribly unreliable or unstable by actually FOLLOWING the best practices VMware recommends, specifically where ZFS is concerned. As an example, RDMs (Raw Device Mappings) are the technique VMware typically recommends for exposing disks to VMs so that users keep some virtualization features, such as snapshots, for workloads like Microsoft clustering that need fairly direct access to the drives (somewhat like ZFS does). Unfortunately, a hypervisor's I/O and CPU scheduling can, in certain fluke scenarios, reorder I/O or CPU threads in ways that break assumptions the guest's kernel writers made (for example, a VM stuck in a wait state for a signal it missed due to a fluke - resolvable only by shutting everything down or vMotioning the VM to another ESXi host). VMware appears to have optimized RDMs specifically for MS clustering, NOT for ZFS's concerns, and that can really mess with ZFS's assumptions about writes. This sort of consideration comes from paranoia rather than from cited VMware ESXi kernel knowledge or whitepapers - and paranoia is exactly the right attitude when building such a setup for anything resembling business-grade reliability.
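
To make the write-ordering worry concrete: ZFS's on-disk consistency depends on data blocks being durable before the block pointers that reference them are updated. Here's a toy Python sketch of that discipline - a simplified model, not ZFS code - showing the barrier a hypervisor must honor; if the layer underneath acknowledges a flush it hasn't actually performed, the "pointer" can outlive the data it points at:

```python
import os

def transactional_write(data_path: str, pointer_path: str, payload: bytes) -> None:
    """Write data, make it durable, and only then update the pointer to it."""
    with open(data_path, "wb") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())   # barrier: data must hit stable storage first

    with open(pointer_path, "w") as f:
        f.write(data_path)     # only now is it safe to reference the data
        f.flush()
        os.fsync(f.fileno())

# After a crash, a reader trusts whatever the pointer file names. That trust
# only holds if the first fsync() above really meant "durable".
transactional_write("block-0001.dat", "uberblock.txt", b"important payload")
```

ZFS does the equivalent with its transaction groups and uberblock updates, which is why it wants an honest path to the disks rather than a hypervisor's best guess.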

The reason people are scorning you is that a virtualized storage setup, while possible and sometimes done safely, is really not something that even intermediate-level virtualization users can hope to plan for carefully. You are free to experiment on your own and learn that way, but there is a great deal to consider beyond what you've asked so far. People with VCPs tend to bill customers at $150/hr or more (lower? Who are you, so I can hire you cheap?! Seriously). This is serious work that takes a lot of experience to be genuinely confident in, and somewhere near that level of knowledge is where you should aim if protecting your own data reliably is that important. I don't think we can give you a tutorial on what you can and can't do with ESXi, at the level of detail needed to give you the comfort of a solid ESXi-based FreeNAS system like you want, without spending a great deal of effort. Enthusiasm is appreciated, but so is self-study before asking basic questions.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Since you and cyberjock seem to be the gurus, can you give me some more advice or links regarding the possible dangers? What types of things should I test/break and try to fix before going to production?

The viability of virtualization is a function both of the technical issues and of the opinion and expertise of whoever is installing it. I can name you five things in ten seconds that I would do in a heartbeat that Cyberjock would never ever do, partially because I see the world at a greater bit depth than Cyberjock, who tends to see a monochrome display. That doesn't make me right, and it doesn't make him right; it's just a matter of experience and opinion.

I've encouraged people to use virtualization techniques that allow the hypervisor layer to be jettisoned, so that the exposed FreeNAS appliance is effectively identical to the server they would have built without a hypervisor. This is safer, especially in that a user having a problem still has a chance of getting help here, and it minimizes the hypervisor complications.
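
One quick sanity check that you've actually reached that "effectively identical" state is to look, from inside the FreeNAS guest, at how the disks identify themselves. A hedged sketch, assuming a FreeBSD-based guest where camcontrol devlist enumerates attached drives; hypervisor-mediated disks typically announce themselves with inquiry strings like "VMware Virtual disk":

```python
import subprocess

# Identity strings that suggest the hypervisor, not the drive, is answering.
VIRTUAL_MARKERS = ("VMware Virtual", "Virtual disk", "QEMU", "VBOX")

def check_disks() -> None:
    """Flag any disk whose identity string looks like a virtual device."""
    out = subprocess.run(["camcontrol", "devlist"],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if any(marker in line for marker in VIRTUAL_MARKERS):
            print("hypervisor-mediated:", line.strip())
        else:
            print("looks like raw hardware:", line.strip())

check_disks()
```

If the pool disks show their real model numbers and serials, the HBA passthrough is doing its job and the hypervisor can, in principle, be jettisoned.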

There's a lot of knowledge and issues discussed in this and the other virtualization sticky. Read the whole thing for further insight.

Also, is it smart to have two FreeNAS installs in ESXi, so that you can upgrade one and still fall back to the old one if there is a problem?

You can do that, yes. I do prefer to manage major upgrades like the 9.2->9.3 thing in such a fashion.

Finally, if I am patient and plan properly, are VMs and FreeNAS as bad as people make them out to be?

Those who have lost their data will say yes.

Those who haven't lost their data have a wide variety of opinions.

I know I've managed to have a bad feeling more than once when a mistake was made, but on the flip side, I have a rough time justifying all the wattburn just for a filer.

With the recent trend towards software-based storage solutions, certain aspects of this are getting better. The biggest impediment is that many "experienced" VM people tend to think that virtualizing something like FreeNAS should be easy. It isn't. There are reasons. Play by the rules I've outlined for a better chance of success. Break those rules at your own peril.
 

homerjr43

Dabbler
Joined
Mar 24, 2015
Messages
16
I'm fundamentally asking what's taking up all the onboard SATA ports on a motherboard that already has a substantial number of them, such that you need to buy still more. During development you should be setting your data drives up on the M1015, which leaves the onboard ports free: one for the boot SSD and five to spare. Why put everything on the onboard SATA first when it has nothing to do with your final configuration? Just attach your local SSD-based storage to the onboard ports and move on. Overcomplicating a setup is one of the fastest ways to create an unreliable system, regardless of your budget.

DJK29A, I appreciate your detailed responses. After doing a bit more research based on jgreco's response, it appears that my Intel ICH10R is not officially supported by ESXi for passthrough. As such, based on your recommendations, I may get two M1015 cards; I need 12 ports for my 12-wide RAIDZ2.
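
For what it's worth, the port and capacity math on a 12-wide RAIDZ2 works out as below; the 4 TB drive size is a hypothetical stand-in, and real usable space comes in lower once ZFS metadata and free-space headroom are accounted for:

```python
drives = 12          # 12-wide RAIDZ2 vdev
parity = 2           # RAIDZ2 survives any two drive failures
size_tb = 4.0        # hypothetical drive size in TB

ports_per_card = 8   # an M1015 exposes 8 drives via two SFF-8087 connectors
cards_needed = -(-drives // ports_per_card)   # ceiling division -> 2 cards

raw_tb = drives * size_tb
usable_tb = (drives - parity) * size_tb       # before metadata/headroom

print(f"cards needed: {cards_needed}")
print(f"raw: {raw_tb:.0f} TB, usable before overhead: {usable_tb:.0f} TB")
```

So two cards cover the twelve data drives, with four ports left over for future use.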

While I can understand why someone might want to be able to switch between ESXi-based FreeNAS and a standard physical FreeNAS as an emergency recovery measure, I would typically run FreeNAS in a VM off of local (non-USB) storage - it incurs some writes for things like logging, after all - and perhaps boot ESXi itself from a USB drive. If I have an emergency, I'll pray I have a FreeNAS configuration backup, burn a new USB drive (or PXE boot, even), then boot and reconfigure the new FreeNAS instance while hoping the IP configuration works out (it almost certainly won't: your ESXi host's IP will be bound to the address your FreeNAS VM used to have, and lots of things will probably break). If anything should be backed up, it's that FreeNAS VM and the data on that RAIDZ. ESXi hosts are supposed to be expendable enough that if the hardware fails, you can immediately run that host's VMs on another host - an option that PCI passthrough takes away, because it binds the VM to that specific host. And if your compute hardware fails, you'll have to physically move the hard disks attached to the M1015 over to your replacement hardware anyway.

I will follow your advice and run FreeNAS off the SSD, and just keep a backup USB drive ready with the config files in the event of an emergency. Also, luckily, my best friend has identical hardware, and he might actually run an almost identical setup, so in the event of a hardware problem I have another server I can put my drives into.

To gloss over a lot of issues with just the virtualization layer: ESXi can cause all sorts of problems if you try to be clever, and worse, you can make things terribly unreliable or unstable by actually FOLLOWING the best practices VMware recommends, specifically where ZFS is concerned. As an example, RDMs (Raw Device Mappings) are the technique VMware typically recommends for exposing disks to VMs so that users keep some virtualization features, such as snapshots, for workloads like Microsoft clustering that need fairly direct access to the drives (somewhat like ZFS does). Unfortunately, a hypervisor's I/O and CPU scheduling can, in certain fluke scenarios, reorder I/O or CPU threads in ways that break assumptions the guest's kernel writers made (for example, a VM stuck in a wait state for a signal it missed due to a fluke - resolvable only by shutting everything down or vMotioning the VM to another ESXi host). VMware appears to have optimized RDMs specifically for MS clustering, NOT for ZFS's concerns, and that can really mess with ZFS's assumptions about writes. This sort of consideration comes from paranoia rather than from cited VMware ESXi kernel knowledge or whitepapers - and paranoia is exactly the right attitude when building such a setup for anything resembling business-grade reliability.

The reason people are scorning you is that a virtualized storage setup, while possible and sometimes done safely, is really not something that even intermediate-level virtualization users can hope to plan for carefully. You are free to experiment on your own and learn that way, but there is a great deal to consider beyond what you've asked so far. People with VCPs tend to bill customers at $150/hr or more (lower? Who are you, so I can hire you cheap?! Seriously). This is serious work that takes a lot of experience to be genuinely confident in, and somewhere near that level of knowledge is where you should aim if protecting your own data reliably is that important. I don't think we can give you a tutorial on what you can and can't do with ESXi, at the level of detail needed to give you the comfort of a solid ESXi-based FreeNAS system like you want, without spending a great deal of effort. Enthusiasm is appreciated, but so is self-study before asking basic questions.

As for being clever, I only want to do two "difficult" things with ESXi. I need the FreeNAS VM to have PCI passthrough to my SATA ports, and, if possible, I would like the Windows VM to have access to a Radeon 6450. I read the warnings previously regarding RDM, so I will not be using that with FreeNAS. I plan to test thoroughly before putting my data on the machine. I also plan to mess with ESXi as little as possible. At this point, I can always go back to Windows Server and use ReFS if I fail to get this setup working.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
The reason people are scorning you is that a virtualized storage setup, while possible and sometimes done safely, is really not something that even intermediate-level virtualization users can hope to plan for carefully.

An astute observation (actually a well-written post overall).
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
Mmmm, messy. Hard to say whether the added-on SATA card will work as desired for booting ESXi, and I don't know offhand if the SATA controller on the X8 boards can be handed off with VT-d.

I believe success has been mixed to low in passing through the ICH10R southbridge that the X8 boards have.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
One more grass-roots two bits. I intended to set up a virtualized FreeNAS instance for clients a little over a year ago. But being the paranoid SOB I am, I paid very close attention to the extremely negative posts and the zeitgeist surrounding the idea, so I switched the plan and tested jgreco's best-practice VT-d model instead.

I have done my utmost to thrash it and beat on it. It has some crappy drives; I've yanked them, not followed procedures, transferred TB after TB after TB, crushed it with scripts, kicked the power cords repeatedly... No server deserves such abuse. Never so much as a hiccup or blink. Understand your hardware and test recovery modes. The magic is in your experience level if something goes sideways. A month or two (or ten) of 15-hour days won't hurt either ;).
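
For anyone wanting to reproduce that kind of abuse, a minimal sketch of the scripted part: write files of random data, record their SHA-256 digests somewhere off the pool, and re-verify after every drive-yank or power-cycle. Paths and sizes are hypothetical:

```python
import hashlib
import json
import os

TARGET = "/mnt/tank/thrash"       # hypothetical dataset under test
MANIFEST = "/root/manifest.json"  # digests kept OFF the pool being abused

def write_round(n_files: int = 100, size: int = 16 * 1024 * 1024) -> None:
    """Fill the target dataset with random files and record their digests."""
    os.makedirs(TARGET, exist_ok=True)
    manifest = {}
    for i in range(n_files):
        path = os.path.join(TARGET, f"file-{i:04d}.bin")
        data = os.urandom(size)
        with open(path, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        manifest[path] = hashlib.sha256(data).hexdigest()
    with open(MANIFEST, "w") as f:
        json.dump(manifest, f)

def verify_round() -> int:
    """Re-hash every file after the abuse; return the number of mismatches."""
    with open(MANIFEST) as f:
        manifest = json.load(f)
    bad = 0
    for path, digest in manifest.items():
        with open(path, "rb") as f:
            if hashlib.sha256(f.read()).hexdigest() != digest:
                print("CORRUPT:", path)
                bad += 1
    return bad
```

Run write_round() once, abuse the box, then run verify_round() after the pool comes back. Repeat until bored or horrified.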

I got comfortable the second jgreco mentioned one of his boxes had been running virtualized for well over a year. Could be double that by now... dunno. But the point is you don't need to be a VCP; with good gear, VT-d is really pretty basic. You turn it on, pass the device through, and leave it alone. There are plenty of mostly silent success stories. I wasn't here for the wild-west days of RDM and random foolishness, but we've seen very little pain and suffering recently. For the most part, people get pointed in a solid direction quickly.

Ironically, I'll probably never virtualize FreeNAS on site. But it has proven itself in my eyes when set up correctly. What a mighty fine gateway drug to ZFS - and she is a cruel mistress.

Good luck, homerjr43.
 

Tywin

Contributor
Joined
Sep 19, 2014
Messages
163
Understand your hardware and test recovery modes. The magic is in your experience level if something goes sideways.

A million times this. All of your backup strategies and disaster recovery procedures mean squat if you haven't explicitly tested them.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
A million times this. All of your backup strategies and disaster recovery procedures mean squat if you haven't explicitly tested them.
I always tell the junior guys: it's not a backup until you prove you can recover from it. Unless you test it, it is just taking up space.
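
A hedged sketch of what "prove you can recover" can look like for something as small as the config backup mentioned earlier in the thread: pull the restored copy into a scratch location and confirm it's a readable, intact SQLite database before you ever need it in anger. This only catches truncated or corrupt copies; a real restore onto scratch hardware is still the only full proof:

```python
import sqlite3

def verify_backup(db_path: str) -> bool:
    """Open a restored FreeNAS config backup read-only and sanity-check it."""
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        ok = conn.execute("PRAGMA integrity_check").fetchone()[0] == "ok"
        tables = conn.execute(
            "SELECT count(*) FROM sqlite_master WHERE type='table'"
        ).fetchone()[0]
        return ok and tables > 0
    finally:
        conn.close()

# Hypothetical path to the most recently restored copy.
print(verify_backup("/tmp/restore-test/freenas-v1.db"))
```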
 

djk29a

Dabbler
Joined
Dec 13, 2013
Messages
13
"I'm going to virtualize FreeNAS" = "Now you've got both a FreeNAS and virtualization problem." That's really all I can say to summarize at a high level that if you don't know enough to advise others on the setup, you should be paranoid or simply accept some random terrible or annoying problem.

As for being clever, I only want to do two "difficult" things with ESXi. I need the FreeNAS VM to have PCI passthrough to my SATA ports, and, if possible, I would like the Windows VM to have access to a Radeon 6450.
Your count of only two hurdles is probably an underestimate. To answer your two main questions: yes, people do PCI passthrough of video cards to a VM and have done so successfully, and passing in an HBA like your M1015 is going to be the safest (and only recommended) route for handing disks to a VM. You can pass through quite a lot of PCIe devices, actually.

You must be deep in a gravity hole or something.
You don't know - he might be in a tesseract, sending us a message from the future that we should actually be sending all our servers to space for maximum availability, expansion room, and isolation from world events.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
"I'm going to virtualize FreeNAS" = "Now you've got both a FreeNAS and virtualization problem." That's really all I can say to summarize at a high level that if you don't know enough to advise others on the setup, you should be paranoid or simply accept some random terrible or annoying problem.

Your count of only two hurdles is probably an underestimate. To answer your two main questions: yes, people do PCI passthrough of video cards to a VM and have done so successfully, and passing in an HBA like your M1015 is going to be the safest (and only recommended) route for handing disks to a VM. You can pass through quite a lot of PCIe devices, actually.


You don't know - he might be in a tesseract, sending us a message from the future that we should actually be sending all our servers to space for maximum availability, expansion room, and isolation from world events.

Space is overrated. Have you ever tried running VMs from an orbiting datastore? The latency kills the whole idea.
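
(The numbers back the joke up. Round trip from the ground to a datastore in geostationary orbit, at the speed of light and ignoring all processing delays:

```python
GEO_ALTITUDE_KM = 35_786   # geostationary orbit altitude above the equator
C_KM_PER_S = 299_792       # speed of light in vacuum

rtt_ms = 2 * GEO_ALTITUDE_KM / C_KM_PER_S * 1000
print(f"best-case round trip: {rtt_ms:.0f} ms")   # ~239 ms
```

About 240 ms per round trip before any storage latency at all - a couple of orders of magnitude worse than a decent local datastore.)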
 

Tywin

Contributor
Joined
Sep 19, 2014
Messages
163
Space is overrated. Have you ever tried running VMs from an orbiting datastore? The latency kills the whole idea.

Ah, but if your server is in orbit, physical access becomes much less of a security concern :cool:
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Space is overrated. Have you ever tried running VMs from an orbiting datastore? The latency kills the whole idea.

I like the idea of terawatt interstellar communication lasers. Handy for long-distance 10GbE, or maybe for breaking out of a GP#4 hull...
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
The viability of virtualization is a function both of the technical issues and of the opinion and expertise of whoever is installing it. I can name you five things in ten seconds that I would do in a heartbeat that Cyberjock would never ever do, partially because I see the world at a greater bit depth than Cyberjock, who tends to see a monochrome display. That doesn't make me right, and it doesn't make him right; it's just a matter of experience and opinion.

I take the more "monochrome" view because, ultimately, there are two kinds of people:

Those who have the required experience and knowledge to do this properly.
Those who don't.

If you ask 99% of the people who work in IT, they'll tell you they are experts. They don't like being told "you are wrong". They don't like being told "go read up on the subject and come back to me". They want the answer, and they want it now. They also want the answer to be the one they want to hear. So if you tell someone they can't do something, they'll immediately be upset, and they will let you know.

There's a lot about FreeNAS, FreeBSD, and file servers in general that I don't know. I also haven't worked in an IT shop where you are taught, coaxed, or otherwise expected to be an expert all of the time. I came from a field where you *must* be open to all options, all information, and all theories of how to do something, at all times. Failure to do so makes you very unemployed, and very quickly too.

Now that I do work in IT, I've been astonished at how small a percentage of people are actually willing to take advice and internalize it, or be open to the possibility that their original idea was a bad one.

I was lucky: when I decided to try it just to see how it went, someone helped me (name withheld for a reason). I did virtualize for a while, because I wanted to learn and that was the best way. I don't anymore, and I haven't for over a year. Why?

1. I didn't feel there was anything else to learn about this particular aspect of FreeNAS.
2. I saw several people do things that seemed pretty innocuous and lose their pools while virtualizing.

I don't want to be in group #2. At the end of the day, I'm telling people across a variety of topics that they should use FreeNAS as it was designed. Don't try to redesign it. Don't try to bend it to your whims because you are unhappy. Don't try to make it do things it was not engineered to do. I personally consider virtualizing to be a little of all three. I'd rather not be the hypocrite who tells people not to virtualize while virtualizing himself, or who tells people not to bend or re-engineer FreeNAS while doing exactly that. I have a little test system that I do all sorts of really, really screwed-up things to on a regular basis. It holds no data that doesn't exist elsewhere, and I do it *only* for education.

Like I've told plenty of people: if you want to virtualize, have a ball. Just don't expect me to fix your ball if it develops a hole.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Yeah, but you also enjoy wandering around with that BB gun, shooting people's balls.

<tries to keep a straight face>
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,543
I take the more "monochrome" view because, ultimately, there are two kinds of people:

Those who have the required experience and knowledge to do this properly.
Those who don't.

If you ask 99% of the people who work in IT, they'll tell you they are experts. They don't like being told "you are wrong". They don't like being told "go read up on the subject and come back to me". They want the answer, and they want it now. They also want the answer to be the one they want to hear. So if you tell someone they can't do something, they'll immediately be upset, and they will let you know.

There's a lot about FreeNAS, FreeBSD, and file servers in general that I don't know. I also haven't worked in an IT shop where you are taught, coaxed, or otherwise expected to be an expert all of the time. I came from a field where you *must* be open to all options, all information, and all theories of how to do something, at all times. Failure to do so makes you very unemployed, and very quickly too.

Now that I do work in IT, I've been astonished at how small a percentage of people are actually willing to take advice and internalize it, or be open to the possibility that their original idea was a bad one.

I was lucky: when I decided to try it just to see how it went, someone helped me (name withheld for a reason). I did virtualize for a while, because I wanted to learn and that was the best way. I don't anymore, and I haven't for over a year. Why?

1. I didn't feel there was anything else to learn about this particular aspect of FreeNAS.
2. I saw several people do things that seemed pretty innocuous and lose their pools while virtualizing.

I don't want to be in group #2. At the end of the day, I'm telling people across a variety of topics that they should use FreeNAS as it was designed. Don't try to redesign it. Don't try to bend it to your whims because you are unhappy. Don't try to make it do things it was not engineered to do. I personally consider virtualizing to be a little of all three. I'd rather not be the hypocrite who tells people not to virtualize while virtualizing himself, or who tells people not to bend or re-engineer FreeNAS while doing exactly that. I have a little test system that I do all sorts of really, really screwed-up things to on a regular basis. It holds no data that doesn't exist elsewhere, and I do it *only* for education.

Like I've told plenty of people: if you want to virtualize, have a ball. Just don't expect me to fix your ball if it develops a hole.
I feel like the IT industry, outside of the enterprise space, is a giant case study in the Dunning-Kruger effect.
 