> Thanks, but I'll trust a software engineer from the company in question over some random person on a forum.

I think you're cherry-picking quotes from that link, which was written in 2015 for FreeNAS 9. What's linked here is current practice for FreeNAS 11, which has a larger footprint than FreeNAS 9.
> No, I don't want your help.

You asked for help on this forum. If you don't want our help, based on thousands of hours running FreeNAS over several hardware and virtual baselines, and seeing what works, then that's your choice.
> 1. If this is true, why does a senior software engineer from iXsystems state "You absolutely can virtualize FreeNAS" and "Other hypervisors such as bhyve, KVM, and Hyper-V also work...". That entire blog post is calling out statements like yours and others on this thread as, at best, a misleading interpretation of his earlier statements, and at worst, patently false. I've also found multiple people who have successfully set up FreeNAS in a Hyper-V VM. I get that it may not be easy, and I get that there are issues unique to PCIe passthrough which might inhibit a prescriptive solution. But that doesn't mean a FreeNAS VM is going to under-perform, let alone be impossible to configure.

1. FreeBSD does not run very well in Hyper-V. Technical fact.
2. If you must use a hypervisor, go ESXi.
3. Initially you wrote you had a dedicated system for FreeNAS, so why use a hypervisor at all?
You can find an assortment of machines that I run FreeNAS on in my signature. None of this is anywhere near the size/cost range of a cheap home Plex server. Most regulars on this forum run FreeNAS for business; that's why we are regulars.
You might be better off with just Linux or FreeBSD (modulo the HyperV constraint), ZFS and a regular Plex install.
I can't encourage you to go without ECC, but it is sufficiently proven that the "scrub of death" is a myth, and ZFS without ECC is still better than any other filesystem without ECC. Do a burn-in and memory test, and don't overclock ...
HTH,
Patrick
> I interpreted that as "these are parts of a machine that has no other purpose beside FreeNAS" ...

For running the FreeNAS VM, I have the following hardware that will be used for no other purpose besides FreeNAS:
- LSI SAS 4i4e HBA
After all, this is FreeNAS, so plenty of home users run it as well, considering there are no expensive licenses involved.
> Thanks. This was helpful.

I'm assuming you're using DDA for that, PCIe passthrough. Which, last I checked, is only supported on the server SKUs, not the client SKUs.
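For context, DDA on the server SKUs is driven from PowerShell roughly like this. This is a sketch only; the `*LSI*` device filter and the VM name `freenas` are placeholder assumptions, and the exact steps should be checked against Microsoft's DDA planning guidance:

```powershell
# Find the HBA (the "*LSI*" filter is an example -- match your own device)
$dev = Get-PnpDevice -PresentOnly | Where-Object { $_.FriendlyName -like "*LSI*" }

# Its PCIe location path, which the DDA cmdlets need
$loc = (Get-PnpDeviceProperty -KeyName DEVPKEY_Device_LocationPaths `
        -InstanceId $dev.InstanceId).Data[0]

# DDA requires the VM to power off rather than save state
Set-VM -Name "freenas" -AutomaticStopAction TurnOff

# Detach the device from the host, then hand it to the VM
Disable-PnpDevice -InstanceId $dev.InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -Force -LocationPath $loc
Add-VMAssignableDevice -LocationPath $loc -VMName "freenas"
```

On Windows 10 client SKUs these cmdlets refuse to work, which is the whole point being made above.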
As this will only be used for Plex media, I get your risk profile.
As for networking: That's a Hyper-V question, and it depends a bit on how you do networking on Hyper-V. You'll need an external virtual switch and a legacy network adapter (which requires a Generation 1, not Generation 2, virtual machine), and from there it's basic network troubleshooting; maybe ask in a Hyper-V forum.
Keep in mind that FreeNAS has a short list of supported adapters. If your host machine is using Realtek, and that's showing up as Realtek within FreeBSD, you'll have issues. There is a list on these forums of supported adapters, in a nutshell: Intel server adapters. If all else fails, DDA one of those through to FreeNAS for exclusive use.
There's a guide here that'll step you through the legacy adapter setup that's necessary: https://www.servethehome.com/install-FreeNAS-hyperv-part-1-basic-configuration/ . Note he's using a dedicated NIC.
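In PowerShell terms, the external switch and legacy adapter that guide walks through look roughly like this. A sketch; the switch name, the physical NIC name `Ethernet`, and the VM name `freenas` are assumptions for illustration:

```powershell
# External virtual switch bound to a physical NIC
# (get the real NIC name from Get-NetAdapter)
New-VMSwitch -Name "ExternalSwitch" -NetAdapterName "Ethernet" -AllowManagementOS $true

# Drop the default synthetic adapter and add a legacy (emulated) one.
# -IsLegacy only exists on Generation 1 VMs, hence the guide's insistence on them.
Remove-VMNetworkAdapter -VMName "freenas"
Add-VMNetworkAdapter -VMName "freenas" -IsLegacy $true -SwitchName "ExternalSwitch"
```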
People here are not likely to have virtualized FreeNAS using Hyper-V. Some are doing this with ESXi and even Proxmox, with PCIe passthrough for the HBA.
What it comes down to is: You can definitely do as you're doing. If (or when) you run into issues specific to a VM deployment on Hyper-V, you are not likely to get good guidance here, as it's not a setup that people are running - or feel like offering free support for.
Patrick M. Hausen said:
> why not run OpenZFS on Windows or a simple mirror (if I am not mistaken Windows supports that without additional software)

It does, through Storage Spaces. It handles 2-way mirrors just fine.
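For reference, a two-way Storage Spaces mirror can be set up from PowerShell along these lines. A sketch; the pool and volume names are placeholders, and it assumes the data disks are the only ones reporting `CanPool = True`:

```powershell
# Pool the available physical disks
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "MediaPool" `
    -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

# Two-way mirror virtual disk using all capacity, then bring it online and format it
New-VirtualDisk -StoragePoolFriendlyName "MediaPool" -FriendlyName "MediaMirror" `
    -ResiliencySettingName Mirror -NumberOfDataCopies 2 -UseMaximumSize
Get-VirtualDisk -FriendlyName "MediaMirror" | Get-Disk |
    Initialize-Disk -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume
```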
But of all the sources available, this place should be the best chance of getting info on this.
> That statement I made about hardware dedicated to FreeNAS meant that was hardware that I'm not allocating for the host OS. The storage drives are entirely dedicated to the VM/FreeNAS. Whereas the "2 cores" and "12GB RAM" were resources allocated to the VM in Hyper-V. So the full CPU and full RAM aren't being dedicated to the VM, but that is how much of each I allocated to the VM.

My "not very well" statement is founded in the fact that ESXi officially supports FreeBSD including guest additions, while Hyper-V to my knowledge doesn't. Possibly my knowledge is outdated, possibly it's not.
In your initial post you wrote:
I interpreted that as "these are parts of a machine that has no other purpose beside FreeNAS" ...
But if you are not even intending to run Plex in FreeNAS why not run OpenZFS on Windows or a simple mirror (if I am not mistaken Windows supports that without additional software) instead of going through all these contortions to plug a NAS into your Windows machine, then use a network protocol to access it ...? Seriously.
See, us "regulars" run FreeNAS as the only OS on our hardware regardless of size and then integrate $stuff into FreeNAS. After all, it's a capable server OS - containers, hypervisor and everything ...
Kind regards
Patrick
> As an avid forum user, I get the annoyance about repeated posts on the same topic. But the other side is that the poster doesn't know that it's an oft-repeated topic, so the "regulars" need to take this into account or just not respond. It helps no one to post some of the replies I received in response to this.

Blessing and a curse. Watch the recent interview with Kris Moore (https://www.youtube.com/watch?v=z5H9gB0FVdY) and pay attention to the laughing references to the kind of hardware people want to run FreeNAS on. 9 out of 10 issues with FreeNAS come down to "creative" hardware choices.
You are embarking on a journey here. I do think you can (probably) get some form of thing going, though with additional data loss risk because Win10 won't let you DDA the HBA. Which you have clearly stated you are fine and dandy with, and I get that. After all, it's just entertainment media, that can always be put back on again.
You are very much out on a limb of "I am doing my own thing, and I can then YouTube this and have people marvel at my crazy setup and how I got that to work". As a home user myself, I get the urge to tinker and do "perverted" things with software and virtualization.
All the people on this forum are trying to do is make you very aware of what you're embarking on. I do agree the message could have been a little more gentle. Keep in mind though that there are a LOT of "I want to run FreeNAS on VirtualBox on Ryzen Gaming Pro MegaRGB, why am I having issues?" kind of posts, and folk can get a little fatigued with it. Have some patience with that fatigue, please, and some of us at least will do our utmost to have patience with your FrankenBuild. Which it is. I'm saying this affectionately. It's way out there on a limb, and if you get it to work, as "wobbly" as it would be for production, I think it'd be kinda cool. As long as "I may lose all data at any time and ZFS won't help me" is clearly understood - because the lack of DDA will hurt if (or when) things go south on a drive.
> I hear you, and the way you stated it here is fair. But others came in just chastising me for suggesting I'm attempting to run it in a VM, which is ridiculous. As you just said here, ESXi is well tested and run by several on the board. So getting on my case about running in a VM is just a waste of everyone's time. Who cares why I want to do it, I'm doing it. Just post any help you may have on the topic or don't post at all. I'm not directing this at you, BTW, just speaking in generalities.

You'd think so, but not really. Most everybody on this forum is quite fond of their data. Even with backups, people don't really feel like having major failures. And that means, if one is to virtualize, there needs to be PCIe passthrough. At this point the choices are:
- ESXi: well tested, several prominent members run it. Not a lot of support, but at least there's a track record.
- Proxmox: a bit more out there, but again, several YT videos on how to do that, and folk have had success.
- Hyper-V on Windows Server: completely unproven, quite niche, FreeBSD support is not a thing MS cares about greatly, and it takes a Win Server license, which is in a different ballpark than Proxmox or ESXi, which can, if things haven't changed lately, both be free.
Running it on Win10 without DDA: just no, for any kind of use where keeping the data is even remotely desirable.
Edited to add: People have done things like run ESXi, pass an HBA through to FreeNAS, and pass a GPU through to a Win10 VM, and that way get Win10 and FreeNAS all running on the same hardware. Doable. A little fiddly. More power to those for whom that works well and meets their needs.
> I'm not running mirroring, because these drives are like ~$400 a piece. Trying to get the best bang for my buck on the software side.

And this is the thing... you can buy a Supermicro X9SCL motherboard with a Xeon E3-1220 or 1230 and 32GB ECC RAM for under $200 off eBay. Say another $150 for a new case and power supply (ok, power supplies are hard to find in stock right now...) and you're looking at $350-400. Now you have an actual server, and for less money than just one of those hard drives. It will have no upgrade path (RAM is already maxed out), but it does cover the requirements of what you've been describing.
> See, that just goes to show how little I even know about this stuff. I've never heard of OpenZFS as an option in my research. It just wasn't a name that ever came up for whatever reason. And of course now I will be looking into it. My initial thoughts have me confused, as there's no website? OpenZFS.org is just a wiki page, and the link to the Windows version takes you to a website with nothing on it except a single fuzzy photo. Is this like a prank or something?

OpenZFS is not an OS; it's the open-source ZFS project, and its Windows port is a driver that lets you create ZFS pools directly in Windows. I am wondering why you want to go with a separate OS installation running inside a hypervisor at all? Create a mirror in Windows and share that from Windows. Or use OpenZFS in Windows and then share that ... no separate OS, no hypervisor, no hassle ...
Patrick
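As a rough illustration of the OpenZFS-on-Windows route: once the driver is installed, pools are managed with the usual `zpool`/`zfs` tools from an elevated prompt. A sketch, assuming two spare data disks; the pool name `tank` and the `PHYSICALDRIVE` numbers are placeholders, and the Windows port should be treated as experimental:

```powershell
# Create a two-way mirrored pool from two whole disks
# (check the real disk numbers with Get-Disk first)
zpool create tank mirror PHYSICALDRIVE1 PHYSICALDRIVE2

# Verify pool health, then carve out a dataset for the media
zpool status tank
zfs create tank/media
```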
> You can't compare HDD prices to server components. The cost of HDDs is fixed no matter what you are building. That's like saying I could have bought a massive server for the price I paid for my house. Yeah sure, that's a true statement, but it's irrelevant.

The point of the Supermicro example was that, for less than the price of a single one of those drives, you get an actual server that covers your stated requirements. THIS is what I meant when I said in my initial post that you may want to re-think your hardware choices, and why I recommended in my second that you read forum stickies. This is all covered.