FreeNAS Virtualized on Proxmox

#1
Hi all,

All right, so here is my plan. I intend to build a 15-drive server running on an 8-core Ryzen 7 processor. Proxmox will run on two 1 TB drives in RAID 1, which will also host my VMs. I will also use a 1 TB drive as a cache for FreeNAS. Storage will be based around twelve 3 TB drives passed directly through to FreeNAS and put into a RAIDZ3 pool, giving me 27 TB of storage with three-drive redundancy.

What I would like to know is: if, say, Proxmox goes sideways, would I still be able to import my ZFS pool into a new instance of FreeNAS?

Just to clarify, I am new to FreeNAS. I have used OMV for years and still do, but the more I research FreeNAS the more I like the features of it.

I thank you for your time.

Mike
 
danb35 (FreeNAS Wizard)
#2
If the drives are passed directly through to the FreeNAS VM, your data should be fine. But other posts here (including by @jgreco) indicate that KVM (which is what Proxmox uses as its hypervisor) doesn't work well with FreeNAS.
 

Stux (FreeNAS Wizard)
#3
I'm confused by your choice of a 1 TB caching drive. Is that an SSD?
 

melloa (FreeNAS Expert)
#4
Go ESXi. vSphere is (still) free and it's what everybody's running with success.

#6
> Go ESXi. vSphere is (still) free and it's what everybody's running with success.
Yeah, I considered that, but I have experience with Debian (and like Debian very much), which is what Proxmox is based on. Proxmox lets me run LXC containers, which I have grown fond of; I like to put each service I run in a separate LXC instance, so that if something goes pear-shaped I simply have to spin up a new instance of Ubuntu Server and reinstall the service. It does warrant further investigation, though. Does vSphere use LXC instances?
 

Stux (FreeNAS Wizard)
#7
> Yes. Is that too big of a drive for a cache?
It's quite a large L2ARC, yes.

The problem is that the book-keeping data for the L2ARC has to live in the same memory that the ARC uses, so the more L2ARC you have (in use) the less ARC you have.

Common estimates suggest keeping the RAM:L2ARC ratio between 1:5 and 1:10.

https://www.google.com/#q=how+to+size+l2arc

So, with 1TB of L2ARC, I hope you were planning on 128GB of RAM.
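
That ratio rule can be sanity-checked with quick arithmetic: every block cached in L2ARC costs a header that must live in ARC (RAM). A shell sketch, assuming the commonly cited figure of roughly 70 bytes of header per cached block (the exact size varies by ZFS version):

```shell
#!/bin/sh
# Rough RAM overhead for L2ARC headers, assuming ~70 bytes of ARC
# metadata per cached block (an assumption; varies by ZFS version).
l2arc_bytes=$(( 1024 * 1024 * 1024 * 1024 ))   # 1 TiB of L2ARC

for blocksize in $(( 128 * 1024 )) $(( 8 * 1024 )); do
    blocks=$(( l2arc_bytes / blocksize ))
    header_mib=$(( blocks * 70 / 1024 / 1024 ))
    echo "${blocksize}-byte blocks: ~${header_mib} MiB of RAM for headers"
done
```

At the 128 KB default recordsize the headers cost about 560 MiB; at 8 KB blocks (typical for VM zvols) the same 1 TiB of L2ARC eats nearly 9 GiB of ARC, which is why the block size matters so much.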
 

jgreco (Resident Grinch, Moderator)
#8
L2ARC:ARC RAM sizing has a lot to do with the block size you select, which in turn has a lot to do with what you're doing with it. If you're doing RAIDZ and you're already planning a RAID1 to store your VMs, that hopefully means that you're not planning to store VMs on the virtualized FreeNAS. If that's the case, don't worry about the L2ARC, size your FreeNAS VM RAM appropriately (maybe 24-32GB), and give it a shot.

You need to do proper PCIe passthru in order for FreeNAS to be able to work correctly and safely. This is the biggest single blocker to virtualization. We know that ESXi does this right on many platforms as long as they're server-grade and vaguely modern, but a lot of other hypervisors don't. If FreeNAS cannot see the raw disks, there are serious hazards: you might find that your data is locked away in a hypervisor-specific format when things go sideways, and then you cannot simply run FreeNAS on the bare-metal platform and import the pool, so data recovery becomes a test of how expert you are at virtualization disaster recovery.
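
In the good case, where the disks really were passed through raw, the recovery path is short; a sketch, with "tank" as a placeholder pool name:

```shell
# After moving the passed-through disks to bare-metal FreeNAS (or any
# ZFS-capable OS), the pool can be found and imported directly.
zpool import          # scan attached disks and list importable pools
zpool import -f tank  # -f: the dead hypervisor never exported the pool
zpool status tank     # confirm all vdevs came back ONLINE
```

If the hypervisor wrapped the disks in its own image format instead, none of this works, which is the hazard described above.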

Please note that the forum isn't a good choice for support if your virtualization initiative goes sideways. Most of the people here are "hobbyist" or "home user" class users, and don't do it. Many of us who do virtualize are completely comfortable with virtualization (often professionally) yet still have at least a virtualized FreeNAS horror story or two to tell, and the additional complexity of figuring out how to unwind a botched system is beyond what forum support can provide. I'll buy you a virtual beer if that happens, but it's beyond what I can support for free on a forum.
 
#9
> L2ARC:ARC RAM sizing has a lot to do with the block size you select, which in turn has a lot to do with what you're doing with it. […] don't worry about the L2ARC, size your FreeNAS VM RAM appropriately (maybe 24-32GB), and give it a shot.
All right, I appreciate it. I'll just pass that drive directly through to my Windows VM for use. (Not going into details as to why, due to that pesky DMCA.)

> You need to do proper PCIe passthru in order for FreeNAS to be able to work correctly and safely. This is the biggest single blocker to virtualization. […]
Fair warning. This is for home use and isn't a production server; it's merely for media, so nothing mission-critical here. I do enjoy a challenge, so if something does go pear-shaped, it could be fun to figure out.

> Please note that the forum isn't a good choice for support if your virtualization initiative goes sideways. […] I'll buy you a virtual beer if that happens, but it's beyond what I can support for free on a forum.
Appreciate the warning. I have always found that home users come up with the best, if not the most orthodox, solutions to these kinds of problems.

I thank you all for your input, and will keep you all updated as to my adventure.
 
#10
As an update: I just ran a test and was able to spin up a FreeBSD VM in Proxmox, pass a 400GB disk directly through to it, and mount it with no problems.
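
For reference, Proxmox has two quite different ways of handing a VM a disk, and they are not equivalent; a sketch (VMID 101, the disk ID, and the PCI address are placeholders):

```shell
# Per-disk mapping through the hypervisor's SCSI layer (likely what the
# 400GB test above used). The guest sees a virtual disk, not the raw
# drive. Use the stable /dev/disk/by-id/ path, never /dev/sdX.
qm set 101 -scsi1 /dev/disk/by-id/ata-WDC_WD30EFRX_WD-EXAMPLE

# True PCIe passthrough of the whole HBA: the guest sees the controller
# and the raw disks behind it, including SMART. Requires IOMMU
# (VT-d/AMD-Vi) enabled in firmware and the kernel.
qm set 101 -hostpci0 01:00.0
```

As jgreco notes above, FreeNAS's safety story depends on seeing real disks, which only the second form provides.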

My question is as follows: is it FreeNAS, or is it ZFS, that has issues with my proposed configuration?
 

Martin Maisey (FreeNAS Aware)
#11
> […] Proxmox allows me to run LXC containers which I have grown fond of […] Does vSphere use LXC instances?
No, vSphere doesn't provide LXC or anything like it. FreeNAS, of course, supports jails which are a very similar concept for FreeBSD containers. So you could run your services on those instead, but you would have to get used to FreeBSD, and in particular its packaging system.

I suspect you're dead set on Proxmox - something I certainly understand. For home use, I think it’s wonderful and its community edition beats free ESXi into a cocked hat. If so - and you want to do hyperconverged virtualisation and storage on the same box - you may be better off dropping FreeNAS and just using ZFS directly. ZFS on Proxmox has been a first class citizen for some time. It will natively support creating zvols for VM disks, nested file systems for containers, ZFS snapshotting/cloning, and even - with pve-zsync - scheduled snapshotting and replication. There are suggestions in the manual that at some point in the future there will also be GUI support for setting the latter up and doing VM recovery. All very neat and clean.
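
As an illustration of the pve-zsync workflow mentioned above (the VMID, target host, and dataset are placeholders):

```shell
# Create a recurring job: snapshot VM 101's ZFS disks and replicate
# them to tank/backup on another host, keeping the last 7 snapshots.
pve-zsync create --source 101 --dest 192.168.1.20:tank/backup \
    --name nightly --maxsnap 7

# Trigger a one-off manual sync of the same job:
pve-zsync sync --source 101 --dest 192.168.1.20:tank/backup --name nightly
```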

What you will lose, of course, is all of FreeNAS's awesome support for graphically managing pools, volumes, shares, snapshots, scrubs, replication etc., very good alerting, and easy configuration backup/restore. Samba with web-based management can be done relatively easily by bind-mounting storage into the provided fileserver LXC appliance. NFS is hard to run from a container so must be done from the host, although you can use ZFS's sharenfs properties to easily export shares. ZFS alerting can be done by setting up zfs-zed on the host (SMART alerting to the Proxmox root user's email already works out of the box). Scrubs must be manually scheduled via crontab. To replicate non-VM shares, you need to install on the host one of the many existing scripts for managing snapshots and replication. All in all, you need to be comfortable doing storage admin at the command line.
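
The host-side pieces described above might look like this; a sketch, with the pool/dataset names, subnet, and schedule as placeholders:

```shell
# Export a dataset over NFS straight from the host via ZFS's own
# sharenfs property (assumes the kernel NFS server is installed):
zfs set sharenfs="rw=@192.168.1.0/24" tank/media

# Schedule a monthly scrub via cron, since Proxmox won't do it for you:
cat > /etc/cron.d/zfs-scrub <<'EOF'
0 3 1 * * root /sbin/zpool scrub tank
EOF
```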

FreeBSD/FreeNAS has historically been a more mature ZFS environment than ZFS-on-Linux due to license-compatibility issues. However, ZoL has been around a while now and is becoming more mainstream since Ubuntu 16.04 decided to ignore the licensing risk and bundle it as a first-class citizen anyway; Proxmox uses an Ubuntu-derived kernel with a Debian userland. Harder to ignore is the wealth of experience on this forum with running FreeNAS-based ZFS, which is subtly different. The Proxmox/ZoL forums don't have the same level of knowledge.

I’m currently trying to work out which option’s best for me, as I want to slim down from running a two machine Proxmox cluster (with a lightweight Proxmox node on FreeNAS VirtualBox for cluster quorum) accessing FreeNAS storage over NFS. With a backup FreeNAS box to take storage replicas, this is four physical servers. I want to run a hyperconverged ZFS virtualisation setup on two larger servers. As well as being simpler and more efficient in hardware terms, it removes network limitations on accessing storage - 10Gbit looks too expensive and fiddly for me, I don’t have the spare slot for the cards on my ZFS servers, and even with it there’s still a fair bit of protocol overhead from iSCSI/NFS.

Summarising below what I think the pros and cons are at the moment for me - I would appreciate corrections and feedback on things I have missed. I've ruled out FreeNAS under Proxmox and under Xen, as neither seems widely used and both look potentially problematic.

It’s not an easy decision; if FreeNAS had decent native virtualisation it would be the easy leader, but even in v11 it seems extremely basic.

Cheers,

Martin

****

FreeNAS with native virtualisation

Pros:

* First class, mature ZFS NAS
* No abstraction layers (NFS/iSCSI) over ZFS VM volumes
* Support for all ZFS features - e.g. low cost VM snapshot/clone, thin provision
* Boot off mirrored USB to maximise drive bays for pool storage
* Really easy host backup/restore
* Integrated management/alerting
* Recommended configuration suitable for production use (is this actually the case for the new virtualisation features, or are they still viewed as experimental?)
* Easy snapshot replication

Cons:

* Very immature virtualisation - bhyve pretty new with limited features, UI basic and usability not good
* Somewhat picky about hardware
* No native Linux containers with storage bind-mount
* Can only buy official support on iXsystems certified hardware

****

ESXi/FreeNAS all-in-one (with HBA pass-through)

Pros

* First class, mature ZFS NAS
* Mature ESXi virtualisation
* Fair number of people doing it
* Integrated management/alerting

Cons

* Limited ZFS feature support via VAAI
* VMware somewhat crippled in free ESXi version (e.g. storage vmotion, can you do snapshots nowadays as you couldn’t when I last used it?), vSphere expensive for home use
* Choice between NFS (easy to manage but significant overhead) or iSCSI (more complex management requirements and still some overhead) additional layers over ZFS VM volumes
* USB boot, but ESXi datastore needs to reside on disk, burning a bay/bays
* No software resilience on USB boot device or data store; additional hardware adapter required for RAID of latter
* Very picky about non-enterprise hardware
* Not really recommended for production, not possible to buy official support
* Complicated setup, difficult to get things like controlled UPS shutdown working properly
* No native Linux containers with storage bind-mount

****

Proxmox with native ZFS

Pros:

* First-class, non-crippled virtualisation
* No abstraction layers (NFS/iSCSI) over ZFS volumes
* Support for all ZFS features - e.g. low cost snapshot/clone
* Software mirroring (ZFS or mdadm) for boot drives
* Native Linux containers with storage bind-mount
* Wide hardware support
* Recommended configuration suitable for production use (albeit with some customisation required of Debian based VM host for ZFS snapshotting, zed, NFS sharing)
* Fair number of people doing it
* Reasonably priced support for home use

Cons:

* No built in UI NAS functionality - lots of command-line required
* Have to roll/integrate your own alerting/snapshot replication for ZFS
* Less mature ZFS platform than FreeNAS
* Less mature virtualisation platform than ESXi
* No USB boot, burning a bay/bays
* No appliance-like host backup/restore
 
Martin Maisey (FreeNAS Aware)
#12
> Ryzen 7 8 core processor
From what I've read, although Ryzen supports ECC RAM, motherboard support for it is patchy and definitely worth checking. Running ZFS without working ECC is risky and not recommended - if you get a memory error, pool scrubs can trash perfectly good on-disk data.
 

danb35 (FreeNAS Wizard)
#13
> if you get a memory error, pool scrubs can trash perfectly good on-disk data.
Not really. Even with bad RAM, a scrub will trash valid data only if a number of things happen:
  • ZFS reads good data from the pool, but because of a memory error, it's read incorrectly and doesn't match its checksum.
  • ZFS then goes to parity. Due to the bad RAM, the parity data is also read incorrectly.
  • The memory error also results in the checksum for that parity being read incorrectly.
  • By an astronomical coincidence, the bad parity data matches the bad checksum.
  • ZFS computes what the data block should have been, based on the bad parity data, and "corrects" it.
The coincidence of the first three bullets is unlikely enough, but the fourth is just not going to happen. There are good reasons to use ECC in ZFS systems, but "scrubs will trash your data" isn't one of them.
 

Ericloewe (Not-very-passive-but-aggressive, Moderator)
#14
> and you want to do hyperconverged virtualisation and storage on the same box
That noise you hear is the sound of millions of souls groaning at the use of that buzzword.
 

Martin Maisey (FreeNAS Aware)
#16
> Not really. Even with bad RAM, a scrub will trash valid data only if a number of things happen: […] There are good reasons to use ECC in ZFS systems, but "scrubs will trash your data" isn't one of them.
To be fair, I had just been quoting https://drive.google.com/file/d/0BzHapVfrocfwblFvMVdvQ2ZqTGM/view , slide 46. But having googled, http://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/ provides some balance and says the same as you.

I defer to the greater wisdom of others here, and apologies for propagating an urban myth!

Anyway, as you say, the general point about ECC being a good idea still stands...
 
#17
For what it is worth, I have been running FreeNAS for over 7 years without ECC memory. I have suffered at least 4 drive failures over that time and have not had any memory-related issues. I was able to replace the failed drives and rebuild the array every time without losing a single bit of data.

Now that I have said it, I hope it does not byte me in the ass.

Greg
 

jgreco (Resident Grinch, Moderator)
#18
In general, well-tested non-ECC memory has a strong tendency to remain good for its lifetime. It's just the cases where it doesn't, probably in combination with other misfortune, that causes ZFS badness.
 