New to FreeNAS, need some pointers

Bytales

Dabbler
Joined
Dec 17, 2018
Messages
31
After reading much about the wonders of ZFS, I decided to give FreeNAS a try. However, my main plan was to set up two gaming virtual machines with GPU passthrough, so I chose ESXi 6.7 to do that. That leaves me with having to virtualize FreeNAS.

My motherboard is a GA-MZ31-AR0 with a 32-core EPYC CPU, 64 GB of DDR4 ECC RAM, 2 watercooled Vega Frontier Edition cards, and 3 NVMe SSDs (one inserted into the motherboard, where ESXi is installed; two inserted in a PCIe 3.0 x16 card; all three running at PCIe 3.0 x4 speeds. The other two are used as they are, as gaming drives for the Windows VMs). The motherboard has 2x 10 Gbit ports. There is also a single 10 TB WD Gold.
1x 10 Gbit port is given to ESXi and is what the Windows VMs use.
1x 10 Gbit port is made into a vSwitch and given to the FreeNAS VM.

The 10 TB WD Gold is "passed through" as an RDM with the command
vmkfstools -z /vmfs/devices/disks/diskname /vmfs/volumes/datastorename/vmfolder/vmname.vmdk
from the ESXi console, as shown here:
https://kb.vmware.com/s/article/1017530
The boot drive for FreeNAS is a 16 GB virtual disk on the SSD where ESXi is installed. 16 GB of RAM and 16 vCPUs are also assigned.
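For reference, the physical-mode RDM setup described above can be sketched end to end from the ESXi shell. This is a minimal sketch; the device and datastore names in angle brackets are placeholders for this system's actual values:

```shell
# List the physical disks ESXi can see, to find the 10 TB disk's device name
ls -l /vmfs/devices/disks/

# Create a physical-mode (-z) RDM pointer VMDK for the whole disk;
# the VMDK stores only mapping metadata, all I/O goes to the raw device
vmkfstools -z /vmfs/devices/disks/<naa.id-of-10TB-disk> \
    /vmfs/volumes/<datastore>/<vm-folder>/wd-gold-rdm.vmdk

# Then attach wd-gold-rdm.vmdk to the FreeNAS VM as an
# "Existing Hard Disk" in the vSphere client.
```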

Questions:
1) With the Raw Device Mapping done as shown above, does FreeNAS have direct access to the 10 TB HDD, as it should?
2) Can you point me to a place where I can learn more about datastores, snapshots, and shares? I'm still learning to get the gist of FreeNAS.

Statement:
I know FreeNAS isn't supposed to work in a virtualized environment, but:
1) The motherboard is server grade.
2) 16 vCPUs and 16 GB of RAM are assigned.
3) The 10 TB HDD is given as a raw mapped device; a 9.1 TB device appears in FreeNAS as if an HDD were attached directly.
4) The FreeNAS VM has a 10 Gbit network to itself, even though my switch is only Gigabit speed, so no shortcuts here either.
Isn't this supposed to be enough for fail-proof use of FreeNAS?

I would ask if I could do it the other way around:
install FreeNAS directly on the PC and use it to create two Windows VMs, but the questions are:
1) Can I pass through GPUs to the VMs?
2) Can I pass USB 3.0 controllers?
3) Can I pass the NVMe SSDs? (Probably I wouldn't need to; FreeNAS could use them and integrate them into its storage system.)
4) Can I pass optical drives to the VMs?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
No. There's a virtualization guide in the useful links button in my signature; please read it. Also, FreeNAS/ZFS needs at least two disks to have redundancy, so it can do the things it is there for.
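For context on the redundancy point: ZFS can only self-heal corrupt data when a vdev has redundancy, such as a two-disk mirror. A minimal sketch from a FreeBSD/FreeNAS shell, with a hypothetical pool name and device names (on FreeNAS you would normally do this through the GUI):

```shell
# Create a mirrored pool from two whole disks (hypothetical device names);
# with a mirror, ZFS can repair bad blocks from the healthy copy
zpool create tank mirror /dev/da0 /dev/da1

# Verify the layout and health of the pool
zpool status tank
```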
 

Bytales

Dabbler
Joined
Dec 17, 2018
Messages
31
Does FreeNAS support PCI passthrough for its VMs?
I've seen that one can create a virtual machine from within FreeNAS; if it had PCI passthrough, I might give it a try.
I need my GPU in my Windows VM, otherwise I wouldn't bother;
and not only my GPUs, but also my USB ports, for the peripherals.

It seems FreeNAS is better used installed directly on the system.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Sorry for the short answers, but I am working from my phone.
PCI passthrough is not supported by the virtual machine infrastructure in FreeNAS.
Your plan for FreeNAS in a VM is workable, but it will need some adjustments.
Please look at the build by @Stux, as it uses VMware with FreeNAS inside a VM.
 

Bytales

Dabbler
Joined
Dec 17, 2018
Messages
31
No problem, I appreciate the answers. Can you post a link to his build when you have time? The dude has so many posts here on the forum, I'm not sure which one it is.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
No problem, I appreciate the answers. Can you post a link to his build when you have time? The dude has so many posts here on the forum, I'm not sure which one it is.
These are the two most important things, but there is a lot more in the links in my signature that you should review.

Build Report: Node 304 + X10SDV-TLN4F [ESXi/FreeNAS AIO]
https://forums.freenas.org/index.ph...node-304-x10sdv-tln4f-esxi-freenas-aio.57116/

"Absolutely must virtualize FreeNAS!" ... a guide to not completely losing your data.
https://forums.freenas.org/index.ph...ide-to-not-completely-losing-your-data.12714/
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
1x 10 Gbit port is given to ESXi and is what the Windows VMs use.
1x 10 Gbit port is made into a vSwitch and given to the FreeNAS VM.
Put both on the same vSwitch and use IP-based load balancing. You're not getting anything out of "dedicated" NICs.
The 10 TB WD Gold is "passed through" as an RDM with the command
vmkfstools -z /vmfs/devices/disks/diskname /vmfs/volumes/datastorename/vmfolder/vmname.vmdk
from the ESXi console, as shown here:
https://kb.vmware.com/s/article/1017530
The boot drive for FreeNAS is a 16 GB virtual disk on the SSD where ESXi is installed. 16 GB of RAM and 16 vCPUs are also assigned.
Oh boy, where to start...
RDM is NOT passthrough. If you care about the data (obviously you don't, with only one disk), you NEED to pass the disk controller through to the VM. Also, I don't know why you used the CLI to do this, but hey, why not... 16 GB of RAM is fine, but 16 cores is more than FreeNAS will ever care to use. Due to the way CPU cores are scheduled, you should only allocate the minimum number needed for the job; in your case, no more than 4. It does not matter how many you have in your system; you should never allocate more than needed, otherwise you're making it harder to schedule other VMs. Please don't make me get technical on this...
1) With the Raw Device Mapping done as shown above, does FreeNAS have direct access to the 10 TB HDD, as it should?
No.
2) Can you point me to a place where I can learn more about datastores, snapshots, and shares? I'm still learning to get the gist of FreeNAS.
Look at VMware's knowledge base for VMware datastores, snapshots, etc. DO NOT use VMware snapshots unless you are deleting them the same day. For FreeNAS, look into https://www.ixsystems.com/documentation/freenas/
4) The FreeNAS VM has a 10 Gbit network to itself, even though my switch is only Gigabit speed, so no shortcuts here either.
If you keep all of your VMs on the same vSwitch, they are all at least 10GbE speeds. It sounds like
1) Can I pass through GPUs to the VMs?
In theory yes, but virtualization on FreeNAS is NOT mature; it is poorly implemented and not ready for serious workloads or odd setups like yours.
2) Can I pass USB 3.0 controllers?
Same as the GPU. Passthrough works by passing the raw PCI device into the VM. If the device is capable of this, it doesn't matter whether it's a USB controller, a GPU, or some other odd PCI device. Remember: full PCI devices, NOT ports.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
It may also be useful to note in advance that if you pass through a disk controller card, you will not be able to enable the VM extensions for the FreeNAS VM, meaning you will be restricted to using only jails, with no VMs/Docker VMs.
 

Bytales

Dabbler
Joined
Dec 17, 2018
Messages
31
It's not that I don't care about my data; it's just that it's all I have for now, and I plan to add more HDDs in the future. Thanks, I will read up on all the links. It seems the best way to do it is by doing a direct passthrough.

I find it too bad that FreeNAS can't do PCI passthrough; otherwise I would have made the gaming Windows VMs from within FreeNAS.

The cores are virtual CPUs, and the 32-core EPYC has 64 threads, hence 64 virtual CPUs, so 16 vCPUs are obviously 8 real cores. You think that is overkill, and that I should use 8 vCPUs (4 physical cores) for the FreeNAS VM?

Regarding datastores and snapshots, I was of course referring to the FreeNAS ones.

So by the looks of it, I'm stuck with making ESXi work and using FreeNAS as a virtual machine in ESXi.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
It may also be useful to note in advance that if you pass through a disk controller card, you will not be able to enable the VM extensions for the FreeNAS VM, meaning you will be restricted to using only jails, with no VMs/Docker VMs.
I don't know that that's true. If the CPU supports all the needed extensions, ESXi will happily perform nested virtualization.
the best way to do it is by doing a direct passthrough.
The only supported way is PCI passthrough.
I find it too bad that FreeNAS can't do PCI passthrough
FreeNAS does not, but the hypervisor it uses does. bhyve is growing and maturing; it's just WAY behind most other hypervisors.
so 16 vCPUs are obviously 8 real cores
That's not how CPU scheduling works.
You think that is overkill, and that I should use 8 vCPUs (4 physical cores) for the FreeNAS VM?
I know you should set the CPU count to 4. Let the scheduler do its job of CPU scheduling. Don't overthink it. That's my job.
So by the looks of it, I'm stuck with making ESXi work and using FreeNAS as a virtual machine in ESXi.
Nothing wrong with that; just use PCI passthrough and you're set. Remember, the only VMDK-backed storage FreeNAS should use is for the boot disk. Are you planning to use PCI passthrough for the NVMe drives, or are you just using RDMs? In that case, RDM is fine. If you want to use VMware snapshots, use virtual mode RDM; physical RDM will not support VMware snapshots.
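The virtual vs. physical RDM distinction mentioned here maps to two vmkfstools flags. A sketch from the ESXi shell, with placeholder names in angle brackets:

```shell
# Virtual compatibility mode (-r): SCSI commands are virtualized by the
# hypervisor, so VMware snapshots of this disk are possible
vmkfstools -r /vmfs/devices/disks/<naa.id> \
    /vmfs/volumes/<datastore>/<vm-folder>/disk-vrdm.vmdk

# Physical compatibility mode (-z): SCSI commands pass through to the
# device; no VMware snapshots of this disk
vmkfstools -z /vmfs/devices/disks/<naa.id> \
    /vmfs/volumes/<datastore>/<vm-folder>/disk-prdm.vmdk
```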

WORD OF WARNING: if using PCI passthrough, do not remove, add, or rearrange your PCI/NVMe devices, as this may change the device IDs, and ESXi may not like that. It would not cause data loss, just require reconfiguring things (in most cases).
 

Bytales

Dabbler
Joined
Dec 17, 2018
Messages
31
So you mean to say 4 vCPUs would be enough for FreeNAS, no matter how many terabytes it is handling?
Why have I heard then of FreeNAS boxes with quad-core or octo-core CPUs?
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
So you mean to say 4 vCPUs would be enough for FreeNAS, no matter how many terabytes it is handling?
Why have I heard then of FreeNAS boxes with quad-core or octo-core CPUs?
The CPU has nothing to do with the quantity of data. The only reason to have more CPUs (cores) is for running jails, VMs, or busy services. I could build a NAS that has only 500 GB but needs 4 GHz+ 10-core CPUs. I even removed a CPU from my old server to save power, and there was ZERO impact on performance. That NAS was serving 8+ VMs over dual 10GbE and a handful of small jails.

If your NAS were running a few jails, serving lots of people, and supporting VMs, then more CPUs would make sense. Just for fun, try setting it to 1 CPU and test performance. You could even set a CPU limit and vary between the number of cores and "clock" speed to see how it impacts performance. In general, when building virtual servers for work, I default to two vCPUs unless I know I will need more, and I only know from testing and vendor guidelines. In this case, the guideline is just "multicore 64-bit processor". More won't hurt in a physical build (unless you trade GHz for cores to an extreme degree), but that changes with virtualization. You have to share your resources, and to do that effectively, you need to know how those resources are shared.

Just because people do things does not make it correct or logical.
 

Bytales

Dabbler
Joined
Dec 17, 2018
Messages
31
Well, it's a bummer: passing through the mainboard SATA controller crashes the ESXi host, so I guess I'll need to use the non-recommended option:
vmkfstools -z /vmfs/devices/disks/diskname /vmfs/volumes/datastorename/vmfolder/vmname.vmdk

I don't get it. If this option works, why would it not work at some point in the future, as people say?

Also, I want to know: if I use the HDD in FreeNAS "passed through" like this, are the bits written on the HDD exactly as if the HDD were used in a direct FreeNAS installation?

What I mean to say is: if I use the HDD like that, then pull it out, insert it in another PC, and install FreeNAS there, would the HDD have exactly the file structure recognized by FreeNAS?

Because the way I see it, this command points the FreeNAS VM to use the whole HDD directly, kind of like it's attached directly to the FreeNAS VM. Only it's not 100% direct, because stuff like SMART isn't available.

I think this is my only way to use the HDD in the FreeNAS VM, if I can't figure out a way to make the passthrough work correctly.
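One way to test the portability question above: ZFS writes its labels to the disk itself, so a pool created on a whole disk through a physical-mode RDM should, in principle, be importable on another machine with the disk attached directly. A hedged sketch of the check from a FreeNAS shell, with a hypothetical pool name:

```shell
# On the other machine, scan attached disks for importable ZFS pools
zpool import

# If the pool (e.g. "tank", hypothetical name) is listed, import it
zpool import tank

# Confirm the datasets came across intact
zfs list
```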
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Using -z as opposed to -r is indeed important, and *SHOULD* result in data being written to the disk as though it were directly connected to a "passed" HBA. This is still a gray area as far as SMART and other non-trivial drive operations go. Please test to see how the drives appear in FreeNAS, as well as the SMART data, and report back!

OR

Go buy a supported HBA for $20 and do it the supported, known-not-to-kill-your-data way.
 

Bytales

Dabbler
Joined
Dec 17, 2018
Messages
31
A supported HBA? Any links for what that might be? It's a shame, since I have 16 SATA ports on my motherboard. Since PCI passthrough busts ESXi, and I've busted my Windows VMs as well (they need reinstalling; either the image is not displayed properly on the monitor, or they don't start properly), I think I'll use the RDM as described above for now.
 

Bytales

Dabbler
Joined
Dec 17, 2018
Messages
31
Managed to set up FreeNAS and created a striped ZFS pool from the single 10 TB WD Gold HDD, given to the FreeNAS VM as a Raw Device Mapping. Supposedly, as written here:
https://docs.vmware.com/en/VMware-v...UID-4236E44E-E11F-4EDD-8CC0-12BA664BB811.html
Raw Device Mapping is of two kinds, physical and virtual, whereas physical presents the drive to the VM as if it were a physical connection? Maybe similar to a direct passthrough.

I have tried an ESXi 6.7U1 custom image from Dell; this ESXi seemed to work better for my Windows VMs, and I made the FreeNAS VM there with an RDM. I never bothered to try a direct passthrough with this version of ESXi, because the previous version would crash the host once a VM was started with the passed-through HDD.

I'm going to try a Hiren's Boot CD to see how the Windows environment sees the 10 TB drive. If it sees it as a virtual disk, I know it's not a direct connection between the disk and the FreeNAS VM; but if it sees the ZFS pool, then I know that physical RDM is perhaps similar to a direct passthrough, or at least more similar than a virtual Raw Device Mapping.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Physical presents the drive to the VM as if it were a physical connection? Maybe similar to a direct passthrough.
This has more to do with SCSI command translation. In "virtual" RDM mode, the hypervisor hijacks the commands from the VM to the drive. This allows for things like snapshotting the disk and some other features. Physical mode *SHOULD* just pass the SCSI commands/sense codes to and from the disk through the virtual SCSI controller. The concern is that we don't know exactly how that controller handles retries and error codes.

In the FreeNAS shell, please run camcontrol devlist and smartctl --all against the disk device.
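The requested diagnostics might look like this in the FreeNAS shell (the /dev/da1 device name is a guess; check the camcontrol output for the real one):

```shell
# Enumerate disks as the CAM layer sees them; an RDM disk typically
# shows up with a VMware vendor string rather than the WD model name
camcontrol devlist

# Query full SMART data for the suspected RDM disk; if SMART commands
# are not passed through, this will report that SMART is unavailable
smartctl --all /dev/da1
```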
 