I’m going to go with yes. I don’t have a particular idea in mind at the moment, but I’d rather build it with the capability from the start.
Just to clarify, this would be for giving other guests mass storage on the FreeNAS pool? Or giving ESXi storage on the pool to install guests on?
If I want to give other guests mass storage, as previously stated, does that have to be passed back through ESXi?
Like I said, I'm a huge noob at this, but everyone has to start somewhere.
Don't overthink it. Aside from the small ESXi datastore used to boot your FreeNAS VM, think of FreeNAS as a separate storage server. If you want to provide storage to ESXi to house VMs, use iSCSI. If you want a file-level share, set up SMB/FTP/AFP.
I don't remember if Stux covers this, but be sure to use VMXNET3 network adapters for FreeNAS. You'll also need to do some reading on iSCSI, multipathing, and iSCSI vmkernel port binding. I know it sounds like a lot, but it's not bad once you've done it 50 times.
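The port-binding part, at least, is only a handful of commands on the ESXi side. A rough sketch; the adapter and vmkernel names (vmhba64, vmk1, vmk2) and the portal address are placeholders for whatever your host actually shows:

```shell
# List vmkernel interfaces; you want one per physical NIC on the iSCSI network.
esxcli network ip interface list

# Bind each iSCSI vmkernel port to the software iSCSI adapter.
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2

# Point the adapter at the FreeNAS portal, then rescan.
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.10.10:3260
esxcli storage core adapter rescan --adapter=vmhba64
```

With two bound paths to the target, ESXi's round-robin path selection policy is what actually gives you the multipathing.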
1) What is the difference between mounting an SMB/NFS share in a guest vs. sending it back to ESXi to be given to a guest as a virtual disk? If I understand correctly, ESXi is still mounting that as NFS, but then providing it to a guest as a virtual disk. What is the difference? All I can figure is that the guest sees it as a normal hard drive and thus uses its virtual SATA controller to read and write data, which should provide greater speed and IOPS. But then it hits ESXi -> FreeNAS as NFS, so how is it really different? Doesn't that just become the bottleneck?
"mounting a SMB/NFS share in a guest vs sending it back to ESXi to be given to a guest as a virtual disc" That's about it. It all depends on where your data is and how to best consume it for a given application. Don't overthink it, it's no different that having all separate servers (kinda-sorta-not really).
"virtual SATA controller" most VM versions preset to SCSI of some sort. For you boot drives leave the defaults, for other drives, add a pvscsi controller and connect your vmdk to that. Don't forget VMware tools or the disk may not show up as it includes the drivers.
2) Can you mount the same datastore as a virtual disk to multiple guest VMs? Or do they have a 1:1 relationship, as in once a datastore is given as a virtual disk to one guest, it can no longer be given to others?
You don't mount the datastore to a VM. The datastore holds all of your VMs and their files, including the VMDKs (the virtual disks). ESXi mounts the datastore via NFS (don't use this, please), iSCSI, Fibre Channel, direct-attached disk/array, FCoE, etc.
Put another way, VMs are files, files get saved to a datastore.
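If you open a shell on the ESXi host and look inside a datastore, that is literally all you see (the VM and datastore names here are made up):

```shell
ls /vmfs/volumes/datastore1/ubuntu-vm/
# ubuntu-vm.vmx        <- the VM's configuration
# ubuntu-vm.vmdk       <- virtual disk descriptor
# ubuntu-vm-flat.vmdk  <- the actual disk data
# plus ubuntu-vm.nvram, vmware.log, and so on
```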
3) Can other guests access a datastore that is given to a guest as a virtual disk via SMB? I want to give Ubuntu a virtual disk for Syncthing to increase its IOPS performance (assuming that is the best way), but my Windows gaming/photo-editing machine will need SMB access to that data.
See "2)" also SMB and datastores do not have anything to do with each other. A datastore is just a filesystem where VMs get saved. So if I make a windows file server VM with three disks, one for boot, one for the finance files (F:) and one for engineering files (E:), I can use SMB in windows to share those drives and the files on them. But nobody outside of ESXi (or FreeNAS if using NFS) has any clue that those drives (C:,F:,E:) are just vmdk files.
"I want to give Ubuntu a virtual disc for syncthing to increase its IOPS performance (assuming that is the best way) but my windows gaming PC/photo editing machine will need SMB access to that data."
In this case you can give Ubuntu a virtual disk and share the folder (from within Ubuntu) over SMB, or you could use an SMB share on FreeNAS and suffer the small performance hit. The biggest factor in IOPS will be your zpool anyway. Again, how would you do this with three physical computers: an Ubuntu server, a Windows desktop, and a FreeNAS server?
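If you go the share-from-Ubuntu route, the Samba side is small. A sketch, assuming the fast virtual disk is mounted at /mnt/syncthing-data and your user is "youruser" (both made-up names):

```shell
sudo apt install samba

# Add a share definition to /etc/samba/smb.conf:
#   [syncthing]
#      path = /mnt/syncthing-data
#      read only = no
#      valid users = youruser

sudo smbpasswd -a youruser      # Samba keeps its own password database
sudo systemctl restart smbd
```

Then map \\ubuntu-ip\syncthing from the Windows machine like any other share.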
4) Should an Optane 800P work for this use case? This system really is just a homelab; my most intensive workload will be Syncthing, a little Plex action (which is mostly reads, not writes), and maybe a few other cool things down the road as I learn more about what is possible. I am absolutely not worried about the 800P's write endurance of ~350 TB. My FreeNAS pool is only 10x4 TB, and not much ever gets deleted; it just slowly fills up until one day I'll have to think about larger drives or another vdev.
What kind of VMs will you be running? How many users? I can't recommend a specific SLOG drive, but you don't need to go nuts either.
5) I noticed you gave your 16 GB system 64 GB of L2ARC, which I always thought people said not to even bother with until you have an "abundant" amount of RAM. If the 800P is enough, and I am looking at the 58 GB model, would it make sense to take the same approach? 20 GB to SLOG, a few GB to swap, and the rest to L2ARC?
As for L2ARC and RAM, you generally won't get much out of it unless you're running a fair number (5+) of VMs with a consistent workload. My guideline here is to add RAM until it's unfeasible. Then look into your working set size and do some math to find the smallest possible L2ARC that suits your needs, because the L2ARC uses ARC (RAM) to map its contents, and you want to keep as much in RAM as you can.

As for sharing the drive: the SLOG will never be more than a few GB, even on extremely fast and busy servers. Many people go with larger drives simply because they are faster. Also, it's considered a bad idea to share one device between SLOG and L2ARC, but you are free to experiment (with extreme caution).

You already have swap: a small part of each disk in your pool is dedicated to it. If you're using swap, get more RAM.
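To put rough numbers on that header overhead: a common approximation is ~70 bytes of ARC header per L2ARC record (the exact figure varies by ZFS version; older builds used more). For the 58 GB 800P with 20 GB carved off for SLOG:

```shell
# ~70 bytes of RAM per L2ARC record is an approximation, not a spec.
l2arc_bytes=$(( 38 * 1024 * 1024 * 1024 ))    # ~38 GB left after the SLOG
recordsize=$(( 128 * 1024 ))                  # 128 KiB default recordsize
headers=$(( l2arc_bytes / recordsize * 70 ))
echo "RAM for L2ARC headers @128K records: $(( headers / 1024 / 1024 )) MiB"

# Small-block workloads (e.g. 16 KiB zvol blocks for iSCSI) cost 8x the RAM:
headers_small=$(( l2arc_bytes / (16 * 1024) * 70 ))
echo "RAM for L2ARC headers @16K records:  $(( headers_small / 1024 / 1024 )) MiB"
```

The cost scales with L2ARC size and inversely with record size, so on a 16 GB box a big L2ARC full of small blocks can quietly burn a meaningful chunk of the RAM you'd rather spend on ARC itself.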
That came out of left field! I'm not an expert here, but I'm willing to bet that ESXi has better documentation. Just keep your version in mind when searching. Also be mindful of the fact that you are running a standalone host, not vSphere with vCenter.