Halfway done low-budget ESXi build


Shockwaver

Dabbler
Joined
Mar 21, 2018
Messages
31
Hello everyone!
I'm currently building a low-budget home server that will host a few VMs running on ESXi, such as a Gateway/Firewall VM, a General Purpose VM and a video security recording VM.
At the current stage everything works fine (on a non-server CPU, non-ECC RAM, HDD passthrough to the VCC VM) except for the simple fact that it doesn't have FreeNAS in it :(
Therefore I decided to rebuild the system around the FreeNAS requirements, trying to keep the budget as low as possible while still, of course, getting the fundamentals for FreeNAS (ECC RAM) and some enhancements (SLOG drive). The other VMs have no very particular needs, except of course getting as much CPU and RAM (also in terms of speed) as will be left for them.

As of now I got:
CPU: Intel Xeon E3-1230 v6 (edit: by mistake I originally wrote 1220, which has 4 cores and 4 threads; that's why later replies also focus on the lack of cores)
RAM: 4x Crucial CT16G4WFD824A (4x16GB DDR4-2400 UDIMM ECC CL17)
NAS HDD: 3x 2TB Seagate IronWolf (for now)
GPU: the cheapest video card around

What I was planning/hoping to carry over from the current configuration:
OS/VM drive: Samsung 960 Pro NVMe 512 GB
VCC HDD: an old 4TB 7200rpm (for as long as it lasts)

Now my concerns are about the MoBo, the SLOG device (I'd really like one since I have a *real* GigE connection coming into the machine) and, if needed, a SATA controller, all of them for the roughly $350 (or €400 if you prefer) left in the budget.

First off, a couple of hard requirements:
I need the 4TB VCC HDD passed through to the VCC VM, so it must not conflict with the passthrough of the NAS HDDs or with the bare-metal NVMe.
Same goes for the SLOG device.
I'd also like to squeeze those 2400MHz out of the RAM.

MoBo:
I was thinking about the Asus P10S WS (Datasheet), even with 'only' 2133MHz RAM support, or the Supermicro MBD-X11SSH-LN4F (Datasheet) with 2400MHz RAM support.
Given one of these two, I'm trying to figure out if and how the passthrough will work out: I need to pass the 3 NAS drives to the FreeNAS VM and the single 4TB drive to the VCC VM. Will I be able to do so, or does passing them involve the whole SATA controller, which I can't redirect to 2 different VMs? Still studying the docs, but I could use some help...

SLOG device:
Here my concerns are about the price, the size, the type and the connection.
I was reading in the FreeNAS docs that SLOG usage will cap at 16GB before committing the writes to the disks, or was it 32? Anyhow, this tells me I can save money by keeping the disk size close to the minimum required, or thereabouts (i.e. I don't think I will even need 64GB). So I would need some advice on what to look for that is resilient enough to be an SLOG device, not so big that it raises the cost, and fast enough not to be the bottleneck of my transfers at 1Gbps.
Finally, since I have to pass it through to the FreeNAS VM, I will need a port that will not conflict with the other passthroughs and the Samsung 960 Pro NVMe M.2.
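
To put some numbers on the sizing question, here is a rough back-of-the-envelope sketch, assuming the commonly cited rule of thumb of holding a few transaction groups' worth of writes at line rate (ZFS flushes a transaction group roughly every 5 seconds by default); the group multiplier is just an assumption to tweak:

```python
# Rough SLOG sizing sketch. Assumptions: ZFS flushes a transaction group about
# every 5 seconds by default, and a common rule of thumb is to hold a few
# such groups of sync writes at line rate. Adjust the numbers to taste.

def slog_size_gib(link_gbps=1.0, txg_seconds=5, txg_groups=3):
    bytes_per_second = link_gbps * 1e9 / 8      # 1 Gbps is roughly 125 MB/s
    needed_bytes = bytes_per_second * txg_seconds * txg_groups
    return needed_bytes / 2**30

print(f"{slog_size_gib():.2f} GiB")             # ~1.75 GiB for one 1 Gbps link
```

In other words, even a small device is plenty capacity-wise for a single gigabit link; write endurance and power-loss protection matter far more than size.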

My kindest thanks for the attention and any possible help. :)
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112

Shockwaver

Dabbler
Joined
Mar 21, 2018
Messages
31
Thanks for the reply.
It makes sense...
In other words I would need a total of 3 SATA controllers: 1 for the hypervisor, 1 for FreeNAS and 1 for the VCC VM.
That means that if the MoBo has 3 of them on board I wouldn't need a separate HBA and, more generally, that the number of HBAs I need equals the number of passthroughs I want minus the number of SATA controllers embedded in the MoBo, correct?

I will now take a look at the thread about SLOGs, thank you again!
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
What is a VCC VM?
ESXi is more than happy to boot from USB; the FreeNAS and vCenter (I think that's what you mean) VMs can reside on the same datastore, and I thought that's what the NVMe was for. From there, you would pass your HBA and the attached disks to the FreeNAS VM. What does everyone think about the SLOG on a VMDK backed by the NVMe? Not ideal, but still better than sync=disabled.

Maybe I'm missing something; I haven't had my coffee yet.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Also, your CPU NEEDS more cores! A quad core for ESXi, FreeNAS, and anything else will choke under almost any load. FreeNAS needs two cores, and that leaves two for everything else before you even start looking at CPU ready times.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
In other words I would need a total number of 3 SATA controllers: 1 for the hypervisor, 1 for the FreeNAS and one for the VCC VM.

In your case, you can boot ESXi from USB, provision the NVMe drive as a VMFS datastore (hopefully?) and pass the SATA controller through to your VCC VM. You would then only need one additional controller (the SAS HBA) to pass through to FreeNAS, and you're all set.

What does everyone think about the SLOG on a VMDK backed by the NVMe? Not ideal but still better than sync=disabled.

I'd say "false sense of security" and probably "don't do it."
 

Shockwaver

Dabbler
Joined
Mar 21, 2018
Messages
31
My bad!
VCC is an old habit of mine for saying CCTV, so it's a VM used to record the streams from IP cameras.
ESXi AND the VMs will reside on the NVMe.
My understanding was that I would pass the SLOG device through together with the NAS disks, on the same HBA controller.
Of course you are right about CPU cores, but as I said it's a low-budget build for now, so I was planning not to reserve any CPU core for any of the VMs; ESXi will balance the load toward whichever VM is more demanding at the moment... I could set priorities though.
Next step: at least 8 cores, but not for now.
 

Shockwaver

Dabbler
Joined
Mar 21, 2018
Messages
31
In your case, you can boot ESXi from USB, provision the NVMe drive as a VMFS datastore (hopefully?) and pass the SATA controller through to your VCC VM. You would then only need one additional controller (the SAS HBA) to pass through to FreeNAS, and you're all set.
Wait... ATM I am keeping ESXi AND the VMs all on one single datastore on the NVMe successfully! I mean... it's been up and running for 1 year in this configuration already, am I doing it wrong?
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Wait... ATM I am keeping ESXi AND the VMs all on one single datastore on the NVMe successfully! I mean... it's been up and running for 1 year in this configuration already, am I doing wrong?
Not wrong, just not ideal. It's convenient and a little bit safer to keep your boot and data media separate.
Of course you are right about CPU cores, but as I said it's a low budget shot by now, so I was planning not to reserve any cpu core for any of the VMs, ESXi will balance the load for the more demanding one at the moment... I could give priorities thou.
It's important to only give the absolute minimum number of cores to each VM. Due to the way the CPU scheduler/x86 virtualization works, if you have a 4 core CPU and two VMs each with 3 cores, only one VM can run per scheduling cycle. This is because a VM needs its full number of configured cores to be available in order to execute. If FreeNAS has 2 cores, two other single-core VMs can run in the same cycle, assuming ESXi itself is idle during that cycle.
More information: https://docs.vmware.com/en/VMware-v...er-server-65-monitoring-performance-guide.pdf
Search the doc for "CPU Ready"
I hope that makes some sense to you. I'm not saying this won't work or that it's a stupid idea; I just want you to know the limitations going in.
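
To illustrate the point, here's a toy sketch of strict co-scheduling (real ESXi uses relaxed co-scheduling, so it's not quite this rigid, but the effect on wide VMs is similar): a VM only gets a time slot when all of its vCPUs can land on physical cores at once.

```python
# Toy model of strict vCPU co-scheduling. Real ESXi uses "relaxed"
# co-scheduling, so this is only an illustration of the underlying problem:
# a wide VM has to wait until enough physical cores are free for ALL its vCPUs.

def vms_runnable_per_slot(physical_cores, vm_vcpus):
    """Greedily schedule VMs for one time slot; the rest accumulate CPU Ready."""
    free = physical_cores
    running = []
    for name, vcpus in vm_vcpus:
        if vcpus <= free:          # all of this VM's vCPUs fit, so it runs
            running.append(name)
            free -= vcpus
    return running

# 4-core host, two 3-vCPU VMs: only one of them runs per slot
print(vms_runnable_per_slot(4, [("freenas", 3), ("cctv", 3)]))      # ['freenas']

# Same host, right-sized VMs: all three run in the same slot
print(vms_runnable_per_slot(4, [("freenas", 2), ("cctv", 1), ("gw", 1)]))
# ['freenas', 'cctv', 'gw']
```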
 

Shockwaver

Dabbler
Joined
Mar 21, 2018
Messages
31
If FreeNAS has 2 cores, two other single core VMS can run in the same clock cycle assuming ESXi itself is idle during that clock.
More information: https://docs.vmware.com/en/VMware-v...er-server-65-monitoring-performance-guide.pdf
Search the doc for "CPU Ready"
I hope that makes some sense to you
That surely makes sense.
Well then, after building up the system I will keep an eye on that chart for some time. If any of the VMs shows high CPU ready time and load, I will redistribute my poor and lonely cores in a better way than just giving 4 vCPUs to all the VMs.
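
For reference, a small sketch of how the raw CPU Ready numbers from the performance charts are usually converted to a percentage; the 20-second interval is the real-time chart's sample interval, and dividing by the vCPU count (to judge per-vCPU readiness) is a common refinement rather than anything official:

```python
# Convert the CPU Ready "summation" value (milliseconds per sample) that the
# ESXi/vCenter real-time charts report into a percentage. Assumptions: the
# real-time sample interval is 20 s, and we divide by vCPU count to get a
# per-vCPU figure (a common refinement, not an official formula).

def cpu_ready_percent(ready_ms, interval_s=20, vcpus=1):
    return ready_ms / (interval_s * 1000.0 * vcpus) * 100.0

print(cpu_ready_percent(ready_ms=2000, vcpus=4))    # 2.5  -> generally fine
print(cpu_ready_percent(ready_ms=16000, vcpus=4))   # 20.0 -> heavy contention
```

A few percent is usually fine; double digits means the VM spends a lot of its time waiting for cores.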
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
I agree with kdragon75: if you haven't already purchased the CPU, up it to an E3-1230 V6, which is 4 cores / 8 threads. I didn't even know they made E3 CPUs without HT. I also agree on the Supermicro MB, a known quantity with lots of support around.

Depending on the number of camera streams (<10) you likely won't need much performance; an easier method would be to set up the 4TB as a VMDK rather than worry about passthrough. It's basically just bulk storage.

That leaves you with:
BOOT: USB drive
VM store: 960 PRO
CCTV: VMDK on 4TB HDD
SATA Drive Controller: Passthrough to FreeNAS VM

If you want to pass the onboard SATA controller through to FreeNAS, that won't work on ESXi past 6.5 U2; 6.7 does not currently allow it. If you want to run 6.7 it would require an added controller that you can pass through.

It's highly likely you won't need a SLOG on a basic build. Since you can add it afterwards, it might be best to start without one and see what your performance is like.
 

Shockwaver

Dabbler
Joined
Mar 21, 2018
Messages
31
Oops... again my bad! It actually IS the 1230 v6 with HT!!
Sorry!
About CCTV, I'd rather keep the storage off the NAS for a couple of reasons:
1. It's an almost constant stream from 4x 1080p and 2x 4K cams. I'd like not to keep the NAS busy with it, as it is a useless stream of data.
2. I'd like to manage the drive from the same (Win) environment from which I'm accessing the camera application via RDP.
As for the SLOG, I'd like to set it up while I'm at it...
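
For what it's worth, a quick back-of-the-envelope sketch of what that constant stream means for the 4TB drive; the bitrates below are placeholder assumptions, not my cameras' actual settings:

```python
# Rough retention estimate for the CCTV drive. The per-camera bitrates are
# assumptions for illustration only; plug in whatever the cameras really use.

def retention_days(disk_tb, stream_mbps):
    total_mbps = sum(stream_mbps)
    bytes_per_day = total_mbps * 1e6 / 8 * 86400
    return disk_tb * 1e12 / bytes_per_day

# e.g. 4x 1080p at ~4 Mbps and 2x 4K at ~15 Mbps, recording continuously
print(f"{retention_days(4, [4]*4 + [15]*2):.1f} days")   # roughly 8 days
```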
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457

Shockwaver

Dabbler
Joined
Mar 21, 2018
Messages
31
That would indeed be the worst way to configure things.
Alright, some background is now needed...
Apart from the CCTV and the GW VMs, which are constantly processing even though at 1~2% CPU load (out of the 8 vCPUs of an i7 6700K given to each), the other VMs are basically idling most of the time. From time to time I will connect to my general purpose machine for some compiling, from time to time I will move ISOs or videos to the pool, and the SVN server will also do some work now and then. What I mean is that chances are my VMs won't often find themselves contending for the CPU. On the contrary, I'd like to squeeze all the CPU cores when I connect to a VM via terminal server or VNC, getting the most processing power possible at that moment. This was the idea behind assigning 8 vCPUs to all of them. Am I thinking about this wrong?
 

Shockwaver

Dabbler
Joined
Mar 21, 2018
Messages
31
If you want to pass the onboard SATA controller to FreeNAS that won't work on ESXi past 6.5 U2, 6.7 does not currently allow that. If you want to run 6.7 it would require an added controller that you can passthrough.
And that... that is the worst info you could give me... it means I definitely need 2 separate HBAs. Period.
Am I at least able to pass through the onboard NICs?? Because if not, things get really ugly here... is this info available in the ESXi docs?
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
all right some background is now needed...
Apart from the CCTV and the GW VMs that are in constant processing even though with 1~2% CPU load (out of an i7 6700k 8 vCPU given to each) the other VMs are basically idling for most of the time. From time to time I will connect to my general purpose machine for some compiling and from time to time I will move ISOs or vids to the pool... the svn server also will do some job from time to time. What I mean is that chances are my VMs will not much find themselves contending the CPU. On the contrary I’d like to squeeze all the CPU cores when I will terminal server or VNC a VM getting the most processing power possible in that moment. This was the idea behind assigning 8 vCPU to all of them. Am I thinking wrong?
You will actually get better CPU utilization with fewer cores. The pfSense VMs (depending on what they're used for) will not benefit from more than 2 cores and really only need one. Trust me, this is a big part of my job... The VMs will be FASTER with FEWER cores. I understand you can lead a horse to water but you can't make it drink...
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
that means I surely need 2 separate HBA. Period.
Is there a technical reason the VCC needs a passthrough disk?
Am I at least able to passthrough the onboard nics??
What is this for? Again, is there a technical requirement?
It seems you are making things far more difficult than you need to.
 

Shockwaver

Dabbler
Joined
Mar 21, 2018
Messages
31
I wish I was...

The CCTV VM needs low-level handling of the disk where it stores the video streams, since the application (which is a storage server) won't accept a NAS destination and won't behave with iSCSI. The alternative would be setting a quota for each camera and making them point to an NFS share directly, but I would lose the storage wrap-around capability the storage server has, and I would need to re-initialize the storage space every 3 weeks or so. Also, I don't want FreeNAS to constantly process the video streams.

On the other hand, OPNsense won't behave properly with a bridged adapter when it comes to handling an annoying PPPoE connection, which is the one I need to access the internet.

 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
For the CCTV VM, if you add a VMDK (virtual disk) it will look like a normal local disk, even if it's backed by iSCSI.

For OPNsense, VMware does not use bridges; it's a proper virtualized switch. If you need promiscuous mode or forged MAC transmits, you can enable those too. Also, virtualized gateway routers tend to be a PITA.
 

Shockwaver

Dabbler
Joined
Mar 21, 2018
Messages
31
an easier method would be to setup the 4TB as a VMDK rather than worry about passthrough. It's basically just bulk storage
For the CCTV VM if you add a VMDK (Virtual disk) it will look like a local normal disk
I guess you're both right, it should easily work...

For open sense, VMware does not use bridges. It's a proper virtualized switch
I tried it and for some reason it didn't work; PPPoE didn't handshake successfully. Truth be told, I gave up fairly quickly and used the passthrough... I could give it another try.

However, with one less disk passthrough needed, things work out fine even if I eventually end up passing one NIC to the GW again.
 