ESXi Question

Status
Not open for further replies.

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
So I am somewhat frustrated with the state of jails and bhyve, but am pretty happy with FreeNAS itself. Granted, I am a super noob level user and that may be the actual issue. But that being said, I was able to set up a VPN on a Raspberry Pi in next to no time, and have still never successfully gotten one working in a FreeNAS jail.

Point is, I am really considering running FreeNAS and other guests under ESXi, although, being the noob I am, I have never done this. Thus, questions...

I understand I will want more RAM, and possibly even a Xeon over the i3 I have now. These are easily addressable issues if I pursue this path. But what I am currently unsure about is how to pass HDDs through ESXi to FreeNAS. Do you have to pass an entire controller, or can you pass specific drives? I currently have an HBA with 2 mini SAS going to 8 SATA, but I use a couple of my mobo’s SATA ports as well for my FreeNAS pool. I plan to run my current 4TB x 10 vdev for FreeNAS, and a couple SSDs for various guests under ESXi. With my current hardware I would have to use the HBA for the FreeNAS pool and the mobo controller for the FreeNAS pool + other guests. Is this possible? If not, recommendations?


 

silverback

Contributor
Joined
Jun 26, 2016
Messages
134
I don't think any motherboard ports can be passed through unless it is an onboard HBA. So you will probably need another HBA to pass through more than 8 disks.
Is it not possible for you to run a VPN on your firewall/router?
 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
I don't think any motherboard ports can be passed through unless it is an onboard HBA. So you will probably need another HBA to pass through more than 8 disks.
Is it not possible for you to run a VPN on your firewall/router?

I don’t remember what the mobo has off hand, but it’s a server chipset. Not sure if that counts as “an HBA”, though.

Is it possible? Technically, yes. I run Google WiFi and I can jailbreak it, or whatever the term is, but I don’t want to do that, plus that only addresses one of my concerns. I have a few scripts that ran fine on 9.x and refuse to run now under 11.1 U4, and I can’t even get them to run via webUI cron jobs. I have to manually SSH into my box to run them. I have a bunch of threads on the forum somewhere; people tried to help, but it never worked, so I gave up. ESXi with Fedora/Arch/Ubuntu or the like would be a nice addition, as there are many more tutorials and better support IMO.


 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Short answer is, you want to be passing entire disk controllers through to freenas, not individual drives.

You can sometimes pass in the onboard sata controller, but then you need to work out how to boot your ESXi and where to store the freenas virtual disk.

Ideal situation is to pass the HBA through to FreeNAS, and leave the onboard sata for ESXi.

I assume you've seen my thread, where I go into some detail:
https://forums.freenas.org/index.ph...node-304-x10sdv-tln4f-esxi-freenas-aio.57116/
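For anyone trying to work out which controller is which before committing to passthrough, here is a minimal, untested sketch (assuming lspci is available, e.g. from a Linux live USB on the same box) that just filters the PCI device list down to storage controllers so the HBA and the onboard SATA controller show up side by side; the keyword list is only a guess at how they will be described:

# Rough sketch: list PCI storage controllers so you can tell the HBA apart
# from the onboard SATA controller before deciding what to pass through.
# Assumes `lspci` is available; the keyword list is only a guess at how
# the controllers will be described on your board.
import subprocess

def storage_controllers():
    out = subprocess.run(["lspci"], capture_output=True, text=True, check=True).stdout
    keywords = ("SATA", "SAS", "RAID", "Serial Attached SCSI", "Non-Volatile memory")
    return [line for line in out.splitlines() if any(k in line for k in keywords)]

if __name__ == "__main__":
    for ctrl in storage_controllers():
        print(ctrl)  # e.g. "01:00.0 Serial Attached SCSI controller: LSI ..."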
 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
Short answer is, you want to be passing entire disk controllers through to freenas, not individual drives.

You can sometimes pass in the onboard sata controller, but then you need to work out how to boot your ESXi and where to store the freenas virtual disk.

Ideal situation is to pass the HBA through to FreeNAS, and leave the onboard sata for ESXi.

I assume you've seen my thread, where I go into some detail:
https://forums.freenas.org/index.ph...node-304-x10sdv-tln4f-esxi-freenas-aio.57116/

I have glanced at your thread from time to time. Lot of good info on there, and knowing I need to pass the HBA itself, I either need to find another HBA or figure out how to break mine out into more SATA ports.


 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I have glanced at your thread from time to time. Lot of good info on there, and knowing I need to pass the HBA itself, I either need to find another HBA or figure out how to break mine out into more SATA ports.



Get a SAS Expander. The intel ones are good.
 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
Get a SAS Expander. The intel ones are good.

Ok, that’s easy enough. Except how do they work? Do I have to plug the expander into the HBA via SAS?


 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Yes. You run one or two SAS cables from your HBA to the expander, and then the rest of the expander's ports can be used for running to HDs, or to other expanders.

Some of the Intel SAS expanders might *look* like PCIe cards, but they're not actually, and can simply be put wherever you want and powered via a Molex connector.
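Rough port math, in case it helps; the numbers are illustrative, assuming a 24-lane expander like the Intel RES2SV240 mentioned later in the thread and a typical 2-connector, 8-lane HBA:

# Back-of-the-envelope SAS expander port math (illustrative numbers only).
expander_lanes = 24   # e.g. Intel RES2SV240: 6 x SFF-8087 connectors, 4 lanes each
hba_lanes = 8         # typical "8i" HBA: 2 x SFF-8087 connectors

# Dual-link: both HBA connectors uplink to the expander (more uplink bandwidth).
dual_link_drive_lanes = expander_lanes - 8
# Single-link: one cable uplinks, and the HBA keeps its other connector for drives.
single_link_drive_lanes = (expander_lanes - 4) + (hba_lanes - 4)

print(f"Dual-link uplink:   {dual_link_drive_lanes} drive ports")    # 16
print(f"Single-link uplink: {single_link_drive_lanes} drive ports")  # 24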
 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
Short answer is, you want to be passing entire disk controllers through to freenas, not individual drives.

You can sometimes pass in the onboard sata controller, but then you need to work out how to boot your ESXi and where to store the freenas virtual disk.

Ideal situation is to pass the HBA through to FreeNAS, and leave the onboard sata for ESXi.

I assume you've seen my thread, where I go into some detail:
https://forums.freenas.org/index.ph...node-304-x10sdv-tln4f-esxi-freenas-aio.57116/


I did read through a bit of this this afternoon, plenty more to go. But I am curious about the “need” for a SLOG. Is the need for a SLOG greater in a virtualized situation? I have a random 120GB 840 Evo laying around I could dedicate to SLOG; it doesn’t have power protection, obviously, but from my understanding you would need many things to go perfectly wrong at the same time, and then you only lose whatever was being written at the time; to me that’s not a huge issue.

I actually have 3 120GB Evos sitting on my desk, so in my mind that will be 1 for SLOG, and then maybe 1 for ESXi and 1 for VMs?


 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
Actually, if I am going to do it, I guess go full ass not half ass. Or at least partial ass lol.

Would one of these and a PCIe to M.2 riser card work?

Intel Optane Memory M.2 2280 32GB PCIe NVMe 3.0 x 2 (MEMPEK1W032GAXT) https://www.amazon.com/dp/B06Y2VWP6N/ref=cm_sw_r_cp_api_xApDBbFMCTBW1


Or possibly this?

Intel Optane Memory Module 32 GB PCIe M.2 80mm MEMPEK1W032GAXT https://www.amazon.com/dp/B06XSXX3NS/ref=cm_sw_r_cp_api_wCpDBb39ES9CS

Via:

SilverStone Technology SST-ECM20 Dual M2 to PCI-E X4 and SATA 6G Adapter Card (ECM20) https://www.amazon.com/dp/B01798WOJ0/ref=cm_sw_r_cp_api_hFpDBbJPH0KYF


Turgin

Dabbler
Joined
Feb 20, 2016
Messages
43
Do you intend to provide storage from the freenas VM back to esxi for a datastore? If so, you will probably want a slog. If not, then it depends on the use case, but most likely not.
 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
Do you intend to provide storage from the freenas VM back to esxi for a datastore? If so, you will probably want a slog. If not, then it depends on the use case, but most likely not.

I’m going to go with yes. I don’t have a particular idea in mind at the moment, but I’d rather build it with the capability from the start.

Just to clarify, this would be for giving other guests mass storage on the FreeNAS pool? Or giving ESXi storage on the pool to install guests on?

If I want to give other guests mass storage as previously stated, that has to be passed back through ESXi?

Like I said, I’m hugely a noob with this, but everyone has to start somewhere.


 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
Well, now I have landed on a 58gb 800p.

Time to start actually looking into what is required, and what hardware I will want to upgrade on my box. I only have an i3 and 20GB of RAM. I probably could get away with giving FreeNAS 12GB, but I would feel more comfortable with a bit more, I think, so possibly another 8GB stick will be required.

But that i3, that is the real question. If I just plan to let FreeNAS be a file server, with a single jail (Syncthing) so it can have direct mounted access to my pool (is there a way to mount a FreeNAS pool dataset to an ESXi guest?), nothing within FreeNAS will be very CPU intensive. Could I get away with giving it a single core? Or is that too little headroom?

If I were to upgrade to anything it would likely just be an E3-1225 v5... would that suffice for a few Linux guests doing very simple things? Or better yet, might the i3 be enough?


 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
You can mount FreeNAS shares in clients the same as you would normally, except they have internal networking at +10gbit speed.

Running FreeNAS at 10gbit speeds can actually use a fair chunk of cpu. If you want to saturate it.

The alternative is to mount a FreeNAS share to the ESXi host and then have virtual disks which appear as hard disks to the guests. This requires a slog for performance.
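On the "requires a slog" point: an NFS datastore (and iSCSI, if you force sync=always) issues synchronous writes, which ZFS must commit to stable storage before acknowledging, and that is what the SLOG absorbs. A crude, untested way to feel the difference on any POSIX box is to time O_SYNC writes against buffered ones; the path and sizes below are placeholders:

# Crude illustration of why sync writes (what an NFS/iSCSI datastore issues)
# are so much slower than buffered writes without a fast log device.
# The path and sizes are placeholders; point it at the pool you care about.
import os
import time

PATH = "/mnt/tank/sync_test.bin"   # hypothetical dataset path
BLOCK = b"\0" * 4096
COUNT = 1000

def timed_write(extra_flags=0):
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | extra_flags, 0o644)
    start = time.monotonic()
    for _ in range(COUNT):
        os.write(fd, BLOCK)
    os.close(fd)
    return time.monotonic() - start

buffered = timed_write()
synced = timed_write(os.O_SYNC)    # each write waits for stable storage
os.remove(PATH)
print(f"buffered: {buffered:.2f}s   O_SYNC: {synced:.2f}s")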
 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
You can mount FreeNAS shares in clients the same as you would normally, except they have internal networking at +10gbit speed.

Running FreeNAS at 10gbit speeds can actually use a fair chunk of cpu. If you want to saturate it.

The alternative is to mount a FreeNAS share to the ESXi host and then have virtual disks which appear as hard disks to the guests. This requires a slog for performance.

I am really doing nothing super involved at all, so a crazy 900p slog is way overkill I’m sure. That said, do you think an 800p will suffice? I am not even 100% sure I will be mounting to ESXi, but if nothing else it would be nice to have a bit of a performance boost and a SLOG may be just what I am after.

Also, for whatever mystical reason, I am having a terrible time finding the exact same RAM I already have, and I really wonder if the 20GB I have will suffice. But I suppose that, as well as the CPU, can be determined once I actually implement this; hardware upgrades are pretty easy in the future if needed.

I had planned to keep Syncthing in a FreeNAS jail so it could have direct mounted access to my photo library directory, but I am now starting to think running it in Ubuntu would be better. And I think this dictates the idea of passing a dataset to ESXi so it can be used as a virtual disk for Syncthing, vs letting Ubuntu SMB or NFS into the share, even at 10Gb. My photo library is ~1TB, with ~100,000 pictures and over 100,000 Lightroom database files. Having it sync over a network adapter, I think, would be a huge amount of overhead. Am I correct in this thought? And if so, passing that dataset to ESXi so it can be used in Ubuntu as a mounted volume would be a more proper way to do this, correct? Thus the need for a SLOG.

Also, thanks for all your help.



LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
@Stux I have read through your thread, and checked out most of the associated links that you and others provided, so I have tried to read up on as much as I can.

I am still confused about a few things.

1) What is the difference between mounting an SMB/NFS share in a guest vs sending it back to ESXi to be given to a guest as a virtual disk? If I understand correctly, ESXi is still mounting that as NFS, but then providing it to a guest as a virtual disk. What is the difference? All I can figure is the guest sees it as a normal hard drive and thus uses its virtual SATA controller to read and write data, which should provide greater speed and IOPS. But then it hits ESXi > FreeNAS as NFS, so how is it really different? Does that not just become the bottleneck?

2) Can you mount the same datastore as a virtual disk to multiple guest VMs? Or do they have a 1:1 relationship, as in once a datastore is given as a virtual disk to one guest, it can no longer be given to others?

3) Can other guests access a datastore that is given to a guest as a virtual disk via SMB? I want to give Ubuntu a virtual disk for Syncthing to increase its IOPS performance (assuming that is the best way), but my Windows gaming PC/photo editing machine will need SMB access to that data.

4) Should an Optane 800p work for this use case? This system really is just a homelab; my most intensive thing will be Syncthing, a little Plex action (which is mostly read, not write), and maybe a few other cool things down the road as I learn more about what is possible. I am absolutely not worried about the 800p’s write endurance of ~350TB. My FreeNAS pool is only 10x4TB and not much ever gets deleted; it just slowly fills up until one day I’ll have to think about larger drives or another vdev.

5) I noticed you gave your 16GB system 64GB of L2ARC, which I always thought people said don’t even bother with until you have an “abundant” amount of RAM. If the 800p is enough, and I am looking at the 58GB model, would it make sense to take the same approach? 20GB to SLOG, a few GB to swap, and give the rest to L2ARC?

6) Proxmox..?



kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
I’m going to go with yes. I don’t have a particular idea in mind at the moment, but I’d rather build it with the capability from the start.

Just to clarify, this would be for giving other guests mass storage on the FreeNAS pool? Or giving ESXi storage on the pool to install guests on?

If I want to give other guests mass storage as previously stated, that has to be passed back through ESXi?

Like I said, I’m hugely a noob with this, but everyone has to start somewhere.
Don't overthink it. Aside from the small ESXi datastore used to boot your FreeNAS, think of FreeNAS as a separate storage server. If you want to provide storage to ESXi to house VMs, use iSCSI. If you want a file-level share, set up SMB/FTP/AFP.
I don't remember if Stux covers this, but you should be sure to use VMXNET3 network cards for FreeNAS. You also need to do some reading on iSCSI, multipathing, and iSCSI VMkernel port binding. I know it sounds like a lot, but it's not bad once you do it 50 times.

1) What is the difference between mounting an SMB/NFS share in a guest vs sending it back to ESXi to be given to a guest as a virtual disk? If I understand correctly, ESXi is still mounting that as NFS, but then providing it to a guest as a virtual disk. What is the difference? All I can figure is the guest sees it as a normal hard drive and thus uses its virtual SATA controller to read and write data, which should provide greater speed and IOPS. But then it hits ESXi > FreeNAS as NFS, so how is it really different? Does that not just become the bottleneck?
"mounting an SMB/NFS share in a guest vs sending it back to ESXi to be given to a guest as a virtual disk" That's about it. It all depends on where your data is and how to best consume it for a given application. Don't overthink it; it's no different than having all separate servers (kinda-sorta-not really).
"virtual SATA controller" most VM versions default to SCSI of some sort. For your boot drives leave the defaults; for other drives, add a pvscsi controller and connect your VMDK to that. Don't forget VMware Tools, or the disk may not show up, as Tools includes the drivers.
2) Can you mount the same datastore as a virtual disk to multiple guest VMs? Or do they have a 1:1 relationship, as in once a datastore is given as a virtual disk to one guest, it can no longer be given to others?
You don't mount the datastore to a VM. The datastore holds all of your VMs and their files, including the VMDKs (the virtual disks). ESXi mounts the datastore via NFS (don't use this, please), iSCSI, Fibre Channel, direct-attached disk/array, FCoE, etc...
Put another way: VMs are files, and files get saved to a datastore.
3) Can other guests access a datastore that is given to a guest as a virtual disk via SMB? I want to give Ubuntu a virtual disk for Syncthing to increase its IOPS performance (assuming that is the best way), but my Windows gaming PC/photo editing machine will need SMB access to that data.
See "2)". Also, SMB and datastores do not have anything to do with each other. A datastore is just a filesystem where VMs get saved. So if I make a Windows file server VM with three disks, one for boot, one for the finance files (F:) and one for engineering files (E:), I can use SMB in Windows to share those drives and the files on them. But nobody outside of ESXi (or FreeNAS if using NFS) has any clue that those drives (C:, F:, E:) are just VMDK files.
"I want to give Ubuntu a virtual disk for Syncthing to increase its IOPS performance (assuming that is the best way), but my Windows gaming PC/photo editing machine will need SMB access to that data."
In this case you can give Ubuntu a hard drive and share the folder (from within Ubuntu) with SMB, or you could have an SMB share on FreeNAS and suffer the small performance hit. The biggest factor in IOPS will be your zpool anyway. Again, how would you do this with two/three physical computers? Ubuntu server, Windows desktop, FreeNAS server.
4) Should an Optane 800p work for this use case? This system really is just a homelab; my most intensive thing will be Syncthing, a little Plex action (which is mostly read, not write), and maybe a few other cool things down the road as I learn more about what is possible. I am absolutely not worried about the 800p’s write endurance of ~350TB. My FreeNAS pool is only 10x4TB and not much ever gets deleted; it just slowly fills up until one day I’ll have to think about larger drives or another vdev.
What kind of VMs will you be using? How many users? I can't recommend a specific SLOG drive, but you don't need to go nuts either.
5) I noticed you gave your 16GB system 64GB of L2ARC, which I always thought people said don’t even bother with until you have an “abundant” amount of RAM. If the 800p is enough, and I am looking at the 58GB model, would it make sense to take the same approach? 20GB to SLOG, a few GB to swap, and give the rest to L2ARC?
As for L2ARC and RAM, you generally won't get much out of it unless you're running a fair number (5+) of VMs with a consistent workload. My guideline here is to add RAM until it's unfeasible. Then look into your working set size and do some math to find the smallest possible L2ARC to suit your needs, as the L2ARC uses ARC memory (RAM) to map the L2ARC and you want to keep as much in RAM as you can. As for sharing the drive, the SLOG will never be more than a few GB even on EXTREMELY fast and busy servers. Many people go with larger drives simply because they are faster. Also, it's considered a bad idea to share a device for SLOG and L2ARC, but you are free to experiment (with extreme caution). You already have swap: a small part of each disk in your pool is dedicated to swap. If you're using swap, get more RAM.
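The back-of-the-envelope version of that sizing math, using an assumed ~70 bytes of in-RAM header per record cached on the L2ARC (the real figure varies by ZFS version and record size, so treat it as illustrative only):

# Rough L2ARC RAM-cost estimate: every record cached on the L2ARC device
# needs a header kept in RAM. ~70 bytes/record is a commonly quoted rough
# figure; the real cost depends on ZFS version and record size.
l2arc_bytes = 58 * 10**9   # e.g. the 58GB Optane 800p discussed above
avg_record = 16 * 1024     # assumed average cached record size
header_bytes = 70          # assumed per-record header cost in RAM

records = l2arc_bytes // avg_record
ram_cost_mib = records * header_bytes / 2**20
print(f"~{records:,} records -> ~{ram_cost_mib:.0f} MiB of RAM just to index the L2ARC")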
6) Proxmox..?
That came out of left field! I'm not an expert here, but I am willing to bet that ESXi has better documentation. Just keep your version in mind when searching. Also be mindful of the fact that you are running a standalone host and not vSphere with vCenter.
 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
Don't overthink it. Aside from the small ESXi datastore used to boot your FreeNAS, think of FreeNAS as a separate storage server. If you want to provide storage to ESXi to house VMs, use iSCSI. If you want a file-level share, set up SMB/FTP/AFP.
I don't remember if Stux covers this, but you should be sure to use VMXNET3 network cards for FreeNAS. You also need to do some reading on iSCSI, multipathing, and iSCSI VMkernel port binding. I know it sounds like a lot, but it's not bad once you do it 50 times.

Ok, maybe my terminology is wrong, but this is helpful. Just to clarify then, I plan to boot ESXi off of a USB stick (maybe an SSD, I’m unsure, I have a plethora of each...). This leaves me with 2 840 Evo 120s that I plan to use for my VM boot volume. So in this situation I have 2 options, I believe: create a vdev in FreeNAS with those 2 840 Evos and pass that back to ESXi via iSCSI, or let them be standalone outside of FreeNAS (does ESXi support any sort of RAID setup?). I assume the main advantage of this method is ease of setup, and the main downside is the VMs have no interaction with ZFS and thus don’t benefit from ZFS’s try-hard attitude of “I will do my best in every situation to keep your data safe”. I also wouldn’t be able to boot FreeNAS off the 840 Evos if I don’t go the route of leaving the 840s outside of FreeNAS, for obvious reasons.

"mounting a SMB/NFS share in a guest vs sending it back to ESXi to be given to a guest as a virtual disc" That's about it. It all depends on where your data is and how to best consume it for a given application. Don't overthink it, it's no different that having all separate servers (kinda-sorta-not really).
"virtual SATA controller" most VM versions preset to SCSI of some sort. For you boot drives leave the defaults, for other drives, add a pvscsi controller and connect your vmdk to that. Don't forget VMware tools or the disk may not show up as it includes the drivers.

Ok, maybe once again I misunderstood some things. The only application I’m currently worried about IOPS for is Syncthing, due to the massive amount of files I am trying to sync (I say massive, and some sysadmin laughs, as my “massive” is only a couple hundred thousand). I plan to migrate Syncthing out of a FreeNAS jail and into an Ubuntu VM which will have a 10Gb virtual LAN; is it safe to assume I am overthinking this and should just let it do the sync over the virtual network? I did try this same approach once before in a Windows VM in FreeNAS under bhyve and it was abominably slow, but this could be a function of normal 1Gb Ethernet, plus no SLOG, plus bhyve not being phenomenal? It was slow, and the CPU usage was MASSIVE.

You don't mount the datastore to a VM. The datastore holds all of your VMs and their files, including the VMDKs (the virtual disks). ESXi mounts the datastore via NFS (don't use this, please), iSCSI, Fibre Channel, direct-attached disk/array, FCoE, etc...
Put another way: VMs are files, and files get saved to a datastore.


See "2)" also SMB and datastores do not have anything to do with each other. A datastore is just a filesystem where VMs get saved. So if I make a windows file server VM with three disks, one for boot, one for the finance files (F:) and one for engineering files (E:), I can use SMB in windows to share those drives and the files on them. But nobody outside of ESXi (or FreeNAS if using NFS) has any clue that those drives (C:,F:,E:) are just vmdk files.
"I want to give Ubuntu a virtual disc for syncthing to increase its IOPS performance (assuming that is the best way) but my windows gaming PC/photo editing machine will need SMB access to that data."
In this case you can give ubuntu a hard drive and share the folder (from within Ubuntu) with SMB or you could have a SMB share on FreeNAS suffer the small performance hit. The biggest factor in IOPS will be your zpool anyway. Again, how whould you do this with two/Three physical computers? Ubuntu server, Windows desktop, FreeNAS server.

This was my confusion, terminology. I think this has set me straight.

What kind of VMs will you be using? How many users? I can't recommend a specific SLOG drive, but you don't need to go nuts either.

As for L2ARC and RAM, you generally won't get much out of it unless you're running a fair number (5+) of VMs with a consistent workload. My guideline here is to add RAM until it's unfeasible. Then look into your working set size and do some math to find the smallest possible L2ARC to suit your needs, as the L2ARC uses ARC memory (RAM) to map the L2ARC and you want to keep as much in RAM as you can. As for sharing the drive, the SLOG will never be more than a few GB even on EXTREMELY fast and busy servers. Many people go with larger drives simply because they are faster. Also, it's considered a bad idea to share a device for SLOG and L2ARC, but you are free to experiment (with extreme caution). You already have swap: a small part of each disk in your pool is dedicated to swap. If you're using swap, get more RAM.

That came out of left field! I'm not an expert here, but I am willing to bet that ESXi has better documentation. Just keep your version in mind when searching. Also be mindful of the fact that you are running a standalone host and not vSphere with vCenter.

Noted, and thanks. Maybe I won’t worry so much about L2ARC. I wasn’t worried about swap, and I know I shouldn’t really need it, but I figure if anything does go to swap, better to have it on an M.2 than on the RAID array, stealing IOPS from the array.


 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
This leaves me with 2 840 Evo 120s that I plan to use for my VM boot volume.
They would be formatted with VMFS as a datastore to hold the FreeNAS VM.
create a vdev in FreeNAS with those 2 840 Evos and pass that back to ESXi via iSCSI
Then you run into a "what came first, the chicken or the egg" situation. Also remember: iSCSI you connect to over the network, and HBAs get passed through. Let ESXi have the SSDs (or at least one) to use as a datastore for FreeNAS.
or let them be standalone outside of FreeNAS (does ESXi support any sort of RAID setup?)
Nope. But if you have a RAID card you can use that.
main downside is the VMs have no interaction with ZFS and thus don’t benefit from ZFS’s try-hard attitude of “I will do my best in every situation to keep your data safe”
They still fully benefit. At the (near) bottom of your storage stack you still have ZFS doing all of its magic.
The only application I’m currently worried about IOPS for is Syncthing, due to the massive amount of files I am trying to sync...but this could be a function of normal 1Gb Ethernet
Yeah. Everything inside ESXi will essentially be 10GbE. To the outside, if all you have is 1GbE, then that's it... 120MB/s best case.
Also, yeah... where I work we measure our data footprint in petabytes.
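For reference, the arithmetic behind that ~120MB/s ceiling; the overhead percentage is just an assumption:

# Where the "~120 MB/s best case" for 1GbE comes from (rough, overhead varies).
link_bits_per_s = 1_000_000_000
raw_mb_s = link_bits_per_s / 8 / 1_000_000   # 125 MB/s of raw line rate
overhead = 0.06                              # assumed TCP/IP + SMB/NFS framing overhead
print(f"raw: {raw_mb_s:.0f} MB/s, usable: ~{raw_mb_s * (1 - overhead):.0f} MB/s")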
 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
They would be formatted with VMFS as a datastore to hold the FreeNAS VM.

Then you run into a "what came first, the chicken or the egg" situation. Also remember: iSCSI you connect to over the network, and HBAs get passed through. Let ESXi have the SSDs (or at least one) to use as a datastore for FreeNAS.

Nope. But if you have a RAID card you can use that.

They still fully benefit. At the (near) bottom of your storage stack you still have ZFS doing all of its magic.

Yeah. Everything inside ESXi will essentially be 10GbE. To the outside, if all you have is 1GbE, then that's it... 120MB/s best case.
Also, yeah... where I work we measure our data footprint in petabytes.


Ok, so let me give you the rundown of what hardware I have available, and with this information I think I now have a plan, maybe. Lol.

So, as far as SSD storage goes, I have my current Kingston SSDNow 90GB, which is my FreeNAS boot drive, 2 840 Evo 120s sitting on my desk doing nothing, and some flash drives I don’t really use.

Plan: ESXi will boot from a flash drive, FreeNAS will boot from the SSDNow as it currently does (it won’t just be plug and play, I get that; I’ll have to install FreeNAS through ESXi when the time comes, and hopefully just load my backup and it should work?), and the 2 840 Evos will be a mirrored vdev in FreeNAS, passed to ESXi for other guests to boot from.

One guest will for sure be Ubuntu, for automation/Syncthing/OpenVPN/Plex. Ubuntu will boot from the 840 Evo vdev > iSCSI > ESXi, and Ubuntu will have my FreeNAS /tank/photo directory mounted via SMB so Syncthing can do its thing, and /tank/media for Plex.

Since this necessitates iSCSI, I 100% need a SLOG, and from my research it looks like the 800p should be sufficient for my needs. I 100% need more RAM as well, and very possibly, almost certainly really, need a Xeon; I am currently looking at an E3-1230 v5, 4 cores / 8 threads @ 3.4GHz.

I also need a SAS expander, since my HBA only has two SAS ports. Would an Intel RES2SV240 work well for this? Both SAS ports from the HBA to the expander, then the remaining 4 SAS ports on the expander to 4x SATA each, for 16 total drives available to FreeNAS via PCIe passthrough from ESXi to FreeNAS: 10x 4TB plus the 2 120GB 840 Evos for ESXi VM boot storage.

Sound about right?

