Check my build... please.

Status
Not open for further replies.

wafliron

Dabbler
Joined
Aug 25, 2016
Messages
13
Hi all,

Firstly, this post isn't strictly about a FreeNAS build, but I hope that will be forgiven - FreeNAS is a major part of it, and the knowledge in these forums has already been an invaluable part of my build research. I'm hoping I can tap into that knowledge just a little further :)

So - the time has come for me to replace my aging (coming up on 6 years old) AiO home server (which runs various VMs via ESXi, including a virtualised copy of NexentaStor to provide "loopback" NFS storage to ESXi, plus NAS services). I've spent the better part of a month (on-and-off) doing my research and putting together a build list for the new box. I'm hoping the experts here don't mind casting their eye over it, and letting me know if I've made any dumb decisions or wrong turns. There's also a couple of specific questions along the way.

Firstly, context - what is this home server going to be doing? Again, it will be an AiO machine running ESXi on the bare metal. On top of that I plan to run a virtualised copy of FreeNAS, to provide "loopback" storage to ESXi for the other VMs via NFS, plus providing NAS services to clients on the network via CIFS. The NAS will serve up all the normal data you'd expect from a home NAS - documents, photos, music, movies, TV, etc.

Apart from FreeNAS, the machine will also run a number of other VMs - a couple of light-load Windows VMs for things like Active Directory (and associated services), WSUS, print server and centralised AV (Trend WFB), a router/VPN appliance, plus a few "experimental" VMs from time-to-time when I want to... experiment.

I'll also be using this box to torrent, handle cloud backups (CrashPlan) and run Plex (wanting the ability to transcode two streams at once - including transcoding of very-high-def video (e.g. 4K h264/h265) in the future). On my current AiO these services (Plex, CrashPlan, torrents) are run inside a Windows VM, but with the new box I'd assume it makes more sense to run these inside the FreeNAS VM instead.

Finally, future-proofing - I want this server to last for at least 5 years, and while it is starting life with six HDDs, it needs to be able to handle an additional six in the future (probably 1-2 years away).

With that all out of the way, here's the build list:

CPU / Motherboard / RAM:

Either:

SuperMicro X11SSL-CF
Intel Xeon E3-1240 v5
32GB (2 x 16GB) DDR4-2400 DIMMs from the compatible memory list for this MB


Or:

SuperMicro X10SRH-CF
Intel Xeon E5-2620 v4
32GB (4 x 8GB) DDR4-2400 DIMMs from the compatible memory list for this MB


Either way, FreeNAS would get 16GB of RAM, and the other 16GB would be for the other VMs. I'm deliberately not filling all the DIMM slots with either option, to allow easy future expansion.

The choice between these two options is one I'm struggling with. Do I need the extra horsepower of the E5 now? Probably not. Will I appreciate it in the future? Maybe, especially with some unknowns around the demands of transcoding very-high-def video.

The price difference to the E5 is about $300 AUD (~$225 USD) - about 9% extra on top of the cost of the E3 option. I don't think there's any other appreciable differences here other than performance and price - I don't need the extra features (additional PCIe slots, etc) of the X10 board, maybe there's an increase in idle power with the E5 but I don't imagine it'll be significant, etc.

I'd love to hear some opinions from others on this topic. Would also appreciate any suggestions for a decent HSF (for either option) - this is the one component I haven't had a chance to research yet.

Storage:

VM Storage


1 x Samsung EVO 850 500GB
1 x Crucial MX200 500GB


Through partitioning, I'll use ~16GB of each of these drives to provide FreeNAS with a mirrored boot pool. The remainder of each drive will form a second mirrored pool to provide "loopback" storage to ESXi for the rest of my VMs' virtual disks.

For both pools I'll definitely enable compression, and probably dedupe as well. Obviously de-dupe comes with a memory overhead, but there's plenty of RAM available and we're not talking about de-duping that much data - and I'd like to maximise what I can fit on the VM virtual disk pool.
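If it helps anyone weigh in, my rough plan for checking whether de-dupe is actually worth it is along these lines once there's some real data on the pool (a minimal sketch - the pool/dataset names "vmpool" and "vmpool/vms" are just placeholders, and in practice FreeNAS sets most of this through the GUI):

  # Simulate the dedup table for the pool without actually enabling de-dupe
  zdb -S vmpool

  # If the projected ratio looks worthwhile, enable it per-dataset alongside compression
  zfs set compression=lz4 vmpool/vms
  zfs set dedup=on vmpool/vms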

I'm proposing two different SSDs to try and eliminate any risk of firmware bugs / manufacturing faults / etc causing the simultaneous failure of both SSDs. The particular models were chosen based on representing good value, and having similar performance characteristics. Not sure if I'm being too paranoid here though, and should just get two of the same SSD...

I don't plan on overprovisioning these drives given my write workload isn't that heavy - it's a home server, after all. Interested to hear if anyone disagrees, though. I also haven't been able to find much guidance about an appropriate ZFS recordsize for ESXi-based VM virtual disk storage over NFS. What little I've found suggests decreasing from default down to 16KB is possibly a good idea, but it's far from conclusive. Again, interested in any thoughts on this.
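For reference, if I do drop the recordsize, I'm assuming it's just the usual property change (dataset name is a placeholder again, and as I understand it the new recordsize only applies to blocks written after the change):

  # Smaller recordsize on the dataset backing the NFS datastore for VM disks
  zfs set recordsize=16K vmpool/vms
  zfs get recordsize,compression vmpool/vms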

Mass Data Storage

3 x WD Red 4TB
3 x Seagate NAS 4TB


These six drives will be combined into one RAIDZ2 vdev to provide a pool for mass data storage / NAS use. I spent a while agonising over striped mirrors vs RAIDZ2, but ultimately went for the latter - RAIDZ2 is more space-efficient, for the use-case I don't need the additional IOPs of mirrors and I'm OK with expanding in larger drive groups (rather than two at a time). Also, importantly, given this is a home server and I won't be holding spares - and may take several days to a week to source replacements for failed drives - RAIDZ2 seemed safer, despite the longer resilver times.

Again, mixing drive vendors here to reduce the chance of simultaneous failures. I would've preferred three different vendors, but I don't think there's a third vendor who makes 5,xxx rpm NAS drives? Assuming that's correct, will try and get drives from different batches as well.

For this pool, no de-dupe (wouldn't make any sense). I don't think compression makes much sense either given most of it will be filled with incompressible data (music, movies, TV), but it sounds like the general best practice is just to leave it on anyway?
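For what it's worth, the pool layout I have in mind boils down to something like this (a sketch only - the device names da0-da5 and pool name "tank" are placeholders, and FreeNAS itself would build the pool through the GUI and reference disks by gptid):

  # Six-drive RAIDZ2 vdev for the mass-storage pool
  zpool create tank raidz2 da0 da1 da2 da3 da4 da5

  # Leave lz4 compression on (cheap even on incompressible media) and keep de-dupe off
  zfs set compression=lz4 tank
  zfs set dedup=off tank
  zpool status tank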

SLOG / L2ARC

1 x Intel DC S3700 100GB (used from eBay)

I'll partition this drive to provide SLOG for each of my pools. I've considered carving off a small amount (20GB?) as L2ARC for the mass data storage pool as well, in case it does end up with some hot data from time-to-time, but I'm not sure if that really makes sense or not. Interested to hear any thoughts on the matter.
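Roughly what I'm picturing, for the sake of discussion (device, label and pool names are placeholders, matching the sketches above - and, as the replies below point out, the recommended approach is a dedicated SLOG device per pool rather than sharing one SSD):

  # Carve the S3700 ("da6" is a placeholder) into two small partitions
  gpart create -s gpt da6
  gpart add -t freebsd-zfs -s 16G -l slog-vm da6
  gpart add -t freebsd-zfs -s 16G -l slog-tank da6

  # Attach one partition to each pool as a log vdev
  zpool add vmpool log gpt/slog-vm
  zpool add tank log gpt/slog-tank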

HBAs / Misc

1 x on-board 8-port LSI SAS controller
1 x LSI SAS2008-based PCIe card


Each of my motherboard options has an 8-port LSI SAS controller on-board. I'll also add a SAS2008-based PCIe card (whatever type I can find cheapest on eBay - that isn't fake!) and a quality SFF-8087-to-4x-SAS breakout cable. Both will be cross-flashed to IT mode, of course.
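(For my own notes, the cross-flash is roughly the following - firmware file names vary by card, this is from memory, and I'll be following the forum flashing guides rather than this sketch:)

  sas2flash -listall                       # confirm the card is detected; note its SAS address
  sas2flash -o -e 6                        # erase the existing (IR) firmware
  sas2flash -o -f 2118it.bin               # flash the IT-mode firmware for the card
  sas2flash -o -sasadd 500605bXXXXXXXXX    # re-apply the SAS address noted earlier, if required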

It's a little annoying that I'm a single drive over what the on-board controller could handle on its own - but not the end of the world as I'll definitely be adding more disks to this server in the future.

I'll also be buying a small USB thumb drive to boot ESXi off.

Case & PSU

Fractal Design Define XL R2 case

Seems well regarded, reasonable price, can handle 12 x HDDs + a couple of SSDs, and the large physical size isn't an issue for me. I'll also add an additional 140mm fan (and reposition some of the bundled ones) so I have 2 x front 140mm, 1 x back 140mm, and 1 x top 140mm.

Seasonic G-550 PSU

On the PSU front, I've done the math according to jgreco's guide (https://forums.freenas.org/index.php?threads/proper-power-supply-sizing-guidance.38811/), and I estimate my max potential power usage will be around 420W initially. Once I add another six HDDs, that figure goes up to 590W. Obviously the latter figure is a little above the rated 550W of this PSU, but I've been conservative when estimating max potential power usage for some of my components.

Estimated idle wattage is around 100W / 120W (6 HDDs / 12 HDDs), which is pretty close to the line for the "20% of PSU max watts" guideline (110W for a 550W unit) to ensure max efficiency, but I think it should be OK? And I'm not sure what else I can do to better "meet" this guideline, as I think a 450W / 500W PSU would likely be undersized, especially once I add another six HDDs.

The only other issue with this PSU is insufficient SATA power connectors, but that seems an easy fix with a couple of quality MOLEX-to-4xSATA adapters.


And that, folks, is just about that. Not sure how many will have made it this far - if you have, thanks for reading. And thanks in advance for any suggestions / feedback / answers.
 
Last edited:

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Hello! Welcome to the forum!
Storage:

VM Storage


1 x Samsung EVO 850 500GB
1 x Crucial MX200 500GB


Through partitioning, I'll use ~16GB of each of these drives to provide FreeNAS with a mirrored boot pool. The remainder of each drive will form a second mirrored pool to provide "loopback" storage to ESXi for the rest of my VMs' virtual disks.

For both pools I'll definitely enable compression, and probably dedupe as well. Obviously de-dupe comes with a memory overhead, but there's plenty of RAM available and we're not talking about de-duping that much data - and I'd like to maximise what I can fit on the VM virtual disk pool.

I'm proposing two different SSDs to try and eliminate any risk of firmware bugs / manufacturing faults / etc causing the simultaneous failure of both SSDs. The particular models were chosen based on representing good value, and having similar performance characteristics. Not sure if I'm being too paranoid here though, and should just get two of the same SSD...

I don't plan on overprovisioning these drives given my write workload isn't that heavy - it's a home server, after all. Interested to hear if anyone disagrees, though. I also haven't been able to find much guidance about an appropriate ZFS recordsize for ESXi-based VM virtual disk storage over NFS. What little I've found suggests decreasing from default down to 16KB is possibly a good idea, but it's far from conclusive. Again, interested in any thoughts on this.
You can't passthrough (or "loopback") the drives FreeNAS boots from... FreeNAS has to boot from an ESXi datastore, controlled directly by ESXi. If you want to run all of your VMs from an SSD-based datastore, that's fine: you could set up a RAID1/mirrored datastore using one of your SAS controllers and store all the VMs there. But you can't do that and pass the same controller through to FreeNAS via VT-d, nor can you partition devices and use one partition as an ESXi datastore and pass the other partition through to FreeNAS.

The SSDs you've chosen are 'consumer' class devices and would probably work fine... but if you really want performance, the Intel DC S3500 devices are just about bulletproof, have withstood the test of time, and would serve well as an ESXi datastore.

Also, 16GB RAM isn't 'plenty' for deduplication purposes, and it's doubtful you need deduplication anyway. :)
 
Last edited:
CookiesLikeWhoa

Joined
Mar 22, 2016
Messages
217
To add to the above:

You may want to swap the processor for an E5-1650 v4 if you decide to go the X10 route. The extra GHz would be very beneficial while still retaining the ability to get 256GB+ of RAM out of it.

As far as I know you can't use the same device for SLOG and an L2ARC, just as a heads up.

For deduplication, I believe the recommendation is 5GB of RAM for every TB of storage - at least 80GB of RAM for what you were aiming for.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
CPU / Motherboard / RAM:

Either:

SuperMicro X11SSL-CF
Intel Xeon E3-1240 v5
32GB (2 x 16GB) DDR4-2400 DIMMs from the compatible memory list for this MB


Or:

SuperMicro X10SRH-CF
Intel Xeon E5-2620 v4
32GB (4 x 8GB) DDR4-2400 DIMMs from the compatible memory list for this MB


Either way, FreeNAS would get 16GB of RAM, and the other 16GB would be for the other VMs. I'm deliberately not filling all the DIMM slots with either option, to allow easy future expansion.

The choice between these two options is one I'm struggling with. Do I need the extra horsepower of the E5 now? Probably not. Will I appreciate it in the future? Maybe, especially with some unknowns around the demands of transcoding very-high-def video.

The price difference to the E5 is about $300 AUD (~$225 USD) - about 9% extra on top of the cost of the E3 option. I don't think there's any other appreciable differences here other than performance and price - I don't need the extra features (additional PCIe slots, etc) of the X10 board, maybe there's an increase in idle power with the E5 but I don't imagine it'll be significant, etc.

I'd love to hear some opinions from others on this topic. Would also appreciate any suggestions for a decent HSF (for either option) - this is the one component I haven't had a chance to research yet.
Since you are running VMs and not just FreeNAS alone, you might be happier with the option providing the most RAM and the more powerful CPU. But honestly, either board will probably be fantastic!

Storage:

Mass Data Storage

3 x WD Red 4TB
3 x Seagate NAS 4TB


These six drives will be combined into one RAIDZ2 vdev to provide a pool for mass data storage / NAS use. I spent a while agonising over striped mirrors vs RAIDZ2, but ultimately went for the latter - RAIDZ2 is more space-efficient, for the use-case I don't need the additional IOPs of mirrors and I'm OK with expanding in larger drive groups (rather than two at a time). Also, importantly, given this is a home server and I won't be holding spares - and may take several days to a week to source replacements for failed drives - RAIDZ2 seemed safer, despite the longer resilver times.

Again, mixing drive vendors here to reduce the chance of simultaneous failures. I would've preferred three different vendors, but I don't think there's a third vendor who makes 5,xxx rpm NAS drives? Assuming that's correct, will try and get drives from different batches as well.

For this pool, no de-dupe (wouldn't make any sense). I don't think compression makes much sense either given most of it will be filled with incompressible data (music, movies, TV), but it sounds like the general best practice is just to leave it on anyway?
I humbly suggest HGST drives in lieu of Seagates; I use both the UltraStar and Deskstar NAS models.

SLOG / L2ARC

1 x Intel DC S3700 100GB (used from eBay)

I'll partition this drive to provide SLOG for each of my pools. I've considered carving off a small amount (20GB?) as L2ARC for the mass data storage pool as well, in case it does end up with some hot data from time-to-time, but I'm not sure if that really makes sense or not. Interested to hear any thoughts on the matter.
The S3700 is a good choice for a SLOG device, but it must be used solely for that purpose; you can't partition it and use one partition for SLOG and another for L2ARC; see this thread for details. Also, you'll need a separate, dedicated SLOG device for each pool (but more on that later). And chances are you don't need an L2ARC at all; you'd be better served to max out the RAM on your motherboard before considering L2ARC.

Article on SLOG devices by iXsystems:
https://www.ixsystems.com/blog/o-slog-not-slog-best-configure-zfs-intent-log/

HBAs / Misc

1 x on-board 8-port LSI SAS controller
1 x LSI SAS2008-based PCIe card


Each of my motherboard options has an 8-port LSI SAS controller on-board. I'll also add a SAS2008-based PCIe card (whatever type I can find cheapest on eBay - that isn't fake!) and a quality SFF-8087-to-4x-SAS breakout cable. Both will be cross-flashed to IT mode, of course.

It's a little annoying that I'm a single drive over what the on-board controller could handle on its own - but not the end of the world as I'll definitely be adding more disks to this server in the future.
You may not need the add-on card right away.

Consider this suggestion:
  • Boot ESXi from a USB flash drive (as you indicate below)
  • Install two small SSDs on the motherboard's SATA ports; each as a separate ESXi datastore. Install FreeNAS on these with the 'mirrored' installation option. (I recommend a pair of 80GB Intel DC S3500 SSDs)
  • Pass the motherboard LSI controller through to FreeNAS via VT-d. Install 7 x 4TB HDDs in a RAIDZ2 pool, with an Intel DC S3700 SLOG device installed in the 8th port. You could instead use 3 mirrored pairs as you described and either mirror the SLOG device or simply have one on an unused port on your SAS controller (an idea I loathe!)
  • When the time comes to add more drives, install an LSI HBA, pass it through to FreeNAS via VT-d, and plug 'em in!
Just a suggestion... but this is basically the layout I use, and it works very well. With this layout you only have one pool, so you only need a single SLOG device. Even though it's based on a RAIDZ2 pool, VM performance is quite acceptable, probably owing to my using a virtual storage network as described in this excellent blog post:

https://b3n.org/freenas-9-3-on-vmware-esxi-6-0-guide/
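The storage-network piece of that guide boils down to an internal-only vSwitch plus an NFS mount. If you end up scripting it rather than clicking through the vSphere client, the gist is something like this (a rough sketch - names, addresses and the export path are placeholders; the FreeNAS VM also needs a vNIC on the same port group, and the blog post walks through the GUI version):

  # Internal-only vSwitch (no physical uplink) for ESXi <-> FreeNAS NFS traffic
  esxcli network vswitch standard add --vswitch-name=vSwitch1
  esxcli network vswitch standard portgroup add --portgroup-name=Storage --vswitch-name=vSwitch1
  esxcli network ip interface add --interface-name=vmk1 --portgroup-name=Storage
  esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.55.0.1 --netmask=255.255.255.0 --type=static

  # Once FreeNAS (at 10.55.0.2, say) exports its VM dataset over NFS, mount it as a datastore
  esxcli storage nfs add --host=10.55.0.2 --share=/mnt/vmpool/vms --volume-name=freenas-vms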

I'll also be buying a small USB thumb drive to boot ESXi off.
This is what I do, using a SanDisk Cruzer Fit 16GB USB 2.0 model.

Case & PSU

Fractal Design Define XL R2 case

Seems well regarded, reasonable price, can handle 12 x HDDs + a couple of SSDs, and the large physical size isn't an issue for me. I'll also add an additional 140mm fan (and reposition some of the bundled ones) so I have 2 x front 140mm, 1 x back 140mm, and 1 x top 140mm.

Seasonic G-550 PSU

On the PSU front, I've done the math according to jgreco's guide (https://forums.freenas.org/index.php?threads/proper-power-supply-sizing-guidance.38811/), and I estimate my max potential power usage will be around 420W initially. Once I add another six HDDs, that figure goes up to 590W. Obviously the latter figure is a little above the rated 550W of this PSU, but I've been conservative when estimating max potential power usage for some of my components.

Estimated idle wattage is around 100W / 120W (6 HDDs / 12 HDDs), which is pretty close to the line for the "20% of PSU max watts" guideline (110W for a 550W unit) to ensure max efficiency, but I think it should be OK? And I'm not sure what else I can do to better "meet" this guideline, as I think a 450W / 500W PSU would likely be undersized, especially once I add another six HDDs.

The only other issue with this PSU is insufficient SATA power connectors, but that seems an easy fix with a couple of quality MOLEX-to-4xSATA adapters.
Fractal Design and Seasonic both have a good reputation. I use a Fractal Design case myself. You might also consider EVGA power supplies in addition to Seasonic; I've used this model for over a year with good results.

If you're really intending to add another 8 drives down the road, hadn't you better consider their power requirements when selecting your PSU? Otherwise you'll have to swap the PSU later.

Good luck!
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
On my current AiO these services (Plex, CrashPlan, torrents) are run inside a Windows VM, but with the new box I'd assume it makes more sense to run these inside the FreeNAS VM instead.
Aside from the other great advice, I would presume you are thinking about running these VMs in ESXi with the actual VM files residing on FreeNAS (NFS or iSCSI). Hopefully, you are not thinking about running VMs in FreeNAS which is a VM in itself. That would be like VM Inception and more than likely not too good...
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Aside from the other great advice, I would presume you are thinking about running these VMs in ESXi with the actual VM files residing on FreeNAS (NFS or iSCSI). Hopefully, you are not thinking about running VMs in FreeNAS which is a VM in itself. That would be like VM Inception and more than likely not too good...
Good point! Virtualized widgets running virtualized widgets running...

Big fleas have little fleas,
Upon their backs to bite 'em,
And little fleas have lesser fleas,
and so, ad infinitum.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Re the E5, you've selected an 8-core 2.1GHz processor, which can turbo up to 3.0GHz.

You could get a 6-core processor which runs at 3.6GHz and can turbo up to 4.0GHz... i.e. the E5-1650 v4. You're not paying the dual-processor tax, but you are paying for the high base speed.

6 x 3.6 = 21.6GHz, and 8 x 2.1 = 16.8GHz.

At minimum base speed, you get more core x GHz, which means it's faster ;) - a simplistic measure, but actually a surprisingly good metric for processors from the same family.

(btw, mine runs at 3.8GHz running 12 threads of Prime95)

Also, single-threaded and lightly-threaded performance would be significantly better on the 1650.

BUT the 1650 is more expensive. I got mine from superbiiz via ebay shipped to AU for about 850AUD.

For the system you're planning, I would go with the E5.

The 8 core E5-16xx processors are excessively expensive in my opinion.

Also, some of the Supermicro E5 X10 boards support 10 SATA ports. 6 HDDs + 3 SSDs is 9... that leaves one.

Then you add an HBA to add your next vdev.

Also, you could go for PCIe SSDs (plenty of lanes on the E5 platform), and that wouldn't use up your SATA ports...

And then maybe you would be tempted to go to 8 drives instead of six ;)

I think if you went with the Skylake system, you might outgrow it before the 5 years is up. I don't think that would happen with the E5.
 
Last edited:

wafliron

Dabbler
Joined
Aug 25, 2016
Messages
13
Firstly, thanks everyone for the input - really appreciated!

Hello! Welcome to the forum!
You can't passthrough (or "loopback") the drives FreeNAS boots from... FreeNAS has to boot from an ESXi datastore, controlled directly by ESXi. If you want to run all of your VMs from an SSD-based datastore, that's fine: you could set up a RAID1/mirrored datastore using one of your SAS controllers and store all the VMs there. But you can't do that and pass the same controller through to FreeNAS via VT-d, nor can you partition devices and use one partition as an ESXi datastore and pass the other partition through to FreeNAS.

Of course ESXi needs some local storage for FreeNAS... that was really dumb of me. Thanks for correcting! I must have some sort of mental block on this topic - IIRC I made exactly the same mistake when speccing up my current home server 6 years ago!

That said, a question, just to be sure - do FreeNAS's boot disk(s) actually have to be VMDKs stored on ESXi-local-storage? Or is it possible to just use ESXi-local-storage for the vmx (and associated misc files - but no virtual disks), pass through the SAS controllers, and have FreeNAS boot from disks attached to the SAS controllers?

Assuming the answer to the second question is no, is there any simpler / cheaper way of achieving this other than grabbing two of the smallest / cheapest SSDs I can find, connecting them to the MB's onboard SATA ports, and using them for ESXi-local-storage?

The SSDs you've chosen are 'consumer' class devices and would probably work fine... but if you really want performance, the Intel DC S3500 devices are just about bulletproof, have withstood the test of time, and would serve well as an ESXi datastore.

Understood. I'd love to invest in something like S3500s for the VM Storage Pool, but I just can't justify the cost for my use case (it's a home server, after all). New, they're something like 3x+ the price of the MX200/EVO 850, and even used they're maybe 1.5x the price (not to mention questions about whether they'll last another 5+ years on top of their existing age). Basically, I'm thinking the MX200 + EVO 850 (+ S3700 SLOG to better handle sync writes) will be a good combo of (reasonable) performance and price.

Also, 16GB RAM isn't 'plenty' for deduplication purposes, and it's doubtful you need deduplication anyway. :)

Isn't it "plenty" when I'm talking about de-duping a maximum of about 420GB of data (80% of the capacity of the VM Storage Pool)? From what I've read, even worst-case, this will require about 2.1GB of RAM (out of 16GB available). I probably do need to further evaluate what space savings I'd achieve from de-dupe, but I'd expect them to be reasonable for the VM Storage Pool, given it will contain (for example) multiple VMs running exactly the same OS.

You may want to swap the processor for an E5-1650 v4 if you decide to go the X10 route. The extra GHz would be very beneficial while still retaining the ability to get 256GB+ of RAM out of it.

Is there a specific reason you suggest this processor over the E5-2620 v4? Back-of-the-envelope, they appear to have a pretty similar amount of total processing power (assuming a multi-threaded workload) - but the E5-1650 v4 is at least $200 AUD more expensive.

As far as I know you can't use the same device for SLOG and a L2ARC, just as a heads up.

The S3700 is a good choice for a SLOG device, but it must be used solely for that purpose; you can't partition it and use one partition for SLOG and another for L2ARC; see this thread for details. Also, you'll need a separate, dedicated SLOG device for each pool (but more on that later). And chances are you don't need an L2ARC at all; you'd be better served to max out the RAM on your motherboard before considering L2ARC.

Hmm, I thought putting an SLOG and L2ARC (or multiple SLOGs) on the same SSD wasn't recommended, rather than impossible? The thread you linked to seems to suggest this (possible, but not best practice) - and I've also found resources like this one - https://clinta.github.io/FreeNAS-Multipurpose-SSD/ - which appears to have instructions for doing it.

I understand using one SSD for SLOG and L2ARC (or multiple SLOGs) isn't best practice as it creates contention, but similar to my logic around MX200 + 850 EVO, it strikes me as likely being a good combo of (reasonable) performance and price.

You may not need the add-on card right away.

...

Thanks for the suggestion. I'll give it some further thought, but at present I'm pretty fixed on using SSD storage for the VM Storage Pool - experience with my current home server (which stores VMDKs on spinners + SLOG + L2ARC) suggests I'd appreciate the extra disk performance.

Fractal Design and Seasonic both have a good reputation. I use a Fractal Design case myself. You might also consider EVGA power supplies in addition to Seasonic, I've used this model for over a year with good results.

If you're really intending to add another 8 drives down the road, hadn't you better consider their power requirements when selecting your PSU? Otherwise you'll have to swap the PSU later.

Is there a particular reason you suggest the EVGA PSU over Seasonic? Or just offering an alternative that might be sized / priced better?

Re the extra HDDs, I have considered this in my power requirements - details in my OP, but I estimate I need 420W with the initial spec (six HDDs total), and about 590W once I add another six HDDs (12 total) - both numbers have some "fat" built into them as well, hence why I think a 550W PSU should be OK.

Aside from the other great advice, I would presume you are thinking about running these VMs in ESXi with the actual VM files residing on FreeNAS (NFS or iSCSI). Hopefully, you are not thinking about running VMs in FreeNAS which is a VM in itself. That would be like VM Inception and more than likely not too good...

My understanding was that FreeNAS has plugins available for CrashPlan, Plex and torrenting, but I haven't done much research on the topic. Sounds like you're suggesting that maybe these plugins actually run in virtual containers themselves (jails?)? Anyway - agreed that virtualisation inception isn't a good idea - if that's how the plugins work, I can just continue running these services in separate Windows VMs. Was more mentioning them as part of explaining the machine's workload (regardless of where they run).

Re the E5, you've selected an 8 core 2.1Ghz processor, which can turbo up to 3ghz.

You could get a 6 core processor, which runs at 3.6Ghz and can turbo up to 4ghz... ie the E5-1650v4. You're not paying the dual-processor tax, but you are paying for the high base speed.

6*3.6 = 21.6ghz, and 8 * 2.1 = 16.8ghz.

...

Following on from the comments I made above in reply to CookiesLikeWhoa - I appreciate that the 1650 v4 has more GHz available at base clock, but given my workload should be highly parallel (multiple VMs on ESXi, with most of the significant workloads inside those VMs being multi-threaded - except perhaps Samba/CIFS?), I was thinking the GHz available under high load was probably more relevant. At low load, either processor will be fine, so I'm kinda ignoring that scenario.

Just roughing it out (and guesstimating what speed each core will run at under high load), my logic was 6 x 3.8GHz = 22.8GHz for the 1650 v4 vs 8 x 2.6GHz = 20.8GHz for the 2620 v4 - so, pretty similar, and the 2620 v4 is at least $200 AUD cheaper.

Not saying I'm definitely correct here... just explaining my logic.

Also, some of the Supermicro E5 X10 boards support 10 SATA ports. 6 HDDs + 3 SSDs is 9... that leaves one.

Then you add an HBA to add your next vdev.

Unfortunately you can't pass-through the onboard SATA controller to an ESXi VM - so the on-board ports are close to useless :-/

Also, you could go for PCIe SSDs (plenty of lanes on the E5 platform), and that wouldn't use up your SATA ports...

I did consider this, and like the "elegance" of it - but the PCIe SSDs are just too expensive to justify this route for my home-server use-case. Much cheaper just to buy the second HBA.

And then maybe you would be tempted to go to 8 drives instead of six ;)

I already was... but I forced myself to be reasonable - don't need the space now, and easy to add more later :)
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
That said, a question, just to be sure - do FreeNAS's boot disk(s) actually have to be VMDKs stored on ESXi-local-storage? Or is it possible to just use ESXi-local-storage for the vmx (and associated misc files - but no virtual disks), pass through the SAS controllers, and have FreeNAS boot from disks attached to the SAS controllers?
Yes, the FreeNAS boot images must be local. FreeNAS can't boot from images that don't exist, and the VMDK images don't exist in a FreeNAS-based datastore until after it has booted up.
Assuming the answer to the second question is no, is there any simpler / cheaper way of achieving this other than grabbing two of the smallest / cheapest SSDs I can find, connecting them to the MB's onboard SATA ports, and using them for ESXi-local-storage?
You could try to set up an ESXi datastore on USB flash drive(s), but I wouldn't recommend that because I wouldn't trust it, assuming it could be made to work at all.
Understood. I'd love to invest in something like S3500s for the VM Storage Pool, but I just can't justify the cost for my use case (it's a home server, after all). New, they're something like 3x+ the price of the MX200/EVO 850, and even used they're maybe 1.5x the price (not to mention questions about whether they'll last another 5+ years on top of their existing age). Basically, I'm thinking the MX200 + EVO 850 (+ S3700 SLOG to better handle sync writes) will be a good combo of (reasonable) performance and price.
Again, your selected SSDs will work fine. The Intel S3500s would be a step up in reliability and longevity... plus you can often find new stock on eBay for reasonable prices.
Isn't it "plenty" when I'm talking about de-duping a maximum of about 420GB of data (80% of the capacity of the VM Storage Pool)? From what I've read, even worst-case, this will require about 2.1GB of RAM (out of 16GB available). I probably do need to further evaluate what space savings I'd achieve from de-dupe, but I'd expect them to be reasonable for the VM Storage Pool, given it will contain (for example) multiple VMs running exactly the same OS.
Hmmm... if your VM images are stored on an ESXi-based datastore, deduplication doesn't come into play because FreeNAS isn't providing the datastore. If you follow my suggested design, FreeNAS boots from small SSD(s) and provides the datastore for all of your VMs, so deduplication would come into play in that case. Since you can enable deduplication on a per-dataset basis, you could certainly turn it on for your VM storage dataset, which you envision as being fairly small. Might be worthwhile giving it a try... but the deduplication documentation states that "In most cases, using compression instead of deduplication will provide a comparable storage gain with less impact on performance."
Hmm, I thought putting an SLOG and L2ARC (or multiple SLOGs) on the same SSD wasn't recommended, rather than impossible? The thread you linked to seems to suggest this (possible, but not best practice) - and I've also found resources like this one - https://clinta.github.io/FreeNAS-Multipurpose-SSD/ - which appears to have instructions for doing it.

I understand using one SSD for SLOG and L2ARC (or multiple SLOGs) isn't best practice as it creates contention, but similar to my logic around MX200 + 850 EVO, it strikes me as likely being a good combo of (reasonable) performance and price.
You're correct, it's not impossible to use the same device for both SLOG and L2ARC -- it's just a terribly bad idea. Just because you can do something, doesn't mean that you should do it... :)
Is there a particular reason you suggest the EVGA PSU over Seasonic? Or just offering an alternative that might be sized / priced better?
Yes: the EVGA build quality is excellent; they receive a large number of very positive reviews on Amazon; their price is competitive; and, as I mentioned, mine has worked flawlessly for over a year.
Re the extra HDDs, I have considered this in my power requirements - details in my OP, but I estimate I need 420W with the initial spec (six HDDs total), and about 590W once I add another six HDDs (12 total) - both numbers have some "fat" built into them as well, hence why I think a 550W PSU should be OK.
Did you read the Power Supply Sizing Guide? I assure you that 550W is inadequate for a 14-drive system. I personally would use a 1000W PSU with that number of HDDs.

Good luck!
 
CookiesLikeWhoa

Joined
Mar 22, 2016
Messages
217
Is there a specific reason you suggest this processor over the E5-2620 v4? Back-of-the-envelope, they appear to have a pretty similar amount of total processing power (assuming a multi-threaded workload) - but the E5-1650 v4 is at least $200 AUD more expensive.

Like you say later, Samba is single-threaded, and the extra GHz helps the performance of the system.

You should also be able to get away with assigning fewer vCPUs to the VMs that way.

The other thing is that I would redo the calculations a bit - ESXi will look at your total GHz based on base frequency, or at least it does for my E5645s:
1650: 6 x 3.6 = 21.6
2620: 8 x 2.1 = 16.8

This makes the gap even wider. Though it's still $200 more.

Re the extra HDDs, I have considered this in my power requirements - details in my OP, but I estimate I need 420W with the initial spec (six HDDs total), and about 590W once I add another six HDDs (12 total) - both numbers have some "fat" built into them as well, hence why I think a 550W PSU should be OK.

You're quite a bit on the light side there for 12 HDDs. You're looking at operating requirements, but spin-up requirements for HDDs are much higher - they draw considerably more power at spin-up than they do during operation.
 

wafliron

Dabbler
Joined
Aug 25, 2016
Messages
13
Yes, the FreeNAS boot images must be local. FreeNAS can't boot from images that don't exist, and the VMDK images don't exist in a FreeNAS-based datastore until after it has booted up.

Think we're crossing wires here - I'm talking about booting FreeNAS directly from the SAS controller(s) (so, from disks attached to the SAS controller(s) that are under full control of FreeNAS), not from VMDKs stored on the SAS controller(s).

Probably easier to re-frame in a non-virtualised context: if you're running FreeNAS on bare metal, can you boot it from disks / a ZFS volume directly attached to an LSI SAS controller? Or do you need to provide separate boot devices?

Either way, I've managed to get my hands on some free SSDs for local ESXi-local-storage (as per below), so I can just experiment but am "covered" either way.

You could try to set up an ESXi datastore on USB flash drive(s), but I wouldn't recommend that because I wouldn't trust it, assuming it could be made to work at all.

Agreed, that doesn't sound like a good idea. I've been able to get my hands on a couple of old Intel 320 40GB SSDs from work for free, so I'll use those for the local ESXi datastores.

Again, your selected SSDs will work fine. The Intel S3500s would be a step up in reliability and longevity... plus you can often find new stock on eBay for reasonable prices.

Couldn't find any that were reasonably priced when I looked a few days ago - but I'll certainly keep my eyes out between now and when I order my parts. Thanks.

Hmmm... if your VM images are stored on an ESXi-based datastore, deduplication doesn't come into play because FreeNAS isn't providing the datastore. If you follow my suggested design, FreeNAS boots from small SSD(s) and provides the datastore for all of your VMs, so deduplication would come into play in that case. Since you can enable deduplication on a per-dataset basis, you could certainly turn it on for your VM storage dataset, which you envision as being fairly small. Might be worthwhile giving it a try... but the deduplication documentation states that "In most cases, using compression instead of deduplication will provide a comparable storage gain with less impact on performance."

Definitely won't be storing the VM images on an ESXi-local-storage datastore - other than the FreeNAS boot disks (by the sounds of it) :)

Thanks for the quote / link re dedupe vs compression - I'd envisaged using both in this particular case, but haven't done a lot of research about how they interact when both enabled. Will tread cautiously and do more research before I commit to turning de-dupe on.

I humbly suggest HGST drives in lieu of Seagates; I use both the UltraStar and Deskstar NAS models.

Realised I forgot to reply to this part of your earlier post - I had HGST NAS drives on my list originally, but dropped them because they're 7,200rpm. Apart from not wanting the higher power draw, from what I've read it's a bad idea to mix HDDs with significantly different rpms in the same vdev (and both the WD Reds and Seagate NASs are 5,xxx rpm).

You're correct, it's not impossible to use the same device for both SLOG and L2ARC -- it's just a terribly bad idea. Just because you can do something, doesn't mean that you should do it... :)

OK, thanks for clarifying. If I decide to go down this path, I'll do some performance testing - but it seems the consensus is that I don't need L2ARC anyway, which avoids the problem.

Did you read the Power Supply Sizing Guide? I assure you that 550W is inadequate for a 14-drive system. I personally would use a 1000W PSU with that number of HDDs.

You're quite a bit on the light side there for 12 HDDs. You're looking at operating requirements, but spin-up requirements for HDDs are much higher - they draw considerably more power at spin-up than they do during operation.

Interesting you're both saying this, given that I did follow the Power Supply Sizing Guide to come up with my maximum potential watts (and hence PSU size) - and in every case where I could find the info, used actual maximum power figures (not idle / normal operating power requirements) provided by the manufacturers, rather than estimates.

My calculations are attached - after a couple of minor adjustments since my OP, they come out at maximum potential power usage of 405W (including the 25% buffer) with 6 HDDs, and 573W (again including 25% buffer) with 12 HDDs. This is all based on the E5-2620v4 CPU with a max TDP of 85W - the numbers will change if I switch to the E5-1650v4 as it has a max TDP of 140W.

I don't mind bumping up the size of my PSU a bit if I need to, of course - but I don't want to grossly over-provision or else my idle / normal operating power requirements are going to fall well below the efficiency "sweet spot" for my PSU. e.g. even with only a 550W PSU, my estimated idle Watts (with 6 HDDs) are only 18 above the bottom end of the PSU's "good efficiency" range (550W x 0.2 = 110W) - and I reckon I've overestimated my idle power by a fair margin.

The other thing is that I would redo the calculations a bit - ESXi will look at your total GHz based on base frequency, or at least it does for my E5645s:
1650: 6 x 3.6 = 21.6
2620: 8 x 2.1 = 16.8

This makes the gap even wider. Though it's still $200 more.

This is true (ESXi will report the base frequency), but it doesn't prevent Turbo from kicking in / from VMs benefiting from Turbo. So the calculation of what sort of performance each CPU offers under load should still be based on total GHz including Turbo benefits. For some reason I cannot find the max turbo speed for either of these processors with all cores under high load, which is the data I need to properly perform the comparison - very annoying!

Anyway - I will consider the E5-1650v4 as well - thanks. Considering a more expensive CPU does raise even further questions though: the E5-2630v4 is essentially the same price as the E5-1650v4, and while it still doesn't have the very high single core performance of the E5-1650v4, it definitely has more total compute power due to being a 10-core part (it's 10-core, 2.2GHz base, 3.1GHz max turbo).

Finally, an interesting aside - while having a look at the cost of the E5-2630v4, I noticed there's a heap of them being sold very cheaply out of Asia - e.g. http://www.ebay.com.au/itm/Intel-Xe...232152?hash=item51e4c16b58:g:EAIAAOSw8vZXMgAK and http://www.ebay.com.au/itm/Intel-Xe...764544?hash=item2370967780:g:YT4AAOSw6n5XuRmK. The first of those links mentions the CPU only being compatible with certain MBs, whereas the second says the CPU is an engineering sample with limited compatibility.

Probably a case of "when the price is too good to be true...", but interesting nonetheless!
 

Attachments

  • Power Calculations.PNG (483.7 KB)

wafliron

Dabbler
Joined
Aug 25, 2016
Messages
13
Sorry, one further question I forgot to ask: one way or another, looks like I'll be going for a socket 2011 processor along with the X10SRH-CF board. Can anyone suggest a good HSF for this combo, which won't block the DIMM slots closest to the CPU?
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Think we're crossing wires here - I'm talking about booting FreeNAS directly from the SAS controller(s) (so, from disks attached to the SAS controller(s) that are under full control of FreeNAS), not from VMDKs stored on the SAS controller(s).

Probably easier to re-frame in a non-virtualised context: if you're running FreeNAS on bare metal, can you boot it from disks / a ZFS volume directly attached to an LSI SAS controller? Or do you need to provide separate boot devices?
I think the confusion is due to the fact that you mention wanting to run an AiO with ESXi as the OS and FreeNAS as a VM:
Firstly, context - what is this home server going to be doing? Again, it will be an AiO machine running ESXi on the bare metal. On top of that I plan to run a virtualised copy of FreeNAS, to provide "loopback" storage to ESXi for the other VMs via NFS, plus providing NAS services to clients on the network via CIFS.
So in those regards, FreeNAS will have to be installed on local media that is a Datastore within ESXi. Sure you can pass through the HBA to FreeNAS, but first FreeNAS has to exist.

All VMs created in ESXi (VMDKs) have to be in a datastore that is accessible (either locally or network) by ESXi. But, what you are saying is that somehow you are going to pass through a HBA to a VM (FreeNAS) that doesn't exist so it can boot said VM?

If you have a way of doing this, I want to know 'cuz we can make a lot of money. ;)
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Think we're crossing wires here...
Agreed! See @Mirfster's post related to this subject.
Interesting you're both saying this, given that I did follow the Power Supply Sizing Guide to come up with my maximum potential watts (and hence PSU size) - and in every case where I could find the info, used actual maximum power figures (not idle / normal operating power requirements) provided by the manufacturers, rather than estimates.

My calculations are attached - after a couple of minor adjustments since my OP, they come out at maximum potential power usage of 405W (including the 25% buffer) with 6 HDDs, and 573W (again including 25% buffer) with 12 HDDs. This is all based on the E5-2620v4 CPU with a max TDP of 85W - the numbers will change if I switch to the E5-1650v4 as it has a max TDP of 140W.

I don't mind bumping up the size of my PSU a bit if I need to, of course - but I don't want to grossly over-provision or else my idle / normal operating power requirements are going to fall well below the efficiency "sweet spot" for my PSU. e.g. even with only a 550W PSU, my estimated idle Watts (with 6 HDDs) are only 18 above the bottom end of the PSU's "good efficiency" range (550W x 0.2 = 110W) - and I reckon I've overestimated my idle power by a fair margin.
It's great that you've gone to the trouble of running some calculations; it's always enlightening to see how much current draw each component adds to the total for the system. But did you really read the PSU sizing thread? Because the very first post is by @jgreco - considered one of the most knowledgeable members here on the forum - and he recommends 650W for an Avoton system, 750W for an E3-12xx system, and 850-1050W for an E5-16xx system when using 12 drives.

550W simply isn't adequate for a 12-drive system. Don't believe me? Browse 12-bay servers on eBay; you'll usually see redundant 750, 900, or 1200W PSUs in these systems.

We recently had an extended discussion with a user who insisted that a 500W PSU would be adequate for his 12-drive system; he's wrong, but may very well install it anyway. We tried to reason with him, we really did... "You can lead a horse to water, but you can't make him drink."

Member @Bidule0hm has gone to a great deal of trouble to measure the initial current spike drawn by HDDs at startup. The upshot is that you can't really put a lot of faith in the manufacturer's specifications and should realistically use ~30-35W per HDD in your calculations.

But I won't argue it any more... I believe in freedom, and that includes your freedom to select an under-sized PSU for your system if you've convinced yourself you know better than the engineers at Supermicro, HP, Dell, here on the forum, etc. :)

Good luck!
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
12 x 36W = 432W for the drives, + 140W for the CPU, + motherboard, RAM, HBA, SSDs, fans, etc.
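Filling in some ballpark guesses for the rest (everything after the drives and CPU below is just a placeholder number):

  # back-of-envelope total; non-drive, non-CPU figures are guesses
  echo $(( 12*36 + 140 + 60 + 30 + 20 + 20 ))   # drives + CPU + board/RAM + HBA + SSDs + fans ~= 702W

...which is already past 550W before you even allow for headroom.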
 
Last edited:

wafliron

Dabbler
Joined
Aug 25, 2016
Messages
13
It's great that you've gone to the trouble of running some calculations; it's always enlightening to see how much current draw each component adds to the total for the system. But did you really read the PSU sizing thread? Because the very first post is by @jgreco - considered one of the most knowledgeable members here on the forum - and he recommends 650W for an Avoton system, 750W for an E3-12xx system, and 850-1050W for an E5-16xx system when using 12 drives.

...

But I won't argue it any more... I believe in freedom, and that includes your freedom to select an under-sized PSU for your system if you've convinced yourself you know better than the engineers at Supermicro, HP, Dell, here on the forum, etc. :)

To be clear, if the consensus is that I need more than 550W (which it is), then I'm going to listen to that and buy a bigger PSU. The whole reason I'm here and asking questions is to tap into the expertise of others on topics that I'm relatively inexperienced in. The reason I continue to ask questions is simply that I have a curious mind and like to understand the "why" of a thing.

Re @jgreco's post, I have read it, end-to-end, several times. Including the bit with example systems and PSU sizes. But, I didn't pay that much attention to the examples, as they're listed under a heading that says "TL; DR - Precalculated Guesses for the Lazy Geek" and are prefaced with text explaining they're aimed as people who can't be bothered calculating their own detailed power requirements using the information earlier in his post. Where I think I went wrong with my own calculations was focussing too much on statements like "If you are building a system with more than four drives, I encourage you to look at the specifications for your drives..." and not enough on statements like "...but still suggest that you want to reserve about 35 watts for each drive."

I think the confusion is due to the fact that you mention wanting to run an AiO with ESXi as the OS and FreeNAS as a VM:

...

So in those regards, FreeNAS will have to be installed on local media that is a Datastore within ESXi. Sure you can pass through the HBA to FreeNAS, but first FreeNAS has to exist.

All VMs created in ESXi (VMDKs) have to be in a datastore that is accessible (either locally or network) by ESXi. But, what you are saying is that somehow you are going to pass through a HBA to a VM (FreeNAS) that doesn't exist so it can boot said VM?

(Firstly, I'm only continuing this part of the discussion as a curiosity - in practice I'm sure you are right, at least when it comes to FreeNAS.)

I still don't think I'm explaining myself properly - I'll try again:

A VMDK is not a Virtual Machine - a VMDK is a piece of virtual hardware that is attached to a VM. The VM is an entity unto itself, which in the ESXi-world is stored on disk as a VMX file (small file containing VM metadata, virtual hardware config, etc) and registered against an ESXi host.

It's extremely common that a VM has one (or more) VMDKs attached to it, but having VMDKs is not a requirement. For example, you can setup an ESXi VM without any VMDKs, and network boot it via PXE. Likewise, in theory, I think you could use VMDirectPath to pass-through a PCIe device that is capable of acting as a boot device, and boot from that just like you could at the physical machine level. I say "in theory" because I've never tried it, and can't find specific examples from a quick bit of Googling.

Applying the above, in theory you can setup a FreeNAS VM with the VMX stored on local ESXi storage but no VMDKs, and pass-through a PCIe device to boot from - e.g. a device such as a HBA / RAID Controller that includes a boot option ROM.

In practice, I'm guessing this doesn't work with FreeNAS and the recommended HBAs though - I'm guessing that when flashed to IT mode the LSI HBAs probably don't include a boot option ROM. Possibly it would work in IR mode (not that that's a good idea for FreeNAS) - but I'm not sure if you can trigger a boot option ROM on a passthrough device inside a VM, or if triggering a boot option ROM is only possible during the boot process of a physical machine. I might actually try this out once all my hardware arrives as a curiosity (with the HBA flashed in IR mode).

---

Finally, on a separate note - thanks again to everyone who contributed to this thread - very helpful, and greatly appreciated.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
To be clear, if the consensus is that I need more than 550W (which it is), then I'm going to listen to that and buy a bigger PSU. The whole reason I'm here and asking questions is to tap into the expertise of others on topics that I'm relatively inexperienced in. The reason I continue to ask questions is simply that I have a curious mind and like to understand the "why" of a thing.

Re @jgreco's post, I have read it, end-to-end, several times. Including the bit with example systems and PSU sizes. But, I didn't pay that much attention to the examples, as they're listed under a heading that says "TL; DR - Precalculated Guesses for the Lazy Geek" and are prefaced with text explaining they're aimed as people who can't be bothered calculating their own detailed power requirements using the information earlier in his post. Where I think I went wrong with my own calculations was focussing too much on statements like "If you are building a system with more than four drives, I encourage you to look at the specifications for your drives..." and not enough on statements like "...but still suggest that you want to reserve about 35 watts for each drive."
Outstanding! And kudos to you, Sir (or Madam), as it takes a big man (or lady) to admit when they've made a mistake, however minor.
I still don't think I'm explaining myself properly - I'll try again:

A VMDK is not a Virtual Machine - a VMDK is a piece of virtual hardware that is attached to a VM. The VM is an entity unto itself, which in the ESXi-world is stored on disk as a VMX file (small file containing VM metadata, virtual hardware config, etc) and registered against an ESXi host.

It's extremely common that a VM has one (or more) VMDKs attached to it, but having VMDKs is not a requirement. For example, you can setup an ESXi VM without any VMDKs, and network boot it via PXE. Likewise, in theory, I think you could use VMDirectPath to pass-through a PCIe device that is capable of acting as a boot device, and boot from that just like you could at the physical machine level. I say "in theory" because I've never tried it, and can't find specific examples from a quick bit of Googling.

Applying the above, in theory you can setup a FreeNAS VM with the VMX stored on local ESXi storage but no VMDKs, and pass-through a PCIe device to boot from - e.g. a device such as a HBA / RAID Controller that includes a boot option ROM.

In practice, I'm guessing this doesn't work with FreeNAS and the recommended HBAs though - I'm guessing that when flashed to IT mode the LSI HBAs probably don't include a boot option ROM. Possibly it would work in IR mode (not that that's a good idea for FreeNAS) - but I'm not sure if you can trigger a boot option ROM on a passthrough device inside a VM, or if triggering a boot option ROM is only possible during the boot process of a physical machine. I might actually try this out once all my hardware arrives as a curiosity (with the HBA flashed in IR mode).
You won't offend many of us here with honest intellectual curiosity. I'm the same way; I like to know the whys and wherefores about things. And it may be that we haven't been clear either! :)

While it may be possible to boot a FreeNAS VM given only a VMX file, with all the other ancillary files located elsewhere, I can't see any advantage in doing so. And I frankly admit that I've never heard of such a thing. Virtualizing FreeNAS is considered risky enough as it is without adding yet another layer of complexity and hence, fragility.

To put things in a nutshell when it comes to virtualizing FreeNAS: FreeNAS requires boot media, stored in a VMDK file -- two if you use a mirrored installation. The boot media are, of course, stored on an ESXi datastore -- and you can't boot the FreeNAS VM from boot media stored on a datastore provided by the FreeNAS VM itself. Once FreeNAS boots up it can provide a datastore for other VMs, but again, not its own.
Finally, on a separate note - thanks again to everyone who contributed to this thread - very helpful, and greatly appreciated.
You're very welcome! We always enjoy an edifying discussion with intelligent users. Good luck with your build.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
A VMDK is not a Virtual Machine - a VMDK is a piece of virtual hardware that is attached to a VM. The VM is an entity unto itself, which in the ESXi-world is stored on disk as a VMX file (small file containing VM metadata, virtual hardware config, etc) and registered against an ESXi host.

It's extremely common that a VM has one (or more) VMDKs attached to it, but having VMDKs is not a requirement. For example, you can setup an ESXi VM without any VMDKs, and network boot it via PXE. Likewise, in theory, I think you could use VMDirectPath to pass-through a PCIe device that is capable of acting as a boot device, and boot from that just like you could at the physical machine level. I say "in theory" because I've never tried it, and can't find specific examples from a quick bit of Googling.

Applying the above, in theory you can setup a FreeNAS VM with the VMX stored on local ESXi storage but no VMDKs, and pass-through a PCIe device to boot from - e.g. a device such as a HBA / RAID Controller that includes a boot option ROM.
Out of curiosity, I personally gave this a whirl to see what happens.
So here is what I did (On an ESXi 6.0 U2 Server):
  1. Attached a LSI 9211-8i (which is in IT Mode and running P20)
    • LSI is actually attached to the backplane that has 5 x 3TB SATA Drives and 2 x Intel DC S3500 160GB SSDs
  2. Created a "Diskless" VM
    • VMX is stored on the ESXi Local SSD Datastore
    • Called it "Diskless"
    • Gave it ample resources
    • Passed-Through the LSI 9211-8i
  3. Mounted the FN ISO and booted
  4. Install did see all drives and allowed me to install (which seemed promising)
  5. However on the very first reboot, it was unable to locate the drives and wanted to PXE Boot
Seems like it is a "Which came first, the chicken or egg" type scenario. Looking in the VM BIOS I did not see anything that would allow me to really set the boot environment to use the HBA either. *** Keep in mind I do not run EFI, but don't think it makes any difference...

Screenshots for reference:
(seven screenshots attached)
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Just to add one more thing, I would fathom that one could use RDM (Raw Device Mapping); but that would require even more setup to work... Simpler to do as stated and create the VMX and VMDK on a local DataStore... My 2 cents...
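For reference, a physical-compatibility RDM mapping is created from the ESXi shell along these lines (a sketch only - the naa.* device ID and the datastore path are placeholders):

  # Create an RDM pointer file for a local disk, then attach the resulting .vmdk to the FreeNAS VM
  vmkfstools -z /vmfs/devices/disks/naa.5000XXXXXXXXXXXX \
    /vmfs/volumes/datastore1/freenas/freenas-rdm-disk0.vmdk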
 