First FreeNAS-in-ESXi build -- going the used E5 route.


lightheat

Dabbler
Joined
Jul 15, 2018
Messages
13
Hi gang,

I read over the hardware recommendations guide and have tentatively pieced together what I think is a reasonable build. I have the HDDs and the case; the rest I have yet to order, pending opinions. I'm no stranger to IT, and given my desire to make full use of the hardware, I'd like to virtualize FreeNAS in ESXi. I know it's preferred to install it to bare metal, I understand many of the pitfalls, and I know I'm bound to run into all the challenges that others have, but I'm open to it. I'll probably be back here when I do. :)

Here's what I've got (blue items I already have)
I chose the CPU because unbuffered ECC RAM is still stupid expensive for both DDR3 and 4, while people seem to be giving away the registered variants. And hell, might as well get that extra boost in CPU power. It's honestly more powerful than my current desktop i7-4790k (14994 vs 11182 in PassMark). If I get the dual-slot motherboard, I may not buy 2 CPUs up front; I just enjoy having the upgrade path.

I'm torn on the motherboards. I've read in numerous places that the Supermicro boards have known security vulnerabilities, particularly with their IPMIs and passwords, enough to cause Apple to drop them entirely in 2016 due to allegedly infected firmware downloaded directly from Supermicro. I don't know if this is enough to not consider them at all-- please correct me on this-- but it's enough to make me look at other brands first. (Also, the alphabet soup of the various motherboard options in their lineup is dizzying.) The ASUS board seems to be the only ATX option for dual LGA2011 CPUs; that's why I'm heavily considering it. My case is too small to fit anything larger than ATX (becuz I dum). It seems ASRock is the favorite around these parts, plus I'd get a new board instead of used. It's just surprisingly expensive for a 4-year-old board.

I know I'm going to need the HBA to pass through (VT-d) the drives to the FreeNAS guest. I'm probably also going to need a separate volume for all the non-NAS hosted VMs and the ESXi OS in general. I've considered buying the recommended Intel SSDs for the write cache, but I understand that it's better to max out RAM first, so I've put that on the backburner. I'm also bottle-necked by a gigabit network for the foreseeable future. Any advantages I may not be seeing? I know the Intels are recommended because you can do some fancy SSD hacking that limits how much of the whole drive is actually usable to ...increase the wear ...leveling? I dunno, I forget. Something like that.

I plan on using RAIDZ2 with the 6 drives for ~24TB usable. I bought them in pairs many months apart, so they're not all from the same batch. I'm trying to do most of this the smart way.
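
For what it's worth, here's the back-of-the-envelope math behind that ~24TB figure (a rough sketch only; it ignores ZFS metadata, slop space, and the usual advice to keep the pool under ~80% full):

```python
# Rough RAIDZ2 capacity estimate for 6 x 6 TB drives.
# Ignores ZFS metadata/slop overhead and the ~80% fill guideline.
drives = 6
drive_tb = 6          # decimal terabytes per drive
parity = 2            # RAIDZ2 spends two drives' worth of space on parity

raw_tb = drives * drive_tb
usable_tb = (drives - parity) * drive_tb
usable_tib = usable_tb * 1e12 / 2**40    # decimal TB -> TiB

print(f"raw: {raw_tb} TB, usable: ~{usable_tb} TB ({usable_tib:.1f} TiB)")
# raw: 36 TB, usable: ~24 TB (21.8 TiB)
```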

The reason I want ESXi is because I want to have a Plex server (not just in jail), a CrashPlan for Small Business guest, a web server, a mail server, etc. I want the flexibility a hypervisor affords me.

Hope that covers everything. Please let me know your thoughts, especially if you think I left anything out or forgot a piece. I'm excited to get started on this!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Considering that every board that you might choose has security vulnerabilities in the form of the Intel CPU itself, whether we're talking about fun with AMT, or Spectre/Meltdown grade issues, or whatever, eliminating Supermicro is kind of like blowing your feet off with a shotgun. Also, there's really no evidence that ASUS and ASRock haven't had similar issues, it's just that as manufacturers who do not specialize in servers, problems with their products do not raise the crisis alarms in quite the same way. When I walk through colocation data centers, I see Dell, HP, and Supermicro as the primary server hardware vendors.

You WANT a manufacturer who specializes in servers. You just want to make sure that you secure your server and that you're not putting the IPMI management on the live Internet. The reason that Supermicro is the predominant manufacturer here is that in the early days of this forum, people used to halfarse their builds with random crap desktop boards or the cheapest APU they could find, and this led to all kinds of perverse issues. If you want a solid NAS, you need quality components such as Intel for the ethernet chipset - not Realtek. Supermicro's really the only option that offers small system builders a huge variety of FreeBSD-compatible options. I began pressing users to go Supermicro years ago and no one's been worse for it AFAICT.

Trying to get things like VT-d to work correctly requires full support of the mainboard in both the hardware and the BIOS, which many manufacturers, including Gigabyte, Intel, and others, have mucked up at times because it's such a niche thing. You are taking a big chance with ASUS or ASRock, might work, might not. I haven't seen issues with Supermicro's stuff in a long time. They got it right starting around their X9 server platform though I seem to recall there may be some issues with the workstation boards. Remember that this is something that has to be 100.00000% functional. Testing for months under heavy load is suggested. In general we do not recommend virtualizing FreeNAS. If you really feel you must do it, you absolutely need to read my guide that includes most of the sharpest pain points. We do see failures and I work hard to guide people away from virtualizing if possible, and from failing if they are insistent on virtualizing. If you don't read all the stuff at all those links, well, all the wisdom you need is available if you just follow the links.

At least one of the boards you are talking about appears to be a workstation, not server, board. This isn't recommended, in part because the workstation boards are designed for (and validated with) workstation loads, and as such, even if it were a Supermicro board, probably doesn't have as much testing on some of the more arcane stuff such as the VT-d we just discussed.

You will want to be very careful about RAM compatibility. E5 typically requires registered memory. Download the memory configuration guide for the board that you choose and make sure that you validate your choices.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
I concur with everything @jgreco said in his post above, and it would be wise to pay heed and also to study the links he listed.

I have built three FreeNAS-on-ESXi systems, all based on Supermicro motherboards -- see my signature below for details -- and I heartily recommend Supermicro for this use-case.

I also recommend using the slightly older VMware ESXi v6.0. It supports the Windows-based C# client, which I find to be much more intuitive and easy to use compared to the HTML5-based client which is all you get with newer versions of ESXi.

I used Benjamin Bryan's excellent FreeNAS 9.10 on VMware ESXi 6.0 Guide when I built my first all-in-one system. Despite referring to an obsolete version of FreeNAS, this is still a relevant reference for installing newer FreeNAS releases.

Good luck!
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
I also recommend using the slightly older VMware ESXi v6.0. It supports the Windows-based C# client, which I find to be much more intuitive and easy to use compared to the HTML5-based client which is all you get with newer versions of ESXi.
Nonsense. There are NUMEROUS options that you can ONLY use from the new UIs. Go with 6.7; the FLEX/Flash client supports everything, while the HTML5 client is 97% done. For a standalone host it's closer to 99.99% done.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Nonsense. There are NUMEROUS options that you can ONLY use from the new UIs. Go with 6.7; the FLEX/Flash client supports everything, while the HTML5 client is 97% done. For a standalone host it's closer to 99.99% done.

Yeah, the local VMware user group let out a collective gasp at a recent meeting where VMware reported that they had finally(!!!) made progress on enabling licensing(!!!) through the web UI.

Then there was a bit of a twitter when discussing the merits and downsides of the FLEX/Flash vs the HTML5 stuff, and it was pretty clear that the web GUI adopters in the room were not huge fans. It was also indicated that both the FLEX/Flash and the HTML5 each have mutually exclusive features as people were kvetching about needing to switch back and forth as neither is feature-complete. I was only listening with half an ear so I don't recall the specific issues.

For 6.0, there isn't much to worry about. Some of the more esoteric paid vSphere features, such as vFRC and hardware version 10 and up, can only be handled via the web GUI -- mostly things that are kinda nice to have if you have a full enterprise license. Simultaneous vMotion+storage vMotion is also only available from the web GUI, allowing you to live migrate a VM between two hypervisors without a shared datastore. There aren't a lot of benefits to the web UI on 6.0, and it is slowish and crappy-ish compared to the legacy client.

The legacy C# client is deprecated for 6.5 and 6.7, so while it is true to say

there are NUMEROUS options that you can ONLY use from the new UIs

this is more a side effect of the C# client not being officially supported. It actually does "work" on 6.5 but with significant limitations.

If you do not need fancy features -- and anyone looking at FreeNAS-on-ESXi as an all-in-one home appliance is probably looking at free ESXi anyways -- I do not see a ton of value in anything beyond 6.0 from a features point of view. However, it's also fine to just go with 6.7, learn that, and have it as a baseline too.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Yeah, the local VMware user group let out a collective gasp at a recent meeting where VMware reported that they had finally(!!!) made progress on enabling licensing(!!!) through the web UI
You mean the vSphere Client. The Web Client is the flex/flash version:p
If you do not need fancy features
VM hardware compatibility level. I don't have a matrix in front of me but there are a number of VM settings that cannot be changed in the C# client.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You mean the vSphere Client. The Web Client is the flex/flash version:p

No, the workaround people had been using was to fire up the C# vSphere Client, or to log in and assign a license at the CLI, to handle licensing. This is hilarious because it isn't a real good way to put your best foot forward for new users.

You would think that taking care of licensing would be among the first features you'd want to get implemented in the UI, unless perhaps the thing was such a dumpster fire that all your web developers were busy trying to make critical functionality work correctly. We know that the original "Web Client" UI started off as a fling, though, so it's quite possible that some devs hacked together something that worked "well enough" but never got the resources to go back and fix it right, and then when the possibility of doing it in HTML5 came along, got immediately migrated off onto that. At that point, VMware was already indicating that the C# client was EOL (or nearing it, or whatever) so I imagine this is largely an example of declaring the old thing dead before having a good "new" solution. I don't know what the reality behind the scenes at VMware actually is. However, the people that have to work with vSphere on a day-to-day basis have found the lack of a coherent single client that works correctly and does everything to be Extremely Frustrating, whether we are discussing the C# vSphere Client (doesn't do the "new stuff") or the Web Client (doesn't do random stuff/is wicked slow/etc) or the "new" "HTML5" vSphere Client (wasn't released "feature-complete" but is allegedly "much better now.")

All of which has got to be as confusing as fsck to a newbie.

VM hardware compatibility level. I don't have a matrix in front of me but there are a number of VM settings that cannot be changed in the C# client.

I have yet to see a compelling reason to move beyond hw7 in most cases. You need hw10 for vFRC or nested hypervisors. Aside from that there isn't a lot.

https://kb.vmware.com/s/article/2051652

Ironically, the earlier C# clients would refuse to do anything at all with editing the VM settings, while the more recent C# clients will mostly let you edit what they know how to handle. I see that as a tacit admission of how badly the web GUI stuff sucked for so long. :smile:
 

lightheat

Dabbler
Joined
Jul 15, 2018
Messages
13
Hi jgreco and others:

Thank you for taking the time to write a thorough response. However, I have some concerns that you may have missed a couple of things in my post, and there are a couple of other things I wish to address:

eliminating Supermicro is kind of like blowing your feet off with a shotgun. Also, there's really no evidence that ASUS and ASRock haven't had similar issues, it's just that as manufacturers who do not specialize in servers, problems with their products do not raise the crisis alarms in quite the same way. When I walk through colocation data centers, I see Dell, HP, and Supermicro as the primary server hardware vendors.

That's fair; I too see those manufacturers primarily, but I cannot afford to drop $1,000+ on a mainboard from Dell or HP that might not even conform to a standard ATX form factor. Please trust me, I've looked hard into Supermicro. Bearing in mind my limitation of ATX, that limits my options to just 4 of Supermicro's whopping 86 socket R options (scroll past R3)-- hardly shooting my foot off. Also, saying that there's no evidence that someone doesn't do something is meaningless. There is plenty of evidence that I'll run into serious problems with plaintext passwords if I start programmatically configuring a Supermicro motherboard. What I was seeking was to have those concerns honestly addressed with possible solutions and workarounds (maybe there was a firmware fix?), not to have my concerns dismissed as an alarmist reaction.

You WANT a manufacturer who specializes in servers. You just want to make sure that you secure your server and that you're not putting the IPMI management on the live Internet.

This is good advice, and I'll certainly keep the management on its own network, away from the 'net. The ASUS mobo has a dedicated LAN port for its management interface, it seems.

The reason that Supermicro is the predominant manufacturer here is that in the early days of this forum, people used to halfarse their builds with random crap desktop boards or the cheapest APU they could find, and this led to all kinds of perverse issues. If you want a solid NAS, you need quality components such as Intel for the ethernet chipset - not Realtek.

Also good advice; I imagine that gets very frustrating seeing users choose those boards for a "server" build time and again. Please bear in mind, though, that this doesn't really apply to me. I did not pick a cheap consumer APU or CPU, nor did I seek out the worst consumer-grade motherboard possible in the hopes of shaving off a couple bucks. The two I linked above are both marketed as server boards, not workstation, with Intel NICs, not Realtek. They also have the same chipsets as their (limited) Supermicro replacements. I made sure of that before I posted. Like I said, I did indeed read your guide (it was very helpful, by the way), and chose accordingly based on my situation.

Testing for months under heavy load is suggested.

Good advice. How do most users here stress-test their setups? Since I chose many used components, I'm naturally going to run Memtest86 for a few days at the least, plus whatever the Linux equivalent of Prime95 is on the CPU while watching temps (not too long, though), but I'm not sure what I can do to test the drives from the FreeNAS environment. Normally I'd just do a ridiculous 32-pass DoD boot-and-nuke, but I'm not sure how I'd do that with a software RAID.

In general we do not recommend virtualizing FreeNAS. If you really feel you must do it, you absolutely need to read my guide that includes most of the sharpest pain points. We do see failures and I work hard to guide people away from virtualizing if possible, and from failing if they are insistent on virtualizing. If you don't read all the stuff at all those links, well, all the wisdom you need is available if you just follow the links.

I really do understand the concern, and thank you for the links. This is a requirement for my build. I re-read all of your posts many times, which is why I have the HBA in my list. I don't foresee the server-grade boards I mentioned having trouble with VT-d, especially given that my consumer-grade Gigabyte board has no problems with it currently, but it's certainly possible. Based on your posts, passing the HBA via VT-d and giving the VM a solid 32GB of the 48GB of available RAM seems like a good starting point, no?

At least one of the boards you are talking about appears to be a workstation, not server, board. This isn't recommended, in part because the workstation boards are designed for (and validated with) workstation loads, and as such, even if it were a Supermicro board, probably doesn't have as much testing on some of the more arcane stuff such as the VT-d we just discussed.

Can you please tell me which board it is, and what made you think it's not server-grade? I may have missed something.

You will want to be very careful about RAM compatibility. E5 typically requires registered memory.

I indeed chose registered memory. Please see above.

I have built three FreeNAS-on-ESXi systems, all based on Supermicro motherboards -- see my signature below for details -- and I heartily recommend Supermicro for this use-case.

I see you chose the X9DRi-LN4F+ for your E5-2***v2 build. It's in the same price range and appears to have everything I need, but this is an E-ATX board, which I cannot use. I'm starting to wonder whether I should just consider getting a new chassis at this point.

I also recommend using the slightly older VMware ESXi v6.0. It supports the Windows-based C# client, which I find to be much more intuitive and easy to use compared to the HTML5-based client which is all you get with newer versions of ESXi.

I'm a C# developer by trade so that certainly caught my eye. I'm not against using a web client, though. I'd like to use the latest stable versions of software, if possible.

I used Benjamin Bryan's excellent FreeNAS 9.10 on VMware ESXi 6.0 Guide when I built my first all-in-one system. Despite referring to an obsolete version of FreeNAS, this is still a relevant reference for installing newer FreeNAS releases. Good luck!

Thanks for the link! It appears I already have it bookmarked. :)
 

lightheat

Dabbler
Joined
Jul 15, 2018
Messages
13
I wish to correct myself: while those are indeed the only 4 ATX Supermicro boards with Dual-CPU support, there are 11 more that support a single CPU. I will look into these a bit more, but my concerns about Supermicro-specific vulnerabilities remain. I don't think I'm going to be able to avoid the CPU-based vulnerabilities, unfortunately.
 

joeinaz

Contributor
Joined
Mar 17, 2016
Messages
188
Choice of case is one of the critical facets of building a FreeNAS solution. The case needs to meet your current need, allow for reasonable growth, provide adequate cooling, and be quiet enough for the area it is in. Many people choose the case first and then try to find the components to meet the need. My suggestion is to design your system requirements first, THEN find the components to meet the need. I have FreeNAS systems that use both ATX and E-ATX cases. The key is how I intend to deploy my solution.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
That's fair; I too see those manufacturers primarily, but I cannot afford to drop $1,000+ on a mainboard from Dell or HP that might not even conform to a standard ATX form factor.

TTBOMK there aren't Dell or HP boards that really conform to a standard ATX form factor, at least not without customizations that make that meaningless.

Please trust me, I've looked hard into Supermicro. Bearing in mind my limitation of ATX, that limits my options to just 4 of Supermicro's whopping 86 socket R options (scroll past R3)-- hardly shooting my foot off. Also, saying that there's no evidence that someone doesn't do something is meaningless. There is plenty of evidence that I'll run into serious problems with plaintext passwords if I start programmatically configuring a Supermicro motherboard. What I was seeking was to have those concerns honestly addressed with possible solutions and workarounds (maybe there was a firmware fix?), not to have my concerns dismissed as an alarmist reaction.

https://www.bleepingcomputer.com/ne...ion-on-hpe-ilo4-servers-with-29-a-characters/

https://www.dell.com/community/Syst...ilities-identified-on-DRAC-iDRAC/td-p/4540177

https://pedromadias.wordpress.com/2012/06/25/all-your-asus-servers-ikvmipmi-may-belong-to-other/

https://blog.eclypsium.com/2018/08/09/bmc-ipmi-and-the-data-center-underbelly/ -- you'll need to search the page for ASRock (sorry)

So here's the smart path: *assume* these all represent a security risk and do not present the IPMI/BMC to the public Internet. Your "no evidence that someone doesn't do something is meaningless" line is ironically meaningless itself: you should actually do just the opposite and assume that it IS potentially vulnerable and WILL have an exploit in the future. Designing stuff with that sort of paranoia mindset is how people in IT work to mitigate risk. You cannot really prove that these things ARE secure. Assume they are NOT.

Inserting relevant message ----

I wish to correct myself: while those are indeed the only 4 ATX Supermicro boards with Dual-CPU support, there are 11 more that support a single CPU. I will look into these a bit more, but my concerns about Supermicro-specific vulnerabilities remain. I don't think I'm going to be able to avoid the CPU-based vulnerabilities, unfortunately.

You can feel free to look at the UP solutions. These are very attractive. People often overlook the E5-1650 vX CPU's (3.2GHz+ six-core CPU) which gives you substantially more punch than the favored E3-1230's common around here. You can also put a massive core count E5-2697 - the v2's sometimes around $400 used - on the UP boards. This gives you an easy way to get into the 30 GHz aggregate CPU range.
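
Roughly speaking, using the v2 parts as an example (base clocks pulled from Intel's public spec pages; turbo, IPC, and memory channels ignored, so treat it as a ballpark only):

```python
# Ballpark "aggregate GHz" comparison: cores x base clock.
cpus = {
    "E3-1230 v2": (4, 3.3),   # (cores, base GHz)
    "E5-1650 v2": (6, 3.5),
    "E5-2697 v2": (12, 2.7),
}
for name, (cores, ghz) in cpus.items():
    print(f"{name}: {cores} x {ghz} = {cores * ghz:.1f} GHz aggregate")
# The E5-2697 v2 works out to ~32 GHz aggregate, i.e. the "30 GHz range".
```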

Back to previous message ----

Also good advice; I imagine that gets very frustrating seeing users choose those boards for a "server" build time and again. Please bear in mind, though, that this doesn't really apply to me. I did not pick a cheap consumer APU or CPU, nor did I seek out the worst consumer-grade motherboard possible in the hopes of shaving off a couple bucks. The two I linked above are both marketed as server boards, not workstation, with Intel NICs, not Realtek. They also have the same chipsets as their (limited) Supermicro replacements. I made sure of that before I posted. Like I said, I did indeed read your guide (it was very helpful, by the way), and chose accordingly based on my situation.

I'd say that the Z9PA-D8 is intended as a workstation board. This is evidenced by dual PCIe x16 slots (~useless on servers) and the absence of IPMI as a built-in option. It seems like ASUS is trying to leverage the board to act in both roles, which is typically a good recipe for it to do neither well.

Good advice. How do most users here stress-test their setups? Since I chose many used components, I'm naturally going to run Memtest86 for a few days at the least, plus whatever the Linux equivalent of Prime95 is on the CPU while watching temps (not too long, though), but I'm not sure what I can do to test the drives from the FreeNAS environment. Normally I'd just do a ridiculous 32-pass DoD boot-and-nuke, but I'm not sure how I'd do that with a software RAID.

There's lots of threads and discussions of this here.

https://forums.freenas.org/index.php?threads/building-burn-in-and-testing-your-freenas-system.17750/
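
If you want the flavor of what the disk side boils down to, here's a minimal sketch only -- it assumes smartmontools is available and FreeBSD-style da0..da5 device names, which you'll need to adjust to whatever your system actually enumerates; the thread above covers the full routine in far more depth:

```python
# Minimal disk burn-in sketch: start SMART extended self-tests on each
# drive, then (much later) dump health/attributes for review.
# Assumes smartmontools is installed and the drives show up as da0..da5.
import subprocess

disks = [f"/dev/da{i}" for i in range(6)]   # adjust to your device names

for disk in disks:
    # Kick off the drive's extended (long) self-test; this runs on the
    # drive itself and typically takes many hours per disk.
    subprocess.run(["smartctl", "-t", "long", disk], check=False)

# ... wait until smartctl reports the self-tests have completed ...

for disk in disks:
    # Overall health verdict plus the raw attribute table, to eyeball
    # for reallocated/pending sectors before trusting the pool.
    subprocess.run(["smartctl", "-H", "-A", disk], check=False)
```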

I really do understand the concern, and thank you for the links. This is a requirement for my build. I re-read all of your posts many times, which is why I have the HBA in my list. I don't foresee the server-grade boards I mentioned having trouble with VT-d, especially given that my consumer-grade Gigabyte board has no problems with it currently, but it's certainly possible.

What we've seen is that there are a lot of boards that intermittently exhibit issues that cause the PCIe HBA to do bad stuff. This often presents as odd MSI/MSIX issues. No one here is really going to want to dig into trying to help debug that on some random board. It's really just cheaper and easier to pick stuff that's expected to work. Otherwise you are acting as your own guinea pig.

I'm out of time right now else I might say more.
 

lightheat

Dabbler
Joined
Jul 15, 2018
Messages
13
So here's the smart path: *assume* these all represent a security risk and do not present the IPMI/BMC to the public Internet. ... You cannot really prove that these things ARE secure. Assume they are NOT.

You raise a valid point. Based on what I had read up until your latest point, it was simply a matter of a preponderance of evidence against Supermicro as compared to the others, but as you said, I have to factor in how widespread the usage is. I was always going to keep IPMI off the internet, but even still, bad programming design is usually enough to turn me off (initially) to a product.

You can feel free to look at the UP solutions. These are very attractive. ... You can also put a massive core count E5-2697 - the v2's sometimes around $400 used - on the UP boards. This gives you an easy way to get into the 30 GHz aggregate CPU range.

Embarrassingly, I had not considered just beefing up the core count of a single CPU in lieu of a dual CPU solution.

I'd say that the Z9PA-D8 is intended as a workstation board. This is evidenced by dual PCIe x16 slots (~useless on servers) and the absence of IPMI as a built-in option. It seems like ASUS is trying to leverage the board to act in both roles, which is typically a good recipe for it to do neither well.

Thanks. I had noticed that IPMI wasn't explicitly mentioned, but the fact that it has its own management LAN port led me to infer that it has its own variant of IPMI. I could easily be wrong about that. And I see what you mean about the dual x16s. I'll avoid this one.


https://forums.freenas.org/index.php?threads/building-burn-in-and-testing-your-freenas-system.17750/

Thanks for this, I'll follow it.

What we've seen is that there are a lot of boards that intermittently exhibit issues that cause the PCIe HBA to do bad stuff. This often presents as odd MSI/MSIX issues. No one here is really going to want to dig into trying to help debug that on some random board. It's really just cheaper and easier to pick stuff that's expected to work. Otherwise you are acting as your own guinea pig.

If choosing a non-Supermicro board means I'll have a more difficult time seeking support on this site, that might be enough to tip the scales. At the moment, what I need to decide is whether to eat the cost of my current chassis (it wasn't much) and get one that can support E-ATX, or go with the single CPU solution with an ATX board and choose a beefier CPU as you suggested. Rosewill has a popular 4U case with a ton of HDD bays built-in-- might look back into that. The reason I didn't in the first place was its length, but that's no longer a limitation.

Thank you very much for the suggestions; I really appreciate it.
 

lightheat

Dabbler
Joined
Jul 15, 2018
Messages
13
OK, based on suggestions, here's my new prospective build. The things I changed are in red, things I own in blue, unchanged in black:
Also, because I forgot to mention it:

I'll either sell or repurpose the other case, and buy one that can support E-ATX, like the Rosewill. I discovered I can reduce the cost of the RAM even further if I go with 16GB RDIMMs, so I went with a solid 64GB. I changed the cages to the 5-in-3, since the Rosewill case has vertically mounted 5.25" bays. This means the vertically mounted cage slots will now be horizontal. I'll probably get some Noctua fans to improve the airflow and cut down on noise.

Is the HBA OK? I know the 9211-8i is a common one. Is there a better option?

Thanks!
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
The LSI HBA cards are highly recommended and I personally have had good luck with them in several of their incarnations (LSI 9210/9211, Dell H200, IBM M1015).

Regarding SSDs... I doubt you'll need these for an ARC cache (as you mentioned above), but you may want to consider booting from one. They're much more reliable than USB thumb drives. I boot from a mirrored pair connected to their own HBA, but that's just overkill on my part. :)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
but you may want to consider booting from one. They're much more reliable than USB thumb drives. I boot from a mirrored pair connected to their own HBA, but that's just overkill on my part. :)

Well for an ESXi datastore that's not "overkill" but just best practice. Ideally you want ESXi to have a local datastore, not just boot from a thumb drive ("fail-y"), and if you want reliability that means RAID1. This can be as simple as a low-end RAID controller in IR mode, but for a really sweet DAS datastore, look to a decent LSI 2208 based controller, and favor the ones with the supercap, such as the 9271CV-8i. Even with slowish HDD in RAID1, this will give your hypervisor a fast and responsive datastore if configured correctly. You can also go with SSD in RAID1 but then you have to make sure ESXi marks the datastore as SSD.

Most hypervisors here have three WD Red 2.5"'s that are WDIDLE'd and then five 500GB SSD's. This gives a RAID1 1TB HDD datastore with hot spare, and two RAID1 500GB SSD datastores with one hot spare between them, for what is probably about the best you can get in DAS reliability. :smile:

I don't have time to reply to the OP's previous post right now.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You raise a valid point. Based on what I had read up until your latest point, it was simply a matter of a preponderance of evidence against Supermicro as compared to the others, but as you said, I have to factor in how widespread the usage is. I was always going to keep IPMI off the internet, but even still, bad programming design is usually enough to turn me off (initially) to a product.

There's not really a preponderance of evidence against Supermicro, just different circumstances. The HP and Dell issues typically get remediated through their support channels. Remember, most of the customers who buy those systems have contractual hardware support and maintenance agreements. This is generally not the case for Supermicro systems. This leads to different behaviours when an issue occurs.

These BMC and IPMI subsystems are specialty hardware and firmware often designed and developed by a small number of outside contractors, companies like Nuvoton, AMI, ATEN, etc., which are then customized by mainboard manufacturers. Supermicro doesn't develop their own, and given the sheer scale of Supermicro compared to ASUS or ASRock, I'm pretty sure ASUS and ASRock don't either. It just isn't economical.

Embarrassingly, I had not considered just beefing up the core count of a single CPU in lieu of a dual CPU solution.

Don't feel bad. It's funny how often people totally forget the UP solutions. More interestingly, for people who want the most performance out of things like CIFS, where performance is largely tied to core speed, the E5-16xx CPU's outperform the E5-26xx CPU's by a large amount on a per-GHz-and-per-dollar basis.

If choosing a non-Supermicro board means I'll have a more difficult time seeking support on this site, that might be enough to tip the scales.

Well, not to put too fine a point on it, but ESXi support isn't really offered here. So from one perspective, "it doesn't matter" what you choose. Most of the users here who are playing with virtualization these days are going to be using FreeNAS as a Type 2 hypervisor and then running their VM's on that. For that role, you are definitely best off going with recommended hardware.

In the old days (2011-~2013), though, the bhyve hypervisor didn't exist, and even jails didn't exist. There were a lot of people here, especially with older hardware, who desperately wanted to be able to use more of their platforms. And I can tell you, there were a lot of problems. People tried all sorts of stuff, desperate to make it work. Some pretty smart people tried doing a variety of clever things and many had it implode on them. That was part of the original genesis of my dire warning not to virtualize. Something I wrote even as I had been virtualizing FreeNAS for some time. :smile: It turns out you need to be willing to commit appropriate resources to the task, on hardware capable of the task, and then be cautious about how you deploy it. So there is actually a path to safely virtualize FreeNAS, but it will not work on just any old hardware, or with RDM, or on a 4GB VM, etc.

There were many others who rejoiced at that recipe to virtualize FreeNAS, and happily virtualized their FreeNAS systems on ESXi. If you could go back about five years, you'd find a bunch of really cool people here who had deep knowledge of ESXi. There are still a few of them hanging around here, sometimes. However, the big driver for that was that FreeNAS lacked jails and a T2 hypervisor, so my observation is that most new users are not fighting the FreeNAS-on-ESXi battle. Building a robust production-grade fileserver on ESXi with FreeNAS VM is complicated, because you have to master both things. Many home users prefer to only have to master the one thing (FreeNAS) and not have to learn an arcane bit of infrastructure software (ESXi) and all of the ins and outs of hardware compatibility, design, etc. for it. This is, after all, layering one very complicated thing on top of another very complicated thing. It still amazes me that it works sometimes.

So I just want you to come away with a clear understanding that there may not be a whole lot of support on the hypervisor end of things. To the extent that you have problems with your FreeNAS VM, yes there are lots of people who will be happy to try to help, and for a properly designed FreeNAS VM with VT-d, it should be fairly similar to a bare metal FreeNAS, so you have good chances of getting general FreeNAS help.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Then there was a bit of a twitter when discussing the merits and downsides of the FLEX/Flash vs the HTML5 stuff, and it was pretty clear that the web GUI adopters in the room were not huge fans. It was also indicated that both the FLEX/Flash and the HTML5 each have mutually exclusive features as people were kvetching about needing to switch back and forth as neither is feature-complete. I was only listening with half an ear so I don't recall the specific issues.

As a followup, VMware indicates that a fully-featured HTML5 client will be completed this fall.

https://blogs.vmware.com/vsphere/20...l5-based-vsphere-client-coming-fall-2018.html

The Flash client will never be feature-complete of course, as development has not been focused on it in some time.

El Reg has a snarky^Wgreat article on this; https://www.theregister.co.uk/2018/05/10/vsphere_html_5_client_coming_at_last/

which included a link to this comment which sums it all up quite nicely.

[Attached screenshot: vmware_html_5_client_blog_screen_shot.png]
 

lightheat

Dabbler
Joined
Jul 15, 2018
Messages
13
OK, final update. Y'all convinced me. I trimmed down the build and am going to install FreeNAS to bare metal.

I'm on mobile so forgive the lack of formatting (feeling lazy atm):
  • E5-2640 v2
  • Supermicro X9SRH-7F (has onboard HBA)
  • 2x 16GB DDR3 RDIMMs
  • Seasonic 80+ Gold 550W PSU
  • 2x SanDisk U100 16GB SSDs in RAID1 for FreeNAS
  • 6x HGST Deskstar NAS 6TB
  • Athena Power 5-in-3 HDD cage/backplane
Came to about $650 for all but the drives, then another $1,000 for them. I'll make another server for the hypervisor.

Thanks everyone for your advice. See? I listened! ;)
 