
Build Report: Node 304 + X10SDV-TLN4F [ESXi/FreeNAS AIO]

Kevin Horton

FreeNAS Guru
Joined
Dec 2, 2015
Messages
621
Thanks
223
The X10SDV-TLN2F board also has 10Gb networking, which might be useful in the future.
 

Stux

FreeNAS Wizard
Joined
Jun 2, 2016
Messages
4,166
Thanks
1,612
My main concern with the Xeon, besides heat, is the 6 SATA ports. I need to use all 6 for my disks.
This is why I use an M.2 boot drive for ESXi.

Do you lose a SATA port if you plug in an M.2?
Certainly not if it's a PCIe NVMe M.2.

Part of me wants the 8-core, very low power machine, but damn, those single-core benchmarks are worse than I expected. FreeNAS isn't *that* multi-threaded, is it?
Seems pretty multithreaded to me. SMB is not.

In the benchmarks I've seen, a Denverton Atom C3000 16-core compares favorably with a Xeon D-1500 8-core.

Of course, need to consider the Xeon D 2500 as well now.
 
Last edited:

diskdiddler

FreeNAS Guru
Joined
Jul 9, 2014
Messages
2,133
Thanks
126
This is why I use an M.2 boot drive for ESXi.

Certainly not if it's a PCIe NVMe M.2.

Seems pretty multithreaded to me. SMB is not.

In the benchmarks I've seen, a Denverton Atom C3000 16-core compares favorably with a Xeon D-1500 8-core.

Of course, need to consider the Xeon D 2500 as well now.

Thanks for the reply, Stux.

I didn't know a PCIe NVMe M.2 avoids the loss of a SATA port; that's very cool and I guess makes sense. (I'd still _much_ prefer USB drives for cost, redundancy (easy to add 2), etc. - plus I've personally had zero problems with them in my rig.)
I'll be looking at the Denverton C3000 8-core, the C3758. It seems similar to the Xeon D-1521 in the same price range. It's a difficult decision, but I think I might just stick with the 25W part; after all, my server cupboard... well, it's a long, complicated story, but it hits 33°C (91.4°F) INSIDE my house in summer.

The Xeon D 2xxx series, IIRC, are all higher power again.

So the last thing I guess I need to know is which RAM to get.

EDIT: One other thing - I googled, and it sounds like SMB will eventually get multi-threading, right?
 
Last edited:

Chris Moore

Super Moderator
Moderator
Joined
May 2, 2015
Messages
9,355
Thanks
2,991
Sorry to leverage your thread, but I need someone smart to outright confirm for me whether these RAM sticks (2-pack) would definitely work in the Supermicro Denverton boards.
I am not Stux, but it looks pretty clear to me. The board spec says it supports "Up to 256GB Registered ECC DDR4-2400MHz" and the memory is "DDR4 2400 (PC4 19200) 2Rx8 288-Pin 1.2V ECC Registered RDIMM compatible Memory".
DDR4, Registered, ECC, 2400; all boxes ticked.
 

diskdiddler

FreeNAS Guru
Joined
Jul 9, 2014
Messages
2,133
Thanks
126
I am not Stux, but it looks pretty clear to me. The board spec says it supports "Up to 256GB Registered ECC DDR4-2400MHz" and the memory is "DDR4 2400 (PC4 19200) 2Rx8 288-Pin 1.2V ECC Registered RDIMM compatible Memory".
DDR4, Registered, ECC, 2400; all boxes ticked.
Thanks, Chris. I only stress because memory compatibility can be a real mess at times, especially with ECC, buffered, registered, unbuffered, etc.

I'll take the gamble in the next hour unless someone else says wait (!)
 

pro lamer

FreeNAS Guru
Joined
Feb 16, 2018
Messages
623
Thanks
111
it looks pretty clear to me.
On the other hand, some Supermicro X10 boards are mentioned in our forums as RAM-picky... Having said that, this board is not an X10, so I guess I am no help here. At least I am not surprised @diskdiddler is looking for a sanity check...

Other wild guesses, I guess a bit too late:
  • Does the store have a friendly return policy?
  • They claim the product is Supermicro compatible, so maybe they accept compatibility claims, or at least can list tested motherboards?

Sent from my mobile phone
 

diskdiddler

FreeNAS Guru
Joined
Jul 9, 2014
Messages
2,133
Thanks
126
On the other hand, some Supermicro X10 boards are mentioned in our forums as RAM-picky... Having said that, this board is not an X10, so I guess I am no help here. At least I am not surprised @diskdiddler is looking for a sanity check...

Other wild guesses, I guess a bit too late:
  • Does the store have a friendly return policy?
  • They claim the product is Supermicro compatible, so maybe they accept compatibility claims, or at least can list tested motherboards?

Sent from my mobile phone
I'll be ordering it to a house in California that I'll be at in a day, but I've no real way of testing it for two weeks until I return to Aus.

Even then I probably need another week or two before I can consider firing the memory up with the board.

Memory can be a picky thing unfortunately.
 

Ender117

FreeNAS Experienced
Joined
Aug 20, 2018
Messages
204
Thanks
38
This is why I use an M.2 boot drive for ESXi.

Certainly not if it's a PCIe NVMe M.2.

Seems pretty multithreaded to me. SMB is not.

In the benchmarks I've seen, a Denverton Atom C3000 16-core compares favorably with a Xeon D-1500 8-core.

Of course, need to consider the Xeon D 2500 as well now.
Hi @Stux, I am wondering if you have any IOPS numbers for your pool? I am also using a P3700 as my SLOG, and my sync write IOPS were lower than I expected (10K IOPS at 4K writes). Now I don't know if there is something wrong with my setup or if this is simply how things should be. If you could run iozone -a -s 512M -O in a sync=always dataset and post the output here, that would be really helpful for me. Thanks!
https://forums.freenas.org/index.php?threads/performance-of-sync-writes.70470/#post-486742
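
For reference, a minimal sketch of that test from the FreeNAS shell, assuming a pool named tank and a throwaway dataset named synctest (both hypothetical names):

# create a scratch dataset and force synchronous writes on it
zfs create tank/synctest
zfs set sync=always tank/synctest

# run the benchmark from inside the dataset and save the output
cd /mnt/tank/synctest
iozone -a -s 512M -O > iozone-sync.txt

# clean up afterwards
zfs destroy tank/synctest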
 
Last edited:
Joined
Feb 17, 2015
Messages
2
Thanks
0
Awesome build! I'm at 87% capacity for my current box (ancient Atom w/ 4 GB of RAM) and picked up a Node 304 maybe a year ago during a crazy sale, assuming I'd use it for my next box. Now that I'm further along, it seems like the case is pushing me towards more expensive components compared to the options of a larger box (although now that I think about it, an X10 is going to cost $200+, whereas something like a C236 WSI is about the same, so maybe it's more of a feature trade-off).
 

SMnasMAN

FreeNAS Experienced
Joined
Dec 2, 2018
Messages
111
Thanks
20
Two SLOGs might saturate its capacity. But better than nothing. And only if it happens together.

So give it a try and monitor it. If you see sustained 90-100% busy then you're probably saturating the SLOG.

@Stux - I thought that you could NOT use a single device as a SLOG on more than one datastore? (i.e. you can't use one P3700 as a SLOG on two different datastores.)

Am I wrong? (And if I'm wrong, do you just make two partitions on the P3700, and then add the SLOG to the datastores via the CLI rather than the GUI?)
Thanks!

EDIT: I can answer my own question here - yes, it is possible to use partitions of a SLOG device, and thus spread a single Intel Optane across two or more datastores - although it is highly NOT recommended!
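
For the curious, a rough sketch of what that looks like from the FreeNAS shell, assuming the Optane shows up as nvd0 and the two pools are named tank1 and tank2 (all hypothetical names; the GUI won't do this for you, which is part of why it's discouraged):

# carve the NVMe device into two partitions (sizes are illustrative)
gpart create -s gpt nvd0
gpart add -t freebsd-zfs -s 16G nvd0
gpart add -t freebsd-zfs -s 16G nvd0

# attach one partition to each pool as a log vdev
zpool add tank1 log /dev/nvd0p1
zpool add tank2 log /dev/nvd0p2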
 
Last edited:
Joined
Mar 12, 2019
Messages
5
Thanks
0
Hey everyone - I am a little confused about my networking setup and I was hoping someone could help. I have tried following the Node setup as much as possible - thanks Stux!

I am by no means an ESXi networking expert, but I worked as a network/security engineer for many years.

  • I have basically a flat internal network at home, all on 10.0.2.0/24, except for my DMZ where I have 4 static IPs I haven't used yet
  • Server is a Supermicro X9DRi-LN4F+
  • I'm using a Mellanox 2x 40Gb port card into my backbone (Brocade 6650) with only one interface plugged in currently
  • I have a pair of M.2 960 EVO Plus 256GB drives which serve up ESXi/FreeNAS (mirroring boot to the other one)
  • I have an LSI HBA in passthrough for all other drives in my Supermicro 846 chassis
  • I have an Intel Optane 900P (might be replacing it with a P3700 soon for power protection) for SLOG
  • ESXi is on 10.0.2.50
  • The storage network (MTU 9000) is configured as 10.0.3.1 within ESXi
  • FreeNAS is on 10.0.2.60 (vmx0)
  • The FreeNAS storage network that I will serve NFS on for VMs is 10.0.3.2 (vmx1)

I have 6x 12TB Seagates in a pool of mirrored pairs that I would like to serve up over SMB on the 10.0.2.0/24 network.

For fast VM storage I have 4x 1TB Crucial SSDs in a second pool of mirrored pairs (also off the HBA) that I plan on serving NFS from on the 10.0.3.0/24 network. It is plenty of storage space for the VMs I need.
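
Purely to illustrate that layout, a sketch in zpool terms with hypothetical pool and device names (in FreeNAS you would build this in the GUI rather than at the CLI):

# three mirrored pairs of the 12TB disks for the SMB pool
zpool create smbpool mirror da0 da1 mirror da2 da3 mirror da4 da5

# two mirrored pairs of the 1TB SSDs for the NFS/VM pool
zpool create vmpool mirror da6 da7 mirror da8 da9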

I am looking at serving up VMs on the 10.0.2.0/24 network for internal use, but I'm also looking at serving up some VMs in a DMZ (I have not chosen a subnet yet, but let's say 10.0.5.0/24, which will be mapped to a static IP on the outside coming from my Untangle).

Where I am a bit lost is how to segregate the DMZ VMs (10.0.5.0/24) from my internal network as much as possible to prevent internal compromise.

My thought was to wire the second interface on the Mellanox directly to my firewall (Untangle), with an IP address within ESXi on its own virtual switch for the DMZ.

VM internal network 10.0.2.0/24 (Physical adapter 1)
VM DMZ network 10.0.5.0/24 (Physical adapter 2)
VM Storage network mtu 9000 10.0.3.0/24

Am I correct in the assumption that any VM spun up in 10.0.5.0 would also have access to the 10.0.3.0 storage network, since that is where the NFS datastore is? In the OSX example in this thread, the OSX VM only has a network adapter on the VM network. Is it getting access to the backend storage through the virtual storage switch within ESXi, without defining a storage network adapter in the VM?

I am confused about how routing works internally in ESXi with the storage network and the NFS datastore on the virtual switches. What happens if my 10.0.5.0 server is compromised - can it route to the 10.0.3.0 storage network?

Sorry if I made this confusing; any help is appreciated. Perhaps the AIO isn't ideal for carving out a DMZ subnet.
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
11,781
Thanks
3,036
Your ESXi does zero routing. It's not a router. It's a hypervisor. It does have a "default route" for its own internal operations, but this doesn't affect VMs.

The term "virtual switch" is a highly accurate description of what a vSwitch is, with a few exceptions.

What you set up for the ESXi hypervisor does not have to have anything to do with any other bit of networking, but it can also be tightly intertwined if you configure it to be. But it'll just appear to be a host like any other on your network.

---

You haven't explained what sort of topology you've used for your switching environment except to say "flat internal network", which implies that maybe you're just putting everything in the same broadcast domain and hoping that "DMZ" is some magic thing that will fix your security issues. Let me be clear - don't do this. If you're going to do that, at least make it a truly flat network, skip the DMZs, and use a single network. (Don't do this either - it's insecure - but at least it's honest about what's going on.)

Each IP network you have needs to be a separate broadcast domain. Many networking beginners will overlay multiple IP networks on a single broadcast domain and think that this is buying them some separation. It isn't, at least not to an intruder. I run across networking "gurus" all the time who are convinced that they have good reasons to overlay networks or break the broadcast domain in various ways. Usually this just leads to misery of some sort in the future.

You can create separate broadcast domains by having multiple switches. This is the traditional but somewhat expensive way to do this. It has the benefit of being easy to understand. I'm not telling you to do this, but I want you to start out with that idea in your head to help make the next bit make more sense.

On a single switch, there is the concept of "VLANs", which many people are terrified of. Don't be. VLANs are virtual LANs. They are the networking equivalent of virtual machines. You can create multiple virtual switched networks within your physical switch. Your ESXi vSwitch also supports this. And by "supports" I really mean "was designed primarily for this".

So if you want to have a storage network and a DMZ network and an internal network and an external (upstream) network, you can easily have 4 VLANs. Configure your switch to present four standard ports ("access mode") on VLANs 10, 11, 12, and 13, and a trunk port with all 4 VLANs going to your ESXi. Configure your ESXi vmnic0 to be connected to vSwitch0 (which it should be by default). Then configure vSwitch0 to have a port group on each of VLANs 10, 11, 12, and 13. You can add VMs to any of these VLANs and they will be totally isolated from the other VLANs. You can add a VMkernel port group that allows ESXi to access those networks for things like the ESXi NFS client. You can then have an Untangle VM or physical host to handle routing. You attach the networks you need to the systems where you need them.
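
As a rough sketch of the ESXi side of that, using esxcli from the host shell; the port group names and the VLAN-to-network mapping below are illustrative, not gospel:

# tag a port group per network on vSwitch0 (names and VLAN IDs are examples)
esxcli network vswitch standard portgroup add --portgroup-name=STORAGE --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name=STORAGE --vlan-id=10
esxcli network vswitch standard portgroup add --portgroup-name=DMZ --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name=DMZ --vlan-id=11

# give the hypervisor itself a leg on the storage network for its NFS client
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=STORAGE
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.0.3.1 --netmask=255.255.255.0 --type=static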

This is how networks should be designed. You can have dozens or hundreds of VLANs. There are some limits... if you're doing a virtual router of some sort, you are limited to 10 network cards per VM, which may be a practical limit on the complexity of your network. You don't want to do a trunk interface to a VM because it can cause some serious performance issues, though of course you can use physical hardware with a trunk interface to configure dozens or hundreds of networks.

Source: professional network engineer, been doin' this for decades...
 
Joined
Mar 12, 2019
Messages
5
Thanks
0
Your ESXi does zero routing. It's not a router. It's a hypervisor. It does have a "default route" for its own internal operations, but this doesn't affect VMs.

------

So if you want to have a storage network and a DMZ network and an internal network and an external (upstream) network, you can easily have 4 VLANs. Configure your switch to present four standard ports ("access mode") on VLANs 10, 11, 12, and 13, and a trunk port with all 4 VLANs going to your ESXi. Configure your ESXi vmnic0 to be connected to vSwitch0 (which it should be by default). Then configure vSwitch0 to have a port group on each of VLANs 10, 11, 12, and 13. You can add VMs to any of these VLANs and they will be totally isolated from the other VLANs. You can add a VMkernel port group that allows ESXi to access those networks for things like the ESXi NFS client. You can then have an Untangle VM or physical host to handle routing. You attach the networks you need to the systems where you need them.
Thanks, this answered my questions. I wasn't sure if there was some internal routing that ESXi was doing on its own. I haven't dealt with internal ESXi networking configuration (virtual switches/port groups), but I have been in network/security for a long time.

My current network is, for all intents and purposes, flat (a work in progress in the homelab) because I have not yet built out my internet-facing presence (no inbound currently). I will VLAN that off into a separate security zone, as well as carving things up a bit with more VLANs - it's on my to-do list. I have a Brocade 6610 and a 6650 as well as a physical Untangle firewall. All routing between networks is done through the Untangle. Currently I am just using one internal VLAN for servers, and it is not exposed inbound from the internet. I have a separate VLAN for my Ubiquiti wireless but left that out as it doesn't really come into play in this situation.

What I want is a DMZ VLAN for internet-facing ESXi VMs hanging off my Brocade switch and routed out through the Untangle (VLAN configured on both of those). I was going to use the second port of the 40Gb NIC in the ESXi server to connect to the Brocade on that VLAN (the first port is on the internal network).

I don't want any connectivity from those VMs into my other internal VMs, my internal network, or the ESXi storage network, and I was wondering whether the NFS datastore those VMs sit on, which is on the ESXi storage virtual switch, would be routable from the DMZ VMs themselves because they are using that datastore. From what you said, it sounds like it won't be exposed - it is just the VMkernel port group that has access.
 