SOLVED Multiple interfaces with DHCP? [tl;dr: Not possible or desirable]

Status
Not open for further replies.

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Sorry. I am still learning.

Aren't we all. Cursing and swearing my way through some ESXi kickstart wizardry involving loading some network drivers that then need to be used to properly configure the host networking. Of course that implies a reboot in between. But %pre has a bunch of limitations. Desperately trying to avoid having to build a custom ESXi image.
 

NASbox

Guru
Joined
May 8, 2012
Messages
650
Sorry it looks like an attack to you. I don't know the specific incident you think was an attack, but the reason was likely an effort to convince the poster that they should not do what they were trying to do.

Since I'm the OP that @Chris Moore replied to, I would like to add that I've always found Chris to be professional and super helpful and I didn't feel attacked.

I work in a much different environment. .... It can literally take a week when it is fast-tracked.
I just happened to reread this and I was wondering what industry your company is in?

... iX is creating an enterprise product here, and NONE of their customers would be asking for this.
I suspect the goal is to enforce best practices so FreeNAS doesn't get blamed for connectivity problems that are really poor network configuration.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I just happened to reread this and I was wondering what industry your company is in?
I work at a US Government agency and there are multiple departments involved (around a dozen people) between the approval and implementation of a network change request.
 

FreeNASftw

Contributor
Joined
Mar 1, 2015
Messages
124
It wasn't in this thread; at least one of the other threads broke down pretty quickly. It wasn't necessarily nasty, just pointless, off-topic discussion and bickering. I'm not that easily offended, don't worry :)

I wrote a big long rebuttal here but have redacted it... It was just covering old ground.
Points are:
global configuration is where default route and name servers are set, regardless of DHCP
IP reservations are self documenting
I think your Padawan made numerous errors and it would be a stretch to solely blame DHCP assignments - DHCP addresses don't disappear the second the server goes down. DHCP servers are generally critical; it would have been noticed had it gone down.
VRRP and multiple other technologies exist to produce redundancy for services, including DHCP (see the sketch just below these points)
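For what it's worth, ISC dhcpd has its own failover protocol for exactly this. A minimal sketch of the primary's dhcpd.conf (every address, port and timer here is a made-up placeholder, not something from this thread) would look roughly like:

# rough sketch of an ISC dhcpd failover pair (primary side) - all values are placeholders
failover peer "dhcp-failover" {
    primary;
    address 192.168.1.2;               # this server
    port 647;
    peer address 192.168.1.3;          # the secondary server
    peer port 647;
    max-response-delay 60;
    max-unacked-updates 10;
    mclt 3600;
    split 128;
    load balance max seconds 3;
}

subnet 192.168.1.0 netmask 255.255.255.0 {
    pool {
        failover peer "dhcp-failover";         # both servers hand out leases from this pool
        range 192.168.1.100 192.168.1.200;
    }
}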

The question is: if the product is aimed at professionals and, as you say, "@siconic offers the classic reasoning for wanting servers to be DHCP-configured above, and that's fine for what it is worth. Tradeoffs." then... why make the restriction? Tradeoffs are everywhere; let people make them.

FN is used in a multitude of environments and they're all different. While setting primary/management interfaces to DHCP may not be ideal, FN has had hypervisor functionality for some time now, and with many interfaces possible via VLANs it doesn't seem impossible that a valid configuration could include multiple DHCP addresses.

Anyway... thank you all! It really isn't a big deal, I just stumbled across it and thought it odd.
I'll take my smack on the wrist and go configure the interfaces manually now

P.S. I certainly lean towards static assignment in most cases myself; I'm not arguing strongly one way or the other, just that both are possibly valid.
 

FreeNASftw

Contributor
Joined
Mar 1, 2015
Messages
124
Sorry it looks like an attack to you. I don't know the specific incident you think was an attack, but the reason was likely an effort to convince the poster that they should not do what they were trying to do.

Sorry Chris! I have searched for the posts I was referring to and now can't find them; it's entirely possible I confused them with responses to the same question on a different forum (SO or similar).
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
global configuration is where default route and name servers are set, regardless of DHCP

Not really correct. FreeNAS will pick up settings from DHCP if you let it. Look at your /etc/dhclient.conf

request subnet-mask, broadcast-address, time-offset,
domain-name, domain-name-servers, domain-search, host-name,
interface-mtu;

If you manually set things, such as a default route, then yes, FreeNAS will stick a supersede clause in there. So the default for a new install, one where someone hasn't walked through the whole GUI configuration and wired down nameservers or a default route, is to "do the DHCP thing."
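Purely as an illustration (the addresses are placeholders, and FreeNAS writes the equivalent lines for you when you fill in nameservers or a default route in the GUI), a dhclient.conf with such supersede clauses would look roughly like:

# sketch only - placeholder addresses; FreeNAS normally generates these lines itself
request subnet-mask, broadcast-address, time-offset,
        domain-name, domain-name-servers, domain-search, host-name,
        interface-mtu;
supersede domain-name-servers 192.168.1.53;   # ignore the nameservers offered by DHCP
supersede routers 192.168.1.1;                # ignore the default route offered by DHCP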

I think your Padawan made numerous errors and it would be a stretch to solely blame DHCP assignments - DHCP addresses don't disappear the second the server goes down.

True, but sixty minutes, twelve hours, or even a day later, this might not be noticed until "too late" on a sleepy network.

DHCP servers are generally critical; it would have been noticed had it gone down.

Well, it wasn't, so perhaps that argument is prima facie false ;-)

VRRP and multiple other technologies exist to produce redundancy for services, including DHCP

Yeah, but I usually don't see redundant DHCP servers deployed on any but the largest networks. Surely a network with two ESXi hosts and a FreeNAS box in a colo somewhere isn't likely to have this.

The question is: if the product is aimed at professionals and, as you say, "@siconic offers the classic reasoning for wanting servers to be DHCP-configured above, and that's fine for what it is worth. Tradeoffs." then... why make the restriction? Tradeoffs are everywhere; let people make them.

Probably because in an environment where you *really* are planning to use DHCP for something like a remote-branch model, you also tend to have a simple design, such as a single network segment. Also, iX developers don't write this stuff; dhclient comes from ISC. If they aren't really interested in building their own DHCP client, or in spending lots of developer time trying to work out how to make it work in a deterministic fashion, I can understand that: it isn't an advisable configuration for reasons outlined already in this thread, and their enterprise customers don't seem to be beating down the door for it.

At the end of the day, a smallish company like iXsystems has to be looking around, a little terrified, at the way storage companies have come and gone. iX has done really well overall, and doesn't seem to have fallen into the trap of trying to be everything to everyone. But this probably involves making some hard choices about where to focus.

FN is used in a multitude of environments and they're all different. While setting primary/management interfaces to DHCP may not be ideal, FN has had hypervisor functionality for some time now, and with many interfaces possible via VLANs it doesn't seem impossible that a valid configuration could include multiple DHCP addresses.

Sure. And it'd be nice if it supported UFS on 1GB of RAM, and the ability to serve up off an NTFS or EXT3 filesystem, but it doesn't do those things either.
 

iLikeWaffles

Dabbler
Joined
Oct 18, 2019
Messages
15
Not sure if DHCP troubles are actually relevant for non-enterprise scenarios or just dogma. My home setup just assigns static DHCP leases to everything and then hides that under local domains. Works fine with multiple interfaces - my Ubuntu laptop, for example, shows up as either t450e.local or t450w.local, depending on whether I want an Ethernet or a wireless connection.

Exactly the same scenario with my Windows PC. And on my Raspberry Pi. And all the IoT crap, most of which doesn't even let me set a static IP in the first place.

I like the flexibility. With all of the network configuration being stored on the router, I can pick up any machine, plug in a random RJ45 and it'll work out of the box. Even if the router dies and I need to drop in a replacement, all the machines have their SSH instantly available (albeit at random IP addresses) without the need to run around with a serial cable.
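For anyone who hasn't set this up, a "static DHCP lease" is just a MAC-to-address mapping on the DHCP server. Assuming the router runs ISC dhcpd (the name, MAC and address below are invented), it is a few lines per machine:

# placeholder values - one block like this per machine
host t450-ethernet {
    hardware ethernet 00:11:22:33:44:55;    # the laptop's wired NIC
    fixed-address 192.168.1.50;             # always hand out this address
    option host-name "t450e";               # hostname handed to the client
}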

Don't know why FreeNAS has to be the outlier that can't elegantly handle DHCP on multiple interfaces. Maybe copy the solution Linux or Windows uses?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Don't know why FreeNAS has to be the outlier that can't elegantly handle DHCP on multiple interfaces.
Exclusively in regard to servers, it has been best practice for as long as I can remember that interfaces should be configured by the administrator, locally in the system, not by DHCP. What you are suggesting, having a DHCP server hand devices the same IP every time they show up on the network, works great; it is how we do things for everything that is not part of the 'infrastructure'. That means the printers and desktop computers and laptops, etc. Servers still need to come up with the same IP address after a reboot, even if they can't reach the DHCP server, which is why they get static assignments. Servers are often remotely managed, and if they come up with some other IP address, you can't reach them unless you physically travel to the data center, which is in another building at the site where I work. Switches, routers, servers and even some management systems need static addresses so they can still work when something else is not working properly.
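To make the difference concrete, on a plain FreeBSD box (which is what FreeNAS is built on) the two approaches look like this in /etc/rc.conf. The addresses are placeholders, and FreeNAS itself stores these settings in its configuration database via the GUI rather than in rc.conf:

# Option 1: static assignment - the box keeps this address even if the DHCP server is dead
ifconfig_em0="inet 192.168.1.10 netmask 255.255.255.0"
defaultrouter="192.168.1.1"

# Option 2: DHCP - the address depends on a DHCP server answering at boot
#ifconfig_em0="DHCP"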
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Exclusively in regard to servers, it has been best practice for as long as I can remember that interfaces should be configured by the administrator, locally in the system, not by DHCP. What you are suggesting, having a DHCP server hand devices the same IP every time they show up on the network, works great; it is how we do things for everything that is not part of the 'infrastructure'. That means the printers and desktop computers and laptops, etc. Servers still need to come up with the same IP address after a reboot, even if they can't reach the DHCP server, which is why they get static assignments. Servers are often remotely managed, and if they come up with some other IP address, you can't reach them unless you physically travel to the data center, which is in another building at the site where I work. Switches, routers, servers and even some management systems need static addresses so they can still work when something else is not working properly.

Did I tell you the story about ${unnamed-party-whose-software-you-use} who had an ESXi hypervisor set up with VMs running backed by a FreeNAS host, both dependent on a static DHCP assignment? This gets really entertaining when the DHCP server crashes, taking with it all the running VMs and the ability to manage the hosts in question. In a data center. In another state.
 

NASbox

Guru
Joined
May 8, 2012
Messages
650
Since I am the OP on this thread, I got an alert because of the post by @iLikeWaffles. That inspired me to revisit the ideas in this thread, so I'm going to ask a couple of side questions as well as respond to @iLikeWaffles:

...
At work, we have several physically separated networks for security reasons and some have DHCP servers, but I manage three that have no DHCP server and I have to put the IP address into all of those systems manually. It is a huge management pain, but it is a requirement for the environment.
Just wanted to check my understanding of a couple of things:
I'm assuming that this was to make sure that no unauthorized machines ended up on the network, as the environment was "high security". You mention multiple admins/sign-offs, so you need a "conspiracy" to get a rogue machine authorized, not just a single bad actor. Correct?


Not sure if DHCP troubles are actually relevant for non-enterprise scenarios or just dogma.
What is the downside of static DHCP reservations, other than that if the server fails the network may get a bit funky until you fix it?

Does lack of DHCP make it significantly more difficult for a bad actor who has remotely compromised a machine to pivot to another machine?

I like the flexibility. With all of the network configuration being stored on the router, I can pick up any machine, plug in a random RJ45 and it'll work out of the box. Even if the router dies and I need to drop in a replacement, all the machines have their SSH instantly available (albeit at random IP addresses) without the need to run around with a serial cable.
Unless I am missing something, DHCP reservations in the router (especially if it has a nicely laid out UI) are a self-documenting way of configuring the network. In a small network where everything depends on one firewall/router/DHCP box like a pfSense (or, God forbid, a consumer router), if that box fails, everything is pooched anyway until it gets fixed.

Don't know why FreeNAS has to be the outlier that can't elegantly handle DHCP on multiple interfaces. Maybe copy the solution Linux or Windows uses?
It looks like this may be a non-issue due to software changes since my OP.

I'm on 11.2-U7, and it appears that this restriction has disappeared. The WebUI allowed me to tick the DHCP box on one of the interfaces that I had set for static configuration (it appeared to take it OK, as the static information disappeared). I clicked cancel to abort the change (so as not to break my network), so I don't know if I would have gotten an error.

@iLikeWaffles What version of FN are you using? If you can, upgrade to 11.2-U7 and give it a try. (Let us know what happens; maybe this thread can be closed/marked "SOLVED".)
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
You mention multiple admins/sign-offs, so you need a "conspiracy" to get a rogue machine authorized, not just a single bad actor. Correct?
Absolutely. There are three offices that need to approve the action before the people with the capability to act are allowed to begin work. Then one branch provides the hardware, another branch places the hardware and makes the physical connection to the network, another branch sets port security so the device can communicate on the network, and another branch makes the configuration active in the management server so DHCP can work. The MAC address and machine name must match records, and there is software on the system that must check in. It is all in the name of security and the process is lengthy. It can take a month to get a computer moved from one cubicle to another in the same building. It is divided among so many people in an effort to prevent a bad actor from having unrestricted access to the network. All network ports are shut by default, and if the MAC address of the hardware changes, they go back to shut. There are ways to spoof the MAC, which is why there is also software on the computer that must check in with the network; it prevents booting from a Live CD or something like that. They also go around periodically with signal sniffers looking for cellular or WiFi hardware, as both are banned from the facility. USB flash drives are also forbidden, but you can get special approval for USB hard disk drives.
Does lack of DHCP make it significantly more difficult for a bad actor who has remotely compromised a machine to pivot to another machine?
We have a DHCP server in my environment; it is used for the desktop computers and printers, but not servers. The DHCP server will only assign an IP address to a recognized MAC address. It is intended to make it more difficult to get a physical device on the network. We have an array of firewalls, intrusion detection systems (IDS) and intrusion prevention systems (IPS) that attempt to prevent or stop bad actors from outside. The network at work is operated on a "trust no one" principle, because most data breaches in the organization's history have been from the inside.
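In ISC dhcpd terms (our server is not necessarily ISC, and the subnet, MAC and addresses below are invented), the "recognized MAC only" idea looks roughly like:

# sketch only - placeholder subnet, MAC and addresses
subnet 10.10.0.0 netmask 255.255.255.0 {
    pool {
        range 10.10.0.100 10.10.0.200;
        deny unknown-clients;                  # no lease unless a host block matches the MAC
    }
}

host workstation-17 {
    hardware ethernet 00:25:90:aa:bb:cc;       # the recognized MAC
    fixed-address 10.10.0.117;
}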
Unless I am missing something, DHCP reservations in the router (especially if it has a nicely laid out UI) are a self-documenting way of configuring the network.
I guess it is a matter of perspective. If you have a system on the network that has the IP address assigned in the OS instead of receiving that IP address from a DHCP server, that system will not know or care if the DHCP server is up, down or sideways. At home, my network switch is a 48-port device and everything connects to it, including the device that issues DHCP addresses. The switch, my servers and my computer all have static IP addresses and they can communicate with each other always and forever, even if I disconnect the router from the network. That is the advantage: if your router or the device that serves DHCP addresses should go down, nothing is "pooched", and you still have some systems that can be pinged to find out where the problem is. If the computer you are using depends on DHCP for its address, you can't even ping the router to see if the router is down. I know it creates some additional management pain (I deal with that every day at home and at work), but it makes the network more fault tolerant and gives you more troubleshooting options. Also, if your only documentation of the network configuration is in the router, where do you look when the router fails?
 

NASbox

Guru
Joined
May 8, 2012
Messages
650
Don't know why FreeNAS has to be the outlier that can't elegantly handle DHCP on multiple interfaces. Maybe copy the solution Linux or Windows uses?
Did you try using the new interface with FreeNAS v11.2-U7? I haven't had the need to reconfigure my NICs, but as far as I can see the new interface allows DHCP on all interfaces. @iLikeWaffles, if you check it out, please post back here and let us know if it works, or if it's just in the UI but does nothing or throws an error.
 

nirvdrum

Cadet
Joined
Mar 1, 2016
Messages
2
@NASbox FreeNAS 11.3-U1 apparently still limits DHCP to a single interface. I hope I'm just overlooking a setting somewhere.

I can appreciate not wanting to use DHCP to set the server's IP, but I'd still like DHCP to be available on multiple interfaces so I can use DHCP for my jails. The "only one interface can be used for DHCP" restriction applies to VLAN interfaces as well. I'd like to just use DHCP for all my jails regardless of VLAN and make use of static leases for the ones that need permanent addresses. Manually assigning IPs to every jail is an option of course, but seems to defeat the purpose of having DHCP at all.
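For reference, with VNET jails the lease is requested from inside the jail, independent of the host's interface settings. With iocage the relevant properties are roughly the following (the jail name is a placeholder, and I'm quoting these from memory, so treat it as a sketch rather than gospel):

# rough sketch - "myjail" is a placeholder name; needs a VNET-capable network setup
iocage set vnet=on myjail     # give the jail its own virtual network stack
iocage set bpf=yes myjail     # DHCP inside a jail needs BPF access
iocage set dhcp=on myjail     # let the jail request its own lease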
 

CPP-IT

Dabbler
Joined
Aug 14, 2017
Messages
43
Perhaps looking up DHCP relay will help with the quest for DHCP across interfaces.
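In ISC terms that is the dhcrelay daemon: it listens on segments without a DHCP server and forwards requests to one that has it. Roughly (interface names and server address are placeholders):

# forward DHCP requests heard on em1 and em2 to the DHCP server at 192.168.0.1
dhcrelay -i em1 -i em2 192.168.0.1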

It sounds like the divide in opinion also hinges on the needs of the organization.

A home NAS or even a small business (20 people or fewer) will likely not see "DHCP is down" as a crisis, or it is enough of an edge case that it can be a lower priority. Likely the NAS can be physically accessed and fixes made with relative ease.

If your business relies on the server, or if there also needs to be access to things like VPN endpoints, the ability to know without question that a service is available at a given IP address could be considered mandatory.

It's less about "this is fine, simple, and convenient" and more about "assurance that this works even in the face of failure". It seems the key is balancing reliability and convenience.
 

Mike Smith

Cadet
Joined
Jun 18, 2020
Messages
2
Delete the interfaces and it recreates all the interfaces with DHCP4 on (but not DHCP6)... because... reasons..?

[Attached screenshot: DNSCapture.JPG]
 

Mike Smith

Cadet
Joined
Jun 18, 2020
Messages
2
OK, premature post... I didn't catch that the first interface to get an IP address is the only one.
 

NASbox

Guru
Joined
May 8, 2012
Messages
650
Resurrecting an old thread here because I've just run into this too.
There are multiple threads on this exact topic and none of them answer the question; instead they resort to attacking the person raising the question.

Why has someone seemingly wasted time coding it so you can't have multiple DHCP interfaces?
As the OP, I'm going to put my $0.02 worth in... It's not that big a deal to just configure a static IP, and having lived that way for a couple of years I can see the point. My box lives on 3 subnets, and it actually worked out better having the IPs configured statically so they could all have the same host number (x.x.x.hostid/24), and I can remember it if the DHCP has a hiccup.
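For illustration, the scheme I mean looks like this (subnets and host ID are placeholders, shown rc.conf-style even though FreeNAS takes these settings through the GUI):

# same host number (.50 here) on every subnet - rc.conf style for illustration only
ifconfig_em0="inet 192.168.10.50 netmask 255.255.255.0"
ifconfig_em1="inet 192.168.20.50 netmask 255.255.255.0"
ifconfig_em2="inet 192.168.30.50 netmask 255.255.255.0"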
 