Some general advice about planning an nginx jail / SSL routing help?

detrarurto2

Cadet
Joined
Nov 7, 2021
Messages
2
I have my router send all incoming traffic on port 80 and 443 to an nginx jail running on my TrueNAS CORE device.
This was easy, via port forwarding.

I'm using nginx because I want to have different hostnames go to different jails.
site1.mydomain.com -> nginx sends that through to 10.0.0.31
site2.mydomain.com -> nginx sends that through to 10.0.0.32
etc.
That was easy, using proxy_pass.
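
Roughly, a simplified sketch of that kind of name-based routing (the hostnames and backend IPs are the example ones above):

Code:
# Simplified: one server block per hostname, inside the http { } block
server {
    listen 80;
    server_name site1.mydomain.com;

    location / {
        proxy_pass http://10.0.0.31;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
# ...and a matching server block for site2.mydomain.com -> 10.0.0.32, etc.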

But when I start adding SSL and certs, I get confused and I'm not sure of the best approach.

It seems better to me to have all my certs and nginx cert config in the one master nginx jail.
This makes renewing certs (Let's Encrypt) easier, as there is just one place where certs get renewed, instead of each jail needing to look after its own cert.

Should I have my main nginx jail set the cert + private key for a hostname, then proxy_pass to the correct jail?
Or is it better for my main nginx to have nothing to do with SSL at all, only proxy_pass, and have the nginx instance in each target jail set its own certs?
Not sure of all of the pros/cons.


The reason I ask:
I installed the official nextcloud plugin, was working fine on the jail's internal ip.
I tried to make this available via a hostname and apply certs, with proxy_pass from my nginx jail to Nextcloud -- I only had issues and could not get it to work that way. I believe this was because the Nextcloud plugin was using its own self-signed certs in its own nginx config, so my certs were breaking things when I proxy_passed to it(?)
I was successful with other sites, so I think the Nextcloud plugin just isn't for all use cases, and I will install it manually myself.

Any advice is appreciated, thank you!

P.S. Maybe a somewhat related question: what are the pros and cons of having ONE MariaDB instance in a jail, shared amongst all jails that need a DB, vs. each jail having its own MariaDB instance?
Multiple instances = more overall system resources needed, more security, more setup time.
One instance = individual jails can be killed and remade easily without worrying about backing up the database.
Anything else?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
[mod note: moved to Off-topic. While you may be running these things on top of FreeNAS/TrueNAS, these are basic webservice design questions that have nothing to do with FN/TN]
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I have my router send all incoming traffic on port 80 and 443 to an nginx jail running on my TrueNAS CORE device.
This was easy, via port forwarding.

I'm using nginx because I want to have different hostnames go to different jails.
site1.mydomain.com -> nginx sends that through to 10.0.0.31
site2.mydomain.com -> nginx sends that through to 10.0.0.32
etc.
That was easy, using proxy_pass.

But when I start adding SSL and certs, I get confused and I'm not sure of the best approach.

It seems better to me to have all my certs and nginx cert config in the one master nginx jail.
This makes renewing certs (Let's Encrypt) easier, as there is just one place where certs get renewed, instead of each jail needing to look after its own cert.

Should I have my main nginx jail set the cert + private key for a hostname, then proxy_pass to the correct jail?
Or is it better for my main nginx to have nothing to do with SSL at all, only proxy_pass, and have the nginx instance in each target jail set its own certs?
Not sure of all of the pros/cons.

The main "con" is that you'll find some things are building in LetsEncrypt support directly these days.

The secondary "con" is overall load. Using a server load balancer of some sort (a "proxy" is a degenerate case with a backend count of one) adds complication and design questions to a webservice, as you've discovered. For heavy duty, heavy traffic SLB, the decision often revolves around the question of if the frontend SLB has the processing power to handle it. But this is not likely to be relevant here unless you're contemplating supporting hundreds of megabits of traffic.

nginx is not the best of SLBs, and since I don't use it in that role, I can't comment specifically on what you've done. However, once you get to the size where you have more than one frontend servicing a web property, it becomes strongly favorable for the frontends to have a system to manage LetsEncrypt on their own. The one we use here has a dedicated server that intercepts the WKS challenges and maintains a list of domains that way.
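
As a rough sketch of that pattern (not our exact setup; the challenge host address is a made-up example), each frontend hands the ACME HTTP-01 challenges off to one dedicated box:

Code:
# On each frontend: send ACME HTTP-01 challenges to one dedicated
# challenge/issuance host (10.0.0.50 is a placeholder address).
server {
    listen 80 default_server;
    server_name _;

    location /.well-known/acme-challenge/ {
        proxy_pass http://10.0.0.50;
    }

    # Everything else gets bounced to HTTPS
    location / {
        return 301 https://$host$request_uri;
    }
}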

The reason I ask:
I installed the official nextcloud plugin, was working fine on the jail's internal ip.
I tried to make this available via a hostname and apply certs, with proxy_pass from my nginx jail to Nextcloud -- I only had issues and could not get it to work that way. I believe this was because the Nextcloud plugin was using its own self-signed certs in its own nginx config, so my certs were breaking things when I proxy_passed to it(?)
I was successful with other sites, so I think the Nextcloud plugin just isn't for all use cases, and I will install it manually myself.

Any advice is appreciated, thank you!

I would imagine that you might need to unroll some of what's going on and maybe tinker with it a bit. Usually, sites deciding to do SSL on the backend turn out to be problematic unless there's a coherent strategy defined as to who's in charge of the SSL. This stuff tends not to work right "out of the box". The usual dodge here, in my experience, is to let the frontend handle the "public" SSL, and then to have the proxy use SSL to connect to the backend, rendering the backend's insistence on managing its own SSL a moot issue (because it is seeing SSL, even if just from the frontend).
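
A rough sketch of that arrangement, as a starting point only -- the hostname, cert paths, and backend IP are placeholders, and proxy_ssl_verify is off here because the backend is presenting a self-signed cert:

Code:
server {
    listen 443 ssl;
    server_name cloud.mydomain.com;

    # "Public" cert lives in this one frontend jail (e.g. via Let's Encrypt)
    ssl_certificate     /usr/local/etc/letsencrypt/live/cloud.mydomain.com/fullchain.pem;
    ssl_certificate_key /usr/local/etc/letsencrypt/live/cloud.mydomain.com/privkey.pem;

    location / {
        # Speak SSL to the backend so it still sees SSL on its side,
        # but don't try to validate its self-signed cert.
        proxy_pass https://10.0.0.33;
        proxy_ssl_verify off;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}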

You could also assign port numbers and forward ports. I know that's ugly, but it is also guaranteed to work.

P.S. Maybe a somewhat related question: what are the pros and cons of having ONE MariaDB instance in a jail, shared amongst all jails that need a DB, vs. each jail having its own MariaDB instance?
Multiple instances = more overall system resources needed, more security, more setup time.
One instance = individual jails can be killed and remade easily without worrying about backing up the database.
Anything else?

I tend to design for multiple instances, because it means that a problem with one thing doesn't interfere with other things. There's an uncomfortable number of things out there that are picky about the versions of databases that they want to work with. You don't really want to find yourself in a situation where you install ShinyThingA and SparklyTrinketB which both connect to a SharedDB1, because invariably what will happen over time is that there'll be a security issue with ShinyThingA that requires an update, and the update requires SharedDB version 2, and SparklyTrinketB does NOT support SharedDB2.
 

detrarurto2

Cadet
Joined
Nov 7, 2021
Messages
2
@jgreco - thank you for your input! I really appreciate the response.
Once I have what I want working I will add some info about exactly how I went about it.
 