SOLVED 11.2-U2.1 Docker Woes

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
exportBase and basePath are the same thing... maybe I invented basePath from some other thing I was looking at.

It looks like you're OK, just need to sort out permissions.

When you create a new volume, normally it will be empty (Rancher will create the subdirectory if it doesn't exist), but if the subdirectory already exists, there's no process to work out the permissions.

To see if it's a permissions problem, I would start with chmod -R 777 * on the subdirectory and see how that goes.

If it works, you could chown to the right user and go back to 755 or 744 later.
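In shell terms, the wide-open-then-tighten approach might look like this. This is a sketch on a scratch directory; in practice the target would be your actual dataset subdirectory, and the 1000:1000 UID:GID is an assumed placeholder:

```shell
# Demo on a scratch directory; in practice APPDIR would be your dataset
# subdirectory on FreeNAS (the real path is up to you).
APPDIR=$(mktemp -d)/sonarr-config
mkdir -p "$APPDIR"

# Step 1: wide open, just to confirm the problem is permissions.
chmod -R 777 "$APPDIR"

# Step 2: once the container works, chown to the container's user and
# tighten back down. 1000:1000 is an assumed UID:GID -- check yours.
# chown -R 1000:1000 "$APPDIR"    # needs root, so commented out here
chmod -R 755 "$APPDIR"
stat -c '%a' "$APPDIR"    # prints 755
```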
 

SavageAUS

Patron
Joined
Jul 9, 2016
Messages
418
exportBase and basePath are the same thing... maybe I invented basePath from some other thing I was looking at.

It looks like you're OK, just need to sort out permissions.

When you create a new volume, normally it will be empty (Rancher will create the subdirectory if it doesn't exist), but if the subdirectory already exists, there's no process to work out the permissions.

To see if it's a permissions problem, I would start with chmod -R 777 * on the subdirectory and see how that goes.

If it works, you could chown to the right user and go back to 755 or 744 later.
Any idea what permissions I should set on FreeNAS for the shared datasets? I think I’ve got them set for nobody:nogroup or maybe media:wheel.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Any idea what permissions I should set on FreeNAS for the shared datasets? I think I’ve got them set for nobody:nogroup or maybe media:wheel.
So the answer to that is a little complicated, but predictable.

You need to know the user/userID and group/groupID of the user and group being used in the container (a Linux OS with its own users/groups).

You can try bringing up a console from the Rancher GUI when looking at the container and poke around with ls -l to see what the permissions are on the sonarr or application directories and files, or look at the documentation for the container, which may specify them. Then you can set them on FreeNAS with the IDs (or just do the chown from within the container using the names).

As a little extra credit, here's where I would look:
https://hub.docker.com/r/linuxserver/sonarr/

Look at the bits about PUID and PGID. It seems you can even change them to match what you already have on FreeNAS if you want to do that.
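As a sketch of what PUID/PGID look like in a plain docker run for that container: the IDs, ports, and host paths below are assumptions (substitute the UID/GID you find with `id youruser` on FreeNAS), and it's wrapped in a function so nothing runs until you call it:

```shell
# Assumed values throughout: PUID/PGID 1000, host paths under /mnt/nfs.
run_sonarr() {
  docker run -d \
    --name=sonarr \
    -e PUID=1000 \
    -e PGID=1000 \
    -p 8989:8989 \
    -v /mnt/nfs/configs/sonarr:/config \
    -v /mnt/nfs/media/tv:/tv \
    -v /mnt/nfs/downloads:/downloads \
    linuxserver/sonarr
}
```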
 

SavageAUS

Patron
Joined
Jul 9, 2016
Messages
418
So the answer to that is a little complicated, but predictable.

You need to know the user/userID and group/groupID of the user and group being used in the container (a Linux OS with its own users/groups).

You can try bringing up a console from the Rancher GUI when looking at the container and poke around with ls -l to see what the permissions are on the sonarr or application directories and files, or look at the documentation for the container, which may specify them. Then you can set them on FreeNAS with the IDs (or just do the chown from within the container using the names).

As a little extra credit, here's where I would look:
https://hub.docker.com/r/linuxserver/sonarr/

Look at the bits about PUID and PGID. It seems you can even change them to match what you already have on FreeNAS if you want to do that.
Update:
I have everything back up and running in jails except Organizr, but I have also created an Ubuntu VM and installed Docker: just vanilla Docker, no GUI, no Rancher, no Portainer. I've got most of what I need running in containers now as well. The only ones I haven't created containers for are Organizr and pihole. They both use port 80, so I still need to figure out reverse proxy for them.
Can I safely install nginx/traefik on my Ubuntu docker vm instead of a container? Without causing any problems with docker? Do I need to purchase a domain or anything for reverse proxy to work?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Can I safely install nginx/traefik on my Ubuntu docker vm instead of a container? Without causing any problems with docker?
You can, but why would you? nginx is a perfect use-case for docker since the config file(s) and certificate(s) are the only things that are specific to you and updating a container is much easier than going through all the install steps again.

Do I need to purchase a domain or anything for reverse proxy to work?
You can, but don't have to. duckdns.org offers free domains...

If you really want to, you can just run nginx on your public IP address, if you like remembering IP addresses.
 

SavageAUS

Patron
Joined
Jul 9, 2016
Messages
418
With my router I can register and set a dynamic DNS “example.tplinkdns.com”. If I register one of these, can I use that address in nginx for my reverse proxy instead of duckdns? Will that work for my internal network? 192.168.x.x -> example.tplinkdns.com:port
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
With my router I can register and set a dynamic DNS “example.tplinkdns.com”. If I register one of these, can I use that address in nginx for my reverse proxy instead of duckdns? Will that work for my internal network? 192.168.x.x -> example.tplinkdns.com:port
In principle, yes. Although I have no experience with that specific example.

What you do is register with some kind of dynamic DNS provider (duckdns or tplink or whatever) and do what's needed to update the DNS record they give you with your public IP (probably automatic on your router, need to run the duckdns container or app or manually update on their website in the case of duckdns).

Then, you set your reverse proxy (nginx, Caddy, Apache, Squid, HAProxy...) to listen on the server name of your DNS record rather than an IP (I prefer and use nginx, although I have HAProxy in the chain for load balancing too). For improved security, you can also create a server that listens on the IP only (192.168.x.x), and even your public IP if it doesn't change often, and set that server to report some kind of error (444 or whatever you like) to all queries. This way, if anyone is just port scanning you, they will get the impression it's just a non-existent server (444 is nginx's code for closing the connection without a response).

In the config for that server, you then redirect either everything (location /) or something specific (location /something/) to proxy_pass to your internal IP and port.

Then you port-forward 443 and maybe 80 on your router to your reverse proxy's internal IP address (192.168.x.x).

Then if you want to have any chance of security, you'll use certificates generated for your domain (letsencrypt does this for free, but needs you to do some more special config on your reverse proxy to allow the certificates to renew... that's all found easily in google or the docker container page for letsencrypt).

If your router/firewall supports it, I recommend using NAT reflection for your port forwarding rules so you can access your services internally and externally using the same URLs.
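Putting those steps together, a minimal nginx sketch might look like this. The domain, certificate paths, and internal IP/port are all placeholders:

```nginx
# Catch-all server: anything hitting the bare IP gets dropped (444 is
# nginx's non-standard "close the connection without responding" code).
server {
    listen 443 ssl default_server;
    ssl_certificate     /path/to/dummy.crt;
    ssl_certificate_key /path/to/dummy.key;
    return 444;
}

# Real server: only answers to the DNS name.
server {
    listen 443 ssl;
    server_name example.tplinkdns.com;

    ssl_certificate     /path/to/fullchain.pem;
    ssl_certificate_key /path/to/privkey.pem;

    location / {
        proxy_pass http://192.168.1.50:8989;   # internal service (placeholder)
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```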
 

SavageAUS

Patron
Joined
Jul 9, 2016
Messages
418
In principle, yes. Although I have no experience with that specific example.

What you do is register with some kind of dynamic DNS provider (duckdns or tplink or whatever) and do what's needed to update the DNS record they give you with your public IP (probably automatic on your router, need to run the duckdns container or app or manually update on their website in the case of duckdns).

Then, you set your reverse proxy (nginx, Caddy, Apache, Squid, HAProxy...) to listen on the server name of your DNS record rather than an IP (I prefer and use nginx, although I have HAProxy in the chain for load balancing too). For improved security, you can also create a server that listens on the IP only (192.168.x.x), and even your public IP if it doesn't change often, and set that server to report some kind of error (444 or whatever you like) to all queries. This way, if anyone is just port scanning you, they will get the impression it's just a non-existent server (444 is nginx's code for closing the connection without a response).

In the config for that server, you then redirect either everything (location /) or something specific (location /something/) to proxy_pass to your internal IP and port.

Then you port-forward 443 and maybe 80 on your router to your reverse proxy's internal IP address (192.168.x.x).

Then if you want to have any chance of security, you'll use certificates generated for your domain (letsencrypt does this for free, but needs you to do some more special config on your reverse proxy to allow the certificates to renew... that's all found easily in google or the docker container page for letsencrypt).
Ok, thanks, I'll keep chugging along. At the moment I'm going through the process of setting up nginx on Docker, but I'm having trouble working out the mount points. On my Docker host I have the config folder for nginx here: /mnt/nfs/configs/nginx, but I'm unsure where to mount that into the container for it to read it.

With the linuxserver/nginx container I can map the config correctly. Which one do I need to modify for reverse proxy? nginx.conf or the “default” file?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
linuxserver document their containers pretty well:

https://hub.docker.com/r/linuxserver/nginx/

Add this to your docker run statement:
-v /mnt/nfs/configs/nginx:/config

Any and all .conf files in that directory will be processed.
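Put together, a full run might look something like this: a sketch only, where the ports, timezone, and PUID/PGID are assumptions, wrapped in a function so nothing runs until called:

```shell
# Sketch of a complete run for linuxserver/nginx with the config mapped
# from the NFS path above. PUID/PGID/TZ/ports are assumed examples.
run_nginx() {
  docker run -d \
    --name=nginx \
    -e PUID=1000 \
    -e PGID=1000 \
    -e TZ=Australia/Sydney \
    -p 80:80 \
    -p 443:443 \
    -v /mnt/nfs/configs/nginx:/config \
    linuxserver/nginx
}
```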
 

SavageAUS

Patron
Joined
Jul 9, 2016
Messages
418
So I’m thinking that maybe the free DNS I’m using isn’t working well enough and contemplating purchasing a domain just to make things easier. Any suggestions on who to go through? I’m in Australia if that helps.
Also I’ve switched to Rancher 2.x Kubernetes, which makes NFS shares so much easier. But I’m still stuck with port conflicts; to begin with, Rancher runs on ports 80/443, so I want to get that behind a reverse proxy with SSL first, then start reverse-proxying my containers.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Also I’ve switched to Rancher 2.x Kubernetes, which makes NFS shares so much easier. But I’m still stuck with port conflicts; to begin with, Rancher runs on ports 80/443, so I want to get that behind a reverse proxy with SSL first, then start reverse-proxying my containers.
If you’re running both the server and agents on the same docker host, it’s well documented on the Rancher site that you need to resolve the port conflict by picking a new one when you run the server container.

I would have said it’s not really any easier or harder than 1.6 to do NFS, but that’s your opinion and you’re entitled to it (I’m not sure that you ever really did it the right way on 1.6 anyway).

I personally noticed that 2.0 is much more resource-heavy (and officially not compatible with the RancherOS version supplied by FreeNAS even if it does run... maybe that changed with 11.3 and I didn’t notice yet), so have stayed away from it for now other than some brief tyre kicking. This means you’re largely on your own here as I can’t offer much direct experience or insight.

You don’t have to use the Kubernetes cluster ingress (only 80 or 443) to publish, which means you can still do whatever ports you want, just as with all Docker hosts.

Free domains work just fine (I use several). Even if you pay for one, you will still need to tell it your public IP whenever it changes in order to use it.

I had noticed that we share the same city of origin.
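For the port-conflict point above, picking new host ports when running the Rancher server container might look like this. 8080/8443 are arbitrary choices, and again it's a function so nothing runs until called:

```shell
# Sketch: run the Rancher 2.x server on non-default host ports so that
# 80/443 stay free for a reverse proxy. 8080/8443 are arbitrary.
run_rancher() {
  docker run -d \
    --name=rancher \
    --restart=unless-stopped \
    -p 8080:80 \
    -p 8443:443 \
    rancher/rancher
}
```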
 

SavageAUS

Patron
Joined
Jul 9, 2016
Messages
418
I’m running 2.0 on an Ubuntu VM. I just can’t for the life of me work out volumes on 1.6. I did see that I can run Rancher 2.0 on different ports, but that doesn’t change the security warning I get when I open the website to access it. Even with those different ports for Rancher, there are still 2 more containers that use the same port (80), and if I change them the containers do not work.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Even with those different ports for Rancher, there are still 2 more containers that use the same port (80), and if I change them the containers do not work.
Can you show the docker ps with these additional containers with port redirections done?
 

SavageAUS

Patron
Joined
Jul 9, 2016
Messages
418
I will once I recreate everything. I wiped it all to start from scratch with Rancher 1.6 to try to get NFS mounts working, and I had partial luck. The volumes created and mounted but did not expose the contents to the container. Could this be A. a config error, or B. because I’m trying to mount datasets and not folders?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
It's usually just permissions... you can try chmod 777 * on those directories.

What are you using as settings in Rancher for the volumes in the container?
 

SavageAUS

Patron
Joined
Jul 9, 2016
Messages
418
So I've got volumes in Rancher 1.6 75% working: 2 of 3 volumes work. My download folder is mapped correctly and so are my TV shows, just not the config for Sonarr. It is set up the same as the other volumes with the same permissions, so I have no clue.
storage.PNG
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
What does the config of the service look like for the volume in the sonarr container?
 

SavageAUS

Patron
Joined
Jul 9, 2016
Messages
418
What does the config of the service look like for the volume in the sonarr container?
I think I know what you mean.
It looks like this:
sonarrconfig:/config. I’ve also tried sonarrconfig:/config:rw.
But I’ve just noticed that even though the folders are being created in the right spot (except config), the folders are actually empty, so no downloads and no shows.
So the top folders are mounting but not what’s underneath, and with the config, the container is creating its own Sonarr folder and placing the files in there instead of mounting my sonarr folder as /config. Hope that makes sense.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
You can help docker to understand that it’s a directory you’re mapping by doing it like this:

sonarrconfig:/config/:rw

Then it should be a question of permissions after that.

Just another thought... are you specifying at the bottom that it’s rancher-nfs storage?
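In docker-compose terms (which is what Rancher 1.6 stacks use), the whole thing might look something like this sketch, where everything except the volume line and the rancher-nfs driver is an assumed example:

```yaml
# Sketch of a Rancher 1.6 compose stanza; image and service name are
# assumed examples, the trailing slash marks /config/ as a directory.
version: '2'
services:
  sonarr:
    image: linuxserver/sonarr
    volumes:
      - sonarrconfig:/config/:rw
volumes:
  sonarrconfig:
    driver: rancher-nfs
```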
 

SavageAUS

Patron
Joined
Jul 9, 2016
Messages
418
You can help docker to understand that it’s a directory you’re mapping by doing it like this:

sonarrconfig:/config/:rw

Then it should be a question of permissions after that.

Just another thought... are you specifying at the bottom that it’s rancher-nfs storage?
I just tried adding the trailing / as suggested to no avail. Yes I am specifying rancher-nfs at the bottom.
What permissions should I have on my datasets / folders in FreeNAS and also in nfs shares?
 