SOLVED 11.2-U2.1 Docker Woes

SavageAUS

Patron
Joined
Jul 9, 2016
Messages
418
Hi all,

I previously had Docker working, but since I blew away my last working Docker machine I cannot get Rancher to load; it stays stuck at:
Code:
cu -l /dev/nmdm54B

[root@FreeNAS ~]# cu -l /dev/nmdm54B
Connected


I have tried deleting the hidden .bhyve folder, rebooting, etc., but still no go.
A standard VM works just fine.

Any help getting this to work will be appreciated.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Have you tried pressing Enter after it reports Connected?
 

SavageAUS

Patron
Joined
Jul 9, 2016
Messages
418
Have you tried pressing Enter after it reports Connected?
Yes, many times. As you can see above, I'm up to nmdm54.
Would love to reset that number too.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
What NIC are you using? I found that the VirtIO NIC was no good. Also, how much RAM? 1 GB isn't enough for the latest RancherOS; it must be 2 GB.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Also, can you get the Serial option to work from the new GUI? (Although I suppose that just lands you at the same stuck screen.)
 

SavageAUS

Patron
Joined
Jul 9, 2016
Messages
418
I'm using the Intel e82545 (e1000) NIC attached to my primary NIC igb0.
No matter where I try to serial in from, I get stuck at the same screen.
I'm giving the VM 2 CPUs and 4 GB of RAM.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I note that there's some seriously weird stuff happening in the RAW disk file setup (under devices) with 11.2.

Here are the two different screens I see from a working Docker VM I created last night... I don't know how this can work; I certainly didn't enter the additional slash in the path.

[Attached: two screenshots of the RAW file device settings]


How are you doing it? New or old UI?
 

SavageAUS

Patron
Joined
Jul 9, 2016
Messages
418
I'm 100% using the new UI, but I have tried from the legacy UI too; no difference, it still stalls on the same screen.
 
Last edited:

SavageAUS

Patron
Joined
Jul 9, 2016
Messages
418
So it would seem that even though I had the hidden .bhyve folder, I didn't have the hidden .vm_cache folder on the correct drive.
Surely there is a way to choose which drive holds both of these hidden folders, instead of them just being created at random?

Anyway, I managed to get Rancher up and running once I moved the .vm_cache folder to the same drive that holds the .bhyve folder.
 

SavageAUS

Patron
Joined
Jul 9, 2016
Messages
418
Even though this thread is marked solved by me, I need to ask the experts why my containers are not getting an IP address. I've tried with the FreeNAS Docker VM and with an Ubuntu VM running Docker; neither will obtain a local IP address.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
What's your networking setup?

Can we see ifconfig from the FreeNAS host with the Docker and/or Ubuntu VMs running?
 

SavageAUS

Patron
Joined
Jul 9, 2016
Messages
418
Sure, here is ifconfig from FreeNAS with Rancher running.
Taken using my phone, so I apologise if the formatting isn't good.
Code:
igb0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=2400b9<RXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,VLAN_HWTSO,RXCSUM_IPV6>
        ether 40:8d:5c:d6:23:73
        hwaddr 40:8d:5c:d6:23:73
        inet 192.168.1.2 netmask 0xffffff00 broadcast 192.168.1.255 
        nd6 options=9<PERFORMNUD,IFDISABLED>
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active
igb1: flags=8c02<BROADCAST,OACTIVE,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=6403bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
        ether 40:8d:5c:d6:23:74
        hwaddr 40:8d:5c:d6:23:74
        nd6 options=9<PERFORMNUD,IFDISABLED>
        media: Ethernet autoselect
        status: no carrier
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128 
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x3 
        inet 127.0.0.1 netmask 0xff000000 
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        groups: lo 
bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        ether 02:c0:e2:52:25:00
        nd6 options=1<PERFORMNUD>
        groups: bridge 
        id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
        maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
        root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
        member: tap0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 11 priority 128 path cost 2000000
        member: vnet0:12 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 4 priority 128 path cost 2000
        member: vnet0:9 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 14 priority 128 path cost 2000
        member: vnet0:8 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 13 priority 128 path cost 2000
        member: vnet0:5 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 10 priority 128 path cost 2000
        member: vnet0:4 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 9 priority 128 path cost 2000
        member: vnet0:3 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 8 priority 128 path cost 2000
        member: vnet0:2 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 7 priority 128 path cost 2000
        member: vnet0:1 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 6 priority 128 path cost 2000
        member: igb0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 1 priority 128 path cost 20000
vnet0:1: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        description: associated with jail: tautulli as nic: epair0b
        options=8<VLAN_MTU>
        ether 02:ff:60:46:b0:16
        hwaddr 02:9f:d0:00:06:0a
        nd6 options=1<PERFORMNUD>
        media: Ethernet 10Gbase-T (10Gbase-T <full-duplex>)
        status: active
        groups: epair 
vnet0:2: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        description: associated with jail: plexmediaserver as nic: epair0b
        options=8<VLAN_MTU>
        ether 02:ff:60:3d:6b:aa
        hwaddr 02:9f:d0:00:07:0a
        nd6 options=1<PERFORMNUD>
        media: Ethernet 10Gbase-T (10Gbase-T <full-duplex>)
        status: active
        groups: epair 
vnet0:3: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        description: associated with jail: radarr as nic: epair0b
        options=8<VLAN_MTU>
        ether 02:ff:60:c1:df:ca
        hwaddr 02:9f:d0:00:08:0a
        nd6 options=1<PERFORMNUD>
        media: Ethernet 10Gbase-T (10Gbase-T <full-duplex>)
        status: active
        groups: epair 
vnet0:4: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        description: associated with jail: transmission as nic: epair0b
        options=8<VLAN_MTU>
        ether 02:ff:60:03:aa:46
        hwaddr 02:9f:d0:00:09:0a
        nd6 options=1<PERFORMNUD>
        media: Ethernet 10Gbase-T (10Gbase-T <full-duplex>)
        status: active
        groups: epair 
vnet0:5: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        description: associated with jail: organizr as nic: epair0b
        options=8<VLAN_MTU>
        ether 02:ff:60:e1:fe:c6
        hwaddr 02:9f:d0:00:0a:0a
        nd6 options=1<PERFORMNUD>
        media: Ethernet 10Gbase-T (10Gbase-T <full-duplex>)
        status: active
        groups: epair 
vnet0:8: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        description: associated with jail: emby as nic: epair0b
        options=8<VLAN_MTU>
        ether 02:ff:60:79:aa:5f
        hwaddr 02:9f:d0:00:0d:0a
        nd6 options=1<PERFORMNUD>
        media: Ethernet 10Gbase-T (10Gbase-T <full-duplex>)
        status: active
        groups: epair 
vnet0:9: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        description: associated with jail: sonarr as nic: epair0b
        options=8<VLAN_MTU>
        ether 02:ff:60:a5:a4:7e
        hwaddr 02:9f:d0:00:0e:0a
        nd6 options=1<PERFORMNUD>
        media: Ethernet 10Gbase-T (10Gbase-T <full-duplex>)
        status: active
        groups: epair 
vnet0:12: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        description: associated with jail: jackett as nic: epair0b
        options=8<VLAN_MTU>
        ether 02:ff:60:b1:b8:71
        hwaddr 02:9f:d0:00:04:0a
        nd6 options=1<PERFORMNUD>
        media: Ethernet 10Gbase-T (10Gbase-T <full-duplex>)
        status: active
        groups: epair 
tap0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        description: Attached to rancher
        options=80000<LINKSTATE>
        ether 00:bd:9f:13:2d:00
        hwaddr 00:bd:9f:13:2d:00
        nd6 options=1<PERFORMNUD>
        media: Ethernet autoselect
        status: active
        groups: tap 
        Opened by PID 47757
root@freenas[~]#
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Are your jails getting IP addresses? They're on the same bridge as your tap0 interface.

What are you using for DHCP?
 

SavageAUS

Patron
Joined
Jul 9, 2016
Messages
418
Yeah, my jails are getting IP addresses and work just fine. I'm using a TP-Link VR900 for DHCP. Any VMs I spin up work fine, same as the Docker hosts; it's just the containers I create that don't.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
OK, so I think I see the problem now... you need to learn a little about how docker works.

Docker containers live on a network internal to the Docker host(s) and don't talk to the outside world the way normal VMs or jails (with VNET) do.

In order to use a container's services, you publish its ports to the Docker host. For example, if you run a Docker container for Plex, you would use -p 32400:32400 as part of your docker run (it's under "Port Map" in the Rancher GUI when creating a service... a service = a container in normal Docker).

Once your Plex container is running, port 32400 is then available on the Docker host's IP (192.168.x.x), so you don't care that the container's IP is an odd one on the 172.17.x.x network.

Once you understand what's happening there, you can use the public:private port combination to run the same container more than once. You could have one Plex server with -p 32400:32400 and another with -p 32401:32400: both containers think they own port 32400, but from the outside/public view only the first one does, and the second gets its own unique port... which would incidentally need some port-forwarding trickery on the firewall to connect to plex.tv, as the port is non-standard.
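
As a rough sketch of that in plain docker run terms (the image name here is just one of the common Plex images, not necessarily what you'd use):

Code:
# first Plex container, published on the standard port
docker run -d --name plex1 -p 32400:32400 linuxserver/plex

# second Plex container, same internal port, different public port
docker run -d --name plex2 -p 32401:32400 linuxserver/plex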

I hope that points you in enough of the right direction. Otherwise there's plenty of material on the docker.com website to delve into to get the concepts straight.

There's a whole other world of understanding you need to get into if you want different Docker containers on the same or different hosts to talk to each other, since they won't look outside the 172.17 network for another container and can't see ports published to the public IP (192.168.x.x) by other containers. So if you did an nginx config for a container to proxy_pass to the Plex container, it would look like this:

Code:
location /plex/ {
    proxy_pass http://plex:32400/;
}

Rather than this:

Code:
location /plex/ {
    proxy_pass http://192.168.1.42:32400/;
}


And you would need the nginx container to have a link (--link plex:plex in plain Docker, or under Service Links in the Rancher GUI) to allow the Plex container to be seen by name.
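
In plain Docker, that linking would look roughly like this (container names are just examples):

Code:
# start the plex container first so its name exists to link against
docker run -d --name plex linuxserver/plex

# nginx can then reach it as "plex" on 32400 directly, without
# plex publishing that port to the host
docker run -d --name nginx --link plex:plex nginx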

Of course I'm drastically oversimplifying things to make this digestible as a starter for you, so please do a little more reading and testing to get familiar with it.
 
Last edited:

SavageAUS

Patron
Joined
Jul 9, 2016
Messages
418
Thanks for that. I have been publishing ports, but when I try to access the container (Plex, for instance) I would go to 192.168.1.99:32400 (192.168.1.99 is Rancher) and it would not load.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
For Plex in particular, you would need to add /web to the end of the URL.

More generally, are you able to access the Rancher server GUI? (It's a container listening on 8080, if you installed it.)

What is docker ps telling you about the ports shared on the running containers?
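
(For reference, the Rancher 1.x docs start that server container with roughly the following, which is where the 8080 comes from:)

Code:
docker run -d --restart=unless-stopped -p 8080:8080 rancher/server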
 

SavageAUS

Patron
Joined
Jul 9, 2016
Messages
418
Yes, sorry. I would add /web, but I was just using Plex as an example; the first container I'm trying to set up is Ombi. Yes, I can access the Rancher server on 192.168.1.99:8080, or during different installs/tests 192.168.1.99:9000 for Portainer (only one at a time); they both work correctly. I'm just doing a fresh install now; I'll set up a container, run docker ps, and post the results shortly.
I'm actually unsure which GUI to use, Rancher or Portainer.
I'm also trying to work out how to mount multiple NFS shares to my Docker host from FreeNAS.

Code:
[root@rancher Ombi]# docker ps
CONTAINER ID        IMAGE                             COMMAND                  CREATED              STATUS              PORTS                              NAMES
e5a2519daf8a        linuxserver/ombi                  "/init"                  About a minute ago   Up About a minute                                      r-ombi-696b35af
c6e913d059fb        rancher/storage-nfs:v0.9.1        "start.sh storage --…"   14 minutes ago       Up 14 minutes                                          r-nfs-nfs-driver-1-6bbf5625
412d7adcd181        rancher/dns:v0.17.4               "/rancher-entrypoint…"   18 minutes ago       Up 18 minutes                                          r-network-services-metadata-dns-1-cfe25891
2f5805cf7f18        rancher/net:v0.13.17              "/rancher-entrypoint…"   18 minutes ago       Up 18 minutes                                          r-ipsec-ipsec-connectivity-check-1-393f05b4
c4a6b4146bcc        rancher/net:v0.13.17              "/rancher-entrypoint…"   18 minutes ago       Up 18 minutes                                          r-ipsec-ipsec-router-1-64fc6f92
1f60897c485e        rancher/healthcheck:v0.3.8        "/.r/r /rancher-entr…"   18 minutes ago       Up 18 minutes                                          r-healthcheck-healthcheck-1-ea38bde4
71c2cb9c2da6        rancher/net:holder                "/.r/r /rancher-entr…"   18 minutes ago       Up 18 minutes                                          r-ipsec-ipsec-1-f288d85b
1b092097923e        rancher/metadata:v0.10.4          "/rancher-entrypoint…"   18 minutes ago       Up 18 minutes                                          r-network-services-metadata-1-c7294eec
1112081d841e        rancher/scheduler:v0.8.6          "/.r/r /rancher-entr…"   18 minutes ago       Up 18 minutes                                          r-scheduler-scheduler-1-b2b3ac15
3fd51007889d        rancher/network-manager:v0.7.22   "/rancher-entrypoint…"   18 minutes ago       Up 18 minutes                                          r-network-services-network-manager-1-c3126c5d
49bac3c0f2c6        rancher/net:v0.13.17              "/rancher-entrypoint…"   18 minutes ago       Up 18 minutes                                          r-ipsec-cni-driver-1-b671605e
97438686111c        rancher/agent:v1.2.11             "/run.sh run"            19 minutes ago       Up 19 minutes                                          rancher-agent
b02469e8fe7f        rancher/server                    "/usr/bin/entry /usr…"   21 minutes ago       Up 21 minutes       3306/tcp, 0.0.0.0:8080->8080/tcp   rancher
[root@rancher Ombi]#
 
Last edited:

SavageAUS

Patron
Joined
Jul 9, 2016
Messages
418
I've figured it all out. My mount points were all wrong: I was mounting the parent dataset that contained the datasets I needed, when I actually needed to mount each child dataset separately for the containers to work.
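
In case it helps anyone else, the fix boils down to something like this on the Docker host (dataset names and paths are made up; 192.168.1.2 is the FreeNAS box):

Code:
# mount each child dataset as its own NFS share
mkdir -p /mnt/movies /mnt/tv
mount -t nfs 192.168.1.2:/mnt/tank/media/movies /mnt/movies
mount -t nfs 192.168.1.2:/mnt/tank/media/tv /mnt/tv

# then pass the mounted paths into the container as volumes
docker run -d --name ombi -p 3579:3579 -v /mnt/movies:/movies -v /mnt/tv:/tv linuxserver/ombi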

Next question
Does this need a separate thread?
I am currently running in jails
Plex
Emby
Sonarr
Radarr
Transmission
Jackett
Tautulli
Organizr
Ombi
And in an Ubuntu VM:
Pihole
The reason Pihole is separate is due to a port conflict.
So my question is: has anyone set up Docker with a reverse proxy with similar containers / port conflicts?
I have never set up a reverse proxy, so I have no idea where to start.
The port conflicts so far are with Pihole and Organizr.
 
Last edited:

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I am running an nginx reverse proxy in Docker to share my apps (some Docker, some jails, some standalone/VMs) out over 443.

Port conflicts on Docker are easily handled in the docker run command (or in the Rancher port settings for the service): you just pick a new, non-conflicting port on the host and map the container's internal port (which can be 80 on every container if that's what the app wants) to that selected host port. Optionally, you don't even need to do that: if you link the nginx container to the other containers, it can reach each container by name and use the original port number without the port being published to the host at all.
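
For example, for the Pihole/Organizr clash, something along these lines would do it (host-side ports are arbitrary picks; image names are the common Docker Hub ones):

Code:
# both containers want port 80 internally; only the host-side port differs
docker run -d --name organizr -p 8082:80 organizr/organizr
docker run -d --name pihole -p 8083:80 pihole/pihole
# (Pihole would also need -p 53:53/tcp -p 53:53/udp published for DNS in practice)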

I'm not really too keen to just write down everything I've done, but I will give you pointers to helpful resources and tips on what to look at or how to get to the next stage as you go.

First, install a container for nginx with 80 and/or 443 published, then port-forward 80 and/or 443 on your router to that container's host.
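
As a minimal sketch of that first step (the config path on the host is just an example):

Code:
docker run -d --name nginx-proxy -p 80:80 -p 443:443 \
    -v /mnt/config/nginx:/etc/nginx/conf.d nginx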

Then you need to work on an nginx config file that suits what you need (which will depend heavily on how you want to proceed).
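
A bare-bones starting point for such a config, assuming an ombi container linked as described earlier (the server name, certificate paths, and Ombi port are illustrative):

Code:
server {
    listen 443 ssl;
    server_name nas.example.com;

    ssl_certificate     /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    # forward /ombi/ to the linked ombi container by name
    location /ombi/ {
        proxy_pass http://ombi:3579/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}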
 