RancherOS


FFK

Dabbler
Joined
Apr 13, 2017
Messages
20
As a Docker-happy ex-Corral user, I am following dev advice and trying to make RancherOS work on 9.10.2. Some success, but only some: I cannot make the settings I create during install (e.g. the SSH key) stick, and at boot it drops to the grub prompt no matter what I try. If someone has a detailed guide, please share. Arguably, one of the devs (e.g. the one suggesting RancherOS as the Docker manager until Corral functionality is back) should provide one, to alleviate some of the issues faced by those of us who took the chance with Corral.


Thanks.
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
I'd like to know the answer to that one. I got RancherOS booting OK, but sorting out persistence is a bit of a puzzle: it probably relies on getting the cloud-config.yml file correct, and means checking which services are actually running in RancherOS itself. I don't know enough about it to make any progress.
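For reference, the minimal cloud-config I'd expect to need for the SSH key is just this (the key itself is a placeholder, and I haven't verified this on 9.10.2):

Code:
#cloud-config
ssh_authorized_keys:
  - ssh-rsa AAAA...yourkey... user@example

It gets passed at install time with sudo ros install -c cloud-config.yml -d /dev/sda.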

But it is possible to run boot2docker on 9.10.2 via iohyve, for example. There are other gotchas here, though, such as scripting the mount of NFS/CIFS shares from your pool inside boot2docker so docker containers can reference them for storage as necessary. This also raises the thorny question of ACLs.
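As a sketch of the kind of scripting I mean (the NAS address and export path are made up, and this assumes the boot2docker kernel has the NFS client enabled): boot2docker runs /var/lib/boot2docker/bootlocal.sh at every boot, so something like this could mount a share for containers to use:

Code:
#!/bin/sh
# /var/lib/boot2docker/bootlocal.sh - run at the end of every boot
# Mount an NFS export from the FreeNAS pool (address/path are placeholders)
mkdir -p /mnt/nas
mount -t nfs -o nolock 192.168.1.10:/mnt/tank/docker /mnt/nas

Containers would then bind-mount it with -v /mnt/nas:/data or similar.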

For simple containers you could use a zvol attached to a boot2docker VM as ahci-hd for all persistent storage. Pihole can run this way.
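For example (VM name and size are placeholders, and I'm going from memory on iohyve's add command):

Code:
# On FreeNAS: attach a second zvol to an existing boot2docker VM
iohyve add bootdocker 32G

# Inside boot2docker, one time only: format the new disk
# (it will most likely show up as /dev/sdb), then mount it at each boot
sudo mkfs.ext4 /dev/sdb
sudo mkdir -p /mnt/data && sudo mount /dev/sdb /mnt/data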

In Corral, the dev team created a version of boot2docker in which they had altered the automount script to work with a file (storage.img) on the zpool:

Code:
root@box:/etc/rc.d# cat automount
#!/bin/sh
echo "automount ...";

# Start from clean mount points for the two 9p shares
rm -rf /var/lib/docker
mkdir /var/lib/docker
mkdir /host

# Mount the 9p/virtio shares exported by the host
mount -t 9p -o version=9p2000.L -o trans=virtio -o cache=mmap -o msize=512000 docker /var/lib/docker
mount -t 9p -o version=9p2000.L -o trans=virtio -o cache=mmap -o msize=512000 mnt /host

# First boot only: create a sparse 1T image on the share and format it ext4
if [ ! -f /var/lib/docker/storage.img ]; then
  /usr/local/bin/truncate -s 1T /var/lib/docker/storage.img
  mkfs.ext4 /var/lib/docker/storage.img
fi

# Loop-mount the ext4 image over /var/lib/docker so docker sees a real
# Linux filesystem instead of 9p
mount -o loop /var/lib/docker/storage.img /var/lib/docker

echo "automount over."
root@box:/etc/rc.d#


Otherwise you could simply create a VM for your favourite Linux distro and install docker in it. I've done this with Debian Jessie using iohyve. See this thread: https://forums.freenas.org/index.ph...ding-freenas-corral.53502/page-15#post-371294
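In outline, the iohyve side of that looks something like this (names, sizes and the exact ISO are examples from memory, so treat it as a sketch):

Code:
iohyve fetch http://.../debian-8.7.1-amd64-netinst.iso   # or copy the ISO in by hand
iohyve create debian 16G
iohyve set debian loader=grub-bhyve os=debian ram=2G cpu=2
iohyve install debian debian-8.7.1-amd64-netinst.iso
iohyve start debian

After the install, docker itself goes on per Docker's own Debian instructions.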
 

FFK

Dabbler
Joined
Apr 13, 2017
Messages
20
Thanks for the input!

It would be very good if someone could shed some light on the persistence issues with RancherOS.

The Corral approach seems smart. However, I have now abandoned Corral too - the nail in the coffin was that the storage.img file grew so large that it consumed all available disk space.

Another question: the Corral code makes use of 9p - is that coming back in 9.10? From what I gather, CIFS/NFS does not work well with all data, e.g. databases.
 

dlavigne

Guest
Thank you for testing! Please make bug reports as you find issues so that they can be resolved. If you post the issue numbers here, other interested testers can follow their progress.
 

FFK

Dabbler
Joined
Apr 13, 2017
Messages
20
Thank you for testing! Please make bug reports as you find issues so that they can be resolved. If you post the issue numbers here, other interested testers can follow their progress.
Thanks! Sure, I will make sure to report any issues as applicable, but first we need to get the basics resolved: how do we achieve a "solid" install of RancherOS? The roadblocks that KrisBee and I mentioned above (grub and persistence of settings) are currently making any progress impossible.
 

FFK

Dabbler
Joined
Apr 13, 2017
Messages
20
I'd like to know the answer to that one. I got RancherOS booting OK, but sorting out persistence is a bit of a puzzle ...
Care to share how you achieved the booting part? I have followed this guide, but it still boots into the grub prompt, and the SSH key added to the cloud-config is not saved (it seems).
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
@FFK

Just getting RancherOS to boot from an ISO using iohyve had me stumped for a while until I stumbled across this:

https://forums.rancher.com/t/installing-on-freebsds-bhyve/2652/2

I had started with the reference you quoted, but it's incomplete.

You need to set os=custom when you create your VM, which means creating a device.map and a grub.cfg in order for it to boot (see the bottom of this page: https://github.com/pr1ntf/iohyve).

My sequence was this (a rough command form of the first two steps is sketched at the end of this post):

1. Used the iohyve fetch command to get the RancherOS ISO.
2. Created the rancher VM with os=custom.
3. Created a device.map file, e.g.:

Code:
[root@freenas] /mnt/iohyve/rancheros# cat device.map
(hd0) /dev/zvol/NasPool/iohyve/rancheros/disk0
(cd0) /dev/zvol/NasPool/iohyve/rancheros/disk0


Adjust paths to your setup ...

4. To first boot from the ISO, create a grub.cfg file that will boot from your virtual cd, e.g:

Code:
[root@freenas] /mnt/iohyve/rancheros# cat grub.cfg
set root=(cd0,msdos1)
linux /boot/vmlinuz-4.9.21-rancher rancher.password=rancher rancher.state.autoformat=[/dev/sda,/dev/vda]
initrd /boot/initrd-v1.0.0
boot


NOTE: I found that unless you have one, and only one, newline after the grub boot command in the grub.cfg file, you can have problems when attaching to the VM console.

5. On a successful boot of the ISO, follow the instructions to install RancherOS to disk as per their quick start guide: https://docs.rancher.com/os/running-rancheros/server/install-to-disk/

6. Once installed, you can stop the iohyve VM. The grub.cfg must now be edited in order to boot the VM from the virtual hard drive, e.g.:

Code:
[root@freenas] /mnt/iohyve/rancheros# cat grub.cfg
set root=(hd0,msdos1)
linux /boot/vmlinuz-4.9.21-rancher rancher.password=rancher
initrd /boot/initrd-v1.0.0
boot



7. Restart the iohyve VM ...

8. The disk attached to the iohyve VM shows up as this:

Code:
[root@rancher ~]# blkid
/dev/sda1: LABEL="RANCHER_STATE" UUID="4dc8a580-9616-4848-9bf0-854e26dc2600" TYPE="ext4" PARTUUID="25975ccf-01"


This is supposed to be a persistent state partition: http://docs.rancher.com/os/storage/state-partition/

I'm not sure what "persistence" is supposed to mean in this context.

It may not be obvious, but you actually boot into a busybox console which, according to the rancher docs, is not persistent: http://docs.rancher.com/os/configuration/switching-consoles/ However, I found that while switching consoles works, it does not give the persistence I was hoping for. After pulling a docker image, it was lost when the VM was stopped and started again. In fact, even the change of console was lost.

It's probably a simple error/misunderstanding, but I gave up looking for the answer ...

From Dru Lavigne's comments, perhaps RancherOS is slated to replace boot2docker.
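For completeness, the command form of steps 1 and 2 was roughly this (the size and exact ISO URL are from memory, so double-check them):

Code:
iohyve fetch https://releases.rancher.com/os/latest/rancheros.iso
iohyve create rancheros 8G
iohyve set rancheros loader=grub-bhyve os=custom ram=2G cpu=1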
 

FFK

Dabbler
Joined
Apr 13, 2017
Messages
20
Thanks, I have now made it this far as well: pulled the Rancher server, configured it, and then rebooted the VM. Once back up, everything was gone, as you experienced. Not sure what is going on... Any help from others is appreciated :)
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
@FFK

I didn't say before, but I had installed RancherOS in a KVM VM on my Linux desktop with no problem. No special action was required to get persistence.
 

FFK

Dabbler
Joined
Apr 13, 2017
Messages
20
OK. I am a bit in over my head here, but could this be related to the fact that VirtFS works well with Linux under KVM? I notice this in the boot messages when RancherOS is booting:

Code:
9pnet_virtio: no channels available for device config-2
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
The answer is far simpler .... I'm not sure how I missed this, but I neglected to compare the boot params that are automatically used in the KVM VM with those used when booting via the grub.cfg in iohyve.

A quick check of the dmesg output of the RancherOS VM booted in KVM shows the "persistent storage partition" is passed as a boot param.

I changed my grub.cfg for iohyve to this:

Code:
[root@freenas] /mnt/iohyve/rancheros# cat grub.cfg
set root=(hd0,msdos1)
linux /boot/vmlinuz-4.9.21-rancher rancher.password=rancher printk.devkmsg=on rancher.state.dev=LABEL=RANCHER_STATE rancher.state.wait console=tty0
initrd /boot/initrd-v1.0.0
boot

[root@freenas] /mnt/iohyve/rancheros#


And you can see the virtual disk is mounted in RancherOS as /dev/sda1:

Code:
[rancher@rancher ~]$ mount | grep sda1
/dev/sda1 on /home type ext4 (rw,relatime,stripe=2,data=ordered)
/dev/sda1 on /mnt type ext4 (rw,relatime,stripe=2,data=ordered)
/dev/sda1 on /media type ext4 (rw,relatime,stripe=2,data=ordered)
/dev/sda1 on /opt type ext4 (rw,relatime,stripe=2,data=ordered)
/dev/sda1 on /usr/lib/firmware type ext4 (rw,relatime,stripe=2,data=ordered)
/dev/sda1 on /etc/selinux type ext4 (rw,relatime,stripe=2,data=ordered)
/dev/sda1 on /usr/sbin/iptables type ext4 (ro,relatime,stripe=2,data=ordered)
/dev/sda1 on /etc/docker type ext4 (rw,relatime,stripe=2,data=ordered)
/dev/sda1 on /usr/lib/modules type ext4 (rw,relatime,stripe=2,data=ordered)
/dev/sda1 on /var/log type ext4 (rw,relatime,stripe=2,data=ordered)
/dev/sda1 on /usr/share/ros type ext4 (rw,relatime,stripe=2,data=ordered)
/dev/sda1 on /usr/bin/ros type ext4 (ro,relatime,stripe=2,data=ordered)
/dev/sda1 on /var/lib/docker type ext4 (rw,relatime,stripe=2,data=ordered)
/dev/sda1 on /var/lib/rancher type ext4 (rw,relatime,stripe=2,data=ordered)
/dev/sda1 on /etc/ssl/certs/ca-certificates.crt.rancher type ext4 (rw,relatime,stripe=2,data=ordered)
/dev/sda1 on /var/lib/rancher/cache type ext4 (rw,relatime,stripe=2,data=ordered)
/dev/sda1 on /var/lib/rancher/conf type ext4 (rw,relatime,stripe=2,data=ordered)
/dev/sda1 on /etc/resolv.conf type ext4 (rw,relatime,stripe=2,data=ordered)
/dev/sda1 on /etc/hostname type ext4 (rw,relatime,stripe=2,data=ordered)
/dev/sda1 on /etc/hosts type ext4 (rw,relatime,stripe=2,data=ordered)
/dev/sda1 on /var/lib/docker/overlay type ext4 (rw,relatime,stripe=2,data=ordered)
 

FFK

Dabbler
Joined
Apr 13, 2017
Messages
20
The answer is far simpler .... I'm not sure how I missed this, but I neglected to compare the boot params that are automatically used in the KVM VM with those used when booting via the grub.cfg in iohyve. ...
Thanks a lot - it is now working for me! Now I need to decide between Rancher and Portainer for the Docker manager. I am leaning towards the latter, which seems simpler and more approachable...
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
RancherOS server and agents are for advanced use; portainer is certainly a simpler possible replacement for the FreeNAS Corral docker container functions. Maybe it could work with FreeNAS templates. But there is still the question of how to give containers access to your zpool.
 

FFK

Dabbler
Joined
Apr 13, 2017
Messages
20
Agreed. I guess the interaction will be via NFS? Not optimal - I wonder what the problem is with 9pfs?
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
Agreed. I guess the interaction will be via NFS? Not optimal - I wonder what the problem is with 9pfs?

I believe NFS is what is planned.
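If it is NFS, one way containers could get at the pool is an NFS-backed named volume via the stock local driver (the address and export path are placeholders, and this needs a reasonably recent docker):

Code:
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.10,rw \
  --opt device=:/mnt/tank/data \
  nasdata

docker run --rm -v nasdata:/data alpine ls /data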

The Corral docker host used boot2docker with a custom macvlan. You can create one manually in boot2docker/RancherOS, but can you create containers via portainer that will use it?
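Manually it's simple enough (subnet, gateway and parent interface here are examples for a typical home LAN):

Code:
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  lan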

The popular pihole container needs a --cap-add=NET_ADMIN argument, but I don't see how to use that in portainer.
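From the CLI it would be the usual run (the image name is the one the pihole project published at the time, if I remember right):

Code:
docker run -d --name pihole \
  --cap-add=NET_ADMIN \
  -p 53:53/tcp -p 53:53/udp -p 80:80 \
  diginc/pi-hole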
 

FFK

Dabbler
Joined
Apr 13, 2017
Messages
20
Yeah, it is pretty complex. To be honest, I am seriously considering moving to an Ubuntu-based solution ("native" Docker + Portainer + ZFS)...
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
Yeah, it is pretty complex. To be honest, I am seriously considering moving to an Ubuntu-based solution ("native" Docker + Portainer + ZFS)...

Well, I understand your thinking about alternatives. I have wondered whether FreeNAS + docker is a marriage made in heaven or hell.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I have wondered whether FreeNAS + docker is a marriage made in heaven or hell.
I'm not convinced that it is an especially good thing. Sure, it gives FreeNAS access to the Docker ecosystem, but it requires a Linux VM to run it. Since it's all running in a VM, there's no direct way of exposing data on the FreeNAS box to the Docker containers--you need to do a network mount of some sort. There are pros and cons, as there are with just about everything.
 