rancher migration in 11.3

patsoffice

Cadet
Joined
Mar 7, 2020
Messages
1
Hrmmm. I upgraded to 11.3-U1 this morning and my .bhyve_containers directory was not removed. Just an FYI for anybody that hasn't upgraded yet.
 

Kevk

Cadet
Joined
Feb 13, 2020
Messages
4
(screenshots attached)

I mounted an NFS shared folder on another Linux machine and can only access one level of directories.
I tried Alpine and OpenWrt and got the same result.
 


sgel

Cadet
Joined
Oct 3, 2019
Messages
2
After migrating to FreeNAS 11.3, my Docker host (Rancher) won't start any more, with the error "[EFAULT] grub-bhyve timed out, please check your grub config.". Two things fixed it:
1) The path of grubconfig pointed at the wrong dataset in the FreeNAS DB (/data/freenas-v1.db, table: vm_vm, field: grubconfig).
2) The timeout in vm.py is too short (2 s); change it to 20 s and reboot (line 280 in /usr/local/lib/python3.7/site-packages/middlewared/plugins/vm.py).

After this my Docker host works again.

I opened a Jira bug on this, and it should be fixed in the near future. I think 20 seconds is too long; I did some testing and found 5 should be sufficient. Thanks for pointing this out.
 

Rudi Pittman

Contributor
Joined
Dec 22, 2015
Messages
161
Is there a newbie-friendly step-by-step walkthrough on setting up a new RancherOS VM (EFI) AND using the old RancherOS img stored in /mnt/<name>/vm? I have 10 Docker containers defined and really don't want to go back and redo all of them, along with data and environment variables etc. Is downgrading back to 11.2 even an option? Where would the files that got deleted be, so I can "restore" them as some have indicated they've done?
 

japster

Cadet
Joined
Feb 17, 2013
Messages
6
I upgraded yesterday and nothing was removed.
You just have to edit vm.py like jeud said in post 7 of this thread.
Edit that file, restart, and everything is working again!
 

Rudi Pittman

Contributor
Joined
Dec 22, 2015
Messages
161
I upgraded yesterday and nothing was removed.
You just have to edit the vm.py like jeud said in post 7 in this thread.
Edit that file, restart and everything is working again!

So you didn't have to do this step: 1) The path of grubconfig was in wrong dataset in freenas db (/data/freenas-v1.db , table: vm_vm, field: grubconfig).

Cause I have no idea how I'm supposed to mod that. OK, I changed vm.py and am gonna try a reboot.

*update* Just changing the timeout in vm.py is not sufficient to get it to boot.
 

buben

Cadet
Joined
Mar 9, 2016
Messages
4
So you didn't have to do this step: 1) The path of grubconfig was in wrong dataset in freenas db (/data/freenas-v1.db , table: vm_vm, field: grubconfig).

Cause I have no idea how I'm supposed to mod that. Ok changed the vm.py and gonna try a reboot.

*update* Just changing the timeout in vm.py is not sufficient to get it to boot.

The file jeud mentioned in post #7 - /data/freenas-v1.db - is in sqlite3 format and should only be edited via the sqlite3 shell.
In my case, after the upgrade the path to the grub.cfg was incorrect; after correcting the path I was able to start my Docker VM without a reboot.
For the less technical or lazy persons :) here are a few steps to update the sqlite3 DB file.

By the way - "I will not accept any responsibility if YOU screw up your DB" :p
The guide:
  1. Open an SSH connection to your FreeNAS server.
  2. Make sure your DB exists - ls -l /data/freenas-v1.db
    1. Expected output - "-rw-r----- 1 root operator <size> <date> /data/freenas-v1.db"
  3. Open a shell to the DB - sqlite3 /data/freenas-v1.db
  4. Great, now we're in the sqlite3 shell; let's print the content of the table we are interested in:
    1. select * from vm_vm;
    2. This should output all entries from the vm_vm table.
  5. Now find the entry that is relevant to your environment, the one with the Docker configuration.
    1. Mine was: "9|DockerHub||4|4096|1|LOCAL|/mnt/wdstorage/.bhyve_containers/configs/9_DockerHub/grub/grub.cfg|GRUB"
    2. This path was incorrect; the pool name was wrong.
  6. Locate, in your FreeNAS setup, the grub.cfg of the Docker VM you want to resurrect.
  7. Finally, the last step: update the entry to the correct path.
    1. Look at the entry we printed in step 5.1; the first column is the ID (in my case it has the value 9 - very important).
    2. We want to update only this specific entry and this specific column.
    3. Shell cmd: "update vm_vm set grubconfig='/mnt/truck/.bhyve_containers/configs/9_DockerHub/grub/grub.cfg' where id=9;"
    4. Note that at the end we pointed at id=9 and updated the grubconfig column with the correct path.
  8. Verify your change by running: select * from vm_vm where id=9; You should see the entry with the correct path now.
  9. Finito.
I also updated the vm.py file according to post #7, just in case.

Now open your FreeNAS WebUI, go to the VM tab, and start your Docker VM.
Note: in my case it didn't work the first time I pressed start, but the second time it started successfully; Docker is back to life :)
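The steps above can be rehearsed end-to-end against a throwaway database before touching the real /data/freenas-v1.db. Note the table below is a simplified stand-in (the real vm_vm has more columns), and the pool names "wdstorage"/"truck" and id=9 are just the examples from this post:

```shell
# Dry run of the UPDATE on a scratch database with a simplified vm_vm schema.
rm -f /tmp/vm_vm_demo.db
sqlite3 /tmp/vm_vm_demo.db <<'SQL'
CREATE TABLE vm_vm (id INTEGER PRIMARY KEY, name TEXT, grubconfig TEXT);
INSERT INTO vm_vm VALUES (9, 'DockerHub',
  '/mnt/wdstorage/.bhyve_containers/configs/9_DockerHub/grub/grub.cfg');
-- same statement as step 7.3, with the corrected pool name
UPDATE vm_vm SET grubconfig =
  '/mnt/truck/.bhyve_containers/configs/9_DockerHub/grub/grub.cfg'
WHERE id = 9;
-- verify, as in step 8
SELECT grubconfig FROM vm_vm WHERE id = 9;
SQL
```

Once the statement behaves as expected, run the same UPDATE (with your own pool name and id) inside sqlite3 /data/freenas-v1.db, ideally after backing it up first with cp /data/freenas-v1.db /data/freenas-v1.db.bak.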
 

Rudi Pittman

Contributor
Joined
Dec 22, 2015
Messages
161
(quoting buben's sqlite3 guide above)


It was very nice of you to make this writeup to help us newbs. Sadly, I guess I have another reason for Rancher not coming up, as the path "/mnt/tank/.bhyve_containers/configs/1_docker1/grub/grub.cfg" in that file is correct. One point not mentioned: if you logged in as a regular user, you will have to sudo to be able to edit the database.

When booting the VM this is the last thing I see:
ros-sysinit:fatal: FATAL: failed loading images from /usr/share/zos/images.tar:exit status 1

There is no /usr/share/zos on my FreeNAS machine. It was working fine on 11.2. I upgraded to 11.3 and it was still fine until I rebooted, and then Rancher never came up. If I create a new VM with Debian and install Docker, is there a simple method to transfer the configs/images for the defined Docker containers?
 

buben

Cadet
Joined
Mar 9, 2016
Messages
4
There is no /usr/share/zos on my freenas machine. Was working fine on 11.2. I upgrade to 11.3 and it's still fine till I reboot and then rancher never came up. If I create a new vm with debian and install docker is there a simple method to transfer the config's/images for the defined docker containers?

Sorry mate for your issue with Docker, must be frustrating... but maybe someone with more experience will help here.
About the second part: where do your Docker configs/volumes/images/yada yada yada reside?

I can tell you how I did it, to make it less dependent on the OS you run your Docker in.
Since a VM is not a jail, we can't just mount a dataset here; we need to work with a zvol.
Create a zvol that will serve as your Docker configuration storage. I went with 20 GB, over the top...
Next, go to the VM tab in your FreeNAS and add a new device (DISK) to your VM.
Reboot the VM, SSH to it or whatever, and permanently mount the disk in your system, say at "/mnt/storage"; you will need to format the disk, create a partition, and all that crap to make it usable.
Once you finish, you need to point Docker to store its configuration folder on that mounted disk. In RancherOS it's simple: just "vi /etc/docker/daemon.json" and add something like this:
{
  "data-root": "/mnt/storage/docker"
}

Then reboot the OS. Once you're back online, go to the directory (/mnt/storage/docker) and you should see all the Docker images, volumes, and other happy goodies there.
From this point everything is stored on the mounted disk, which can be mounted anywhere once you point Docker to it (with a few adjustments, but nothing serious).

In your situation, you need to create a new VM with Debian (your words), mount the Docker disk that was created for RancherOS, copy the Docker configuration from it to the new destination, then point Docker at that destination and restart Docker. That should do it.
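As a dry run of the setup above, you can stage the two files you would install on the VM before touching the disk. The device name /dev/vdb and mount point /mnt/storage are assumptions for illustration; check lsblk on your VM for the actual device the new zvol shows up as:

```shell
# Stage the fstab line and daemon.json in a scratch directory first.
mkdir -p /tmp/docker-move
# line to append to the VM's /etc/fstab once the disk is formatted and mounted
printf '%s\n' '/dev/vdb /mnt/storage ext4 defaults 0 2' > /tmp/docker-move/fstab.add
# Docker daemon config pointing data-root at the mounted zvol
cat > /tmp/docker-move/daemon.json <<'EOF'
{
  "data-root": "/mnt/storage/docker"
}
EOF
cat /tmp/docker-move/daemon.json
```

On the VM itself (as root), the disk prep would then be roughly: mkfs.ext4 /dev/vdb, mount /dev/vdb /mnt/storage, append the fstab line, install daemon.json to /etc/docker/, and reboot.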
 

Rudi Pittman

Contributor
Joined
Dec 22, 2015
Messages
161
(quoting buben's zvol walkthrough above)

All container storage is available under /mnt/tank/shared/<appname> on FreeNAS, which becomes /mnt/nfs-1/<appname> on Rancher. This gets passed to each of the Docker containers. The only image I have access to is the VM image at /mnt/tank/vm named rancher.img_docker1, which is 20 GB in size. How does that affect your instructions?
 

vinistois

Dabbler
Joined
Sep 12, 2018
Messages
11
Argh.

I have 3 RancherOS VMs. Since going to 11.3, they still work, but are a bitch to reboot. Press the start button 10 times and I might get a successful boot; otherwise bhyve times out. I've upped the timeout in vm.py, but it doesn't seem to make a difference.

Now I have one of them that just flat out refuses to reboot. The other two reboot after many attempts. There should be zero difference between them.

Everything looks right in the DB entries; the grub files are right where they are supposed to be.

Any clues as to what could be going on and how to fix this? I guess I'll need to migrate to EFI vms using the guide posted by @hugopoi

What a bitch, why take something working perfectly for so many people and fuck it up?
 

buben

Cadet
Joined
Mar 9, 2016
Messages
4
All container storage is available under /mnt/tank/shared/<appname> on freenas which becomes /mnt/nfs-1/<appname> on rancher. This gets passed to each of the docker containers. The only image I have access to is the VM image @ /mnt/tank/vm named rancher.img_docker1 which is 20GB in size. How does that affect your instructions?
I am using a zvol as an additional mounted disk to store the Docker configuration. By using NFS you introduce a huge bottleneck and performance degradation for your system's Docker. You can use NFS to store specific container data, but for the Docker settings it's a bad idea.
 

Rudi Pittman

Contributor
Joined
Dec 22, 2015
Messages
161
I have set up a new VM running RancherOS... is there a way for me to mount the old rancher.img_docker1 and extract the Portainer data that has the list of containers and their setup? For example, I spent a couple of weeks getting Transmission with VPN set up just the way I wanted. I figured I'd ask before I recreate all of these, since I had 10+ of them.

@buben I am using NFS because the FreeNAS tutorial I followed on YouTube set it up that way... if you will provide more details on what you meant, I'm willing to try another way, although for a home NAS I'm not sure how much of a difference it would make. Currently /mnt/tank/shared is mapped to /mnt/nfs-1.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Yes, it's possible to get RancherOS working in 11.3, but it's non-obvious. It took me a month of dinking around before I figured out what to do. For the record, I have RancherOS 1.5.5 running in a bhyve VM. No database tweaks nor source code modifications to tweak timeouts needed.
  1. First, you have to understand the prerequisites for the bhyve grub bootloader. There are 2 non-GUI settings that can ONLY be set via the REST 2.0 API. For the gory details, see my HOW-TO for grub boot. Note: to avoid the time-out issue, use single quotes, not parentheses, in the grub.cfg set root='hd0,msdos1' directive.
  2. Next, since RancherOS boots via syslinux, you have to generate a grub.cfg that emulates what syslinux does. Mine is as follows:

    Code:
    set timeout=0
    set default=rancheros
    
    menuentry "RancherOS" --id rancheros {
      set root='hd0,msdos1'
      linux /boot/vmlinuz-4.14.138-rancher printk.devkmsg=on panic=10 rancher.state.dev=LABEL=RANCHER_STATE rancher.state.wait rancher.resize_device=/dev/vda
      initrd /boot/initrd-v1.5.5
    }
    


  3. RancherOS 1.5.5 can boot via VirtIO. I use a 2 GB RAW file as the RANCHER_STATE boot volume, and a 25 GB zvol to house a 4 GB RANCHER_SWAP and a 20 GB Docker root partition. Both are set to 512 bytes/sector, and attach via VirtIO. See my post on Docker options for FreeNAS for more details.
You can use any old RancherOS RAW file as your VM boot volume. Just use the REST 2.0 API to set the "boot"=true attribute, and to point the VM "grubconfig" attribute to any existing grub.cfg in .bhyve_containers.
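For reference, the two non-GUI settings Samuel mentions would be set with PUT requests shaped roughly like this. The endpoint paths and field names here are assumptions pieced together from this thread, not verified against the 11.3 API; check /api/docs on your own build before sending anything:

```
PUT /api/v2.0/vm/id/<vm_id>
{"grubconfig": "/mnt/<pool>/.bhyve_containers/configs/<id>_<name>/grub/grub.cfg"}

PUT /api/v2.0/vm/device/id/<raw_device_id>
{"attributes": {"path": "/mnt/<pool>/vm/rancher.img", "type": "RAW", "boot": true}}
```

Authenticate as root (basic auth works against the 11.3 REST API) and substitute your own VM id, device id, and paths.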
 

Rudi Pittman

Contributor
Joined
Dec 22, 2015
Messages
161
(quoting Samuel Tai's RancherOS grub boot instructions above)

How are you doing the upgrade? Because even after I have an EFI-bootable RancherOS installed into a VM and running, if I do a "ros os upgrade" it downloads the files but doesn't actually upgrade anything. If I modify the grub.cfg in the EFI partition, it says it can't find the vmlinuz or initrd even though I can see them in /boot. The ones it DOES find are vmlinuz-4.14.73 (instead of vmlinuz-4.14.138) and initrd-v1.4.2 (instead of initrd-v1.5.5). Those were found by editing the failed grub entry and using tab to have it autocomplete the files it DID see.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
How are you doing an upgrade?

ros os upgrade upgrades the syslinux configuration. I'm booting via grub, not UEFI, so I have to manually update my grub.cfg to match the syslinux changes. For EFI, you may have to move files over to /boot/efi/EFI to get EFI grub to notice them.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
If you add the options rancher.autologin=tty1 rancher.recovery=true to the end of grub linux entry, you'll be able to access the RancherOS recovery console on the next boot, which will allow you to mount /dev/sda1 to access the /boot menu to find the paths to the new vmlinuz and initrd.
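Concretely, a recovery menu entry based on the grub.cfg Samuel posted earlier in this thread might look like the following. The kernel and initrd file names are from his example; substitute whatever your /boot actually contains:

```
menuentry "RancherOS recovery" --id recovery {
  set root='hd0,msdos1'
  linux /boot/vmlinuz-4.14.138-rancher printk.devkmsg=on panic=10 rancher.state.dev=LABEL=RANCHER_STATE rancher.state.wait rancher.autologin=tty1 rancher.recovery=true
  initrd /boot/initrd-v1.5.5
}
```

Boot this entry once to reach the recovery console, fix the paths, then remove the rancher.recovery=true option again.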
 

Rudi Pittman

Contributor
Joined
Dec 22, 2015
Messages
161
If you add the options rancher.autologin=tty1 rancher.recovery=true to the end of grub linux entry, you'll be able to access the RancherOS recovery console on the next boot, which will allow you to mount /dev/sda1 to access the /boot menu to find the paths to the new vmlinuz and initrd.

I followed the instructions here: https://blog.hugopoi.net/2020/03/01/install-rancheros-on-freenas-11-3/ and it was using vmlinuz-4.14.73 and initrd-v1.4.2, even though I could not find those files anywhere. I copied the new vmlinuz and initrd to /boot and suddenly I'm at a grub prompt... so I added the rancher.img as a raw device and at least got booted up. Now I can mount /dev/vda1 to /mnt/efipart and get to the grub entry. I've fixed it back to the way it was and rebooted... still got grub rescue. So I mounted vda2 and made sure I deleted vmlinuz-4.14.138 and initrd-v1.5.5 from /boot, so it should be back to what was booting fine... still getting grub rescue. If I ever get it booting again so I can get to my 12 containers, I'm not touching the thing again. On top of everything else, I was previously able to SSH in after assigning a password to user rancher; now, even booting off the rancher image, eth0 isn't getting an IP address.


*update* I resolved everything by wiping the virtual machine and starting over. Luckily, since I saved the docker run "templates" on NFS, recreating the 11 containers (got rid of 1) wasn't a big deal, and their configs were saved as well. I'm just about resolved to stop upgrading. I had everything in jails, and then after an upgrade I could no longer create new ones because they had to use the new jail system. To avoid that happening to me again I decided to put everything in Docker containers... but then the support for that was dropped, leaving me here. In the process of redoing everything, the version issue was resolved as well.
 

Triumph

Dabbler
Joined
May 14, 2014
Messages
12
(quoting Rudi Pittman's post above)
I've followed hugopoi's instructions from his website as well, and was able to get everything up and running, except for the grub.cfg part.

I don't quite understand that part.
When I do sudo reboot, it boots me to a grub> prompt.
 

Rudi Pittman

Contributor
Joined
Dec 22, 2015
Messages
161
I've followed HugoPol's insctructions from his website as well, and was able to get everything up and running, except for the grub.cfg part.

I don't quite understand that part.
when I do sudo reboot, it boots me to a grub> prompt.

I'm up and running again, but I used the instructions here and the already-modified ISO provided: https://blog.hugopoi.net/2020/03/01/install-rancheros-on-freenas-11-3/ You should be aware RancherOS is coming to end of life, but since it does everything I need with minimal resources, I'll keep it. At one point it was not finding the right initrd etc., causing it to drop to grub, so I redownloaded the ISO and redid the steps from the beginning, and it worked that time.
 