Rancher UI VM install won't start

Status
Not open for further replies.

DaPlumber

Patron
Joined
May 21, 2014
Messages
246
Configured as per the 11.1 U4 Guide instructions.

One serious bug straight off the bat: there appears to be no way to select which pool .bhyve_containers gets created on. My alphabetically first tank is *NOT* ideal.

VM Configuration:
[screenshot attached]

RAW Device configuration:
[screenshot attached]

ll -R of .bhyve_containers and vm-storage after attempting a start:
[screenshot attached]

(I changed the go+w permissions, but it didn't change the result)
I downloaded the rancheros-bhyve-v1.1.3.img.gz file manually (above) because when I try to start the VM it sits and spins on:
[screenshot attached]

But if I check from another tab the VM does not appear to start (and does not download the img.gz file):
[screenshot attached]


I've destroyed and recreated the VM and the storage several times, and they're all now on my alphabetically first tank, but same result.

Any ideas? I can't see any reason this should not be working, or did I miss something? o_O
 

Attachments

  • Screen Shot 2018-05-27 at 8.22.25 PM.JPG (35.7 KB)
  • Screen Shot 2018-05-27 at 8.23.29 PM.JPG (40.8 KB)
  • Screen Shot 2018-05-27 at 8.25.41 PM.JPG (91 KB)
  • Screen Shot 2018-05-27 at 8.30.14 PM.JPG (14 KB)
  • Screen Shot 2018-05-27 at 8.30.42 PM.JPG (17.4 KB)

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
Did you solve this? I've never had a rancheros img download failure, but others have. I'd suggest deleting any docker VMs you have created via the WebUI. Then delete any/all img files at the CLI. Then finally destroy the zfs .bhyve_containers dataset. Start the process from scratch, and the "start" pop-up window should hopefully show a completed download and environment creation message. Remember that any img file you select to use, like rancherui.img, should not pre-exist when you add devices to a docker VM.
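
For reference, the CLI part of that clean-up would be along these lines (a sketch only: the pool name and img path are from my system, adjust to your layout):

Code:
root@freenas:~ # rm /mnt/NasPool/VM/rancherui.img
root@freenas:~ # zfs destroy -r NasPool/.bhyve_containers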

It's only after a successful download that the necessary config files are created in .bhyve_containers, which will allow the docker VM to boot, e.g.:

Code:
root@freenas:/mnt/NasPool/VM # ll -R /mnt/NasPool/.bhyve_containers
total 1
drwxr-xr-x  3 root  wheel  3 May 29 10:00 configs/
drwxr-xr-x  2 root  wheel  3 May 29 09:58 iso_files/

/mnt/NasPool/.bhyve_containers/configs:
total 1
drwxr-xr-x  3 root  wheel  4 May 29 10:00 2_docker1/

/mnt/NasPool/.bhyve_containers/configs/2_docker1:
total 1
-rw-r--r--  1 root  wheel  33 May 29 10:00 device.map
drwxr-xr-x  2 root  wheel   3 May 29 10:00 grub/

/mnt/NasPool/.bhyve_containers/configs/2_docker1/grub:
total 5
-rw-r--r--  1 root  wheel  288 May 29 10:00 grub.cfg

/mnt/NasPool/.bhyve_containers/iso_files:
total 46149
-rw-r--r--  1 root  wheel  47958299 May 29 10:00 rancheros-bhyve-v1.1.3.img.gz
root@freenas:/mnt/NasPool/VM # 
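
Roughly what those generated files contain, as an illustration only (reconstructed rather than copied from a live system; the img path and the initrd name in particular are guesses):

Code:
root@freenas:~ # cat /mnt/NasPool/.bhyve_containers/configs/2_docker1/device.map
(hd0) /mnt/NasPool/VM/rancher.img
root@freenas:~ # cat /mnt/NasPool/.bhyve_containers/configs/2_docker1/grub/grub.cfg
set timeout=0
menuentry "RancherOS" {
    linux /boot/vmlinuz-4.9.75-rancher rancher.password=<password> rancher.state.dev=LABEL=RANCHER_STATE rancher.resize_device=/dev/sda
    initrd /boot/initrd-v1.1.3
}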


I'd call the inability to select the pool where .bhyve_containers is created a missing feature rather than a bug. Whether you could zfs send/receive that dataset to another pool, delete the original, and still expect things to work, I don't know.
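
If anyone wants to experiment, the mechanics of the move would be something like this (untested, and the middleware may well carry on pointing at the original pool regardless):

Code:
root@freenas:~ # zfs snapshot NasPool/.bhyve_containers@migrate
root@freenas:~ # zfs send NasPool/.bhyve_containers@migrate | zfs recv NasPool2/.bhyve_containers
root@freenas:~ # zfs destroy -r NasPool/.bhyve_containers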

If pool selection is important, then create a Debian/Ubuntu VM to use with docker and perhaps Portainer or the full-blown RancherUI. In any case, the rancheros-bhyve-v1.1.3.img.gz is now seriously out of date - see: https://github.com/rancher/os/releases.
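
As a sketch of that route, the stock Portainer deployment inside such a VM would be (the published port is your choice; image name as on Docker Hub at the time of writing):

Code:
root@ubuntu:~ # docker volume create portainer_data
root@ubuntu:~ # docker run -d -p 9000:9000 --name portainer --restart always \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v portainer_data:/data portainer/portainer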

Otherwise you'd have to use the CLI tool iohyve.
 

DaPlumber

Patron
Joined
May 21, 2014
Messages
246
Nope, still not working. I’ve tried the “nuke and pave” approach, destroying .bhyve_containers from the CLI and the VM from the GUI, and the same thing happens. I’d love to know if there’s a log file that gives me more information than the asinine “magic” message, so I could get a clue as to what’s not working.
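
One candidate worth watching while the start pop-up spins, on the assumption that the middleware daemon is what drives the image download, is its log:

Code:
root@freenas:~ # tail -f /var/log/middlewared.log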

I’m going with “design bug” for a hardcoded configuration assumption that is obviously wrong.

The only thing I haven’t tried is attempting the configuration from the new interface abomination. That *shouldn’t* make a difference, but see “design bug” above.

I already have two VMs running Ubuntu and Kubernetes; it would be nice to have the “native” implementation work.

FreeNAS can’t have it both ways: either it’s an appliance model and everything “just works”, or it’s a layer/shell/tool with appropriate support and diagnostics for those of us who aren’t afraid of the CLI. Sorry, not sorry, pet peeve.
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
Nope, still not working...

So when you started from scratch, did the rancheros image download ever complete? I agree the "magic" message is childish. At least using iohyve at the CLI would allow you to install the latest rancheros release and pick which pool the iohyve tool is configured on. Removing iohyve altogether if and when you want is another challenge.
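
On the pool-selection point, iohyve lets you pick the pool when you set it up (pool and NIC names below are placeholders):

Code:
root@freenas:~ # iohyve setup pool=NasPool2 kmod=1 net=em0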

My impression is that rancheros is a dev's choice for managing docker containers and its implementation was shoehorned into the existing FreeNAS 11 UI. The problem is the fundamental limitation of bootloaders and bootroms that work with bhyve. You have to use a fixed config with a known initrd & kernel to boot the rancher VM, as the actual rancher iso is built with isolinux/syslinux. At best, you might be able to chainload that with the grub-bhyve loader. Instead, the implementation relies on downloading a tarball image and setting up a device.map and grub.cfg file as part of the docker VM creation process, which grub-bhyve can then boot. Hence you end up with a docker VM where the user cannot update the version of rancheros. So a rancheros-based Docker VM is out of date before you start.
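
To make that concrete, at VM start the generated device.map and grub.cfg get fed to something of roughly this shape (my reconstruction, not the literal middleware invocation; the paths, VM name, flags and root device are all assumptions):

Code:
root@freenas:~ # grub-bhyve -m /mnt/NasPool/.bhyve_containers/configs/2_docker1/device.map \
    -d /mnt/NasPool/.bhyve_containers/configs/2_docker1/grub -r hd0,msdos1 -M 2048M 2_docker1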

Personally I wouldn't bother with rancheros if you already have Ubuntu and Kubernetes running. Even Rancher themselves recommend using Ubuntu, or did at least until recently, with RancherUI.
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
Quick test shows the presence of more than one pool does not prevent the creation of a working Docker VM:

Code:
root@freenas:~ # zfs list
NAME                                                       USED  AVAIL  REFER  MOUNTPOINT
NasPool                                                   55.8M  17.3G    88K  /mnt/NasPool
NasPool/.bhyve_containers                                 45.2M  17.3G  45.2M  /mnt/NasPool/.bhyve_containers
NasPool/.system                                           7.81M  17.3G    96K  legacy
NasPool/.system/configs-9045e2c9424c490b9f5620086f4ab9c4    88K  17.3G    88K  legacy
NasPool/.system/cores                                      600K  17.3G   600K  legacy
NasPool/.system/rrd-9045e2c9424c490b9f5620086f4ab9c4      6.59M  17.3G  6.59M  legacy
NasPool/.system/samba4                                     192K  17.3G   192K  legacy
NasPool/.system/syslog-9045e2c9424c490b9f5620086f4ab9c4    268K  17.3G   268K  legacy
NasPool/VM                                                  88K  17.3G    88K  /mnt/NasPool/VM
NasPool/home                                               212K  17.3G    88K  /mnt/NasPool/home
NasPool/home/chris                                         124K  17.3G   124K  /mnt/NasPool/home/chris
NasPool/iohyve                                             264K  17.3G    88K  /mnt/iohyve
NasPool/iohyve/Firmware                                     88K  17.3G    88K  /mnt/iohyve/Firmware
NasPool/iohyve/ISO                                          88K  17.3G    88K  /mnt/iohyve/ISO
NasPool2                                                  45.8M  17.3G    88K  /mnt/NasPool2
NasPool2/VM                                               45.5M  17.3G  45.5M  /mnt/NasPool2/VM
freenas-boot                                              1.45G  13.9G    64K  none
freenas-boot/ROOT                                         1.44G  13.9G    29K  none
freenas-boot/ROOT/Initial-Install                            1K  13.9G   637M  legacy
freenas-boot/ROOT/default                                  641M  13.9G   638M  legacy
freenas-boot/ROOT/default-20180529-083449                  836M  13.9G   836M  legacy
freenas-boot/grub                                         6.85M  13.9G  6.85M  legacy
root@freenas:~ # ls -l /mnt/NasPool2/VM
total 196041
-rw-r--r--  1 root  wheel  10737418240 May 29 16:08 rancher.img
root@freenas:~ # cu -l /dev/nmdm3B
Connected


        [RancherOS ASCII-art banner]
        Linux 4.9.75-rancher

        RancherOS #1 SMP Sat Jan 6 00:16:10 UTC 2018 rancher ttyS0
        docker-sys: 172.18.42.2 eth0: 192.168.0.211 lo: 127.0.0.1
rancher login:
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
Trying to create a Docker VM via the new UI using FreeNAS-11.2-MASTER-201805180619 failed during the download stage when the Docker VM was first started:

[screenshot: Vm_fail.jpeg]
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
Succeeded on the 2nd attempt in the new UI, after realising that taking a screenshot while the system is fetching RancherOS caused this error.
[screenshots: docker1.jpeg, docker2.jpeg, docker3.jpeg, docker4.jpeg]



But under the hood there is no fundamental change. The rancheros image version and the config files for the grub-bhyve boot remain the same.

Code:
freenas# grep rancher /usr/local/lib/python3.6/site-packages/middlewared/plugins/vm.py 

		"URL": "http://download.freenas.org/bhyve-templates/rancheros-bhyve-v1.1.3/rancheros-bhyve-v1.1.3.img.gz", 
		"GZIPFILE": "rancheros-bhyve-v1.1.3.img.gz", 
			"RancherOS": ['linux /boot/vmlinuz-4.9.75-rancher rancher.password={0} printk.devkmsg=on rancher.state.dev=LABEL=RANCHER_STATE rancher.state.wait
rancher.resize_device=/dev/sda'.format(quote(password)), 
			grub_password = 'rancher.password={0}'.format(quote(password)) 
					if data.startswith('rancher.password'): 
						src_data[index] = 'rancher.password={0}'.format(quote(password))


And this is the code that pins .bhyve_containers to the first pool.

Code:
@accepts(Str('pool_name'))
def activate_sharefs(self, pool_name=None):
    """
    Create a pool for pre built containers images.
    """
    if pool_name:
        return self.__activate_sharefs(pool_name)
    else:
        # Only to keep compatibility with the OLD GUI
        blocked_pools = ['freenas-boot']
        pool_name = None

        # We get the first available pool.
        for pool in self.middleware.call_sync('zfs.pool.query'):
            if pool['name'] not in blocked_pools:
                pool_name = pool['name']
                break
        if pool_name:
            return self.__activate_sharefs(pool_name)
        else:
            self.logger.error("===> There is no pool available to activate a shared fs.")
            return False
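
Since activate_sharefs does take a pool_name argument, it looks like you could point it at a pool of your choice from the CLI with the middleware client. Untested sketch, and it assumes the method is exposed under the vm namespace:

Code:
root@freenas:~ # midclt call vm.activate_sharefs NasPool2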


Lipstick on a pig?
 

DaPlumber

Patron
Joined
May 21, 2014
Messages
246
So I'm running 11.2-BETA1...

Still getting stuck on either "Fetching RancherOS" or "Downloading" at 0% forever. Cancelling and deleting the "VM" doesn't work until a reboot cleans up whatever dreck was left behind.
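
For the record, what the reboot is presumably clearing away can sometimes be removed by hand, if a stale bhyve instance is the culprit (the VM name below is a placeholder):

Code:
root@freenas:~ # ls /dev/vmm
root@freenas:~ # bhyvectl --destroy --vm=2_docker1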

The "appliance" is broken and I can't see inside the black box. Annoying, to say the least.

At least 11.2B1 allows me to pick my pool. Sigh.

*EDIT* Just noticed my console is splattered with lots of badly formatted .py errors.
 