Kubernetes Error When Choosing Pool

pbiyan123

Cadet
Joined
Feb 12, 2023
Messages
1
I'm on SCALE 22.12.2. What worked for me is:
  1. Reboot
  2. Remove the ix-applications dataset
  3. Apps -> Settings -> Advanced Settings; select Force & Save
  4. Apps -> Settings -> Choose Pool; select the desired pool
Solved, thanks!
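For anyone who prefers the shell, step 2 above can also be done from the CLI. This is a dry-run sketch only: the pool name "tank" is an example placeholder, and the destroy command is echoed rather than executed so nothing is deleted by accident.

```shell
#!/bin/sh
# Dry-run sketch of removing the stale ix-applications dataset.
# "tank" is an example pool name -- substitute your own.
POOL="tank"
APPS_DS="${POOL}/ix-applications"

# Step 2 above: destroy the dataset and all of its children.
# Remove the leading "echo" to actually run it.
echo zfs destroy -r "$APPS_DS"

# Steps 3-4 are GUI-only (Force & Save, then Choose Pool), so there is
# no CLI equivalent here.
```

The `-r` flag is needed because ix-applications contains many child datasets.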
 

dperkerson

Cadet
Joined
Jul 16, 2023
Messages
1
I had this problem after I imported a storage pool from a different server. The fix for me was to delete the ix-applications dataset and try again.
 

alextuby

Cadet
Joined
Jul 23, 2023
Messages
1
I had this problem. I found the reason for my case.
I've been using existing pools, and during GUI import they were mounted at /mnt/mnt/poolname instead of /mnt/poolname. But the application config tool was still looking for the config file in /mnt/poolname. What worked for me:
1. Import the pool using the GUI.
2. Go to the shell and run sudo zfs list. Confirm the pool is indeed mounted under /mnt/mnt.
3. Run sudo zfs set mountpoint=/poolname poolname. TrueNAS seems to assume the /mnt prefix already, so here we need to leave it out.
4. If there were issues with SMB sharing, run sudo zfs set sharesmb=off poolname/dataset for all affected datasets. It's not an issue for applications, but it is for the import.
5. Run zfs list again to make sure the mountpoint is correct.
6. Export the pool, either via the GUI if it's visible there or with sudo zpool export poolname if it isn't. The mountpoint should persist.
7. Import again using the GUI. Now, with the correct mountpoint, remove the ix-applications dataset if it still remains and select the application pool again.
Worked for me. Hope it helps.
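The double-prefix check in steps 2-3 can be sketched as a small script. Everything here is illustrative: "tank" is a placeholder pool name, and the mountpoint is hard-coded to the bad state for demonstration; on a live system you would read it with zfs get -H -o value mountpoint tank.

```shell
#!/bin/sh
# Sketch: detect the /mnt/mnt double prefix described in step 2.
# "tank" is an example pool name; the mountpoint below is hard-coded
# to the broken state so the detection branch is exercised.
POOL="tank"
MOUNTPOINT="/mnt/mnt/${POOL}"

case "$MOUNTPOINT" in
  /mnt/mnt/*)
    # The fix from step 3: a leading slash, but no /mnt prefix,
    # because TrueNAS prepends /mnt itself.
    FIX="zfs set mountpoint=/${POOL} ${POOL}"
    echo "double prefix detected; run: sudo $FIX"
    ;;
  /mnt/*)
    echo "mountpoint looks correct"
    ;;
esac
```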
 

ForstHeld

Cadet
Joined
Jun 9, 2018
Messages
3
Just rebooting did not help me either. I un-set the pool in Apps, then rebooted. I then had to delete the ix-applications dataset attached to my pool. Once I deleted that dataset, I went back to Apps and re-selected the same pool. Everything ran fine from that point.
This worked for me! Thanks.
 

NightFlight

Cadet
Joined
Mar 31, 2023
Messages
2
Code:
Error: [EINVAL] kubernetes_update.force: Missing '/mnt/Apps/ix-applications/config.json' configuration file. Specify force to override this and let system re-initialize applications.


If it helps anyone else, here is what put me into this loop.

1. Some stupid race condition would not allow me to start/manage/delete ix-apps. All CPU cores pegged.
2. rm -rf /mnt/Apps/ix-applications
3. reboot
4. Manually unmounted the ix-applications folder structure with CLI fu like:
Code:
for mount in $(mount | grep ix-applications | awk '{print $3}' | sort -r)
do
    umount "$mount"
done


This old hacker got trapped into thinking that 'dataset' referred to the ix-applications folder structure on disk, not the GUI-configured equivalent with pretty icons. Once I clued into using the GUI to remove the old logically configured dataset, I was able to select the pool again without issue, and the dataset was automagically re-created. Basically, there was already a dataset object registered at the same mount point, leading to a naming collision.

In my defence, the error notice sucks moose nuts. If it had suggested that the object already existed rather than just posting its general barf, I would have clued in much faster.
 

marshalleq

Explorer
Joined
Mar 12, 2016
Messages
88
This is what worked for me. My dataset from Unraid was set to /mnt/drivename; somehow that changed to /mnt/mnt/drivename. Setting the mountpoint to just the drive name makes the problem disappear, and it adds the mnt back in by itself afterwards. Silly quirk, and I would never have tracked it down without your help, so thanks.
 
Joined
Mar 24, 2024
Messages
4
Hi,
In case of errors like these:

Code:
[EINVAL] kubernetes_update.force: Missing '/mnt/Your_pool_name_here/ix-applications/config.json' configuration file. Specify force to override this and let system re-initialize applications.

[EINVAL] kubernetes_update.force: '/mnt/Your_pool_name_here/ix-applications/config.json' configuration file is an invalid JSON file. Specify force to override this and let system re-initialize applications.


you can also try manually putting a valid config.json file into this folder and restarting everything to see if that solves the problem:
/mnt/Your_pool_name_here/ix-applications/


Valid content of this config.json file should look like below:

Code:
{
  "id": 1,
  "pool": "Your_pool_name_here",
  "cluster_cidr": "172.16.0.0/16",
  "service_cidr": "172.17.0.0/16",
  "cluster_dns_ip": "172.17.0.10",
  "route_v4_interface": "Your_network_interface_name_here",
  "route_v4_gateway": "Your_router_IP_here",
  "route_v6_interface": null,
  "route_v6_gateway": null,
  "node_ip": "0.0.0.0",
  "configure_gpus": true,
  "servicelb": true,
  "passthrough_mode": false,
  "metrics_server": false,
  "dataset": "Your_pool_name_here/ix-applications"
}

Make sure you substitute these three placeholders with correct values:
"Your_pool_name_here" - the name of the data pool
"Your_network_interface_name_here" - something like enp2s12, obtainable with the command: ip route
"Your_router_IP_here" - something like 192.168.1.20, obtainable with the command: ip route
You may also need to set an id other than 1 if the problem happens on a pool with a different ID.

After substituting these three placeholders, you can run this command:
Code:
cat > /mnt/Your_pool_name_here/ix-applications/config.json

then paste the valid config.json content and press Ctrl + D to exit.
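Since the second error above is specifically about invalid JSON, it's worth checking that the file parses before rebooting. A sketch, using a temp file and example placeholder values ("tank", enp2s12, 192.168.1.1) instead of the real path; on a real system you would point CONFIG at /mnt/Your_pool_name_here/ix-applications/config.json.

```shell
#!/bin/sh
# Sketch: write an example config.json, then verify it parses, so the
# "invalid JSON file" variant of the error can't recur.
# All values below are illustrative placeholders.
CONFIG="$(mktemp)"

cat > "$CONFIG" <<'EOF'
{"id": 1, "pool": "tank", "cluster_cidr": "172.16.0.0/16",
 "service_cidr": "172.17.0.0/16", "cluster_dns_ip": "172.17.0.10",
 "route_v4_interface": "enp2s12", "route_v4_gateway": "192.168.1.1",
 "route_v6_interface": null, "route_v6_gateway": null,
 "node_ip": "0.0.0.0", "configure_gpus": true, "servicelb": true,
 "passthrough_mode": false, "metrics_server": false,
 "dataset": "tank/ix-applications"}
EOF

# python3 -m json.tool exits non-zero on malformed JSON.
if python3 -m json.tool "$CONFIG" > /dev/null; then
  RESULT="config.json is valid JSON"
else
  RESULT="config.json is INVALID"
fi
echo "$RESULT"
rm -f "$CONFIG"
```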
 