Kubernetes Error When Choosing Pool

bdpyle

Cadet
Joined
Dec 23, 2022
Messages
4
I just installed TrueNAS and wanted to get some docker containers up and running. To do this, I have a 1TB Western Digital Blue HDD hooked up through SATA. After setting up the drive as a storage pool, I went to the applications tab and selected it to choose the pool where the applications will be stored. I then got this error:

Code:
Error: [EFAULT] Command mount -t zfs applications/ix-applications/k3s/kubelet /var/lib/kubelet failed (code 1): filesystem 'applications/ix-applications/k3s/kubelet' cannot be mounted using 'mount'. Use 'zfs set mountpoint=legacy' or 'zfs mount applications/ix-applications/k3s/kubelet'. See zfs(8) for more information.
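The error message itself names the failing dataset and the suggested remedy. A hedged sketch that pulls the dataset out of the error text and prints the command the message proposes (the pool name `applications` comes from the error above; review before running the printed command, as it changes the dataset's mount behavior):

```shell
# Pull the failing dataset out of the middleware error and print the
# fix that the error message itself suggests:
err="filesystem 'applications/ix-applications/k3s/kubelet' cannot be mounted using 'mount'."
ds=$(printf '%s\n' "$err" | sed "s/.*filesystem '\([^']*\)'.*/\1/")
echo "zfs set mountpoint=legacy $ds"
```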


I then clicked cancel and tried again, which gave this error:

Code:
Error: [EINVAL] kubernetes_update.force: Missing '/mnt/applications/ix-applications/config.json' configuration file. Specify force to override this and let system re-initialize applications.


So far I have tried deleting the dataset and going through the process again, but that didn't work. I have also tried restarting the system and making sure the system time matches the TrueNAS time.

If you have any ideas for how to fix this, please let me know. Thanks!
 

Adnan

Dabbler
Joined
Sep 4, 2015
Messages
22
Same here
 

mjflower

Dabbler
Joined
Sep 14, 2020
Messages
25
same here
 

nadu

Cadet
Joined
Dec 13, 2022
Messages
3
Is there an update on this? I am currently running into the same issue and a reboot doesn't fix the issue. Unfortunately, this means that I can not setup any applications at all right now.
 

mazay

Cadet
Joined
Feb 11, 2023
Messages
9
Is there an update on this? I am currently running into the same issue and a reboot doesn't fix the issue. Unfortunately, this means that I can not setup any applications at all right now.
I found that TrueNAS may taint the k3s node if it thinks it won't work properly; e.g., this happened to me when the network cable was unplugged and the system couldn't reach the gateway.

I'd go ahead and check what's going on with your workloads on the k3s side, but that will require some kubectl knowledge. E.g., start with:
Code:
k3s kubectl get pod -A
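To check for the taint mentioned above, a couple of hedged follow-up commands (these assume shell access on the TrueNAS host and need a live k3s node, so they can't be demonstrated outside that environment):

```shell
# List nodes and look for taints that would keep pods from scheduling:
k3s kubectl get nodes -o wide
k3s kubectl describe node | grep -i taint
```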
 

nadu

Cadet
Joined
Dec 13, 2022
Messages
3
Thanks for your effort, but I got impatient and used another pool as the app pool. With that, it worked immediately.

I thought it was a general issue but apparently, my second pool was wonky.
 

JoshT1982

Cadet
Joined
Apr 15, 2023
Messages
1
Just rebooting did not help me either. I un-set the pool in Apps, then rebooted. Then I had to delete the ix-applications dataset on my pool. Once I deleted that dataset, I went back to Apps and re-selected the same pool. Everything ran fine from that point.
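The dataset deletion in the middle step can also be done from a shell. A sketch assuming the pool is named `tank` (un-set the apps pool in the UI first, and note this destroys all app data under that dataset):

```shell
# Inspect what is there, then remove the leftover apps dataset tree:
zfs list -r tank/ix-applications
zfs destroy -r tank/ix-applications
```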
 

samarium

Contributor
Joined
Apr 8, 2023
Messages
192
I hate to say it, but me too. Fresh TNS 22.12.2 install, multiple attempts, different pools, reboots, etc. I can confirm that
zfs list app/ix-applications/k3s/kubelet
showed the dataset was not mountpoint=legacy, which it should be to allow mounting on /var/lib/kubelet.
I did once get it to install on another pool, but it broke when moving back to the required pool; no apps had been installed.
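The property can be checked directly on a live system; a sketch using the pool name `app` from the post above (this needs the actual pool, so it is illustrative only):

```shell
# Print just the mountpoint property of the kubelet dataset;
# the middleware expects the value "legacy" here:
zfs get -H -o value mountpoint app/ix-applications/k3s/kubelet
```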
 

serving_myself

Dabbler
Joined
Jan 2, 2023
Messages
12
Count me in :'(

Multiple different errors. It seems I've tried everything.
Steps:
1. Unset the pool
2. Delete ix-applications dataset
3. Reboot
4. Tried setting the pool in the Apps menu and got an error:
Code:
Error: [EFAULT] Command mount -t zfs applications/ix-applications/k3s/kubelet /var/lib/kubelet failed (code 1): filesystem 'applications/ix-applications/k3s/kubelet' cannot be mounted using 'mount'. Use 'zfs set mountpoint=legacy' or 'zfs mount applications/ix-applications/k3s/kubelet'. See zfs(8) for more information.

5. Checked the pool: the ix-applications dataset got created (and non-empty - 48MB)
6. Unset the pool again (went to check - the ix-applications dataset's still there)
7. Tried going to Advanced Settings and applying them with "Force". No errors at this stage.
8. Tried to set the pool again. Got different error:
Code:
Error: [EINVAL] kubernetes_update.force: Missing '/mnt/applications/ix-applications/config.json' configuration file. Specify force to override this and let system re-initialize applications.

9. Went into Advanced Settings and applied them with "Force" again.
10. Tried setting the pool again - same error as in #8.
11. Tried deleting the dataset without a reboot, got an error:
Code:
[EBUSY] Failed to delete dataset: cannot unmount '/mnt/applications/ix-applications/docker': pool or dataset is busy
cannot unmount '/mnt/applications/ix-applications': pool or dataset is busy

12. After a while, checked the dataset contents.
It's changed from this:
Code:
admin@truenas[~]$ ls -l /mnt/applications/ix-applications/
total 47
drwxr-xr-x  3 root root  3 Apr 16 18:58 catalogs
drwxr-xr-x  2 root root  2 Apr 16 18:58 default_volumes
drwx--x--- 14 root root 14 Apr 16 18:58 docker
drwxr-xr-x  3 root root  3 Apr 16 18:58 k3s
-rw-r--r--  1 root root 89 Apr 16 18:58 migrations.json
drwxr-xr-x  2 root root  2 Apr 16 18:58 releases

... to that:
Code:
admin@truenas[~]$ ls -l /mnt/applications/ix-applications/
total 13
drwx--x--- 14 root root 14 Apr 16 18:58 docker
-rw-r--r--  1 root root 89 Apr 16 18:58 migrations.json

13. Went and tried to "Force" the Advanced settings again - no config.json has appeared in the dir.
14. Tried setting the pool again - got new error:
Code:
Error: [EINVAL] kubernetes_update.force: Apps have been partially initialized on 'applications' pool but it is missing 'applications/ix-applications/k3s, applications/ix-applications/k3s/kubelet, applications/ix-applications/releases, applications/ix-applications/default_volumes, applications/ix-applications/catalogs' datasets. Specify force to override this and let system re-initialize applications.

15. Went and tried Forcing the settings again - nothing new appeared in the dir.
16. Opened a Jira ticket: https://ixsystems.atlassian.net/browse/NAS-121509
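For the EBUSY in step 11, it can help to see what still holds the dataset open before retrying the delete. A hedged sketch using the paths from the steps above (needs root on the live system):

```shell
# Show processes still using the apps mountpoints:
fuser -vm /mnt/applications/ix-applications 2>&1 | head
# And double-check which datasets are still mounted where:
zfs list -o name,mounted,mountpoint -r applications/ix-applications
```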
 

piersdd

Cadet
Joined
Nov 20, 2021
Messages
8
serving_myself said:
Count me in :'( Multiple different errors. [steps 1–16 quoted in full above]
+1
This is precisely my experience at present
 

samarium

Contributor
Joined
Apr 8, 2023
Messages
192
I had success destroying 22.12.2, installing 22.12.1 and setting the app pool, then upgrading to 22.12.2. Only did it once though; didn't need to repeat it for statistical significance :-)
 

serving_myself

Dabbler
Joined
Jan 2, 2023
Messages
12
@samarium Great news. Will try that and report here.
 

thecrownguy

Dabbler
Joined
Sep 18, 2022
Messages
11
I am in the same boat and running out of options. I am moving from an older pool to a new array of drives. Everything else switched over just fine, but Kubernetes is jacked. And I don't have any apps installed.
 

serving_myself

Dabbler
Joined
Jan 2, 2023
Messages
12
This doesn't help me on 22.12.1 (there's no 'Force' flag there, though since there's no existing `ix-applications` dataset at this point, that should be irrelevant).
On 22.12.2 it's all the same.
 

serving_myself

Dabbler
Joined
Jan 2, 2023
Messages
12
I'm on Scale 22.12.2. What worked for me is:
  1. Reboot
  2. Remove the ix-applications dataset
  3. Apps -> Settings -> Advanced Settings; Select Force & Save
  4. Apps -> Settings-> Choose Pool; Select the desired pool
Did you do anything on top of that?
E.g., un-setting the pool at any of the steps (0. set the pool and get an error, 0.5. unset the pool, ...)?
I wonder what I'm missing...
 

robert88

Dabbler
Joined
Dec 28, 2018
Messages
12
Had the same issue.
1. Select the other pool.
2. Copy the config.json to the pool where it did not work.
3. Unset the unwanted pool.
4. Adjust the config.json to reflect the correct pool name.
5. Select the pool from the GUI.
6. All good and ready.
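Step 4 can be done with sed. A sketch on a throwaway file, since the real file lives at /mnt/&lt;pool&gt;/ix-applications/config.json and the key names used here are assumptions, not the actual file layout:

```shell
# Create a stand-in config.json with made-up pool names
# ("goodpool" worked, "badpool" is the target pool):
printf '{"pool": "goodpool", "dataset": "goodpool/ix-applications"}\n' > /tmp/config.json
# Rewrite every reference to point at the pool the file now lives on:
sed -i 's/goodpool/badpool/g' /tmp/config.json
cat /tmp/config.json
```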
 

serving_myself

Dabbler
Joined
Jan 2, 2023
Messages
12
robert88 said:
Had the same issue. [steps 1–6 quoted above]
Simple and clever!

Nevertheless, it didn't work out for me.
However, I've recreated the problem on a VM, and it *seems* to me that the problem is the name of the pool:
- when the pool is named "applications", I can always recreate the issue (tried multiple times, destroying and recreating it; I always hit the problem).
- when the pool is named something else (I named it "apps"), all's good!
 