I should have stopped when I had the apps back up and running under Angelfish...
So my apps have been down since I tried to upgrade to Bluefin on Monday--and the apps are the only reason I migrated to SCALE in the first place (aside from my continuing and growing suspicion, which iX continues to deny, that they'll be dropping CORE entirely before long). Bluefin may be the greatest thing ever, or it may be a giant steaming pile--but whichever it is (or wherever in between, which I suppose is more likely), it clearly wasn't written to cope with my apparently-very-demanding (really?) use case. So, since everything worked yesterday morning, I figured I'd revert to Angelfish, unset the pool, reboot, choose the pool again, and the apps would be back.
But no, of course it isn't that simple. Unsetting the pool and rebooting worked just fine. After the system came back up, I went to Apps, and it immediately prompted me to choose a pool. I did, a progress bar ran for about eight seconds, and then I got:
Error: [EFAULT] Docker service is not running
This is a new error to me--I haven't seen it previously while working on this issue.
When I close that, go to Settings -> Choose Pool, and select the same pool again, it returns "Success" within about one second:
But apps are not running:
k3s is dead:
Code:
root@freenas2[~]# systemctl status k3s
● k3s.service - Lightweight Kubernetes
Loaded: loaded (/lib/systemd/system/k3s.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Docs: https://k3s.io
...and running systemctl restart k3s first returns failure, and then it seems to put k3s into the same restart loop we've seen before:
Code:
root@freenas2[~]# systemctl restart k3s
Job for k3s.service failed because the control process exited with error code.
See "systemctl status k3s.service" and "journalctl -xe" for details.
root@freenas2[~]# systemctl status k3s
● k3s.service - Lightweight Kubernetes
Loaded: loaded (/lib/systemd/system/k3s.service; disabled; vendor preset: disabled)
Active: activating (start) since Sat 2022-12-24 07:43:40 EST; 5s ago
Docs: https://k3s.io
Process: 231874 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
Process: 231875 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Main PID: 231876 (k3s-server)
Tasks: 130
Memory: 800.5M
CGroup: /system.slice/k3s.service
└─231876 /usr/local/bin/k3s server
Dec 24 07:43:43 freenas2 k3s[231876]: I1224 07:43:43.340630 231876 shared_informer.go:247] Caches are synced for crd-autoregist>
Dec 24 07:43:43 freenas2 k3s[231876]: I1224 07:43:43.341401 231876 shared_informer.go:247] Caches are synced for cluster_authen>
Dec 24 07:43:43 freenas2 k3s[231876]: W1224 07:43:43.748066 231876 lease.go:233] Resetting endpoints for master service "kubern>
Dec 24 07:43:43 freenas2 k3s[231876]: I1224 07:43:43.751233 231876 controller.go:611] quota admission added evaluator for: endp>
Dec 24 07:43:43 freenas2 k3s[231876]: I1224 07:43:43.927488 231876 controller.go:611] quota admission added evaluator for: endp>
Dec 24 07:43:44 freenas2 k3s[231876]: I1224 07:43:44.238952 231876 controller.go:132] OpenAPI AggregationController: action for>
Dec 24 07:43:44 freenas2 k3s[231876]: I1224 07:43:44.238982 231876 controller.go:132] OpenAPI AggregationController: action for>
Dec 24 07:43:44 freenas2 k3s[231876]: W1224 07:43:44.368221 231876 lease.go:233] Resetting endpoints for master service "kubern>
Dec 24 07:43:44 freenas2 k3s[231876]: I1224 07:43:44.368963 231876 storage_scheduling.go:109] all system priority classes are c>
Dec 24 07:43:44 freenas2 k3s[231876]: E1224 07:43:44.811559 231876 controller.go:159] Found stale data, removed previous endpoi>
root@freenas2[~]# systemctl status k3s
● k3s.service - Lightweight Kubernetes
Loaded: loaded (/lib/systemd/system/k3s.service; disabled; vendor preset: disabled)
Active: activating (start) since Sat 2022-12-24 07:43:56 EST; 9s ago
Docs: https://k3s.io
Process: 235461 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
Process: 235462 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Main PID: 235463 (k3s-server)
Tasks: 137
Memory: 844.7M
CGroup: /system.slice/k3s.service
└─235463 /usr/local/bin/k3s server
Dec 24 07:44:01 freenas2 k3s[235463]: I1224 07:44:01.308678 235463 storage_scheduling.go:109] all system priority classes are c>
Dec 24 07:44:01 freenas2 k3s[235463]: W1224 07:44:01.308806 235463 lease.go:233] Resetting endpoints for master service "kubern>
Dec 24 07:44:01 freenas2 k3s[235463]: E1224 07:44:01.754728 235463 controller.go:159] Found stale data, removed previous endpoi>
Dec 24 07:44:03 freenas2 k3s[235463]: time="2022-12-24T07:44:03-05:00" level=info msg="Connecting to proxy" url="wss://127.0.0.1>
Dec 24 07:44:03 freenas2 k3s[235463]: time="2022-12-24T07:44:03-05:00" level=info msg="Handling backend connection request [ix-t>
Dec 24 07:44:03 freenas2 k3s[235463]: time="2022-12-24T07:44:03-05:00" level=info msg="Running kubelet --address=0.0.0.0 --anony>
Dec 24 07:44:03 freenas2 k3s[235463]: Flag --network-plugin has been deprecated, will be removed along with dockershim.
Dec 24 07:44:03 freenas2 k3s[235463]: time="2022-12-24T07:44:03-05:00" level=info msg="Waiting to retrieve kube-proxy configurat>
Dec 24 07:44:03 freenas2 k3s[235463]: I1224 07:44:03.088788 235463 server.go:442] "Kubelet version" kubeletVersion="v1.23.5+k3s>
Dec 24 07:44:03 freenas2 k3s[235463]: I1224 07:44:03.091153 235463 dynamic_cafile_content.go:156] "Starting controller" name="c>
Multiple unset-pool/reboot/choose-pool cycles have given the same result each time.
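Next step, I guess, is digging into the logs directly. This is only a sketch of what I plan to try, assuming nothing has moved from the stock SCALE layout--the unit names are just the ones shown in the output above:

```shell
# Sketch of next diagnostic steps; run as root on the SCALE box itself.
# Assumes the k3s and docker systemd units exist as shown above.
k3s_diag() {
  # Why did the control process exit? The actual error usually shows up here:
  journalctl -u k3s -b --no-pager | tail -n 50

  # The "[EFAULT] Docker service is not running" error suggests checking docker too:
  systemctl status docker --no-pager

  # k3s ships its own kernel/config sanity checker:
  k3s check-config || true
}
# k3s_diag    # uncomment (or just call it) on the TrueNAS host
```

If the journal shows the same "Found stale data" / restart pattern on every boot, that would at least confirm it's the same loop and not something new from the revert.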