Hi there,
I have read the details in Developer's Notes | TrueNAS Documentation Hub, but I dare to ask some additional questions, as I'm not that familiar with Kubernetes; I'm used to docker-compose.
NUMBER 1
After a fresh install of SCALE 20.12-ALPHA (Angelfish) I ran:
Code:
truenas# docker --version
Docker version 19.03.13, build 4484c46d9d
truenas# docker ps -a
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
That is the "native container services within Debian" which is activated as soon as a pool for Applications is chosen, see here:
Code:
truenas# docker ps -a
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
truenas# docker images
REPOSITORY                TAG       IMAGE ID       CREATED         SIZE
rancher/coredns-coredns   1.6.9     4e797b323460   10 months ago   43.1MB
rancher/klipper-lb        v0.1.2    897ce3c5fc8f   20 months ago   6.1MB
rancher/pause             3.1       da86e6ba6ca1   3 years ago     742kB
so it looks like the standard deployment as described on Container runtimes | Kubernetes,
but on Developer's Notes | TrueNAS Documentation Hub it is said that
"The initial implementation of Kubernetes is being done using the K3S software from Rancher (recently acquired by SUSE Linux). This proven software base provides a lightweight Kubernetes implementation with support for the API and ability to cluster instances."
but I do not find anything related to using docker in k3s/README.md at master · k3s-io/k3s (github.com) or Rancher Docs: Installation Options.
So why are the above-shown docker containers present and the docker daemon active?
Btw., Portainer is able to connect to this docker daemon if Portainer is started from the CLI as described in Deploying on Linux - Documentation (portainer.io). I don't know if that is intended.
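For reference, the commands I used follow the linked Portainer documentation; roughly this (a sketch from memory, the exact image tag and ports are whatever the Portainer docs currently recommend):

Code:
truenas# docker volume create portainer_data
truenas# docker run -d -p 9000:9000 --name portainer --restart=always \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v portainer_data:/data \
    portainer/portainer-ce

After that, Portainer's local endpoint picks up the same daemon that the rancher/* containers above are running on.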
NUMBER 2
How far does
"This application is an enhanced helm chart which deploys the application to the TrueNAS SCALE Kubernetes cluster."
deviate from original Helm charts?
I studied the example of deploying the Plex docker image in Deploying Kubernetes Workloads | Developer's Notes | TrueNAS Documentation Hub:

Code:
midclt call -job chart.release.create '{
  "catalog": "OFFICIAL",
  "train": "test",
  "item": "ix-chart",
  "values": {
    "image": {"repository": "plexinc/pms-docker"},
    "portForwardingList": [{"containerPort": 32400, "nodePort": 32400}],
    "volumes": [
      {"datasetName": "transcode", "mountPath": "/transcode"},
      {"datasetName": "config", "mountPath": "/config"},
      {"datasetName": "data", "mountPath": "/data"}
    ],
    "workloadType": "Deployment",
    "gpuConfiguration": {"nvidia.com/gpu": 1}
  },
  "version": "2010.0.1",
  "release_name": "plex"
}'

This basically says: you have to provide all parameters marked "required: true" in the catalogue item (docker container-based) template given in charts/questions.yaml at master · truenas/charts (github.com), plus the parameters that are in the documentation of the docker container itself. Is that correct?
I do have some applications deployed with docker compose. It looks like I could migrate easily: according to Converting docker-compose to a helm chart? - Stack Overflow I simply use Kubernetes + Compose = Kompose, done. But how do I then deploy that on SCALE? As the template explicitly refers to a single docker image container, I only have the option to use Developer's Notes | TrueNAS Documentation Hub, haven't I?
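To illustrate, the Kompose step itself would look roughly like this (a sketch; I have not tested the generated output on SCALE, and the chart-generation flag is taken from the Kompose docs):

Code:
# convert an existing docker-compose.yml into Kubernetes manifests
kompose convert -f docker-compose.yml

# or generate a Helm chart skeleton instead of raw manifests
kompose convert -f docker-compose.yml -c

The open question is what to do with the resulting manifests or chart on SCALE, since the ix-chart template only takes a single image.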
Any chance to add SCALE charts manually?
Custom Applications | Developer's Notes | TrueNAS Documentation Hub describes only single docker container deployment.
NUMBER 3
SCALE allows Kubernetes to be disabled. The user will then have access to the native container services within Debian. This will include Docker, LXC (Q1 2021) or any other Kubernetes distribution. There will be a Container Storage Interface (CSI) that can couple the container services with the SCALE storage capabilities. Users can script these capabilities and then use 3rd-party tools like Portainer to manage them. This approach can be used in SCALE 20.10 and later.
What is meant by "Users can script these capabilities"?
Further up it is said:
TrueNAS SCALE has native host support for container workloads. This is under active development and not at BETA or RELEASE quality.
so I wonder if I may use Portainer as orchestration manager until the first RELEASE is there.
How can I connect Portainer to the existing Kubernetes integration without shutting down the existing applications (Using Applications | Developer's Notes | TrueNAS Documentation Hub)?
It was easy to Add Local Endpoint - Documentation (portainer.io) with regard to docker, but I'm a bit lost with regard to Kubernetes.
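For a vanilla k3s installation I would try pointing kubectl (or Portainer's kubeconfig import) at the k3s kubeconfig; whether SCALE keeps it in the standard k3s location is my assumption, not something the docs confirm:

Code:
# default k3s kubeconfig path (assuming SCALE does not relocate it)
truenas# export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
truenas# k3s kubectl get nodes

If that works, the same file should be usable to register the cluster as an endpoint in Portainer.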
NUMBER 4
Coming from TrueNAS SCALE Announcement and Nightly Image Downloads | Page 4 | TrueNAS Community and How to share docker image between multiple operating systems on same machine - Stack Overflow (unanswered).
As SCALE is not production-ready yet, I am wondering if there is a possibility to share the cluster node between different operating systems (Linux).
The same hardware would be used, and the kernels of the OSes are at least quite similar, as I would use Debian Server or Ubuntu Server as the productive OS. In addition, I could install the SCALE nightly alongside. At least as long as I don't make use of Docker privileged mode or other leveraged functions, it could work in theory.
- Swarm would allow me to store configuration in a file, if I understand Store configuration data using Docker Configs | Docker Documentation correctly, although Swarm is not applicable here
- If I understand correctly, all images, containers etc. are stored on the selected pool, so How do I change the Docker image installation directory? - Open Source Projects / DockerEngine - Docker Forums / How do I change the default docker container location? - Stack Overflow would not be necessary on TrueNAS, only on the production OS.
- only network configuration could be challenging, but docker compose takes care of it anyway
- moving the daemon on the production system, as mentioned in dual boot - Can I share docker overlay2 between two host systems? - Ask Ubuntu
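On the production OS, relocating Docker's storage onto the shared pool would then just be the data-root setting in /etc/docker/daemon.json (a sketch; /mnt/tank/docker is a placeholder path for the imported pool):

Code:
{
  "data-root": "/mnt/tank/docker"
}

followed by a restart of the docker service.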
But how to solve that using Kubernetes as the container runtime? Of course, Kubernetes allows using different machines, but at least two need to be online. I could get a little Pi to act as described in High available Kubernetes cluster with single control plane node - DEV Community, which hopefully stores the configuration of all nodes and the cluster (I'm getting confused with the wording, see What is a Kubernetes cluster? (redhat.com)).
In addition, the zfs pool has to be mounted into the productive OS but that is well described, e.g. in Install ZFS File System on Ubuntu 18.04 LTS – Linux Hint.
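Mounting the pool on the productive OS would then roughly be (a sketch; "tank" is a placeholder pool name):

Code:
$ sudo apt install zfsutils-linux
$ sudo zpool import tank    # add -f if the pool was last used by SCALE and not exported cleanly
$ zfs list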
Any chance to realise that?
At the very least, it would help to have all persistent data available on all operating systems.
The production system would ensure the availability of the apps etc., but I could simply switch to SCALE and check the progress and at least report if I notice something. The final migration could be much easier this way.