First I tried to force the pod to use an existing user & group present on all Kubernetes nodes (incl. master and NFS host), which had no effect. So I discarded this option!
Environment Variables in the Container
More promising is the attempt to launch the container itself with environment variables for user and group, again using the ones that exist throughout my environment. With the access rights on the NFS share itself set via chmod 777, I can at least see that the container setup is successfully creating folders in openhab/conf and openhab/userdata, but it still crashes.
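For reference, the relevant part of such a pod spec could look like this (a sketch only; the image tag and the ID values are examples, and the IDs have to match the ones on the NFS server):

```
containers:
  - name: openhab
    image: openhab/openhab:3.1.0   # example tag
    env:
      - name: USER_ID              # UID the container entrypoint runs openHAB as
        value: "1000"              # example value
      - name: GROUP_ID             # matching GID
        value: "1000"
```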
NFS Export Settings: Adding anonuid & anongid to my OpenHAB export declarations in /etc/exports
On the NFS server itself, I added anonuid and anongid to the exports used for OpenHAB, using the same user & group as for the environment variables inside the container, which finally did the trick!
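A sketch of what such an export declaration could look like (path, network range and IDs are examples; the anonuid/anongid values must match USER_ID/GROUP_ID in the pod):

```
# /etc/exports on the NFS server
/export/openhab  192.168.1.0/24(rw,sync,no_subtree_check,all_squash,anonuid=1000,anongid=1000)
```

With all_squash, every client request is mapped to anonuid/anongid regardless of the client UID; after editing the file, re-export with `exportfs -ra`.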
Now I can launch OpenHAB and am running on Kubernetes v1.19.2 with a bunch of Raspberry Pis.
Important note: today I moved the NFS server to my NAS, since for testing purposes I had it on the Kube master. Watch out for your desired user ID: pick the one you have on the NFS server and go from there. (1) The NFS export's anonuid + anongid and (2) the environment variables USER_ID and GROUP_ID have to match for this to work!
@tow one question: I am running OH 2.5.10 on Kubernetes with NFS persistent volumes. All good, but it seems that, due to NFS, OpenHAB is not recognizing "file changes", so to load new config files I currently delete the pod and create a new one, which is not exactly fast on my Raspberry Pis.
How are you handling this? While I am pretty sure my OH will be untouched once all things & items are configured, it is still annoying to run the delete & recreate cycle (especially since it is mainly waiting time).
Sorry for the late reply… it's this time of the year.
Anyway, I tried to stay away from using files and used the UI for almost everything.
Why am I speaking in the past? I switched to 3.10 and it seems everything can be edited from the UI.
I'm studying Kubernetes for my personal hobbyist use.
In your opinion what is the advantage of running openhab on kubernetes for home use?
I seem to have understood that it cannot do vertical scaling of an instance (take advantage of all the cluster hardware), but is useful when there is a big load of requests (so not in my home case)…
I do not think vertical scaling is needed for OH (I wonder who has such a massive home automation setup that it would be).
While OH is not optimized for Kubernetes, I am pretty happy with how well it is running, and if you have multiple hardware nodes, you benefit from auto-healing for OH and all the other components you might run (for me, besides OH: web server, Mosquitto MQTT, Prometheus & Grafana, …).
It is a good learning experience and I think running OH on K8S is not a bad idea.
If you still have to get hardware: ARM64-based systems like the Raspberry Pi do still have some limitations regarding platform support. E.g. I would like to run Longhorn.io distributed storage, since running an NFS server introduces a single point of failure if you do not go for an NFS HA cluster…
This was a lot of writing - let me know if there is something specific you are interested in.
I am running 4x Raspberry Pi 4 (4 GB), which boot and run from USB 3.0 SSDs for performance (microSD is too slow to enjoy K8S). Software is Ubuntu 20.10 and MicroK8s 1.20.
When installed with the Kubernetes dashboard etc., you can easily upgrade, use different environments, and so on (testing to try new versions, and prod or similar for the environment which actually controls your house).
Attached is how I set up the Pis for Kubernetes and OpenHAB etc. … let me know if you like the doc and if I should extend it.
I found a way to overcome this: I wrote myself a little bash script that copies my conf files from my development repo directly into the running container using the kubectl cp command.
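A minimal sketch of such a helper (names, labels and paths are assumptions, not the author's actual script):

```shell
#!/usr/bin/env bash
# Hypothetical sketch: copy local OpenHAB config files from a development
# repo into the running container via `kubectl cp`.
set -euo pipefail

NAMESPACE="${NAMESPACE:-openhab}"   # assumed namespace
SRC_DIR="${SRC_DIR:-./conf}"        # assumed local conf directory

# cp_cmd prints the kubectl cp invocation for a given pod name, so the
# command can be previewed without touching a live cluster.
cp_cmd() {
  printf 'kubectl cp %s %s/%s:/openhab/conf' "$SRC_DIR" "$NAMESPACE" "$1"
}

# Only talk to the cluster when explicitly asked to (RUN=1 ./copy-conf.sh).
if [ "${RUN:-0}" = "1" ]; then
  # look up the first OpenHAB pod (the label selector is an assumption)
  POD="$(kubectl -n "$NAMESPACE" get pods -l app=openhab \
        -o jsonpath='{.items[0].metadata.name}')"
  kubectl cp "$SRC_DIR" "$NAMESPACE/$POD:/openhab/conf"
fi
```

Since kubectl cp copies into the container's ephemeral filesystem, the copied files live only as long as that pod unless the target path is on the NFS-backed volume.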
I am using a somewhat similar setup compared to yours: k3s as the Kubernetes basis, three nodes (one on Hetzner, two internal), Wireguard VPN, OpenHAB 3 (3.0.1 at the moment) deployment, entering config files by copying them directly into the container. The file system basis is a simple NFS. I used to have a Longhorn NFS running in the cluster underneath but this felt a bit like overkill for my relatively simple scenario.
My goal is to be able to throw away and re-set up the whole scenario with a single bash script. Actually, this already works pretty well.
The only thing is that in my environment OpenHAB takes ages (approx. 30 minutes) after the initial install until it is possible to ssh into it or to configure things, items, etc.
Do you experience the same behavior? Or do you know what I am doing wrong?
This is where it gets stuck in the logs:
... <some stuff before this> ...
+ initialize_volume /openhab/conf /openhab/dist/conf
+ volume=/openhab/conf
+ source=/openhab/dist/conf
++ ls -A /openhab/conf
+ '[' -z 'html
icons
items
persistence
rules
scripts
services
sitemaps
sounds
things
transform' ']'
+ initialize_volume /openhab/userdata /openhab/dist/userdata
+ volume=/openhab/userdata
+ source=/openhab/dist/userdata
++ ls -A /openhab/userdata
+ '[' -z 'etc
logs
tmp' ']'
++ cmp /openhab/userdata/etc/version.properties /openhab/dist/userdata/etc/version.properties
cmp: /openhab/dist/userdata/etc/version.properties: No such file or directory
+ '[' '!' -z ']'
+ chown -R openhab:openhab /openhab
+ sync
Are you using Raspberry Pis, and if so, with microSD cards, or do you boot from USB 3 to SSD or even NVMe? I am booting my Raspi 4 from SSD and performance is pretty good.
OH containers still take a little longer to start, but I have not measured since moving away from SD cards. Gut feeling is maybe 2-3 minutes? I can measure if needed.
I am on a "normal" Linux machine that hosts a Linux VM. Inside that VM the whole k3s cluster is running.
Thank you for that information. 2-3 minutes is absolutely acceptable. I will see where I can do some optimization.
@pstoermer maybe check if you have memory issues - I don't believe you have an I/O problem (though I don't know your exact setup and hardware in use).
Anyhow, with the kubectl cp script I only redeploy (restart OH) when a new release becomes available or when I need to drain a k3s node for maintenance/reboot.