OpenHAB on Kubernetes

Dear all,

next series of tests

  1. Kubernetes SecurityContext

First I tried to force the pod to use an existing user & group present on all Kubernetes nodes (incl. master and NFS host), which had no effect. So I discarded this option!

  2. Environment Variables in the Container
    More promising is launching the container itself with the environment variables for user and group, again using the ones that exist throughout my environment. With the access rights on the NFS share itself set via chmod 777, I can at least see that the container setup is successfully creating folders in openhab/conf and openhab/userdata, but it still crashes.

  3. NFS Export Settings: Adding anonuid & anongid to my OpenHAB export declarations in /etc/exports
    On the NFS server itself, I added anonuid and anongid to the exports used for openHAB, using the same user & group as for the environment variables inside the container, which finally did the trick!
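To illustrate, such an export line might look roughly like this (path, network, and IDs are placeholders for your own values; I show all_squash here so anonuid/anongid apply to every client UID, adjust to your setup):

```
/export/openhab  192.168.1.0/24(rw,sync,no_subtree_check,all_squash,anonuid=9001,anongid=9001)
```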

Now I can launch openHAB and am running on Kubernetes v1.19.2 with a bunch of Raspberry Pis :slight_smile:

Important note: Today I moved the NFS server to my NAS, since for testing purposes I had it on the Kube master. Check which user ID you have on the NFS server and start from there: (1) NFS exports anonuid + anongid, (2) environment variables USER_ID and GROUP_ID > they have to match for this to work!
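Putting the matching pieces together, the container section of the Deployment might look roughly like this (image tag and IDs are placeholders; USER_ID and GROUP_ID are the variables the official openHAB Docker image reads, and their values have to match anonuid/anongid on the NFS export):

```yaml
containers:
  - name: openhab
    image: openhab/openhab:2.5.10
    env:
      - name: USER_ID       # must match anonuid on the NFS export
        value: "9001"
      - name: GROUP_ID      # must match anongid on the NFS export
        value: "9001"
```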

Finally :slight_smile:

Jens

just saw your post which came as I was writing mine :slight_smile:

@tow one question: I am running OH 2.5.10 on Kubernetes with NFS Persistent Volumes. All good, but it seems that, due to NFS, openHAB is not recognizing “file changes”, so to load new config files I currently delete the pod and create a new one, which is not exactly fast on my Raspberry Pis :slight_smile:

How are you handling this? While I am pretty sure my OH will be untouched once all things & items are configured, it is still annoying to run the delete & recreate cycle (especially since it is mainly waiting time).
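For context, my delete & recreate cycle boils down to something like this (a sketch; the app=openhab label and the Deployment name are assumptions based on my setup):

```shell
# Delete the running pod; the Deployment recreates it automatically
kubectl delete pod -l app=openhab -n <your-k8s-namespace>

# Alternatively, trigger a clean restart of the whole Deployment
kubectl rollout restart deployment/openhab -n <your-k8s-namespace>
```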

Thx,
Jens

Sorry for the late reply
 It's this time of the year :slight_smile:
Anyway, I tried to stay away from using files and used the UI for almost everything.
Why am I speaking in the past? I switched to 3.10 and it seems everything can be edited from the UI.

I see; I am working with OH3 too, but plan to stick with file-based configuration - we will see.

Will try Longhorn.io as distributed storage on the K8S cluster.

Happy New Year!

Jens

Hi !

I’m studying Kubernetes for my personal hobbyist use.

In your opinion, what is the advantage of running openHAB on Kubernetes for home use?
I seem to have understood that it cannot do vertical scaling of an instance (take advantage of all the cluster hardware), but that it is useful when there is a big load of requests (so not in my home case).


Can you give me your point of view? Thanks!

Luca

Hi Luca,

I do not think vertical scaling is needed for OH (I would wonder who has such a massive home automation setup that it would be needed).

While OH is not optimized for Kubernetes, I am pretty happy with how well it is running, and if you have multiple hardware nodes, you benefit from auto-healing for OH and all the other components you might run (for me, besides OH: web server, Mosquitto MQTT, Prometheus & Grafana, …).

It is a good learning and I think running OH on K8S is not a bad idea :slight_smile:

If you have yet to get hardware: ARM64-based systems like the Raspberry Pi still have some limitations regarding platform support. For example, I would like to run Longhorn.io distributed storage, since running an NFS server introduces a single point of failure unless you go for an NFS HA cluster.


This was a lot of writing - let me know if there is something specific you are interested in.

I am running 4x Raspberry Pi 4 4GB, which boot and run from USB 3.0 SSDs for performance (microSD is too slow to enjoy K8S). Software is Ubuntu 20.10 and MicroK8s 1.20.

Jens

When installed with the Kubernetes dashboard etc., you can easily upgrade, use different environments, and so on (testing = to test new versions, and prod or similar for the environment which actually controls your house).
Attached is how I set up the Pis for Kubernetes and openHAB; let me know if you like the doc and if I should extend it.
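As a sketch of the environment split (namespace names are just examples, not from the attached doc):

```shell
# Separate namespaces for a test and a production environment
kubectl create namespace openhab-test
kubectl create namespace openhab-prod

# Deploy the same manifests into whichever environment you want to try
kubectl apply -f openhab-deployment.yaml -n openhab-test
```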

Raspberry Pi domotica farm-v1.pdf (539.4 KB)


I have the same issue (although I set most settings in OH3 through the GUI).
If I modify the persistence file, the change is not automatically detected.

Hi Wim,

I have not found a fix yet; it has to do with how OH checks for changes, since my web server running on K8S detects new files directly.

For now it is more an annoyance than a problem, since typically I do not change the files permanently.

For testing new releases or adding new bindings, I am thinking of running a test system on pure Docker as a workaround :slight_smile:
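Such a throwaway test instance on plain Docker could look like this (tag and paths are examples, not my actual setup):

```shell
# Disposable openHAB test container; conf is mounted from the current directory
docker run -d --name openhab-test \
  -p 8080:8080 \
  -v "$(pwd)/conf":/openhab/conf \
  openhab/openhab:3.0.2
```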

Maybe we should raise a dedicated thread on that topic.

Jens

Dear all,

I found a way to overcome this: I wrote myself a little bash script that copies my conf files from my development repo directly into the running container using the kubectl cp command:

#!/bin/bash
set -euo pipefail

# Grab the name of the running openHAB pod in the target namespace
POD=$(kubectl get pods --selector=app=openhab -n <your-k8s-namespace> -o jsonpath="{.items[0].metadata.name}")

# Copy the local conf tree into the container's /openhab directory
kubectl cp </path/to/your/openhab/conf> <your-k8s-namespace>/${POD}:/openhab

Doing so, openHAB identifies and loads the changes immediately :slight_smile: Tested with OH 3.0.2 on K3s 1.20.6

Jens

Hi @JensF ,

I am using a somewhat similar setup to yours: k3s as the Kubernetes basis, three nodes (one on Hetzner, two internal), a WireGuard VPN, an OpenHAB 3 deployment (3.0.1 at the moment), entering config files by copying them directly into the container. The file system basis is simple NFS. I used to have a Longhorn NFS running in the cluster underneath, but this felt a bit like overkill for my relatively simple scenario. :slightly_smiling_face:

My goal is to be able to throw away and re-set up the whole scenario with a single bash script. Actually, this already works pretty well.

The only thing is that in my environment OpenHAB takes ages (approx. 30 minutes) after the initial install until it is possible to ssh onto it or to configure things, items, etc.

Do you experience the same behavior? Or do you know what I am doing wrong?

This is where it gets stuck in the logs:

... <some stuff before this> ...
+ initialize_volume /openhab/conf /openhab/dist/conf
+ volume=/openhab/conf
+ source=/openhab/dist/conf
++ ls -A /openhab/conf
+ '[' -z 'html
icons
items
persistence
rules
scripts
services
sitemaps
sounds
things
transform' ']'
+ initialize_volume /openhab/userdata /openhab/dist/userdata
+ volume=/openhab/userdata
+ source=/openhab/dist/userdata
++ ls -A /openhab/userdata
+ '[' -z 'etc
logs
tmp' ']'
++ cmp /openhab/userdata/etc/version.properties /openhab/dist/userdata/etc/version.properties
cmp: /openhab/dist/userdata/etc/version.properties: No such file or directory
+ '[' '!' -z ']'
+ chown -R openhab:openhab /openhab
+ sync
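For context, my reading of what the initialize_volume steps in that trace do, as a rough shell sketch (the demo directories below are stand-ins; this is not the actual image entrypoint):

```shell
#!/bin/sh
# Seed a mounted volume from the image's dist defaults, but only if it is empty
initialize_volume() {
  volume="$1"
  source="$2"
  if [ -z "$(ls -A "$volume")" ]; then
    cp -a "$source/." "$volume/"
  fi
}

# Stand-ins for /openhab/conf (the volume) and /openhab/dist/conf (image defaults)
mkdir -p /tmp/demo_volume /tmp/demo_dist
echo "demo" > /tmp/demo_dist/example.cfg

initialize_volume /tmp/demo_volume /tmp/demo_dist
ls /tmp/demo_volume   # → example.cfg
```

Since the volume is only seeded when empty, re-running this leaves existing files untouched.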

Do you know what this sync actually does?

Regards,
Peter

Hi @pstoermer,

are you using Raspberry Pis, and if so, with microSD cards, or do you boot from USB3 SSD or even NVMe? I am booting my Raspi 4 from SSD and performance is pretty good.

OH containers still take a little longer to start, but I have not measured since moving away from SD cards. Gut feeling is maybe 2-3 minutes? I can measure if needed.

Jens

I am on a ‘normal’ Linux machine that hosts a Linux VM. Inside that VM the whole k3s cluster is running.
Thank you for that information. 2-3 minutes is absolutely acceptable. I will see where I can do some optimization.

@pstoermer maybe check whether you have memory issues - I don't believe you have an I/O problem (though I don't know your exact setup and the hardware in use).

Anyhow: with the “kubectl cp” script, I now only redeploy (restart OH) when a new release becomes available or when I need to drain a K3s node for maintenance/reboot.
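The maintenance cycle itself is just (node name is a placeholder):

```shell
# Move workloads off the node before maintenance
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data

# ... reboot / maintain the node ...

# Let the scheduler use the node again
kubectl uncordon <node-name>
```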

Jens