OpenHAB on Kubernetes

Hi Rene!

I’m using K3s and have been running OpenHAB in a pod for some time for testing purposes, not for production (at least not yet :-)). I’ve also managed to get a friend of mine to set up the same environment.

I’ve got this error a couple of times, and it seems like dl.bintray.com is sometimes down. Have you tested several times, on different occasions?

I’ve written a number of gists with configuration guidelines for K3s on Raspberry Pis, in case someone is interested in trying out OH on K3s.
NOTE: Some of the workarounds described might have been fixed in later versions.

BR,

/Janne

Hi Janne,

Yes, I tried it on several days at different times before posting here. As I’m still learning Kubernetes, this might also be an issue with my setup. What seems strange is this part of the error:

Connect to dl.bintray.com:443 [dl.bintray.com/149.172.19.157] failed: connect timed out

Using nslookup on my laptop and on the host system running microk8s (both in my home network), I get the same result both times, and it is a completely different IP address than the one OH prints in the error message:

$ nslookup dl.bintray.com
Server: 192.168.17.104
Address: 192.168.17.104#53

Non-authoritative answer:
Name: dl.bintray.com
Address: 52.29.129.139
Name: dl.bintray.com

However, I don’t know how to ssh into the OH container to run nslookup there and compare the results or do some other investigation from within the container.

You can’t ssh into the OH pod, but you can use kubectl. First you need to find the pod name for OH:

pi@k3s-master-1:~ $ kubectl get pods
NAME                                                         READY   STATUS             RESTARTS   AGE
influxdb-7989845b67-5fqh7                                    0/1     Unknown            0          75d
grafana-b84d897fb-mj5fc                                      1/1     Running            2          105d
nfs-client-provisioner-openhab-production-5cc5d6c7d4-dqs65   1/1     Running            7          54d
nfs-client-provisioner-58987d578-m9zvm                       1/1     Running            4          54d
nfs-client-provisioner-dump1090-647fcc45db-ln6t5             1/1     Running            3          50d
openhab-79f845bc8c-npkdh                                     1/1     Running            0          2d2h
busybox                                                      1/1     Running            1873       78d

Let’s connect to the pod openhab-79f845bc8c-npkdh using kubectl:

pi@k3s-master-1:~ $ kubectl exec -it openhab-79f845bc8c-npkdh /bin/bash
root@openhab-79f845bc8c-npkdh:/openhab# ls
addons	conf  dist  lib  LICENSE.TXT  runtime  start_debug.sh  start.sh  userdata

However, the OH container image does not include the nslookup or ping commands, but you can use arping to check whether the name resolves to the correct IP.
NOTE: arping will not be able to actually reach external IPs:

root@openhab-79f845bc8c-npkdh:/openhab# arping -c 1 dl.bintray.com
arping: lookup dev: No matching interface found using getifaddrs().
arping: Unable to automatically find interface to use. Is it on the local LAN?
arping: Use -i to manually specify interface. Guessing interface eth0.
ARPING 3.124.133.231
Timeout

--- 3.124.133.231 statistics ---
1 packets transmitted, 0 packets received, 100% unanswered (0 extra)

root@openhab-79f845bc8c-npkdh:/openhab# arping -c 1 dl.bintray.com
arping: lookup dev: No matching interface found using getifaddrs().
arping: Unable to automatically find interface to use. Is it on the local LAN?
arping: Use -i to manually specify interface. Guessing interface eth0.
ARPING 52.29.129.139
Timeout

--- 52.29.129.139 statistics ---
1 packets transmitted, 0 packets received, 100% unanswered (0 extra)

Doing an nslookup from my local Mac gives the same IPs:

$ nslookup dl.bintray.com
Server:		192.168.1.1
Address:	192.168.1.1#53

Non-authoritative answer:
Name:	dl.bintray.com
Address: 3.124.133.231
Name:	dl.bintray.com
Address: 52.29.129.139

I had some problems with kube-proxy and iptables that required a workaround to get networking to work; I don’t know if that could be related.

The IP address that you got, dl.bintray.com/149.172.19.157, reverse-resolves to:

$ nslookup 149.172.19.157
Server:		192.168.1.1
Address:	192.168.1.1#53

Non-authoritative answer:
157.19.172.149.in-addr.arpa	name = HSI-KBW-149-172-19-157.hsi13.kabel-badenwuerttemberg.de.

Is that maybe an IP address belonging to your ISP?

What I suggest is to deploy a pod based on an image that has the network tools needed to debug your network, e.g. busybox (a minimal manifest for such a pod is sketched after the examples below). Then you can issue commands in the busybox pod via kubectl exec -it:

pi@k3s-master-1:~ $ kubectl exec -it busybox nslookup www.dn.se
Server:    192.168.1.1
Address 1: 192.168.1.1 router.asus.com

Name:      www.dn.se
Address 1: 95.101.172.101 a95-101-172-101.deploy.static.akamaitechnologies.com
pi@k3s-master-1:~ $ kubectl exec -it busybox ping www.dn.se
PING www.dn.se (95.101.172.101): 56 data bytes
64 bytes from 95.101.172.101: seq=0 ttl=58 time=2.964 ms
64 bytes from 95.101.172.101: seq=1 ttl=58 time=6.728 ms
64 bytes from 95.101.172.101: seq=2 ttl=58 time=2.438 ms
^C
--- www.dn.se ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 2.438/4.043/6.728 ms
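
If you don’t have such a debug pod yet, a minimal manifest for one could look like this (just a sketch; the pod name and the busybox tag are only examples):

apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox:1.28          # older busybox tags ship a more useful nslookup
    command: ["sleep", "3600"]   # keep the container alive so you can exec into it
  restartPolicy: Always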

BR,

/Janne

Thank you so much for your help! This enabled me to investigate further, although I did not find the cause or a solution… It is really strange:

  • Yes, the IP 149.172.19.157 is my own external IP at home, assigned by my internet provider.
  • Using busybox in the same namespace (smart-home), nslookup fetches the correct addresses for dl.bintray.com. I played around with dnsPolicy and dnsConfig for busybox and it always worked correctly, regardless of whether the cluster DNS or my DNS at home (192.168.17.104) was used.
  • I connected to the openHAB container with /bin/bash and examined /etc/resolv.conf; it contained the correct settings (cluster DNS first, then my home DNS, correct search domains; see the sketch after this list). arping in this container returned the wrong (my own external) IP. I tried to install nslookup using apt-get in the openHAB container, but all external names such as prod.debian.map.fastly.net were resolved to my own external IP.
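
For illustration only, a resolv.conf with “cluster DNS first, then home DNS” looks something like this (the cluster DNS IP shown is the MicroK8s default, and the search domains are assumptions rather than a copy from my pod):

# cluster DNS first (10.152.183.10 is the MicroK8s default kube-dns service IP), then the home DNS
nameserver 10.152.183.10
nameserver 192.168.17.104
search smart-home.svc.cluster.local svc.cluster.local cluster.local
options ndots:5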

I don’t know what’s wrong here. When I connect to my Node-RED container with /bin/bash, it has the exact same configuration in /etc/resolv.conf as openHAB. It has nslookup installed and resolves everything correctly: bintray, the Debian mirror, local names, cluster names. :+1:

The openHAB container only resolves cluster names correctly; local names in my home network and external names are all resolved to my external IP at home. :thinking:

Today it seems that I got it working. I added this line to my deployment YAML:

dnsPolicy: None

It allows a Pod to ignore the DNS settings from the Kubernetes environment. All DNS settings are then supposed to be provided via the dnsConfig field in the pod spec, and I set this to my home DNS server.
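
The relevant part of the pod template spec now looks roughly like this (a sketch; 192.168.17.104 is my home DNS server from above, replace it with yours):

    spec:
      dnsPolicy: None
      dnsConfig:
        nameservers:
        - 192.168.17.104   # with dnsPolicy None, at least one nameserver must be listed here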

Although it works now, I am wondering why my other pods such as Node-RED and busybox can resolve all names correctly. This makes me feel that there is still an issue somewhere with my microk8s setup in general…

In case you are also running on Kubernetes and have trouble with Homematic, I’ve written a short howto that describes how you can solve the issue you might have with the Homematic bridge within a Kubernetes pod: How to configure Homematic binding in OpenHAB on Kubernetes

Dear all,

I am trying to get OpenHab running on a 4-node (1 master, 3 workers) Raspberry Pi 4 4GB Kubernetes cluster (Kubespray, K8s version 1.19.2).

Everything works without problems as long as I do not try to use NFS persistent volumes to mount the folders /openhab/addons, /openhab/conf and /openhab/userdata.

My NFS share and persistent volumes are working for many other services like Apache/PHP, Grafana, InfluxDB and Prometheus.
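
For context, a static NFS PersistentVolume plus a matching claim for openHAB can be defined roughly like this (a sketch, not my exact manifests; server, path and size are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-openhab
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 192.168.1.10       # NFS server address (placeholder)
    path: /srv/nfs/openhab     # exported directory (placeholder)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-openhab        # referenced by claimName in the deployment below
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: ""         # empty string keeps dynamic provisioning out of the way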

OpenHab always crashes during the initial launch / container creation, and the logs I am getting are the following:

~/k8s-config/services/openhab$ kubectl logs openhab-75c686c96c-grm7t 
++ test -t 0
++ echo false
+ interactive=false
+ set -euo pipefail
+ IFS='
        '
+ '[' limited = unlimited ']'
+ rm -f /openhab/runtime/instances/instance.properties
+ rm -f /openhab/userdata/tmp/instances/instance.properties
+ NEW_USER_ID=9001
+ NEW_GROUP_ID=9001
+ echo 'Starting with openhab user id: 9001 and group id: 9001'
Starting with openhab user id: 9001 and group id: 9001
+ id -u openhab
++ getent group 9001
+ '[' -z '' ']'
+ echo 'Create group openhab with id 9001'
+ groupadd -g 9001 openhab
Create group openhab with id 9001
+ echo 'Create user openhab with id 9001'
+ adduser -u 9001 --disabled-password --gecos '' --home /openhab --gid 9001 openhab
Create user openhab with id 9001
Warning: The home dir /openhab you specified already exists.
Adding user `openhab' ...
Adding new user `openhab' (9001) with group `openhab' ...
adduser: Warning: The home directory `/openhab' does not belong to the user you are currently creating.
The home directory `/openhab' already exists.  Not copying from `/etc/skel'.
+ groupadd -g 14 uucp2
+ groupadd -g 16 dialout2
+ groupadd -g 18 dialout3
+ groupadd -g 32 uucp3
+ groupadd -g 997 gpio
+ adduser openhab dialout
Adding user `openhab' to group `dialout' ...
Adding user openhab to group dialout
Done.
+ adduser openhab uucp
Adding user `openhab' to group `uucp' ...
Adding user openhab to group uucp
Done.
+ adduser openhab uucp2
Adding user `openhab' to group `uucp2' ...
Adding user openhab to group uucp2
Done.
+ adduser openhab dialout2
Adding user `openhab' to group `dialout2' ...
Adding user openhab to group dialout2
Done.
+ adduser openhab dialout3
Adding user `openhab' to group `dialout3' ...
Adding user openhab to group dialout3
Done.
+ adduser openhab uucp3
Adding user `openhab' to group `uucp3' ...
Adding user openhab to group uucp3
Done.
+ adduser openhab gpio
Adding user `openhab' to group `gpio' ...
Adding user openhab to group gpio
Done.
+ initialize_volume /openhab/conf /openhab/dist/conf
+ volume=/openhab/conf
+ source=/openhab/dist/conf
++ ls -A /openhab/conf
+ '[' -z 'html
icons
items
persistence
rules
scripts
services
sitemaps
sounds
things
transform' ']'
+ initialize_volume /openhab/userdata /openhab/dist/userdata
+ volume=/openhab/userdata
+ source=/openhab/dist/userdata
++ ls -A /openhab/userdata
+ '[' -z 'etc
logs
tmp' ']'
++ cmp /openhab/userdata/etc/version.properties /openhab/dist/userdata/etc/version.properties
cmp: /openhab/dist/userdata/etc/version.properties: No such file or directory
+ '[' '!' -z ']'
+ chown -R openhab:openhab /openhab
chown: changing ownership of '/openhab/Readme.txt': Operation not permitted
chown: changing ownership of '/openhab': Operation not permitted

My openhab manifest looks as follows:

~/k8s-config/services/openhab$ cat openhab_service.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openhab
  labels:
    app: openhab
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openhab
  template:
    metadata:
      labels:
        app: openhab
    spec:
      containers:
      - name: openhab
        image: openhab/openhab:2.4.0
        imagePullPolicy: IfNotPresent
        ports:
#        - containerPort: 8101
        - containerPort: 8080
        - containerPort: 8443
#        - containerPort: 5007
#        - containerPort: 22
        volumeMounts:
        -  mountPath: "/openhab"
           name: nfs-pv-openhab
        args:
        - "--run jens"
      volumes:
      - name: nfs-pv-openhab
        persistentVolumeClaim:
          claimName: nfs-pvc-openhab
---
kind: Service
apiVersion: v1
metadata:
  name: openhab
  labels:
    app: openhab
spec:
  ports:
#    - name: language-service
#      protocol: TCP
#      port: 8101
#      targetPort: 8101
    - name: http
      protocol: TCP
      port: 8080
      targetPort: 8080
    - name: https
      protocol: TCP
      port: 8443
      targetPort: 8443
  selector:
    app: openhab
  type: LoadBalancer
  externalTrafficPolicy: Cluster

I am a little bit out of ideas, and maybe I am just overlooking something simple/basic. I would be grateful for any hint / advice to get this final piece of moving my home automation towards a true cluster (still planning to change from 1 master + 3 workers to 2 masters in HA + 2 workers :slight_smile:)

PS: Yes, you might argue that this is a bit overengineered for home use, but the whole Kubernetes cluster started as a learning initiative for my projects at work :slight_smile:

Thx,
Jens

Dear all,

Short update: I have tried a different strategy, mounting the three folders with dedicated volumes / volumeClaims. This gave me some progress, and some folder contents were created.
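
In the deployment this means mounting the three folders separately instead of the whole /openhab directory, roughly like this (a sketch; the volume and claim names are only examples):

        volumeMounts:
        - mountPath: "/openhab/addons"
          name: nfs-pv-openhab-addons
        - mountPath: "/openhab/conf"
          name: nfs-pv-openhab-conf
        - mountPath: "/openhab/userdata"
          name: nfs-pv-openhab-userdata
      volumes:
      - name: nfs-pv-openhab-addons
        persistentVolumeClaim:
          claimName: nfs-pvc-openhab-addons
      - name: nfs-pv-openhab-conf
        persistentVolumeClaim:
          claimName: nfs-pvc-openhab-conf
      - name: nfs-pv-openhab-userdata
        persistentVolumeClaim:
          claimName: nfs-pvc-openhab-userdata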

Despite adding full access rights to the NFS share (via the NFS export as well as at the filesystem level using chmod 777), I get a bunch of ownership issues.

Maybe I need to try to run OpenHab with a different user that is known to the Kubernetes cluster / NFS server… I will try and report back.

Found a similar thread going into the same direction: https://community.openhab.org/t/migration-to-rancher-2-5-rke/101512/3

Best,
Jens

I’m running openhab on a Kubernetes cluster with nfs-client-provisioner for managing volumeClaims.
One thing I noticed is that the NFS folders need to be owned by the openhab user as seen from within the container.
The user inside the container is controlled by the USER_ID and GROUP_ID environment variables.
So, make sure to set USER_ID and GROUP_ID to match the ownership set on your NFS server for those folders.
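
In the deployment’s container spec this looks roughly like this (a sketch; 1000/1000 are only example IDs and must match the owner of the exported folders on your NFS server):

        env:
        - name: USER_ID
          value: "1000"     # uid that owns the openHAB folders on the NFS server
        - name: GROUP_ID
          value: "1000"     # gid of those folders
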
So, the next step is to run myopenhab for Google Assistant integration… that is a different challenge :slight_smile:

Dear all,

Next series of tests:

  1. Kubernetes SecurityContext

First I tried to force the pod to use an existing user & group that is present on all Kubernetes nodes (incl. master and NFS host), which did not have any effect. So I discarded this option!

  2. Environment variables in the container
    More promising was the attempt to launch the container itself with the environment variables for user and group, again using IDs that exist throughout my environment. With the access rights on the NFS share itself set via chmod 777, I can at least see that the container setup successfully creates folders in openhab/conf and openhab/userdata, but it still crashes.

  3. NFS export settings: adding anonuid & anongid to my OpenHAB export declarations in /etc/exports
    On the NFS server itself, I added anonuid and anongid to the exports used for OpenHab, using the same user & group as for the environment variables inside the container, which finally did the trick (see the example right after this list)!
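
As an example, such an export line in /etc/exports can look like this (path, client network and IDs are placeholders; use the uid/gid that match the container’s USER_ID and GROUP_ID, and re-export afterwards with exportfs -ra):

/srv/nfs/openhab 192.168.1.0/24(rw,sync,no_subtree_check,all_squash,anonuid=1000,anongid=1000)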

Now I can launch OpenHab, and I am running on Kubernetes v1.19.2 with a bunch of Raspberry Pis :slight_smile:

Important note: Today I moved the NFS server to my NAS, since for testing purposes I had it on the Kubernetes master. Check which uid/gid the openHAB folders have on the NFS server and use that consistently in both places: (1) the NFS export’s anonuid + anongid and (2) the container’s USER_ID and GROUP_ID environment variables; they have to match for this to work!

Finally :slight_smile:

Jens

I just saw your post, which came in as I was writing mine :slight_smile:

@tow one question: I am running OH 2.5.10 on Kubernetes with NFS persistent volumes. All good, but it seems that, due to NFS, OpenHAB is not recognizing file changes, so to load new config files I currently delete the pod and create a new one, which is not exactly fast on my Raspberry Pis :slight_smile:

How are you handling this? While I am pretty sure my OH will be untouched once all things & items are configured, it is still annoying to run the delete & recreate cycle (especially since it is mainly waiting time).

Thx,
Jens

Sorry for the late reply… It’s this time of the year :slight_smile:
Anyway, I tried to stay away from using files and used the UI for almost everything.
Why am I speaking in the past?.. I switched to 3.10 and it seems everything can be edited from the UI.

I see; I am working with OH3 too, but I plan to stick to file-based configuration - we will have to see.

I will try Longhorn.io as distributed storage on the K8s cluster.

Happy New Year!

Jens

Hi !

I’m studying Kubernetes for my personal hobbyist use.

In your opinion, what is the advantage of running openHAB on Kubernetes for home use?
As I understand it, it cannot do vertical scaling of an instance (i.e. take advantage of all the cluster hardware), but it is useful when there is a big load of requests (so not in my home case)…

Can you give me your point of view? Thanks!

Luca

Hi Luca,

I do not think vertical scaling is needed for OH (I wonder who has such a massive home automation setup that it would be needed).

While OH is not optimized for Kubernetes, I am pretty happy with how well it is running, and if you have multiple hardware nodes you benefit from auto-healing, since OH and all the other components you might run (for me, besides OH: web server, Mosquitto MQTT, Prometheus & Grafana, …) are restarted on another node if one fails.

It is a good learning experience, and I think running OH on K8s is not a bad idea :slight_smile:

If you still have to get hardware: ARM64-based systems like the Raspberry Pi do still have some limitations regarding platform support. For example, I would like to run Longhorn.io distributed storage, since running an NFS server introduces a single point of failure if you do not go for an NFS HA cluster…

This was a lot of writing - let me know if there is something specific you are interested in.

I am running 4x Raspberry Pi 4 4GB, which boot and run from USB 3.0 SSDs for performance (microSD is too slow to enjoy K8s). The software is Ubuntu 20.10 and MicroK8s 1.20.

Jens

When installed with the Kubernetes dashboard etc. you can easily upgrade and use different environments (e.g. testing, to test new versions, and prod for the environment which actually controls your house).
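
For example, such environments can simply be separate namespaces (a sketch; the names are arbitrary and openhab_service.yml stands for whatever manifest you deploy):

kubectl create namespace openhab-test
kubectl create namespace openhab-prod
kubectl -n openhab-test apply -f openhab_service.yml
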
In the attachment you can see how I set up the Pis for Kubernetes and openHAB etc.… let me know if you like the doc and if I should extend it.

Raspberry Pi domotica farm-v1.pdf (539.4 KB)

I have the same issue (although I set most settings in OH3 through the GUI).
If I modify the persistence file, the change will not automatically be detected.

Hi Wim,

I did not find a fix yet; it has to do with how OH checks for changes, since my web server running on K8s detects new files directly.

For now it is more of an annoyance than a problem, since I typically do not change the files constantly.

For testing new releases or adding new bindings I am thinking of running a test system on plain Docker as a workaround :slight_smile:

Maybe we should start a dedicated thread on that topic.

Jens

Dear all,

I found a way to overcome this: I wrote myself a little bash script that copies my conf files from my development repo directly into the running container using the kubectl cp command:

#!/bin/bash
# Look up the name of the running openHAB pod (adjust the label selector and namespace to your setup).
POD=$(kubectl get pods --selector=app=openhab -n <your-k8s-namespace> -o jsonpath="{.items[0].metadata.name}")

# Copy the local conf directory into /openhab inside the pod (it ends up as /openhab/conf).
kubectl cp </path/to/your/openhab/conf> <your-k8s-namespace>/${POD}:/openhab

Doing so, OpenHab identifies and loads the changes immediately :slight_smile: Tested with OH-3.0.2 on K3S-1.2.0.6.

Jens