OpenHAB on Kubernetes

OK, so I’m not your typical home user, or maybe I’m a little crazy, or maybe a little of both. I recently set up a Kubernetes cluster in my basement, partly as a learning exercise, partly as a way to get all of the other experiments I run under control.

Naturally I wanted to move my OpenHAB server into the Kubernetes cluster. So far it seems to be a success. I still have a lot to learn about Kubernetes, so I'm sure this will evolve, but I thought I'd share what I've come up with so far.
Here you can find the configuration files that I used to deploy OpenHAB. If you know Kubernetes, these should make sense without a lot of explanation. If you don't know Kubernetes, I'm not the person to explain it, as I'm still learning myself.

A few quick notes until I have time to write up a proper README:

  1. It makes use of the official OpenHAB Docker container, which made getting this going fairly straightforward. I have some minor quibbles with how the Docker container is set up, but they don't really get in the way of a running setup.
  2. The config files use node selectors so that the container always gets scheduled on a specific host: first because it needs access to the Aeotec Z-Wave Z-Stick so that OpenHAB can control my Z-Wave devices, and second because for performance reasons I needed to keep the OpenHAB configs on a local disk rather than on my NAS (see the sketch after this list). If your NAS were fast enough and you didn't need access to specific hardware devices, you could let Kubernetes schedule the container on any available node.
  3. Make sure that your Kubernetes setup is solid before trying to put OpenHAB into the mix! I’m still very new to Kubernetes so a lot of my problems have been related to incorrect setup of my Kubernetes cluster rather than problems with OpenHAB.
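
Roughly, the relevant piece of the Deployment looks like this. This is a sketch, not my exact files; the node label and the device path are placeholders for my setup:

    spec:
      # Pin the pod to the node that has the Z-Stick and the local config disk
      nodeSelector:
        kubernetes.io/hostname: basement-node-1   # placeholder node name
      containers:
      - name: openhab
        image: openhab/openhab:2.5.0
        securityContext:
          privileged: true                        # lets the container open the serial device
        volumeMounts:
        - name: zwave-stick
          mountPath: /dev/ttyACM0                 # device path is an assumption; check on the host
      volumes:
      - name: zwave-stick
        hostPath:
          path: /dev/ttyACM0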

I run just on a plain old Docker server. What minor quibbles do you have? Maybe it would be worth opening an issue. Personally, I've had a bit of pain working with UIDs and GIDs between the container and the host. Thank goodness the one inside the container was not in use on my system, but I really like how the official PostgreSQL container handles it: map your /etc/passwd into the container.
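
For reference, that pattern looks roughly like this (a sketch; the image tag, password, and data directory are placeholders):

# Run as your own UID/GID and bind-mount /etc/passwd read-only so the
# container can resolve that UID to a user name
docker run -d --name pg \
  --user "$(id -u):$(id -g)" \
  -v /etc/passwd:/etc/passwd:ro \
  -v "$PWD/pgdata:/var/lib/postgresql/data" \
  -e POSTGRES_PASSWORD=example \
  postgres:12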

You could look into a ser2net/socat solution to expose your Z-Wave device over the network and eliminate this limitation. Of course I say that knowing full well I never got it to work myself, but others have. You will probably need to be vigilant, though, to make sure that only one program accesses the socket at a time; I'm not sure the locking and such works with ser2net and socat.
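
In case it helps, the usual pattern looks something like this. This is only a sketch using the older ser2net.conf syntax; the device path, host name, port, and baud rate are assumptions:

# On the host that has the Z-Stick: /etc/ser2net.conf
# Expose /dev/ttyACM0 as a raw TCP socket on port 3333
3333:raw:0:/dev/ttyACM0:115200 8DATABITS NONE 1STOPBIT

# On the machine running OpenHAB: recreate a local serial device from that socket
socat pty,link=/dev/ttyNET0,raw,group=dialout,mode=660 tcp:zwave-host.local:3333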

I look forward to your readme.

I’m just running stock Docker with official containers from Docker Hub for the most part. I’m not even using Docker Compose. I’d be interested in knowing what it takes to go to the next step.

Regarding my quibbles with the Docker container, mainly it’s my OCD coming through, but I do think that there are some genuine improvements that can be made. Once I get some round tuits (hopefully this weekend) I’ll get some discussion started, probably on the existing official Docker container thread.

As far as “taking it to the next level” goes, it really does depend on how crazy you are :nerd:. Running Kubernetes doesn’t really make a lot of sense until you run enough Docker containers to need at least three compute nodes, plus you really need some sort of network storage. There are various ways to run Kubernetes on a single node in a VM, but that’s mainly for learning and/or development of Kubernetes itself.

I really like the idea of using Kubernetes for this. Just a small question… I am using the Amazon Dash buttons for a few applications. The Docker container uses host-mode networking for this… Have you found a solution to run a deployment in host mode? (I guess it's not possible… it doesn't really scale well…)

You can configure Kubernetes pods to use host networking, and you would probably have to run the pod in privileged mode as well, but you'd have to try it to know for sure. I don't have any Dash buttons so I can't really test it.
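
A minimal sketch of what that would look like in the pod template (untested on my side, since I can't verify the Dash button detection actually works this way):

    spec:
      hostNetwork: true              # pod shares the node's network namespace
      containers:
      - name: openhab
        image: openhab/openhab:2.5.0
        securityContext:
          privileged: true           # possibly needed for sniffing the Dash button traffic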

Ok… thanks for the info. I will start by dockerizing my environment first and then make the move to Kubernetes.

:slight_smile:

Thanks a lot, your yaml files saved me some time!
I'm using NFS PVs for the config files, which works fine, but changes are not detected by the openHAB engine automatically. I'm still looking for a solution here.

Getting the KNX binding running is a bit challenging. I experimented a lot with getting the UDP multicast traffic through the nginx ingress, but that seems not to be a very common use case.
Currently it runs with hostNetwork: true, which is only a hack.

Nevertheless K8S is awesome, even if it might be a bit over-sized for home automation purposes. :wink:

I know this is old. Is anyone still working with OpenHAB in Kubernetes? I just recently got my Kubernetes cluster up. I'm looking at the configs and have a couple of questions.

Mainly about the persistent storage. Does that storage move back and forth between nodes in the Kubernetes cluster?

I have been playing with a persistent volume option and that PV is set to point at an NFS mount.

Thanks!

I'm no longer using OpenHAB, but if you have multiple Kubernetes nodes you'll definitely want to use persistent volumes to manage the OpenHAB data. I think the whole point of persistent volumes in Kubernetes is that Kubernetes ensures the volume is available on whichever node is running your OpenHAB pod. If you created your own Kubernetes cluster from scratch it can be a bit of a challenge to get persistent volumes working properly, as there are many things to set up right, but once it's working it's fairly auto-magical. If you're having problems with persistent volumes, though, it'd probably be better to look for support on a Kubernetes-specific forum.
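
If you go the NFS route, the PV/PVC pair looks roughly like this (a sketch only; the server address, export path, and size are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: openhab-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: ""          # static binding, no dynamic provisioner
  nfs:
    server: 192.168.1.10        # placeholder: your NAS / NFS server
    path: /export/openhab       # placeholder: exported directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: openhab-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ""
  resources:
    requests:
      storage: 5Gi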

I have made some progress. I was able to get a PV set up that connects to my FreeNAS server, but for some reason the container says the volume is read-only when it tries to write to it. I did register on the kube forums and recently posted the details there. Hopefully I can get past that, get my cluster running with some low-hanging fruit (Pi-hole for example), and keep moving forward until I get something like OpenHAB running :slight_smile:

Thanks!

I'm also trying to get OpenHAB running on Kubernetes; I'm using microk8s on Ubuntu 18.10. My deployment YAML and service YAML are below. The container cannot connect to dl.bintray.com to download the needed add-ons, see the errors below. In the same namespace on the same node I have, for example, Node RED running, which can access the internet without any problems (it connects to weather.com or Google Calendar, for example). Any ideas what I need to change to get OH working?

openhab-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: openhab
  labels:
    app: openhab
  namespace: smart-home
spec:
  selector:
    matchLabels:
      app: openhab
  replicas: 1
  template:
    metadata:
      labels:
        app: openhab
    spec:
      containers:
      - name: openhab
        image: openhab/openhab:2.5.0
        imagePullPolicy: Always
        resources:
          limits:
#            cpu: 2.0
            memory: 2048Mi
        ports:
        - containerPort: 8080
        - containerPort: 8443
        - containerPort: 8101
        - containerPort: 5007
        - containerPort: 22
#        volumeMounts:
#        - name: influxdb-data
#          mountPath: /var/lib/influxdb
#        - name: influxdb-conf
#          mountPath: /etc/influxdb/influxdb.conf
#        - name: tz-config
#          mountPath: /etc/localtime
#        args:
#        - "--run auberger"
#      volumes:
#      - name: influxdb-data
#        hostPath:
#          # directory location on host
#          path: /home/auberger/k8s/influxdb/pv/data
#          # this field is optional
#          type: Directory
#      - name: influxdb-conf
#        hostPath:
#           path: /home/auberger/k8s/influxdb/pv/influxdb.conf
      dnsConfig:
        nameservers:
        - 192.168.17.104

openhab-service.yaml:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: openhab
  name: openhab
  namespace: smart-home
spec:
  ports:
  # Which port on the node is the service available externally through?
  - nodePort: 8080
    # Inside the cluster, what port does the service expose?
    port: 8080
    protocol: TCP
    # Which port do pods selected by this service expose?
    targetPort: 8080
    name: http
  - nodePort: 8443
    port: 8443
    protocol: TCP
    targetPort: 8443
    name: https
  - nodePort: 8101
    port: 8101
    protocol: TCP
    targetPort: 8101
    name: ssh
  - nodePort: 5007
    port: 5007
    protocol: TCP
    targetPort: 5007
    name: lsp
  - nodePort: 2022
    port: 22
    protocol: TCP
    targetPort: 22
    name: ssh-shell
  selector:
    app: openhab
  # Make the service externally visible via the node
  type: NodePort

Error when starting OH:

Launching the openHAB runtime...
org.apache.karaf.features.internal.util.MultiException: Error:
        Error downloading mvn:org.openhab.ui.bundles/org.openhab.ui.dashboard/2.5.0
        at org.apache.karaf.features.internal.download.impl.MavenDownloadManager$MavenDownloader.<init>(MavenDownloadManager.java:91)
        at org.apache.karaf.features.internal.download.impl.MavenDownloadManager.createDownloader(MavenDownloadManager.java:72)
        at org.apache.karaf.features.internal.region.Subsystem.downloadBundles(Subsystem.java:457)
        at org.apache.karaf.features.internal.region.Subsystem.downloadBundles(Subsystem.java:452)
        at org.apache.karaf.features.internal.region.SubsystemResolver.resolve(SubsystemResolver.java:224)
        at org.apache.karaf.features.internal.service.Deployer.deploy(Deployer.java:393)
        at org.apache.karaf.features.internal.service.FeaturesServiceImpl.doProvision(FeaturesServiceImpl.java:1062)
        at org.apache.karaf.features.internal.service.FeaturesServiceImpl.lambda$doProvisionInThread$13(FeaturesServiceImpl.java:998)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
        Suppressed: java.io.IOException: Error downloading mvn:org.openhab.ui.bundles/org.openhab.ui.dashboard/2.5.0
                at org.apache.karaf.features.internal.download.impl.AbstractRetryableDownloadTask.run(AbstractRetryableDownloadTask.java:77)
                at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
                at java.util.concurrent.FutureTask.run(FutureTask.java:266)
                at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
                at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
                ... 3 more
        Caused by: java.io.IOException: Error resolving artifact org.openhab.ui.bundles:org.openhab.ui.dashboard:jar:2.5.0: [Could not transfer artifact org.openhab.ui.bundles:org.openhab.ui.dashboard:jar:2.5.0 from/to openhab (https://dl.bintray.com/openhab/mvn/online-repo/2.5/): Connect to dl.bintray.com:443 [dl.bintray.com/149.172.19.157] failed: connect timed out]
                at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.configureIOException(AetherBasedResolver.java:803)
                at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:774)
                at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:657)
                at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:598)
                at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:565)
                at org.apache.karaf.features.internal.download.impl.MavenDownloadTask.download(MavenDownloadTask.java:52)
                at org.apache.karaf.features.internal.download.impl.AbstractRetryableDownloadTask.run(AbstractRetryableDownloadTask.java:60)
                ... 7 more
                Suppressed: shaded.org.eclipse.aether.transfer.ArtifactTransferException: Could not transfer artifact org.openhab.ui.bundles:org.openhab.ui.dashboard:jar:2.5.0 from/to openhab (https://dl.bintray.com/openhab/mvn/online-repo/2.5/): Connect to dl.bintray.com:443 [dl.bintray.com/149.172.19.157] failed: connect timed out

Hi Rene!

I'm using K3s and have been running OpenHAB in a pod for some time for testing purposes, not for production (at least not yet :-)). I've also managed to get a friend of mine to set up the same environment.

I've got this error a couple of times, and it seems like dl.bintray.com is sometimes down. Have you tested several times on different occasions?

I've written a number of gists with configuration guidelines for K3s on Raspberry Pis if someone is interested in trying out OH on K3s.
NOTE: Some of the workarounds described might have been fixed in later versions.

BR,

/Janne

Hi Janne,

Yes, I tried it over several days at different times before posting here. As I'm still learning Kubernetes, this might also be an issue with my setup. What seems strange is this part of the error:

Connect to dl.bintray.com:443 [dl.bintray.com/149.172.19.157] failed: connect timed out

Using nslookup on my laptop or on the host system running microk8s (both in my home network), I get the result below in both cases, which is a completely different IP address than the one OH prints in the error message:

$ nslookup dl.bintray.com
Server: 192.168.17.104
Address: 192.168.17.104#53

Non-authoritative answer:
Name: dl.bintray.com
Address: 52.29.129.139
Name: dl.bintray.com

However, I don’t know how to ssh into the OH container to run nslookup there and compare the results or do some other investigation from within the container.

However, I don’t know how to ssh into the OH container to run nslookup there and compare the results or do some other investigation from within the container.

You can't ssh into the OH pod, but you can use kubectl. First you need to find the pod id for OH:

pi@k3s-master-1:~ $ kubectl get pods
NAME                                                         READY   STATUS             RESTARTS   AGE
influxdb-7989845b67-5fqh7                                    0/1     Unknown            0          75d
grafana-b84d897fb-mj5fc                                      1/1     Running            2          105d
nfs-client-provisioner-openhab-production-5cc5d6c7d4-dqs65   1/1     Running            7          54d
nfs-client-provisioner-58987d578-m9zvm                       1/1     Running            4          54d
nfs-client-provisioner-dump1090-647fcc45db-ln6t5             1/1     Running            3          50d
openhab-79f845bc8c-npkdh                                     1/1     Running            0          2d2h
busybox                                                      1/1     Running            1873       78d

Let's connect to pod openhab-79f845bc8c-npkdh using kubectl:

pi@k3s-master-1:~ $ kubectl exec -it openhab-79f845bc8c-npkdh /bin/bash
root@openhab-79f845bc8c-npkdh:/openhab# ls
addons	conf  dist  lib  LICENSE.TXT  runtime  start_debug.sh  start.sh  userdata

However, the OH container does not include the nslookup or ping commands, but you can use arping to check whether DNS resolves to the correct IP.
NOTE: arping will not be able to actually ping external IPs:

root@openhab-79f845bc8c-npkdh:/openhab# arping -c 1 dl.bintray.com
arping: lookup dev: No matching interface found using getifaddrs().
arping: Unable to automatically find interface to use. Is it on the local LAN?
arping: Use -i to manually specify interface. Guessing interface eth0.
ARPING 3.124.133.231
Timeout

--- 3.124.133.231 statistics ---
1 packets transmitted, 0 packets received, 100% unanswered (0 extra)

root@openhab-79f845bc8c-npkdh:/openhab# arping -c 1 dl.bintray.com
arping: lookup dev: No matching interface found using getifaddrs().
arping: Unable to automatically find interface to use. Is it on the local LAN?
arping: Use -i to manually specify interface. Guessing interface eth0.
ARPING 52.29.129.139
Timeout

--- 52.29.129.139 statistics ---
1 packets transmitted, 0 packets received, 100% unanswered (0 extra)

Doing an nslookup from my local Mac gives the same IPs:

$ nslookup dl.bintray.com
Server:		192.168.1.1
Address:	192.168.1.1#53

Non-authoritative answer:
Name:	dl.bintray.com
Address: 3.124.133.231
Name:	dl.bintray.com
Address: 52.29.129.139

I had some problems with kube-proxy and iptables that required a workaround to get networking to work; I don't know if that could be related.

The IP address that you got, dl.bintray.com/149.172.19.157, resolves to:

$ nslookup 149.172.19.157
Server:		192.168.1.1
Address:	192.168.1.1#53

Non-authoritative answer:
157.19.172.149.in-addr.arpa	name = HSI-KBW-149-172-19-157.hsi13.kabel-badenwuerttemberg.de.

Is that maybe an IP address belonging to your ISP?

What I suggest is to run a pod based on a container image that has all the network commands needed to debug your network, e.g. busybox. Then you can issue commands in the busybox pod via kubectl exec -it:
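
If you don't already have one running, a quick way to start such a pod (a sketch; the image tag is just the one I happen to use) is:

# Start a long-running busybox pod to use as a network debugging toolbox
kubectl run busybox --image=busybox:1.28 --restart=Never -- sleep 3600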

pi@k3s-master-1:~ $ kubectl exec -it busybox nslookup www.dn.se
Server:    192.168.1.1
Address 1: 192.168.1.1 router.asus.com

Name:      www.dn.se
Address 1: 95.101.172.101 a95-101-172-101.deploy.static.akamaitechnologies.com
pi@k3s-master-1:~ $ kubectl exec -it busybox ping www.dn.se
PING www.dn.se (95.101.172.101): 56 data bytes
64 bytes from 95.101.172.101: seq=0 ttl=58 time=2.964 ms
64 bytes from 95.101.172.101: seq=1 ttl=58 time=6.728 ms
64 bytes from 95.101.172.101: seq=2 ttl=58 time=2.438 ms
^C
--- www.dn.se ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 2.438/4.043/6.728 ms

BR,

/Janne


Thank you so much for your help! This enabled me to investigate further, although I did not find the cause or a solution… It is really strange:

  • Yes, the IP 149.172.19.157 was my own external IP at home, assigned by my internet provider
  • Using busybox in the same namespace (smart-home), nslookup fetches the correct addresses for dl.bintray.com. I played around with dnsPolicy and dnsConfig for busybox and it always worked correctly, regardless of whether the cluster DNS or my DNS at home (192.168.17.104) was used
  • I connected to the openHAB container with /bin/bash and examined /etc/resolv.conf; it contained the correct settings (cluster DNS first, then my home DNS, correct search domains). arping in this container returned the wrong IP (my own external one). I tried to install nslookup using apt-get in the openHAB container, but all external names such as prod.debian.map.fastly.net were also resolved to my own external IP.

I don’t know what’s wrong here. When I connect to my Node RED container with /bin/bash it has the exact same configuration in /etc/resolv.conf as openHAB. It has nslookup installed and resolves everything correctly: bintray, debian mirror, local names, cluster names. :+1:

The openHAB container only resolves cluster names correctly; local names in my home network and external names are all resolved to my external IP at home. :thinking:

Today it seems that I got it working. I added this line to my deployment YAML:

dnsPolicy: None

It tells the Pod to ignore the DNS settings from the Kubernetes environment; all DNS settings are then supposed to be provided via the dnsConfig field in the Pod spec, and I set this to my home DNS server.
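
In context, the relevant part of my pod template now looks roughly like this (192.168.17.104 is my home DNS server):

    spec:
      dnsPolicy: None          # ignore the cluster's DNS settings entirely
      dnsConfig:
        nameservers:
        - 192.168.17.104       # my home DNS server
      containers:
      - name: openhab
        image: openhab/openhab:2.5.0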

Although it works now, I am wondering why my other pods such as Node RED and busybox can resolve all names correctly while openHAB could not. This makes me feel that there is still an issue somewhere with my microk8s setup in general…

In case you are also running on Kubernetes and have trouble with Homematic, I’ve written a short howto that describes how you can solve the issue you might have with the Homematic bridge within a Kubernetes pod: How to configure Homematic binding in OpenHAB on Kubernetes

Dear all,

I am trying to get OpenHAB running on a 4-node (1 master, 3 workers) Raspberry Pi 4 4GB Kubernetes cluster (Kubespray, K8s version 1.19.2).

Everything works w/o problems when I do not try to use NFS Persistent Volumes to mount the folders /openhab/addons, /openhab/conf and /openhab/userdata.

My NFS share and persistent volumes are working for many other services like Apache/PHP, Grafana, InfluxDB and Prometheus.

OpenHAB always crashes during the initial launch / container creation, and the logs I am getting are the following:

~/k8s-config/services/openhab$ kubectl logs openhab-75c686c96c-grm7t 
++ test -t 0
++ echo false
+ interactive=false
+ set -euo pipefail
+ IFS='
        '
+ '[' limited = unlimited ']'
+ rm -f /openhab/runtime/instances/instance.properties
+ rm -f /openhab/userdata/tmp/instances/instance.properties
+ NEW_USER_ID=9001
+ NEW_GROUP_ID=9001
+ echo 'Starting with openhab user id: 9001 and group id: 9001'
Starting with openhab user id: 9001 and group id: 9001
+ id -u openhab
++ getent group 9001
+ '[' -z '' ']'
+ echo 'Create group openhab with id 9001'
+ groupadd -g 9001 openhab
Create group openhab with id 9001
+ echo 'Create user openhab with id 9001'
+ adduser -u 9001 --disabled-password --gecos '' --home /openhab --gid 9001 openhab
Create user openhab with id 9001
Warning: The home dir /openhab you specified already exists.
Adding user `openhab' ...
Adding new user `openhab' (9001) with group `openhab' ...
adduser: Warning: The home directory `/openhab' does not belong to the user you are currently creating.
The home directory `/openhab' already exists.  Not copying from `/etc/skel'.
+ groupadd -g 14 uucp2
+ groupadd -g 16 dialout2
+ groupadd -g 18 dialout3
+ groupadd -g 32 uucp3
+ groupadd -g 997 gpio
+ adduser openhab dialout
Adding user `openhab' to group `dialout' ...
Adding user openhab to group dialout
Done.
+ adduser openhab uucp
Adding user `openhab' to group `uucp' ...
Adding user openhab to group uucp
Done.
+ adduser openhab uucp2
Adding user `openhab' to group `uucp2' ...
Adding user openhab to group uucp2
Done.
+ adduser openhab dialout2
Adding user `openhab' to group `dialout2' ...
Adding user openhab to group dialout2
Done.
+ adduser openhab dialout3
Adding user `openhab' to group `dialout3' ...
Adding user openhab to group dialout3
Done.
+ adduser openhab uucp3
Adding user `openhab' to group `uucp3' ...
Adding user openhab to group uucp3
Done.
+ adduser openhab gpio
Adding user `openhab' to group `gpio' ...
Adding user openhab to group gpio
Done.
+ initialize_volume /openhab/conf /openhab/dist/conf
+ volume=/openhab/conf
+ source=/openhab/dist/conf
++ ls -A /openhab/conf
+ '[' -z 'html
icons
items
persistence
rules
scripts
services
sitemaps
sounds
things
transform' ']'
+ initialize_volume /openhab/userdata /openhab/dist/userdata
+ volume=/openhab/userdata
+ source=/openhab/dist/userdata
++ ls -A /openhab/userdata
+ '[' -z 'etc
logs
tmp' ']'
++ cmp /openhab/userdata/etc/version.properties /openhab/dist/userdata/etc/version.properties
cmp: /openhab/dist/userdata/etc/version.properties: No such file or directory
+ '[' '!' -z ']'
+ chown -R openhab:openhab /openhab
chown: changing ownership of '/openhab/Readme.txt': Operation not permitted
chown: changing ownership of '/openhab': Operation not permitted

My openhab manifest looks as follows:

~/k8s-config/services/openhab$ cat openhab_service.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openhab
  labels:
    app: openhab
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openhab
  template:
    metadata:
      labels:
        app: openhab
    spec:
      containers:
      - name: openhab
        image: openhab/openhab:2.4.0
        imagePullPolicy: IfNotPresent
        ports:
#        - containerPort: 8101
        - containerPort: 8080
        - containerPort: 8443
#        - containerPort: 5007
#        - containerPort: 22
        volumeMounts:
        -  mountPath: "/openhab"
           name: nfs-pv-openhab
        args:
        - "--run jens"
      volumes:
      - name: nfs-pv-openhab
        persistentVolumeClaim:
          claimName: nfs-pvc-openhab
---
kind: Service
apiVersion: v1
metadata:
  name: openhab
  labels:
    app: openhab
spec:
  ports:
#    - name: language-service
#      protocol: TCP
#      port: 8101
#      targetPort: 8101
    - name: http
      protocol: TCP
      port: 8080
      targetPort: 8080
    - name: https
      protocol: TCP
      port: 8443
      targetPort: 8443
  selector:
    app: openhab
  type: LoadBalancer
  externalTrafficPolicy: Cluster

I am a little bit out of ideas and maybe I am just overlooking something simple/basic. I would be grateful for any hint or advice to get this final piece in place and move my home automation towards a true cluster (still planning to change from 1 master + 3 workers to 2 masters in HA + 2 workers) :slight_smile:

ps. yes, you might argue that this is a bit overengineered for home use, but the whole Kubernetes cluster started as a learning initiative for my projects at work :slight_smile:

Thx,
Jens

Dear all,

Short update - I have tried a different strategy, mounting the three folders with dedicated volumes / volumeClaims. This gave me some progress and some of the folder contents were created.
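
For reference, the per-folder mounts look roughly like this (a sketch; the volume and claim names are just placeholders for whatever you defined):

        volumeMounts:
        - mountPath: /openhab/conf
          name: nfs-pv-openhab-conf
        - mountPath: /openhab/userdata
          name: nfs-pv-openhab-userdata
        - mountPath: /openhab/addons
          name: nfs-pv-openhab-addons
      volumes:
      - name: nfs-pv-openhab-conf
        persistentVolumeClaim:
          claimName: nfs-pvc-openhab-conf
      - name: nfs-pv-openhab-userdata
        persistentVolumeClaim:
          claimName: nfs-pvc-openhab-userdata
      - name: nfs-pv-openhab-addons
        persistentVolumeClaim:
          claimName: nfs-pvc-openhab-addons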

Despite adding full access rights to the NFS share (via the NFS export options as well as at the filesystem level using chmod 777), I still get a bunch of ownership issues.

Maybe I need to try running OpenHAB with a different user that is known to the Kubernetes cluster / NFS server… I will try and report back.

Found a similar thread going into the same direction: https://community.openhab.org/t/migration-to-rancher-2-5-rke/101512/3

Best,
Jens

I'm running OpenHAB on a Kubernetes cluster with nfs-client-provisioner managing the volumeClaims.
One thing I noticed is that the NFS folders need to be owned by the openhab user from within the container.
The user inside the container is controlled by the GROUP_ID and USER_ID environment variables.
So, make sure to set GROUP_ID and USER_ID to match the permissions set on your NFS server for those folders (see the sketch below).
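
In the container spec that is just a couple of environment variables (a sketch; 1000/1000 is only an example, use the UID/GID that owns the exported folders on your NFS server):

        env:
        - name: USER_ID
          value: "1000"     # UID that owns the NFS folders (example value)
        - name: GROUP_ID
          value: "1000"     # GID that owns the NFS folders (example value)
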
So, the next step is to run myopenhab for Google Assistant integration… that is a different challenge :slight_smile:
