KNX-Binding on Kubernetes-Cluster

I'm just testing openHAB to see if it is right for me.

I have deployed openHAB on my Kubernetes cluster and am failing to connect to my MDT KNX gateway.

First, here is my openHAB deployment:
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: openhab
  name: openhab
  namespace: openhab
  annotations:
    metallb.universe.tf/allow-shared-ip: openhab
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.1.24
  ports:
    - name: webinterface
      port: 80
      targetPort: 8080
    - name: knx
      port: 3671
      targetPort: 3671
    - name: ssh
      port: 22
      targetPort: 8101
  selector:
    app: openhab
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: openhab-udp
  name: openhab-udp
  namespace: openhab
  annotations:
    metallb.universe.tf/allow-shared-ip: openhab
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.1.24
  ports:
    - name: knx-udp
      port: 3671
      protocol: UDP
      targetPort: 3671
  selector:
    app: openhab
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: openhab
  name: openhab
  namespace: openhab
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openhab
  template:
    metadata:
      labels:
        app: openhab
    spec:
      # hostNetwork: true          
      containers:
        - image: openhab/openhab:3.3.0
          name: openhab
          securityContext:
            privileged: true
          ### Kubernetes Probes | begin ###
          livenessProbe:
            initialDelaySeconds: 420
            periodSeconds: 20
            tcpSocket:
              port: 8080
          ### Kubernetes Probes |  end  ###
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
              name: webinterface
              protocol: TCP
            - containerPort: 3671
              name: knx
              protocol: TCP
            - containerPort: 3671
              name: knx-udp
              protocol: UDP
            - containerPort: 8101
              name: ssh
              protocol: TCP
          resources: {}
          env:
            - name: "TZ"
              value: "Europe/Berlin"
          volumeMounts:
            - mountPath: /openhab/conf
              name: kubernetes-pvc
              subPath: openhab/conf
            - mountPath: /openhab/userdata
              name: kubernetes-pvc
              subPath: openhab/userdata
            - mountPath: /openhab/addons
              name: kubernetes-pvc
              subPath: openhab/addons
            - name: etc-localtime
              mountPath: /etc/localtime
            - name: etc-timezone
              mountPath: /etc/timezone
      restartPolicy: Always
      volumes:
        - name: kubernetes-pvc
          persistentVolumeClaim:
            claimName: kubernetes-pvc
        - name: etc-localtime
          hostPath:
            path: /usr/share/zoneinfo/Europe/Berlin
        - name: etc-timezone
          hostPath:
            path: /usr/share/zoneinfo/Europe/Berlin
---

I tried to include the gateway both as a tunnel and as a router. However, it only shows the Online status when I connect it as a router:

UID: knx:ip:ea8e42ceb7
label: KNX/IP Gateway
thingTypeUID: knx:ip
configuration:
  useNAT: false
  readRetriesLimit: 3
  autoReconnectPeriod: 60
  type: ROUTER
  localSourceAddr: 0.0.0
  readingPause: 50
  portNumber: 3671
  responseTimeout: 10

My DALI gateway then also shows the status Online, but only as long as I don't specify a physical address.

UID: knx:device:ea8e42ceb7:f2d47f12ec
label: KNX DALI GW
thingTypeUID: knx:device
configuration:
  pingInterval: 600
  readInterval: 0
  fetch: false
bridgeUID: knx:ip:ea8e42ceb7
channels:
  - id: "2"
    channelTypeUID: knx:switch
    label: "2"
    description: ""
    configuration:
      ga: 1/1/122+<1/0/122

Unfortunately, I can't switch the light with the created Switch, and I don't get any feedback either.

Search for information about multicast on Kubernetes and multicast for KNX.

In Router mode there is no simple way to ensure the device is up, so openHAB fakes the Online state.
Use Tunnel mode instead, and make sure to set the correct IP for both the openHAB container (localIp) and the gateway (ipAddress).
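A minimal sketch of such a Tunnel bridge configuration (the addresses and the thing UID are placeholders for your own setup):

```yaml
UID: knx:ip:example          # placeholder thing UID
label: KNX/IP Gateway
thingTypeUID: knx:ip
configuration:
  type: TUNNEL
  ipAddress: 192.168.0.10    # placeholder: address of the KNX/IP gateway
  localIp: 192.168.0.20      # placeholder: address openHAB can actually bind to
  portNumber: 3671
```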

Thanks first of all for your responses.

@Udo_Hartmann
Of course, I have already tried to establish a connection via tunnel.

UID: knx:ip:ea8e42ceb7
label: KNX/IP Gateway
thingTypeUID: knx:ip
configuration:
  useNAT: false
  readRetriesLimit: 4
  ipAddress: 192.168.1.254
  autoReconnectPeriod: 60
  localIp: 192.168.1.24
  localSourceAddr: 1.1.255
  readingPause: 50
  type: TUNNEL
  portNumber: 3671
  responseTimeout: 10
events.log
2022-11-23 20:24:28.569 [INFO ] [ab.event.ThingStatusInfoChangedEvent] - Thing 'knx:ip:ea8e42ceb7' changed from UNKNOWN to OFFLINE (COMMUNICATION_ERROR): connecting from 192.168.1.24:0 to 192.168.1.254:3671: Cannot assign requested address (Bind failed)
2022-11-23 20:24:53.417 [INFO ] [ab.event.ThingStatusInfoChangedEvent] - Thing 'knx:ip:ea8e42ceb7' changed from OFFLINE (COMMUNICATION_ERROR): connecting from 192.168.1.24:0 to 192.168.1.254:3671: Cannot assign requested address (Bind failed) to OFFLINE
2022-11-23 20:24:53.418 [INFO ] [ab.event.ThingStatusInfoChangedEvent] - Thing 'knx:ip:ea8e42ceb7' changed from OFFLINE to UNKNOWN
openhab.log
2022-11-23 20:24:28.555 [ERROR] [et/IP Tunneling 192.168.1.254:3671] - communication failure on connect
java.net.BindException: Cannot assign requested address (Bind failed)
	at java.net.PlainDatagramSocketImpl.bind0(Native Method) ~[?:?]
	at java.net.AbstractPlainDatagramSocketImpl.bind(AbstractPlainDatagramSocketImpl.java:135) ~[?:?]
	at java.net.DatagramSocket.bind(DatagramSocket.java:394) ~[?:?]
	at java.net.DatagramSocket.<init>(DatagramSocket.java:244) ~[?:?]
	at tuwien.auto.calimero.knxnetip.ClientConnection.connect(ClientConnection.java:177) ~[?:?]
	at tuwien.auto.calimero.knxnetip.KNXnetIPTunnel.<init>(KNXnetIPTunnel.java:171) ~[?:?]
	at tuwien.auto.calimero.knxnetip.KNXnetIPTunnel.<init>(KNXnetIPTunnel.java:163) ~[?:?]
	at org.openhab.binding.knx.internal.client.IPClient.getConnection(IPClient.java:110) ~[?:?]
	at org.openhab.binding.knx.internal.client.IPClient.createKNXNetworkLinkIP(IPClient.java:93) ~[?:?]
	at org.openhab.binding.knx.internal.client.IPClient.establishConnection(IPClient.java:80) ~[?:?]
	at org.openhab.binding.knx.internal.client.AbstractKNXClient.connect(AbstractKNXClient.java:182) ~[?:?]
	at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:829) [?:?]
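The stack trace bottoms out in a plain UDP bind failure: `localIp` is set to the MetalLB VIP 192.168.1.24, but inside the pod that address is not assigned to any interface, so the Calimero client socket cannot bind to it. A small Python sketch reproduces the same error class; 192.0.2.1 (a TEST-NET documentation address) stands in for an address that is assumed not to be configured locally:

```python
import socket


def can_bind_udp(ip: str) -> bool:
    """Try to bind a UDP socket to `ip`; return False on
    'Cannot assign requested address' (EADDRNOTAVAIL)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.bind((ip, 0))  # port 0: let the OS pick any free port
        return True
    except OSError:
        return False
    finally:
        sock.close()


# The wildcard address can always be bound.
print(can_bind_udp("0.0.0.0"))
# 192.0.2.1 is assumed not to be a local address, so the bind
# fails just like Calimero's bind to the MetalLB VIP does in the pod.
print(can_bind_udp("192.0.2.1"))
```

This is why the error disappears once the socket is no longer forced onto an address the pod does not own.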

I have also tried it without a "Local Device Address", but that was also unsuccessful. :unamused:

@stamate_viorel
I could not find a solution for multicast in connection with Kubernetes either.

Since my last post, I’ve been playing around with the settings and just flipped the “Use NAT” switch.

UID: knx:ip:ea8e42ceb7
label: KNX/IP Gateway
thingTypeUID: knx:ip
configuration:
  useNAT: true
  readRetriesLimit: 3
  ipAddress: 192.168.1.254
  autoReconnectPeriod: 60
  type: TUNNEL
  localSourceAddr: 0.0.0
  readingPause: 50
  portNumber: 3671
  responseTimeout: 10

What can I say: now it works. Presumably, with useNAT enabled the gateway replies to the source address and port it actually sees, so openHAB no longer has to bind its socket to the LoadBalancer IP.

Thank you all for your efforts.

Is the container in bridged mode? openHAB has some bindings that communicate via multicast. If you want to use multicast, the container has to run in host mode, because the Docker bridge does not route multicast.
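In Kubernetes, the equivalent of Docker's host mode is `hostNetwork: true` in the pod spec (it is already present, commented out, in the Deployment above). A sketch of the relevant part, assuming the node's ports 8080, 8101, and 3671 are free; with host networking the pod can receive the KNXnet/IP routing multicast group 224.0.23.12:3671 directly on the node's interface, which CNI bridges typically do not forward:

```yaml
spec:
  template:
    spec:
      hostNetwork: true                    # share the node's network namespace
      dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS resolution working
      containers:
        - name: openhab
          image: openhab/openhab:3.3.0
```

Note that with `hostNetwork: true` the container ports are bound on the node itself, so the MetalLB Services above would no longer be needed for this pod.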