Platform information:
- Hardware: Raspberry Pi 4
- OS: Raspberry Pi OS Lite (November 2021)
- Java Runtime Environment: built into the Docker image
- openHAB version: 3.2.0
Issue of the topic: This configuration worked in the past with 3.1.0. I'm unable to access the web interface, and the logs (see below) show the error "Unable to start pax web server: null". I made a backup of my original configuration and then recreated the openhab/conf, openhab/addons, and openhab/userdata directories. I recreated the openHAB container, but the website is still not accessible. I have confirmed the directory permissions are correct. I know openHAB itself is running because scheduled rules are still turning lights on and off. Does anyone know how I can fix this problem?
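Roughly the steps I took to reset the configuration (sketching from memory; the host paths are the ones backing the PersistentVolumes in the manifests below):

```shell
# Base directory backing the PersistentVolumes
OH_BASE=/mnt/raid1/openhab

# Back up the existing configuration before touching anything
tar czf openhab-backup.tar.gz -C "$(dirname "$OH_BASE")" "$(basename "$OH_BASE")"

# Recreate the three mounted directories from scratch
rm -rf "$OH_BASE"/conf "$OH_BASE"/userdata "$OH_BASE"/addons
mkdir -p "$OH_BASE"/conf "$OH_BASE"/userdata "$OH_BASE"/addons

# Restart the deployment so the container repopulates the fresh directories
kubectl rollout restart deployment/openhab
```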
Please post configurations (if applicable): Kubernetes running on Raspberry Pis. The master node runs on its own RPi and openHAB runs on a separate worker node; both are RPi 4s. I'm using Flannel for networking and MetalLB for software load balancing.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openhab
  labels:
    app: openhab
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openhab
  template:
    metadata:
      labels:
        app: openhab
    spec:
      nodeSelector:
        local-volume: openhab
        kubernetes.io/hostname: rpi8gb
      containers:
        - name: openhab
          securityContext:
            privileged: true
          image: registry.lan:5000/openhab:3.2.0
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
            - containerPort: 8443
              name: https
              protocol: TCP
            - containerPort: 8101
              name: console
              protocol: TCP
          volumeMounts:
            - name: aeotec-z-wave-z-stick
              mountPath: /dev/ttyACM0
              readOnly: false
            - name: etc-localtime
              mountPath: /etc/localtime
              readOnly: true
            - name: openhab-conf-volume
              mountPath: /openhab/conf
              readOnly: false
            - name: openhab-userdata-volume
              mountPath: /openhab/userdata
              readOnly: false
            - name: openhab-addons-volume
              mountPath: /openhab/addons
              readOnly: false
      volumes:
        - name: aeotec-z-wave-z-stick
          hostPath:
            path: /dev/ttyACM0
        - name: etc-localtime
          hostPath:
            path: /usr/share/zoneinfo/America/New_York
        - name: openhab-conf-volume
          persistentVolumeClaim:
            claimName: openhab-conf-claim
        - name: openhab-userdata-volume
          persistentVolumeClaim:
            claimName: openhab-userdata-claim
        - name: openhab-addons-volume
          persistentVolumeClaim:
            claimName: openhab-addons-claim
      terminationGracePeriodSeconds: 300
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: openhab-conf-volume
  labels:
    directory: openhab
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local
  local:
    path: /mnt/raid1/openhab/conf
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - rpi8gb
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: openhab-conf-claim
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  selector:
    matchLabels:
      directory: openhab
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: openhab-userdata-volume
  labels:
    directory: openhab
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local
  local:
    path: /mnt/raid1/openhab/userdata
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - rpi8gb
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: openhab-userdata-claim
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  selector:
    matchLabels:
      directory: openhab
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: openhab-addons-volume
  labels:
    directory: openhab
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local
  local:
    path: /mnt/raid1/openhab/addons
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - rpi8gb
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: openhab-addons-claim
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  selector:
    matchLabels:
      directory: openhab
---
apiVersion: v1
kind: Service
metadata:
  name: openhab
spec:
  selector:
    app: openhab
  type: ClusterIP
  ports:
    - protocol: TCP
      port: 8080
      targetPort: http
      name: openhab-http
    - protocol: TCP
      port: 8443
      targetPort: https
      name: openhab-https
    - protocol: TCP
      port: 8101
      targetPort: console
      name: openhab-console
---
apiVersion: v1
kind: Service
metadata:
  name: openhab-http
  annotations:
    metallb.universe.tf/allow-shared-ip: shared
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: http
  selector:
    app: openhab
  loadBalancerIP: 172.16.10.3
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: openhab-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: "openhab.lan"
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: openhab
                port:
                  number: 8080
```
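In case it helps with diagnosis, here are some quick sanity checks against these manifests (they assume kubectl is pointed at this cluster; resource names match the manifests above):

```shell
# The pod should be Running on the rpi8gb node
kubectl get pods -l app=openhab -o wide

# Both Services should list the pod's IP in their endpoints
kubectl get endpoints openhab openhab-http

# Tail the openHAB log from inside the container
# (the log path is the one used by the official openHAB Docker image)
kubectl exec deploy/openhab -- tail -n 20 /openhab/userdata/logs/openhab.log

# Hit port 8080 directly on the pod, bypassing MetalLB and the Ingress;
# if pax web failed to start, the connection should be refused
kubectl port-forward deploy/openhab 8080:8080 &
curl -sI http://localhost:8080/
```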
Logs generated around the failure:
```
2022-02-25 15:42:22.308 [ERROR] [j.pax.web.service.internal.Activator] - Unable to start pax web server: null
java.lang.NullPointerException: null
	at org.ops4j.pax.web.service.jetty.internal.ServerControllerImpl$Stopped.start(ServerControllerImpl.java:533) ~[?:?]
	at org.ops4j.pax.web.service.jetty.internal.ServerControllerImpl.start(ServerControllerImpl.java:83) ~[?:?]
	at org.ops4j.pax.web.service.jetty.internal.ServerControllerFactoryImpl$1.start(ServerControllerFactoryImpl.java:164) ~[?:?]
	at org.ops4j.pax.web.service.jetty.internal.ServerControllerImpl$Unconfigured.configure(ServerControllerImpl.java:784) ~[?:?]
	at org.ops4j.pax.web.service.jetty.internal.ServerControllerImpl.configure(ServerControllerImpl.java:99) ~[?:?]
	at org.ops4j.pax.web.service.internal.Activator.updateController(Activator.java:373) ~[?:?]
	at org.ops4j.pax.web.service.internal.Activator.lambda$scheduleUpdateFactory$1(Activator.java:299) ~[?:?]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
	at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:829) [?:?]
2022-02-25 15:42:33.841 [INFO ] [.core.model.lsp.internal.ModelServer] - Started Language Server Protocol (LSP) service on port 5007
2022-02-25 15:42:39.193 [INFO ] [e.automation.internal.RuleEngineImpl] - Rule engine started.
```