openHAB 3.4 restarts itself

Hello,
I need your help :slight_smile:
Since today my openHAB installation keeps restarting, every 5-10 minutes. I run openHAB in Docker. There is enough memory and disk space.
In Docker I get the error message Container openhab3-openhab-1 exited with status code 137; in the openHAB log itself I don't actually see any error messages.

I don't think this is a Docker problem; I think openHAB itself has an issue…

Where can I start? What could be the error?
Thank you very much.
Thomas

Have you made any changes before the problem started to occur?

How much is enough?

It started tonight… I haven't made any changes for a week…

The Pi has 4 GB of RAM, the installation uses only about 1.5 GB, and no swap is in use…
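
For reference, this is roughly how I checked it (a quick sketch; the container name is the one from my first post):

free -h                                        # overall memory and swap usage on the Pi
docker stats --no-stream openhab3-openhab-1    # current memory/CPU usage of the openHAB container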

I have a similar problem, but it started with earlier versions already. I haven't had the time to look into this more closely. I also don't have as many restarts as you, only a couple of times a day.

I have created a rule that notifies me when OH has started, so I can see on my phone how often this happens. My system hosts other applications as well but is equipped with 32 GB of memory, of which about 23 GB are currently in use. However, I noticed that restarts often happen during times of higher system load. I still think it is strange that this leads to restarts; other applications don't behave like that.
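
In case anyone wants to do the same, my rule looks roughly like this (a minimal Rules DSL sketch; it assumes the openHAB Cloud connector is installed for push notifications):

rule "Notify when openHAB has started"
when
    System started
then
    // push a broadcast notification via the openHAB Cloud connector
    sendBroadcastNotification("openHAB (re)started at " + now.toString)
end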

Check systemctl status openhab and journalctl -u openhab. These two should give you some pointers to possible causes, as they capture standard error output. It might be a restart or a crash.

Exit code 137 might be the operating system protecting itself from an out-of-memory condition: mongodb - resolving java result 137 - Stack Overflow
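
Since you are on Docker, you could check whether Docker itself recorded an OOM kill (a quick sketch; the container name is taken from your first post):

docker inspect --format '{{.State.OOMKilled}} {{.State.ExitCode}}' openhab3-openhab-1   # "true 137" would point to the OOM killer
dmesg | grep -i -E 'killed process|out of memory'                                        # kernel log entries from the OOM killer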

Good idea, but with Docker that isn't possible from the host, I think, or is it?

Sorry, I missed that. Do you have any memory limit set for the container?
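
You can check what limit the container is currently running with (a small sketch; 0 means no limit, container name as above):

docker inspect --format '{{.HostConfig.Memory}}' openhab3-openhab-1   # memory limit in bytes, 0 = unlimited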

Yes, I have, but today I started the container without a limit.

For the last 30 minutes the container has been restarting every 2 minutes…

And the worst thing is, the backup hasn't run for a month… great.

Hey guys,
I am currently having a similar issue on my Docker-based instance, not sure if it is related.
I just updated from openHAB 3.3.x to 3.4.4. The system came up with some missing bindings, including the Shelly binding. As soon as I install the Shelly binding, I get the following messages in the console (log:tail):

14:47:20.436 [WARN ] [b.core.thing.binding.BaseThingHandler] - Handler ShellyRelayHandler tried updating the thing status although the handler was already disposed.
14:47:22.164 [WARN ] [b.core.thing.binding.BaseThingHandler] - Handler ShellyRelayHandler tried updating the thing status although the handler was already disposed.
14:47:22.871 [INFO ] [.basic.internal.servlet.WebAppServlet] - Stopped Basic UI
14:47:24.679 [INFO ] [hab.ui.habpanel.internal.HABPanelTile] - Stopped HABPanel
14:47:25.391 [WARN ] [b.core.thing.binding.BaseThingHandler] - Handler ShellyRelayHandler tried updating the thing status although the handler was already disposed.
14:47:25.479 [WARN ] [b.core.thing.binding.BaseThingHandler] - Handler ShellyRelayHandler tried updating the thing status although the handler was already disposed.

After those messages, the system goes down and is restarted by Docker. Memory and disk are sufficient.

Thanks for any Help,
Daniel

Where can I start?

What does the docker logs command show for your openHAB container?
What does the command docker stats show when you monitor it between restarts?
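
For example (container name taken from the first post):

docker logs -f --tail 200 openhab3-openhab-1   # follow the container log until the next restart
docker stats openhab3-openhab-1                # live CPU and memory usage of the container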

docker stats says:

The logs look OK:

+ initialize_volume /openhab/userdata /openhab/dist/userdata
+ volume=/openhab/userdata
+ source=/openhab/dist/userdata
++ ls -A /openhab/userdata
+ '[' -z 'backup
cache
Californium.properties
config
core
etc
ipcamera
jsondb
kar
logs
marketplace
netatmo
secrets
sony
tmp' ']'
++ cmp /openhab/userdata/etc/version.properties /openhab/dist/userdata/etc/version.properties
+ '[' '!' -z ']'
+ chown -R openhab:openhab /openhab
+ sync
+ '[' -d /etc/cont-init.d ']'
+ sync
+ '[' false == false ']'
++ IFS=' '
++ echo gosu openhab tini -s ./start.sh
+ '[' 'gosu openhab tini -s ./start.sh' == 'gosu openhab tini -s ./start.sh' ']'
+ command=($@ server)
+ exec gosu openhab tini -s ./start.sh server
Launching the openHAB runtime...
+ IFS='
	'
++ ls -d /usr/lib/jvm/temurin-11-jdk-armhf
+ export JAVA_HOME=/usr/lib/jvm/temurin-11-jdk-armhf
+ JAVA_HOME=/usr/lib/jvm/temurin-11-jdk-armhf
+ '[' unlimited = unlimited ']'
+ echo 'Configuring Java unlimited strength cryptography policy...'
+ sed -i 's/^crypto.policy=limited/crypto.policy=unlimited/' /usr/lib/jvm/temurin-11-jdk-armhf/conf/security/java.security
Configuring Java unlimited strength cryptography policy...
+ /etc/ca-certificates/update.d/adoptium-cacerts
/etc/ssl/certs/adoptium/cacerts successfully populated.
+ capsh --print
+ grep -E Current:.+,cap_net_admin,cap_net_raw,.+
+ rm -f '/var/lock/LCK..*'
+ rm -f /openhab/userdata/tmp/instances/instance.properties
+ NEW_USER_ID=999
+ NEW_GROUP_ID=994
+ echo 'Starting with openhab user id: 999 and group id: 994'
Starting with openhab user id: 999 and group id: 994
+ id -u openhab
+ initialize_volume /openhab/conf /openhab/dist/conf
+ volume=/openhab/conf
+ source=/openhab/dist/conf
++ ls -A /openhab/conf
+ '[' -z 'automation
html
icons
items
persistence
rules
scripts
services
sitemaps
sounds
things
transform' ']'
+ initialize_volume /openhab/userdata /openhab/dist/userdata
+ volume=/openhab/userdata
+ source=/openhab/dist/userdata
++ ls -A /openhab/userdata
+ '[' -z 'backup
cache
Californium.properties
config
core
etc
ipcamera
jsondb
kar
logs
marketplace
netatmo
secrets
sony
tmp' ']'
++ cmp /openhab/userdata/etc/version.properties /openhab/dist/userdata/etc/version.properties
+ '[' '!' -z ']'
+ chown -R openhab:openhab /openhab
+ sync
+ '[' -d /etc/cont-init.d ']'
+ sync
+ '[' false == false ']'
++ IFS=' '
++ echo gosu openhab tini -s ./start.sh
+ '[' 'gosu openhab tini -s ./start.sh' == 'gosu openhab tini -s ./start.sh' ']'
+ command=($@ server)
+ exec gosu openhab tini -s ./start.sh server
Launching the openHAB runtime...

So I did a restore of all values from 04/23. The error is still there, strange…

While checking through some log files I discovered that my Docker daemon had trouble resolving some(?) DNS names. I have no idea why, as the config was pointing to my router, which should work. Now I have changed the config to point directly to my two DNS servers and the DNS issues seem to be gone. I also haven't had any OH restarts since then, but I cannot conclude that this was a result of the DNS reconfiguration, as I sometimes don't have any restarts for days.
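
For reference, I set the DNS servers in /etc/docker/daemon.json (a sketch of the change; the addresses are placeholders for my two DNS servers, and the Docker daemon needs a restart afterwards):

{
  "dns": ["192.168.1.10", "192.168.1.11"]
}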

As it may take a long time to see whether this really solved my problems, I just thought I'd mention it here so that you can check whether you see similar issues in your installations.

Thanks for sharing.
I will have a look at it.

I have now taken a new SD card, reinstalled everything, set up Docker again and restored the backup from yesterday.
It has been running for 12 hours now without any problems. So I guess the host had a problem, although I didn't do anything and the installation is identical.
The SD card was only 2 months old, so I don't think it was broken.

It's a pity we couldn't solve the problem any other way; it's annoying to have to reinstall everything… I would rather have understood what happened.

I think my problems are also gone. In the past few months, every now and then, I had weird problems with rules that were not executed even though the trigger events took place. I guess the system was busy restarting or doing whatever during that time… So far I haven't had any unexpected restart since the configuration change, and all my rules have executed as expected.

Remember, Java Virtual Machine memory != physical system memory. It is entirely plausible to run out of memory in the JVM that is running openHAB well before getting anywhere close to running out of physical system memory. Check your start command's -Xmx setting and consider increasing it (just not beyond 80% of the system's physical memory).
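
With the official openHAB Docker image you don't edit the start command directly; as far as I know the heap settings can be passed in via the EXTRA_JAVA_OPTS environment variable. A sketch for docker-compose (the 768m value is just an example to adjust to your Pi; volumes, ports and other options are omitted here):

services:
  openhab:
    image: "openhab/openhab:3.4.4"
    environment:
      EXTRA_JAVA_OPTS: "-Xms192m -Xmx768m"   # raise the JVM heap limit; keep it well below the Pi's 4 GB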

Thanks! Yes, that's true of course. But I would expect to see a core dump in case the JVM really runs out of memory, wouldn't I?

I have a few core dumps on my system but they are not related to the many restarts (more restarts than dumps) and they were not caused by running out of memory.

I use a standard Docker deployment without any custom JVM options. I will keep this in mind if I encounter any more problems, thank you!