OH3 occasionally killing my system - log files exploding to 100% root

Hi,
Every now and then my OH3 (latest milestone, I guess M3) runs out of SD card space because the log files (syslog and daemon.log) explode until the whole system is down. It always results in openHAB losing part of its configuration until I remove the log files and restore my OH3 config from backup.

Any ideas - is this known, and what can I do?

Below is only a screenshot, as I cannot open the 5 GB file in nano. Many thanks if there is a solution to this.

Please provide some more information.
Hardware/OS/full Java version/installed bindings.

Sorry - RPI4, 4GB RAM
openjdk 11.0.11 2021-04-20 LTS

Bindings: Astro, MQTT, Resol, HTTP, NTP, SysInfo, JeeLink, OpenWeather, Mail, Pushover, Z-Wave

Is this OpenJDK or Zulu 11 ?
What OS is your Pi running ?

Oh - I missed the third line of the `java --version` output.
It's Zulu, which from my memory was the recommended JVM for OH3.

openjdk 11.0.11 2021-04-20 LTS
OpenJDK Runtime Environment Zulu11.48+21-CA (build 11.0.11+9-LTS)
OpenJDK Client VM Zulu11.48+21-CA (build 11.0.11+9-LTS, mixed mode)

And it's Raspbian Buster, with the latest updates/upgrades.

Thanks, so the prerequisites are met. You are using the Resol, HTTP, JeeLink and Pushover bindings, which I don't, so my rough guess would be that one of those bindings is causing the issue.
Your screenshot shows NPEs in conjunction with ThreadPoolExecutorWorker, so I would start by raising the thread pool limit; using the search will give you some posts on how to do that in the configs.
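For reference, thread pool sizes can be raised in `conf/services/runtime.cfg`. The pool names and sizes below are examples only - check the commented sample entries in your own runtime.cfg for the pools your version actually exposes:

```
# Hypothetical additions to conf/services/runtime.cfg - names/sizes are examples
org.openhab.threadpool:thingHandler=10
org.openhab.threadpool:discovery=10
org.openhab.threadpool:safeCall=15
```

A restart of openHAB is needed for the new limits to take effect.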

And please verify you are on openHAB 3.2M3, as it has a fix for bindings closing shared thread pools.

OK thanks… will do the raising part. But it's hard to track, as it only shows up every few weeks… The worst thing is that it completely fills my SD card to zero available disk space, which results in openHAB losing all configs that were GUI-based rather than file-based. A backup brings it all back, and I do backups quite frequently now… so recovery is easy, but also annoying sometimes.

Yes, I'm always on the latest version, which we know is sometimes not the best idea…

(screenshot)

I am on M3 also and never had issues with the milestone versions…

With a proper openHABian-based setup, you wouldn't be having the problem of logs filling the filesystem, and all the subsequent issues this is causing.

Have a look at logrotate and its configuration. A maximum size for the files daemon.log and syslog can be set, after which they are rotated. Similar limits can be configured for openHAB's own log files in the appender configuration files.
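As a sketch, a logrotate drop-in along these lines would cap those two files. The file paths, size and count are assumptions - check your distribution's existing rsyslog rotation config (usually `/etc/logrotate.d/rsyslog`) before adding overrides:

```
# Hypothetical /etc/logrotate.d/ drop-in capping syslog and daemon.log
/var/log/syslog /var/log/daemon.log {
    maxsize 50M
    rotate 4
    daily
    compress
    missingok
    notifempty
}
```

With `maxsize`, the files are rotated as soon as they exceed the limit, even before the `daily` schedule would trigger.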

@mstormi
maybe openHABian would save me from these subsequent issues, but it's still a problem (from what I've read so far) within openHAB itself. So it would be best to fix it there; of course I should start limiting the log files to tackle it from the other side as well.

Furthermore, I have been looking at openHABian for a long time, but if you have several parallel activities on the machine, openHABian is too limiting compared to the plain Buster image.

Kind Regards

You did not mention this fact in your first post.
Even though your screenshot shows Karaf entries, the culprit could be outside openHAB.
Did you check whether some other process is "eating up" your memory?
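A quick way to check is to sort processes by resident memory from the shell (a sketch; the column selection and the top-5 cut-off are just one option):

```shell
# List the 5 processes with the largest resident set size (RSS, in KiB)
ps -eo pid,rss,comm --sort=-rss | head -n 6

# Overall memory and swap usage for context
free -h
```

Running this periodically (or when the logs start growing) helps spot a non-openHAB process that is steadily gaining memory.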

Hm, my guess is these parallel tasks are irrelevant here, as the logs show pages of Java-related activity and the rest is Python/PHP-based development.

Except for zigbee2mqtt, which is Node.js. Not sure if JavaScript could lead to this as well.

It simply is a bad idea to put your home at risk by sharing your openHAB machine with any other programs whatsoever, as each and every one of them can conflict with OH over various resources (RAM, disk and swap space, package and library dependencies, networking, just to name some) and can break your system, just like what openHAB is doing to your machine right now.

Then again, openHABian does not limit you anywhere - that's a plainly wrong statement, and frankly I dislike reading it from people who have just deliberately killed their own system by de-selecting openHABian because they think they know better how to best set up a server.

Have a look at the output of `shell:info` from the Karaf console (see the docs on how to reach the console) and see whether the thread count increases over time. It will go up and down, but should not show an upward trend.
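If you prefer watching from the OS side rather than the Karaf console, the thread count of the Java process can be sampled from `/proc` (a sketch; the `pgrep` pattern is an assumption, adjust it to match how your install names the process):

```shell
# Print a timestamped thread count for the openHAB Java process
thread_count() {
  # /proc/<pid>/task contains one entry per thread of the process
  ls "/proc/$1/task" | wc -l
}

PID=$(pgrep -f 'openhab' | head -n 1)
if [ -n "$PID" ]; then
  echo "$(date '+%F %T') threads: $(thread_count "$PID")"
fi
```

Run it e.g. from cron every few minutes and append the output to a file; a steadily climbing number points at a thread leak.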

As others have explained, you can set up the logs to rotate. I think I have mine set to a max of 10 MB, keeping only 4 of the old files before they are thrown away - 40 MB max for each log.
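In openHAB 3, the openHAB log files themselves are controlled by `userdata/etc/log4j2.xml`. A size-based rollover along these lines would match the 10 MB / keep-4 setup described above - the appender name, file paths and layout pattern here are assumptions, so compare with the log4j2.xml shipped with your install before editing:

```xml
<!-- Sketch of a size-capped rolling appender for openhab.log:
     rotate at 10 MB, keep at most 4 archived copies. -->
<RollingRandomAccessFile name="LOGFILE"
    fileName="${sys:openhab.logdir}/openhab.log"
    filePattern="${sys:openhab.logdir}/openhab.log.%i">
  <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%-5.5p] [%-36.36c] - %m%n"/>
  <Policies>
    <SizeBasedTriggeringPolicy size="10MB"/>
  </Policies>
  <DefaultRolloverStrategy max="4"/>
</RollingRandomAccessFile>
```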

  1. Prevent the logs from growing out of control.
  2. Look at what is flooding the logs and solve the root cause.

If they fill up all your spare space within a given time frame, I would guess there is something that needs looking at.
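A simple watchdog can at least warn before the filesystem fills completely (a sketch; the 90% threshold and the root mount point are assumptions):

```shell
# Warn when the root filesystem usage exceeds a chosen threshold
THRESHOLD=90
USAGE=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')
if [ "$USAGE" -ge "$THRESHOLD" ]; then
  echo "WARNING: / is at ${USAGE}% - check /var/log for runaway logs"
else
  echo "OK: / is at ${USAGE}%"
fi
```

Scheduled from cron with the warning sent via mail or Pushover, this buys time to intervene before openHAB starts losing its configuration.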

Hmm, what is Java going to do when it tries to carry out work and finds the host resources have been taken by some other process …

He who complains about lack of resources is not usually the thief.


I cannot emphasize this enough (and I’m gonna steal that saying :wink: ). When it comes down to a lack of resources problem, the thing complaining is just what happened to need more resources first, not necessarily the thing that is running amok consuming more resources than is necessary.

A good while back there was a problem running an extension/plugin for Grafana on an RPi which consumed more and more resources quietly in the background. Eventually the OS would give up and just kill openHAB. The problem had nothing to do with openHAB but openHAB was the one showing the problems.

When one deviates from a standard and well known setup like openHABian, they drastically limit the amount and the quality of the support we here can provide. You are running a whole lot of stuff on a relatively limited set of hardware. What’s causing the actual problem? :man_shrugging: We’re not experts in all that other stuff you are running. We don’t even know what all that other stuff is. Are they causing a resource constraint? :woman_shrugging:


openHAB is usually the one showing up because it's usually the largest process / resource hog, which is what Linux prefers to kill when in "desperate" (out-of-memory) mode. The statements in the link given above are likewise true if you don't use openHABian but some other OS or hardware: it's not fair to ignore the recommendations in the first place and then ask for help here when you start seeing the effects of your decision.
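Whether the kernel's OOM killer was actually involved can usually be verified in the kernel log (a sketch; reading dmesg may require root on some systems):

```shell
# Look for OOM-killer activity in the kernel ring buffer
dmesg | grep -iE 'out of memory|killed process' || echo "no OOM events found"
```

If openHAB's Java process shows up there as the killed process, that confirms the "largest hog gets killed" scenario rather than a crash inside openHAB itself.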

On the other hand, openHABian is not the only officially supported way to install and run openHAB. If someone followed any one of the sets of instructions in the docs, including installing via apt, manually, Docker, etc., we should do our best to help where we can. But the more someone has running alongside openHAB, the less we will be able to help. Still, I don't want to discourage users from asking for help in the first place.


Sure, and I didn’t mean to say that the recommendation is openHABian.
To be clear, the recommendation I refer to, which the OP ignored, was to run OH on a dedicated system (which by design ensures there cannot be resource conflicts - well, to the extent possible).
Asking for help while omitting important information wasn't fair, because it wasted my free time and that of other volunteers who wouldn't want to provide support in cases like this.
I don't want to forbid such posts or discourage anyone from asking either, but the least I'd expect from any poster is to point out the shared usage broadly and up front.
That the OP did not do; instead he declared his parallel activity to be irrelevant.
That is bad style, disappointing and demotivating.
My humble opinion, YMMV.