In the last days before my change today (disabling the network binding), I also did not have such an “explosion” with thousands of these messages.
Another person told me about the impact of the IP camera binding, which I'm also using. But disabling it did not give me any hint that it had an impact here.
That's why I logged into the container interactively, had a look at the running threads, and thought about them.
The large number of arping threads gave me the hint to disable the network binding. That lowered the load completely, as I have shown in the graph. So what we have seen (and what, it seems, someone already found out some years ago — I linked the other cases) is that pinging via the network binding is not useful if you are using it in a Docker container. It makes absolutely no sense that it starts ping threads on every network that exists in Docker when openHAB is configured to use only network XXX.
I guess a little change would be necessary to avoid the unnecessary pings.
For me, if I need it in the future, this means writing a little shell script and running that instead of the network binding.
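A minimal sketch of such a script, assuming openHAB on localhost:8080 with one Switch item per device (the host IPs and item names below are hypothetical, not from my setup):

```shell
#!/bin/sh
# Hypothetical replacement for the network binding's ping threads.
OH_URL="${OH_URL:-http://localhost:8080}"

# Translate a ping exit code (0 = reachable) into an openHAB state.
state_for() {
  [ "$1" -eq 0 ] && echo "ON" || echo "OFF"
}

# check_host <ip> <item>: ping once, push the result to the REST API.
check_host() {
  ping -c 1 -W 2 "$1" >/dev/null 2>&1
  state=$(state_for $?)
  curl -s -X PUT -H "Content-Type: text/plain" \
       -d "$state" "$OH_URL/rest/items/$2/state" >/dev/null
  echo "$1 $state"
}

# Example calls (hypothetical IPs/items), e.g. run from cron every minute:
# check_host 192.168.1.50 TV_Online
# check_host 192.168.1.51 NAS_Online
```

This way you control exactly how many pings run, instead of one thread per Docker network.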
So, my conclusion: the load can be impacted by openHAB itself, even if someone tells you it isn't.
I believe you already filed an issue but if not, make sure you do so. The network binding should not do that.
But I also recommend using the best tool for the job. OH is not a very good IT system monitoring system. If you need to monitor a bunch of devices and services via ping, you might be better off deploying a system like Zabbix, ELK stack, Prometheus, etc.
…I asked a question in the community about the network binding and its behavior (no answers),
and have now also filed an issue for the massive number of threads.
“Best tool for the job”: I used it because it was an easy way to show which devices are running and which are not, so other people in the family can easily see which device to reboot.
I'm also/already using Prometheus/LGTM and will use that tool more for this.
This morning it crashed again with: WARN ] [ab.core.internal.events.EventHandler] - The queue for a subscriber of type 'class org.openhab.core.internal.items.ItemUpdater' exceeds 5000 elements. System may be unstable.
I'm looking further to see if I can find out anything more. Maybe a system limit like open files or others…
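For reference, these are the limits I would check first (a sketch; the exact limits and process lookup depend on your distribution and setup):

```shell
#!/bin/sh
# Limits that can cause thread/file exhaustion inside a container:
ulimit -n    # max open file descriptors for this shell
ulimit -u    # max user processes/threads

# System-wide thread limit (guarded, as /proc may differ per environment):
[ -r /proc/sys/kernel/threads-max ] && cat /proc/sys/kernel/threads-max

# Thread count of the running Java process (the pgrep pattern is an
# assumption; adjust it to your installation):
# ps -o nlwp= -p "$(pgrep -f 'openhab' | head -n 1)"
```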
Just to note this down as well: the Z-Wave module seems to be acting up too. I receive the same data from thermostats and energy meters multiple times, i.e. the same data 20 times, microseconds apart. That could be causing load issues. Sent commands also don't seem to work. Another strange behaviour on my setup.
Copy the file, e.g. to the /tmp folder.
Then log in via SSH and change to the root user (sudo bash).
From that shell you can copy the file from the /tmp folder to the target folder.
Make sure that ownership and permissions are the same afterwards.
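The steps above as a small sketch (the target folder, the `openhab` owner and the 644 mode are assumptions — check what the existing files in your target folder actually use):

```shell
#!/bin/sh
# install_config <src-file> <dst-dir> <owner>: copy a file into place and
# restore ownership and permissions to match the other files there.
install_config() {
  cp "$1" "$2/" &&
  chown "$3" "$2/$(basename "$1")" &&
  chmod 644 "$2/$(basename "$1")"
}

# Run as root (sudo bash), e.g.:
# install_config /tmp/log4j2.xml /var/lib/openhab/userdata/etc openhab
```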
Sorry, I just corrected my previous posts, as the code wasn't working.
After the update to 4.0.2, the part that isn't in the file is the “GroupStateUpdatedEvent” log level; the “ItemState” one is there. So, given the screenshot, do you think it could be that? (I will try it anyway.)
In the file you will find other Logger entries. The new one should be there with the same number of spaces at the beginning of the line as the others. If you are editing the file, use a Linux text editor (not a Windows editor — those can produce wrong CR/LF line endings); if you don't know one and have never used any, take nano. If it is applied correctly AND openHAB is restarted afterwards, it reads the file and uses it. If you have problems, I guess you can also download it from Git with wget to the correct place.
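For illustration, an entry of the kind meant above might look like this in log4j2.xml (the logger name and level here are assumptions — copy the indentation and attribute style from the neighbouring entries in your file):

```xml
<Logger level="INFO" name="openhab.event.GroupStateUpdatedEvent"/>
```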
Same error for me.
“The queue for a subscriber of type ‘class org.openhab.core.internal.items.ItemUpdater’ exceeds 5000 elements”
Intel i3 + clean Proxmox LXC openHAB-only container (Ubuntu 22, 5 GB RAM + 2 GB swap); OH 4.0.2 just stops working after a few days with “exceeds 5000 elements” errors in the log.
Same container with 3.4 was pretty stable.
…I still have the problem. All I've found out so far is that something seems to consume most of the RAM, and in the compose log I then find
openhab_1 | [83100.969s][warning][os,thread] Failed to start the native thread for java.lang.Thread "items-queue"
smem then shows a near-magic ceiling at almost 1M (presumably kB, so roughly 1 GB) when the problem occurs:
2507803 9001 /usr/lib/jvm/java-17-openjd 462104 993956 994094 994236
This time I'm experimenting with a healthcheck to automatically restart the container, but as of today it is not yet working correctly, because when the problem appears it tells me it cannot fork the curl instance. So it seems memory inside the container is the problem; outside the container it would be possible to handle. So I'm still experimenting to find a workaround. The good thing on my side: every morning the problem appears and gives me a new try ;-)
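A host-side sketch of such a watchdog (the service name "openhab", the REST URL and the restart command are assumptions about my setup), run from cron on the host so it does not depend on the memory-starved container being able to fork:

```shell
#!/bin/sh
# Probe openHAB's REST API; -f fails on HTTP errors, -m 10 gives up after 10s.
probe() {
  curl -sf -m 10 "http://localhost:8080/rest/" >/dev/null
}

# watchdog <probe_fn> <restart_fn>: restart only when the probe fails.
watchdog() {
  if "$1"; then
    echo "healthy"
  else
    "$2"
    echo "restarted"
  fi
}

restart_oh() { docker compose restart openhab; }

# Uncomment for real use (e.g. in a cron job on the host):
# watchdog probe restart_oh
```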