I tried shell:threads on the Karaf console command line and saw that one such thread was started for each existing Docker network. So I'm not sure whether some add-ons ignore that openHAB is running on the host network.
"SocketListener(192-168-32-1.local.)" Id=315 in RUNNABLE (running in native)
at java.base@17.0.9/sun.nio.ch.DatagramChannelImpl.receive0(Native Method)
at java.base@17.0.9/sun.nio.ch.DatagramChannelImpl.receiveIntoNativeBuffer(DatagramChannelImpl.java:750)
at java.base@17.0.9/sun.nio.ch.DatagramChannelImpl.receive(DatagramChannelImpl.java:728)
at java.base@17.0.9/sun.nio.ch.DatagramChannelImpl.trustedBlockingReceive(DatagramChannelImpl.java:666)
at java.base@17.0.9/sun.nio.ch.DatagramChannelImpl.blockingReceive(DatagramChannelImpl.java:635)
at java.base@17.0.9/sun.nio.ch.DatagramSocketAdaptor.receive(DatagramSocketAdaptor.java:240)
at java.base@17.0.9/java.net.DatagramSocket.receive(DatagramSocket.java:700)
at javax.jmdns.impl.SocketListener.run(SocketListener.java:57)
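Roughly how I counted them on the console (assuming the Karaf shell's built-in grep and wc commands are available, which should be the case on a default installation):
shell:threads | grep SocketListener | wc -l
shell:threads | grep JmDNS | wc -l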
If your container sees more than one network, you might see more than one in Main UI/Network settings as well.
Limit those settings to just the one IP you use.
Just a guess as I am no Docker user/expert, but this applies to all installations where the machine has more than one interface (e.g. LAN + WiFi).
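If I remember correctly, those UI settings end up in the org.openhab.network configuration, so pinning them from a file should look roughly like this in conf/services/runtime.cfg (the address is only an example, adjust it to your main IP, and the key names are from memory):
# example only, assuming the org.openhab.network PID used by the Network settings
org.openhab.network:primaryAddress=192.168.1.100
org.openhab.network:useOnlyOneAddress=true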
Hi @hmerk
I limited the network in the UI before the update, that's correct.
After the update, the GUI is now unavailable.
So I guess that configuration was not changed; it should still be limited to my main IP.
Yeah, this could possibly be the reason. Is there a way to switch this off / back to the former method via Karaf? For a short time after a restart I have access to Karaf before everything hangs.
Also, it would be good to write something about this change for Docker users in the release notes; there is already a headline for “Breaking Changes”.
Maybe there is a real possibility, like the EXTRA_JAVA_OPTS variable, to hard-set a network interface that Java listens on. That might also do it, so that Java is forbidden from using any other interface that exists…
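Something like this is what I mean, just as a sketch: EXTRA_JAVA_OPTS is the variable the official Docker image passes on to the JVM, but I don't know of a JVM flag that really pins Java to a single interface; -Djava.net.preferIPv4Stack=true would at least keep it off the IPv6 stack.
docker run --name openhab --net=host \
  -e EXTRA_JAVA_OPTS="-Djava.net.preferIPv4Stack=true" \
  openhab/openhab:4.1.0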
To test if this is caused by the add-on suggestion service (mDNS and/or UPnP), you can add the following lines to the conf/services/addons.cfg file before starting OH:
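# mDNS / UPnP finder toggles (names follow the same pattern as the suggestionFinderIp parameter)
suggestionFinderMdns = false
suggestionFinderUpnp = false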
The suggestion finder services will not start and use these underlying services in that case. So if they were not used by any of your add-ons before, they will not be started at all.
The same options are available from the UI, but setting them in the addons.cfg file allows you to avoid starting them at all.
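I have not tried this myself, but if you only get a short window on the Karaf console, the same properties should also be settable there, assuming addons.cfg maps to the org.openhab.addons configuration PID:
config:property-set -p org.openhab.addons suggestionFinderMdns false
config:property-set -p org.openhab.addons suggestionFinderUpnp false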
But “top -H” still shows the mass of SocketListener and JmDNS threads. I'm not sure whether these two parameters are working in the expected way…
But one positive thing: the CPU usage is a lot lower now, about 15 minutes later, so the GUI is usable with this.
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
27 openhab 20 0 6034604 1.5g 25812 S 11.3 19.9 39:14.56 java
2678 root 20 0 9812 3280 2740 R 0.3 0.0 0:00.11 top
1 openhab 20 0 1944 436 372 S 0.0 0.0 0:00.16 tini
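For reference, this is roughly how I get the counts inside the container (PID 27 is the Java process from the output above; assumes procps top and grep are available in the image):
top -b -H -n 1 -p 27 | grep -c SocketListener
top -b -H -n 1 -p 27 | grep -c JmDNS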
This time I'm seeing no more SocketListener threads, but still JmDNS threads. So it could be that the mass of SocketListener threads, which are gone now, was causing the issue.
Update: the SocketListener threads are there again. The CPU is as high as before. Just from waiting, no change.
Disabling the suggestion finders → the thread count does not come down. It seems to be an issue with running under Docker with the network in host mode.
Update: I downgraded to 4.0.3 and my smart home works fine again. So for anyone who has the same problem: downgrading is no problem.
If someone has an idea how to disable the SocketListener/JmDNS threads, I can run 4.1.0 again and try it…
On 4.0.3 we have these thread counts for comparison:
Hi @hmerk
Yes, could be. But the unavailability could also come from the hundreds of threads that are eating the available CPU and RAM. It's like with other add-ons I had before that were pinging: it is useless for those pings to be done on hundreds of interfaces that exist under Docker, and it is just as useless to discover things on interfaces that are not the main interface. So if it were possible to limit the interface the Java process communicates on, everything would work fine.
For the pings I opened an issue and a workaround is known. For this I just don't know any workaround to NOT open hundreds of “SocketListener” and “JmDNS” threads, because the workaround in addons.cfg from @Mherwege does not work. I therefore grepped the counts of these threads and got 90/213, which makes no sense (see the rough sketch below).
Under Docker you have other containers which have private interfaces. It makes no sense to say that you have to give openHAB the host network so that UPnP can work, and then let it run not only on the main network card's interface but on hundreds of interfaces where no broadcasts/multicasts from UPnP etc. will ever be received.
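Rough sketch of what I compared (the thread counts themselves come from top -H as shown earlier; adjust to your setup):
docker network ls --format '{{.Name}}' | wc -l   # number of Docker networks on the host
ip -o addr | wc -l                               # addresses the host network namespace (and thus a host-mode container) exposes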
I was merely trying to figure out if the issue now appears due to these suggestion finders.
Just to be sure: when you disable the finders, if you can get into the Karaf console, can you verify that they are effectively disabled? Do
bundle:list | grep Finder
to find out which ones are running.
If all are running, there should be 4 of these. You cannot disable the process one (it looks for running processes on the current machine). But you should be able to disable the others, which does not necessarily mean the underlying services stop running, as they may still be in use for another reason outside of the finders.
The IP-based finder could potentially create sockets to listen for incoming traffic (but these will time out and not get recreated). So disabling that one as well could be interesting. The parameter would be:
suggestionFinderIp = false
Again, I think the root cause may indeed be that the underlying code tries too many interfaces. But that is not because of the finders.
I don't use Docker, so I have no way to test.
About 10 minutes later I looked at the CPU usage of the container: it is about 10-20%. And the GUI is now working on 4.1.0, with the 3 parameters in place, as fast as under 4.0.3!
I'm thinking this too, but under Docker you will have this if you are not using Docker for only openHAB. That would be like using a container ship for only one container…
…and CPU/RAM are eaten and the GUI no longer reacts… I'm not sure where these >300 threads (counted with wc -l) are coming from, because I uninstalled the UPnP add-on, so they cannot come from that… What I know for sure is that the behaviour does not exist in 4.0.X and is reproducible in 4.1.0.
Hi @Scriptwriter
I upgraded to 4.1.0 today and experienced exactly the same thing. I’m using docker on Ubuntu in host mode for OH. The upgrade was successful, but the UI was unresponsive at first. After a few minutes I was able to use it (more or less).
CPU usage was extremely high, affecting the whole system. And I also saw a lot of these SocketListener(…) threads.
Didn't have time to investigate further, so I downgraded to 4.0.3 for the time being, which also solved it for me.
@hmerk
My guess is that the UI is unresponsive because of the high CPU usage. In my case I'm on an 8-core Intel CPU (compared to @Scriptwriter's Raspberry Pi). The UI does get started in my case, but is also very slow and hard to use.
For me it's quite similar. I upgraded from 4.0.4 to 4.1.0 in Docker on a Synology NAS with host network and got CPU usage like I've never seen before. The GUI was also very slow or not responding at all. Fired 4.0.4 back up and it's running without issues.