openHAB 4.0.1 and Network pingdevice on a Docker install results in massive threads

Hi all,
I'm pinging some devices from my openHAB 4.0.1 installation (Docker container). The installation was migrated from an older 3.4.3.

Here is the config of one of my ping devices:

UID: network:pingdevice:4b7f079f3f
label: InternetRouter
thingTypeUID: network:pingdevice
configuration:
  hostname: 192.168.2.254
  refreshInterval: 60000
  retry: 1
  timeout: 5000

I'm investigating why my Docker container causes so much load. The container runs in host mode, and in the network configuration the host's main IP itself is set as the IP for openHAB.

What I don't understand is why openHAB spawns ping threads on all the source networks of other container setups (see the br-* interfaces) that exist on the same server but have no real relation to openHAB.
(openHAB sees them because of host mode, which I need for UPnP.)

nobody   3423789  0.0  0.0   9140  4884 ?        S    12:11   0:00 arping -w 5 -C 1 -i br-e77d9bdcb692 192.168.2.254
nobody   3423791  1.0  0.0   9140  4852 ?        S    12:11   0:00 arping -w 5 -C 1 -i br-3a8bcb7e1a20 192.168.2.254
nobody   3423795  1.0  0.0   9140  4876 ?        S    12:11   0:00 arping -w 5 -C 1 -i br-5dbd2b0f47cf 192.168.2.254
nobody   3423803  1.0  0.0   9140  4848 ?        S    12:11   0:00 arping -w 5 -C 1 -i br-4c02ad546123 192.168.2.254
nobody   3423811  1.0  0.0   9140  4888 ?        S    12:11   0:00 arping -w 5 -C 1 -i br-c8c8be0340ae 192.168.2.254
nobody   3423813  1.0  0.0   9140  4956 ?        S    12:11   0:00 arping -w 5 -C 1 -i br-72ac99832aa5 192.168.2.254
nobody   3423815  1.0  0.0   9140  4836 ?        S    12:11   0:00 arping -w 5 -C 1 -i br-d125af02b9b4 192.168.2.254
nobody   3423816  1.0  0.0   9140  4884 ?        S    12:11   0:00 arping -w 5 -C 1 -i br-77b0d455dbc2 192.168.2.254
nobody   3423818  1.0  0.0   9140  4836 ?        S    12:11   0:00 arping -w 5 -C 1 -i br-37b7e1374480 192.168.2.254
nobody   3423820  2.0  0.0   9140  4848 ?        S    12:11   0:00 arping -w 5 -C 1 -i br-ad8705eb98cb 192.168.2.254
nobody   3423823  1.0  0.0   9140  4928 ?        S    12:11   0:00 arping -w 5 -C 1 -i br-c6afe6811015 192.168.2.254
nobody   3423825  1.0  0.0   9140  4864 ?        S    12:11   0:00 arping -w 5 -C 1 -i br-b3d1fee00881 192.168.2.254
nobody   3423830  2.0  0.0   9140  4852 ?        S    12:11   0:00 arping -w 5 -C 1 -i br-8a8f9f51418c 192.168.2.254
nobody   3423838  1.0  0.0   9140  4848 ?        S    12:11   0:00 arping -w 5 -C 1 -i br-eb09dae5f21e 192.168.2.254
nobody   3423844  2.0  0.0   9140  4892 ?        S    12:11   0:00 arping -w 5 -C 1 -i br-f89f1a0e504d 192.168.2.254
nobody   3423846  1.0  0.0   9140  4876 ?        S    12:11   0:00 arping -w 5 -C 1 -i br-b6dd5302b3c3 192.168.2.254
nobody   3423847  2.0  0.0   9140  4764 ?        S    12:11   0:00 arping -w 5 -C 1 -i br-32b28ae41ba6 192.168.2.254
nobody   3423849  1.0  0.0   9140  4916 ?        S    12:11   0:00 arping -w 5 -C 1 -i br-bf53bf1036dd 192.168.2.254
nobody   3423850  1.0  0.0   9140  4784 ?        S    12:11   0:00 arping -w 5 -C 1 -i br-2db6488b5fc1 192.168.2.254
nobody   3423857  2.0  0.0   9140  4860 ?        S    12:11   0:00 arping -w 5 -C 1 -i br-225545b56f9e 192.168.2.254
nobody   3423861  2.0  0.0   9140  4808 ?        S    12:11   0:00 arping -w 5 -C 1 -i br-c1e8628f5e78 192.168.2.254
nobody   3423862  2.0  0.0   9140  4796 ?        S    12:11   0:00 arping -w 5 -C 1 -i br-ba49e533177b 192.168.2.254
nobody   3423864  1.0  0.0   9140  4868 ?        S    12:11   0:00 arping -w 5 -C 1 -i docker0 192.168.2.254
nobody   3423865  2.0  0.0   9140  4852 ?        S    12:11   0:00 arping -w 5 -C 1 -i br-0ced1bc112c8 192.168.2.254
nobody   3423869  2.0  0.0   9140  4928 ?        S    12:11   0:00 arping -w 5 -C 1 -i br-09b6e1630d76 192.168.2.254
nobody   3423873  1.0  0.0   9140  4800 ?        S    12:11   0:00 arping -w 5 -C 1 -i br-46ec93053821 192.168.2.254

So if you ping 20 devices and have 20 bridge networks set up, you can calculate the number of threads from this: 20 × 20 = 400 arping processes per refresh cycle…
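For example, you can count the concurrent arping processes on the host with a one-liner like this (the [a] in the pattern just keeps grep from matching its own process):

ps aux | grep '[a]rping' | wc -l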

So what can I do to avoid this kind of unnecessary load? How can I reconfigure the ping config above so that only ONE ping is sent from openHAB's selected source interface?

Best thanks for your help and regards

Andreas

If you disable the misbehaving network binding, you can see that the massive load caused by all those threads goes away.

Intensive googling turned up an earlier report of these per-source-network threads with an older version: Arping Related Network Binding Issue - #16 by FranzS

But I have no solution for telling the network binding to open only one arping per ping device. Maybe someone can help me with this so I can get the binding working again.

Opened issue: Network-Binding, Pingdevice on Docker-Install results in massive threads · Issue #15437 · openhab/openhab-addons · GitHub

Have you seen the solution in that thread? My solution was derived from it; enp1s0 is the physical interface used by the container.

Create the shell script and make it executable (chmod +x):

#!/bin/bash
# The network binding invokes this as:
#   arping -w 5 -C 1 -i <interface> <ip>
# so $6 is the interface name. Only run the real arping on the
# physical interface; exit non-zero for all the bridge interfaces.
if [[ "$6" == "enp1s0" ]]; then
    exec arping "$@"
else
    exit 1
fi

Using $6 is a bit brittle and might need fixing if the binding changes, but I just wanted a quick solution.
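If you want something less position-dependent, an untested sketch could scan the arguments for the -i flag instead of hard-coding the position:

#!/bin/bash
# Untested sketch: locate the value that follows "-i" rather than
# assuming it is always the sixth argument.
iface=""
prev=""
for arg in "$@"; do
    if [[ "$prev" == "-i" ]]; then
        iface="$arg"
        break
    fi
    prev="$arg"
done
if [[ "$iface" == "enp1s0" ]]; then
    exec arping "$@"
else
    exit 1
fi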
Set the arping executable to point to the script at http://<openhab-ip>/settings/addons/binding-network/config.

Doing this reduced my base CPU load from ~2.5% to 1.5% for 13 interfaces pinging 12 IP addresses.

Another workaround might be placing all Docker containers on the machine in host mode.

Hi @jamesmelville,
that's an interesting quick workaround, which I will try out. I searched a lot but never came across that thread.
Putting all containers in host mode is not a possible solution for me; I have a lot more networks…

…I have now added the script as described. The interface I need to compare against is eth0, so I changed the comparison to:

if [[ "$6" == "eth0" ]]; then

In my Dockerfile I added a line to copy the script into the container:

COPY ./myarping /usr/bin/myarping

Then I changed the reference in the config file openhab_conf/services/network.cfg to:

binding.network:arpPingToolPath=/usr/bin/myarping

The result: compared to when all pings were disabled, CPU usage is no longer visibly higher in my node-exporter graph. That's really nice. On my side, about 42 containers are running successfully on a Raspberry Pi 4 (with zram and an SSD).
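For reference, the Dockerfile part as a minimal sketch (the base image tag is an assumption; use whatever your image is actually based on):

FROM openhab/openhab:4.0.1
# Copy the wrapper script into the image and make sure it is executable
COPY ./myarping /usr/bin/myarping
RUN chmod +x /usr/bin/myarping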