UPnP-based discovery does not work in a host-networked Docker container

  • Platform information:
    • Hardware: ARMv7 rev 3 (v7l) / 2 GB
    • OS: Raspbian Buster (Linux/5.10.103-v7l+ (arm))
    • Java Runtime Environment: Eclipse Adoptium 11.0.17 (Temurin-11.0.17+8)
    • openHAB version: 3.4.1 (from this docker image)
  • Issue of the topic:

SSDP-based discovery does not seem to be working: it does not find most of the devices on my network.

Currently I have 3 devices on my network which I expect to be able to discover:

  • A Canon printer
  • 2 LG webOS TVs

These and all of the computers mentioned in the post are connected to the same local network, which is addressed as 192.168.56.0/24.

The problem only occurs on my Raspberry Pi, where I intend to run openHAB. There I normally run it as part of a docker-compose stack in which the openhab container is only given the host network (a minimal equivalent is sketched below).
For the Canon printer I have installed the IPP binding. This device has been discovered automatically.
For the webOS TVs I have installed the LG webOS binding. These devices are not discovered automatically, nor by manually clicking the Discover button on the “Add a new Thing: lgwebos” page, despite them being turned on (not just on standby; they emit light and sound) and being configured for control from their Android app.
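
For reference, a minimal equivalent of that part of the stack looks roughly like this (a sketch; the real stack has more services plus volume mounts and environment variables, which I have left out here):

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  openhab:
    image: openhab/openhab:3.4.1
    container_name: openhab
    network_mode: host
EOF
docker-compose up -d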

If I run the same version of openHAB on my windows PC (downloaded from here), it has already discovered the printer and the TVs when I get to open the Things page of my instance: the devices are waiting for me in the Inbox.

If I run the same version of openHAB with Docker in an Ubuntu virtual machine on my Windows PC, using the docker run --name openhab --net=host openhab/openhab:3.4.1 command, it has also discovered all of these devices by the time I open the Things page, and they are visible in the Inbox.

If I forget about docker compose and instead run openHAB with the docker run --name openhab --net=host openhab/openhab:3.4.1 command on this problematic Raspberry Pi, the problem persists: the TVs are not found, even though I have installed the lgwebos binding. Manually starting a discovery for this binding does not find the TVs either.

  • If logs were generated, please post these here using code fences:

By default, no relevant logs appear.
Increasing the logging level of the “org.openhab.binding.lgwebos” and “com.connectsdk” loggers does not produce any log output either while manually running a discovery.
If I also increase the logging level for “org.jupnp”, I see that openHAB receives discovery responses from the TVs.

I have attached logs and the log level configuration. I’ll try that again when I get home. I was unable to pipe the output of log:tail to a file; log:display only shows a handful of lines no matter how I parameterize it, and it doesn’t respect --no-color.
Note that the loggers com.connectsdk, org.openhab.binding.lgwebos and org.jupnp are set to TRACE level logging.
I did a log clear before saving the output. If startup events would also be useful please let me know and I’ll upload a log starting from a restart.
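
For reference, the log levels above can be set from the Karaf console, and the full log file can then be copied out of the container instead of piping log:tail (a sketch; the console client and log file paths assume the stock openHAB Docker image):

# connect to the Karaf console inside the container
docker exec -it openhab /openhab/runtime/bin/client

openhab> log:set TRACE org.jupnp
openhab> log:set TRACE org.openhab.binding.lgwebos
openhab> log:set TRACE com.connectsdk
openhab> log:clear

# after running a discovery, copy the full log file to the host
docker cp openhab:/openhab/userdata/logs/openhab.log ./openhab-discovery.log
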
You may see the following notable IP addresses in the log:

  • 192.168.56.0/24 - my physical home network, to which computers, the printer and the TVs are connected
  • 192.168.56.2 - the raspberry pi which runs openhab
  • 192.168.56.34 - one of the LG TVs

It took me more time to get back to this, but I have uploaded the logs here.
I wasn’t able to attach the file directly to this post, as at 2.2 MB it is above the size limit.
I have seen that a new update was released, so the logs were made with openHAB version 3.4.2, still in a Docker container, created and started with the same settings as before.

The logs start from openHAB starting up.
They include the point where I manually started discovery for the lgwebos binding (with the “Scan” button on the “Add a new Thing” page), and they end when the discovery process had seemingly finished (~30 seconds after starting it, when the log output became quieter).
The log levels were set up this way:

openhab> log:get
Logger                                             │ Level
───────────────────────────────────────────────────┼──────
ROOT                                               │ WARN
com.connectsdk                                     │ TRACE
javax.jmdns                                        │ INFO
javax.mail                                         │ ERROR
openhab.event                                      │ INFO
openhab.event.AddonEvent                           │ ERROR
openhab.event.ChannelDescriptionChangedEvent       │ ERROR
openhab.event.InboxUpdatedEvent                    │ INFO
openhab.event.ItemAddedEvent                       │ ERROR
openhab.event.ItemChannelLinkAddedEvent            │ ERROR
openhab.event.ItemChannelLinkRemovedEvent          │ ERROR
openhab.event.ItemRemovedEvent                     │ ERROR
openhab.event.ItemStateEvent                       │ INFO
openhab.event.RuleAddedEvent                       │ ERROR
openhab.event.RuleRemovedEvent                     │ ERROR
openhab.event.RuleStatusInfoEvent                  │ ERROR
openhab.event.StartlevelEvent                      │ ERROR
openhab.event.ThingAddedEvent                      │ ERROR
openhab.event.ThingRemovedEvent                    │ ERROR
openhab.event.ThingStatusInfoEvent                 │ ERROR
openhab.event.ThingUpdatedEvent                    │ ERROR
org.apache.cxf.jaxrs.sse.SseEventSinkImpl          │ ERROR
org.apache.cxf.phase.PhaseInterceptorChain         │ ERROR
org.apache.karaf.jaas.modules.audit                │ INFO
org.apache.karaf.kar.internal.KarServiceImpl       │ ERROR
org.apache.karaf.shell.ssh.SshUtils                │ ERROR
org.apache.karaf.shell.support                     │ OFF
org.apache.sshd                                    │ ERROR
org.eclipse.lsp4j                                  │ OFF
org.jupnp                                          │ TRACE
org.openhab                                        │ INFO
org.openhab.binding.lgwebos                        │ TRACE
org.ops4j.pax.url.mvn.internal.AetherBasedResolver │ ERROR
org.ops4j.pax.web.pax-web-runtime                  │ OFF
su.litvak.chromecast.api.v2.Channel                │ ERROR

Notable log events

I have taken a look at the files, and I have noted the following events in them:

Manual start of discovery:

2023-02-14 15:35:37.358 [TRACE] [.jupnp.controlpoint.ControlPointImpl] - Sending asynchronous search for: ssdp:all

The first time the discovery request was sent to the main local network:

2023-02-14 15:35:38.118 [DEBUG] [.jupnp.transport.impl.DatagramIOImpl] - Sending message from address: /192.168.56.2:0

The first time a response to the discovery was received from the main local network:

2023-02-14 15:35:38.216 [DEBUG] [upnp.transport.spi.MulticastReceiver] - UDP datagram received from: 192.168.56.2:52967 on local interface: eth0 and address: 192.168.56.2

I suspect that this response is from openHAB itself, but a different component of it.

The second time a response to the discovery was received from the main local network:

2023-02-14 15:35:38.831 [DEBUG] [.jupnp.transport.impl.DatagramIOImpl] - UDP datagram received from: 192.168.56.47:60434 on: /192.168.56.2:0

This is one of the webOS TVs. A little further down in the log it can be seen introducing itself as one.

The previous message was logged once again. Probably not that notable, but I found it interesting.

2023-02-14 15:35:38.933 [DEBUG] [.jupnp.transport.impl.DatagramIOImpl] - UDP datagram received from: 192.168.56.47:60434 on: /192.168.56.2:0

The first time detailed information on the TV was requested from it.
The ... means that these log events appeared close to each other in the log, but I have left out other lines between them that seemed irrelevant.

2023-02-14 15:35:39.483 [DEBUG] [p.protocol.RetrieveRemoteDescriptors] - Sending device descriptor retrieval message: (StreamRequestMessage) GET http://192.168.56.47:1132/
...
2023-02-14 15:35:39.537 [DEBUG] [org.jupnp.transport.Router          ] - Sending via TCP unicast stream: (StreamRequestMessage) GET http://192.168.56.47:1132/
...
2023-02-14 15:35:39.547 [TRACE] [org.jupnp.transport.spi.StreamClient] - Preparing HTTP request: (StreamRequestMessage) GET http://192.168.56.47:1132/
...
2023-02-14 15:35:39.555 [TRACE] [org.jupnp.transport.spi.StreamClient] - Creating HTTP request. URI: 'http://192.168.56.47:1132/' method: 'GET'
...
2023-02-14 15:35:39.605 [TRACE] [org.jupnp.transport.spi.StreamClient] - Waiting 10 seconds for HTTP request to complete: (StreamRequestMessage) GET http://192.168.56.47:1132/

First response of the TV to the previous request:

2023-02-14 15:35:39.675 [TRACE] [otocol.async.ReceivingSearchResponse] - Received device search response: (RemoteDeviceIdentity) UDN: uuid:c5457ee3-5af0-6446-7af6-ff1d34c9b818, Descriptor: http://192.168.56.47:1132/

Here is the first time this request would have been sent again, but jUPnP sees that it has already sent such a request and has not yet processed the response, so it does not send it again:

2023-02-14 15:35:39.698 [TRACE] [otocol.async.ReceivingSearchResponse] - Skip submitting task, active retrieval for URL already in progress:http://192.168.56.47:1132/

While processing the response with the detailed information, it cannot parse the SERVER header because of the formatting of the UPnP specification number:

2023-02-14 15:35:39.849 [TRACE] [upnp.model.message.header.UpnpHeader] - Invalid header value for tested type: ServerHeader - Missing 'UPnP/1.' in server information: Linux/i686 UPnP/1,0 DLNADOC/1.50 LGE WebOS TV/Version 0.9
...
2023-02-14 15:35:39.873 [TRACE] [org.jupnp.model.message.UpnpHeaders ] - Ignoring known but irrelevant header (value violates the UDA specification?) 'SERVER': Linux/i686 UPnP/1,0 DLNADOC/1.50 LGE WebOS TV/Version 0.9

When making a GET request to 192.168.56.47:1132 from my PC, I can see that the SERVER header really does contain UPnP/1,0. However, when I was running openHAB on my PC, it was still able to discover the TV.

The first time a different kind of response arrived, one which contains information about a UPnP service. Here the UPnP version number seems to be formatted the way openHAB expects it:

2023-02-14 15:35:40.955 [TRACE] [upnp.transport.spi.DatagramProcessor] - HTTP/1.1 200 OK
Location: http://192.168.56.47:1428/
Cache-Control: max-age=1800
Server: WebOS/4.0.0 UPnP/1.0
EXT: 
USN: uuid:6ff027f3-83c6-4fe7-a16c-089fdfef6302::upnp:rootdevice
ST: upnp:rootdevice
Date: Tue, 14 Feb 2023 14:35:39 GMT

This response has other variants too, each of which contains a different service in the USN and ST headers. One more example is this:

2023-02-14 15:35:41.344 [TRACE] [upnp.transport.spi.DatagramProcessor] - HTTP/1.1 200 OK
Location: http://192.168.56.47:1428/
Cache-Control: max-age=1800
Server: WebOS/4.0.0 UPnP/1.0
EXT: 
USN: uuid:6ff027f3-83c6-4fe7-a16c-089fdfef6302::urn:lge:device:tv:1
ST: urn:lge:device:tv:1
Date: Tue, 14 Feb 2023 14:35:39 GMT

For some of these responses the log complains that the ST and USN headers could not be parsed.

After these, it seems to me that everything just repeats for about 30 seconds until openHAB gives up, but I may have missed some important log events.

I was experimenting and I think I may have found what causes this.

The workaround

I’ve noticed that if I shut down any one of my other container stacks (with docker-compose down, which deletes that stack’s containers and the Docker networks that were created explicitly for it) and then restart openHAB, openHAB becomes able to discover devices on the network, including the LG TVs and the minidlna media server (which runs on this same machine, in a different container).
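
Roughly this sequence (a sketch; “otherstack” is just a placeholder for one of my other compose projects):

cd ~/otherstack              # placeholder: any other compose project
docker-compose down          # removes that stack's containers and its dedicated networks
docker restart openhab       # after this, the TVs and minidlna show up in discovery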

What is causing this exactly

After some more experimenting, it seems that neither creating a certain number of networks nor a certain number of containers by itself leads to openHAB not being able to discover devices; the actual trigger is a certain number of networks being in use by at least one container.

For example, if you already run openHAB on the host Docker network and then create 2 custom Docker networks with the command docker network create [name], you will have 3 networks, all of which are visible to the host OS, and also to openHAB.
Only one of them (the host one) is in use so far; the others will have the DOWN status, and they will not be visible in the “Settings > Network Settings > Primary Address” list, only under “Broadcast Address” in the same menu.
If you start a container (with docker run --network <network name> --detach containous/whoami; you can use a different image, I chose this one because it keeps running and is relatively lightweight) and connect each of the new custom networks to this container (docker network connect <network name> <container name>; you can skip the one you already gave in the run command), those networks will come UP, and they will also appear in the mentioned “Primary Address” list. The whole sequence is summarized as plain commands after these steps.
At this point, discovery is still working.
Now create and connect one more docker network to this whoami container.
Discovery is still working.
Now restart the openHAB container.
Discovery is now not working.
Now disconnect any one network from the whoami container, and restart openHAB.
Discovery is working again.
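
The same walkthrough as plain commands (a sketch; the network and container names are made up, and as noted below the number of networks needed can differ per machine):

# assuming openHAB is already running on the host network, e.g. started with:
# docker run --name openhab --net=host --detach openhab/openhab:3.4.1

# create custom networks and attach one long-running container to all of them
docker network create net1
docker network create net2
docker run --name whoami --network net1 --detach containous/whoami
docker network connect net2 whoami       # the networks are now UP, discovery still works

# bring one more network UP on the same container, then restart openHAB
docker network create net3
docker network connect net3 whoami       # discovery still works
docker restart openhab                   # discovery is now broken

# disconnect any one of the networks and restart openHAB again
docker network disconnect net3 whoami
docker restart openhab                   # discovery works again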

Conditions for the problem to appear

So far I don’t know if this is an openHAB issue, or a Docker one.

What I do know, though, is that openHAB’s discovery will work fine if its container was started before a certain number of networks were UP-ed, even if new networks are UP-ed later while openHAB is running, and even though openHAB will see these new networks in the mentioned “Primary Address” list. The problem only happens when openHAB is started while that number of networks is already UP-ed.

Also, the number of networks needed to break discovery can vary.
On my Raspberry Pi, where I was initially testing openHAB, discovery worked fine while only 6 networks were UP-ed, and the 7th broke it.
I then continued testing on an Ubuntu virtual machine on my x86 PC (because it is faster and nothing critical runs there, so I have a free hand in testing). There 2 networks are OK and the 3rd one breaks it; however, looking at the mentioned “Primary Address” list, in this environment every network interface has 2 extra addresses that start with 169.254., so the limit may effectively still be between 6 and 7, just counted in addresses rather than networks, with the rest filled by these odd link-local ones (for 2 interfaces I have 2+4 addresses, for 3 I have 3+6).
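
To count what openHAB would actually see, the addresses on the interfaces that are UP can be listed like this (a sketch):

# list all IPv4 addresses on interfaces that are currently UP
# (the 169.254.x.x entries are the link-local addresses mentioned above)
ip -o -4 addr show up | awk '{print $2, $4}'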

Finally, when discovery is not working, I see lines like this in the log:

openhab    | 2023-02-18T13:47:24.727287554Z 14:47:24.726 [INFO ] [org.jupnp.transport.spi.StreamClient] - Timeout of 10 seconds while waiting for HTTP request to complete, aborting: (StreamRequestMessage) GET http://192.168.56.34:1572/

These do not appear when discovery is working.

One more thing I almost forgot.

When these timeout logs do appear, if I open a shell in the container (with docker exec -it openhab bash) and run curl against one of the non-responding addresses, I get a response that looks fine:

docker-compose logs -ft | grep --text "Timeout of 10 seconds"
...
openhab    | 2023-02-18T14:05:07.723277151Z 15:05:07.722 [INFO ] [org.jupnp.transport.spi.StreamClient] - Timeout of 10 seconds while waiting for HTTP request to complete, aborting: (StreamRequestMessage) GET http://192.168.56.2:8200/rootDesc.xml
openhab    | 2023-02-18T14:05:08.096812184Z 15:05:08.095 [INFO ] [org.jupnp.transport.spi.StreamClient] - Timeout of 10 seconds while waiting for HTTP request to complete, aborting: (StreamRequestMessage) GET http://192.168.56.34:1572/
...
docker exec -it openhab bash
root@rpi-docker-openhab:/openhab# curl http://192.168.56.34:1572/
<?xml version="1.0" encoding="UTF-8"?>
<root xmlns="urn:schemas-upnp-org:device-1-0" xmlns:dlna="urn:schemas-dlna-org:device-1-0">
  <specVersion>
    <major>1</major>
    <minor>0</minor>
  </specVersion>
  <device>
    <deviceType>urn:lge:device:tv:1</deviceType>
    <friendlyName>[LG] webOS TV SK8100PLA</friendlyName>
    <manufacturer>LG Electronics</manufacturer>
    <manufacturerURL></manufacturerURL>
    <modelDescription></modelDescription>
    <modelName>49SK8100PLA</modelName>
    <modelURL></modelURL>
    <serialNumber></serialNumber>
    <UDN>uuid:4e486f25-760e-4096-9f37-96b0bc713c8a</UDN>
    <wifiMac>20:3D:BD:FF:8F:5A</wifiMac>
    <wiredMac>A8:23:FE:41:DC:65</wiredMac>
    <serviceList>
      <service>
        <serviceType>urn:lge:service:virtualSvc:1</serviceType>
        <serviceId>urn:lge:serviceId:virtualSvc-0000-0001</serviceId>
        <SCPDURL>/virtualService/4e486f25-760e-4096-9f37-96b0bc713c8a/scpd.xml</SCPDURL>
        <controlURL>/virtualService/4e486f25-760e-4096-9f37-96b0bc713c8a/control.xml</controlURL>
        <eventSubURL>/virtualService/4e486f25-760e-4096-9f37-96b0bc713c8a/event.xml</eventSubURL>
      </service>
    </serviceList>
  </device>
</root>

Hello Peter,

Thanks for the great investigation. Have you progressed any further?
I recently migrated my monolithic OH running on a Raspberry Pi to a Docker deployment. I have only 3 networks (including the default one) and I have the same issue. I can confirm that removing all containers except OH enables UPnP discovery, but I haven’t had success with that workaround using a new container that has all 3 networks.

Have you checked in the network settings in the OH web UI whether the correct network is set as the primary network? I guess it might be that the wrong network is chosen when OH finds multiple networks.

Hi Thomas,
The selected address is the one of my Raspberry Pi.


I guess it is correct.
I tried all of them but nothing helped.
Thanks

In spite of the correctly selected default network, I see in the log that it is ignored.

Not sure if it is related - but I had some problems with UPnP together with Docker and Sonos. I’ve set the network interface to use for UPnP explicitly in userdata/config/org/openhab/network/upnp.config:

networkInterface="eno1"
service.pid="org.openhab.network.upnp"

Where “eno1” is the actual physical network connection; I use the host network for the openHAB container.

br-ddf245f4293c: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
veth7517331: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
veth21f1b9b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
veth36a4474: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
veth5476b36: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
veth7d6c2c9: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
vetha94517c: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
vethd2ac1e0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
vethfa0170e: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500

Maybe it is worth a try to see if it makes a difference in your situation as well.

Hello Patrik,
In my userdata/config/org/openhab I don’t have a network directory. Do you know what creates it?
Thanks a lot.

Sorry - I do not know if this is related to a binding that I have installed. I hope someone else can help you here. I also use a Docker image - but a newer version; I expect the directory structure has remained more or less the same … these are the bindings I have installed:

openhab> bundle:list | grep Add-ons
235 │ Active │  80 │ 3.4.5.202306011539     │ openHAB Add-ons :: Bundles :: SurePetcare Binding
265 │ Active │  80 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: Automation :: JavaScript Scripting
266 │ Active │  80 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: Android Debug Bridge Binding
267 │ Active │  80 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: Astro Binding
268 │ Active │  80 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: Chromecast Binding
269 │ Active │  80 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: ComfoAir Binding
270 │ Active │  80 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: Exec Binding
271 │ Active │  80 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: Feed Binding
272 │ Active │  80 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: HarmonyHub Binding
273 │ Active │  80 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: HP Printer Binding
274 │ Active │  80 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: HTTP Binding
275 │ Active │  80 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: iCalendar Binding
276 │ Active │  80 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: Jellyfin Binding
277 │ Active │  80 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: Log Reader Binding
278 │ Active │  80 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: Mail Binding
279 │ Active │  80 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: MQTT Broker Binding
280 │ Active │  81 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: MQTT EspMilightHub
281 │ Active │  81 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: MQTT Things and Channels
282 │ Active │  82 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: MQTT HomeAssistant Convention
283 │ Active │  82 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: MQTT Homie Convention
284 │ Active │  80 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: Network Binding
285 │ Active │  80 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: OpenWeatherMap Binding
286 │ Active │  80 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: SamsungTV Binding
287 │ Active │  80 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: SNMP Binding
288 │ Active │  80 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: Sonos Binding
289 │ Active │  80 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: Spotify Binding
290 │ Active │  80 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: Systeminfo Binding
291 │ Active │  80 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: ZWave Binding
302 │ Active │  80 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: IO :: openHAB Cloud Connector
303 │ Active │  80 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: Persistence Service :: InfluxDB
304 │ Active │  80 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: Persistence Service :: MapDB
305 │ Active │  80 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: Persistence Service :: RRD4j
306 │ Active │  75 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: Transformation Service :: JavaScript
307 │ Active │  75 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: Transformation Service :: Jinja
308 │ Active │  75 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: Transformation Service :: JSonPath
309 │ Active │  75 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: Transformation Service :: Map
310 │ Active │  75 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: Transformation Service :: RegEx
311 │ Active │  80 │ 3.4.4                  │ openHAB Add-ons :: Bundles :: Voice :: VoiceRSS Text-to-Speech

My problems were related to the Sonos binding - so I do not know if that binding created the folder. Maybe give 3.4.4 a try to see if this is something new …

In fact I have 3.4.4. I will check the list of your bindings. Thank you Patrik

I tried to install the Sonos binding but it does not create the config file.

… ok, let’s check what configuration the runtime actually uses - can you log in to Karaf and execute config:list? You’ll get a lot of output - search for org.openhab.network. My configuration looks like this:

----------------------------------------------------------------
Pid:            org.openhab.network
BundleLocation: null
Properties:
   primaryAddress = 192.168.0.165/24
   service.pid = org.openhab.network
----------------------------------------------------------------
Pid:            org.openhab.network.upnp
BundleLocation: null
Properties:
   networkInterface = eno1
   service.pid = org.openhab.network.upnp
----------------------------------------------------------------

Not sure if the 2nd entry has an effect at all (or whether it is present in your config, I have to investigate a bit more) - but the primary address should be set in your case as well. If it is not, I think we should try to solve that first.
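
If the upnp PID is missing on your side, it should be possible to create it from the same console (a sketch, untested here; “eth0” is only an example interface name, use the one from your own interface list):

openhab> config:edit org.openhab.network.upnp
openhab> config:property-set networkInterface eth0
openhab> config:update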

The container is configured with host network - right?

kind regards,
Patrik

Hi Patrik,

openhab> config:list| grep "Pid:\s+org.openhab.network" -B1 -A 8
----------------------------------------------------------------
Pid:            org.openhab.network
BundleLocation: null
Properties:
   broadcastAddress = 192.168.2.255
   primaryAddress = 192.168.2.27/24
   service.pid = org.openhab.network
   useIPv6 = false
   useOnlyOneAddress = true
----------------------------------------------------------------

I don’t have the part with Pid: org.openhab.network.upnp in my config.

And yes,

services:
  openhab:
    network_mode: host

Kind regards,
Radek

What I also noticed is that you have assigned two IPs from the same subnet to the box.

If I do this, Sonos (which also relies on UPnP) stops working even with my settings - so this might be a problem source as well. Can you try with only one address? Why do you need a second one?

with kind regards,
Patrik

The .105 is Wi-Fi and the .27 is wired. Let me try to disable Wi-Fi this evening.
Thank you.

Best regards,
Radek

I disabled Wi-Fi:

sudo rfkill block wifi

The Wi-Fi IP disappeared from the list in openHAB Settings > Network Settings > Primary Address,
but UPnP devices are still not discoverable, same as before.


(in the Inbox is a Google Home assistant)

Regards,
Radek

Thanks Patrik and Peter for getting me on the right track :slight_smile:. I also had problems with Sonos when trying OH4 with a Docker image. I think OH uses the jUPnP library for UPnP, and it is then possible to set the interface via the system property org.jupnp.network.useInterfaces.

Fingers crossed… but for me it seems to work when I set the property via the EXTRA_JAVA_OPTS environment variable.

environment:
  EXTRA_JAVA_OPTS: "-Dorg.jupnp.network.useInterfaces=enp4s0"

Source: https://github.com/JochenHiller/jupnp/blob/master/docs/basicapi/upnpservice.xhtml
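
One way to double-check that the property actually reached the JVM is to query it from the Karaf console (a sketch; the client path assumes the stock openHAB Docker image):

docker exec -it openhab /openhab/runtime/bin/client
openhab> system:property org.jupnp.network.useInterfaces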

Thanks Kalle, I played with various values and finally it works for me. I am not sure which parts are actually needed, but I did this:

  • in docker compose I now have the following (see the note after this snippet on applying the change):
   openhab:
     environment:
       - EXTRA_JAVA_OPTS=-Duser.timezone=${TZ} -Dorg.jupnp.network.useInterfaces=eth0 -Dorg.jupnp.network.useAddresses=192.168.2.27
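
One thing to keep in mind: compose only applies a changed environment variable when the container is recreated; a plain docker restart openhab is not enough, so after editing the file something like this is needed:

docker-compose up -d openhab   # recreates the openhab container because its configuration changed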

It did not work out of the box, so I also:

  • installed the Sonos binding (I don’t actually have any Sonos Thing)
  • reinstalled the UPnP binding, and removed and re-added my UPnP Things (Pioneer receiver and Samsung TV)
  • reinstalled the Pioneer and Samsung bindings and removed and re-added the receiver and TV Things

Thank you all - Patrik, Peter and Kalle - for your valuable help, I appreciate it very much! :partying_face: