It took me a while to get back to this, but I have uploaded the logs here.
I wasn't able to attach them directly, as at 2.2 MB they are above the upload size limit.
I have seen that a new update was released since then, so these logs were made with openHAB version 3.4.2, still in a Docker container created and started with the same settings as before.
The logs start from openHAB's startup. They include the point where I manually started discovery for the lgwebos binding (with the “Scan” button on the “Add a new Thing” page), and they end when the discovery process had seemingly finished (~30 seconds after starting discovery, when the log output became quieter).
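(As an aside, if I understand the openHAB REST API correctly, the same scan can also be triggered without the UI; something like this should be equivalent to pressing the Scan button, assuming the default port and that API security allows the request:

curl -X POST http://<openhab-host>:8080/rest/discovery/bindings/lgwebos/scan)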
The log levels were set up this way:
openhab> log:get
Logger │ Level
───────────────────────────────────────────────────┼──────
ROOT │ WARN
com.connectsdk │ TRACE
javax.jmdns │ INFO
javax.mail │ ERROR
openhab.event │ INFO
openhab.event.AddonEvent │ ERROR
openhab.event.ChannelDescriptionChangedEvent │ ERROR
openhab.event.InboxUpdatedEvent │ INFO
openhab.event.ItemAddedEvent │ ERROR
openhab.event.ItemChannelLinkAddedEvent │ ERROR
openhab.event.ItemChannelLinkRemovedEvent │ ERROR
openhab.event.ItemRemovedEvent │ ERROR
openhab.event.ItemStateEvent │ INFO
openhab.event.RuleAddedEvent │ ERROR
openhab.event.RuleRemovedEvent │ ERROR
openhab.event.RuleStatusInfoEvent │ ERROR
openhab.event.StartlevelEvent │ ERROR
openhab.event.ThingAddedEvent │ ERROR
openhab.event.ThingRemovedEvent │ ERROR
openhab.event.ThingStatusInfoEvent │ ERROR
openhab.event.ThingUpdatedEvent │ ERROR
org.apache.cxf.jaxrs.sse.SseEventSinkImpl │ ERROR
org.apache.cxf.phase.PhaseInterceptorChain │ ERROR
org.apache.karaf.jaas.modules.audit │ INFO
org.apache.karaf.kar.internal.KarServiceImpl │ ERROR
org.apache.karaf.shell.ssh.SshUtils │ ERROR
org.apache.karaf.shell.support │ OFF
org.apache.sshd │ ERROR
org.eclipse.lsp4j │ OFF
org.jupnp │ TRACE
org.openhab │ INFO
org.openhab.binding.lgwebos │ TRACE
org.ops4j.pax.url.mvn.internal.AetherBasedResolver │ ERROR
org.ops4j.pax.web.pax-web-runtime │ OFF
su.litvak.chromecast.api.v2.Channel │ ERROR
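In case anyone wants to reproduce this setup: the non-default levels can be set from the openHAB console with log:set, for example for the three TRACE loggers:

openhab> log:set TRACE com.connectsdk
openhab> log:set TRACE org.jupnp
openhab> log:set TRACE org.openhab.binding.lgwebos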
Notable log events
I have taken a look at the files and noted the following events:
Manual start of discovery:
2023-02-14 15:35:37.358 [TRACE] [.jupnp.controlpoint.ControlPointImpl] - Sending asynchronous search for: ssdp:all
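For context: such an asynchronous search means that jUPnP multicasts an SSDP M-SEARCH datagram to 239.255.255.250:1900. The payload should look roughly like this (the exact MX value jUPnP uses may differ):

M-SEARCH * HTTP/1.1
HOST: 239.255.255.250:1900
MAN: "ssdp:discover"
MX: 3
ST: ssdp:all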
The first time the discovery request was sent to the main local network:
2023-02-14 15:35:38.118 [DEBUG] [.jupnp.transport.impl.DatagramIOImpl] - Sending message from address: /192.168.56.2:0
The first time a response to the discovery was received from the main local network:
2023-02-14 15:35:38.216 [DEBUG] [upnp.transport.spi.MulticastReceiver] - UDP datagram received from: 192.168.56.2:52967 on local interface: eth0 and address: 192.168.56.2
I suspect that this response is from openHAB itself (the source address is openHAB's own, 192.168.56.2), just from a different component of it.
The second time a response to the discovery was received from the main local network:
2023-02-14 15:35:38.831 [DEBUG] [.jupnp.transport.impl.DatagramIOImpl] - UDP datagram received from: 192.168.56.47:60434 on: /192.168.56.2:0
This is one of the webOS TVs; a little further on in the log it can be seen introducing itself as one.
The same message was then logged once more. Probably not that notable, but I found it interesting.
2023-02-14 15:35:38.933 [DEBUG] [.jupnp.transport.impl.DatagramIOImpl] - UDP datagram received from: 192.168.56.47:60434 on: /192.168.56.2:0
The first time detailed information on the TV was requested from it. (The ... lines mean that these log events appeared close to each other in the log, but I have not copied in the other entries between them that seemed irrelevant.)
2023-02-14 15:35:39.483 [DEBUG] [p.protocol.RetrieveRemoteDescriptors] - Sending device descriptor retrieval message: (StreamRequestMessage) GET http://192.168.56.47:1132/
...
2023-02-14 15:35:39.537 [DEBUG] [org.jupnp.transport.Router ] - Sending via TCP unicast stream: (StreamRequestMessage) GET http://192.168.56.47:1132/
...
2023-02-14 15:35:39.547 [TRACE] [org.jupnp.transport.spi.StreamClient] - Preparing HTTP request: (StreamRequestMessage) GET http://192.168.56.47:1132/
...
2023-02-14 15:35:39.555 [TRACE] [org.jupnp.transport.spi.StreamClient] - Creating HTTP request. URI: 'http://192.168.56.47:1132/' method: 'GET'
...
2023-02-14 15:35:39.605 [TRACE] [org.jupnp.transport.spi.StreamClient] - Waiting 10 seconds for HTTP request to complete: (StreamRequestMessage) GET http://192.168.56.47:1132/
First response of the TV to the previous request:
2023-02-14 15:35:39.675 [TRACE] [otocol.async.ReceivingSearchResponse] - Received device search response: (RemoteDeviceIdentity) UDN: uuid:c5457ee3-5af0-6446-7af6-ff1d34c9b818, Descriptor: http://192.168.56.47:1132/
Here is the first time this request would have been sent again, but jUPnP sees that it has already sent such a request and has not yet processed a response to it, so it does not send it again:
2023-02-14 15:35:39.698 [TRACE] [otocol.async.ReceivingSearchResponse] - Skip submitting task, active retrieval for URL already in progress:http://192.168.56.47:1132/
While processing the response with the detailed information, jUPnP cannot parse the SERVER header because of how the UPnP specification version is formatted: the header contains UPnP/1,0 with a comma, while the parser looks for the literal UPnP/1. string:
2023-02-14 15:35:39.849 [TRACE] [upnp.model.message.header.UpnpHeader] - Invalid header value for tested type: ServerHeader - Missing 'UPnP/1.' in server information: Linux/i686 UPnP/1,0 DLNADOC/1.50 LGE WebOS TV/Version 0.9
...
2023-02-14 15:35:39.873 [TRACE] [org.jupnp.model.message.UpnpHeaders ] - Ignoring known but irrelevant header (value violates the UDA specification?) 'SERVER': Linux/i686 UPnP/1,0 DLNADOC/1.50 LGE WebOS TV/Version 0.9
When making a GET request to 192.168.56.47:1132 from my PC, I can see that the SERVER header indeed contains UPnP/1,0. However, when I was running openHAB directly on my PC, it was able to discover the TV.
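This check can be reproduced with any HTTP client; for example, a one-liner like the following prints the Server header (assuming curl is available):

curl -sv -o /dev/null http://192.168.56.47:1132/ 2>&1 | grep -i '^< server'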
The first time a different kind of response arrived, containing information about a UPnP service. Here the UPnP version number seems to be formatted the way openHAB expects it:
2023-02-14 15:35:40.955 [TRACE] [upnp.transport.spi.DatagramProcessor] - HTTP/1.1 200 OK
Location: http://192.168.56.47:1428/
Cache-Control: max-age=1800
Server: WebOS/4.0.0 UPnP/1.0
EXT:
USN: uuid:6ff027f3-83c6-4fe7-a16c-089fdfef6302::upnp:rootdevice
ST: upnp:rootdevice
Date: Tue, 14 Feb 2023 14:35:39 GMT
This response has other variants too, each of which contains a different service in the USN and ST headers. One more example:
2023-02-14 15:35:41.344 [TRACE] [upnp.transport.spi.DatagramProcessor] - HTTP/1.1 200 OK
Location: http://192.168.56.47:1428/
Cache-Control: max-age=1800
Server: WebOS/4.0.0 UPnP/1.0
EXT:
USN: uuid:6ff027f3-83c6-4fe7-a16c-089fdfef6302::urn:lge:device:tv:1
ST: urn:lge:device:tv:1
Date: Tue, 14 Feb 2023 14:35:39 GMT
For some of these responses the log complains that the ST and USN headers could not be parsed.
After this, it seems to me that everything just repeats for about 30 seconds until openHAB gives up, but I may have missed some important log events.