I’m on this version:
336 │ Active │ 80 │ 4.1.0.202308151424 │ Seime Add-ons :: Bundles :: ESPHome Native
Nothing in the logs yet…it’s been stable as far as I can tell ever since I posted. Go figure.
I have had the same problems as @davecorder.
I managed to capture this in the log just before it lost the connection (thanks for the log4j2.xml tip).
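For reference, the log4j2.xml tip boils down to a logger entry like the one below, added inside the <Loggers> section (the package name is taken from the stack traces below; the file usually lives at userdata/etc/log4j2.xml):

<!-- Enable TRACE logging for the ESPHome binding -->
<Logger level="TRACE" name="no.seime.openhab.binding.esphome"/>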
I haven’t had much time to check it myself, but I noticed one thing: should the selectorThread.setName() and selectorThread.start() calls be inside selectorThread’s own lambda? I’m only a hobby programmer so I might be way off.
edit: Nevermind, on closer inspection they’re not inside.
Thread selectorThread = new Thread(() -> {
....
selectorThread.setName("ESPHome Reader");
selectorThread.start();
}
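For contrast, the usual shape of that pattern keeps the lifecycle calls outside the lambda - a minimal sketch, not the binding’s actual code:

// Sketch only: the lambda is the thread body; setName() and start() run on
// the creating thread after construction, not "inside" the new thread.
Thread selectorThread = new Thread(() -> {
    // ... selector loop would run here, on the new thread ...
});
selectorThread.setName("ESPHome Reader");
selectorThread.start();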
2023-09-03 15:55:19.580 [TRACE] [nal.internal.comm.ConnectionSelector] - Processing key channel=java.nio.channels.SocketChannel[connected local=/172.18.0.11:49338 remote=screek-humen-sensor-1u-4b3f12.lan/192.168.1.194:6053], selector=sun.nio.ch.EPollSelectorImpl@32cd9cb7, interestOps=1, readyOps=1
2023-09-03 15:55:19.581 [WARN ] [nal.internal.comm.ConnectionSelector] - Error while selecting
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[?:?]
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[?:?]
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:276) ~[?:?]
at sun.nio.ch.IOUtil.read(IOUtil.java:245) ~[?:?]
at sun.nio.ch.IOUtil.read(IOUtil.java:223) ~[?:?]
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:356) ~[?:?]
at no.seime.openhab.binding.esphome.internal.internal.comm.ConnectionSelector.lambda$0(ConnectionSelector.java:52) ~[?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
2023-09-03 15:55:19.586 [DEBUG] [nal.internal.comm.ConnectionSelector] - Selector thread stopped
2023-09-03 15:55:23.108 [DEBUG] [rnal.internal.handler.ESPHomeHandler] - [192.168.1.194] Sending ping
2023-09-03 15:55:23.108 [DEBUG] [rnal.internal.comm.ESPHomeConnection] - [screek-humen-sensor-1u-4b3f12.lan] Sending message: PingRequest
2023-09-03 15:55:23.109 [TRACE] [rnal.internal.comm.ESPHomeConnection] - Writing data
2023-09-03 15:55:23.110 [WARN ] [rnal.internal.handler.ESPHomeHandler] - [192.168.1.194] Error sending ping request
no.seime.openhab.binding.esphome.internal.internal.comm.ProtocolAPIError: [screek-humen-sensor-1u-4b3f12.lan] Error sending message java.io.IOException: Broken pipe
at no.seime.openhab.binding.esphome.internal.internal.comm.ESPHomeConnection.send(ESPHomeConnection.java:51) ~[?:?]
at no.seime.openhab.binding.esphome.internal.internal.handler.ESPHomeHandler.lambda$7(ESPHomeHandler.java:374) ~[?:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) [?:?]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
2023-09-03 15:55:33.108 [DEBUG] [rnal.internal.handler.ESPHomeHandler] - [192.168.1.194] Sending ping
2023-09-03 15:55:33.109 [DEBUG] [rnal.internal.comm.ESPHomeConnection] - [screek-humen-sensor-1u-4b3f12.lan] Sending message: PingRequest
2023-09-03 15:55:33.109 [TRACE] [rnal.internal.comm.ESPHomeConnection] - Writing data
2023-09-03 15:55:33.109 [WARN ] [rnal.internal.handler.ESPHomeHandler] - [192.168.1.194] Error sending ping request
no.seime.openhab.binding.esphome.internal.internal.comm.ProtocolAPIError: [screek-humen-sensor-1u-4b3f12.lan] Error sending message java.io.IOException: Broken pipe
at no.seime.openhab.binding.esphome.internal.internal.comm.ESPHomeConnection.send(ESPHomeConnection.java:51) ~[?:?]
at no.seime.openhab.binding.esphome.internal.internal.handler.ESPHomeHandler.lambda$7(ESPHomeHandler.java:374) ~[?:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) [?:?]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
2023-09-03 15:55:43.108 [WARN ] [rnal.internal.handler.ESPHomeHandler] - [192.168.1.194] Ping responses lacking Waited 4 times 10 seconds, total of 40. Assuming connection lost and disconnecting
2023-09-03 15:55:43.109 [INFO ] [rnal.internal.comm.ESPHomeConnection] - [screek-humen-sensor-1u-4b3f12.lan] Disconnecting socket.
2023-09-03 15:55:53.109 [INFO ] [rnal.internal.handler.ESPHomeHandler] - [192.168.1.194] Trying to connect to 192.168.1.194:6053
2023-09-03 15:55:53.390 [INFO ] [rnal.internal.comm.ESPHomeConnection] - [screek-humen-sensor-1u-4b3f12.lan] Opening socket to screek-humen-sensor-1u-4b3f12.lan at port 6053.
2023-09-03 15:55:53.391 [DEBUG] [rnal.internal.comm.ESPHomeConnection] - [screek-humen-sensor-1u-4b3f12.lan] Sending message: HelloRequest
2023-09-03 15:55:53.391 [TRACE] [rnal.internal.comm.ESPHomeConnection] - Writing data
First of all, thanks for the great binding. I have 4 ESP32 devices running well with different sensors.
But I have experienced some strange behaviour, as follows:
OH version: 3.4.4
If I use binding version 3.4.4, I get the following warning on 1 of the 4 ESP32s:
Handler ESPHomeHandler of thing esphome:device:esp2 tried updating channel still_energy although the handler was already disposed.
If I use binding version 3.4.5, the above warning is gone, but all 4 ESP32 devices go offline after working for a while and can’t come online again; the status keeps showing the yellow “unknown”. I tried restarting the binding in the console, but they still can’t come online, so I was forced to downgrade to version 3.4.4 to keep things working.
Any idea how to solve this?
Thanks.
Patrick
Fixed a snag on this one, new jars in place
Try the latest version; I think this should be improved. Please report back if the issue is still present.
Updated to the latest jar and it has been running for a few hours already; everything seems fine so far.
Will report back if anything happens.
Thanks a lot!!!
I’ve just tried this binding on my Docker installation, host OS: Ubuntu.
I had to do the following in order to make mDNS resolution work:
In my docker-compose.yaml file:
services:
  openhab:
    container_name: openhab
    ...
    volumes:
      - /var/run/dbus:/var/run/dbus
      - /var/run/avahi-daemon/socket:/var/run/avahi-daemon/socket
    ...
I also had to install avahi-utils inside the openhab docker on startup. This is done by mapping /etc/cont-init.d to a host volume, then having a script like this:
#!/bin/sh
apt-get -qq update
apt-get -qq install -y avahi-utils
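In the compose file, that mapping is just one more volume entry; the host path below is only an example:

    volumes:
      - ./cont-init.d:/etc/cont-init.d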
Sorry to report that the things turned “unknown” again after running for the whole afternoon; they resumed normal operation after a restart. I will check the logs in detail and report back if I find anything.
New version out with item type fix from @jimtng + a stability fix when connecting.
Updated to the latest binding, cleared all caches and restarted. It ran smoothly initially, but after around 6 hours the status turned to “unknown” and the things couldn’t come online anymore.
Version 3.4.4 doesn’t have this problem, but it logs the “handler was already disposed” warning.
@seime I’ve had, and still have, very bad experiences with auto-created channels in the Shelly binding. Is it possible to manually create the available channels in the textual thing config / the UI (like the mqtt binding)?
It is impossible to know which channels may be present up front, but predefined channel types might be doable.
Could you elaborate on the exact problems you are facing with dynamic channels, as I am not familiar with them?
Could you set the logging level to TRACE and send me a PM with logs from a few minutes before and after the problem arises? If possible, also capture logs from one of your ESPs at level very_verbose.
I haven’t figured out the solution yet, but I think the 2-minute delay in “NOT_YET_READY” has something to do with the fact that channel information is missing.
Yes, I asked the forum about it a few days ago: [core.thing.internal.ThingManagerImpl] - Check result is 'not ready'
Have you looked into Binding in “NOT_YET_READY” state after recent core changes · Issue #3394 · openhab/openhab-core (github.com)?
If you create channel (or thing) types dynamically, all you have to do is store them in the storage-based channel-type provider (when they are first created). They are then available on the next startup and you’ll have no issues. If you choose a type name that is unique for each thing (e.g. because it includes the thing UID), you can automatically remove them when the thing is removed (see handleRemoval in HarmonyDeviceHandler).
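A rough sketch of that idea in binding code. The ChannelTypeBuilder and ChannelTypeUID calls are real openHAB core API (org.openhab.core.thing.type); typeProvider and its putChannelType method stand in for the storage-based provider and are assumed names, not verified:

// Sketch: persist a dynamically created channel type so it survives restarts.
// Embedding the thing UID in the type UID makes cleanup on removal possible.
ChannelTypeUID typeUID = new ChannelTypeUID("esphome", thing.getUID().getId() + "-still_energy");
ChannelType type = ChannelTypeBuilder.state(typeUID, "Still Energy", "Number").build();
typeProvider.putChannelType(type); // assumed method on the storage-based type provider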
I’m having lots of issues where the automatic creation only works partially. E.g. I’ve had roller shutters without the roller shutter channel, or battery-powered devices that don’t create all their channels even after multiple weeks, yet correctly report their values on some of the channels that were created.
So I think it would be nice if something like the mqtt binding approach were possible. There you can define the thing and the corresponding channels textually.
Arbitrary example I copied from another thread I was reading:
Thing mqtt:topic:PlugWaschmaschine "PlugWaschmaschine" (mqtt:broker:MosquittoMqttBroker) {
    Channels:
        Type switch : state "state" [ stateTopic = "zigbee2mqtt/PlugWaschmaschine/state", commandTopic = "zigbee2mqtt/PlugWaschmaschine/set/state", on="ON", off="OFF" ]
        Type datetime : last_seen "last_seen" [ stateTopic = "zigbee2mqtt/PlugWaschmaschine/last_seen" ]
        Type number : power "power" [ stateTopic = "zigbee2mqtt/PlugWaschmaschine/power" ]
        Type number : energy "energy" [ stateTopic = "zigbee2mqtt/PlugWaschmaschine/energy" ]
        Type number : current "current" [ stateTopic = "zigbee2mqtt/PlugWaschmaschine/current" ]
}
This defines a thing and a channel of type switch with the name state, etc…
Since I also create the yaml for esphome I know what channels should be there.
So it should be easy to create the corresponding thing file.
Additionally, when I update the yaml, I can just add/remove the corresponding channel.
Even a yaml → things file converter should be relatively easy to do.
If you have experienced channel trouble with this binding, I’m happy to take a look at TRACE level logs.
I’m using your suggested setup myself for MQTT topics from HA - it works well, but the setup is tedious, which was in fact one of the main reasons for starting this binding. If this is what you are after, why not simply use the mqtt transport instead of api?
One could of course start parsing the ESPHome yaml files, but that might not be what the device is actually running.
(What I personally would love to see is GUI support for detecting broken links between items and things - the same data as output by openhab> openhab:links orphan list. This would make it much quicker to find broken links than logging into the openHAB server via ssh/openhab-cli.)
I used the Shelly binding as an example because I’ve spent lots of time troubleshooting it, and these are very hard errors to track down. I am confident your binding is bug-free and issues will never arise during channel creation; however, it’s always nice to have a fallback if things go south, and that’s the only thing I am asking for.
Do the automatic discovery, unless the user has configured the channels manually; in that case, only do the subscription.
Additionally, this gives the user a choice: do I prefer the convenience, or is it worth a little bit of extra effort?
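To illustrate the ask, here is a purely hypothetical definition in the style of the mqtt example above. The binding does not support this syntax today; the thing UID and channel name are borrowed from earlier in this thread, everything else is made up:

// Hypothetical only - not parsed by the binding today
Thing esphome:device:esp2 "ESP2" [ hostname="esp2.local" ] {
    Channels:
        Type number : still_energy "Still Energy"
}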
I get what you’re hinting at, and that’s why this would only work manually. Only then is it possible to ensure that what was flashed is the same as what the yaml describes. The workflow would be
A python yaml to thing generator is definitely something I would use.
That’s how I do it now.
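A rough sketch of what such a generator could look like, assuming PyYAML and the hypothetical channel syntax from the example above (which the binding does not actually parse):

#!/usr/bin/env python3
# Hypothetical sketch: emit an openHAB .things entry from an ESPHome yaml file.
# The generated channel syntax mirrors the illustrative example above and is
# not something the binding parses today.
import sys
import yaml  # PyYAML

# Map ESPHome platforms to illustrative channel kinds.
PLATFORM_TO_CHANNEL = {"sensor": "number", "binary_sensor": "switch", "switch": "switch"}

def generate(path, thing_id):
    with open(path) as f:
        config = yaml.safe_load(f)
    lines = [f'Thing esphome:device:{thing_id} "{thing_id}" {{', "    Channels:"]
    for platform, channel_kind in PLATFORM_TO_CHANNEL.items():
        for entry in config.get(platform) or []:
            name = entry.get("name")
            if name:
                channel_id = name.lower().replace(" ", "_")
                lines.append(f'        Type {channel_kind} : {channel_id} "{name}"')
    lines.append("}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(generate(sys.argv[1], sys.argv[2]))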
Thanks for the reminder. This has been on my HABApp backlog for ages but I forgot about it.
I think I’ll add it into the next release.