After upgrading openHAB (and the Z-Wave binding) to version 2.5 I see lots of CANCELLED transactions in the Z-Wave log, for example:
2019-12-25 21:27:38.077 [DEBUG] [nal.protocol.ZWaveTransactionManager] - NODE 40: Bump transaction 4637 priority from Immediate to Immediate
2019-12-25 21:27:38.077 [DEBUG] [nal.protocol.ZWaveTransactionManager] - NODE 40: Adding to device queue
2019-12-25 21:27:38.077 [DEBUG] [nal.protocol.ZWaveTransactionManager] - NODE 40: Added 4637 to queue - size 18
2019-12-25 21:27:38.078 [DEBUG] [nal.protocol.ZWaveTransactionManager] - Transaction SendNextMessage 1 out at start. Holdoff false.
2019-12-25 21:27:38.128 [DEBUG] [sactionManager$ZWaveTransactionTimer] - NODE 40: TID 4558: Timeout at state WAIT_REQUEST. 3 retries remaining.
2019-12-25 21:27:38.128 [DEBUG] [sactionManager$ZWaveTransactionTimer] - TID 4558: Transaction is current transaction, so clearing!!!!!
2019-12-25 21:27:38.128 [DEBUG] [e.internal.protocol.ZWaveTransaction] - TID 4558: Transaction CANCELLED
2019-12-25 21:27:38.128 [DEBUG] [ding.zwave.handler.ZWaveThingHandler] - NODE 40: Got an event from Z-Wave network: ZWaveTransactionCompletedEvent
2019-12-25 21:27:38.129 [DEBUG] [nal.protocol.ZWaveTransactionManager] - NODE 40: notifyTransactionResponse TID:4558 CANCELLED
2019-12-25 21:27:38.129 [DEBUG] [nal.protocol.ZWaveTransactionManager] - Transaction SendNextMessage 0 out at start. Holdoff false.
2019-12-25 21:27:38.130 [DEBUG] [wave.internal.protocol.SerialMessage] - Assembled message buffer = 01 09 00 13 28 02 84 08 25 7E 18
2019-12-25 21:27:38.130 [DEBUG] [ing.zwave.handler.ZWaveSerialHandler] - NODE 40: Sending REQUEST Message = 01 09 00 13 28 02 84 08 25 7E 18
2019-12-25 21:27:38.130 [DEBUG] [nal.protocol.ZWaveTransactionManager] - TID 4558: Transaction event listener: DONE: CANCELLED ->
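As a side note, the "Assembled message buffer" line above can be sanity-checked by hand. This is a sketch in Python, assuming the standard Z-Wave Serial API frame layout (SOF, length, data, checksum, where the checksum is 0xFF XOR'ed with every byte except the SOF and the checksum itself); the field labels in the comments are my own reading of the bytes, not binding output.

```python
# Verify the checksum of the frame from the log above.
FRAME = bytes.fromhex("01 09 00 13 28 02 84 08 25 7E 18".replace(" ", ""))

def zwave_checksum(frame: bytes) -> int:
    """XOR-checksum over the length and data bytes (skip SOF and the
    trailing checksum byte itself), seeded with 0xFF."""
    chk = 0xFF
    for b in frame[1:-1]:
        chk ^= b
    return chk

assert zwave_checksum(FRAME) == FRAME[-1]  # 0x18 -> the frame is intact

# Rough field decode (my labels, assuming the Serial API layout):
# 0x00 = REQUEST, 0x13 = ZW_SEND_DATA, 0x28 = node 40,
# 0x02 = payload length, 0x84 0x08 = COMMAND_CLASS_WAKE_UP /
# WAKE_UP_NO_MORE_INFORMATION, 0x25 = TX options, 0x7E = callback id
print(f"node={FRAME[4]}, checksum ok")
```

So the frame itself is well-formed (checksum 0x18 matches), and it is a wake-up "no more information" command to node 40, which fits the log context.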
Soon after the upgrade, all my wired Z-Wave devices were marked as DEAD and went OFFLINE in the Things list. I stopped openHAB, reinserted my Z-Wave USB controller into the PC (to power-cycle it) and launched openHAB again. That was yesterday. No devices have gone offline since, but I still see lots of CANCELLED transactions in the logs.
That said, the devices function almost normally at the same time, even while they were DEAD and OFFLINE: their statuses are being updated and they receive commands. The exceptions are occasional delays of several seconds (which were quite rare before), and two or three times a device did not receive its command at all.
I’ve also tried to update a config parameter of one of my battery devices (waking it up after saving); that transaction was cancelled as well, and the updated value was not applied.
Before the upgrade I was using openHAB 2.4 and a snapshot Z-Wave binding (184.108.40.206901040921), and I never saw this behaviour.
Verify you have no invalid add-ons defined, such as restdocs (moved from misc to ui) and yahooweather (removed because the service was shut down). Invalid add-ons cause all add-ons to be reloaded every minute in an attempt to load the invalid ones.
Check addons.cfg and addons.config. These add-on changes were mentioned in the upgrade warnings and the Release Notes.
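For what it's worth, that check can be scripted. This is a minimal sketch: the invalid combinations come from the two examples named above (yahooweather removed, restdocs moved from misc to ui); the sample config text is illustrative, so point it at your real addons.cfg instead.

```python
# Section/add-on pairs that became invalid after the 2.5 upgrade,
# per the upgrade notes quoted above.
INVALID = {
    ("misc", "restdocs"),        # restdocs moved from misc to ui
    ("binding", "yahooweather"), # yahooweather was removed entirely
}

def find_invalid_addons(cfg_text):
    """Return (section, addon) pairs in addons.cfg text that are known
    to be invalid, skipping comments and blank lines."""
    hits = []
    for raw in cfg_text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        key = key.strip()
        for addon in (a.strip() for a in value.split(",")):
            if (key, addon) in INVALID:
                hits.append((key, addon))
    return hits

# Illustrative config text; read your real addons.cfg in practice.
SAMPLE_CFG = """\
binding = amazondashbutton,zwave,yahooweather
misc = openhabcloud,restdocs
ui = paper,basic
"""
print(find_invalid_addons(SAMPLE_CFG))
```

An empty result means none of the known-bad entries are present; anything printed should be removed or moved to the correct section.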
@Bruce_Osborne, I checked that already - I don’t have invalid add-ons. Here is my addons.cfg:
package = expert
# Access Remote Add-on Repository
# Defines whether the remote openHAB add-on repository should be used for browsing and installing add-ons.
# This not only makes latest snapshots of add-ons available, it is also required for the installation of
# any legacy 1.x add-on. (default is true)
remote = true
# Include legacy 1.x bindings. If set to true, it also allows the installation of 1.x bindings for which there is
# already a 2.x version available (requires remote repo access, see above). (default is false)
legacy = true
# A comma-separated list of bindings to install (e.g. "sonos,knx,zwave")
binding = amazondashbutton,amazonechocontrol,astro,exec1,netatmo,squeezebox,weather1,zwave
# A comma-separated list of UIs to install (e.g. "basic,paper")
ui = paper,basic,habmin,habpanel
# A comma-separated list of persistence services to install (e.g. "rrd4j,jpa")
persistence = jdbc-mysql,mapdb
# A comma-separated list of actions to install (e.g. "mail,pushover")
action = telegram
# A comma-separated list of transformation services to install (e.g. "map,jsonpath")
# A comma-separated list of voice services to install (e.g. "marytts,freetts")
voice = voicerss
# A comma-separated list of miscellaneous services to install (e.g. "myopenhab")
misc = openhabcloud
Yes, of course. I removed the jar file from the addons folder and included “zwave” in addons.cfg, so the 2.5 release version was installed. Same for another binding that was also on a snapshot version.
State updates are being sent multiple times for different devices. Are you sure the association groups are set correctly? Only one group (the correct one) should be associated with your controller; having the controller in several groups can cause issues:
Hi, thank you for the answer; I’ll try to comment on each point:
OK, I’ll try. I thought the nightly heal had only a positive effect.
Yes, I’ve seen this in the logs, but the main question is why it started happening after the upgrade. Everything was OK before, and I didn’t make any hardware changes (except adding a new Popp mold sensor, discussed in another thread).
Nodes 27, 56 and 63 are all wired devices, and not that far from the controller.
No, device 37 has only one association with the controller - the device status group:
Now all my wired devices have gone offline. But they are “working” - receiving commands and sending status updates.
And all of them are stuck on NodeNeighborUpdate failed, and because of this they cannot complete their initialization after a binding restart, so it looks like healing isn’t the cause. (I’ve recalled one detail: I tried to heal one of the wired devices manually via HABmin, and the log said something like “healing cancelled because initialization is not completed”.)
I’ve also seen NodeNeighborUpdate failed for battery devices, but they don’t go offline after that.
I’m starting to think that this NodeNeighborUpdate request has failed for literally every device in my network.
I also noticed one more interesting message in the fresh log: