Insteon (internal) state machine and missing messages?

Hi all, I’m one month into OpenHAB here. Excellent stuff, and a big jump over the MisterHouse setup I had been running for 10 (20?) years!

One thing I’ve noticed is that I’m occasionally missing notifications from my #2842-222 motion sensor. Turning on TRACE, I see something strange with the internal state machine, and I was wondering: is this normal activity, do I have a “funky” 2842-222 (3112 Rev2.3) that’s not quite following the rules, etc.? I’ve started dabbling with Z-Wave, so I might just ditch Insteon if the devices aren’t reliable… Things had been working OK in MisterHouse, but IIRC it was more likely to send duplicate messages in some cases.

Anyhoo, in TRACE I see things like:

2020-06-06 04:37:27.886 [DEBUG] [ding.insteon.internal.InsteonBinding] - got msg: IN:Cmd:0x50|fromAddress:21.C4.53|toAddress:11.00.01|messageFlags:0xCF=ALL_LINK_BROADCAST:3:3|command1:0x06|command2:0x00|
2020-06-06 04:37:27.889 [TRACE] [rnal.device.GroupMessageStateMachine] - group state: EXPECT_BCAST --SUCCESS--> EXPECT_BCAST, publish: false

2020-06-06 04:37:27.889 [DEBUG] [on.internal.device.MessageDispatcher] - message ignored as duplicate. Matches group: false matches filter: true

2020-06-06 04:37:27.889 [TRACE] [on.internal.device.MessageDispatcher] - 21.C4.53:GenericLastTime->LastTimeHandler IN:Cmd:0x50|fromAddress:21.C4.53|toAddress:11.00.01|messageFlags:0xCF=ALL_LINK_BROADCAST:3:3|command1:0x06|command2:0x00|

2020-06-06 04:37:27.890 [DEBUG] [nsteon.internal.device.DeviceFeature] - 21.C4.53:GenericLastTime publishing: 2020-06-06T04:37:27.890-0400

This is the cleanup message after getting the sensor details following a 0x11 ON message; the state machine remains in EXPECT_BCAST, which all makes sense.

But then, 30 seconds later, when the motion sensor triggers the 0x13 OFF, the state machine thinks we are in the EXPECT_CLEAN state and so treats this 0x13 as a duplicate.

2020-06-06 04:38:02.061 [DEBUG] [ding.insteon.internal.InsteonBinding] - got msg: IN:Cmd:0x50|fromAddress:21.C4.53|toAddress:00.00.01|messageFlags:0xCF=ALL_LINK_BROADCAST:3:3|command1:0x13|command2:0x01|
2020-06-06 04:38:02.062 [DEBUG] [nsteon.internal.device.InsteonDevice] - ----- applying message to feature: MotionSensorData
2020-06-06 04:38:02.062 [TRACE] [steon.internal.device.MessageHandler] - updating group state for 1 to BCAST
2020-06-06 04:38:02.062 [TRACE] [rnal.device.GroupMessageStateMachine] - group state: EXPECT_CLEAN --BCAST--> EXPECT_CLEAN, publish: false
2020-06-06 04:38:02.063 [DEBUG] [on.internal.device.MessageDispatcher] - message ignored as duplicate. Matches group: true matches filter: true

I was a little surprised that the state machine thought we were in EXPECT_CLEAN when the last interaction left things in EXPECT_BCAST.

There is some activity in between the motion sensor messages, where we turn on a dimmer:

2020-06-06 04:37:41.801 [DEBUG] [binding.insteon.internal.driver.Port] - writing (500): OUT:Cmd:0x62|toAddress:29.6E.BB|messageFlags:0x0F=DIRECT:3:3|command1:0x11|command2:0x4D|
2020-06-06 04:37:42.133 [DEBUG] [ding.insteon.internal.InsteonBinding] - got msg: IN:Cmd:0x50|fromAddress:29.6E.BB|toAddress:34.77.FF|messageFlags:0x2F=ACK_OF_DIRECT:3:3|command1:0x11|command2:0x4D|
2020-06-06 04:37:42.134 [DEBUG] [on.internal.device.MessageDispatcher] - 29.6E.BB:GenericDimmer DIRECT_ACK: q:QUERY_PENDING cmd: 80
2020-06-06 04:37:42.135 [DEBUG] [nsteon.internal.device.DeviceFeature] - 29.6E.BB:GenericDimmer publishing: 30
2020-06-06 04:37:42.136 [TRACE] [nsteon.internal.device.InsteonDevice] - handled reply of direct: GenericDimmer(1:3:8)
2020-06-06 04:37:42.136 [TRACE] [on.internal.device.MessageDispatcher] - 29.6E.BB:GenericLastTime->LastTimeHandler IN:Cmd:0x50|fromAddress:29.6E.BB|toAddress:34.77.FF|messageFlags:0x2F=ACK_OF_DIRECT:3:3|command1:0x11|command2:0x4D|
2020-06-06 04:37:42.137 [DEBUG] [nsteon.internal.device.DeviceFeature] - 29.6E.BB:GenericLastTime publishing: 2020-06-06T04:37:42.137-0400
2020-06-06 04:37:42.137 [TRACE] [nternal.device.DeviceFeatureListener] - old state: DateTimeType:2020-06-06T04:34:06.784-0400=?2020-06-06T04:37:42.137-0400
2020-06-06 04:37:42.329 [TRACE] [binding.insteon.internal.driver.Port] - writer checking message queue
2020-06-06 04:37:43.801 [DEBUG] [.internal.device.RequestQueueManager] - device queue for 29.6E.BB is empty!

But that doesn’t seem to involve the state machine, and since we got the ACK and it evidently included everything we wanted, there’s no reference to “group 1”.

So, I was just curious what other people thought about this internal state machine and why it thought things still needed to be cleaned up.

FWIW, I also see things in the logs like:

2020-06-06 04:26:50.100 [TRACE] [nding.insteon.internal.driver.Poller] - entry 21.C4.53/Sat Jun 06 04:26:50 EDT 2020 expired at time 1591432010100
2020-06-06 04:31:50.100 [TRACE] [nding.insteon.internal.driver.Poller] - entry 21.C4.53/Sat Jun 06 04:31:50 EDT 2020 expired at time 1591432310100
2020-06-06 04:36:50.100 [TRACE] [nding.insteon.internal.driver.Poller] - entry 21.C4.53/Sat Jun 06 04:36:50 EDT 2020 expired at time 1591432610100
2020-06-06 04:41:50.101 [TRACE] [nding.insteon.internal.driver.Poller] - entry 21.C4.53/Sat Jun 06 04:41:50 EDT 2020 expired at time 1591432910101

It’s not apparent whether that’s just related to not seeing the response because we’re in the wrong state, or whether the device itself is misbehaving and not responding correctly, leaving us in a pending state. I’d say that’s a possibility, and I will try a different device (it will be the same vintage and firmware level, though) to see if it behaves differently.

-nik

This code basically hasn’t been touched in almost 5 1/2 years (Dec 24, 2014). Here are the comments on how the state machine works:

 * Ideally, Insteon ALL LINK messages are received in this order, and
 * only a single one of each:
 *
 * BCAST (a broadcast message from the device to all group members)
 * CLEAN (a cleanup point-to-point message to ensure more reliable transmission)
 * SUCCESS (a broadcast report of success or failure of cleanup, with cmd1=0x06)
 *
 * But often, the BCAST, CLEAN and SUCCESS messages are retransmitted multiple times,
 * or (less frequently) messages are lost. The present state machine was developed
 * to remove duplicates, yet make sure that a single lost message does not cause
 * the binding to miss an update.
 *
 *                          "SUCCESS"
 *                         EXPECT_BCAST
 *                    ^ /                ^ \
 *           SUCCESS / /                  \ \ [BCAST]
 *                  / /['CLEAN']  'SUCCESS'\ \
 *                 / /                      \ \
 *                / V         CLEAN          \ V
 * "CLEAN" EXPECT_SUCCESS <-------------- EXPECT_CLEAN "BCAST"
 *                         -------------->
 *                            ['BCAST']
 *
 * How to read this diagram:
 *
 * Regular, expected, non-duplicate messages do not have any quotes around them,
 * and lead to the obvious state transitions.
 *
 * The actions in [square brackets] are transitions that cause a state
 * update to be published when they occur.
 *
 * The presence of double quotes indicates a duplicate that does not lead
 * to any state transitions, i.e. it is simply ignored.
 *
 * Single quotes indicate a message that is the result of a single dropped
 * message, and leads to a state transition, in some cases even to a state
 * update to be published.
 *
 * For instance at the top of the diagram, if a "SUCCESS" message is received
 * when in state EXPECT_BCAST, it is considered a duplicate (it has "").
 *
 * When in state EXPECT_SUCCESS though, receiving a ['BCAST'] is most likely because
 * the SUCCESS message was missed, and therefore it is considered the result
 * of a single lost message (has '' around it). The state changes to EXPECT_CLEAN,
 * and the message should lead to publishing of a state update (it has [] around it).
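
In code terms the diagram boils down to roughly the following. This is just my sketch, with made-up names, to show the transitions and publish decisions; it is not the binding’s actual source:

enum GroupState { EXPECT_BCAST, EXPECT_CLEAN, EXPECT_SUCCESS }
enum GroupMessage { BCAST, CLEAN, SUCCESS }

class GroupStateMachineSketch {
    private GroupState state = GroupState.EXPECT_BCAST;

    // Feed one ALL-LINK message for this group; returns true if a state update should be published.
    boolean action(GroupMessage msg) {
        boolean publish = false;
        switch (state) {
            case EXPECT_BCAST:
                if (msg == GroupMessage.BCAST) { state = GroupState.EXPECT_CLEAN; publish = true; }        // normal case
                else if (msg == GroupMessage.CLEAN) { state = GroupState.EXPECT_SUCCESS; publish = true; } // BCAST was lost
                // a SUCCESS here is a duplicate: stay in EXPECT_BCAST, publish nothing
                break;
            case EXPECT_CLEAN:
                if (msg == GroupMessage.CLEAN) { state = GroupState.EXPECT_SUCCESS; }                      // normal case
                else if (msg == GroupMessage.SUCCESS) { state = GroupState.EXPECT_BCAST; }                 // CLEAN was lost
                // a BCAST here is a duplicate: stay in EXPECT_CLEAN
                break;
            case EXPECT_SUCCESS:
                if (msg == GroupMessage.SUCCESS) { state = GroupState.EXPECT_BCAST; }                      // normal case
                else if (msg == GroupMessage.BCAST) { state = GroupState.EXPECT_CLEAN; publish = true; }   // SUCCESS was lost
                // a CLEAN here is a duplicate: stay in EXPECT_SUCCESS
                break;
        }
        return publish;
    }
}

Either way, once a SUCCESS (or a 'SUCCESS' standing in for a lost CLEAN) arrives, the machine should be back in EXPECT_BCAST before the next motion event.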

My guess is that it is your device acting up and sometimes not sending the CLEAN message.

Thanks, very helpful. I agree on the sensor, I’m going to replace it and see how things behave.

The diagram makes sense: if we are in EXPECT_BCAST and get a “SUCCESS”, it’s ignored as a duplicate. What I don’t get from the trace logs is that the next time we see state machine trace activity, the group 1 state looks like EXPECT_CLEAN (2020-06-06 04:38:02.062).

Reading the code, if we get a SUCCESS message, we always change state to EXPECT_BCAST.

FYI device activity is typically:

$ grep -E -i "21.c4.53" openhab.log|grep -E "got msg: IN"| perl -nle '/^(\S+\s*\S+).*messageFlags:.*=([^:]+).*command1:([^\|]+)/ and print "$1 $2 $3"'
2020-06-03 08:30:12.313 ALL_LINK_BROADCAST 0x11
2020-06-03 08:30:12.496 ALL_LINK_BROADCAST 0x11
2020-06-03 08:30:12.714 ALL_LINK_BROADCAST 0x06
2020-06-03 08:30:12.963 ALL_LINK_BROADCAST 0x06
2020-06-03 08:30:47.075 ALL_LINK_BROADCAST 0x13
2020-06-03 08:30:47.246 ALL_LINK_BROADCAST 0x13
2020-06-03 08:30:47.459 ALL_LINK_BROADCAST 0x06
2020-06-03 08:30:47.645 ALL_LINK_BROADCAST 0x06
2020-06-03 08:31:40.628 ALL_LINK_BROADCAST 0x11
2020-06-03 08:31:40.816 ALL_LINK_BROADCAST 0x11
2020-06-03 08:31:41.074 ALL_LINK_BROADCAST 0x06
2020-06-03 08:31:41.218 ALL_LINK_BROADCAST 0x06
2020-06-03 08:32:10.373 ALL_LINK_BROADCAST 0x13
2020-06-03 08:32:10.559 ALL_LINK_BROADCAST 0x13
2020-06-03 08:32:10.771 ALL_LINK_BROADCAST 0x06
2020-06-03 08:32:10.962 ALL_LINK_BROADCAST 0x06
2020-06-03 08:33:36.045 ALL_LINK_BROADCAST 0x11
2020-06-03 08:33:36.229 ALL_LINK_BROADCAST 0x11
2020-06-03 08:33:36.443 ALL_LINK_BROADCAST 0x06
2020-06-03 08:33:36.633 ALL_LINK_BROADCAST 0x06
2020-06-03 08:34:15.838 ALL_LINK_BROADCAST 0x13
2020-06-03 08:34:15.976 ALL_LINK_BROADCAST 0x13
2020-06-03 08:34:16.185 ALL_LINK_BROADCAST 0x06
2020-06-03 08:34:16.381 ALL_LINK_BROADCAST 0x06
2020-06-03 08:39:46.906 ALL_LINK_BROADCAST 0x11
2020-06-03 08:39:47.136 ALL_LINK_BROADCAST 0x11

Lots of dups! But basically it’s ON, CLEAN, OFF, CLEAN, repeat.

What’s funny is that at times we get some direct messages but no CLEAN:

2020-06-06 04:24:47.960 ALL_LINK_BROADCAST 0x11
2020-06-06 04:24:48.053 ALL_LINK_BROADCAST 0x11
2020-06-06 04:24:49.514 ACK_OF_DIRECT      0x2E
2020-06-06 04:24:50.821 DIRECT             0x2E
2020-06-06 04:25:30.597 ALL_LINK_BROADCAST 0x13
2020-06-06 04:25:30.734 ALL_LINK_BROADCAST 0x13
2020-06-06 04:25:32.180 ACK_OF_DIRECT      0x2E
2020-06-06 04:25:32.444 DIRECT             0x2E
2020-06-06 04:37:27.310 ALL_LINK_BROADCAST 0x11
2020-06-06 04:37:27.486 ALL_LINK_BROADCAST 0x11
2020-06-06 04:37:27.692 ALL_LINK_BROADCAST 0x06
2020-06-06 04:37:27.886 ALL_LINK_BROADCAST 0x06
2020-06-06 04:38:02.061 ALL_LINK_BROADCAST 0x13
2020-06-06 04:38:02.253 ALL_LINK_BROADCAST 0x13
2020-06-06 04:38:02.458 ALL_LINK_BROADCAST 0x06
2020-06-06 04:38:02.651 ALL_LINK_BROADCAST 0x06

So that’s ON, DIRECT, OFF, DIRECT, and no CLEAN in the 4:24-4:25 time range.

These direct messages are querying light level and battery level, and they seem to be related to the confusion. Possibly if the device weren’t being asked for that info, it would behave more normally.

There’s still something I’m missing in the state machine, because here’s what you see if you intersperse the state machine traces with the messages above:

2020-06-06 04:24:47.960 - got msg: IN:Cmd:0x50|fromAddress:21.C4.53|toAddress:00.00.01|messageFlags:0xC7=ALL_LINK_BROADCAST:3:1|command1:0x11|command2:0x01|
2020-06-06 04:24:47.960 - updating group state for 1 to BCAST
2020-06-06 04:24:47.960 - group state: EXPECT_BCAST --BCAST--> EXPECT_CLEAN, publish: true
2020-06-06 04:24:47.961 - updating group state for 1 to BCAST
2020-06-06 04:24:47.961 - group state: EXPECT_BCAST --BCAST--> EXPECT_CLEAN, publish: true
2020-06-06 04:24:47.962 - updating group state for 1 to BCAST
2020-06-06 04:24:47.962 - group state: EXPECT_BCAST --BCAST--> EXPECT_CLEAN, publish: true
2020-06-06 04:24:47.964 - updating group state for 1 to BCAST
2020-06-06 04:24:47.964 - group state: EXPECT_BCAST --BCAST--> EXPECT_CLEAN, publish: true
2020-06-06 04:24:48.053 - got msg: IN:Cmd:0x50|fromAddress:21.C4.53|toAddress:00.00.01|messageFlags:0xCF=ALL_LINK_BROADCAST:3:3|command1:0x11|command2:0x01|
2020-06-06 04:24:48.053 - updating group state for 1 to BCAST
2020-06-06 04:24:48.053 - group state: EXPECT_CLEAN --BCAST--> EXPECT_CLEAN, publish: false
2020-06-06 04:24:48.053 - updating group state for 1 to BCAST
2020-06-06 04:24:48.053 - group state: EXPECT_CLEAN --BCAST--> EXPECT_CLEAN, publish: false
2020-06-06 04:24:48.054 - updating group state for 1 to BCAST
2020-06-06 04:24:48.054 - group state: EXPECT_CLEAN --BCAST--> EXPECT_CLEAN, publish: false
2020-06-06 04:24:48.054 - updating group state for 1 to BCAST
2020-06-06 04:24:48.054 - group state: EXPECT_CLEAN --BCAST--> EXPECT_CLEAN, publish: false
2020-06-06 04:24:49.514 - got msg: IN:Cmd:0x50|fromAddress:21.C4.53|toAddress:34.77.FF|messageFlags:0x2F=ACK_OF_DIRECT:3:3|command1:0x2E|command2:0x00|
2020-06-06 04:24:50.821 - got msg: IN:Cmd:0x51|fromAddress:21.C4.53|toAddress:34.77.FF|messageFlags:0x1F=DIRECT:3:3|command1:0x2E|command2:0x00|userData1:0x01|userData2:0x01|userData3:0x64|userData4:0x01|userData5:0x80|userData6:0x0E|userData7:0x00|userData8:0x71|userData9:0x0F|userData10:0x01|userData11:0x03|userData12:0x57|userData13:0x00|userData14:0xD2|
2020-06-06 04:25:30.597 - got msg: IN:Cmd:0x50|fromAddress:21.C4.53|toAddress:00.00.01|messageFlags:0xCB=ALL_LINK_BROADCAST:3:2|command1:0x13|command2:0x01|
2020-06-06 04:25:30.597 - updating group state for 1 to BCAST
2020-06-06 04:25:30.598 - group state: EXPECT_BCAST --BCAST--> EXPECT_CLEAN, publish: true
2020-06-06 04:25:30.598 - updating group state for 1 to BCAST
2020-06-06 04:25:30.598 - group state: EXPECT_BCAST --BCAST--> EXPECT_CLEAN, publish: true
2020-06-06 04:25:30.599 - updating group state for 1 to BCAST
2020-06-06 04:25:30.600 - group state: EXPECT_BCAST --BCAST--> EXPECT_CLEAN, publish: true
2020-06-06 04:25:30.602 - updating group state for 1 to BCAST
2020-06-06 04:25:30.602 - group state: EXPECT_BCAST --BCAST--> EXPECT_CLEAN, publish: true
2020-06-06 04:25:30.734 - got msg: IN:Cmd:0x50|fromAddress:21.C4.53|toAddress:00.00.01|messageFlags:0xCF=ALL_LINK_BROADCAST:3:3|command1:0x13|command2:0x01|
2020-06-06 04:25:30.734 - updating group state for 1 to BCAST
2020-06-06 04:25:30.734 - group state: EXPECT_CLEAN --BCAST--> EXPECT_CLEAN, publish: false
2020-06-06 04:25:30.735 - updating group state for 1 to BCAST
2020-06-06 04:25:30.735 - group state: EXPECT_CLEAN --BCAST--> EXPECT_CLEAN, publish: false
2020-06-06 04:25:30.736 - updating group state for 1 to BCAST
2020-06-06 04:25:30.736 - group state: EXPECT_CLEAN --BCAST--> EXPECT_CLEAN, publish: false
2020-06-06 04:25:30.737 - updating group state for 1 to BCAST
2020-06-06 04:25:30.737 - group state: EXPECT_CLEAN --BCAST--> EXPECT_CLEAN, publish: false
2020-06-06 04:25:32.180 - got msg: IN:Cmd:0x50|fromAddress:21.C4.53|toAddress:34.77.FF|messageFlags:0x2F=ACK_OF_DIRECT:3:3|command1:0x2E|command2:0x00|
2020-06-06 04:25:32.444 - got msg: IN:Cmd:0x51|fromAddress:21.C4.53|toAddress:34.77.FF|messageFlags:0x1F=DIRECT:3:3|command1:0x2E|command2:0x00|userData1:0x01|userData2:0x01|userData3:0x64|userData4:0x01|userData5:0x80|userData6:0x0E|userData7:0x00|userData8:0x71|userData9:0x0F|userData10:0x00|userData11:0x34|userData12:0x5C|userData13:0x00|userData14:0xD2|
2020-06-06 04:37:27.310 - got msg: IN:Cmd:0x50|fromAddress:21.C4.53|toAddress:00.00.01|messageFlags:0xCF=ALL_LINK_BROADCAST:3:3|command1:0x11|command2:0x01|
2020-06-06 04:37:27.310 - updating group state for 1 to BCAST
2020-06-06 04:37:27.310 - group state: EXPECT_CLEAN --BCAST--> EXPECT_CLEAN, publish: false
2020-06-06 04:37:27.311 - updating group state for 1 to BCAST
2020-06-06 04:37:27.311 - group state: EXPECT_CLEAN --BCAST--> EXPECT_CLEAN, publish: false
2020-06-06 04:37:27.312 - updating group state for 1 to BCAST
2020-06-06 04:37:27.312 - group state: EXPECT_CLEAN --BCAST--> EXPECT_CLEAN, publish: false
2020-06-06 04:37:27.313 - updating group state for 1 to BCAST
2020-06-06 04:37:27.313 - group state: EXPECT_CLEAN --BCAST--> EXPECT_CLEAN, publish: false
2020-06-06 04:37:27.486 - got msg: IN:Cmd:0x50|fromAddress:21.C4.53|toAddress:00.00.01|messageFlags:0xCF=ALL_LINK_BROADCAST:3:3|command1:0x11|command2:0x01|
2020-06-06 04:37:27.487 - updating group state for 1 to BCAST
2020-06-06 04:37:27.487 - group state: EXPECT_CLEAN --BCAST--> EXPECT_CLEAN, publish: false
2020-06-06 04:37:27.488 - updating group state for 1 to BCAST
2020-06-06 04:37:27.488 - group state: EXPECT_CLEAN --BCAST--> EXPECT_CLEAN, publish: false
2020-06-06 04:37:27.488 - updating group state for 1 to BCAST
2020-06-06 04:37:27.489 - group state: EXPECT_CLEAN --BCAST--> EXPECT_CLEAN, publish: false
2020-06-06 04:37:27.489 - updating group state for 1 to BCAST
2020-06-06 04:37:27.489 - group state: EXPECT_CLEAN --BCAST--> EXPECT_CLEAN, publish: false

At 04:24:48.054 we are in EXPECT_CLEAN, and after some direct messages we are back in EXPECT_BCAST at 04:25:30.598. It’s possible direct messages reset the state (I haven’t looked for that code yet); however, at 04:25:30.737 we are in EXPECT_CLEAN, and after some direct messages we remain in EXPECT_CLEAN at 04:37:27.310. Our group state always seems to be for group 1.

It’s almost like the hashmap lookup (at line 288) is sometimes not finding an entry, so it returns a new object that defaults to EXPECT_BCAST, but then at other times it finds the original object and returns a previous state that we have already moved past.
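
The kind of lookup I have in mind is something like this (purely hypothetical, just to illustrate the failure mode I’m imagining, reusing the little sketch class from earlier in the thread rather than the binding’s real classes):

import java.util.HashMap;
import java.util.Map;

class GroupStateLookupSketch {
    // one state machine per ALL-LINK group, keyed by group number (my assumption)
    private final Map<Integer, GroupStateMachineSketch> groupStates = new HashMap<>();

    GroupStateMachineSketch getStateMachine(int group) {
        GroupStateMachineSketch sm = groupStates.get(group);
        if (sm == null) {
            // a brand-new machine starts out in EXPECT_BCAST, so a lookup that
            // unexpectedly missed would look exactly like the group state being "reset"
            sm = new GroupStateMachineSketch();
            groupStates.put(group, sm);
        }
        return sm;
    }
}

If the get() sometimes missed (wrong key, different map instance, whatever), you’d see exactly the pattern above: a fresh EXPECT_BCAST when you didn’t expect it, and a stale EXPECT_CLEAN when the old entry comes back.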

I know, I know, that would have rather deep implications for the JDK, and I would imagine a whole ton of other issues would show up, huge red flags :slight_smile:, so I’m guessing it’s probably not a bug there; it just seems like that is the behavior.

FWIW my JDK is currently:

$ rpm -qa|grep -i jdk
zulu8-jdk-8.0.252-1.x86_64
zulu8-jdk-headless-8.0.252-1.x86_64

and I haven’t looked at updates for that so I think I’m going to look at refreshing the JDK regardless.

Sorry if I’m a bit too chatty here, just enjoying getting my hands into some deep tech stuff. :slight_smile:

Take care.

-nik

I really doubt that there is an issue with Java; it’s more than likely that the poll of the device is causing it to not send the CLEAN message. By default the device should be polled about every 300 seconds.

Agreed, that would be a pretty strange Java bug. I might try shoehorning in a TRACE message whenever a new state machine is generated, and possibly its address, if that’s possible. Maybe I’m just a suspicious person?
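
Something along these lines is what I mean, dropped into the hypothetical getStateMachine() from my earlier sketch (assuming the usual slf4j logger; the identity hash is the closest thing to an “address” I can easily get at):

    GroupStateMachineSketch getStateMachine(int group) {
        GroupStateMachineSketch sm = groupStates.get(group);
        if (sm == null) {
            sm = new GroupStateMachineSketch();
            groupStates.put(group, sm);
            // make any "surprise" fallback to EXPECT_BCAST visible in the TRACE log,
            // and include the identity hash so I can tell old instances from new ones
            logger.trace("created new group state machine {} for group {}",
                    Integer.toHexString(System.identityHashCode(sm)), group);
        }
        return sm;
    }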

I’ll try a new motion sensor. I have three, but unfortunately they are all the same vintage (3112 REV2.3), all bought at the same time. I have an older #2420M (2611 REV 2.0), but that didn’t work; it’s not really supported.

I’ll let you know how things progress. So far so good, but it doesn’t fail every time; I just notice it at night because the light doesn’t get turned on.


Swapped to another motion sensor, and it shows the same issues. I have a rule that logs whenever we see an OPEN or CLOSED update published. It should be nicely balanced, but there are occasional drops:

2020-06-08 18:16:01.987 [INFO ] [lipse.smarthome.model.script.Hallway] - DEBUG - Saw update OPEN
2020-06-08 18:21:20.259 [INFO ] [lipse.smarthome.model.script.Hallway] - DEBUG - Saw update CLOSED
2020-06-08 18:28:05.383 [INFO ] [lipse.smarthome.model.script.Hallway] - DEBUG - Saw update OPEN
2020-06-08 18:35:38.152 [INFO ] [lipse.smarthome.model.script.Hallway] - DEBUG - Saw update OPEN
2020-06-08 18:36:09.733 [INFO ] [lipse.smarthome.model.script.Hallway] - DEBUG - Saw update CLOSED
2020-06-08 18:38:09.849 [INFO ] [lipse.smarthome.model.script.Hallway] - DEBUG - Saw update OPEN
2020-06-08 18:39:01.615 [INFO ] [lipse.smarthome.model.script.Hallway] - DEBUG - Saw update CLOSED

2020-06-09 03:53:40.468 [INFO ] [lipse.smarthome.model.script.Hallway] - DEBUG - Saw update OPEN
2020-06-09 03:54:15.201 [INFO ] [lipse.smarthome.model.script.Hallway] - DEBUG - Saw update CLOSED
2020-06-09 05:03:19.390 [INFO ] [lipse.smarthome.model.script.Hallway] - DEBUG - Saw update OPEN
2020-06-09 05:04:10.131 [INFO ] [lipse.smarthome.model.script.Hallway] - DEBUG - Saw update CLOSED
2020-06-09 05:04:44.783 [INFO ] [lipse.smarthome.model.script.Hallway] - DEBUG - Saw update CLOSED
2020-06-09 05:07:48.798 [INFO ] [lipse.smarthome.model.script.Hallway] - DEBUG - Saw update OPEN
2020-06-09 05:08:43.665 [INFO ] [lipse.smarthome.model.script.Hallway] - DEBUG - Saw update CLOSED

Your note about the polling was interesting. Maybe I’ll look at whether it’s possible to set up a “new” device that doesn’t poll for status, to see if that keeps things happier. Not sure yet if that’s possible, but I’ll look in the new device config stuff to see.

-nik