Zigbee binding

Hi @chris,

From time to time my Telegesis coordinator goes OFFLINE (COMMUNICATION_ERROR). There are no errors in the log, and everything actually keeps working after a few seconds: all devices become OFFLINE (BRIDGE_OFFLINE) as expected, but a few seconds later they start coming back online as the binding receives reports etc. from these devices, even though the bridge stays offline until I disable and re-enable it.

I have tracked the issue down to some contention in the TelegesisFrameHandler:

"TelegesisFrameHandler" Id=43214 in BLOCKED on lock=com.zsmartsystems.zigbee.ZigBeeNetworkManager@d4c7bb
     owned by pool-25-thread-4086 Id=58294
    at com.zsmartsystems.zigbee.ZigBeeNetworkManager.nodeStatusUpdate(ZigBeeNetworkManager.java:873)
    at com.zsmartsystems.zigbee.dongle.telegesis.ZigBeeDongleTelegesis.telegesisEventReceived(ZigBeeDongleTelegesis.java:675)
    at com.zsmartsystems.zigbee.dongle.telegesis.internal.TelegesisFrameHandler.notifyEventReceived(TelegesisFrameHandler.java:467)
      - locked java.util.ArrayList@7f3cf3
    at com.zsmartsystems.zigbee.dongle.telegesis.internal.TelegesisFrameHandler.access$600(TelegesisFrameHandler.java:42)
    at com.zsmartsystems.zigbee.dongle.telegesis.internal.TelegesisFrameHandler$1.run(TelegesisFrameHandler.java:191)

And there are several threads trying to write the network state at the same time:

"ESH-discovery-3132" Id=58037 in BLOCKED on lock=org.openhab.binding.zigbee.internal.ZigBeeNetworkStateSerializerImpl@5a8482
"ESH-discovery-3144" Id=58334 in BLOCKED on lock=org.openhab.binding.zigbee.internal.ZigBeeNetworkStateSerializerImpl@5a8482
"ESH-discovery-3145" Id=58358 in BLOCKED on lock=org.openhab.binding.zigbee.internal.ZigBeeNetworkStateSerializerImpl@5a8482
"ESH-discovery-3148" Id=58396 in BLOCKED on lock=org.openhab.binding.zigbee.internal.ZigBeeNetworkStateSerializerImpl@5a8482
"ESH-discovery-3149" Id=58397 in BLOCKED on lock=org.openhab.binding.zigbee.internal.ZigBeeNetworkStateSerializerImpl@5a8482
"ESH-discovery-3150" Id=58398 in BLOCKED on lock=org.openhab.binding.zigbee.internal.ZigBeeNetworkStateSerializerImpl@5a8482
"ESH-discovery-3151" Id=58399 in BLOCKED on lock=org.openhab.binding.zigbee.internal.ZigBeeNetworkStateSerializerImpl@5a8482
"ESH-discovery-3154" Id=58402 in BLOCKED on lock=org.openhab.binding.zigbee.internal.ZigBeeNetworkStateSerializerImpl@5a8482
"ESH-discovery-3158" Id=58406 in BLOCKED on lock=org.openhab.binding.zigbee.internal.ZigBeeNetworkStateSerializerImpl@5a8482
"ESH-discovery-3159" Id=58407 in BLOCKED on lock=org.openhab.binding.zigbee.internal.ZigBeeNetworkStateSerializerImpl@5a8482
"ESH-discovery-3161" Id=58409 in BLOCKED on lock=org.openhab.binding.zigbee.internal.ZigBeeNetworkStateSerializerImpl@5a8482
"ESH-discovery-3162" Id=58410 in BLOCKED on lock=org.openhab.binding.zigbee.internal.ZigBeeNetworkStateSerializerImpl@5a8482
"ESH-discovery-3168" Id=58416 in BLOCKED on lock=org.openhab.binding.zigbee.internal.ZigBeeNetworkStateSerializerImpl@5a8482

I had increased the number of discovery threads in the past for other reasons. I have now reduced the thread pool back to mitigate this issue and, even though it does not happen as often, it still happens from time to time.

This is not a deadlock, but rather contention (a race of sorts): several threads are trying to take the NetworkManager lock to write the network state, while the thread reading Telegesis frames from the serial port competes with them for the same lock in order to notify a frame-received event.

With >40 devices, a network update triggers around 40 writes of the network state file in a row, which is a costly operation: writing a 3 MB file 40 times to an SD card (RPi), on an encrypted partition. Moreover, each of these writes is synchronized on the network manager.

So we end up with 40 costly writes competing against a single thread (the Telegesis one) that is waiting to notify a frame-received event (the event notification also synchronizes on the NetworkManager) before it can read from the serial port again.

The result (summary of the relevant parts from the log):

... Everything working. TX queue is between empty and 1 / 2 packets

2019-01-13 09:27:02.811 [DEBUG] [gesis.internal.TelegesisFrameHandler] - TX Telegesis: TelegesisSendUnicastCommand [address=23332, sourceEp=0, destEp=1, profileId=260, clusterId=6, messageData=00 6E 00 00 00, messageId=null]
2019-01-13 09:27:02.814 [DEBUG] [gesis.internal.TelegesisFrameHandler] - TX Telegesis Data:AT+SENDUCASTB:05,5B24,00,01,0104,0006^M^@n^@^@^@
2019-01-13 09:27:02.815 [TRACE] [gesis.internal.TelegesisFrameHandler] - TELEGESIS Timer: Stop
2019-01-13 09:27:02.816 [TRACE] [gesis.internal.TelegesisFrameHandler] - TELEGESIS Timer: Start
2019-01-13 09:27:02.818 [TRACE] [gesis.internal.TelegesisFrameHandler] - TELEGESIS: Get Packet
2019-01-13 09:27:02.901 [DEBUG] [ngle.telegesis.ZigBeeDongleTelegesis] - Telegesis send: TelegesisSendUnicastCommand [address=22328, sourceEp=0, destEp=1, profileId=260, clusterId=1280, messageData=00 70 00 01 00, messageId=null]
2019-01-13 09:27:02.902 [DEBUG] [gesis.internal.TelegesisFrameHandler] - TX Telegesis queue: 2
2019-01-13 09:27:03.054 [DEBUG] [gesis.internal.TelegesisFrameHandler] - RX Telegesis Data:SR:00,7CB03EAA00B1A662,731E
2019-01-13 09:27:03.055 [DEBUG] [gesis.internal.TelegesisFrameHandler] - RX Telegesis: TelegesisRouteRecordMessageEvent [hops=0, remoteAddress=7CB03EAA00B1A662, networkRoute=[29470]]
2019-01-13 09:27:03.057 [DEBUG] [ngle.telegesis.ZigBeeDongleTelegesis] - Unhandled Telegesis Frame: TelegesisRouteRecordMessageEvent [hops=0, remoteAddress=7CB03EAA00B1A662, networkRoute=[29470]]
2019-01-13 09:27:03.058 [TRACE] [gesis.internal.TelegesisFrameHandler] - TELEGESIS: Get Packet
2019-01-13 09:27:03.074 [DEBUG] [gesis.internal.TelegesisFrameHandler] - RX Telegesis Data:ACK:2C
2019-01-13 09:27:03.076 [DEBUG] [gesis.internal.TelegesisFrameHandler] - RX Telegesis: TelegesisAckMessageEvent [messageId=44]
2019-01-13 09:27:03.077 [DEBUG] [ngle.telegesis.ZigBeeDongleTelegesis] - Unhandled Telegesis Frame: TelegesisAckMessageEvent [messageId=44]
2019-01-13 09:27:03.078 [TRACE] [gesis.internal.TelegesisFrameHandler] - TELEGESIS: Get Packet
2019-01-13 09:27:03.125 [DEBUG] [gesis.internal.TelegesisFrameHandler] - RX Telegesis Data:SR:00,000B57FFFE92BC0D,2E51
2019-01-13 09:27:03.126 [DEBUG] [gesis.internal.TelegesisFrameHandler] - RX Telegesis: TelegesisRouteRecordMessageEvent [hops=0, remoteAddress=000B57FFFE92BC0D, networkRoute=[11857]]
2019-01-13 09:27:03.128 [DEBUG] [ngle.telegesis.ZigBeeDongleTelegesis] - Unhandled Telegesis Frame: TelegesisRouteRecordMessageEvent [hops=0, remoteAddress=000B57FFFE92BC0D, networkRoute=[11857]]
2019-01-13 09:27:03.128 [TRACE] [gesis.internal.TelegesisFrameHandler] - TELEGESIS: Get Packet
2019-01-13 09:27:03.163 [DEBUG] [gesis.internal.TelegesisFrameHandler] - RX Telegesis Data:SR:00,000B57FFFE92BC0D,2E51
2019-01-13 09:27:03.164 [DEBUG] [gesis.internal.TelegesisFrameHandler] - RX Telegesis: TelegesisRouteRecordMessageEvent [hops=0, remoteAddress=000B57FFFE92BC0D, networkRoute=[11857]]
2019-01-13 09:27:03.165 [DEBUG] [ngle.telegesis.ZigBeeDongleTelegesis] - Unhandled Telegesis Frame: TelegesisRouteRecordMessageEvent [hops=0, remoteAddress=000B57FFFE92BC0D, networkRoute=[11857]]

... The last packet received before everything goes wrong (for some seconds, not permanently). Node added.

2019-01-13 09:27:03.166 [TRACE] [gesis.internal.TelegesisFrameHandler] - TELEGESIS: Get Packet
2019-01-13 09:27:03.202 [DEBUG] [gesis.internal.TelegesisFrameHandler] - RX Telegesis Data:NEWNODE:FC55,000B57FFFE99EC70,2E51
2019-01-13 09:27:03.204 [DEBUG] [gesis.internal.TelegesisFrameHandler] - RX Telegesis: TelegesisDeviceJoinedNetworkEvent [networkAddress=64597, ieeeAddress=000B57FFFE99EC70, parentAddress=11857]
2019-01-13 09:27:03.205 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - 000B57FFFE99EC70: nodeStatusUpdate - node status is UNSECURED_JOIN, network address is 64597.

... No more RX. No call to "Get Packet". Telegesis thread waiting for lock on NetworkManager in ZigBeeNetworkManager.nodeStatusUpdate.
... Several threads waiting to write the network state file, one doing it.

2019-01-13 09:27:07.558 [DEBUG] [gesis.internal.TelegesisFrameHandler] - TX Telegesis queue: 18
2019-01-13 09:27:07.818 [DEBUG] [gesis.internal.TelegesisFrameHandler] - TELEGESIS Timer: Timeout 0
2019-01-13 09:27:07.822 [TRACE] [gesis.internal.TelegesisFrameHandler] - TELEGESIS Timer: Start
2019-01-13 09:27:07.930 [DEBUG] [gesis.internal.TelegesisFrameHandler] - TX Telegesis queue: 18

... First TX timeout. Queue growing. No TX or RX, still waiting on lock.
... Several threads still waiting to write network state file, a different one writing to it

2019-01-13 09:27:12.655 [DEBUG] [gesis.internal.TelegesisFrameHandler] - TX Telegesis queue: 27
2019-01-13 09:27:12.824 [DEBUG] [gesis.internal.TelegesisFrameHandler] - TELEGESIS Timer: Timeout 1
2019-01-13 09:27:12.828 [TRACE] [gesis.internal.TelegesisFrameHandler] - TELEGESIS Timer: Start
2019-01-13 09:27:12.912 [DEBUG] [gesis.internal.TelegesisFrameHandler] - TX Telegesis queue: 27

... Second TX timeout. Queue growing. No TX or RX, still waiting on lock. 
... Several threads still waiting to write network state file, a different one writing to it

2019-01-13 09:27:17.830 [DEBUG] [gesis.internal.TelegesisFrameHandler] - TELEGESIS Timer: Timeout 2
2019-01-13 09:27:17.831 [DEBUG] [gesis.internal.TelegesisFrameHandler] - Error: number of retries exceeded [3].

... Third TX timeout. No TX or RX, still waiting on lock. Several threads still waiting to write network state file, a different one writing to it
... Network state should be changed to OFFLINE, but this does not happen yet: the update event for this also ends up waiting on the same NetworkManager lock

2019-01-13 09:27:40.416 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - Network state is updated to OFFLINE
2019-01-13 09:27:40.418 [DEBUG] [pp.discovery.ZigBeeNetworkDiscoverer] - 000B57FFFE99EC70: Device status updated. NWK=64597

... Quite a while later, the writing race finishes; a couple of threads are still writing, but there is no competition now.
... Network state is changed to OFFLINE (this event can now go through, 23 seconds after it was produced)

2019-01-13 09:27:40.418 [TRACE] [gesis.internal.TelegesisFrameHandler] - TELEGESIS: Get Packet

... But now everything is working fine again. The Telegesis frame handler comes back to life after 37 seconds without TX or RX, having waited on the lock to notify the last RX frame received

2019-01-13 09:27:40.419 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - 000B57FFFE99EC70: Node 64597 added to the network

... The pending node added event

2019-01-13 09:27:40.644 [DEBUG] [gesis.internal.TelegesisFrameHandler] - RX Telegesis Data:NEWNODE:FC55,000B57FFFE99EC70,2E51
2019-01-13 09:27:40.649 [DEBUG] [gesis.internal.TelegesisFrameHandler] - RX Telegesis: TelegesisDeviceJoinedNetworkEvent [networkAddress=64597, ieeeAddress=000B57FFFE99EC70, parentAddress=11857]
2019-01-13 09:27:40.655 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - 000B57FFFE99EC70: nodeStatusUpdate - node status is UNSECURED_JOIN, network address is 64597.
2019-01-13 09:27:40.729 [INFO ] [g.discovery.internal.PersistentInbox] - Added new thing 'zigbee:device:04000AB3:000b57fffe99ec70' to inbox.
2019-01-13 09:27:40.703 [TRACE] [gesis.internal.TelegesisFrameHandler] - TELEGESIS: Get Packet
2019-01-13 09:27:40.728 [DEBUG] [iscovery.ZigBeeNodeServiceDiscoverer] - 000B57FFFE99EC70: Node SVC Discovery: scheduled [NWK_ADDRESS, NEIGHBORS, NODE_DESCRIPTOR, POWER_DESCRIPTOR, ACTIVE_ENDPOINTS]

... Everything back to normal, except, of course, that the Telegesis dongle status is OFFLINE (COMMUNICATION_ERROR)

I think there are two issues here:

  • The root cause (in ZSS): should event notification synchronize on "this" (the NetworkManager), together with the serialization code? And if so, should the Telegesis frame handler then have to wait for the frame-received events to be notified before trying to read again? There might be a good reason for it (I do not know the impact of not synchronizing event updates and serialization together).
  • The triggering cause (in the binding): I think parallel writes to the network state file should be skipped. This is a costly (and useless, since only the first and last would suffice) operation, and not good for the life of the SD card on some systems anyway. Would it make sense to sort of "latch" the writes, so that a first write is always accepted and concurrent writes are skipped when the write backlog is bigger than 1 thread (so that the last one of a parallel write burst is also taken care of)? A rough sketch of what I have in mind follows this list.
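
To make that second point concrete, here is a minimal sketch of the kind of "latch" I mean (the class and names are purely illustrative, not actual binding code):

import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch only: at most one write runs, at most one more waits behind it,
// and anything beyond that is skipped, because the waiting write will already capture
// the latest network state when it finally runs.
public class LatchedStateWriter {
    private final AtomicInteger backlog = new AtomicInteger(0);
    private final Object writeLock = new Object();

    public void requestSave(Runnable costlyWrite) {
        if (backlog.incrementAndGet() > 2) {
            // One write is running and another is already queued: skip this one.
            backlog.decrementAndGet();
            return;
        }
        synchronized (writeLock) {
            try {
                costlyWrite.run(); // the actual (slow) network state file write
            } finally {
                backlog.decrementAndGet();
            }
        }
    }
}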

@chris, would you mind looking at the ZSS side, if possible and if you consider this a bug? I can implement the change in the binding, as it would be an improvement anyway, regardless of this issue.

Again, sorry for the long and dense "possible bug" report :slight_smile:

Pedro

Can you point me to the exact point you mean? There are a few notifications that are sent, but in general the notifications are sent via separate threads to avoid the possibility of blocking notifications due to a single handler. If this is where you mean, then synchronising will not make any difference.

In general, the link between the transport layer and the network layer should be decoupled - there should not be any blocking here, with decoupling handled through queuing (on the transmit side) and multiple threads on the receive side. I could look to implement a queue on the receive side as well which might also help reduce threads.

I want to revisit the whole serialisation to try and make it more robust as we are now seeing people lose the XML file. I'm not sure what has started to trigger this, but I'm thinking about splitting the XML into separate files per device. This will help in a number of ways - it reduces the time taken to write, and if a file is lost, it's only one device and not the complete network.

This does require some thought though since currently the ZSS libs define this interface for serialising the network. Probably (well, maybe!) it can be removed and we could just use the node updated listener to write the node data to disk.
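
Very roughly, what I have in mind is something like this (purely a sketch - the helper and file layout are made up, and it assumes the node listener also has nodeUpdated/nodeRemoved callbacks alongside nodeAdded):

import java.io.File;

import com.zsmartsystems.zigbee.ZigBeeNetworkNodeListener;
import com.zsmartsystems.zigbee.ZigBeeNode;

// Sketch only: persist each node to its own file from the node listener,
// instead of serialising the whole network in one go.
public class PerNodeStateSerializer implements ZigBeeNetworkNodeListener {
    private final File folder;

    public PerNodeStateSerializer(File folder) {
        this.folder = folder;
    }

    @Override
    public void nodeAdded(ZigBeeNode node) {
        writeNodeFile(node);
    }

    @Override
    public void nodeUpdated(ZigBeeNode node) {
        writeNodeFile(node);
    }

    @Override
    public void nodeRemoved(ZigBeeNode node) {
        new File(folder, node.getIeeeAddress() + ".xml").delete();
    }

    private void writeNodeFile(ZigBeeNode node) {
        // Hypothetical helper: serialise just this node (e.g. via XStream) to
        // <folder>/<ieeeAddress>.xml - a much smaller and faster write than the
        // full network file.
    }
}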

Regarding your proposal though, I'm in general ok with the idea, and we could also envisage an extension of this. In the JsonDB implementation, I have a holdoff - so writes are deferred by X milliseconds to avoid this issue. There is a retriggerable timer used so that if there are multiple writes within this short time, then only the last one will win.
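
For reference, the general pattern is something along these lines (just a sketch of the idea, not the actual JsonDB code; the 500 ms holdoff is an arbitrary example):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Sketch of a retriggerable holdoff timer: every save request (re)starts the timer,
// so a burst of requests results in a single write once things go quiet.
public class DeferredWriter {
    private static final long HOLDOFF_MS = 500;

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private ScheduledFuture<?> pending;

    public synchronized void requestSave(Runnable write) {
        if (pending != null) {
            pending.cancel(false); // retrigger: drop the previously scheduled write
        }
        pending = scheduler.schedule(write, HOLDOFF_MS, TimeUnit.MILLISECONDS);
    }
}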

Sorry, @chris, I meant the addNode / updateNode / removeNode methods in NetworkManager, so only add/update/remove node events get synchronized on the NetworkManager, not every kind of event.

As you say, events are notified via the NotificationService, but before "queuing" the event these methods synchronize on (this), e.g. in addNode:

public void addNode(final ZigBeeNode node) {
    if (node == null) {
        return;
    }

    logger.debug("{}: Node {} added to the network", node.getIeeeAddress(), node.getNetworkAddress());

    synchronized (networkNodes) {
        // Don't add if the node is already known
        // We especially don't want to notify listeners
        if (networkNodes.containsKey(node.getIeeeAddress())) {
            updateNode(node);
            return;
        }
        networkNodes.put(node.getIeeeAddress(), node);
    }

    synchronized (this) {
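        // Note (cf. the thread dumps): both the listener notification and the network
        // state serialization below run while holding this lock - the same lock that
        // nodeStatusUpdate() needs, so the Telegesis frame handler blocks here.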
        if (networkState != ZigBeeTransportState.ONLINE) {
            return;
        }

        for (final ZigBeeNetworkNodeListener listener : nodeListeners) {
            NotificationService.execute(new Runnable() {
                @Override
                public void run() {
                    listener.nodeAdded(node);
                }
            });
        }

        if (networkStateSerializer != null) {
            networkStateSerializer.serialize(this);
        }
    }
}

So this is where the waiting is occurring (I did not include these threads in the previous post):

"pool-25-thread-4086" Id=58294 in BLOCKED on lock=org.openhab.binding.zigbee.internal.ZigBeeNetworkStateSerializerImpl@5a8482
     owned by ESH-discovery-3157 Id=58405
    at org.openhab.binding.zigbee.internal.ZigBeeNetworkStateSerializerImpl.serialize(ZigBeeNetworkStateSerializerImpl.java:130)
    at com.zsmartsystems.zigbee.ZigBeeNetworkManager.addNode(ZigBeeNetworkManager.java:1296)
      - locked com.zsmartsystems.zigbee.ZigBeeNetworkManager@d4c7bb
    at com.zsmartsystems.zigbee.app.discovery.ZigBeeNetworkDiscoverer.addNode(ZigBeeNetworkDiscoverer.java:402)
    at com.zsmartsystems.zigbee.app.discovery.ZigBeeNetworkDiscoverer.deviceStatusUpdate(ZigBeeNetworkDiscoverer.java:172)
    at com.zsmartsystems.zigbee.ZigBeeNetworkManager$1.run(ZigBeeNetworkManager.java:877)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

    Locked synchronizers: count = 1
      - java.util.concurrent.ThreadPoolExecutor$Worker@714b7c

"ESH-discovery-3157" Id=58405 in RUNNABLE
    at java.lang.Class.isAssignableFrom(Native Method)
    at sun.reflect.UnsafeFieldAccessorImpl.ensureObj(UnsafeFieldAccessorImpl.java:57)
    at sun.reflect.UnsafeQualifiedObjectFieldAccessorImpl.get(UnsafeQualifiedObjectFieldAccessorImpl.java:38)
    at java.lang.reflect.Field.get(Field.java:393)
    at com.thoughtworks.xstream.converters.reflection.PureJavaReflectionProvider.visitSerializableFields(PureJavaReflectionProvider.java:136)
    ... (several more lines here)
    at org.openhab.binding.zigbee.internal.ZigBeeNetworkStateSerializerImpl.serialize(ZigBeeNetworkStateSerializerImpl.java:148)
      - locked org.openhab.binding.zigbee.internal.ZigBeeNetworkStateSerializerImpl@5a8482
    at org.openhab.binding.zigbee.handler.ZigBeeCoordinatorHandler.serializeNetwork(ZigBeeCoordinatorHandler.java:920)
    at org.openhab.binding.zigbee.discovery.ZigBeeDiscoveryService$2.run(ZigBeeDiscoveryService.java:247)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

    Locked synchronizers: count = 1
      - java.util.concurrent.ThreadPoolExecutor$Worker@733dd6

Pedro

@puzzle-star firstly, thanks for the nice analysis. I will have a think about this - I'm not completely sure that fixing this in the binding is the best thing, since the problem is still there in the library for other users/systems.

Out of interest do you know what is causing the actual delays? It looks from the log like it's delaying for around 5 seconds - is it the fact that you have a large network, and writing the XML is slow, or is there another reason?

I think there are a couple of issues that need to be resolved - the nodeStatusUpdate callback from the transport should be decoupled so that the transport can't get delayed like this. That would probably fix your problem with the Telegesis handler, but still leaves other issues.

We then need to solve the serialisation - we could probably do what you suggested (i.e. don't call the serialise method when we are already serialising). However, I really wonder about changing the serialisation system to serialise each node separately rather than the whole network. This would have a couple of benefits -:

  • It would significantly reduce the serialisation time (maybe this also solves your problem if the root cause of the delay is the serialisation time).
  • If there's a problem, we don't lose the whole network, just one device

I'd welcome any thoughts you may have.

Hi @chris,

Agreed. As I commented, the root cause seems to be in the ZSS, but avoiding the parallel network state writes in the binding also has some other benefits, even though the right thing would be, as you say, writing one XML file per node.

I cut the logs down to just the relevant parts needed to spot the issue, so between the (…) sections there is a lot of "normal" stuff happening. By "normal" I mean the binding trying to send things to the network, with all TX being queued and no RX. I can send you the full logs if you are interested (one of the cut parts is below).

i.e., for the block:

2019-01-13 09:27:03.205 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - 000B57FFFE99EC70: nodeStatusUpdate - node status is UNSECURED_JOIN, network address is 64597.

... No more RX. No call to "Get Packet". Telegesis thread waiting for lock on NetworkManager in ZigBeeNetworkManager.nodeStatusUpdate.
... Several threads waiting to write the network state file, one doing it.

2019-01-13 09:27:07.558 [DEBUG] [gesis.internal.TelegesisFrameHandler] - TX Telegesis queue: 18

Indeed we have this in between:

2019-01-13 09:27:03.205 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - 000B57FFFE99EC70: nodeStatusUpdate - node status is UNSECURED_JOIN, network address is 64597.
...
2019-01-13 09:27:03.663 [DEBUG] [ng.zigbee.handler.ZigBeeThingHandler] - 000B57FFFE3953C5: Polling...
2019-01-13 09:27:03.665 [DEBUG] [ng.zigbee.handler.ZigBeeThingHandler] - 000B57FFFE3953C5: Polling zigbee:device:04000AB3:000b57fffe3953c5:000B57FFFE3953C5_1_dimmer
2019-01-13 09:27:03.667 [DEBUG] [.zsmartsystems.zigbee.zcl.ZclCluster] - readSync request: ZclAttribute [cluster=ON_OFF, id=0, name=OnOff, dataType=BOOLEAN, lastValue=true, lastReportTime=Sun Jan 13 09:26:00 CET 2019]
2019-01-13 09:27:03.668 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX CMD: ReadAttributesCommand [On/Off: 0/0 -> 18413/1, cluster=0006, TID=71, identifiers=[0]]
2019-01-13 09:27:03.670 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX ZCL: ZclHeader [frameType=ENTIRE_PROFILE_COMMAND, manufacturerSpecific=false, direction=CLIENT_TO_SERVER, disableDefaultResponse=false, manufacturerCode=0, sequenceNumber=113, commandId=0]
2019-01-13 09:27:03.671 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX APS: ZigBeeApsFrame [sourceAddress=0/1, destinationAddress=18413/1, profile=0104, cluster=6, addressMode=DEVICE, radius=31, apsSecurity=false, apsCounter=113, payload=00 71 00 00 00]
2019-01-13 09:27:03.673 [DEBUG] [ngle.telegesis.ZigBeeDongleTelegesis] - Telegesis send: TelegesisSendUnicastCommand [address=18413, sourceEp=0, destEp=1, profileId=260, clusterId=6, messageData=00 71 00 00 00, messageId=null]
2019-01-13 09:27:03.674 [DEBUG] [gesis.internal.TelegesisFrameHandler] - TX Telegesis queue: 3
2019-01-13 09:27:04.124 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Done.
2019-01-13 09:27:04.135 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Start.
2019-01-13 09:27:05.157 [DEBUG] [ng.zigbee.handler.ZigBeeThingHandler] - 000B57FFFEA6A44D: Polling...
2019-01-13 09:27:05.159 [DEBUG] [ng.zigbee.handler.ZigBeeThingHandler] - 000B57FFFEA6A44D: Polling zigbee:device:04000AB3:000b57fffea6a44d:000B57FFFEA6A44D_1_dimmer
2019-01-13 09:27:05.160 [DEBUG] [.zsmartsystems.zigbee.zcl.ZclCluster] - readSync request: ZclAttribute [cluster=ON_OFF, id=0, name=OnOff, dataType=BOOLEAN, lastValue=false, lastReportTime=Sun Jan 13 09:26:07 CET 2019]
2019-01-13 09:27:05.161 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX CMD: ReadAttributesCommand [On/Off: 0/0 -> 59213/1, cluster=0006, TID=72, identifiers=[0]]
2019-01-13 09:27:05.162 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX ZCL: ZclHeader [frameType=ENTIRE_PROFILE_COMMAND, manufacturerSpecific=false, direction=CLIENT_TO_SERVER, disableDefaultResponse=false, manufacturerCode=0, sequenceNumber=114, commandId=0]
2019-01-13 09:27:05.164 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX APS: ZigBeeApsFrame [sourceAddress=0/1, destinationAddress=59213/1, profile=0104, cluster=6, addressMode=DEVICE, radius=31, apsSecurity=false, apsCounter=114, payload=00 72 00 00 00]
2019-01-13 09:27:05.165 [DEBUG] [ngle.telegesis.ZigBeeDongleTelegesis] - Telegesis send: TelegesisSendUnicastCommand [address=59213, sourceEp=0, destEp=1, profileId=260, clusterId=6, messageData=00 72 00 00 00, messageId=null]
2019-01-13 09:27:05.166 [DEBUG] [gesis.internal.TelegesisFrameHandler] - TX Telegesis queue: 4
2019-01-13 09:27:05.255 [DEBUG] [iscovery.ZigBeeNodeServiceDiscoverer] - 00158D0002367748: Node SVC Discovery: ManagementLqiRequest response CommandResult [TIMEOUT]
2019-01-13 09:27:05.256 [DEBUG] [iscovery.ZigBeeNodeServiceDiscoverer] - 00158D0002367748: Node SVC Discovery: request NEIGHBORS failed. Retry 2, wait 2309ms before retry.
2019-01-13 09:27:05.257 [DEBUG] [zigbee.transaction.ZigBeeTransaction] - Transaction timeout: NetworkAddressRequest [0/0 -> 65535/0, cluster=0000, TID=C6, ieeeAddr=000B57FFFE9B33C2, requestType=0, startIndex=0]
2019-01-13 09:27:05.258 [DEBUG] [zigbee.transaction.ZigBeeTransaction] - Transaction timeout: NetworkAddressRequest [0/0 -> 65535/0, cluster=0000, TID=C9, ieeeAddr=7CB03EAA00A34055, requestType=0, startIndex=0]
2019-01-13 09:27:05.260 [DEBUG] [zigbee.transaction.ZigBeeTransaction] - Transaction timeout: ReadAttributesCommand [IAS Zone: 0/0 -> 22328/1, cluster=0500, TID=CE, identifiers=[1]]
2019-01-13 09:27:05.261 [DEBUG] [zigbee.transaction.ZigBeeTransaction] - Transaction timeout: ReadAttributesCommand [On/Off: 0/0 -> 29801/3, cluster=0006, TID=DB, identifiers=[0]]
2019-01-13 09:27:05.262 [DEBUG] [iscovery.ZigBeeNodeServiceDiscoverer] - 000D6F001113A331: Node SVC Discovery: running
2019-01-13 09:27:05.264 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX CMD: ManagementLqiRequest [0/0 -> 0/0, cluster=0031, TID=73, startIndex=0]
2019-01-13 09:27:05.265 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX APS: ZigBeeApsFrame [sourceAddress=0/0, destinationAddress=0/0, profile=0000, cluster=49, addressMode=DEVICE, radius=31, apsSecurity=false, apsCounter=115, payload=00 00]
2019-01-13 09:27:05.267 [DEBUG] [ngle.telegesis.ZigBeeDongleTelegesis] - Telegesis send: TelegesisSendUnicastCommand [address=0, sourceEp=0, destEp=0, profileId=0, clusterId=49, messageData=00 00, messageId=null]
2019-01-13 09:27:05.268 [DEBUG] [gesis.internal.TelegesisFrameHandler] - TX Telegesis queue: 5
2019-01-13 09:27:05.347 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Done.
2019-01-13 09:27:05.358 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Start.
2019-01-13 09:27:05.625 [DEBUG] [iscovery.ZigBeeNodeServiceDiscoverer] - 000B57FFFE382ACE: Node SVC Discovery: ManagementLqiRequest response CommandResult [TIMEOUT]
2019-01-13 09:27:05.627 [DEBUG] [iscovery.ZigBeeNodeServiceDiscoverer] - 000B57FFFE382ACE: Node SVC Discovery: request NEIGHBORS failed. Retry 2, wait 2256ms before retry.
2019-01-13 09:27:05.628 [DEBUG] [iscovery.ZigBeeNodeServiceDiscoverer] - 000B57FFFE9B33C2: Node SVC Discovery: running
2019-01-13 09:27:05.629 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX CMD: NetworkAddressRequest [0/0 -> 65535/0, cluster=0000, TID=74, ieeeAddr=000B57FFFE9B33C2, requestType=0, startIndex=0]
2019-01-13 09:27:05.630 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX APS: ZigBeeApsFrame [sourceAddress=0/0, destinationAddress=65535/0, profile=0000, cluster=0, addressMode=DEVICE, radius=31, apsSecurity=false, apsCounter=116, payload=00 C2 33 9B FE FF 57 0B 00 00 00]
2019-01-13 09:27:05.632 [DEBUG] [ngle.telegesis.ZigBeeDongleTelegesis] - Telegesis send: TelegesisSendMulticastCommand [radius=31, address=65535, sourceEp=0, destEp=0, profileId=0, clusterId=0, messageData=00 C2 33 9B FE FF 57 0B 00 00 00]
2019-01-13 09:27:05.632 [DEBUG] [gesis.internal.TelegesisFrameHandler] - TX Telegesis queue: 6
2019-01-13 09:27:05.723 [DEBUG] [ng.zigbee.handler.ZigBeeThingHandler] - 000B57FFFE9A3F6C: Polling...
2019-01-13 09:27:05.725 [DEBUG] [ng.zigbee.handler.ZigBeeThingHandler] - 000B57FFFE9A3F6C: Polling zigbee:device:04000AB3:000b57fffe9a3f6c:000B57FFFE9A3F6C_1_dimmer
2019-01-13 09:27:05.726 [DEBUG] [.zsmartsystems.zigbee.zcl.ZclCluster] - readSync request: ZclAttribute [cluster=ON_OFF, id=0, name=OnOff, dataType=BOOLEAN, lastValue=false, lastReportTime=Sun Jan 13 09:26:07 CET 2019]
2019-01-13 09:27:05.728 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX CMD: ReadAttributesCommand [On/Off: 0/0 -> 22015/1, cluster=0006, TID=75, identifiers=[0]]
2019-01-13 09:27:05.729 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX ZCL: ZclHeader [frameType=ENTIRE_PROFILE_COMMAND, manufacturerSpecific=false, direction=CLIENT_TO_SERVER, disableDefaultResponse=false, manufacturerCode=0, sequenceNumber=117, commandId=0]
2019-01-13 09:27:05.730 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX APS: ZigBeeApsFrame [sourceAddress=0/1, destinationAddress=22015/1, profile=0104, cluster=6, addressMode=DEVICE, radius=31, apsSecurity=false, apsCounter=117, payload=00 75 00 00 00]
2019-01-13 09:27:05.732 [DEBUG] [ngle.telegesis.ZigBeeDongleTelegesis] - Telegesis send: TelegesisSendUnicastCommand [address=22015, sourceEp=0, destEp=1, profileId=260, clusterId=6, messageData=00 75 00 00 00, messageId=null]
2019-01-13 09:27:05.733 [DEBUG] [gesis.internal.TelegesisFrameHandler] - TX Telegesis queue: 7
2019-01-13 09:27:06.047 [DEBUG] [ng.zigbee.handler.ZigBeeThingHandler] - 000B57FFFEA5A2BD: Polling...
2019-01-13 09:27:06.048 [DEBUG] [ng.zigbee.handler.ZigBeeThingHandler] - 000B57FFFEA5A2BD: Polling zigbee:device:04000AB3:000b57fffea5a2bd:000B57FFFEA5A2BD_1_dimmer
2019-01-13 09:27:06.049 [DEBUG] [.zsmartsystems.zigbee.zcl.ZclCluster] - readSync request: ZclAttribute [cluster=ON_OFF, id=0, name=OnOff, dataType=BOOLEAN, lastValue=false, lastReportTime=Sun Jan 13 09:26:05 CET 2019]
2019-01-13 09:27:06.051 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX CMD: ReadAttributesCommand [On/Off: 0/0 -> 30505/1, cluster=0006, TID=76, identifiers=[0]]
2019-01-13 09:27:06.052 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX ZCL: ZclHeader [frameType=ENTIRE_PROFILE_COMMAND, manufacturerSpecific=false, direction=CLIENT_TO_SERVER, disableDefaultResponse=false, manufacturerCode=0, sequenceNumber=118, commandId=0]
2019-01-13 09:27:06.053 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX APS: ZigBeeApsFrame [sourceAddress=0/1, destinationAddress=30505/1, profile=0104, cluster=6, addressMode=DEVICE, radius=31, apsSecurity=false, apsCounter=118, payload=00 76 00 00 00]
2019-01-13 09:27:06.054 [DEBUG] [ngle.telegesis.ZigBeeDongleTelegesis] - Telegesis send: TelegesisSendUnicastCommand [address=30505, sourceEp=0, destEp=1, profileId=260, clusterId=6, messageData=00 76 00 00 00, messageId=null]
2019-01-13 09:27:06.055 [DEBUG] [gesis.internal.TelegesisFrameHandler] - TX Telegesis queue: 8
2019-01-13 09:27:06.465 [DEBUG] [ng.zigbee.handler.ZigBeeThingHandler] - 000B57FFFE40F80D: Polling...
2019-01-13 09:27:06.466 [DEBUG] [ng.zigbee.handler.ZigBeeThingHandler] - 000B57FFFE40F80D: Polling zigbee:device:04000AB3:000b57fffe40f80d:000B57FFFE40F80D_1_dimmer
2019-01-13 09:27:06.468 [DEBUG] [.zsmartsystems.zigbee.zcl.ZclCluster] - readSync request: ZclAttribute [cluster=ON_OFF, id=0, name=OnOff, dataType=BOOLEAN, lastValue=false, lastReportTime=Sun Jan 13 09:26:09 CET 2019]
2019-01-13 09:27:06.469 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX CMD: ReadAttributesCommand [On/Off: 0/0 -> 34756/1, cluster=0006, TID=77, identifiers=[0]]
2019-01-13 09:27:06.470 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX ZCL: ZclHeader [frameType=ENTIRE_PROFILE_COMMAND, manufacturerSpecific=false, direction=CLIENT_TO_SERVER, disableDefaultResponse=false, manufacturerCode=0, sequenceNumber=119, commandId=0]
2019-01-13 09:27:06.471 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX APS: ZigBeeApsFrame [sourceAddress=0/1, destinationAddress=34756/1, profile=0104, cluster=6, addressMode=DEVICE, radius=31, apsSecurity=false, apsCounter=119, payload=00 77 00 00 00]
2019-01-13 09:27:06.472 [DEBUG] [ngle.telegesis.ZigBeeDongleTelegesis] - Telegesis send: TelegesisSendUnicastCommand [address=34756, sourceEp=0, destEp=1, profileId=260, clusterId=6, messageData=00 77 00 00 00, messageId=null]
2019-01-13 09:27:06.473 [DEBUG] [gesis.internal.TelegesisFrameHandler] - TX Telegesis queue: 9
2019-01-13 09:27:06.506 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Done.
2019-01-13 09:27:06.518 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Start.
2019-01-13 09:27:07.200 [DEBUG] [scovery.ZigBeeNodePropertyDiscoverer] - 00158D0002379093: Manufacturer request failed (retry 0)
2019-01-13 09:27:07.202 [DEBUG] [.zsmartsystems.zigbee.zcl.ZclCluster] - readSync request: ZclAttribute [cluster=BASIC, id=4, name=ManufacturerName, dataType=CHARACTER_STRING, lastValue=null]
2019-01-13 09:27:07.203 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX CMD: ReadAttributesCommand [Basic: 0/0 -> 7442/1, cluster=0000, TID=78, identifiers=[4]]
2019-01-13 09:27:07.204 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX ZCL: ZclHeader [frameType=ENTIRE_PROFILE_COMMAND, manufacturerSpecific=false, direction=CLIENT_TO_SERVER, disableDefaultResponse=false, manufacturerCode=0, sequenceNumber=120, commandId=0]
2019-01-13 09:27:07.206 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX APS: ZigBeeApsFrame [sourceAddress=0/1, destinationAddress=7442/1, profile=0104, cluster=0, addressMode=DEVICE, radius=31, apsSecurity=false, apsCounter=120, payload=00 78 00 04 00]
2019-01-13 09:27:07.207 [DEBUG] [ngle.telegesis.ZigBeeDongleTelegesis] - Telegesis send: TelegesisSendUnicastCommand [address=7442, sourceEp=0, destEp=1, profileId=260, clusterId=0, messageData=00 78 00 04 00, messageId=null]
2019-01-13 09:27:07.208 [DEBUG] [gesis.internal.TelegesisFrameHandler] - TX Telegesis queue: 10
2019-01-13 09:27:07.261 [DEBUG] [ng.zigbee.handler.ZigBeeThingHandler] - 000B57FFFE998A18: Polling...
2019-01-13 09:27:07.312 [DEBUG] [scovery.ZigBeeNodePropertyDiscoverer] - 00158D0002367748: Manufacturer request failed (retry 0)
2019-01-13 09:27:07.312 [DEBUG] [scovery.ZigBeeNodePropertyDiscoverer] - 000B57FFFE122440: Manufacturer request failed (retry 0)
2019-01-13 09:27:07.313 [DEBUG] [.zsmartsystems.zigbee.zcl.ZclCluster] - readSync request: ZclAttribute [cluster=BASIC, id=4, name=ManufacturerName, dataType=CHARACTER_STRING, lastValue=null]
2019-01-13 09:27:07.313 [DEBUG] [.zsmartsystems.zigbee.zcl.ZclCluster] - readSync request: ZclAttribute [cluster=BASIC, id=4, name=ManufacturerName, dataType=CHARACTER_STRING, lastValue=null]
2019-01-13 09:27:07.263 [DEBUG] [ng.zigbee.handler.ZigBeeThingHandler] - 000B57FFFE998A18: Polling zigbee:device:04000AB3:000b57fffe998a18:000B57FFFE998A18_1_dimmer
2019-01-13 09:27:07.315 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX CMD: ReadAttributesCommand [Basic: 0/0 -> 6329/1, cluster=0000, TID=79, identifiers=[4]]
2019-01-13 09:27:07.315 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX CMD: ReadAttributesCommand [Basic: 0/0 -> 30474/1, cluster=0000, TID=7A, identifiers=[4]]
2019-01-13 09:27:07.316 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX ZCL: ZclHeader [frameType=ENTIRE_PROFILE_COMMAND, manufacturerSpecific=false, direction=CLIENT_TO_SERVER, disableDefaultResponse=false, manufacturerCode=0, sequenceNumber=122, commandId=0]
2019-01-13 09:27:07.316 [DEBUG] [.zsmartsystems.zigbee.zcl.ZclCluster] - readSync request: ZclAttribute [cluster=ON_OFF, id=0, name=OnOff, dataType=BOOLEAN, lastValue=false, lastReportTime=Sun Jan 13 09:26:08 CET 2019]
2019-01-13 09:27:07.316 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX ZCL: ZclHeader [frameType=ENTIRE_PROFILE_COMMAND, manufacturerSpecific=false, direction=CLIENT_TO_SERVER, disableDefaultResponse=false, manufacturerCode=0, sequenceNumber=121, commandId=0]
2019-01-13 09:27:07.317 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX APS: ZigBeeApsFrame [sourceAddress=0/1, destinationAddress=30474/1, profile=0104, cluster=0, addressMode=DEVICE, radius=31, apsSecurity=false, apsCounter=121, payload=00 7A 00 04 00]
2019-01-13 09:27:07.318 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX APS: ZigBeeApsFrame [sourceAddress=0/1, destinationAddress=6329/1, profile=0104, cluster=0, addressMode=DEVICE, radius=31, apsSecurity=false, apsCounter=122, payload=00 79 00 04 00]
2019-01-13 09:27:07.319 [DEBUG] [ngle.telegesis.ZigBeeDongleTelegesis] - Telegesis send: TelegesisSendUnicastCommand [address=30474, sourceEp=0, destEp=1, profileId=260, clusterId=0, messageData=00 7A 00 04 00, messageId=null]
2019-01-13 09:27:07.320 [DEBUG] [gesis.internal.TelegesisFrameHandler] - TX Telegesis queue: 11
2019-01-13 09:27:07.320 [DEBUG] [ngle.telegesis.ZigBeeDongleTelegesis] - Telegesis send: TelegesisSendUnicastCommand [address=6329, sourceEp=0, destEp=1, profileId=260, clusterId=0, messageData=00 79 00 04 00, messageId=null]
2019-01-13 09:27:07.319 [DEBUG] [.zsmartsystems.zigbee.zcl.ZclCluster] - readSync request: ZclAttribute [cluster=LEVEL_CONTROL, id=0, name=CurrentLevel, dataType=UNSIGNED_8_BIT_INTEGER, lastValue=254, lastReportTime=Sun Jan 13 09:25:56 CET 2019]
2019-01-13 09:27:07.321 [DEBUG] [gesis.internal.TelegesisFrameHandler] - TX Telegesis queue: 12
2019-01-13 09:27:07.322 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX CMD: ReadAttributesCommand [On/Off: 0/0 -> 27451/1, cluster=0006, TID=7B, identifiers=[0]]
2019-01-13 09:27:07.323 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX ZCL: ZclHeader [frameType=ENTIRE_PROFILE_COMMAND, manufacturerSpecific=false, direction=CLIENT_TO_SERVER, disableDefaultResponse=false, manufacturerCode=0, sequenceNumber=123, commandId=0]
2019-01-13 09:27:07.323 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX CMD: ReadAttributesCommand [Level Control: 0/0 -> 29253/1, cluster=0008, TID=7C, identifiers=[0]]
2019-01-13 09:27:07.324 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX APS: ZigBeeApsFrame [sourceAddress=0/1, destinationAddress=27451/1, profile=0104, cluster=6, addressMode=DEVICE, radius=31, apsSecurity=false, apsCounter=123, payload=00 7B 00 00 00]
2019-01-13 09:27:07.326 [DEBUG] [ngle.telegesis.ZigBeeDongleTelegesis] - Telegesis send: TelegesisSendUnicastCommand [address=27451, sourceEp=0, destEp=1, profileId=260, clusterId=6, messageData=00 7B 00 00 00, messageId=null]
2019-01-13 09:27:07.325 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX ZCL: ZclHeader [frameType=ENTIRE_PROFILE_COMMAND, manufacturerSpecific=false, direction=CLIENT_TO_SERVER, disableDefaultResponse=false, manufacturerCode=0, sequenceNumber=124, commandId=0]
2019-01-13 09:27:07.327 [DEBUG] [gesis.internal.TelegesisFrameHandler] - TX Telegesis queue: 13
2019-01-13 09:27:07.327 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX APS: ZigBeeApsFrame [sourceAddress=0/1, destinationAddress=29253/1, profile=0104, cluster=8, addressMode=DEVICE, radius=31, apsSecurity=false, apsCounter=124, payload=00 7C 00 00 00]
2019-01-13 09:27:07.329 [DEBUG] [ngle.telegesis.ZigBeeDongleTelegesis] - Telegesis send: TelegesisSendUnicastCommand [address=29253, sourceEp=0, destEp=1, profileId=260, clusterId=8, messageData=00 7C 00 00 00, messageId=null]
2019-01-13 09:27:07.331 [DEBUG] [gesis.internal.TelegesisFrameHandler] - TX Telegesis queue: 14
2019-01-13 09:27:07.355 [DEBUG] [iscovery.ZigBeeNodeServiceDiscoverer] - 000B57FFFE9A3F6C: Node SVC Discovery: ManagementLqiRequest response CommandResult [TIMEOUT]
2019-01-13 09:27:07.358 [DEBUG] [iscovery.ZigBeeNodeServiceDiscoverer] - 000B57FFFE9A3F6C: Node SVC Discovery: request NEIGHBORS failed. Retry 1, wait 2190ms before retry.
2019-01-13 09:27:07.359 [DEBUG] [iscovery.ZigBeeNodeServiceDiscoverer] - 7CB03EAA00A34067: Node SVC Discovery: running
2019-01-13 09:27:07.360 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX CMD: ManagementLqiRequest [0/0 -> 60662/0, cluster=0031, TID=7D, startIndex=0]
2019-01-13 09:27:07.362 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX APS: ZigBeeApsFrame [sourceAddress=0/0, destinationAddress=60662/0, profile=0000, cluster=49, addressMode=DEVICE, radius=31, apsSecurity=false, apsCounter=125, payload=00 00]
2019-01-13 09:27:07.363 [DEBUG] [ngle.telegesis.ZigBeeDongleTelegesis] - Telegesis send: TelegesisSendUnicastCommand [address=60662, sourceEp=0, destEp=0, profileId=0, clusterId=49, messageData=00 00, messageId=null]
2019-01-13 09:27:07.365 [DEBUG] [gesis.internal.TelegesisFrameHandler] - TX Telegesis queue: 15
2019-01-13 09:27:07.448 [DEBUG] [scovery.ZigBeeNodePropertyDiscoverer] - 00158D000245A85C: Manufacturer request failed (retry 0)
2019-01-13 09:27:07.450 [DEBUG] [.zsmartsystems.zigbee.zcl.ZclCluster] - readSync request: ZclAttribute [cluster=BASIC, id=4, name=ManufacturerName, dataType=CHARACTER_STRING, lastValue=null]
2019-01-13 09:27:07.451 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX CMD: ReadAttributesCommand [Basic: 0/0 -> 4824/1, cluster=0000, TID=7E, identifiers=[4]]
2019-01-13 09:27:07.452 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX ZCL: ZclHeader [frameType=ENTIRE_PROFILE_COMMAND, manufacturerSpecific=false, direction=CLIENT_TO_SERVER, disableDefaultResponse=false, manufacturerCode=0, sequenceNumber=126, commandId=0]
2019-01-13 09:27:07.453 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX APS: ZigBeeApsFrame [sourceAddress=0/1, destinationAddress=4824/1, profile=0104, cluster=0, addressMode=DEVICE, radius=31, apsSecurity=false, apsCounter=126, payload=00 7E 00 04 00]
2019-01-13 09:27:07.454 [DEBUG] [ngle.telegesis.ZigBeeDongleTelegesis] - Telegesis send: TelegesisSendUnicastCommand [address=4824, sourceEp=0, destEp=1, profileId=260, clusterId=0, messageData=00 7E 00 04 00, messageId=null]
2019-01-13 09:27:07.456 [DEBUG] [gesis.internal.TelegesisFrameHandler] - TX Telegesis queue: 16
2019-01-13 09:27:07.525 [DEBUG] [scovery.ZigBeeNodePropertyDiscoverer] - 00158D00022EC46A: Manufacturer request failed (retry 0)
2019-01-13 09:27:07.527 [DEBUG] [.zsmartsystems.zigbee.zcl.ZclCluster] - readSync request: ZclAttribute [cluster=BASIC, id=4, name=ManufacturerName, dataType=CHARACTER_STRING, lastValue=null]
2019-01-13 09:27:07.529 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX CMD: ReadAttributesCommand [Basic: 0/0 -> 40475/1, cluster=0000, TID=7F, identifiers=[4]]
2019-01-13 09:27:07.530 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX ZCL: ZclHeader [frameType=ENTIRE_PROFILE_COMMAND, manufacturerSpecific=false, direction=CLIENT_TO_SERVER, disableDefaultResponse=false, manufacturerCode=0, sequenceNumber=127, commandId=0]
2019-01-13 09:27:07.532 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX APS: ZigBeeApsFrame [sourceAddress=0/1, destinationAddress=40475/1, profile=0104, cluster=0, addressMode=DEVICE, radius=31, apsSecurity=false, apsCounter=127, payload=00 7F 00 04 00]
2019-01-13 09:27:07.533 [DEBUG] [ngle.telegesis.ZigBeeDongleTelegesis] - Telegesis send: TelegesisSendUnicastCommand [address=40475, sourceEp=0, destEp=1, profileId=260, clusterId=0, messageData=00 7F 00 04 00, messageId=null]
2019-01-13 09:27:07.534 [DEBUG] [gesis.internal.TelegesisFrameHandler] - TX Telegesis queue: 17
2019-01-13 09:27:07.549 [DEBUG] [scovery.ZigBeeNodePropertyDiscoverer] - 00158D0002371F19: Manufacturer request failed (retry 0)
2019-01-13 09:27:07.550 [DEBUG] [.zsmartsystems.zigbee.zcl.ZclCluster] - readSync request: ZclAttribute [cluster=BASIC, id=4, name=ManufacturerName, dataType=CHARACTER_STRING, lastValue=null]
2019-01-13 09:27:07.552 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX CMD: ReadAttributesCommand [Basic: 0/0 -> 22328/1, cluster=0000, TID=80, identifiers=[4]]
2019-01-13 09:27:07.553 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX ZCL: ZclHeader [frameType=ENTIRE_PROFILE_COMMAND, manufacturerSpecific=false, direction=CLIENT_TO_SERVER, disableDefaultResponse=false, manufacturerCode=0, sequenceNumber=128, commandId=0]
2019-01-13 09:27:07.555 [DEBUG] [tsystems.zigbee.ZigBeeNetworkManager] - TX APS: ZigBeeApsFrame [sourceAddress=0/1, destinationAddress=22328/1, profile=0104, cluster=0, addressMode=DEVICE, radius=31, apsSecurity=false, apsCounter=128, payload=00 80 00 04 00]
2019-01-13 09:27:07.557 [DEBUG] [ngle.telegesis.ZigBeeDongleTelegesis] - Telegesis send: TelegesisSendUnicastCommand [address=22328, sourceEp=0, destEp=1, profileId=260, clusterId=0, messageData=00 80 00 04 00, messageId=null]
...
2019-01-13 09:27:07.558 [DEBUG] [gesis.internal.TelegesisFrameHandler] - TX Telegesis queue: 18
2019-01-13 09:27:07.736 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Done.
2019-01-13 09:27:07.748 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Start.
2019-01-13 09:27:07.818 [DEBUG] [gesis.internal.TelegesisFrameHandler] - TELEGESIS Timer: Timeout 0

I think it is because of a large network (>40 devices), if we can consider that "large" (I do not know what others have), combined with slow writing of the XML: I use an RPi3, but the whole filesystem resides on an encrypted partition on the SD card. The SD card is one of the fast ones, but not fast enough…

When for some reason there are several writes in a row (e.g. during a network discovery, where most or all devices receive a nodeUpdate), I see the issue happen: the file is written and rewritten several times. Each individual write seems fast enough, but together they take several seconds.

I have implemented this today and I am testing it. At the very least it will relieve the issue until (if) the ZSS is changed to use a file per node, but it also has some other benefits: since threads are a valuable resource in OH, freeing them immediately when another thread is already waiting to write should improve the latency of busy systems (like mine). Otherwise everything gets blocked while waiting: especially the discovery thread pool, but I have seen other thread pools affected too. I will submit the PR once I have checked it is working OK.

Edit: it seems to work nicely (see the log extract below), so I am submitting the PR and we can continue the discussion there:

> egrep -a "Saving|Skipping|Deferring" /var/log/openhab2/openhab.log
2019-01-19 09:33:56.973 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state
2019-01-19 09:33:56.985 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Start.
2019-01-19 09:33:57.206 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Deferring ZigBee network state file save (another thread already writing)
2019-01-19 09:33:57.760 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 09:33:57.852 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 09:33:58.373 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 09:33:58.409 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 09:33:58.513 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Done.
2019-01-19 09:33:58.518 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Deferring ZigBee network state file save (another thread already writing)
2019-01-19 09:33:58.529 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Start.
2019-01-19 09:33:58.647 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 09:33:58.712 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 09:33:58.718 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 09:33:58.735 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 09:33:58.801 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 09:33:58.854 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 09:33:58.858 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 09:33:58.862 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 09:33:58.883 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 09:33:58.888 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 09:33:58.932 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 09:33:58.941 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 09:33:58.955 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 09:33:58.959 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 09:33:58.961 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 09:33:58.964 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 09:34:00.329 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Done.
2019-01-19 09:34:00.347 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Start.
2019-01-19 09:34:01.712 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Done.

If you like the approach, you could reuse it for the Json DBs.

I think this is the right way to go. But I think it will require at least a full redefinition of the serialization interface in ZSS (and hence how the binding uses it), to allow for different node added / updated / deleted calls, and still keep the possibility of a full network state save.

Pedro

Ok, so I think this is definitely the issue - a slow computer writing what ends up being a very large XML. 40 devices isn't super large, but the XML is probably many MB, and on a slow RPi it is likely taking a few seconds to write.

I'm not sure why we would need to do this in the Json DB? There is already a system in place there to avoid this sort of problem, as the writes are delayed as I mentioned in a previous post yesterday.

I'm not sure I agree. If we just serialise per node, then I think there is no need to also have the ability to serialise the complete network, is there?

@puzzle-star I was wrong about the following statement -:

I thought that I used separate threads for the processing of received messages as I normally like to decouple the data reception from the processing to avoid packet processing delays impacting on the dongle interface. However, it looks like that is not implemented in this case.

I think what I will do is to add the decoupling in ZigBeeNetworkManager so that it adds all notifications from the dongle (be they received messages, state updates, etc.) into a thread queue for processing (probably just a single thread in the queue). This will prevent any issue in the upper layer from affecting the dongle interface.
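
As a shape for it, something along these lines (illustrative only, not the final implementation):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: a single-threaded queue between the dongle and the network layer, so that
// transport callbacks just enqueue work and return, and the serial read loop can
// never be blocked by locks held in the upper layers.
public class TransportNotificationQueue {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    public void enqueue(Runnable notification) {
        worker.execute(notification);
    }

    public void shutdown() {
        worker.shutdown();
    }
}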

I've raised two issues to look at these problems; if you want to discuss there, I'd welcome your input -:

If you look at the log extract below (with my patch applied for testing), each write to the file takes around 1.3 seconds quite consistently (so it is not dependent on external load), and other than OH activity my system is quite idle in terms of load. At one point there are 24 parallel write attempts within a second, which would have blocked the system for >30 seconds :slight_smile:, and there are a few more just after.

This must be quite a standard setup for people using RPis and the like.

#> tail -f /var/log/openhab2/openhab.log | egrep -a --line-buffered "Saving|Skipping|Deferring|Resuming"
2019-01-19 10:43:09.129 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state
2019-01-19 10:43:09.159 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Start.
2019-01-19 10:43:10.530 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Done.
2019-01-19 10:43:10.542 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state
2019-01-19 10:43:10.559 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Start.
2019-01-19 10:43:11.961 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Done.
2019-01-19 10:43:11.972 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state
2019-01-19 10:43:11.986 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Start.
2019-01-19 10:43:13.290 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Done.
2019-01-19 10:43:13.302 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state
2019-01-19 10:43:13.314 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Start.
2019-01-19 10:43:14.752 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Done.
2019-01-19 10:43:14.771 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state
2019-01-19 10:43:14.782 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Start.
2019-01-19 10:43:15.249 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Deferring ZigBee network state file save (another thread already writing)
2019-01-19 10:43:16.819 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Done.
2019-01-19 10:43:16.822 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Resuming ZigBee network state file save (previous thread finished)
2019-01-19 10:43:16.833 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Start.
2019-01-19 10:43:16.849 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Deferring ZigBee network state file save (another thread already writing)
2019-01-19 10:43:18.207 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Done.
2019-01-19 10:43:18.209 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Resuming ZigBee network state file save (previous thread finished)
2019-01-19 10:43:18.220 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Deferring ZigBee network state file save (another thread already writing)
2019-01-19 10:43:18.222 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Start.
2019-01-19 10:43:19.163 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:19.186 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:19.199 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:19.210 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:19.222 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:19.233 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:19.619 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Done.
2019-01-19 10:43:19.621 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Resuming ZigBee network state file save (previous thread finished)
2019-01-19 10:43:19.627 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Start.
2019-01-19 10:43:19.630 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Deferring ZigBee network state file save (another thread already writing)
2019-01-19 10:43:19.726 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:19.752 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:19.775 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:20.757 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:20.769 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:20.781 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:20.855 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:20.866 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:20.877 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:20.889 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:20.900 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:20.912 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:20.922 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:20.933 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:20.944 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:20.955 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:20.966 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:20.978 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:20.988 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:20.999 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:21.009 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:21.020 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:21.030 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:21.041 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:21.044 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Done.
2019-01-19 10:43:21.045 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Resuming ZigBee network state file save (previous thread finished)
2019-01-19 10:43:21.052 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Deferring ZigBee network state file save (another thread already writing)
2019-01-19 10:43:21.055 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:21.058 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Start.
2019-01-19 10:43:21.066 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:21.077 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:21.096 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:21.107 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:21.119 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:22.346 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Done.
2019-01-19 10:43:22.348 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Resuming ZigBee network state file save (previous thread finished)
2019-01-19 10:43:22.442 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Deferring ZigBee network state file save (another thread already writing)
2019-01-19 10:43:22.442 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Start.
2019-01-19 10:43:23.818 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Done.
2019-01-19 10:43:23.820 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Resuming ZigBee network state file save (previous thread finished)
2019-01-19 10:43:23.830 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Deferring ZigBee network state file save (another thread already writing)
2019-01-19 10:43:23.835 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Start.
2019-01-19 10:43:24.191 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:24.202 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:24.215 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:24.226 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Skipping ZigBee network state save (another thread already waiting)
2019-01-19 10:43:25.166 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Done.
2019-01-19 10:43:25.168 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Resuming ZigBee network state file save (previous thread finished)
2019-01-19 10:43:25.178 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Start.
2019-01-19 10:43:26.599 [DEBUG] [nal.ZigBeeNetworkStateSerializerImpl] - Saving ZigBee network state: Done.

I was just following your comment. No real suggestion here. This would only be necessary if, at the time of shutting down, the scheduler refused new timed events, in which case we could lose saves launched just before (or during) shutdown, but I have not looked at the OH / JsonDB implementation.

I totally agree with you. I still see, anyway, some benefit in preventing parallel writes, as we may still keep the functionality for a full network state save, especially if this is not too invasive in the code. I am submitting the PR with it so that you can have a look, and then decide whether we keep it or not.

The Json DB implementation has the ability to finalise any waiting writes if the system closes while it is waiting. This is clearly needed whatever is done…
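For illustration only, the general pattern behind that (not the JsonDB implementation, which is not shown here) is to cancel any deferred save and run it synchronously when the component shuts down, so nothing queued is lost. A minimal sketch with hypothetical class and method names:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch only -- not the openHAB / JsonDB code.
class DeferredNetworkSaver {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final Runnable save;
    private ScheduledFuture<?> pending;

    DeferredNetworkSaver(Runnable save) {
        this.save = save;
    }

    // Coalesce bursts of updates into a single delayed write.
    synchronized void requestSave() {
        if (pending != null) {
            pending.cancel(false);
        }
        pending = scheduler.schedule(save, 5, TimeUnit.SECONDS);
    }

    // On shutdown, run any still-pending save synchronously so it is not lost.
    synchronized void shutdown() {
        if (pending != null && pending.cancel(false)) {
            save.run();
        }
        scheduler.shutdown();
    }
}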

Again, we are in agreement. I think though that this can be handled directly through an interface for writing at node level. If it is handled in the library (as now), then the library can ensure that the nodes are serialised, and the full network is serialised as needed - a separate "all network serialisation" interface isn't needed.

Ok, I'm not sure if you intend to do this into the binding or the library? As above, my preference is not to solve this issue in the binding. Of course the binding will also need some changes if the interface changes, but the majority of the code should go in the library.

Well, I did it in the binding, but as implemented I think it could perfectly well go into the library.

I would tend to think that in some situations a full network save interface might be necessary, e.g. when adding a new serializer handler, or if the framework / user wants to rebuild the database (e.g. if it gets corrupted for whatever reason).

It can be as simple as iterating through all nodes and serializing them, but I think this interface should still be there and implemented on the network manager. Does it make sense?

Yes, as discussed, I would look to do it in the library. Please note my comments, as I think your implementation is a little complex: this should be easy to implement with just an AtomicBoolean I think.
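As a reference point, the "simple" variant described here can look like the following minimal sketch (all names hypothetical): a single AtomicBoolean lets at most one thread write, and any concurrent caller simply skips, which is what makes it trivial but also what the Deferring/Skipping logic in the log above tries to improve on.

import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: at most one thread writes, everyone else skips.
class SingleLatchSaver {
    private final AtomicBoolean writing = new AtomicBoolean(false);

    void saveNetworkState(Runnable writeFile) {
        if (!writing.compareAndSet(false, true)) {
            return; // another thread is already writing the state file
        }
        try {
            writeFile.run();
        } finally {
            writing.set(false);
        }
    }
}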

I'm afraid I don't understand. There can't be two systems, so you can't add a new serialiser if there is one already running?

If a full save is needed, it can be done node by node through the same interface as a full network serialisation (i.e. a call to a full network save simply loops through each node and serialises each node individually, therefore we can have a single interface to serialise each node). Or am I missing something and you are thinking about a totally new way of working?

No - sorry - I don't see the difference.

If we have an interface ZigBeeSerializer.serialiseNode() then we can use this to loop through all nodes to serialise the complete network. We do not need a second interface ZigBeeSerializer.serialiseNetwork() which is simply going to do the same thing.

Again, maybe I miss something, but as I see it, there only needs to be a single interface.
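For illustration, the single-interface approach reduces a full network save to a loop. A rough sketch, assuming a serialiseNode() method as named above, the library's ZigBeeNode class, and a hypothetical helper that receives the collection of known nodes:

import java.util.Collection;
import com.zsmartsystems.zigbee.ZigBeeNode;

// Hypothetical sketch of the single-interface approach discussed above.
interface ZigBeeSerializer {
    void serialiseNode(ZigBeeNode node);
}

class FullNetworkSave {
    // A "save the whole network" operation needs no second interface method:
    // it simply serialises every known node through serialiseNode().
    static void saveWholeNetwork(ZigBeeSerializer serializer, Collection<ZigBeeNode> nodes) {
        for (ZigBeeNode node : nodes) {
            serializer.serialiseNode(node);
        }
    }
}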

I had in mind there could be more than one, but even like this, there is a public call to set the serialization handler which could change the existing one, not just being called at initialization. The new one would not get the full state saved until all nodes are updated (if ever), so we might still need to go through all nodes.

This would require the (even simple) full save functionality to be implemented (duplicated) in all bindings using the library. Why not provide it in the NetworkManager itself?

Well, I am not thinking of a totally new way of working :slight_smile:, just trying to put in the common library something that can be reused instead of duplicating the same thing everywhere it is needed outside the library.

I just answered in the PR: I had already tried to keep it simple. The solution for a single-thread latch is trivial, as you suggest, but for a two-thread latch it is not (unless I am missing something).
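To make the difference concrete, here is a rough sketch of the two-thread latch, reconstructed only from the Deferring / Skipping / Resuming messages in the log above and not taken from the actual PR code: one flag marks the thread that is writing, a second flag marks the single thread allowed to wait, and every other caller skips.

import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: one writer, at most one waiter, all other callers skip.
class TwoLatchSaver {
    private final AtomicBoolean writing = new AtomicBoolean(false);
    private final AtomicBoolean waiting = new AtomicBoolean(false);
    private final Object monitor = new Object();

    void saveNetworkState(Runnable writeFile) throws InterruptedException {
        if (!writing.compareAndSet(false, true)) {
            // A write is in progress: one thread may defer, the rest skip.
            if (!waiting.compareAndSet(false, true)) {
                return; // "Skipping ... (another thread already waiting)"
            }
            try {
                synchronized (monitor) {
                    // "Deferring ... (another thread already writing)"
                    while (!writing.compareAndSet(false, true)) {
                        monitor.wait();
                    }
                }
                // "Resuming ... (previous thread finished)"
            } finally {
                waiting.set(false);
            }
        }
        try {
            writeFile.run(); // "Saving ZigBee network state: Start ... Done"
        } finally {
            writing.set(false);
            synchronized (monitor) {
                monitor.notifyAll();
            }
        }
    }
}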

You are right, it would be possible to change them, but this would still not do what you want!

If you changed the serialiser while the system was operating, and the system only saves nodes that change, then the new system will still not get all the nodes until the framework closes. Either it uses the saveNode method, or it saves all. The framework doesn't know when to use one or the other - and if it did, then it could STILL save the whole network using the saveNode method, so there is still no need for both calls - right?

I think that if an implementation wants to change serialiser, then it has the responsibility to ensure continuity of data.

Clearly you are not understanding me. That is what I am proposing! There would be a single method in the interface - saveNode, and this would be called to save each node. If the system wants to save all nodes, it will save them all by calling this method!

From my post above -:

If we have an interface ZigBeeSerializer.serialiseNode() then we can use this to loop through all nodes to serialise the complete network. We do not need a second interface ZigBeeSerializer.serialiseNetwork() which is simply going to do the same thing.

I see now. Please have a look at my response to the issue you opened (writing it now, to follow up there). But then it probably makes sense not to provide the interface at all, as the implementation is then managing everything on its own.

Have a great vacation; this might actually mean I can get a better understanding of the code.

I just got some Xiaomi Aqara door sensors and tried to integrate them with the zigbee binding. Unfortunately, they are only discovered as a new thing, but the channels are not detected. I found help for integration of motion and temperature sensors (manually) but no option to integrate the door sensors.
Does anyone have experience with that, or has even got them working?
I think it is too early to ask chris (so I did not include logs :-)

I have tried many times. I have not been able to add them. I have come as far as you did. They got added, but no channels. Since then I gave up.

Hi Chris,
I bought a Paulmann Zigbee dimmer and got it discovered. The binding discovered one level control thing, but when I link this and control it, nothing happens and the slider jumps back to 100%. Here is the node from the ZigBee network XML:

  <ZigBeeNode>
    <ieeeAddress>00158D0002927954</ieeeAddress>
    <networkAddress>18979</networkAddress>
    <nodeDescriptor>
      <apsFlags>0</apsFlags>
      <bufferSize>127</bufferSize>
      <complexDescriptorAvailable>false</complexDescriptorAvailable>
      <manufacturerCode>4509</manufacturerCode>
      <logicalType>ROUTER</logicalType>
      <serverCapabilities/>
      <incomingTransferSize>90</incomingTransferSize>
      <outgoingTransferSize>90</outgoingTransferSize>
      <userDescriptorAvailable>false</userDescriptorAvailable>
      <frequencyBands>
        <FrequencyBandType>FREQ_2400_MHZ</FrequencyBandType>
      </frequencyBands>
      <macCapabilities>
        <MacCapabilitiesType>RECEIVER_ON_WHEN_IDLE</MacCapabilitiesType>
        <MacCapabilitiesType>FULL_FUNCTION_DEVICE</MacCapabilitiesType>
        <MacCapabilitiesType>MAINS_POWER</MacCapabilitiesType>
      </macCapabilities>
      <extendedEndpointListAvailable>false</extendedEndpointListAvailable>
      <extendedSimpleDescriptorListAvailable>false</extendedSimpleDescriptorListAvailable>
      <stackCompliance>21</stackCompliance>
    </nodeDescriptor>
    <powerDescriptor>
      <currentPowerMode>RECEIVER_ON_IDLE</currentPowerMode>
      <availablePowerSources>
        <PowerSourceType>MAINS</PowerSourceType>
      </availablePowerSources>
      <currentPowerSource>MAINS</currentPowerSource>
      <powerLevel>FULL</powerLevel>
    </powerDescriptor>
    <endpoints>
      <ZigBeeEndpoint>
        <profileId>260</profileId>
        <endpointId>1</endpointId>
        <inputClusterIds/>
        <outputClusterIds/>
        <inputClusters>
          <ZclCluster>
            <clusterId>0</clusterId>
            <isClient>false</isClient>
            <attributes>
              <entry>
                <int>0</int>
                <ZclAttribute>
                  <cluster>BASIC</cluster>
                  <id>0</id>
                  <name>ZCLVersion</name>
                  <dataType>UNSIGNED_8_BIT_INTEGER</dataType>
                  <mandatory>true</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>false</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>16384</int>
                <ZclAttribute>
                  <cluster>BASIC</cluster>
                  <id>16384</id>
                  <name>SWBuildID</name>
                  <dataType>CHARACTER_STRING</dataType>
                  <mandatory>false</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>false</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>1</int>
                <ZclAttribute>
                  <cluster>BASIC</cluster>
                  <id>1</id>
                  <name>ApplicationVersion</name>
                  <dataType>UNSIGNED_8_BIT_INTEGER</dataType>
                  <mandatory>true</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>false</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>2</int>
                <ZclAttribute>
                  <cluster>BASIC</cluster>
                  <id>2</id>
                  <name>StackVersion</name>
                  <dataType>UNSIGNED_8_BIT_INTEGER</dataType>
                  <mandatory>true</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>false</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>3</int>
                <ZclAttribute>
                  <cluster>BASIC</cluster>
                  <id>3</id>
                  <name>HWVersion</name>
                  <dataType>UNSIGNED_8_BIT_INTEGER</dataType>
                  <mandatory>true</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>false</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>4</int>
                <ZclAttribute>
                  <cluster>BASIC</cluster>
                  <id>4</id>
                  <name>ManufacturerName</name>
                  <dataType>CHARACTER_STRING</dataType>
                  <mandatory>true</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>false</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>5</int>
                <ZclAttribute>
                  <cluster>BASIC</cluster>
                  <id>5</id>
                  <name>ModelIdentifier</name>
                  <dataType>CHARACTER_STRING</dataType>
                  <mandatory>true</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>false</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>6</int>
                <ZclAttribute>
                  <cluster>BASIC</cluster>
                  <id>6</id>
                  <name>DateCode</name>
                  <dataType>CHARACTER_STRING</dataType>
                  <mandatory>true</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>false</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>7</int>
                <ZclAttribute>
                  <cluster>BASIC</cluster>
                  <id>7</id>
                  <name>PowerSource</name>
                  <dataType>ENUMERATION_8_BIT</dataType>
                  <mandatory>true</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>false</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>16</int>
                <ZclAttribute>
                  <cluster>BASIC</cluster>
                  <id>16</id>
                  <name>LocationDescription</name>
                  <dataType>CHARACTER_STRING</dataType>
                  <mandatory>true</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>true</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>17</int>
                <ZclAttribute>
                  <cluster>BASIC</cluster>
                  <id>17</id>
                  <name>PhysicalEnvironment</name>
                  <dataType>ENUMERATION_8_BIT</dataType>
                  <mandatory>true</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>true</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>18</int>
                <ZclAttribute>
                  <cluster>BASIC</cluster>
                  <id>18</id>
                  <name>DeviceEnabled</name>
                  <dataType>BOOLEAN</dataType>
                  <mandatory>true</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>true</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>19</int>
                <ZclAttribute>
                  <cluster>BASIC</cluster>
                  <id>19</id>
                  <name>AlarmMask</name>
                  <dataType>BITMAP_8_BIT</dataType>
                  <mandatory>true</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>true</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>20</int>
                <ZclAttribute>
                  <cluster>BASIC</cluster>
                  <id>20</id>
                  <name>DisableLocalConfig</name>
                  <dataType>BITMAP_8_BIT</dataType>
                  <mandatory>true</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>true</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
            </attributes>
            <supportedCommandsReceived/>
            <supportedCommandsGenerated/>
            <supportedAttributes class="sorted-set"/>
          </ZclCluster>
          <ZclCluster>
            <clusterId>4096</clusterId>
            <isClient>false</isClient>
            <attributes/>
            <supportedCommandsReceived/>
            <supportedCommandsGenerated/>
            <supportedAttributes class="sorted-set"/>
          </ZclCluster>
          <ZclCluster>
            <clusterId>3</clusterId>
            <isClient>false</isClient>
            <attributes>
              <entry>
                <int>0</int>
                <ZclAttribute>
                  <cluster>IDENTIFY</cluster>
                  <id>0</id>
                  <name>IdentifyTime</name>
                  <dataType>UNSIGNED_16_BIT_INTEGER</dataType>
                  <mandatory>true</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>true</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
            </attributes>
            <supportedCommandsReceived/>
            <supportedCommandsGenerated/>
            <supportedAttributes class="sorted-set"/>
          </ZclCluster>
          <ZclCluster>
            <clusterId>4</clusterId>
            <isClient>false</isClient>
            <attributes/>
            <supportedCommandsReceived/>
            <supportedCommandsGenerated/>
            <supportedAttributes class="sorted-set"/>
          </ZclCluster>
          <ZclCluster>
            <clusterId>5</clusterId>
            <isClient>false</isClient>
            <attributes>
              <entry>
                <int>0</int>
                <ZclAttribute>
                  <cluster>SCENES</cluster>
                  <id>0</id>
                  <name>SceneCount</name>
                  <dataType>UNSIGNED_8_BIT_INTEGER</dataType>
                  <mandatory>true</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>false</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>1</int>
                <ZclAttribute>
                  <cluster>SCENES</cluster>
                  <id>1</id>
                  <name>CurrentScene</name>
                  <dataType>UNSIGNED_8_BIT_INTEGER</dataType>
                  <mandatory>true</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>false</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>2</int>
                <ZclAttribute>
                  <cluster>SCENES</cluster>
                  <id>2</id>
                  <name>CurrentGroup</name>
                  <dataType>UNSIGNED_16_BIT_INTEGER</dataType>
                  <mandatory>true</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>false</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>3</int>
                <ZclAttribute>
                  <cluster>SCENES</cluster>
                  <id>3</id>
                  <name>SceneValid</name>
                  <dataType>BOOLEAN</dataType>
                  <mandatory>true</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>false</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>4</int>
                <ZclAttribute>
                  <cluster>SCENES</cluster>
                  <id>4</id>
                  <name>NameSupport</name>
                  <dataType>BITMAP_8_BIT</dataType>
                  <mandatory>true</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>false</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>5</int>
                <ZclAttribute>
                  <cluster>SCENES</cluster>
                  <id>5</id>
                  <name>LastConfiguredBy</name>
                  <dataType>IEEE_ADDRESS</dataType>
                  <mandatory>false</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>false</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
            </attributes>
            <supportedCommandsReceived/>
            <supportedCommandsGenerated/>
            <supportedAttributes class="sorted-set"/>
          </ZclCluster>
          <ZclCluster>
            <clusterId>6</clusterId>
            <isClient>false</isClient>
            <attributes>
              <entry>
                <int>0</int>
                <ZclAttribute>
                  <cluster>ON_OFF</cluster>
                  <id>0</id>
                  <name>OnOff</name>
                  <dataType>BOOLEAN</dataType>
                  <mandatory>true</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>false</writeable>
                  <reportable>true</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                  <lastReportTime>
                    <time>1548272510969</time>
                    <timezone>Etc/UTC</timezone>
                  </lastReportTime>
                  <lastValue class="boolean">true</lastValue>
                </ZclAttribute>
              </entry>
              <entry>
                <int>16384</int>
                <ZclAttribute>
                  <cluster>ON_OFF</cluster>
                  <id>16384</id>
                  <name>GlobalSceneControl</name>
                  <dataType>BOOLEAN</dataType>
                  <mandatory>false</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>false</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>16385</int>
                <ZclAttribute>
                  <cluster>ON_OFF</cluster>
                  <id>16385</id>
                  <name>OffTime</name>
                  <dataType>UNSIGNED_16_BIT_INTEGER</dataType>
                  <mandatory>false</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>true</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>16386</int>
                <ZclAttribute>
                  <cluster>ON_OFF</cluster>
                  <id>16386</id>
                  <name>OffWaitTime</name>
                  <dataType>UNSIGNED_16_BIT_INTEGER</dataType>
                  <mandatory>false</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>true</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
            </attributes>
            <supportedCommandsReceived/>
            <supportedCommandsGenerated/>
            <supportedAttributes class="sorted-set"/>
          </ZclCluster>
          <ZclCluster>
            <clusterId>8</clusterId>
            <isClient>false</isClient>
            <attributes>
              <entry>
                <int>0</int>
                <ZclAttribute>
                  <cluster>LEVEL_CONTROL</cluster>
                  <id>0</id>
                  <name>CurrentLevel</name>
                  <dataType>UNSIGNED_8_BIT_INTEGER</dataType>
                  <mandatory>true</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>false</writeable>
                  <reportable>true</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                  <lastReportTime>
                    <time>1548272510952</time>
                    <timezone>Etc/UTC</timezone>
                  </lastReportTime>
                  <lastValue class="int">254</lastValue>
                </ZclAttribute>
              </entry>
              <entry>
                <int>16</int>
                <ZclAttribute>
                  <cluster>LEVEL_CONTROL</cluster>
                  <id>16</id>
                  <name>OnOffTransitionTime</name>
                  <dataType>UNSIGNED_16_BIT_INTEGER</dataType>
                  <mandatory>false</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>true</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>1</int>
                <ZclAttribute>
                  <cluster>LEVEL_CONTROL</cluster>
                  <id>1</id>
                  <name>RemainingTime</name>
                  <dataType>UNSIGNED_16_BIT_INTEGER</dataType>
                  <mandatory>false</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>false</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>17</int>
                <ZclAttribute>
                  <cluster>LEVEL_CONTROL</cluster>
                  <id>17</id>
                  <name>OnLevel</name>
                  <dataType>UNSIGNED_8_BIT_INTEGER</dataType>
                  <mandatory>false</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>true</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>18</int>
                <ZclAttribute>
                  <cluster>LEVEL_CONTROL</cluster>
                  <id>18</id>
                  <name>OnTransitionTime</name>
                  <dataType>UNSIGNED_16_BIT_INTEGER</dataType>
                  <mandatory>false</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>true</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>19</int>
                <ZclAttribute>
                  <cluster>LEVEL_CONTROL</cluster>
                  <id>19</id>
                  <name>OffTransitionTime</name>
                  <dataType>UNSIGNED_16_BIT_INTEGER</dataType>
                  <mandatory>false</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>true</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>20</int>
                <ZclAttribute>
                  <cluster>LEVEL_CONTROL</cluster>
                  <id>20</id>
                  <name>DefaultMoveRate</name>
                  <dataType>UNSIGNED_16_BIT_INTEGER</dataType>
                  <mandatory>false</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>true</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
            </attributes>
            <supportedCommandsReceived/>
            <supportedCommandsGenerated/>
            <supportedAttributes class="sorted-set"/>
          </ZclCluster>
        </inputClusters>
        <outputClusters>
          <ZclCluster>
            <clusterId>25</clusterId>
            <isClient>true</isClient>
            <attributes>
              <entry>
                <int>0</int>
                <ZclAttribute>
                  <cluster>OTA_UPGRADE</cluster>
                  <id>0</id>
                  <name>UpgradeServerID</name>
                  <dataType>IEEE_ADDRESS</dataType>
                  <mandatory>true</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>false</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>1</int>
                <ZclAttribute>
                  <cluster>OTA_UPGRADE</cluster>
                  <id>1</id>
                  <name>FileOffset</name>
                  <dataType>UNSIGNED_32_BIT_INTEGER</dataType>
                  <mandatory>false</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>false</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>2</int>
                <ZclAttribute>
                  <cluster>OTA_UPGRADE</cluster>
                  <id>2</id>
                  <name>CurrentFileVersion</name>
                  <dataType>UNSIGNED_32_BIT_INTEGER</dataType>
                  <mandatory>false</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>false</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>3</int>
                <ZclAttribute>
                  <cluster>OTA_UPGRADE</cluster>
                  <id>3</id>
                  <name>CurrentZigBeeStackVersion</name>
                  <dataType>UNSIGNED_16_BIT_INTEGER</dataType>
                  <mandatory>false</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>false</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>4</int>
                <ZclAttribute>
                  <cluster>OTA_UPGRADE</cluster>
                  <id>4</id>
                  <name>DownloadedFileVersion</name>
                  <dataType>UNSIGNED_32_BIT_INTEGER</dataType>
                  <mandatory>false</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>false</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>5</int>
                <ZclAttribute>
                  <cluster>OTA_UPGRADE</cluster>
                  <id>5</id>
                  <name>DownloadedZigBeeStackVersion</name>
                  <dataType>UNSIGNED_16_BIT_INTEGER</dataType>
                  <mandatory>false</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>false</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>6</int>
                <ZclAttribute>
                  <cluster>OTA_UPGRADE</cluster>
                  <id>6</id>
                  <name>ImageUpgradeStatus</name>
                  <dataType>UNSIGNED_8_BIT_INTEGER</dataType>
                  <mandatory>false</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>false</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>7</int>
                <ZclAttribute>
                  <cluster>OTA_UPGRADE</cluster>
                  <id>7</id>
                  <name>ManufacturerID</name>
                  <dataType>UNSIGNED_16_BIT_INTEGER</dataType>
                  <mandatory>false</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>false</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>8</int>
                <ZclAttribute>
                  <cluster>OTA_UPGRADE</cluster>
                  <id>8</id>
                  <name>ImageTypeID</name>
                  <dataType>UNSIGNED_16_BIT_INTEGER</dataType>
                  <mandatory>false</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>false</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>9</int>
                <ZclAttribute>
                  <cluster>OTA_UPGRADE</cluster>
                  <id>9</id>
                  <name>MinimumBlockRequestDelay</name>
                  <dataType>UNSIGNED_16_BIT_INTEGER</dataType>
                  <mandatory>false</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>false</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
              <entry>
                <int>10</int>
                <ZclAttribute>
                  <cluster>OTA_UPGRADE</cluster>
                  <id>10</id>
                  <name>ImageStamp</name>
                  <dataType>UNSIGNED_32_BIT_INTEGER</dataType>
                  <mandatory>false</mandatory>
                  <implemented>false</implemented>
                  <readable>true</readable>
                  <writeable>false</writeable>
                  <reportable>false</reportable>
                  <minimumReportingPeriod>0</minimumReportingPeriod>
                  <maximumReportingPeriod>0</maximumReportingPeriod>
                  <reportingTimeout>0</reportingTimeout>
                </ZclAttribute>
              </entry>
            </attributes>
            <supportedCommandsReceived/>
            <supportedCommandsGenerated/>
            <supportedAttributes class="sorted-set"/>
          </ZclCluster>
        </outputClusters>
      </ZigBeeEndpoint>
    </endpoints>
    <bindingTable>
      <IeeeAddress>
        <srcAddr>00158D0002927954</srcAddr>
        <srcEndpoint>1</srcEndpoint>
        <clusterId>6</clusterId>
        <dstAddrMode>3</dstAddrMode>
        <dstGroupAddr>0</dstGroupAddr>
        <dstAddr>000D6F000AF7C525</dstAddr>
        <dstNodeEndpoint>1</dstNodeEndpoint>
      </IeeeAddress>
      <IeeeAddress>
        <srcAddr>00158D0002927954</srcAddr>
        <srcEndpoint>1</srcEndpoint>
        <clusterId>8</clusterId>
        <dstAddrMode>3</dstAddrMode>
        <dstGroupAddr>0</dstGroupAddr>
        <dstAddr>000D6F000AF7C525</dstAddr>
        <dstNodeEndpoint>1</dstNodeEndpoint>
      </IeeeAddress>
    </bindingTable>
  </ZigBeeNode>

If you need a sniffer log I can also support you with that.

THX for this great binding
Frank

So this might be another topic for chris…
What do you need?

  • Should I send you a pair of sensors to test?
  • Or some normal logs?
  • Or, if you need a sniffer log, I would kindly rely on your offer from some weeks ago to prepare a stick with suitable firmware and send it to me (after you get the money from me :-)

Thanks in advance

I believe @chris already knows about the difficulties with the Xiaomi devices. There are a few topics about them. The problem is, they're not plain Zigbee, and I doubt Chris can do anything about it.