OH2 Z-Wave refactoring and testing... and SECURITY

Can I edit this myself? My user name is the same as here…

You can now :slight_smile:

Thanks. I have just added this setting for two more command classes: POWERLEVEL and MANUFACTURER_SPECIFIC. I also updated some other info. Please let me know when a new build is ready.
I will continue testing on my own build until then.

Isn’t this related to this thread: Status updates for Qubino Relays/Dimmers generally do not work?
And to the fact that you must activate extra channels or add a temperature sensor, exclude the unit from the controller and then add it back again?
*Or maybe not; I didn’t read the whole thread before I posted, sorry.

No - the issue is different. It is related to the fact that the device reports that it supports version 0 of the command class, and the standard states that if a device reports version 0 it means it doesn’t support the command class. The binding therefore removes it.
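
To make the behaviour concrete, the logic amounts to something like the Java sketch below. The class and method names (NodeCommandClasses, handleVersionReport) are made up for illustration and are not the binding’s actual code; the only point is that a VERSION report of 0 causes the command class to be dropped, so no channels are created for it.

```java
import java.util.HashMap;
import java.util.Map;

// Illustration only (placeholder names, not the binding's real classes):
// a VERSION report of 0 means "not supported" per the Z-Wave specification,
// so the command class is removed from the node.
class NodeCommandClasses {
    private final Map<String, Integer> supported = new HashMap<>();

    void handleVersionReport(String commandClass, int version) {
        if (version == 0) {
            // The device says it does not actually support this class:
            // remove it so no channels get linked to it.
            supported.remove(commandClass);
            return;
        }
        supported.put(commandClass, version);
    }

    boolean supports(String commandClass) {
        return supported.containsKey(commandClass);
    }
}
```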

We had a discussion at the time on this - if you want to reopen the discussion then please have a read above first in case it explains your questions already.

I’ve created a new version of the test binding with the new DEAD node code here.

This only has a few small changes relating to the number of retries during the initial PING stage.

Are you including database updates in the dead node test bindings?

FYI, I’ve been using the previous dead test binding since 5/31 and it is working well.

I think it should have the same database as the one without the dead node changes. I’ll look to update this tomorrow to be sure.

Thanks.

Naming is back to 2.3 instead of 2.4? :question: Or did you post a wrong link? :rofl:

As I posted somewhere else, there is some sort of issue with dependencies when changing the version and I’ve not had a chance to look at this yet. The link should be correct.

No problem, not an issue, just wanted to mention it.

I’ve only just dropped it in, but so far this looks markedly different from before. I only have 2 dead nodes out of 120, all battery powered. It had gotten up to 4, but 2 came back alive. None of the mains powered devices died. I’ll restart in an hour and see if it’s repeatable.

I also noticed 3 of my 4 minimotes seemed to not initialize, so maybe this is working now? I need to check the logs to confirm.

EDIT: I looked away for a bit and now have a bunch of dead nodes (including mains powered)…
openhab> smarthome:things list |grep zwave| grep OFFLINE| wc -l
25

Just for reference, the only change between the two versions is instead of only making 1 attempt during the initial PING stage, I now make 3 attempts. The rest of the binding is unchanged…
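
In sketch form, the dead-node check during initialisation now behaves roughly like this. The names (PingStage, Pinger, isAlive) are placeholders rather than the binding’s real API; only the retry count going from 1 to 3 reflects the actual change.

```java
// Illustration only (placeholder API, not the binding's real code): a node is
// only considered DEAD after several failed NO_OPERATION pings instead of one.
class PingStage {
    static final int PING_ATTEMPTS = 3; // previously a single attempt

    interface Pinger {
        boolean ping(int nodeId); // true if the node acknowledged the NO_OPERATION
    }

    boolean isAlive(Pinger pinger, int nodeId) {
        for (int attempt = 1; attempt <= PING_ATTEMPTS; attempt++) {
            if (pinger.ping(nodeId)) {
                return true; // node responded, initialisation continues
            }
        }
        return false; // all attempts failed -> node is marked DEAD
    }
}
```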

Never mind… they are still being initialized…

Got another request here.
I keep noticing that my message queue isn’t completely empty. All day long I see a remaining NODE 255: Added to queue - size 1, and it makes me wonder whether some device isn’t working properly, has bad routes, or whatever. For some nodes I keep seeing constant values >>1, such as NODE 10: Added to queue - size 7, but I don’t know what these messages are.

Is there a means to see what messages are actually in that/those queue(s)? If not, could you build that in?
Not sure if it has to or can be done in the binding, but I thought I’d ask here first. It would probably best be implemented as a functional extension to HABmin.
RaZberry’s Z-Way UI has this, and it’s of tremendous help when diagnosing the status of a Z-Wave network, but I obviously cannot use a 3rd-party tool on a running OH instance.

Note that there is only 1 queue - so this isn’t related to the node. It’s the total number of messages currently in the queue for ALL nodes in your network. If you have any battery nodes in your network, then it’s quite likely that this number will not be 0 all that often (unless polling is set to a long duration, maybe).

No - this isn’t available.

What do you suggest? Dumping the queue would add a LOT of extra logging.

I can’t think of any way that it can easily be added to HABmin - it would need some sort of REST extension to get the queue.
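
Purely as a thought experiment, such an extension could look something like the JAX-RS sketch below. Nothing like this exists in the binding today; the path, class and interface names are all invented, and it only shows the general shape of a read-only queue dump that HABmin could then render.

```java
import java.util.List;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// Hypothetical REST resource - this endpoint does not exist in the binding.
@Path("/zwave/queue")
public class QueueResource {

    // Placeholder for whatever component would expose a read-only snapshot
    // of the transaction manager's outgoing queue.
    public interface QueueSnapshotProvider {
        List<String> getQueueSnapshot();
    }

    private final QueueSnapshotProvider provider;

    public QueueResource(QueueSnapshotProvider provider) {
        this.provider = provider;
    }

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Response getQueue() {
        // One summary string per queued transaction, e.g. node id and command class.
        return Response.ok(provider.getQueueSnapshot()).build();
    }
}
```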

Does that include the broadcast ‘node’ 255, too, or is there a separate count for broadcasts? If not, the following (a debug log filtered for occurrences of ‘queue’) looks strange to me - or does it really imply that 17 messages were processed between log lines 1 and 2?

2018-06-17 14:08:30.265 [DEBUG] [nal.protocol.ZWaveTransactionManager] - NODE 70: Added to queue - size 17
2018-06-17 14:08:31.334 [DEBUG] [nal.protocol.ZWaveTransactionManager] - NODE 255: Added to queue - size 1
2018-06-17 14:08:31.496 [DEBUG] [nal.protocol.ZWaveTransactionManager] - NODE 255: Added to queue - size 2
2018-06-17 14:08:34.517 [DEBUG] [nal.protocol.ZWaveTransactionManager] - NODE 255: Added to queue - size 3
2018-06-17 14:08:34.699 [DEBUG] [nal.protocol.ZWaveTransactionManager] - NODE 255: Added to queue - size 4
2018-06-17 14:08:42.458 [DEBUG] [nal.protocol.ZWaveTransactionManager] - NODE 19: Added to queue - size 17
2018-06-17 14:09:05.581 [DEBUG] [nal.protocol.ZWaveTransactionManager] - NODE 255: Added to queue - size 1
2018-06-17 14:09:05.688 [DEBUG] [nal.protocol.ZWaveTransactionManager] - NODE 255: Added to queue - size 1
2018-06-17 14:09:05.834 [DEBUG] [nal.protocol.ZWaveTransactionManager] - NODE 255: Added to queue - size 1
2018-06-17 14:09:05.954 [DEBUG] [nal.protocol.ZWaveTransactionManager] - NODE 255: Added to queue - size 1
2018-06-17 14:09:35.719 [DEBUG] [nal.protocol.ZWaveTransactionManager] - NODE 255: Added to queue - size 1
2018-06-17 14:09:35.903 [DEBUG] [nal.protocol.ZWaveTransactionManager] - NODE 255: Added to queue - size 1
2018-06-17 14:09:35.977 [DEBUG] [nal.protocol.ZWaveTransactionManager] - NODE 255: Added to queue - size 1
2018-06-17 14:09:36.127 [DEBUG] [nal.protocol.ZWaveTransactionManager] - NODE 255: Added to queue - size 1
2018-06-17 14:09:37.208 [DEBUG] [nal.protocol.ZWaveTransactionManager] - NODE 59: Added to queue - size 17
2018-06-17 14:09:37.297 [DEBUG] [nal.protocol.ZWaveTransactionManager] - NODE 59: Added to queue - size 17
2018-06-17 14:09:37.310 [DEBUG] [nal.protocol.ZWaveTransactionManager] - NODE 59: Added to queue - size 18

At the moment, there are no broadcasts or multicast messages used in the binding. The node 255 you see means it’s a controller message, and there is a separate queue for controller messages as they need to be handled a little differently.
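
Schematically, the routing works along these lines (made-up names, not the actual transaction manager code): the controller node ID 255 selects the controller queue, and everything else goes into the shared node queue.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Rough illustration only: messages addressed to the controller itself
// (logged as NODE 255) go into a separate controller queue, everything
// else into the single queue shared by all nodes.
class QueueRouter {
    static final int CONTROLLER_NODE_ID = 255;

    final Queue<byte[]> controllerQueue = new ArrayDeque<>();
    final Queue<byte[]> nodeQueue = new ArrayDeque<>();

    void enqueue(int nodeId, byte[] message) {
        if (nodeId == CONTROLLER_NODE_ID) {
            controllerQueue.add(message);
        } else {
            nodeQueue.add(message);
        }
    }
}
```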

OK, so it’s 2 queues, with the standard one being of most interest with regard to whether nodes are well connected or not, be it for a good reason (battery) or not.

Yes, I have quite a few battery nodes. Quite often there’s some en-bloc output from the binding such as the following. The nodes listed there are all battery powered, and their number more or less matches the queue length.
I don’t feel comfortable with this. It may or may not point to a Z-Wave network problem, I still don’t know, but in the old days with the standard binding my queue was mostly hovering around zero.
Any idea how to proceed in getting the queue shortened? Does it make sense to add polling settings for these items?

2018-06-17 14:08:35.828 [DEBUG] [ng.zwave.internal.protocol.ZWaveNode] - NODE 151: listening == false, frequentlyListening == false, awake == false
2018-06-17 14:08:35.830 [DEBUG] [nal.protocol.ZWaveTransactionManager] - NODE 151: Node not awake!
2018-06-17 14:08:35.831 [DEBUG] [ng.zwave.internal.protocol.ZWaveNode] - NODE 21: listening == false, frequentlyListening == false, awake == false
2018-06-17 14:08:35.832 [DEBUG] [nal.protocol.ZWaveTransactionManager] - NODE 21: Node not awake!
2018-06-17 14:08:35.834 [DEBUG] [ng.zwave.internal.protocol.ZWaveNode] - NODE 87: listening == false, frequentlyListening == false, awake == false
2018-06-17 14:08:35.835 [DEBUG] [nal.protocol.ZWaveTransactionManager] - NODE 87: Node not awake!
2018-06-17 14:08:35.836 [DEBUG] [ng.zwave.internal.protocol.ZWaveNode] - NODE 142: listening == false, frequentlyListening == false, awake == false
2018-06-17 14:08:35.838 [DEBUG] [nal.protocol.ZWaveTransactionManager] - NODE 142: Node not awake!
2018-06-17 14:08:35.840 [DEBUG] [ng.zwave.internal.protocol.ZWaveNode] - NODE 137: listening == false, frequentlyListening == false, awake == false
2018-06-17 14:08:35.841 [DEBUG] [nal.protocol.ZWaveTransactionManager] - NODE 137: Node not awake!
2018-06-17 14:08:35.843 [DEBUG] [ng.zwave.internal.protocol.ZWaveNode] - NODE 138: listening == false, frequentlyListening == false, awake == false
2018-06-17 14:08:35.844 [DEBUG] [nal.protocol.ZWaveTransactionManager] - NODE 138: Node not awake!
2018-06-17 14:08:35.845 [DEBUG] [ng.zwave.internal.protocol.ZWaveNode] - NODE 11: listening == false, frequentlyListening == false, awake == false
2018-06-17 14:08:35.847 [DEBUG] [nal.protocol.ZWaveTransactionManager] - NODE 11: Node not awake!
2018-06-17 14:08:35.848 [DEBUG] [ng.zwave.internal.protocol.ZWaveNode] - NODE 8: listening == false, frequentlyListening == false, awake == false
2018-06-17 14:08:35.849 [DEBUG] [nal.protocol.ZWaveTransactionManager] - NODE 8: Node not awake!
2018-06-17 14:08:35.850 [DEBUG] [ng.zwave.internal.protocol.ZWaveNode] - NODE 149: listening == false, frequentlyListening == false, awake == false
2018-06-17 14:08:35.852 [DEBUG] [nal.protocol.ZWaveTransactionManager] - NODE 149: Node not awake!

Yes please. Since I assume that’ll be quite a lot of work, I don’t expect this to be implemented right away, but please consider putting it on your TODO list. Would that require any work outside the binding + HABmin, too?

In the “old days” there were separate queues for battery devices - now it’s all one queue. This doesn’t change the way it works though - it’s just that now there’s a logging entry that shows the queue length. I’ll remove the log entry, and then it will be less concerning :slight_smile: .
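
To make that concrete, the single queue behaves roughly like the sketch below (placeholder names, not the real transaction manager). It simply shows why the “Added to queue - size N” value stays above zero while sleeping battery nodes still have pending messages, matching the “Node not awake!” lines in the log above.

```java
import java.util.ArrayDeque;
import java.util.Iterator;
import java.util.Queue;

// Simplified sketch: one shared outgoing queue for all nodes. Every add logs
// the new size; entries for sleeping nodes stay queued until the node wakes up.
class SingleTransactionQueue {
    static class Tx {
        final int nodeId;
        boolean nodeAwake;
        Tx(int nodeId, boolean nodeAwake) { this.nodeId = nodeId; this.nodeAwake = nodeAwake; }
    }

    private final Queue<Tx> queue = new ArrayDeque<>();

    void add(Tx tx) {
        queue.add(tx);
        System.out.println("NODE " + tx.nodeId + ": Added to queue - size " + queue.size());
    }

    Tx pollNextSendable() {
        // Only transactions for awake (or always-listening, mains powered) nodes
        // are taken off the queue; the rest remain, so the size rarely reaches 0
        // in a network with many battery devices.
        Iterator<Tx> it = queue.iterator();
        while (it.hasNext()) {
            Tx tx = it.next();
            if (tx.nodeAwake) {
                it.remove();
                return tx;
            }
        }
        return null;
    }
}
```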

Why do you want to shorten the queue? I don’t really understand what you think the problem is. It is perfectly normal to have messages queued to be sent to sleeping devices :confused:.

What settings do you want to change? If you want to change the poll time, then you can do this - or is there something else you want to change?