Z-Wave queue length growing rapidly in OH2 when polling Greenwave GWPN6

I have six Greenwave NP210 6-port power strips that I am polling every minute to graph with InfluxDB and Grafana. It works great, except it generates a ton of zwave traffic, since the polling seems to request everything, not just the current power stats. Ideally I’d like to poll total power and switch states at a much lower rate.
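
For reference, the persistence side of this looks roughly like the snippet below (the item names are placeholders; the real file lists every outlet on all six strips):

```
// influxdb.persist - placeholder item names, the real file covers all six strips
Strategies {
    everyMinute : "0 * * * * ?"
}

Items {
    // record each reading once a minute so Grafana gets a regular series
    GreenwavePower_Total, GreenwavePower_Outlet1 : strategy = everyMinute
}
```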

What worries me, though, is that the zwave binding seems to “leak” queued messages: over the course of a day the queue length reaches over 30,000 messages.

I’ll try to attach a zwave log. I searched for Greenwave on the forums and didn’t find much information, but it seems to me like these devices might need some device-specific workarounds?

Is there a way to attach log files to posts? I stuck it on pastebin for now: http://pastebin.com/vprg3cP8

I guess the first question is why use polling? Can’t the device be configured to send data only when it changes (from a quick look at the manual, it looks like it)? Polling in zwave - or really any system - isn’t very efficient…

That said, I’m surprised it’s getting to such a large queue so I’m not sure what’s going on. Are you using slow hardware? I would have expected the binding to have handled this number of messages…

I guess I’ll look at changing the polling to reduce the requests, as is done in OH1, so that polling is throttled when the queue gets large.

It does look like it’s supposed to be possible to use it without polling; I’ve just gone with the configuration example here:

And tried to copy that for OH2. Previous discussion threads said that you need to poll it like that to stop it constantly blinking its communication-error LED, but I see there are now configuration parameters both for disabling that LED and for adjusting its timeout, which I don’t think were present in the earlier OH1 zwave binding I used. I’ll try increasing the timeout to 45 minutes and setting the polling to every 30 minutes (so the strips always hear from the controller before the timeout expires) and see what that does to my graphs! :slight_smile:

…forgot to say, I switched from OH1 on a Raspberry Pi 2 to OH2 on an Atom-based PC with 4 GB RAM and an SSD. After starting to have issues with OH1 on the Pi, I decided my setup had grown large enough that I was most likely overloading it. Apart from startup pegging the CPU at 100% for 20 seconds or so, the new box never struggles; system load is around 0.10 right now.

@ssvenn, I seem to remember that in OH1 there were some problems getting the Greenwaves to send data on their own, so polling was the only option.

In OH2, however, triggering a data transfer on a delta of the values works for the Greenwaves, so perhaps you want to move away from the inefficient polling mechanism? I have a working config here and am happy to give some guidance if needed.

@luotaus After modifying all the Greenwaves to only poll every 30 minutes and setting up association groups, they seem to be reporting back to the controller automatically and none of them are blinking. My zwave queue length is hovering around 70 after running overnight. I’ll keep an eye on it to see if it keeps growing; I’m not sure it’s even a problem as long as it doesn’t get into the thousands. How quick are your Greenwaves to respond? It seems like mine need a retransmit more often than not before they switch an outlet, but I still haven’t moved all my devices to the new zwave controller, so the mesh network probably isn’t as strong as it could be.
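
For what it’s worth, the items are linked straight to the thing channels, something like this (the thing UID and channel IDs below are from memory; copy the exact ones HABmin lists for your node):

```
// greenwave.items - thing UID and channel IDs are from memory,
// copy the exact ones HABmin shows for your node
Switch GW1_Outlet1 "Strip 1 outlet 1"            { channel="zwave:device:controller:node5:switch_binary1" }
Number GW1_Power1  "Strip 1 outlet 1 [%.1f W]"   { channel="zwave:device:controller:node5:meter_watts1" }
Number GW1_Energy1 "Strip 1 outlet 1 [%.2f kWh]" { channel="zwave:device:controller:node5:meter_kwh1" }
```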

@chris I’d love to have the queue length exposed as a channel on the zwave binding; then I could stick that in a graph as well, or create an alert rule for when it goes above a threshold :smiley: (retransmits too!)
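
Something like the rule below is what I have in mind, assuming a hypothetical ZwaveQueueLength item that would be linked to such a channel:

```
// queue-alert.rules - ZwaveQueueLength is a hypothetical item; it would be
// linked to a queue-length channel if the binding ever exposes one
rule "Z-Wave queue length alert"
when
    Item ZwaveQueueLength changed
then
    if ((ZwaveQueueLength.state as DecimalType).intValue > 1000) {
        logWarn("zwave", "Z-Wave TX queue is backing up: " + ZwaveQueueLength.state)
        // could also fire a notification here instead of just logging
    }
end
```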

Did some tests, and it took around 1-2 seconds from changing the load on a device attached to a Greenwave until the change appeared in HABmin.