Received update triggered twice

Hi,

I am using a Raspberry Pi running OH2 to make HTTP POSTs with the sensor values of connected Z-Wave devices.
These values are then stored by the web server in a MySQL database.

With the following rule, each update ends up in my database twice, with some time in between.
I am 100% sure that it is not the PHP web server that is inserting it twice.

    rule "Post multisensor values"
    	when 
    		Item Z1_OO1_Multi_Temp received update
    	then
    		Thread::sleep(100)
    		val String MY_URL = 'https://.../api/updateTemperature.php'
    		val name = Z1_OO1_Multi_Temp.name.toString.split("_") 
    		val String RoomNR = name.get(1)
    		var String myData = '{
    		    "room_id": "'+RoomNR+'",
    	    	"temperature": '+Z1_OO1_Multi_Temp.state.toString+'
    		}'
    		sendHttpPostRequest(MY_URL, "application/json", myData)
    end

Help is appreciated!

I already tried inserting a Thread::sleep, but without results.

Can you tell us what you expected to happen instead?
If you are looking to record only changes, rather than every periodic update, you could try triggering the rule on changed instead.
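
For example, only the trigger line changes; the body of your rule stays the same:

    when
        Item Z1_OO1_Multi_Temp changed
    then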

Well, rows 1-4 were manual HTTP POSTs to test my web server. --> I should have cleared this up sooner.
From row 5 onward, I always observe 2 records per update.
For example:
row 5: time = 00:58:47 temperature = 18.5
row 6: time = 00:58:54 temperature = 18.5

What I expected was just one row with the temperature for each update.
Like what suddenly happens at row 9, for no apparent reason.

I actually didn't want to trigger the rule on a change.
I wanted an insert for each update of my value (whether it changed or not), based on the polling period in the config of my Z-Wave device.

Is it possible that the device is reporting the temp twice: once as part of a periodic (unsolicited) report, and once in response to a poll? What is the reporting interval, and what is the polling interval for the device?

You could verify this easily by putting the zwave binding in debug mode and viewing the transactions with Chris’s log viewer.

My controller is only associated to group 1.
I have the Z-Wave binding in debug mode.
Do you mean the openHAB 2 Log Viewer (frontail)?
I am seeing a lot of output, but I don't really see anything suspicious.

I guess the easiest way is to go for the "changed" trigger.
This would probably also be the better option, to save storage and avoid redundancy.

Ok, so it’s sending an unsolicited report every hour.

Is this an Aeon ZW-100?

How often does the device wake up, and what is the polling interval? You can find both of those parameters in the Thing panel in HABmin (under Wakeup Configuration and Device Configuration, respectively).

No. This viewer.
http://www.cd-jackson.com/index.php/openhab/zwave-log-viewer

Yes, this is a ZW100.

Wakeup interval: 3600
Report interval Group 1: 3600
Polling period: 30 min

Which file should I load? Excuse me for these probably frequently asked questions. I really appreciate the help!

So I would expect you to get another temp report every hour, when the device wakes up. For background, the binding puts a poll request on the queue for all of the device's (linked) channels every 30 minutes. If a poll message is already on the queue, it doesn't add another one. When the device wakes up, the queued messages are sent to the device.
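
Conceptually the queueing behaves something like this (an illustrative sketch only, not the binding's actual code):

    # Illustrative sketch of the polling/wakeup behaviour described above;
    # names and structure are made up, this is not the zwave binding's code.
    poll_queue = []

    def polling_period_elapsed(channels):      # every 30 minutes here
        for channel in channels:
            if channel not in poll_queue:      # an identical poll already queued? then skip
                poll_queue.append(channel)

    def device_woke_up(send_poll):             # wakeup interval, every 3600 s here
        while poll_queue:
            send_poll(poll_queue.pop(0))       # flush everything that accumulated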

No problem.

You load the openhab.log file into the viewer. However, for the viewer to work, the binding needs to be in debug mode. You can put the binding into debug mode by issuing the following command at the Karaf console (sorry if you already know how to do this).

log:set DEBUG org.openhab.binding.zwave

This will generate a lot of data. To take it out of debug mode, do this.

log:set INFO org.openhab.binding.zwave

Info about the Karaf console is here.
https://docs.openhab.org/administration/console.html

Seems roundabout. Is there a reason Persistence won't work? Or the MQTT Eventbus? Just curious about why you chose this approach.

One thing you can do really quickly is look in events.log and verify whether your device is generating more than one update event, or whether something else is going on. You will see separate duplicate log entries if the Item is being updated twice. If it is, then your rule will need to become more complicated.

As written it looks like it should work.

Good point! I was having him look at the zwave logs, but events.log would be a quicker way to see if there are multiple events being generated. If there are multiples, then dig into the zwave logs.

Well, I am an engineering student in Electronics-ICT. For my master's thesis I am working with a hotel to do some energy management. In each room we will place 3 relays behind the sockets of the heater, the water heater and the cooking plate. Furthermore, we will place a window contact and a multisensor. This means we will have a total of 500 nodes (5 nodes x 100 rooms). Therefore I will set up multiple Z-Wave networks using multiple hubs (Pi + OH). I know I could use MQTT and create one OH instance on which all nodes are virtually available. However, I like keeping most of the logic on a bigger, more resourceful master and just having my hubs as slaves. Does this make sense? :wink:

At the moment it is doing fine and I do not see any redundant inserts.
I'll leave it running overnight and see if it's stable by tomorrow.

What you are doing makes sense, but it doesn't explain why you are making an HTTP call to a PHP program to save the data to a MySQL database when OH could save that data to MySQL directly, even if you have multiple OH instances saving to the same database.
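
For reference, the MySQL persistence add-on only needs services/mysql.cfg pointed at your database (url, user, password) plus a persistence/mysql.persist file along these lines (Item name taken from your rule):

    Strategies {
        default = everyUpdate
    }
    Items {
        Z1_OO1_Multi_Temp : strategy = everyUpdate
    }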

Or, instead of an HTTP-driven PHP program, you can have OH publish all of its events over MQTT and have your PHP program, or whatever you choose, subscribe to those topics and save the data based on that. This is a common approach for integrating OH with external home automation systems like Node-RED. It isn't just used to link two or more OH instances together.
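
Just as a sketch of that route (the broker name and topic layout are only examples), the MQTT 1.x eventbus is driven by two small config files:

    # services/mqtt.cfg
    hotel.url=tcp://localhost:1883

    # services/mqtt-eventbus.cfg
    broker=hotel
    statePublishTopic=openhab/out/${item}/state

Your PHP program, or whatever else subscribes to openhab/out/+/state, then does the database insert itself.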

In both cases, you can skip the Rule entirely.

Another approach you could take is to just have the one instance of OH and have it connect to your Pi/Controller over IP.

I'm not entirely certain how well the Z-Wave binding will scale across controllers though, so that is something to watch out for. I'm sure two or three controllers have been envisioned; I doubt 100 was taken into consideration.

There are a lot of ways to approach this sort of problem, and I'm calling your approach into question. I'm sure there are other considerations at play, but it still seems that there are ways to do this a little more natively within OH itself.

You're certainly right on this one! Except that I like to have my database a little more secure.
Therefore I want it to be accessible from localhost only.
This also allows me to create some higher-level logic in the database structure, I guess.

Again, I think your comment is very well placed!
But wouldn't this create some overhead? OH -> MQTT -> PHP -> Database vs. OH -> PHP -> Database
Also, I am not very familiar with MQTT. Is it possible to make an MQTT broker publicly addressable by a web server?
What would be the advantage of this? Would this lightweight protocol really be remarkably faster in my situation?

I really appreciate your effort to make me question my first approach. I am inexperienced with these things and I have a lot to learn. When I'm done with my master's thesis, I will definitely post some documentation and credit the people who were part of my project.

I've not profiled it, but I do know that MQTT is much lighter weight than HTTP as a protocol overall. You would need to run a separate broker, but you would no longer be required to run a web server for this. I don't do much with PHP so I don't know how MQTT would work with it, but I'm pretty sure there are libraries; there certainly are for JavaScript. Were I to write this in Python, I would be able to cut out much of the middle of your path in < 100 lines of Python.

OH -> MQTT -> Python -> Database
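
Something along these lines, as a rough sketch only (it assumes the eventbus topic layout sketched above, paho-mqtt and PyMySQL, plus a made-up table and credentials):

    import paho.mqtt.client as mqtt   # pip install paho-mqtt
    import pymysql                    # pip install PyMySQL

    db = pymysql.connect(host="localhost", user="openhab",
                         password="secret", database="hotel", autocommit=True)

    def on_connect(client, userdata, flags, rc):
        # (re)subscribe to every Item's state topic from the eventbus
        client.subscribe("openhab/out/+/state")

    def on_message(client, userdata, msg):
        item = msg.topic.split("/")[2]          # e.g. Z1_OO1_Multi_Temp
        if not item.endswith("_Multi_Temp"):
            return                              # only temperatures in this sketch
        room = item.split("_")[1]               # room number, as in the rule above
        with db.cursor() as cur:
            cur.execute(
                "INSERT INTO temperature (room_id, temperature) VALUES (%s, %s)",
                (room, float(msg.payload.decode())))

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect("localhost", 1883)
    client.loop_forever()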

I would expect it to:

  • be easier to implement, which is priority number one in my book until you know you have performance or scaling problems
  • be significantly faster
  • be significantly more reliable
  • be much easier to scale
  • more closely match the common industry/community approach to this sort of problem
  • open more opportunities for you to monitor and control all your instances of OH from a single master OH instance
  • offer more flexibility

It is easier to implement because you do not need OH Rules, just configuration and following conventions (e.g. for topic names), and there are good MQTT libraries in just about every language you can think of. MQTT was developed to control SCADA in the oil industry over sketchy wireless connections, so being light weight and reliable were both high-priority requirements. It is easier to scale because it is possible to cluster brokers when necessary (it shouldn't be necessary in your case). The brokers support SSL/TLS, authentication via password or certificate or both, and Access Control Lists to control access to the broker and topics down to a per-topic level. MQTT is HUGE in home automation and the Internet of Things because it's an open standard, easy to implement, very light weight, and very well suited to unreliable communication mediums. Messaging approaches like this are also very heavily used in other computing domains for problems like this.
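
For example, with Mosquitto as the broker (a common choice, not a requirement), per-hub access control could look roughly like this:

    # /etc/mosquitto/mosquitto.conf
    allow_anonymous false
    password_file /etc/mosquitto/passwd
    acl_file /etc/mosquitto/acl

    # /etc/mosquitto/acl: each hub only sees its own branch of the topic tree
    user hub01
    topic readwrite openhab/hub01/#

    user master
    topic readwrite openhab/#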

Another advantage in a deployment like this is that you can more easily monitor your servers' availability. MQTT supports a feature called the Last Will and Testament (LWT). When a client registers with a broker, it can register an LWT topic and message. When the broker detects that the client has lost its connection, it automatically publishes the registered LWT message to that topic. Interested parties (e.g. an online-status Item in OH) can subscribe to that topic and get notified immediately when a client goes offline. With 100 nodes you will need some decent monitoring, and you might want alternative logic to follow if a device is offline (e.g. an error message back to the user saying a command cannot be completed because a device is offline).
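
A minimal LWT registration with paho-mqtt looks something like this (the topic and payloads are just an example convention, not something OH mandates):

    import paho.mqtt.client as mqtt

    client = mqtt.Client(client_id="hub01")
    # If this client drops off the network, the broker publishes "offline" for it.
    client.will_set("openhab/hub01/status", payload="offline", qos=1, retain=True)
    client.connect("localhost", 1883)
    # Announce that we are up; an OH Item on the master can subscribe to this topic.
    client.publish("openhab/hub01/status", "online", qos=1, retain=True)
    client.loop_forever()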

Completely outside of OH, but some other things you should consider if you haven’t already:

  • PXE Boot/Kickstart to deploy images to your RPis (supported on the RPi 3 only, I think)
  • Alternatively, Ansible/Chef/Puppet to standardize and remotely deploy your RPis' OS and configuration
  • Nagios or one of its many competitors to monitor all of these devices
  • Splunk or the ELK stack to centralize logging
  • Docker, with perhaps Kubernetes, to deploy your apps
  • Read-only configuration of your RPis if possible (probably not feasible while running OH on each RPi)
  • A standardized and easy approach to replacing failed hardware; in particular, with this many RPis you will see upwards of ten SD card failures a month after the first few months of operation (another reason to look at PXE Boot/Kickstart and Ansible/Chef/Puppet)
  • A modified openHABian script might be another alternative to PXE Boot/Kickstart, though it will take longer for the RPi to come back online and your image may not be identical across all RPis, which could cause some problems.

As soon as you move beyond a dozen computers, the importance of simplifying maintenance and systems administration grows drastically. It becomes even more important when dealing with unreliable SBCs. I don't know how much of this you are implementing, but I suspect that if you even just acknowledge these points in your thesis you should get some extra points. :wink:

I am experiencing double entries in my MySQL database as well, though it might not be related to your issue here, because I am using OH persistence to write into my MariaDB.

However, one thing came to mind:

I have defined in my persistence file that every single item (*) should be stored on change and restored at startup.

I did this because I wanted the Groups' states to be stored as well, and not just the Items within the group (G_Lights*).
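
So the relevant part of my .persist file looks roughly like this (simplified):

    Strategies {
        default = everyChange
    }
    Items {
        * : strategy = everyChange, restoreOnStartup
    }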

Could this potentially cause OH to store values twice?
Once for the single * entry and once for the same Item included in the Group?

Thanks for your response!
However, at that point I had not implemented groups yet, so this cannot be my problem.

Right now I have stepped away from using HTTPS POSTs, and persistence will be handled by OH's built-in default persistence feature in the future.