Well, rows 1-4 were manual HTTP posts to test my web server; I should have cleared those out sooner.
From row 5 onward, I always observe 2 records per update.
For example:
row 5: time = 00:58:47, temperature = 18.5
row 6: time = 00:58:54, temperature = 18.5
What I expected was just one row with the temperature for each update.
Like what suddenly happens at row 9, for no apparent reason.
I actually didn’t want to trigger the rule on a change.
I wanted an insert for each update of my value (whether it changed or not), based on the polling period in the config of my Z-Wave device.
Is it possible that the device is reporting the temp twice? Once as part of a periodic (unsolicited) report, and once as part of a response to a poll. What is the reporting interval, and what is the polling interval for the device?
You could verify this easily by putting the zwave binding in debug mode and viewing the transactions with Chris’s log viewer.
Ok, so it’s sending an unsolicited report every hour.
Is this an Aeon ZW-100?
How often does the device wake up, and what is the polling interval? You can find both parameters in the Thing panel in HABmin (under Wakeup Configuration and Device Configuration, respectively).
So, I would expect you would get another temp report every hour when the device wakes up. For background, the binding puts a request on the queue for all the device’s (linked) channels every 30 minutes. If a poll message is already on the queue, it doesn’t add another poll message to the queue. When the device wakes up, the queued messages are sent to the device.
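In pseudocode, the queueing behavior described above works roughly like this (the class and method names are my own for illustration, not the zwave binding's actual code):

```python
class WakeupQueue:
    """Sketch of the de-duplicating poll queue described above.

    Illustrative only; not the zwave binding's real implementation.
    """
    def __init__(self):
        self._queue = []

    def add_poll(self, channel):
        # If a poll for this channel is already on the queue,
        # the binding does not add another one.
        if ("poll", channel) not in self._queue:
            self._queue.append(("poll", channel))

    def on_wakeup(self):
        # When the device wakes up, all queued messages are sent at once.
        messages, self._queue = self._queue, []
        return messages

q = WakeupQueue()
# A poll is queued every 30 minutes; with a 1-hour wakeup interval,
# two attempts are made but only one poll actually ends up queued.
q.add_poll("sensor_temperature")
q.add_poll("sensor_temperature")
print(q.on_wakeup())  # a single poll message
```

So with a one-hour wakeup you'd see at most one poll-driven report per wakeup, plus any unsolicited reports the device sends on its own.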
You load the openhab.log file into the viewer. However, for the viewer to work, the binding needs to be in debug mode. You can put the binding into debug mode by issuing the following command at the Karaf console (sorry if you already know how to do this).
log:set DEBUG org.openhab.binding.zwave
This will generate a lot of data. To take it back out of debug mode, set the level back to INFO:
log:set INFO org.openhab.binding.zwave
Seems round-about. Is there a reason Persistence won’t work? Or the MQTT Eventbus? Just curious about why you chose this approach.
One thing you can do really quickly is look in events.log and verify that your device is generating more than one update event or if something else is going on. You will see a separate duplicate log entry if the Item is being updated twice. If it is then your rule will become more complicated.
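If events.log does show duplicate updates, one option is to debounce on the receiving side before inserting. A minimal sketch in Python (the function name, the 30-second window, and the in-memory cache are my own assumptions, not anything from OH):

```python
import time

_last = {}  # item name -> (value, timestamp of last report)

def should_insert(item, value, window_s=30, now=None):
    """Drop a reading if the same item reported the same value
    within the last `window_s` seconds (a simple debounce)."""
    now = time.time() if now is None else now
    prev = _last.get(item)
    _last[item] = (value, now)
    if prev is not None and prev[0] == value and now - prev[1] < window_s:
        return False
    return True

# Rows 5 and 6 from the table above: same value, 7 seconds apart.
print(should_insert("temperature", 18.5, now=0))  # True  -> insert row 5
print(should_insert("temperature", 18.5, now=7))  # False -> skip row 6
```

The same logic could live in the rule itself, or in whatever script sits between OH and the database.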
Well, I am an engineering student in Electronics-ICT. For my master thesis I am working with a hotel to do some energy management. In each room we will place 3 relays behind the sockets of the heater, water heater, and hot plate. Furthermore we will place a window contact and a multi sensor. This means we will have a total of 500 nodes (5 nodes x 100 rooms). Therefore I will set up multiple Z-Wave networks using multiple hubs (Pi + OH). I know I could use MQTT and create an OH instance on which all nodes are virtually available. However, I like keeping most of the logic on a bigger, more resourceful master and just having my hubs as slaves. Does this make sense?
At the moment it is doing fine and I do not see any redundant inserts.
I'll leave it running overnight and see if it's stable by tomorrow.
What you are doing makes sense, but it doesn’t explain why you are making an HTTP call to a PHP program to save the data to a MySQL database when OH could save that data to MySQL directly, even if you have multiple OH instances saving to the same database.
Or instead of an HTTP driven PHP program, you can have OH publish all the events over MQTT and have your PHP program, or whatever you choose, subscribe to those topics and save the data based on that. This is a common approach for integrating OH with external home automation systems like NodeRed. It isn’t just used to link up two or more OH instances together.
In both cases, you can skip the Rule entirely.
Another approach you could take is to just have the one instance of OH and have it connect to your Pi/Controller over IP.
I’m not entirely certain how well the Zwave binding will scale across controllers, though, so that is something to watch out for. I’m sure two or three controllers have been envisioned; I doubt 100 was taken into consideration.
There are a lot of ways to approach this sort of problem and I’m calling into question your approach. I’m sure there are other considerations at play. But it still seems that there are ways to do this a little more natively to OH itself.
You’re certainly right on this one! Except that I like to have my database a little more secure.
Therefore I want it to be accessed by localhosts only.
This also allows me to create some higher level logic in the database structure I guess.
Again I think your comment is very well placed!
But wouldn’t this create some overhead? OH -> MQTT -> PHP -> Database vs. OH -> PHP -> Database
Also, I am not very familiar with MQTT. Is it possible to make an MQTT broker publicly addressable by a webserver?
What would be the advantage of this? Would this lightweight protocol really be remarkably faster in my situation?
I really appreciate your effort to make me question my first approach. I am inexperienced with these things and I have a lot to learn. When I’m done with my master thesis, I will definitely post some documentation and credit the people who were part of my project.
OH -> MQTT -> Python -> Database
I would expect it to:
be easier to implement, which is priority number one in my book until you know you have performance or scaling problems
be significantly faster
be significantly more reliable
be much easier to scale
more closely match the common industry/community approach to this sort of problem
open more opportunities to monitor and control all your instances of OH from a single master OH instance
offer more flexibility
It is easier to implement because you do not need OH Rules, just configuration and some conventions (i.e. for topic names), and there are good MQTT client libraries in just about every language you can think of. MQTT was developed to control SCADA systems in the oil industry over sketchy wireless connections, so being lightweight and reliable were both high-priority requirements. It is easier to scale because it is possible to cluster brokers when necessary (which shouldn’t be necessary in your case). The brokers support SSL/TLS, authentication via password or cert or both, and Access Control Lists to control access to the broker down to a topic-by-topic level. MQTT is HUGE in home automation and the Internet of Things because it’s an open standard, easy to implement, very lightweight, and very well suited for unreliable communication mediums. Messaging approaches like this are also very heavily used in other computing domains for problems like this.
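To make the topic-name convention concrete: MQTT topics are hierarchical, and subscriptions can use `+` (exactly one level) and `#` (the rest of the tree) wildcards. Here is an illustrative implementation of the matching rules (the broker does this for you; paho-mqtt also ships `topic_matches_sub`), with a hypothetical `hotel/<room>/<item>` layout:

```python
def topic_matches(subscription, topic):
    """Return True if an MQTT topic matches a subscription filter.

    Illustrative implementation of the standard +/# wildcard rules;
    real brokers (and paho.mqtt.client.topic_matches_sub) do this for you.
    """
    sub_parts = subscription.split("/")
    top_parts = topic.split("/")
    for i, part in enumerate(sub_parts):
        if part == "#":
            return True          # matches the remainder of the tree
        if i >= len(top_parts):
            return False
        if part != "+" and part != top_parts[i]:
            return False
    return len(sub_parts) == len(top_parts)

# One subscription can cover every room's temperature:
print(topic_matches("hotel/+/temperature", "hotel/room12/temperature"))  # True
print(topic_matches("hotel/#", "hotel/room12/window/contact"))           # True
print(topic_matches("hotel/+/temperature", "hotel/room12/humidity"))     # False
```

Pick a consistent topic layout up front and a single subscriber can pull in all 100 rooms, or any slice of them, without per-room configuration.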
Another advantage in a deployment like this is that you can more easily monitor your servers’ availability. MQTT supports a feature called the Last Will and Testament (LWT). When a client registers with a broker, it can register an LWT topic and message. When the broker detects that the client has lost its connection, it automatically publishes the registered LWT message to that topic. Interested parties (e.g. an online-status Item in OH) can subscribe to that topic and get notified immediately when a client goes offline. With 100 nodes you will need some decent monitoring, and you might want alternative logic to follow if a device is offline (e.g. an error message back to a user saying a command cannot be completed because a device is offline).
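On the consuming side, tracking node availability from LWT messages can be very simple. A sketch, assuming a `status/<node>` topic layout with ONLINE/OFFLINE payloads (my own convention: each client registers "OFFLINE" as its LWT and publishes "ONLINE" itself after a clean connect):

```python
# The broker publishes the registered LWT ("OFFLINE") when a client
# drops; the client publishes "ONLINE" itself on a clean connect.
# A monitor just folds those messages into a status table.
node_status = {}

def on_status_message(topic, payload):
    """Handle a message on status/<node> (topic layout assumed here)."""
    node = topic.split("/", 1)[1]
    node_status[node] = payload

on_status_message("status/room12-pi", "ONLINE")
on_status_message("status/room12-pi", "OFFLINE")  # broker published the LWT
offline = [n for n, s in node_status.items() if s == "OFFLINE"]
print(offline)  # ['room12-pi']
```

Publishing the status messages as retained means a monitor that restarts immediately sees the current state of every node.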
Completely outside of OH, but some other things you should consider if you haven’t already:
PXE Boot/Kickstart to deploy images to your RPis (supported on RPI 3 only I think)
Alternatively, Ansible/Chef/Puppet to standardize and remotely deploy and configure your RPi’s OS and configuration
Nagios or one of its many competitors to monitor all of these devices
Splunk or the ELK stack to centralize logging
Docker, with perhaps Kubernetes, to deploy your apps
Read Only configuration of your RPis if possible (probably not feasible with running OH on each RPi)
Standardized and easy approach to replacing failed hardware, in particular with this many RPis you will see upwards of ten SD card failures a month after the first few months of operation (another reason to look at PXE Boot/Kickstart, Ansible/Chef/Puppet)
a modified openHABian script might be an alternative to PXE Boot/Kickstart, though it will take longer for the RPi to come back online and your image may not be identical to all other RPis, which could cause some problems.
As soon as you move beyond a dozen computers, the importance of simplifying maintenance and systems administration grows drastically. It becomes even more important when dealing with unreliable SBC computers. I don’t know how much of this you are implementing, but I suspect that even just acknowledging these points in your thesis should earn you some extra credit.