How are you specifying the Items to be saved in your influxdb.persist file?
I’ve noticed that when using Groups, I have to restart OH after adding an Item to or removing one from the persisted Group before the change takes effect. In fact, I ran into this problem just yesterday.
I have been having issues with the InfluxDB persistence as well. I was trying to persist a group of Items using the everyChange strategy, but no data was getting sent to the database. I switched to an everyMinute strategy instead (everyMinute : "0 * * * * ?") and that has been working.
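For reference, a minimal influxdb.persist sketch combining the strategies discussed in this thread might look like the following (gSensors and its members are placeholder names, not from anyone’s actual setup):

```
// influxdb.persist — illustrative sketch; gSensors is a placeholder Group
Strategies {
    // cron-based strategies must be defined here before they can be used
    everyMinute : "0 * * * * ?"
    everyHour   : "0 0 * * * ?"
    default = everyChange
}

Items {
    // the asterisk persists every member of the Group, not the Group Item itself
    gSensors* : strategy = everyChange, everyHour
}
```

Note that this group-member syntax is exactly the case where, as mentioned above, adding or removing Group members seems to require an OH restart to take effect.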
I’m curious, @Dim, @ThomDietrich, et al, why would one want to use everyChange or everyUpdate with any other strategy (ignoring restoreOnStartup)?
Is there some use case I’m not seeing? To me it seems this would simply inject duplicate data into your DB for no real benefit, and it could actually cause problems, as I don’t think calls like lastUpdate can distinguish between an entry in the DB caused by everyUpdate and one caused by everyHour.
I don’t consider them duplicate data… just another data point in time
I gather various states, especially from my electricity meters, so it makes sense (to me) to have some data points on an hourly and daily basis (to see them in my grafana)
You have a valid point regarding the lastUpdate… I personally don’t use it in my rules…
Ps: my restoreOnStartup is against the tier1 persistence service (mapdb) and there I use only everyChange. I use influxdb as a tier2.
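A sketch of that two-tier setup, under the assumption that MapDB (which only ever keeps the most recent value per Item) handles restoreOnStartup while InfluxDB keeps the history:

```
// mapdb.persist — tier 1: most recent state only, used for restoreOnStartup
Strategies {
    default = everyChange
}

Items {
    // persist everything on change and restore it all after a restart;
    // the influxdb.persist file then needs no restoreOnStartup at all
    * : strategy = everyChange, restoreOnStartup
}
```
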
That is the part I don’t understand. If you use “last value” for the fill your graph should look the same whether you are saving every hour or not, right? I guess I’m still not seeing what the extra points actually add from a graphing and analysis standpoint.
I’m not trying to argue that it’s wrong. I really want to know what I’m missing. No matter how I look at it I see it causing problems or distortions of the data more than revealing anything that isn’t already captured by everyChange or everyUpdate.
There is one simple reason I can easily show in a Grafana screenshot:
Some Item values change pretty rarely. In that case the persistence service (DB or whatever) doesn’t have a well-founded understanding of the Item’s current state. openHAB could be offline for all we know. A regular update (one hour seems like a good choice) is a good compromise. Now that I think about it, I’ve basically implemented the Keep-Alive/TTL mechanism known from network protocols.
Wondering if I should add a chapter on this topic to the InfluxDB+Grafana tutorial. Seems slightly out of scope though…
I suppose it all depends on what one is persisting the data for. If it is strictly to drive Grafana then it makes some sense, though I still wonder whether your “Bad” field would look just fine with a “fill(last value)” instead of linear or what have you. Though then the graph may look a little more stair steppy for your tastes.
I have other mechanisms for that function. I honestly don’t look at my charts all that frequently, though am about to start analyzing data pretty soon on my power usage. I just installed a whole house zwave meter and last month’s bill was a lot more than the same time last year. I’ve some investigating to do.
But in this case, I still feel uncomfortable mixing periodic updates with event updates. But that is just me I guess.
I don’t know. It feels like it should be captured somewhere. But if you do capture it somewhere, be sure to include that mixing periodic and event updates in persistence will mess with the persistence calls in Rules. Off the top of my head, it will influence the following:
lastUpdate: the lastUpdate may be a periodic update rather than the result of an event
updatedSince: the periodic update will look like a real update giving false positives
averageSince: the periodic update will duplicate values in the average, skewing the calculation
previousState: the periodic update will look like an update, so you will get the HistoricItem associated with the periodic save rather than the actual previous state (important if you care about the time of the previousState)
sumSince: see averageSince
In short, mixing event-based and periodic saves to the DB breaks almost half of the persistence calls in the Rules. This should be captured as well.
tl;dr: pretty graphs, broken rules
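To illustrate the lastUpdate problem from the list above, here is a hypothetical Rules DSL sketch (MotionSensor is a made-up Item) that breaks once periodic saves are mixed in, because the periodic write refreshes lastUpdate even though no real event occurred:

```
// Hypothetical rule: alert when MotionSensor has been silent for two hours.
// If an everyHour strategy also persists MotionSensor, lastUpdate is
// refreshed by the periodic save and this condition may never become true.
rule "Stale sensor check"
when
    Time cron "0 0/15 * * * ?"   // run every 15 minutes
then
    val last = MotionSensor.lastUpdate("influxdb")
    if (last !== null && last.isBefore(now.minusHours(2))) {
        logWarn("admin", "MotionSensor has not reported for two hours")
    }
end
```
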
I’m lazier than you. I just don’t want to have to deal with NULL Items. Plus I have some state I like to persist across OH restarts. So I restoreOnStartup EVERYTHING. :-D
EDIT: I just ran a quick experiment, and if there is no data point during the time period of the graph then Grafana will not show a line at all.
Interesting and frustrating. I would have expected Grafana to use the last value in the DB, not just the last value over the time period the graph covers.
I may need to rethink some things. I think though all my Items update at least once an hour so if I change to everyUpdate instead of everyChange I should be covered. We will see. I’d hate to have to set up a completely separate DB just for graphs.
Interesting topic. I’m not a big fan of the other fill options. What those effectively do is “invent” data that isn’t there. I don’t like that. Let’s think about a scenario where my temperature sensor only updates once per hour: for up to 59 minutes my diagram would create the illusion of a steady temperature, while in reality my “Bad” (bathroom) could have turned into a sauna. I can already feel the surprise. As long as no evident data is available, I prefer the empty gap of ignorance.
I see. I’ve not really dealt with these rule functions so far and wasn’t aware.
In conclusion I think we agree that it is not easy to decide on the best strategy and it may depend on the device/item. I’ll continue to recommend [everyChange, everyHour] because I feel it’s a good middle path.
That is indeed quite annoying at times. In the same way I have to say that I don’t want to deal with outdated or misleading data (you see the pattern?). My system is completely config file based. Whenever I restart (or reinstall) openHAB I can be sure to end up with a system state that represents the current state of my reality. If a binding or device is not working properly, I will immediately become aware of that. No “ghost in the shell”.
Here are a few concepts that help with that:
Funnily enough, posting every hour seems the same to me. You are inventing a data entry just because it’s time.
But like you indicate with your example, almost all of this depends on what the data represents, how frequently it is updated, and what you are using the charts for.
So do you typically use linear, null, or 0 for fill? I’ve found that if I don’t use linear or last value the graphs are almost invisible for sparsely reported data. I suppose Bars would be a better choice than line graph in that case.
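For reference, the fill behaviour being discussed here lives in the GROUP BY clause of the InfluxQL query that Grafana generates; the measurement and field names below are made up for illustration:

```sql
-- fill(null) leaves gaps, fill(previous) carries the last value forward,
-- fill(linear) interpolates between points, fill(0) substitutes zero
SELECT mean("value")
FROM "Temperature_Bad"
WHERE time > now() - 24h
GROUP BY time(5m) fill(null)
```

Grafana’s “last value” fill option corresponds to fill(previous) in the generated query.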
I think it depends on which side you attack that problem from. I suspect you don’t use restoreOnStartup but have System started rules, polling, and/or other mechanisms in place to get the current state of everything.
I do the same, but in the other direction. I let everything restoreOnStartup and have System started rules and other mechanisms to figure out what the current state is for those Items where it matters (which is less than half).
In neither case are the rules really operating on outdated data. For me, since more than half of my Items are storing states, things that don’t change frequently, virtual Items, and stuff like that it is less work to restoreOnStartup everything and update those Items that need it than it is to repopulate everything.
But I think we are actually on the same page.
As one example: even though I restoreOnStartup everything including presence, I have a System started rule that flips presence to OFF. Then the sensors need to report someone home to show the house as occupied. I do the same for my services monitors (simple pings of ports where various services run). When OH comes up, restoreOnStartup everything and have a System started rule mark all the services as OFFLINE and wait for the sensors to report ONLINE. For my MQTT stuff I wrote them so when OH comes up it publishes a “tell me everything” message and all the MQTT sensors report their current state.
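A sketch of what that startup sequence could look like in the Rules DSL — the Group, broker name, and topic below are placeholders, and publish() is assumed to be the OH 1.x MQTT action:

```
// Sketch of a "tell me everything" startup rule (names are made up)
rule "Refresh sensors on startup"
when
    System started
then
    // restoreOnStartup has already run; now mark all services OFFLINE
    // until the sensors prove otherwise
    gServices.members.forEach[ s | s.postUpdate(OFF) ]

    // ask every MQTT sensor to republish its current state
    publish("mosquitto", "home/cmd/refresh", "tell me everything")
end
```
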
I think we have all come to the same conclusion. There is no solid rule of thumb we can offer. The best approach depends on the detailed specifics of what you are doing.
Just checked and if I didn’t miss one, all my queries are using null.
I believe we are. I believe what others (who didn’t get lost already) could learn from this discussion, is that “state after startup” matters and has to be dealt with somehow.
Did you encounter any propagation errors with that approach? My experience taught me that injecting data that looks valid but isn’t can be tricky. Let’s say the restoreOnStartup presence is ON. After a few seconds the System started rule switches it to OFF. After a few more seconds the associated binding switches it to ON. That is two valid Item state changes by which rules could be triggered, with unwanted effects. Is that wise?
@rlkoshak sorry for the incremental answer. The answer is complete now.
I suppose it depends on the nature of the Rules that get triggered, the timing, and/or whether you have any rules at all that get triggered. It could very well be a problem.
For me, it hasn’t been a problem. But most of what occurs in my rules is informational (e.g. generating alerts). I might get a stray pair of alerts, though I can’t remember that ever happening (actually that isn’t true: I have one Arduino that is slow to respond, so when OH restarts I get an “It’s Offline” message followed by an “It’s Online” message). I might scrub my rules more thoroughly at some point just to be sure I’m not doing stuff that might cause problems in the long run. Maybe I’m just getting lucky with the timing.
Except for the lighting and alerts, my automation is such that a momentary toggle back and forth for these Items in question would not be noticed. Stuff like setting the Nest to Away mode, generating alerts when doors open, etc.
I’ve looked at all my System started rules.
admin.rules (monitors various sensors and services and reports when they go offline)
I generate an alert message for all of the services that were offline when OH went offline and restart the Expire based Timers for those that were online when OH went down. No problems identified.
entry.rules (generates alerts when a door opens/closes and no-one is home, a door is left open for more than an hour)
Reset the Expire binding timers that trigger the alerts when the Doors are left open more than an hour. Publishes an MQTT request to get the latest update from the sensors. Any changes in the states of the doors will get updated by the responses from the sensors. I have a known flaw here where if the sensor itself goes offline the door will get stuck in its last state and I have a solution for it, just not the time right now. The door sensors are very reliable so it hasn’t been a priority yet.
presence.rules (keeps track of presence)
Sets the vPresent Item used to indicate presence (proxy Item), as well as all of the members of gPresent, to OFF. The BT sensors I have will update with the MQTT request already sent out from entry.rules. The network-based sensors will update almost immediately. I’ve looked at the impact this causes to other rules. The only rule that gets triggered by these changes does not cause any effects. The time differential between the System started rule firing and the latest “true” state is less than a second on average. Other rules that fire during this time could generate “false” alerts, but in practice this has never happened.
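A sketch of that reset, using the vPresent and gPresent Items as described above:

```
// Sketch — vPresent is the presence proxy Item, gPresent the sensor Group
rule "Reset presence on startup"
when
    System started
then
    vPresent.postUpdate(OFF)
    gPresent.members.forEach[ p | p.postUpdate(OFF) ]
    // BT sensors re-report via the MQTT refresh request from entry.rules;
    // network-based sensors update again almost immediately on their own
end
```
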
weather.rules (Time of Day Design Pattern)
Recalculates the time of day on the off chance that it changed while OH was down.
None of the rest of my rules are impacted by these changes so I’ve either consciously (and forgotten) or unconsciously designed my system to avoid this as a problem.