Reported Time vs. Actual Time of Event

Hey,
I am currently parsing my heater unit’s information into openHAB. The problem is that the data the heater reports covers the last two hours, so when the heater runs from 14:00 until 15:30, I only get this information at 16:00. I was wondering whether there has ever been any intent to support attaching a timestamp to a reported value. In my example I would prefer to have 14:00 persisted rather than 16:00. I think this is also relevant for other events: for example, if a sensor has a bad connection and reports a dangerous wind speed two hours after it actually happened, it no longer makes sense to react to it.

Has this ever been considered? If so, my apologies for bringing it up again.

Karsten

Not sure what you are looking for.
If an “information packet” arrives at openHAB carrying a past timestamp for the reading, the binding could/should make that available as a datetime type channel.
It’s up to the user what to do with that provided info.
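
Purely as a sketch of what that could look like on the user side (a made-up binding, thing, and channel, not any real binding’s API), the Items might be:

```
// Hypothetical Items: the binding exposes the reading itself and the
// device-reported timestamp as two separate channels
Number   Heater_Energy       "Heater energy [%.1f kWh]"    { channel="myheater:unit:local:energy" }
DateTime Heater_ReportedTime "Reading taken [%1$tF %1$tR]" { channel="myheater:unit:local:lastReadingTime" }
```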

There are a bunch of existing “last updated” features and common techniques for, say, a sensor reading Item.
These tell you when the Item was updated, of course.
I cannot see that it would be a good idea to be able to manipulate that timestamp - that would essentially be telling lies about when the OH Item was updated.
You’d need instead to maintain a separate record of the “time reading taken” part - is that what you mean?
At the moment that could be done with separate Items.
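
For instance, a minimal Rules DSL rule along these lines (WindSpeed and WindSpeed_LastReading are hypothetical Items):

```
rule "Record when the wind speed reading arrived"
when
    Item WindSpeed received update
then
    // new DateTimeType() defaults to "now", i.e. the arrival time at openHAB;
    // the device's own "time reading taken" would have to come from the binding
    WindSpeed_LastReading.postUpdate(new DateTimeType())
end
```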
Are you thinking of some new property, a “time of validity” extension?
It could also play into other time-related properties; “best before” sounds useful.

I think a separate channel is quite tricky, in particular for persistence. What I mean is to have a separate record, and the “best before” record also sounds quite useful. It would also allow batched delivery of packets.
So an event would have three fields (a rough sketch of how this could be emulated today follows the list):

  • Arrival time (when did the event arrive at OH)
  • Time of event (by default the same as arrival time, but may be set to something different by the binding)
  • Validity (defaults to forever, but indicates for how long this event can be acted on before a new measurement is required)
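
None of this exists in core today, but the “validity” idea can be approximated with what is already available, i.e. separate Items and a Rule. A sketch (openHAB 3 Rules DSL; WindSpeed and WindSpeed_EventTime are hypothetical Items, and the ten-minute window is arbitrary):

```
rule "Act on wind speed only while the reading is still valid"
when
    Item WindSpeed received update
then
    // WindSpeed_EventTime is a hypothetical DateTime Item holding the
    // device-reported time of the measurement (see the earlier posts)
    val eventTime = (WindSpeed_EventTime.state as DateTimeType).getZonedDateTime()
    // treat anything older than 10 minutes as past its "best before"
    if (eventTime.isAfter(now.minusMinutes(10))) {
        logInfo("wind", "Fresh reading, acting on " + WindSpeed.state)
        // ... react to the wind speed here ...
    } else {
        logInfo("wind", "Stale reading from " + eventTime + ", ignoring")
    }
end
```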

Not too clear what you mean by “event” here.
Bindings have channels as their link to the rest of openHAB. These come in two flavours: static state or trigger event.
A validity/best-before parameter has no meaning for a trigger event; it just happens when you know about it.
A root cause timestamp attached to a trigger event has some meaning, I suppose.

Based on what I understand you to be asking for, I don’t know whether it has ever been considered as part of the core. It shouldn’t be too hard to add a MyItem.persist(<db>, <timestamp>) to the Item class, and storing a state with an arbitrary timestamp is in fact already supported in the REST API.

But as rossko57 points out, the tricky part is getting that timestamp in the first place. That will require a separate Channel and a Rule to call that new persist method (or make the REST API call) to store the state with the reported timestamp.
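
As a sketch of what that Rule half could look like (openHAB 3 Rules DSL; Heater_Energy and Heater_ReportedTime are hypothetical Items, the URL depends on your setup, and your API security settings may require a token):

```
rule "Persist heater reading at its reported time"
when
    Item Heater_Energy received update
then
    // Convert the device-reported timestamp to ISO-8601 UTC ("...Z"),
    // which the REST persistence endpoint accepts without URL encoding
    val ts = (Heater_ReportedTime.state as DateTimeType).getZonedDateTime().toInstant().toString
    // PUT /rest/persistence/items/{item}?time=...&state=... stores the state at the
    // given time in the default persistence service (add &serviceId=... to pick another)
    sendHttpPutRequest("http://localhost:8080/rest/persistence/items/Heater_Energy"
        + "?time=" + ts + "&state=" + Heater_Energy.state.toString)
end
```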

It’s important to remember that Persistence needs to support all use cases, which means it ends up supporting the lowest common denominator. Your use case is a pretty rare one, so I wouldn’t expect it to be supported directly. You will have to write Rules for this, or modify the binding.

And where does the Time of Event come from? For 99% of all the bindings, there is no such thing. There is only Arrival Time.

Persistence works on Item events. There is no way to pass this Time of Event to the Persistence engine because there is no concept of a Time of Event on the event bus.

The databases currently support only one timestamp per record. Time series databases like InfluxDB in particular index every point on a single timestamp, so there is no natural way to store a record with two timestamps and a state.

So, to support this unusual use case in OH directly without needing you to write Rules would require:

  • changes to all bindings
  • changes to the event bus
  • changes to everything that works off of the event bus
  • changes to all of the databases people already have set up and configured for their persistence data
  • no longer being able to support time series databases like InfluxDB, one of the most popular databases

This really seems to be massively disruptive to support a pretty rare use case.