OH4: Retrieve range of historic states from persistence

Hi All

I am trying to calculate the “trend” or “slope” of the states stored for an item measuring the Voltage of my UPS/Inverter battery.

One of the suggestions on the forum was to use the 1st derivative; another option would be to calculate the slope of the persisted data over a fixed period. This would allow me to work out whether the values are increasing, static or decreasing - which can be interpreted as charging, idle or discharging.

Both of these options require access to multiple values of persisted data along with their timestamps.

I have now set up persistence for this device everyMinute (I could not find how to do this every 30 seconds.)

What I am now trying to find is a way to retrieve all the persisted values between now and 5 minutes ago. I don’t know the exact timestamps of the persisted data.

I then need to be able to access the value and timestamp of each of these entries.

I have not been able to find a simple way to do this - though I suspect it should be possible as that is surely how charts are created?

I would prefer to use Blockly (just moved all rules to Blockly).

Does anyone have any suggestions of how I would do this or at least where to start? My coding abilities are limited at best


Create a custom strategy in the .persist file with an every-30-seconds cron expression.

In a rule? Unfortunately there is no way to do this through OH Actions yet (there’s an issue open to add it). So you’ll need to either use the raw API provided by the database you are using (you don’t say which one), probably via executeCommandLine, or use the sendHttpX Actions to query the database through the openHAB REST API. In both cases you’ll have to parse the values you want out of the results, which will come to you as a String.

Rules run on the server side. Charts run on the client side (i.e. in the browser). Charts use the REST API to pull all the values and their corresponding timestamps as a big JSON string, which they then parse and pass to the charting library.
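For what it’s worth, the JSON that comes back from that endpoint looks roughly like the sketch below. This is a standalone illustration - the field names reflect my reading of `/rest/persistence/items/<item>` responses (`time` in epoch milliseconds, `state` and `datapoints` as strings), and the sample values are made up:

```javascript
// Sketch of parsing a persistence REST response (sample data is invented).
var sample = JSON.stringify({
  name: "Shelly_UNI_Persist_1Min_Voltage_ADC",
  datapoints: "3", // the count arrives as a string
  data: [
    { time: 1700000000000, state: "12.4" },
    { time: 1700000060000, state: "12.5" },
    { time: 1700000120000, state: "12.6" }
  ]
});

var parsed = JSON.parse(sample);
var count = parseInt(parsed.datapoints, 10);
var values = [];
for (var i = 0; i < count; i++) {
  values.push({
    seconds: (parsed.data[i].time - parsed.data[0].time) / 1000, // relative time
    volts: parseFloat(parsed.data[i].state)                      // numeric state
  });
}
```

From there each entry gives you a (timestamp, value) pair to feed whatever trend calculation you choose.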

I’m not sure this is feasible in Blockly. Neither executeCommandLine nor the sendHttpX Actions exist in Blockly, and its ability to parse text is relatively primitive. You’ll end up writing most of the rule as inline script blocks, at which point you may as well write the whole thing in JS.

About all I can recommend is:

  • wait for the PR to be submitted to return multiple results
  • file an issue to add first derivative as a persistence action
  • see if you can approach your end goal from a statistical perspective: many of the standard statistics calculations are supported including variance and evolutionRate.
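On that last point: my understanding is that evolutionRate is essentially the percentage change between a historic value and the current one, which on its own tells you the direction of the trend. A rough standalone illustration of that arithmetic (the formula is my reading of the action, not copied from the openHAB source):

```javascript
// Rough illustration: evolutionRate ~ percentage change between a historic
// value and the current one. Positive => rising (charging), negative =>
// falling (discharging), near zero => idle.
function evolutionRate(historicValue, currentValue) {
  return ((currentValue - historicValue) / historicValue) * 100;
}

var rising = evolutionRate(12.0, 12.6);  // voltage went up => positive
var falling = evolutionRate(12.6, 12.0); // voltage went down => negative
```

So even without access to the full range of datapoints, a single persistence call can classify charging vs. discharging.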

I did try that with:

Strategies {
    every30Seconds : "0/30 0 0 ? * * *"
    everyMinute : "0 * * * * ?"
    everyHour : "0 0 * * * ?"
    everyDay : "0 0 0 * * ?"
    default = everyMinute,everyChange,restoreOnStartup
}

Items {
    Persistence_Group* : strategy = everyMinute,everyChange,restoreOnStartup
    Shelly_UNI_Persist_1Min_Voltage_ADC : strategy = every30Seconds,restoreOnStartup
    Persistence_Group_MAPDB* : strategy = everyChange,restoreOnStartup
}

In this case nothing gets persisted. Further googling seemed to indicate that the minimum is every minute?

I will have to look at your other suggestions in more detail - but I am starting to suspect that this may be beyond my current abilities.

EDIT: No errors when loading the influxdb.persist file, but nothing gets persisted. If I change back to everyMinute I see results.

Comparing the two entries

every30Seconds : "0/30 0 0 ? * * *"
everyMinute : "0 * * * * ?"

it looks like every30Seconds is not correct.
While every30Seconds has 7 fields, everyMinute has 6.
Besides that, with minute and hour both fixed to 0, it would only trigger every 30 seconds during the first minute of the first hour of each day.
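For reference, the Quartz-style cron expressions used in .persist files have six required fields plus an optional seventh (year), read left to right:

```text
second  minute  hour  day-of-month  month  day-of-week  [year]
0/30    *       *     *             *      ?                    -> every 30 seconds
0/30    0       0     ?             *      *             *      -> only 00:00:00 and 00:00:30
```

So the original expression pinned the minute and hour fields to 0 (and added a year field), which is why almost nothing was persisted.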


every30Seconds : "0/30 * * * * ?"

Hi, the above correction is important, otherwise you miss the last datapoint and ruin your average. I would also change everyChange to everyUpdate, to avoid missing points when the data doesn’t change.

I’ve also got something like this running to get 0th-, 1st- and 2nd-order averages. Another important point: if you want to show the average and the actual item in the graph, remember you’re time-shifting data - for instance, a 10-minute average fits the point in the middle, so 5 minutes ago. Of course you could use predictions, but these can be very wrong in some situations (sudden peaks). I’ve got an algorithm for that too.

Anyway … as for your question. My old script used the persistence REST API to get and process the data. Maybe it will help you; it’s not Blockly. Here’s the relevant part.

var ZonedDateTime = Java.type("java.time.ZonedDateTime");
var HttpUtil = Java.type("org.openhab.core.io.net.http.HttpUtil");

var now = ZonedDateTime.now();

// query window: from (period + 10) seconds ago up to 10 seconds ago
var periodStart = now.minusSeconds(period + 10);
var periodEnd = now.minusSeconds(10);

// "found" is the Item being processed; the REST API returns the persisted
// values and their timestamps as one JSON string
var result = HttpUtil.executeUrl("GET",
    "http://{your servername}:8080/rest/persistence/items/" + found.getName()
    + "?starttime=" + periodStart.toLocalDateTime()
    + "&endtime=" + periodEnd.toLocalDateTime(), 2000);
var data = JSON.parse(result);
var n = parseInt(data.datapoints); // number of points (returned as a string)

// loop through the datapoints
for (var i = 0; i < n; i++) {
    var point = data.data[i];
    var px = (point.time - data.data[0].time) / 1000.0; // seconds since first point
    var py = 1.0 * point.state;                          // state as a number

    // => use px and py for your logic
}
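Since the goal of the thread is the trend, here is one standalone way to turn those (px, py) pairs into a slope: an ordinary least-squares fit. This is plain JS with no openHAB APIs, shown on invented sample values:

```javascript
// Least-squares slope through (x, y) points: positive => charging,
// negative => discharging, near zero => idle.
function slope(points) {
  var n = points.length;
  var sx = 0, sy = 0, sxx = 0, sxy = 0;
  for (var i = 0; i < n; i++) {
    sx += points[i].x;
    sy += points[i].y;
    sxx += points[i].x * points[i].x;
    sxy += points[i].x * points[i].y;
  }
  // standard least-squares formula; units here: volts per second
  return (n * sxy - sx * sy) / (n * sxx - sx * sx);
}

// Voltage rising 0.1 V per minute => slope of about 0.1/60 V/s
var trend = slope([
  { x: 0,   y: 12.0 },
  { x: 60,  y: 12.1 },
  { x: 120, y: 12.2 }
]);
```

A least-squares fit uses every point in the window, so a single noisy sample disturbs the result less than simply comparing the first and last values.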


This is the old/default JS script. In the new ECMA2021 this is a bit easier: you don’t need to do all the imports, they’re already done and standard methods are available. But the setup is the same.

    let response=actions.HTTP.sendHttpGetRequest("http://{your servername}:8080/rest/persistence/items/"+brinePump.name+"?starttime="+start.toLocalDateTime()+"&endtime="+end.toLocalDateTime());

And to end my post: I switched to another setup. I use this quite a lot, and all the persistence calls put load on the Raspberry Pi, reducing performance. So now I use a cached setup: I read only the actual values and keep the last “x” values in memory, and use that memory cache to perform the calculations. Another advantage of this is that you don’t need to change the persistence settings, because you’re not using them.
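The cached setup described above can be sketched as a small fixed-size buffer. This is a plain-JS illustration with no openHAB APIs - in a real rule you would keep the array somewhere that survives between rule runs (e.g. openHAB’s shared cache) and push into it from an item-changed or item-updated trigger:

```javascript
// Keep only the last N samples in memory and compute over those,
// instead of querying persistence on every rule run.
function makeSampleBuffer(maxSize) {
  var samples = [];
  return {
    push: function (time, value) {
      samples.push({ time: time, value: value });
      if (samples.length > maxSize) samples.shift(); // drop the oldest sample
    },
    average: function () {
      var sum = 0;
      for (var i = 0; i < samples.length; i++) sum += samples[i].value;
      return sum / samples.length;
    },
    size: function () { return samples.length; }
  };
}

var buf = makeSampleBuffer(3);
buf.push(0, 12.0);
buf.push(30, 12.2);
buf.push(60, 12.4);
buf.push(90, 12.6); // buffer is full, so the oldest value (12.0) is dropped
```

The trade-off: the buffer is empty after a restart, whereas persistence survives one.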

Thank you. That works for the 30 seconds

Thank you for the suggestions. I will take a look at your scripts and see how to implement.

I have now managed to get blockly to retrieve the last 5 entries (persisted everyMinute) though I cannot seem to access the actual time. I don’t think the time is important to calculate the trend however - and the TREND is what I am trying to get to.

Will keep working at it.

Yeah, openHAB caches, so the most recent value is usually missing whatever persistence strategy you use. If you only want the slope, maybe the easiest approach is to use the historic state of the item (item.historicstate.averagebetween(t1,t2), I think). Ask for the average from 5 minutes ago to now, and for 10 minutes ago to 5 minutes ago. Subtract them and divide by 5 minutes and you have the slope.
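The arithmetic behind that two-average approach is tiny. A standalone sketch on invented numbers (in a real rule the two averages would come from persistence calls over the two windows):

```javascript
// Trend from two window averages: (recent avg - older avg) divided by the
// time between the windows. Units here: volts per minute.
function slopeFromAverages(olderAvg, recentAvg, minutesBetween) {
  return (recentAvg - olderAvg) / minutesBetween;
}

// Older window (10-5 min ago) averaged 12.1 V, recent window (last 5 min)
// averaged 12.6 V, and the windows are 5 minutes apart.
var voltsPerMinute = slopeFromAverages(12.1, 12.6, 5);
```

Averaging each window first also smooths out single noisy samples, which matters for a battery voltage that jitters.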