[SOLVED] Persistence confusion

Hi All,

I have persistence set up for many items, with strategies defined: one for charting and another for state restoration.

I am currently trying to add power usage to a chart and then use the data to monitor costs.
I have rules set up that use deltaSince to look at daily, weekly, monthly and quarterly (90-day) values, which I then convert to costs over each period.

The issue I seem to be having is that the duration covered by the data points stored by the rrd4j service does not seem to go beyond 5 days. When I check now I see data with an epoch timestamp going back only two days, yet yesterday it went back up to 5 days.

Is there a way I can add a logInfo that reports how many days' worth of data is available for an item? That way I can at least provide evidence of a change, which I currently suspect but have not recorded well.
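Something like this is what I have in mind (untested sketch; it assumes historicState returns null once you ask for a time before the oldest stored datapoint, and the item name and cron expression are just examples):

```
rule "Log rrd4j data depth"
when
    Time cron "0 0 1 * * ?"   // shortly after midnight
then
    // Probe backwards one day at a time until historicState finds nothing.
    var int days = 0
    while (days < 120 && POWSO1_Total.historicState(now.minusDays(days + 1), "rrd4j") != null) {
        days = days + 1
    }
    logInfo("energy.rules", "rrd4j holds roughly " + days + " day(s) of data for POWSO1_Total")
end
```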

I am not sure what OH information is most useful here so please ask if you would like to see any particular area in order to assist.

Thanks

paul

You should definitely start with showing your RRD configuration.

OK, here goes with some details from my config.

rrd4j.persist

Strategies {
  everyMinute   : "0 * * * * ?"
  every5Minutes : "0 */5 * * * ?"
  everyHour     : "0 0 * * * ?"
  everyDay      : "0 0 0 * * ?"
  default = everyChange		
}

Items {
// gTest			: strategy = everyMinute
// use restoreOnStartup to save the setting

 gBR1_Temp*		: strategy = everyMinute
 gBR2_Temp*		: strategy = everyMinute
 gBR3_Temp*		: strategy = everyMinute
 gBR4_Temp*		: strategy = everyMinute
 gLR_Temp*		: strategy = everyMinute
 gGR_Temp*		: strategy = everyMinute
 gHO_Temp*		: strategy = everyMinute
 gBR1_Humid*	: strategy = everyMinute
 gBR2_Humid*	: strategy = everyMinute
 gBR3_Humid*	: strategy = everyMinute
 gBR4_Humid*	: strategy = everyMinute
 gLR_Humid*		: strategy = everyMinute
 gGR_Humid*		: strategy = everyMinute
 gHO_Humid*		: strategy = everyMinute


 // global groups
 chartpersist*	: strategy = everyMinute
 statepersist*	: strategy = everyChange, restoreOnStartup
 }

Here is the item definition:

Number POWSO1_Total "Total Power" (chartpersist) { mqtt="<[MQTT:tasmota/POWSO1/tele/SENSOR:state:JSONPATH($.ENERGY.Total)]" }

And here is the rule example:

val Number kWh_cost = 0.1779
val Number kWh_co2 = 0.1

rule "Midnight Freezer updates"
	when
	Time is midnight
	//Time cron "0 * * ? * *"
	then
	Thread::sleep(2000)
	logInfo("energy.rules", "Freezer")
	var POWSO1_TCd = POWSO1_Total.deltaSince(now.minusDays(1))
	var POWSO1_TCw = POWSO1_Total.deltaSince(now.minusDays(7))
	var POWSO1_TCm = POWSO1_Total.deltaSince(now.minusDays(30))
	var POWSO1_TCq = POWSO1_Total.deltaSince(now.minusDays(90))
	if (POWSO1_TCd != null) {
		postUpdate(POWSO1_Total_kwh_day,POWSO1_TCd)
		//logInfo("energy.rules", "Freezer:1d " + POWSO1_TCd)
		logInfo("energy.rules", "Total Power - yesterday " + POWSO1_Total_kwh_day.state)
	}
	if (POWSO1_TCw != null) {
		postUpdate(POWSO1_Total_kwh_week,POWSO1_Total.deltaSince(now.minusDays(7),"rrd4j"))
		//logInfo("energy.rules", "Freezer:7d " + POWSO1_TCw)
		logInfo("energy.rules", "Total Power - last week " + POWSO1_Total.deltaSince(now.minusDays(7),"rrd4j"))
	}
	if (POWSO1_TCm != null) {
		postUpdate(POWSO1_Total_kwh_month,POWSO1_Total.deltaSince(now.minusDays(30),"rrd4j"))
		//logInfo("energy.rules", "Freezer:30d " + POWSO1_TCm)
		logInfo("energy.rules", "Total Power - last month " + POWSO1_Total.deltaSince(now.minusDays(30),"rrd4j"))
	}
	if (POWSO1_TCq != null) {
		postUpdate(POWSO1_Total_kwh_quarter,POWSO1_Total.deltaSince(now.minusDays(90),"rrd4j"))
		//logInfo("energy.rules", "Freezer:90d " + POWSO1_TCq)
		logInfo("energy.rules", "Total Power - last quarter " + POWSO1_Total.deltaSince(now.minusDays(90),"rrd4j"))
	}

	if (POWSO1_Total_kwh_day.state != NULL) {
		POWSO1_Total_Cost_day.postUpdate((POWSO1_Total_kwh_day.state as DecimalType) * kWh_cost)
		logInfo("energy.rules", "Total Costs - yesterday " + POWSO1_Total_Cost_day.state)
	}
	if (POWSO1_Total_kwh_week.state != NULL) {
		POWSO1_Total_Cost_week.postUpdate((POWSO1_Total_kwh_week.state as DecimalType) * kWh_cost)
		logInfo("energy.rules", "Total Costs - last week " + POWSO1_Total_Cost_week.state)
	}
	if (POWSO1_Total_kwh_month.state != NULL) {
		POWSO1_Total_Cost_month.postUpdate((POWSO1_Total_kwh_month.state as DecimalType) * kWh_cost)
		logInfo("energy.rules", "Total Costs - last month " + POWSO1_Total_Cost_month.state)
	}
	if (POWSO1_Total_kwh_quarter.state != NULL) {
		POWSO1_Total_Cost_quarter.postUpdate((POWSO1_Total_kwh_quarter.state as DecimalType) * kWh_cost)
		logInfo("energy.rules", "Total Costs - last quarter " + POWSO1_Total_Cost_quarter.state)
	}
end

The number of data points seems to hover between 358 and 360 from what I have seen. Here is a snippet of the output from the REST interface for the data:

{
  "name": "POWSO1_Total",
  "datapoints": "359",
  "data": [
    {
      "time": 1517377440000,
      "state": "23.259666666666664269769171369262039661407470703125"
    },
    {
      "time": 1517377680000,
      "state": "23.263000000000001676880856393836438655853271484375"
    },
    {
      "time": 1517377920000,
      "state": "23.268000000000000682121026329696178436279296875"
    },
    {
      "time": 1517378160000,
      "state": "23.271333333333334536519032553769648075103759765625"
    },
    {
      "time": 1517378400000,
      "state": "23.272999999999999687361196265555918216705322265625"
    },
    {
      "time": 1517378640000,
      "state": "23.273333333333329875358685967512428760528564453125"
    },

Thanks for looking at this for me.

Regards

Paul

What does your rrd4j.cfg look like? If you don’t have one, rrd4j should use the defaults for the archives, which are:
[consolidation function: AVERAGE, archives (covering time / resolution): 1 (8 hrs / 1 min), 2 (24 hrs / 4 min), 3 (150:16 hrs / 14 min), 4 (30 days / 1 hr), 5 (365 days / 12 hrs), 6 (3640 days / 7 days)]

Looking at the time differences, you seem to have a setup which does NOT store every minute; however, that is required for rrd4j! I assume your rrd4j.cfg has set a heartbeat of 240 seconds instead of 60.
Concerning the number of datapoints you see in the REST return, I also assume that you used the default settings for the timeframe from which datapoints are requested; this default is the last 24 hours (within 24 hours there are 360 four-minute steps!).
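The arithmetic behind that 360 figure is easy to check, for example (just a sanity check, nothing openHAB-specific):

```python
# Archive 2 consolidates to one datapoint per 4 minutes, so a default
# 24-hour REST query window contains exactly 360 points.
step_minutes = 4
window_minutes = 24 * 60  # 1440 minutes in a day
print(window_minutes // step_minutes)  # 360
```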

Hi there, the last paragraph certainly explains why I see around 360 datapoints when I check the REST API.

I now see a week's worth of data coming through, but I think it is taking longer than a week to accumulate the week's data, more like 9 days (guessing).

The rrd4j.cfg content is commented out (I guess that means I am using the defaults). I must confess you have lost me with the initial paragraphs of your response; I will read it through a couple more times and see if I can get on the same page.

# please note that currently the first archive in each RRD defines the consolidation
# function (e.g. AVERAGE) used by OpenHAB, thus only one consolidation function is
# fully supported
#
# default_numeric and default_other are internally defined defnames and are used as
# defaults when no other defname applies

#<defname>.def=[ABSOLUTE|COUNTER|DERIVE|GAUGE],<heartbeat>,[<min>|U],[<max>|U],<step>
#<defname>.archives=[AVERAGE|MIN|MAX|LAST|FIRST|TOTAL],<xff>,<steps>,<rows>
#<defname>.items=<list of items for this defname> 

I thought having the strategy set as below meant that I was supplying a value every minute; the value may not change, but it is being provided. At least that's what I thought.
chartpersist* : strategy = everyMinute

Hi there, I’m sorry for the confusion.
Let me try to explain in more detail.

An rrd4j database consolidates older data in order to keep a non-growing size.
To do that, rrd4j holds only a fixed number of data points.
The minute-wise persisted data is held in the first archive (and this one is not consolidated). Using the default settings, the first archive holds data for 8 hours.
Each subsequent archive consolidates some datapoints of archive 1, for example an average of 4 consecutive data points (archive 2 in the defaults).
Having only those two archives, every minute a data point in archive 1 would be filled, and every fourth minute a point in archive 2. This second archive also has a fixed size (360 in the defaults, covering 24 hours since each point represents 4 minutes).
All further archives are just using other numbers.
If you do not want any consolidation (average, max, min, etc.) you could just have an archive 1. Just define the number of persisted data points according to your FIXED needs.

When displaying data from an rrd4j database in a chart, you have to understand that each chart is made out of a single archive: the archive that covers the whole requested time is used. For example, using the default settings, a chart covering the last 24 hours will display archive 2 data only.

Thanks for the clarification.
Now I understand the way the rrd consolidation works, meaning less
accuracy as the averaging period gets larger.

The Sonoff Tasmota POW devices are providing measurements every 5 minutes,
which throws the accuracy even further out of whack.

I am therefore looking for a persistence solution that will take in
measurements every 5 minutes and provide good cost figures for periods
such as daily, weekly, monthly and quarterly (3 months).
It also needs to support charting and restoration of values
upon startup.

It seems I have chosen poorly for my requirements in selecting RRD? Is
that correct?
If so, do you have a recommendation? I do need it to be lightweight, as this
is running on an RPi3, and RRD seems to be causing problems when I change
things: around 30 minutes of high load and iowait, then it
seems to settle down to very low again.

Thanks

Paul

Rrd4j is lightweight!
If you need data for such a fixed duration (the max was a quarter year), I would use rrd4j without data consolidation (i.e. using archive one only). Storing data for a quarter year every minute (which is required by rrd4j, although you get a new reading only every 5 minutes) means storing 131400 values for each item you want to persist; that should not be too much (IMHO).
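The 131400 figure follows from one value per minute over a quarter of a year (365 / 4 = 91.25 days):

```python
minutes_per_day = 24 * 60       # 1440
quarter_year_days = 365 / 4     # 91.25
values_per_item = int(quarter_year_days * minutes_per_day)
print(values_per_item)  # 131400
```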

The suggested DB for restoreOnStartUp is still MAPDB.

Thanks!
How would I write the rrd4j.cfg file to use archive 1 only, and can I expect to start from scratch with the data points?

Thanks

Paul

I would do it like this:

TempLogger.def=GAUGE,90,0,100,60
TempLogger.archives=MAX,.5,1,132500
TempLogger.items=YourItem1, YourItem2,.....

I used min 0 and max 100 assuming you are using °C; if not, please adjust.
The consolidation function is NOT used for archive 1 (the max of one value is always that value!).
Put all your items into the .items line.
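For reference, here is the same example annotated against the field template from the rrd4j.cfg comments quoted earlier (my reading of the fields; the values themselves are unchanged):

```
# <defname>.def = datasource type, heartbeat (s), min, max, step (s)
TempLogger.def=GAUGE,90,0,100,60
# <defname>.archives = consolidation fn, xff, steps consolidated per row, number of rows
TempLogger.archives=MAX,.5,1,132500
TempLogger.items=YourItem1, YourItem2
```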

Putting your examples and the wiki descriptions together, I think for a power measurement that keeps getting larger I would use the following?

powerLogger.def=COUNTER,90,0,U,60
powerLogger.archives=MAX,.5,1,132500
powerLogger.items=YourItem1, YourItem2,…

Does that seem correct to you?

Thanks
Paul

Looks OK to me, although I have never used a COUNTER, nor have I stored a growing value.

It seems to be working. :smiley:

I have also updated my code to provide an estimated value after the first day's measurement, which is refined with measured data when it becomes available.
I will look at posting in a few weeks when I see the month data become measured.

Regards

Paul