RRD4J strange values

I just started using RRD4J for storing my power consumption via some Xiaomi plugs.
I get some very strange values; here are some from the REST API:

{
  "name": "intPlug_PC_Load",
  "datapoints": "9",
  "data": [
    {
      "time": 1515586200000,
      "state": "14608731.0136054418981075286865234375"
    },
    {
      "time": 1515586500000,
      "state": "7.13999999999999968025576890795491635799407958984375"
    },
    {
      "time": 1515588600000,
      "state": "0"
    },
    {
      "time": 1515597600000,
      "state": "0"
    },
    {
      "time": 1515597900000,
      "state": "5.45999999999999996447286321199499070644378662109375"
    },
    {
      "time": 1515598200000,
      "state": "10.910000000000000142108547152020037174224853515625"
    },
    {
      "time": 1515598500000,
      "state": "14316542.96333333291113376617431640625"
    },
    {
      "time": 1515598800000,
      "state": "14316555.13333333469927310943603515625"
    },
    {
      "time": 1515599100000,
      "state": "14316556.813333332538604736328125"
    }
  ]
}

First of all, is there an easy way to interpret the timestamp?
I can't seem to recognize the state values; they don't seem to reflect my item states, at least. These should be in watts, so my computer seems to use a lot of electricity sometimes. :smiley:

Here is my setup. I'm completely new to this, so I might have some strange settings here (feel free to suggest others).

rrd4j.persist:

// Persistence strategies have a name and a definition and are referred to in the "Items" section
Strategies {
    everyMinute : "0 * * * * ?"

    // If no strategy is specified for an item entry below, the default list will be used.
    default = everyMinute
}

/*
 * Each line in this section defines for which item(s) which strategy(ies) should be applied.
 * You can list single items, use "*" for all items or "groupitem*" for all members of a group
 * item (excl. the group item itself).
 */
Items {
    gPowerLoad* : strategy = everyMinute, everyUpdate, restoreOnStartup
}


rrd4j.cfg:

# please note that currently the first archive in each RRD defines the consolidation
# function (e.g. AVERAGE) used by OpenHAB, thus only one consolidation function is
# fully supported
#
# default_numeric and default_other are internally defined defnames and are used as
# defaults when no other defname applies

#<defname>.def=[ABSOLUTE|COUNTER|DERIVE|GAUGE],<heartbeat>,[<min>|U],[<max>|U],<step>
#<defname>.archives=[AVERAGE|MIN|MAX|LAST|FIRST|TOTAL],<xff>,<steps>,<rows>
#<defname>.items=<list of items for this defname> 

#ctr5min.def=COUNTER,900,0,U,300
#ctr5min.archives=AVERAGE,0.5,1,365:AVERAGE,0.5,7,300
#ctr5min.items=Item1,Item2


ctr5min.def=COUNTER,900,0,U,300
ctr5min.archives=AVERAGE,0.5,1,365:AVERAGE,0.5,7,300
ctr5min.items=intKitchen_Media_Load,intPlug_Stue_Anleag_Load,intPlug_PC_Load,intPlug_Stue_LAMPE_Load

You did use a custom setup for your archives; however, is it useful?
Let's start with the .def settings: the last value is the time between consecutive readings, which SHOULD BE 60 SECONDS to match your everyMinute strategy; you set it to 300, i.e. 5 minutes. That step matches the time difference between your readings (timestamps are in ms, hence the 300000 ms difference in some cases).
Looking at the number of values in your archives, the first one holds 365 values; that covers a time period of 365 * 5 minutes.
The second archive holds 300 values, each consolidated from 7 values of archive one, so you are covering 300 * 7 * 5 minutes (the arithmetic is spelled out below). Do you really want this coverage?
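
Spelled out, with the 300-second step from your .def line:

Archive 1: 365 rows * 1 step  * 300 s = 109500 s ≈ 30.4 hours
Archive 2: 300 rows * 7 steps * 300 s = 630000 s ≈ 7.3 days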

Now to the observed values: the first one is really big, probably because it is the first one (a COUNTER datasource stores the calculated difference from the previous reading, and there was none). The following ones are calculated as the difference from the last reading. I can't say why there are those 0 readings, which also come after a really large time gap (more than the heartbeat value, so those readings were probably considered lost).
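
For reference, a COUNTER datasource does not store the raw reading itself but a per-second rate derived from consecutive readings, roughly (simplified; RRD4J additionally handles counter overflow):

rate = (value_now - value_previous) / (time_now - time_previous)

That is why COUNTER only makes sense for ever-increasing values such as meter totals, not for an instantaneous load in watts.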

I would use settings like:

PreisLogger.def=GAUGE,90,0,U,60
PreisLogger.archives=MAX,.5,1,1440:MAX,.5,5,2016:MAX,.5,15,2668
PreisLogger.items=E10

I'm using such settings to record price readings; they hold data in archive one for 24 hours, in archive two for one week and in archive three for four weeks (the arithmetic is below). If you want averages in the second and third archives, use AVERAGE for all three archives (the first will show actual values anyhow, because the average of a single reading is always that reading!).
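
With the 60-second step, that coverage works out as:

Archive 1: 1440 rows * 1 step   * 60 s = 86400 s   = 24 hours
Archive 2: 2016 rows * 5 steps  * 60 s = 604800 s  = 7 days
Archive 3: 2668 rows * 15 steps * 60 s = 2401200 s ≈ 4 weeks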


Thanks Opus!
I'll try your settings later today.
Some noob questions:

  1. You use "PreisLogger" as your defname; where is that reflected in OH? On sitemap level?

  2. The items: are those the Items I want to record, or is that the "log item" that I e.g. use for my sitemap?

Thanks again for your help.

  1. Nowhere! However, you could have more databases, which would be identified by that name (see the example below).
  2. All listed items will be persisted. If you don't have that file at all, all items will be persisted by default, even when that doesn't make sense (rrd4j can only handle numerical data).
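
For illustration, here is a hypothetical rrd4j.cfg with two defnames; each defname gets its own definition, archives and item list (this particular split is made up, reusing item names from this thread):

power.def=GAUGE,90,0,U,60
power.archives=MAX,.5,1,1440:MAX,.5,5,2016:MAX,.5,15,2668
power.items=intPlug_PC_Load,intKitchen_Media_Load

prices.def=GAUGE,90,0,U,60
prices.archives=AVERAGE,.5,1,1440:AVERAGE,.5,5,2016:AVERAGE,.5,15,2668
prices.items=E10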

Okay, thanks. I updated my system with your recommendations and my graph is drawing much nicer, but I still have strange values. Could it be my items file?

Number intPlug_PC_Load "PC Power load [%.0f W]" (gPowerLoad) { channel="mihome:sensor_plug:158d000122dcc4:loadPower" }

Is your data also in this strange format? Like:

{
  "time": 1515951600000,
  "state": "28611367.811666667461395263671875"
},

The time is always in that format (msec since a fixed date, which I can't remember).
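
That date is the Unix epoch, 1970-01-01T00:00:00Z; the "time" values are milliseconds since then. A minimal Java sketch to make such a timestamp readable:

import java.time.Instant;

public class EpochDemo {
    public static void main(String[] args) {
        // "time" is milliseconds since the Unix epoch (1970-01-01T00:00:00Z)
        System.out.println(Instant.ofEpochMilli(1515951600000L));
        // prints: 2018-01-14T17:40:00Z
    }
}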
Concerning this high value, I would assume that this is the first value; how did you set up the .def part?

OK; this is my def:

ctr5min.def=GAUGE,90,0,U,60
ctr5min.archives=MAX,.5,1,1440:MAX,.5,5,2016:MAX,.5,15,2668
ctr5min.items=intKitchen_Media_Load,intPlug_Stue_Anleag_Load,intPlug_PC_Load,intPlug_Stue_LAMPE_Load

In this case this big reading is really odd. I have no explanation for that, sorry.

OK, but thanks for trying!

For others: I found a thread with a guy who had the same problem but solved it somehow… maybe this can help you:

Hi Jürgen,
do you know a way to inject data into the RRD4J database somehow?

Short answer: No!

To the best of my (limited) knowledge, the data for rrd4j has to be persisted in a timely manner; that way rrd4j can fill its archives. By injecting data at a later date, the changed datapoint could only be changed in Archive 1 (if the desired date is still in Archive 1), since the datapoints in the other Archives are calculated from more than a single datapoint of Archive 1. Such a later injection would require a recalculation of all Archives, which is not (always) possible (all datapoints needed for that recalculation would have to still be in Archive 1).
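
A minimal RRD4J sketch of why late injection fails (the file path and the datasource name "state" are assumptions; openHAB keeps one .rrd file per item): RRD4J rejects any sample whose timestamp is not newer than the database's last update, so values cannot be pushed into the past.

import org.rrd4j.core.RrdDb;
import org.rrd4j.core.Sample;

public class InjectDemo {
    public static void main(String[] args) throws Exception {
        // Open an existing RRD file (path assumed for illustration)
        RrdDb rrdDb = new RrdDb("intPlug_PC_Load.rrd");
        try {
            Sample sample = rrdDb.createSample();
            // Try to write a value one hour BEFORE the last stored update
            sample.setTime(rrdDb.getLastUpdateTime() - 3600);
            sample.setValue("state", 42.0); // datasource name assumed
            sample.update(); // throws: sample time must be newer than the last update
        } finally {
            rrdDb.close();
        }
    }
}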
