Increase size of rrd4j database? Practical limits?

That is a bit of an oversimplification. As the data ages, rrd4j decimates it by replacing ten entries with the average of those ten entries. I just want to avoid confusion about how it works for future readers.
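The consolidation step described above can be sketched in a few lines. This is an illustrative model only, not rrd4j's actual code; the real consolidation function and ratio are configurable per archive (AVERAGE is just the common default), and the values here are made up.

```python
# Sketch of round-robin consolidation: as data ages, each run of
# ten fine-grained samples is replaced by their average.
def consolidate(samples, factor=10):
    """Average each run of `factor` samples into one coarser entry."""
    return [
        sum(samples[i:i + factor]) / factor
        for i in range(0, len(samples), factor)
    ]

# Ten hypothetical 10-second temperature readings...
raw = [20.0] * 5 + [22.0] * 5
# ...become a single 100-second entry.
print(consolidate(raw))  # -> [21.0]
```

The original detail survives only in aggregate, which is why fine-grained history is available for recent data but not for old data.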

The default configuration has five archives per Item. From the docs:

- granularity of 10s for the last hour
- granularity of 1m for the last week
- granularity of 15m for the last year
- granularity of 1h for the last 5 years
- granularity of 1d for the last 10 years
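A quick back-of-envelope calculation shows why that layout stays compact. These row counts are derived only from the granularities and spans listed above, not taken from the add-on source, so treat the totals as approximate.

```python
# Rough row counts for the five default archives:
# (step in seconds, covered span in seconds)
archives = {
    "10s for 1 hour":   (10,      3600),
    "1m for 1 week":    (60,      7 * 86400),
    "15m for 1 year":   (15 * 60, 365 * 86400),
    "1h for 5 years":   (3600,    5 * 365 * 86400),
    "1d for 10 years":  (86400,   10 * 365 * 86400),
}

total = 0
for name, (step, span) in archives.items():
    rows = span // step
    total += rows
    print(f"{name}: {rows} rows")

print(f"total: about {total} rows per Item")  # roughly 93,000 rows
```

Fewer than 100,000 fixed-size rows per Item is why an rrd4j file never grows over time, no matter how long it runs.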

That’s pretty good granularity for most purposes, especially given that rrd4j isn’t well supported by external data analysis tools, so the data is pretty much only useful for generating OH charts and accessing historic data in rules. If you need more than that, an external database will serve you better.

But if you want to change it, you can. The add-on docs explain how to change the defaults as well as how to configure it on an Item-by-Item basis.
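For reference, a custom datasource in `services/rrd4j.cfg` looks roughly like this. The datasource name, Item names, and numbers below are invented for illustration; check the add-on docs for the exact `.def`/`.archives`/`.items` syntax and field meanings before copying anything.

```
# Hypothetical datasource: GAUGE type, 60s step, no min/max limits (U)
mycustom.def=GAUGE,90,U,U,60

# Two archives: keep every 60s sample for a day (1440 rows),
# then 15-minute averages for a year (35040 rows)
mycustom.archives=AVERAGE,0.5,1,1440:AVERAGE,0.5,15,35040

# Items this datasource applies to (illustrative names)
mycustom.items=Temperature_Living,Temperature_Bedroom
```

Anything not matched by a custom datasource falls back to the defaults.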

But not by default, and that’s not the case for the average user. Also, in a default openHABian config, rrd4j needs to fit into ZRAM, which is very tiny by comparison.

Larger files, potentially slower response times when retrieving old data, and bigger bursts of activity when the DB gets compressed, since it’s doing more work less frequently when it consolidates.

If the default config doesn’t work for you, please do configure it as you desire. Everything you need to know is in the add-on docs, and I recently helped someone with a config here which might be informative. Do come back if you run into trouble, and we might be able to help.
