No chart data after installing new openHABian and restoring OH3 backup

Hi,

during an update, my openHABian (v1.6.2) installation running openHAB 3 on a Raspberry Pi 3 crashed; the boot partition was unusable. This happened in April.
Instead of attempting a lengthy repair, I downloaded the latest openHABian (version 1.6.5) and installed it a few days ago.
Everything works fine: after restoring the OH 3 backup file, all items, things, add-ons, bindings and pages are available. I also restored the old .rrd files. After setting up some of my custom Python scripts that gather data, everything looked very good.
Yesterday I realized that new data is not being written to the rrd4j persistence service, i.e. no data is shown in the charts. If I scroll a chart back to April, I can see the data up to the crash.

What can I do to get my data written to the rrd4j service again and displayed in the charts?

Thanks and regards,

Check whether /etc/openhab/persistence/rrd4j.persist contains persistence statements for the items you’re looking for. If it didn’t exist earlier, rrd4j will have recorded all changes, but if it exists now, only what’s defined in there will be recorded.
Enable debug-level logging for org.openhab.persistence.rrd4j (I think; search the forum if needed) in the Karaf console to see whether changes get written/persisted.
Did you properly define the default persistence service?
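If you do recreate the file, a minimal sketch could look like this (the group name gAll is a placeholder for your own items; rrd4j generally needs values persisted at least once a minute to fill its fixed-interval archives):

Strategies {
    // cron expression: fires at second 0 of every minute
    everyMinute : "0 * * * * ?"
    default = everyChange
}

Items {
    // persist all members of the (hypothetical) gAll group
    gAll* : strategy = everyChange, everyMinute, restoreOnStartup
}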

Hi mstormi,
thanks for the fast reply. I deleted the rrd4j.persist file (restoring the state right after the v1.6.5 installation) and stopped and started rrd4j with bundle:stop ### and bundle:start ###. I also set the log level to DEBUG for org.openhab.persistence.rrd4j.
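For reference, the console commands looked like this (### stands for the rrd4j bundle ID, which I left as a placeholder; bundle:list -s shows it):

openhab> bundle:list -s | grep rrd4j
openhab> bundle:stop ###
openhab> bundle:start ###
openhab> log:set DEBUG org.openhab.persistence.rrd4j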
First I see these messages:

2021-06-18 18:12:57.931 [DEBUG] [d4j.internal.RRD4jPersistenceService] - Created default_quantifiable = GAUGE heartbeat = 600 min/max = NaN/NaN step = 10 5 archives(s) = [ AVERAGE xff = 0.5 steps = 1 rows = 360 AVERAGE xff = 0.5 steps = 6 rows = 10080 AVERAGE xff = 0.5 steps = 90 rows = 36500 AVERAGE xff = 0.5 steps = 360 rows = 43800 AVERAGE xff = 0.5 steps = 8640 rows = 3650] 0 items(s) = []
2021-06-18 18:12:57.934 [DEBUG] [d4j.internal.RRD4jPersistenceService] - Created default_other = GAUGE heartbeat = 3600 min/max = NaN/NaN step = 5 4 archives(s) = [ LAST xff = 0.5 steps = 1 rows = 720 LAST xff = 0.5 steps = 12 rows = 10080 LAST xff = 0.5 steps = 180 rows = 35040 LAST xff = 0.5 steps = 2880 rows = 21900] 0 items(s) = []
2021-06-18 18:12:57.936 [DEBUG] [d4j.internal.RRD4jPersistenceService] - Created default_numeric = GAUGE heartbeat = 600 min/max = NaN/NaN step = 10 5 archives(s) = [ LAST xff = 0.5 steps = 1 rows = 360 LAST xff = 0.5 steps = 6 rows = 10080 LAST xff = 0.5 steps = 90 rows = 36500 LAST xff = 0.5 steps = 360 rows = 43800 LAST xff = 0.5 steps = 8640 rows = 3650] 0 items(s) = []

Then a lot of these messages:

2021-06-18 18:13:00.968 [DEBUG] [d4j.internal.RRD4jPersistenceService] - Stored 'GN_ForecastHours02_Precipprobability' as value '0.33' in rrd4j database

At the end, this message appears:

2021-06-18 18:13:01.443 [WARN ] [ore.internal.scheduler.SchedulerImpl] - Scheduled job failed and stopped
java.lang.IllegalArgumentException: newPosition > limit: (726468 > 376832)
	at java.nio.Buffer.createPositionException(Buffer.java:318) ~[?:?]
	at java.nio.Buffer.position(Buffer.java:293) ~[?:?]
	at java.nio.ByteBuffer.position(ByteBuffer.java:1094) ~[?:?]
	at java.nio.MappedByteBuffer.position(MappedByteBuffer.java:226) ~[?:?]
	at java.nio.MappedByteBuffer.position(MappedByteBuffer.java:67) ~[?:?]
	at org.rrd4j.core.RrdNioBackend.read(RrdNioBackend.java:172) ~[?:?]
	at org.rrd4j.core.RrdBackend.readInt(RrdBackend.java:270) ~[?:?]
	at org.rrd4j.core.RrdPrimitive.readInt(RrdPrimitive.java:38) ~[?:?]
	at org.rrd4j.core.RrdInt.get(RrdInt.java:35) ~[?:?]
	at org.rrd4j.core.Archive.<init>(Archive.java:45) ~[?:?]
	at org.rrd4j.core.RrdDb.<init>(RrdDb.java:290) ~[?:?]
	at org.rrd4j.core.RrdDb.<init>(RrdDb.java:204) ~[?:?]
	at org.rrd4j.core.RrdDb.<init>(RrdDb.java:233) ~[?:?]
	at org.openhab.persistence.rrd4j.internal.RRD4jPersistenceService.getDB(RRD4jPersistenceService.java:322) ~[?:?]
	at org.openhab.persistence.rrd4j.internal.RRD4jPersistenceService.store(RRD4jPersistenceService.java:140) ~[?:?]
	at org.openhab.core.persistence.internal.PersistItemsJob.run(PersistItemsJob.java:58) ~[?:?]
	at org.openhab.core.internal.scheduler.CronSchedulerImpl.lambda$0(CronSchedulerImpl.java:61) ~[bundleFile:?]
	at org.openhab.core.internal.scheduler.CronSchedulerImpl.lambda$1(CronSchedulerImpl.java:69) ~[bundleFile:?]
	at org.openhab.core.internal.scheduler.SchedulerImpl.lambda$12(SchedulerImpl.java:166) ~[bundleFile:?]
	at org.openhab.core.internal.scheduler.SchedulerImpl.lambda$1(SchedulerImpl.java:76) [bundleFile:?]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
	at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:829) [?:?]

I don’t know what that means, but I would guess your .rrd files are broken in one way or another.
Possibly the format has changed, too, but that’s only a guess. So all I can recommend is to delete them.
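Roughly like this on an openHABian box (path and service name are the openHAB 3 defaults; stop openHAB first so no files are in use, and move them aside rather than deleting to keep a way back):

sudo systemctl stop openhab
sudo mv /var/lib/openhab/persistence/rrd4j /var/lib/openhab/persistence/rrd4j.bak
sudo systemctl start openhab
# fresh .rrd files are created once items get persisted again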

Hi Markus,
after deleting all *.rrd files, data is being written again and the charts are displayed again.
I suspect the scheduled job that consolidates the data cannot handle a gap of more than eight weeks. I guess I need to think about switching to a database to get more control over my data.

Thanks for your support.

I’d recommend not switching. A new DB or other new components mean new problems.
And what for, blinky charts?
I vaguely recall there are tools to import/export rrd data if that really is an issue for you.
But how often will this situation ever arise again?
Hard to predict, but probably only on major OH upgrades, so you’re likely safe for some years now.

Well, to me rrd4j is a black box: when everything works it’s fine, but when something doesn’t, there is little you can do to fix it. As just happened, I had to throw away three months of data to get the charts showing again. With a DB solution, which I deal with more often professionally, I can view, export or process the raw data. That makes things easier when errors occur, not least because I know those tools better.

I will still try it with rrd4j and hope I have no more problems with it for now.

You can always run another persistence service (such as InfluxDB or MariaDB) in addition to rrd4j.
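Persistence services run side by side, each driven by its own .persist file, so rrd4j can stay the default for charting. A minimal sketch of an additional influxdb.persist after installing the InfluxDB persistence add-on (the group name gPersist is a made-up example):

Strategies {
    default = everyChange
}

Items {
    // hypothetical group: only its members are sent to InfluxDB
    gPersist* : strategy = everyChange
}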

Hi,

I have the openHAB 3.1.0.M2 milestone build installed.

Data is not being written to the rrd4j service.
All settings are left at their defaults.
I see errors for all items in the log:

2021-07-03 22:14:03.209 [ERROR] [d4j.internal.RRD4jPersistenceService] - Could not create rrd4j database file '/openhab/userdata/persistence/rrd4j/oh2_Fire10BattareyLevel.rrd': Invalid argument

The file /openhab/userdata/persistence/rrd4j/oh2_Fire10BattareyLevel.rrd exists:

-rw-r--r-- 1 openhab openhab   5336 Jul  3 21:59 oh2_Fire10BattareyLevel.rrd

What can I do to get my data written to the rrd4j service?