During an update, my openHABian installation (v1.6.2, running openHAB 3) on a Raspberry Pi 3 crashed and the boot partition became unusable. This happened in April.
Instead of attempting a lengthy repair, I downloaded the latest openHABian release, v1.6.5, and installed it a few days ago.
Everything works fine: after restoring the OH 3 backup file, all items, things, add-ons, bindings and pages are available. I also restored the old .rrd files, and after re-adding some of my custom Python scripts for gathering data, everything looked very good.
Yesterday I realized that no data is being written to the rrd4j persistence service, and consequently no data is shown in the charts. If I scroll back in a chart to April, I can see the data up until the crash.
What can I do to get my data written to the rrd4j service again and have the charts display it?
Check /etc/openhab/persistence/rrd4j.persist and see whether it contains persistence statements for the items you're looking for. If no such file exists, rrd4j records all item changes by default; but once the file exists, it only records what is defined in there.
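For reference, a minimal rrd4j.persist that roughly reproduces the default behaviour (persist every item on every change and once a minute) could look like the sketch below; adapt the item selection to your setup. Note that rrd4j needs a value at least once per minute for its consolidation to work, hence the everyMinute strategy:

```
// /etc/openhab/persistence/rrd4j.persist (illustrative sketch)
Strategies {
    everyMinute : "0 * * * * ?"   // cron expression: once per minute
    default = everyChange
}

Items {
    // persist all items on every change and every minute,
    // and restore their last value on startup
    * : strategy = everyChange, everyMinute, restoreOnStartup
}
```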
Enable DEBUG level logging for org.openhab.persistence.rrd4j (I think that's the logger name; search the forum if needed) in the Karaf console to see whether changes get written/persisted.
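In the Karaf console (reachable via ssh -p 8101 openhab@localhost on a default install) that would be something like:

```
log:set DEBUG org.openhab.persistence.rrd4j
log:tail                                      # watch the log live, Ctrl+C to stop
log:set DEFAULT org.openhab.persistence.rrd4j # revert when done
```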
Did you properly define the default persistence service?
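In OH3 you can set that in the Main UI under Settings → Persistence, or in a config file; assuming the standard openHABian file layout, that would be:

```
# /etc/openhab/services/runtime.cfg
org.openhab.persistence:default=rrd4j
```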
Hi mstormi,
thanks for the fast reply. I deleted the rrd4j.persist file (as it was right after the installation of v1.6.5) and stopped and started the rrd4j bundle with bundle:stop ### and bundle:start ###. I also set the log level to DEBUG for org.openhab.persistence.rrd4j.
At first I see messages like this:
2021-06-18 18:13:00.968 [DEBUG] [d4j.internal.RRD4jPersistenceService] - Stored 'GN_ForecastHours02_Precipprobability' as value '0.33' in rrd4j database
At the end this message appears:
2021-06-18 18:13:01.443 [WARN ] [ore.internal.scheduler.SchedulerImpl] - Scheduled job failed and stopped
java.lang.IllegalArgumentException: newPosition > limit: (726468 > 376832)
at java.nio.Buffer.createPositionException(Buffer.java:318) ~[?:?]
at java.nio.Buffer.position(Buffer.java:293) ~[?:?]
at java.nio.ByteBuffer.position(ByteBuffer.java:1094) ~[?:?]
at java.nio.MappedByteBuffer.position(MappedByteBuffer.java:226) ~[?:?]
at java.nio.MappedByteBuffer.position(MappedByteBuffer.java:67) ~[?:?]
at org.rrd4j.core.RrdNioBackend.read(RrdNioBackend.java:172) ~[?:?]
at org.rrd4j.core.RrdBackend.readInt(RrdBackend.java:270) ~[?:?]
at org.rrd4j.core.RrdPrimitive.readInt(RrdPrimitive.java:38) ~[?:?]
at org.rrd4j.core.RrdInt.get(RrdInt.java:35) ~[?:?]
at org.rrd4j.core.Archive.<init>(Archive.java:45) ~[?:?]
at org.rrd4j.core.RrdDb.<init>(RrdDb.java:290) ~[?:?]
at org.rrd4j.core.RrdDb.<init>(RrdDb.java:204) ~[?:?]
at org.rrd4j.core.RrdDb.<init>(RrdDb.java:233) ~[?:?]
at org.openhab.persistence.rrd4j.internal.RRD4jPersistenceService.getDB(RRD4jPersistenceService.java:322) ~[?:?]
at org.openhab.persistence.rrd4j.internal.RRD4jPersistenceService.store(RRD4jPersistenceService.java:140) ~[?:?]
at org.openhab.core.persistence.internal.PersistItemsJob.run(PersistItemsJob.java:58) ~[?:?]
at org.openhab.core.internal.scheduler.CronSchedulerImpl.lambda$0(CronSchedulerImpl.java:61) ~[bundleFile:?]
at org.openhab.core.internal.scheduler.CronSchedulerImpl.lambda$1(CronSchedulerImpl.java:69) ~[bundleFile:?]
at org.openhab.core.internal.scheduler.SchedulerImpl.lambda$12(SchedulerImpl.java:166) ~[bundleFile:?]
at org.openhab.core.internal.scheduler.SchedulerImpl.lambda$1(SchedulerImpl.java:76) [bundleFile:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
I don’t know exactly what that means, but "newPosition > limit" suggests rrd4j is trying to read beyond the end of the file, so I'd guess your .rrd files are truncated or otherwise broken.
Possibly the file format has changed between versions, too, but that's only a guess. So all I can recommend is to delete them.
Hi Markus,
after deleting all *.rrd files, data is being written again and the charts are displayed correctly.
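For anyone following along, these are the steps I used, assuming the default openHABian/OH3 paths (stop openHAB first so no files are held open):

```
sudo systemctl stop openhab
sudo rm /var/lib/openhab/persistence/rrd4j/*.rrd
sudo systemctl start openhab
```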
I believe the scheduled job that consolidates the data cannot handle the gap of more than 8 weeks in the data. I guess I need to think about switching to a different DB to get more control over my data.
I’d recommend not switching. A new DB or other new components means new problems.
And what for - blinky charts?
I vaguely recall there are tools to import/export rrd data, if that really is an issue for you.
But how often will this situation ever arise again?
Hard to predict, but probably only on major OH upgrades, so you're likely safe for some years now.
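For the record, the rrd4j library itself (the same one openHAB bundles) can export its files to XML. A rough sketch, assuming the rrd4j 3.x jar is on the classpath and the class name RrdDump is just my own example:

```java
import org.rrd4j.core.RrdDb;

// Illustrative sketch: dump an .rrd file to XML using rrd4j's own API,
// so the raw data can be inspected or processed outside openHAB.
public class RrdDump {
    public static void main(String[] args) throws Exception {
        // e.g. args[0] = /var/lib/openhab/persistence/rrd4j/MyItem.rrd
        try (RrdDb db = RrdDb.getBuilder()
                .setPath(args[0])
                .readOnly()
                .build()) {
            // the XML dump can be used to rebuild a database later
            System.out.println(db.exportXml());
        }
    }
}
```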
Well, to me rrd4j is a black box. When everything works, it's fine; when something doesn't work, there is little you can do to fix it. As just happened: I had to throw away three months of data to get the charts to show anything again. With a DB solution, which I deal with more often professionally, I can view, export or process the raw data. That makes errors easier to handle - also because I have more knowledge there.
I'll still give rrd4j a try. I hope I won't have any more problems with it for now.