[SOLVED] Influxdb persistence does not save constant values every Minute

Hi,
I have the problem that openHAB does not save constant (unchanged) values to InfluxDB.
I need this so that I can get nice charts from Grafana.
I checked the entire configuration and can’t find any errors. In general, persistence works very well for me … as long as the value keeps changing.

Here is my setup:

openhabian installed on Raspberry Pi 4 Model B
Raspbian GNU/Linux 10 (buster)
Linux 4.19.75-v7l+
openHAB 2.5.0-1 (Release Build)
InfluxDB v1.7.9

influxdb.persist

Strategies {
    everyMinute : "0 * * * * ?"
    everyHour   : "0 0 * * * ?"
    everyDay    : "0 0 0 * * ?"
    default = everyChange
}

Items {
    * : strategy = everyChange, everyMinute, restoreOnStartup
}
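
(Side note for readers: the wildcard `*` above persists every item every minute. A more selective Items section is possible; the group names below are hypothetical, just to sketch the idea:)

```
Items {
    // gChart is a hypothetical group containing only the items you chart
    gChart* : strategy = everyChange, everyMinute
    // restore only the items that need a state after startup
    gRestore* : strategy = everyChange, restoreOnStartup
}
```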

influxdb.cfg

url=http://localhost:8086
user=openhab
password=openhab
db=openhab_db
retentionPolicy=autogen

Here are a few examples of the problem.

Haustuer state is missing:

Thermostat state is missing:

Here is the content of the database measurements (Keller1_Temp is working, Haustuer is not):

SELECT * FROM Keller1_Temp ORDER BY time DESC LIMIT 15
name: Keller1_Temp
time                value
----                -----
1577655350712000000 23.1
1577655291647000000 23.2
1577655051422000000 23.1
1577654871241000000 23.2
1577654751109000000 23.1
1577654691036000000 23
1577654510804000000 23.1
1577654450723000000 23.2
1577654331595000000 23
1577654271531000000 23.1
1577654151420000000 23
1577654091375000000 23.1
1577654031325000000 23
1577653971276000000 23.1
1577653550817000000 23
SELECT * FROM Haustuer ORDER BY time DESC LIMIT 15
name: Haustuer
time                value
----                -----
1577640039302000000 0
1577639972208000000 1
1577639971212000000 0
1577639965169000000 1
1577639188954000000 0
1577639179944000000 1
1577638496204000000 0
1577638491203000000 1
1577626174273000000 0
1577626073193000000 1
1577623378172000000 0
1577623363091000000 1
1577612560473000000 0
1577612538451000000 1
1577584722308000000 0

Thank you for your support!

Did you restart openHAB after changing your persist file?

I hadn’t changed the persist file for a long time.
Still, I restarted the RasPi yesterday (23:20) on suspicion.
Then it worked for about two hours (see picture).


I have not touched openHAB since the restart.

Set the InfluxDB persistence logger to debug and find out.

In the (Karaf) console, find the bundle name and enable debug logging:

bundle:list -s | grep influx
log:set DEBUG org.openhab.persistence.influxdb

And see what is going on when it stops
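
(To follow the debug output live, the Karaf console also offers the standard `log:tail` command; Ctrl+C stops it:)

```
log:tail
```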

oha! I’ve never gotten that deep :sweat_smile:

here is the result:


8:12 was the last successful data point.

The following error appeared in the log EVERY minute, until 8:12 a.m. Then nothing more:

2019-12-31 07:38:01.315 [ERROR] [org.influxdb.impl.BatchProcessor    ] - Batch could not be sent. Data will be lost
java.lang.RuntimeException: {"error":"partial write: field type conflict: input field \"value\" on measurement \"gWnachten\" is type float, already exists as type string dropped=1"}

	at org.influxdb.impl.InfluxDBErrorHandler.handleError(InfluxDBErrorHandler.java:19) ~[influxdb-java-2.2.jar:?]
	at retrofit.RestAdapter$RestHandler.invoke(RestAdapter.java:242) ~[retrofit-1.9.0.jar:?]
	at org.influxdb.impl.$Proxy222.writePoints(Unknown Source) ~[?:?]
	at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:151) ~[influxdb-java-2.2.jar:?]
	at org.influxdb.impl.BatchProcessor.write(BatchProcessor.java:171) [influxdb-java-2.2.jar:?]
	at org.influxdb.impl.BatchProcessor$1.run(BatchProcessor.java:144) [influxdb-java-2.2.jar:?]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_222]
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_222]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_222]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_222]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_222]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_222]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_222]
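
(Editor’s note: the error says the field "value" on measurement gWnachten already exists as a string, so the float writes are rejected. In InfluxDB 1.x line protocol the field type is fixed by the first write to a measurement: an unquoted number is stored as a float, a quoted value as a string. A minimal illustration, with made-up values and nothing actually sent to the DB:)

```shell
# InfluxDB 1.x line protocol type inference (illustrative only, not sent to the DB):
echo 'gWnachten value="ON"'   # a quoted value creates field "value" as type string
echo 'gWnachten value=1'      # a later unquoted (numeric) write then conflicts and is dropped
```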

I’m not sure if that’s the trigger or a consequence.
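
(Before dropping anything, the stored type can be confirmed in the influx shell; the output below is a sketch of what InfluxDB 1.7 typically prints:)

```
> SHOW FIELD KEYS FROM gWnachten
name: gWnachten
fieldKey fieldType
-------- ---------
value    string
```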

I have now deleted the measurement from the DB so that openHAB can recreate it with the correct field type:

influx -database openhab_db

> auth
username: admin
password: ********

> SHOW MEASUREMENTS
name: measurements
name
----
Ankleide_Luft
Ankleide_PING
Ankleide_Reboot
Ankleide_Temp
Ankleide_TempSoll
Ankleide_Update
Ankleide_Uptime
Ankleide_WLAN
gWnachten
...

> DROP MEASUREMENT gWnachten

> SHOW MEASUREMENTS
name: measurements
name
----
Ankleide_Luft
Ankleide_PING
Ankleide_Reboot
Ankleide_Temp
Ankleide_TempSoll
Ankleide_Update
Ankleide_Uptime
Ankleide_WLAN
...

Then I restarted openHAB, but it still doesn’t save constant values,
not even after a RasPi reboot :frowning:

The following error is in the log:

2019-12-31 14:15:00.104 [WARN ] [ore.internal.scheduler.SchedulerImpl] - Scheduled job failed and stopped
java.lang.NullPointerException: null
	at org.openhab.persistence.influxdb.internal.InfluxDBPersistenceService.store(InfluxDBPersistenceService.java:243) ~[?:?]
	at org.openhab.core.persistence.internal.PersistenceServiceDelegate.store(PersistenceServiceDelegate.java:59) ~[?:?]
	at org.eclipse.smarthome.core.persistence.internal.PersistItemsJob.run(PersistItemsJob.java:58) ~[?:?]
	at org.eclipse.smarthome.core.internal.scheduler.CronSchedulerImpl.lambda$0(CronSchedulerImpl.java:61) ~[?:?]
	at org.eclipse.smarthome.core.internal.scheduler.CronSchedulerImpl.lambda$1(CronSchedulerImpl.java:69) ~[?:?]
	at org.eclipse.smarthome.core.internal.scheduler.SchedulerImpl.lambda$12(SchedulerImpl.java:163) ~[?:?]
	at org.eclipse.smarthome.core.internal.scheduler.SchedulerImpl.lambda$1(SchedulerImpl.java:75) ~[?:?]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_222]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_222]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_222]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [?:1.8.0_222]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_222]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_222]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_222]

I do not understand the error message

Clear the cache
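
(On an openHABian install this is usually done with `openhab-cli`; a rough sketch, assuming the openHAB 2.x service name `openhab2`:)

```
sudo systemctl stop openhab2
sudo openhab-cli clean-cache
sudo systemctl start openhab2
```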

Also, you are saving an enormous amount of data in your database.
Do you really need ALL your items saved every minute?
Try reducing the load.

I save ALL my items in mapDB on every change, for restoreOnStartup.
I store in InfluxDB only the items I am interested in charting and/or keeping history for.

Sit down and think about your database strategy a bit

Thanks for your quick reply!

How do i clear the cache?

So far I have never had any problems with performance. There is more than enough CPU, RAM and DB storage space (SSD) available during operation.

Clear cache didn’t help

It has been working for a few days now! :partying_face:

I tried a few more things:

  • temporarily deactivated all rules
  • uninstalled almost all bindings

Nothing helped!

I revised my complete persistence strategy and switched to mapDB in combination with InfluxDB, as vzorglub wrote.
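
(For reference, the split setup described above could look roughly like this; the file contents are a sketch and the group name is hypothetical:)

```
// mapdb.persist – everything, latest value only, for restoreOnStartup
Strategies {
    default = everyChange
}
Items {
    * : strategy = everyChange, restoreOnStartup
}

// influxdb.persist – only items worth charting
Strategies {
    everyMinute : "0 * * * * ?"
    default = everyChange
}
Items {
    gChart* : strategy = everyChange, everyMinute
}
```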

thx for the support!