Influx not receiving any data

Hi,

I freshly installed OH 3.2.0 natively (so not dockerized) on a Raspberry Pi 4 running on Raspbian Bullseye and my problem is: I cannot get any data into my Influx database running on my NAS.

I recently used OH dockerized on a Raspberry Pi 2 where everything worked without issues. After installing it on the Pi 4 I re-used all configuration files from the previous installation. However, OH does not even seem to try to send any data to the NAS.

A little bit of detailed info:

Here is what I did on the NAS:

root@d9e6961e2059:/# influx -username openhab -password thePassword
Connected to http://localhost:8086 version 1.8.9
InfluxDB shell version: 1.8.9
> SHOW DATABASES
name: databases
name
----
_internal
shelly
openhab_db
> USE openhab_db
Using database openhab_db
> SHOW MEASUREMENTS
name: measurements
name
----
v_TempAkt_BadDG
a_Heizung_BadDG
...

So everything looks OK, I would say. We even see the old measurements from the previous installation.
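
For completeness, the same shell can also confirm that the openhab user actually has write privileges on that database (a quick sketch, output omitted here):

> SHOW GRANTS FOR "openhab"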

Let’s have a look into the influxdb.cfg inside /etc/openhab/services:

# The database URL, e.g. http://127.0.0.1:8086 or https://127.0.0.1:8084 .
# Defaults to: http://127.0.0.1:8086
# url=http(s)://<host>:<port>

# The name of the database user, e.g. openhab.
# Defaults to: openhab
# user=<user>

# The password of the database user.
# password=

# The name of the database, e.g. openhab.
# Defaults to: openhab
# db=<database>
url=http://192.168.188.22:8086
user=openhab
password=thePassword
db=openhab_db
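
To rule out a connectivity or credentials problem from the Pi itself, a quick test against the InfluxDB 1.x HTTP API can help (a sketch using the values from the cfg above; /ping should answer with HTTP 204 when the server is reachable):

$ curl -i http://192.168.188.22:8086/ping
$ curl -G http://192.168.188.22:8086/query -u openhab:thePassword --data-urlencode "q=SHOW DATABASES"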

Then let us check the influxdb.persist inside /etc/openhab/persistence:

Strategies
{
everyMinute : "0 * * * * ?"
everyHour : "0 0 * * * ?"
everyDay : "0 0 0 * * ?"
}
Items
{
	v_TempAkt_BadDG, a_Heizung_BadDG, v_Heizung_Abschalt_BadDG, v_Heizung_Einschalt_BadDG : strategy = everyMinute
	// There is more but it is not necessary here I guess.
}

The openhab.log shows that the persistence configuration is loaded without any errors.
/var/log/syslog does not show any Influx-related messages. It is as if OH does not even try to use it.
The add-on is installed, of course.
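
This can also be double-checked from the Karaf console (a sketch; the console listens on port 8101 by default, and the exact bundle/logger name is an assumption on my part):

$ ssh -p 8101 openhab@localhost
openhab> bundle:list | grep -i influx
openhab> log:set DEBUG org.openhab.persistence.influxdb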

Pinging the NAS from the Pi4 works without issues:

 $ ping 192.168.188.22
PING 192.168.188.22 (192.168.188.22) 56(84) bytes of data.
64 bytes from 192.168.188.22: icmp_seq=1 ttl=64 time=0.268 ms
64 bytes from 192.168.188.22: icmp_seq=2 ttl=64 time=0.226 ms

I restarted both the system and OH. I removed the Influx add-on, restarted, then installed it again. All without success. I am a little lost now. Any ideas?

Best,
Oliver

hmm, so you selected the right persistence service in the configuration?

What if you try to get data from the item via the API Explorer?
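
For example, something along these lines against the persistence endpoint (a sketch; the host, item name and serviceId are just placeholders taken from the post above):

$ curl 'http://localhost:8080/rest/persistence/items/v_TempAkt_BadDG?serviceId=influxdb'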

This shouldn’t make any difference: Persistence | openHAB

Why? I don’t see it in the docs.

As far as I understood, he reinstalled OH on a new system, so the default would be RRD4j.
Now he wants to change to InfluxDB, where all his historical data is.
So he has to change the service.

No, he doesn’t have to.

It’s in the link:

The default persistence service is used to provide data for the UI charting features and rules

Every persistence service listed in your OH will persist data according to its configuration. The default service only defines which service is used for the charts, and by rules (if left undefined in the rule).

This part is true … the thing is that OH can run both services at the same time. Both services can persist the same Items, and/or different Items, at the same time.

The system ‘default’ setting is about which one to use for charts.
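
If you do want InfluxDB to be the system default, it can be selected in the UI under Settings → Persistence, or (a sketch, assuming file-based configuration) in services/runtime.cfg:

org.openhab.persistence:default=influxdb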

Can we see that?

Some updates: when I change the retention policy to something which does not exist, I get an error message in the openhab.log every minute:

2022-05-02 14:10:01.296 [ERROR] [org.influxdb.impl.BatchProcessor    ] - Batch could not be sent. Data will be lost
org.influxdb.InfluxDBException: retention policy not found: doesNotExist
        at org.influxdb.InfluxDBException.buildExceptionFromErrorMessage(InfluxDBException.java:161) ~[bundleFile:?]
        at org.influxdb.InfluxDBException.buildExceptionForErrorState(InfluxDBException.java:173) ~[bundleFile:?]
        at org.influxdb.impl.InfluxDBImpl.execute(InfluxDBImpl.java:827) ~[bundleFile:?]
        at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:460) ~[bundleFile:?]
        at org.influxdb.impl.OneShotBatchWriter.write(OneShotBatchWriter.java:22) ~[bundleFile:?]
        at org.influxdb.impl.BatchProcessor.write(BatchProcessor.java:340) [bundleFile:?]
        at org.influxdb.impl.BatchProcessor$1.run(BatchProcessor.java:287) [bundleFile:?]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
        at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
        at java.lang.Thread.run(Thread.java:829) [?:?]
2022-05-02 14:11:01.286 [ERROR] [org.influxdb.impl.BatchProcessor    ] - Batch could not be sent. Data will be lost
org.influxdb.InfluxDBException: retention policy not found: doesNotExist
        at org.influxdb.InfluxDBException.buildExceptionFromErrorMessage(InfluxDBException.java:161) ~[bundleFile:?]
        at org.influxdb.InfluxDBException.buildExceptionForErrorState(InfluxDBException.java:173) ~[bundleFile:?]
        at org.influxdb.impl.InfluxDBImpl.execute(InfluxDBImpl.java:827) ~[bundleFile:?]
        at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:460) ~[bundleFile:?]
        at org.influxdb.impl.OneShotBatchWriter.write(OneShotBatchWriter.java:22) ~[bundleFile:?]
        at org.influxdb.impl.BatchProcessor.write(BatchProcessor.java:340) [bundleFile:?]
        at org.influxdb.impl.BatchProcessor$1.run(BatchProcessor.java:287) [bundleFile:?]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
        at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
        at java.lang.Thread.run(Thread.java:829) [?:?]
2022-05-02 14:15:02.269 [INFO ] [el.core.internal.ModelRepositoryImpl] - Loading model 'influxdb.persist'

So, OH is indeed trying to send data.
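
Since the error explicitly complains about a missing retention policy, it is probably worth checking which policies openhab_db actually has (a sketch, run in the same influx shell as at the top of the thread):

> SHOW RETENTION POLICIES ON "openhab_db"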

What I tried next: I created a new database inside Influx called openhab and updated the influxdb.cfg accordingly:

url=http://192.168.188.22:8086
user=openhab
password=openhab
db=openhab
retentionPolicy=autogen

and guess what?
Now I can see the data points inside influx.

When changing back to openhab_db, I do not receive any data. The latest data point is always shortly before the migration at the end of April.

No idea what is going on here, but for me this is fixed for now.
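
A plausible (but unverified) explanation: the migrated openhab_db may have a default retention policy that is not named autogen, so writes addressed to autogen have nowhere to go. If SHOW RETENTION POLICIES ON "openhab_db" lists a different name, pointing the cfg at it might bring the old database back into play (a sketch; the policy name is a placeholder):

url=http://192.168.188.22:8086
user=openhab
password=thePassword
db=openhab_db
retentionPolicy=<name reported by SHOW RETENTION POLICIES>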