[ERROR] [org.influxdb.impl.BatchProcessor] - Batch could not be sent. Data will be lost
java.lang.NullPointerException: null

echo bundle:restart org.openhab.persistence.influxdb | /usr/bin/openhab-cli console -p habopen

Thanks a lot for this, it seems to work fine now! :grinning:
How can we go about fixing this issue in the InfluxDB bundle? Any ideas on how to debug it?
A few days ago I moved my InfluxDB from localhost to another instance on a remote server, which did not improve the situation… (currently using a remote InfluxDB v1.8.6 running in Docker on a Synology DiskStation)

The latest version in the 1.8 tree that I see available on the InfluxDB download page is 1.8.10.
Is that also available for the Synology DiskStation?

I changed the Docker image to 1.8.10, but I don’t expect the behaviour to improve, because I already changed the version of InfluxDB before (a downgrade from v1.8.9 on the local openHAB instance to a remote v1.8.6 with a completely empty database).

I’m on InfluxDB 2.0.7 on a remote server, so I don’t think the version matters; my nephew is running 2.0.7 as well, but is still on openHAB 2.5.12 with no problems at all.

Same problem of “Batch could not be sent. Data will be lost.” on the current version of openHABian on a Raspberry Pi with openHAB 3.1.0, which I keep updated using openHABian itself.
InfluxDB is version 1.8.9.

Is there any chance to get more attention on this? When I’m using the “official standard version” of openHAB with a standard persistence layer like InfluxDB, receiving such a message is not good marketing for openHAB.

I receive these messages about 1-4 times per hour, without restarting the bundle.
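For anyone wanting to quantify the failure rate, here is a quick sketch that counts these errors per hour from the log. The inline sample data is made up for illustration; on a real openHABian system, point LOG at the actual log file (usually /var/log/openhab/openhab.log, depending on your install):

```shell
# Count "Batch could not be sent" errors per hour.
# The sample log below is an assumption for demonstration only.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
2022-02-27 23:24:45.198 [ERROR] [org.influxdb.impl.BatchProcessor] - Batch could not be sent. Data will be lost
2022-02-28 01:20:25.945 [ERROR] [org.influxdb.impl.BatchProcessor] - Batch could not be sent. Data will be lost
2022-02-28 01:59:01.001 [ERROR] [org.influxdb.impl.BatchProcessor] - Batch could not be sent. Data will be lost
EOF
# Take the "YYYY-MM-DD HH" prefix of each matching line and count duplicates.
per_hour=$(grep 'Batch could not be sent' "$LOG" | cut -c1-13 | sort | uniq -c)
echo "$per_hour"
rm -f "$LOG"
```

With the sample above, this prints one count per hour bucket, making it easy to see whether the failures cluster.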


I’m facing the same issue on 3.2 M3. Once a week, data is no longer shown in openHAB (in the graphs). The following error is logged many times in openhab.log:

[ERROR] [org.influxdb.impl.BatchProcessor] - Batch could not be sent. Data will be lost
java.lang.NullPointerException: null

After a reboot of the system, it works again like it did before the problem arose.

Thanks Norbert

I’m facing the same issue. I keep restarting the InfluxDB binding every 2 hours, but it still sometimes happens that nothing is written to InfluxDB anymore.

Has anybody solved the issue?

Nope, still restarting the binding every 2 hours.
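Until the root cause is found, the restart can at least be automated. A hypothetical crontab entry, reusing the console command and password from earlier in the thread (adjust the password, path, and interval to your installation):

```
# crontab -e: restart the InfluxDB persistence bundle every 2 hours
0 */2 * * * echo "bundle:restart org.openhab.persistence.influxdb" | /usr/bin/openhab-cli console -p habopen
```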

I am more experienced now. The error in fact means an inability to submit data to the database. Your influxd is likely not running; examine InfluxDB’s log. IIRC it has no file of its own, it just dumps things into the syslog.
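To check that, here is a small sketch (assumptions: a systemd-based install and the unit name "influxdb"; both may differ on your system):

```shell
# Is influxd actually running? (unit name "influxdb" is an assumption)
status=$(systemctl is-active influxdb 2>/dev/null) || status="${status:-unknown (no systemd here?)}"
echo "influxdb service: $status"
# InfluxDB 1.x has no log file of its own by default; it logs to syslog/journald:
journalctl -u influxdb -n 20 --no-pager 2>/dev/null || echo "(no journalctl; try grepping /var/log/syslog for influxd)"
```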


Just stumbled over this thread… similar issue.
Restarting InfluxDB and restarting the binding would not help.

Then I found out something strange: in my case, openHAB seems to forget the user name for InfluxDB when rebooting…
Reboot as in restarting openHABian (on a Raspberry Pi).

I experienced this error too and found out what happened in my situation.

I am not sure whether the following is related, but please check how many (unwanted) series are in your InfluxDB database! I noticed I had a lot. I could not explain all of the unwanted series, but some were caused by a (momentary) error yesterday in my influx.persist. Due to this error, series were created for all items. Possibly we overload InfluxDB as a result of this error.

Is there an easy way to check what is unwanted and clean them up?

stefaanbolle:
“Is there an easy way to check what is unwanted and clean them up?”

The easiest method is to open an SSH session to your openHAB host with PuTTY (or your preferred SSH client).
Command to access your InfluxDB:

influx -username 'yours' -password 'yours' -host localhost

Commands in Influx:

use openhab
show series

Your database name may be different.
Any series that you do not want for future reference, you can drop (note that InfluxQL needs a FROM clause here):

DROP SERIES FROM "<measurement_name>"
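For reference, the same can be scripted non-interactively with the influx CLI’s -execute flag (credentials, host, and database as in the placeholders above; the measurement name here is purely a hypothetical example):

```
influx -username 'yours' -password 'yours' -host localhost -database openhab -execute 'SHOW SERIES'
influx -username 'yours' -password 'yours' -host localhost -database openhab -execute 'DROP SERIES FROM "Some_Unwanted_Item"'
```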

It is your call what is unwanted; maybe “not required” is a better term. InfluxDB is a database, so if you do not report on it (for example with Grafana), you do not need to store it. If you have graphs or tables anywhere, you likely use some of the series.

And again, I want to emphasize that this only looked like the root cause in my case; I am just a simple user (mostly a noob/dummy) who got this error and also noticed a lot of series in InfluxDB that I did not need; at least one of them was created around the time of the error. But maybe this is just not related.

FYI: my default persistence setting in openHAB is ‘MapDB’ (not the InfluxDB persistence layer), so that does not explain the new series.

Thanks!
I use InfluxDB for Grafana charts.

That setting is purely a pointer for OH core to know which service to use as default for charts, rules etc., when nothing else is specified. It never affects what any service is actually doing.

Each individual service has its own default strategy (some variation on “persist everything that you can”) that will be used when a thisservice.persist file cannot be found/parsed.

Sounds like you ran with default persist strategy here, for a while.

Likely, this is what happened: I added a new strategy ‘every15min : “0 0/15 * * * ?”’ but managed to delete ‘every15min’ again before I saved ‘influxdb.persist’. A bit stupid, and it also went unnoticed, which was a bit more stupid. The next day I noticed the error from this topic twice in the log:

2022-02-27 23:24:45.198 [ERROR] [org.influxdb.impl.BatchProcessor             ] - Batch could not be sent. Data will be lost
2022-02-28 01:20:25.945 [ERROR] [org.influxdb.impl.BatchProcessor             ] - Batch could not be sent. Data will be lost

I investigated and corrected my mistake.
In PuTTY I noticed extra InfluxDB series for almost every item in my OH (the tool indicated it deleted 173 of them).
Some of them received new entries every 10 s (DSMR), others just one.

As indicated it might be unrelated.
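For context, a minimal sketch of what the corrected ‘influxdb.persist’ might look like (the item names are hypothetical; when the file cannot be parsed, the service falls back to its “persist everything” default, as described above):

```
// influxdb.persist (sketch, hypothetical items)
Strategies {
    every15min : "0 0/15 * * * ?"
    default = everyChange
}
Items {
    // persist only selected items, instead of the service-wide default of everything
    SomeTemperature, SomeHumidity : strategy = every15min, restoreOnStartup
}
```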

Sorry for my confusion. Did the “series” approach from the last posts solve the original problem? I still have the problem that storing values in InfluxDB stops after some hours. Restarting openHAB (systemctl restart openhab) solves the problem for me, but only for a few hours.

I will try restarting just the InfluxDB binding, because restarting openHAB stops my installation for about 30 seconds. But is restarting the binding the actual workaround?

It looks like this workaround is also needed in my case. I have changed from using a clear-text password to using SSH key authentication; for some instructions, look at this post.

After that, you can use the examples here without “-p PASSWORD”.