Persistence with InfluxDB not storing data?

Continuing the discussion from [SOLVED] Persistence with InfluxDB not storing data?:

Many thanks for your comprehensive explanation. I tried the command with and without a subshell; nothing works. I followed exactly the steps you mentioned. Here is the result:

                          __  _____    ____
  ____  ____  ___  ____  / / / /   |  / __ )
 / __ \/ __ \/ _ \/ __ \/ /_/ / /| | / __  |
/ /_/ / /_/ /  __/ / / / __  / ___ |/ /_/ /
\____/ .___/\___/_/ /_/_/ /_/_/  |_/_____/
    /_/                        2.3.0
                               Release Build

Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
Hit '<ctrl-d>' or type 'system:shutdown' or 'logout' to shutdown openHAB.

openhab> log:set DEBUG org.openhab.persistence.influxdb
Error executing command: Unrecognized configuration
openhab> feature:list | grep flux
openhab-persistence-influxdb                │ 1.12.0           │ x        │ Started │ openhab-addons-2.3.0    │ InfluxDB (v 1.0) Persistence
openhab> feature:info openhab-persistence-influxdb
Feature openhab-persistence-influxdb 1.12.0
  InfluxDB (v 1.0) Persistence
Feature has no configuration
Feature configuration files:
Feature depends on:
  openhab-runtime-base 0.0.0
  openhab-runtime-compat1x 0.0.0
Feature contains followed bundles:
  mvn:org.openhab.persistence/org.openhab.persistence.influxdb/1.12.0 start-level=80
Feature has no conditionals.
openhab> log:set DEBUG org.openhab.persistence.influxdb
Error executing command: Unrecognized configuration
openhab> log:list
Error executing command: Unrecognized configuration
openhab> log:list
Error executing command: Unrecognized configuration

No idea what else I can do …
I wish you a merry Christmas - Ulrich

I’m pretty sure that your installation is faulty; log:set and log:list should definitely work.
Did you manipulate the file org.ops4j.pax.logging.cfg?

No, I didn’t touch that file. But it is empty except for two comment lines. I just re-installed openHAB and looked at the file again. The last-modified date is today, so that part looks fine, but it remains empty. What should it contain?
Here is the file:

# Common pattern layout for appenders
#log4j2.pattern = %d{ISO8601} | %-5p | %-16t | %-32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %m%n

After re-installation I saw that all bindings, items etc. still work (of course, I had backed up addons, conf and userdata beforehand…). So re-installation doesn’t overwrite or clear any user data. Is it worth deleting ALL files of the openHAB directory manually and then building everything up from scratch?

There should be a file /var/lib/openhab2/etc/org.ops4j.pax.logging.cfg.dpkg-old or /var/lib/openhab2/etc/org.ops4j.pax.logging.cfg.dpkg-dist which contains the complete configuration. Please rename the broken file and copy the correct one back into place:

cd /var/lib/openhab2/etc
sudo mv org.ops4j.pax.logging.cfg org.ops4j.pax.logging.cfg.old
sudo mv org.ops4j.pax.logging.cfg.dpkg-[dist|old] org.ops4j.pax.logging.cfg 
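The two mv commands above can be wrapped in a small helper. This is only a sketch: the paths are the ones named in this thread, the function name is mine, and preferring .dpkg-dist over .dpkg-old is an assumption (pick whichever matches the state you want to restore).

```shell
# restore_logging_cfg DIR: look in DIR for a dpkg backup of
# org.ops4j.pax.logging.cfg and copy it back into place,
# keeping the broken file as *.broken. Run with root rights
# (sudo) on a real installation.
restore_logging_cfg() {
  dir=$1
  for suffix in dpkg-dist dpkg-old; do
    if [ -f "$dir/org.ops4j.pax.logging.cfg.$suffix" ]; then
      mv "$dir/org.ops4j.pax.logging.cfg" "$dir/org.ops4j.pax.logging.cfg.broken" 2>/dev/null
      cp "$dir/org.ops4j.pax.logging.cfg.$suffix" "$dir/org.ops4j.pax.logging.cfg"
      echo "restored from .$suffix"
      return 0
    fi
  done
  echo "no dpkg backup found in $dir"
  return 1
}

# On the installation from this thread:
#   restore_logging_cfg /var/lib/openhab2/etc
```

After restoring the file, restart openHAB so the logging service picks up the configuration.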

or take mine:

# Common pattern layout for appenders
#log4j2.pattern = %d{ISO8601} | %-5p | %-16t | %-32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %m%n

# Root logger
log4j2.rootLogger.level = WARN
log4j2.rootLogger.appenderRefs = out, osgi
log4j2.rootLogger.appenderRef.out.ref = LOGFILE
log4j2.rootLogger.appenderRef.osgi.ref = OSGI

# Karaf Shell logger
log4j2.logger.shell.name = org.apache.karaf.shell.support
log4j2.logger.shell.level = OFF
log4j2.logger.shell.appenderRefs = stdout
log4j2.logger.shell.appenderRef.stdout.ref = STDOUT

# Security audit logger
log4j2.logger.audit.name = org.apache.karaf.jaas.modules.audit
log4j2.logger.audit.level = INFO
log4j2.logger.audit.additivity = false
log4j2.logger.audit.appenderRefs = audit
log4j2.logger.audit.appenderRef.audit.ref = AUDIT

# openHAB specific logger configuration
log4j2.logger.openhab.name = org.openhab
log4j2.logger.openhab.level = INFO

log4j2.logger.smarthome.name = org.eclipse.smarthome
log4j2.logger.smarthome.level = INFO

log4j2.logger.smarthomeItemStateEvent.name = smarthome.event.ItemStateEvent
log4j2.logger.smarthomeItemStateEvent.level = ERROR

log4j2.logger.smarthomeItemAddedEvent.name = smarthome.event.ItemAddedEvent
log4j2.logger.smarthomeItemAddedEvent.level = ERROR

log4j2.logger.smarthomeItemRemovedEvent.name = smarthome.event.ItemRemovedEvent
log4j2.logger.smarthomeItemRemovedEvent.level = ERROR

log4j2.logger.smarthomeThingStatusInfoEvent.name = smarthome.event.ThingStatusInfoEvent
log4j2.logger.smarthomeThingStatusInfoEvent.level = ERROR

log4j2.logger.smarthomeThingAddedEvent.name = smarthome.event.ThingAddedEvent
log4j2.logger.smarthomeThingAddedEvent.level = ERROR

log4j2.logger.smarthomeThingRemovedEvent.name = smarthome.event.ThingRemovedEvent
log4j2.logger.smarthomeThingRemovedEvent.level = ERROR

log4j2.logger.smarthomeInboxUpdatedEvent.name = smarthome.event.InboxUpdatedEvent
log4j2.logger.smarthomeInboxUpdatedEvent.level = ERROR

log4j2.logger.events.name = smarthome.event
log4j2.logger.events.level = INFO
log4j2.logger.events.additivity = false
log4j2.logger.events.appenderRefs = event, osgi
log4j2.logger.events.appenderRef.event.ref = EVENT
log4j2.logger.events.appenderRef.osgi.ref = OSGI

log4j2.logger.jupnp.name = org.jupnp
log4j2.logger.jupnp.level = ERROR

log4j2.logger.jmdns.name = javax.jmdns
log4j2.logger.jmdns.level = ERROR

# This suppresses all Maven download issues from the log when doing feature installations
# as we are logging errors ourselves in a nicer way anyhow.
log4j2.logger.paxurl.name = org.ops4j.pax.url.mvn.internal.AetherBasedResolver
log4j2.logger.paxurl.level = ERROR

# Filters known issues of pax-web (issue link to be added here).
# Can be removed once the issues are resolved in an upcoming version.
log4j2.logger.paxweb.name = org.ops4j.pax.web.pax-web-runtime
log4j2.logger.paxweb.level = OFF

# Filters known issues of lsp4j, see
# Can be removed once the issues are resolved in an upcoming version.
log4j2.logger.lsp4j.name = org.eclipse.lsp4j
log4j2.logger.lsp4j.level = OFF

# Filters known issues of KarServiceImpl, see
# Can be removed once the issues are resolved in an upcoming version.
log4j2.logger.karservice.name = org.apache.karaf.kar.internal.KarServiceImpl
log4j2.logger.karservice.level = ERROR

# Filters warnings about small thread pools.
# The thread pool is kept small intentionally for supporting resource constrained hardware.
log4j2.logger.threadpoolbudget.name = org.eclipse.jetty.util.thread.ThreadPoolBudget
log4j2.logger.threadpoolbudget.level = ERROR

# Appenders configuration

# Console appender not used by default (see log4j2.rootLogger.appenderRefs)
log4j2.appender.console.type = Console
log4j2.appender.console.name = STDOUT
log4j2.appender.console.layout.type = PatternLayout
log4j2.appender.console.layout.pattern = %d{HH:mm:ss.SSS} [%-5.5p] [%-36.36c] - %m%n

# Rolling file appender
log4j2.appender.out.type = RollingRandomAccessFile
log4j2.appender.out.name = LOGFILE
log4j2.appender.out.fileName = ${openhab.logdir}/openhab.log
log4j2.appender.out.filePattern = ${openhab.logdir}/openhab.log.%i
log4j2.appender.out.immediateFlush = true
log4j2.appender.out.append = true
log4j2.appender.out.layout.type = PatternLayout
log4j2.appender.out.layout.pattern = %d{yyyy-MM-dd HH:mm:ss.SSS} [%-5.5p] [%-36.36c] - %m%n
log4j2.appender.out.policies.type = Policies
log4j2.appender.out.policies.size.type = SizeBasedTriggeringPolicy
log4j2.appender.out.policies.size.size = 16MB

# Event log appender
log4j2.appender.event.type = RollingRandomAccessFile
log4j2.appender.event.name = EVENT
log4j2.appender.event.fileName = ${openhab.logdir}/events.log
log4j2.appender.event.filePattern = ${openhab.logdir}/events.log.%i
log4j2.appender.event.immediateFlush = true
log4j2.appender.event.append = true
log4j2.appender.event.layout.type = PatternLayout
log4j2.appender.event.layout.pattern = %d{yyyy-MM-dd HH:mm:ss.SSS} [%-26.26c] - %m%n
log4j2.appender.event.policies.type = Policies
log4j2.appender.event.policies.size.type = SizeBasedTriggeringPolicy
log4j2.appender.event.policies.size.size = 16MB

# Audit file appender
log4j2.appender.audit.type = RollingRandomAccessFile
log4j2.appender.audit.name = AUDIT
log4j2.appender.audit.fileName = ${openhab.logdir}/audit.log
log4j2.appender.audit.filePattern = ${openhab.logdir}/audit.log.%i
log4j2.appender.audit.append = true
log4j2.appender.audit.layout.type = PatternLayout
log4j2.appender.audit.layout.pattern = %d{yyyy-MM-dd HH:mm:ss.SSS} [%-5.5p] [%-36.36c] - %m%n
log4j2.appender.audit.policies.type = Policies
log4j2.appender.audit.policies.size.type = SizeBasedTriggeringPolicy
log4j2.appender.audit.policies.size.size = 8MB

# OSGi appender
log4j2.appender.osgi.type = PaxOsgi
log4j2.appender.osgi.name = OSGI
log4j2.appender.osgi.filter = *

It should be more or less in its original state :wink:


First of all: a Happy New Year to you.
Thanks for the config file. It works! Obviously openHAB cannot connect to InfluxDB. Here are the relevant log lines after an openHAB restart:

2019-01-02 11:28:48.641 [ERROR] [.internal.InfluxDBPersistenceService] - database connection failed
retrofit.RetrofitError: Connection refused (Connection refused)
	at retrofit.RetrofitError.networkError( ~[241:org.openhab.persistence.influxdb:1.12.0]
	at retrofit.RestAdapter$RestHandler.invokeRequest( [241:org.openhab.persistence.influxdb:1.12.0]
	at retrofit.RestAdapter$RestHandler.invoke( [241:org.openhab.persistence.influxdb:1.12.0]
	at org.influxdb.impl.$ Source) [241:org.openhab.persistence.influxdb:1.12.0]
	at [241:org.openhab.persistence.influxdb:1.12.0]
	at org.openhab.persistence.influxdb.internal.InfluxDBPersistenceService.checkConnection( [241:org.openhab.persistence.influxdb:1.12.0]
	at org.openhab.persistence.influxdb.internal.InfluxDBPersistenceService.activate( [241:org.openhab.persistence.influxdb:1.12.0]

and a whole bunch more “at” lines.
I checked username and password in both openHAB and InfluxDB: both OK. InfluxDB knows my database named openhab.

I restarted InfluxDB and Grafana in Docker and then restarted openHAB again. The log output changed:

2019-01-02 11:59:40.546 [ERROR] [.internal.InfluxDBPersistenceService] - database connection failed
retrofit.RetrofitError: No route to host (Host unreachable)

In influxdb.cfg the relevant line is:


This is the same as what the Cronograf admin shows for InfluxDB…

Please take a look at the Karaf console:

myuser@openhab2:~$ openhab-cli console

Logging in as openhab

                          __  _____    ____
  ____  ____  ___  ____  / / / /   |  / __ )
 / __ \/ __ \/ _ \/ __ \/ /_/ / /| | / __  |
/ /_/ / /_/ /  __/ / / / __  / ___ |/ /_/ /
\____/ .___/\___/_/ /_/_/ /_/_/  |_/_____/
    /_/                        2.5.0-SNAPSHOT
                               Build #1466

Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
Hit '<ctrl-d>' or type 'system:shutdown' or 'logout' to shutdown openHAB.

openhab> config:edit org.openhab.influxdb
openhab> config:property-list
   db = openhab2
   password = thepassword
   retentionPolicy = default
   url =
   user = openhab
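If the url property turns out to be wrong, it can be changed directly from the console using Karaf's standard config commands. A sketch (the value shown is just an example; use whatever address your InfluxDB actually listens on):

```
openhab> config:edit org.openhab.influxdb
openhab> config:property-set url http://localhost:8086
openhab> config:update
```

config:update writes the change back to the service configuration, so the persistence service should pick it up without a full restart.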

I changed the URL to “localhost:8086”. The reason is most probably that the Synology has two IP addresses, in my case .38 and .27. When installing programs it is not always clear which address is chosen. Up to now this made no difference for any program running on my server; I can reach them either way. But in this special case, combining openHAB and InfluxDB, it seems to matter.
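For anyone hitting the same two-address situation: a quick way to see which local address a service is actually bound to is to inspect the listening sockets. A minimal sketch (the helper name is mine; 8086 is the default InfluxDB HTTP port assumed throughout this thread):

```shell
# listening_on PORT: print the local address(es) something is
# listening on for the given TCP port, using ss if available,
# falling back to netstat.
listening_on() {
  if command -v ss >/dev/null 2>&1; then
    ss -tln | awk -v p=":$1" '$4 ~ p"$" {print $4}'
  else
    netstat -tln 2>/dev/null | awk -v p=":$1" '$4 ~ p"$" {print $4}'
  fi
}

# listening_on 8086
```

An output like 127.0.0.1:8086 means only localhost connections are accepted, while 0.0.0.0:8086 or a specific LAN address tells you which URL openHAB must use.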
Anyway - many, many thanks for your help :grinning::grinning::grinning::grinning: I really learned a lot about openhab

Hi @Udo_Hartmann
I have a similar problem. I just installed a fresh openHAB 2.4 and used openhabian-config to install InfluxDB and Grafana… I believe everything is set up correctly, but I get no data in Grafana, and I can’t tell whether it’s openHAB, InfluxDB or Grafana doing something wrong.
I took a copy of your org.ops4j.pax.logging.cfg above… but I get no errors or anything…
I have turned on DEBUG logging for InfluxDB in Karaf:

openhab> log:set DEBUG org.openhab.persistence.influxdb

But still nothing…
I have a feeling InfluxDB isn’t running at all, but I don’t know how to check that…
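One way to check whether InfluxDB is running is to ask the service manager (sudo systemctl status influxdb on a systemd-based install such as openHABian) or to probe its HTTP API: InfluxDB 1.x answers GET /ping with HTTP 204 when it is up. A sketch (helper name is mine; localhost:8086 is the default assumed in this thread):

```shell
# influx_up HOST PORT: succeed (return 0) if an InfluxDB 1.x
# instance answers GET /ping with HTTP 204 within 2 seconds.
influx_up() {
  code=$(curl -s --max-time 2 -o /dev/null -w '%{http_code}' "http://$1:$2/ping")
  [ "$code" = "204" ]
}

if influx_up localhost 8086; then
  echo "InfluxDB is up"
else
  echo "InfluxDB is not reachable on localhost:8086"
fi
```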

I know my cfg and persistence files are okay.

password=password <- (edited by me)


Strategies {
    everyMinute : "0 * * * * ?"
    everyHour : "0 0 * * * ?"
    everyDay : "0 0 0 * * ?"
    every15min : "0 0/15 * * * ?"
    default = everyMinute
}

Items {
    stortBadTemperature, stortBadFugt, SoveTemperature, koekkenTemperature : strategy = restoreOnStartup, everyChange, everyMinute
}

Grafana is set up as in the screenshot. I get no error when testing the connection…

I really have no idea how to fix this, since I don’t actually get an error or anything similar.
Any advice is appreciated.

Just an idea to locate the error source: go to Cronograf (localhost:3004), choose Queries and type
SELECT <your_persistent_item> FROM <your_database>.
If Cronograf starts searching for something, it means InfluxDB knows your item, i.e. it is running and connected to openHAB.
Any hints in openhab.log when restarting openHAB? (This is what I’ve learned from Udo… :grinning:)
I didn’t tick the Basic Auth checkbox in Grafana, but I’m not sure whether it makes a difference.

I don’t know what Cronograf is… Is it Grafana, a shell?? I have no idea how to get to it.

No, nothing in openhab.log besides it loading the persistence file.

I just removed it… It made no difference.

I don’t know which version of InfluxDB you use. Chronograf (sorry for the misspelling) is the web UI of InfluxDB. I have version 1.7.2, running in a Docker container on a Synology. With the Raspi I have no experience. Try port 3004.

I know what Cronograf is, now :slight_smile:

good luck.

Thx… I can’t even install the damn thing! :frowning:
Got the file all right (version 1.7.5) for armhf (using an RPi 3B+), but I have no idea how to install it. And the docs are useless for this situation; heck, they’re not even updated…

Gave up on Chronograf… I’m too stupid for Linux… The file I got was a zip file, not a deb file… It’s only available for 64-bit (no ARM support)…

So I’m back to scratch.

Got a step further…
InfluxDB is running all right:

> [00:11:52] openhabian@openHABianPi:/$ influx -database openhab_db -password xxxxxx -username admin
Connected to http://localhost:8086 version 1.7.2
InfluxDB shell version: 1.7.2
Enter an InfluxQL query
> show databases
name: databases
> show users
user    admin
----    -----
admin   true
openhab false
grafana false

Database and user seem to be created as well…
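A further check at this point would be whether openHAB is writing anything at all: every persisted item should show up as a measurement in the database, either via SHOW MEASUREMENTS in the influx shell or over the HTTP API. A sketch using curl (helper name is mine; database and user names are the ones from this thread, so adjust to yours):

```shell
# show_measurements DB USER PASS: list the measurements in database
# DB via the InfluxDB 1.x HTTP query API on localhost:8086.
show_measurements() {
  curl -s --max-time 2 -G 'http://localhost:8086/query' \
       -u "$2:$3" \
       --data-urlencode "db=$1" \
       --data-urlencode 'q=SHOW MEASUREMENTS'
}

# show_measurements openhab_db admin 'xxxxxx'
```

An empty result set here, despite a running database and correct credentials, points at openHAB not sending data at all (e.g. a missing persistence add-on, as it turned out below).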
Something seems to be missing between openHAB and InfluxDB… Hmm… the binding??

It was the missing binding… Totally forgot about that :frowning:
Things seem to be running just fine now…