InfluxDB+Grafana persistence and graphing


I will ask that question in the HABPanel topic, I just thought it might be a Grafana problem.

Regarding retention policies, since I’m not so fluent in English, it will take some time to write a meaningful post on that. I’m at work right now, so I will probably take some time and write it over the weekend. I haven’t done much research, but from what I’ve read, the only way to use DURATION is to delete rows from the database. I haven’t found any way to shrink the database (especially to shrink it partially). I will take a more detailed look at the InfluxDB documentation and various posts on this topic, and if I find any way to do it, I will post it here.

Best regards,


No problem, any help is appreciated - even if you can only dig up a few details. So with the weekend idea in mind, let us know what you found in the time available. The next user will come along and add further details. Anything is better than nothing :wink:

Guys, simple question: how do I add a value from the past in InfluxDB 1.3?
E.g. in the CLI:

INSERT Gaz value=200 1490371853000000

I got:

ERR: {"error":"partial write: points beyond retention policy dropped=1"}

I have modified retention:

> show retention POLICIES
> name    duration  shardGroupDuration replicaN default
> ----    --------  ------------------ -------- -------
> autogen 9600h0m0s 168h0m0s           1        false
> default 9600h0m0s 480h0m0s           1        true

but it does not allow me to add older data … why?
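One possible explanation (an assumption, since the intended precision isn’t stated in the post): the influx CLI interprets line-protocol timestamps as nanoseconds by default, so 1490371853000000 is read as a date in January 1970, which falls far outside the 9600h retention window and gets dropped. If the timestamp was meant in microseconds, either switch the CLI precision or append three zeros to make it nanoseconds:

```
-- tell the CLI that timestamps are in microseconds
> precision u
> INSERT Gaz value=200 1490371853000000

-- or keep the default (nanoseconds) and write
> INSERT Gaz value=200 1490371853000000000
```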

Hi all,
I’m trying to find a way to send data from OH to InfluxDB to build a table in Grafana with multiple columns.
I mean something like this:

Timestamp Status User
29-10-17 19:35 Armed John
29-10-17 20:12 Disarmed William

I noticed that you can use JSON data in the Table Panel of Grafana. So I tried to save the information in an item like this:

postUpdate(Tex_Log_Alarm_Status_V3, "{ \"status\":\"Armed\", \"User\":\"John\" }")

But I overlooked the fact that the item is not stored as “pure” JSON, since timestamp information is added.
The end result is not what I hoped for:

So did anyone figure out how to create a table with multiple columns in Grafana with OH data?

Unfortunately I cannot get an image displayed as an item; that’s what I have:

Image KzChart <line> url="" label=Chart refresh=60000 (FF_Kz)

but it does not show up :frowning:

This is so annoying, once again… After setting up Grafana and InfluxDB, which are working very well now, it is once more openHAB(2) that refuses to cooperate…

I only want to integrate an image item into one of my groups, which shouldn’t be a difficult task. But now I’ve been sitting here trying to figure out how to achieve that for longer than it took to set up Grafana and InfluxDB…! That’s all I want to achieve (photoshopped):

So, I finally managed to display the image on the first view by entering the below line into my home.sitemap:

Image item=ChartKz url="" refresh=60000

So how can I now add this picture to one of my groups in my home.items file? It makes no sense to display the chart on the first view when I open my dashboard. Now it looks like this:

but should be displayed here:

Group Küche
Group Schlafzimmer
Group Wohnzimmer
Group Kinderzimmer
-> here
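One common pattern for this (sketched here; the group item name gKueche and the icon are assumptions, adjust them to your own items) is to replace the plain Group element with a Text element carrying a nested block, so the chart can be placed on the group’s own page:

```
Text item=gKueche label="Küche" icon="kitchen" {
    Frame {
        Image item=ChartKz refresh=60000
    }
}
```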

Hello @ThomDietrich!

Sorry for the delay; I really haven’t had any free time to write this until now.

I’ve done some research on InfluxDB database shrinking, and, if I understood the material correctly, InfluxDB has used a number of storage engines over time (different from version to version - LevelDB, RocksDB, BoltDB), but the latest version (1.3 at the time of writing) uses the Time Structured Merge Tree (their own storage engine). This storage engine shrinks data automatically, but only for shard groups that have already expired and haven’t had any new data written to or deleted from them.

Now, let’s get to the basics. An InfluxDB database is a collection of the following things:

  • data points
  • series
  • measurements
  • tags
  • retention policies

Besides that, there are shards and shard groups - you never actually deal with them directly; they are just the way data is stored within a database. A shard is one set of data (with all the stuff that defines it - data points, series, measurements…), and a shard group is a collection of separate shards whose expiration, redundancy and distribution are defined by a retention policy.

From what I’ve read in the documentation, a database can have multiple retention policies defined (besides the default one, which applies to the database as a whole), but those extra retention policies have to be selected explicitly while writing or reading data, so I guess this cannot be applied to the openHAB InfluxDB persistence.
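For reference, that per-query selection works by fully qualifying the measurement as database.retention_policy.measurement (the names below are made up for illustration):

```
SELECT * FROM "openhab_db"."autogen"."Gaz" WHERE time > now() - 7d
```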

A retention policy, in the openHAB use-case scenario, should be defined when creating the database. An example of creating a database with a custom retention policy is:
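A statement matching that description could look like this (the policy name openhab_policy is an assumption for illustration):

```
CREATE DATABASE "OpenHABDatabase" WITH DURATION 180d REPLICATION 1 SHARD DURATION 7d NAME "openhab_policy"
```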


In this example, we create OpenHABDatabase, which stores data in shard groups with a 7-day expiration and retains data for 180 days. Shard expiration means that, after creating the database, a new shard group will be created; the database will store data (shards) in it for 7 days, and after that period (SHARD DURATION) is over, it will create a new shard group and start storing data there. The DURATION parameter (180d) means that all data from the last 180 days is kept in the database. After that period is over, the database starts dropping (deleting) the oldest shard groups (first the first 7-day shard group, then the second, and so on). If the DURATION parameter is not supplied, duration is set to infinity (all data is kept).

The important thing to mention here is the influxdb.cfg parameter retentionPolicy, which should be set to the name of the retention policy you created while creating the database, instead of autogen.
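For illustration, services/influxdb.cfg would then look something like this (all values here are assumptions, adjust them to your own setup):

```
url=http://127.0.0.1:8086
user=openhab
password=changeme
db=OpenHABDatabase
retentionPolicy=openhab_policy
```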

@ThomDietrich I hope this post makes sense. Of course, you can change it in any way you like if you wish to adapt it for the tutorial in the first post. If you have any additional questions, or some parts of this post don’t make sense, feel free to ask, and I will try to explain them more thoroughly. There are also redundancy (for data safety, in case a hard disk or something else fails) and distribution (for speeding up reads and writes, in case your hardware can’t handle them without slowing everything else down) options when creating a retention policy, but right now I don’t think I need them, so I haven’t put much effort into researching them.

Best regards,


Wondering if anyone here can help me with a small problem.

I recently installed OH2 on my Synology as an SPK package and added InfluxDB and Grafana as Docker images to be able to build some nice graphs. I followed the tutorial in this post and everything is working so far.

Now I wanted to be able to access grafana.ini from my desktop, so I tried to map /etc/grafana from the Docker container to a folder on my NAS.
But if I restart the container afterwards, it starts toggling and the log file is flooded with

 Failed to parse /etc/grafana/grafana.ini, open /etc/grafana/grafana.ini: permission denied%!(EXTRA []interface {}=[]) 

I think this is because Grafana is trying to access the folder with the standard admin/admin account, but I found no solution to change this.
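If it is an ownership problem, one approach worth trying (sketched with placeholders; verify the uid on your own container, it differs between Grafana image versions) is to check which user Grafana runs as inside the container and then give that uid ownership of the mapped folder on the NAS:

```
# find the uid/gid of the grafana user inside the container
docker exec <container-name> id grafana

# then, on the NAS host, hand that uid ownership of the mapped folder
sudo chown -R <uid>:<gid> /path/to/mapped/grafana
```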

So, can anyone help me?

I’m not sure how to execute the script to inject data into the InfluxDB database. How do I access my openHAB ‘host’? Can you provide a description or example of an openHAB host?
Thanks much,

The host is the machine you are running openHAB on.
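For anyone scripting this by hand, the injection boils down to POSTing line-protocol strings to the InfluxDB HTTP API on that host. A minimal sketch of building such a point (the measurement and field names are assumptions, not taken from the tutorial’s script):

```python
import time

def line_protocol_point(measurement, value, ts_s=None):
    """Build an InfluxDB line-protocol string with a nanosecond timestamp."""
    # InfluxDB expects nanoseconds by default, so scale up from seconds
    ts_ns = (ts_s if ts_s is not None else int(time.time())) * 1_000_000_000
    return f"{measurement} value={value} {ts_ns}"

# e.g. a gas reading taken at Unix time 1490371853 (March 2017)
print(line_protocol_point("Gaz", 200, 1490371853))
```

The resulting string can then be POSTed to the host’s write endpoint, e.g. http://&lt;host&gt;:8086/write?db=openhab.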


Thank you for the reply. I was getting hung up on thinking I had to run the script from within the InfluxDB web interface, but I could no longer access that GUI… it wasn’t working. I realize now that version 1.3 of InfluxDB no longer supports the web interface. I also reread the Python tutorial and figured out how to run the Python script from an SSH terminal session on my host. So thanks again for taking the time to reply.



I have some problems with OH2, InfluxDB and Grafana.

My Setup:

  • newest stable OH2
  • influx 1.3.7
  • grafana 4.6.2

I can connect via influx to my db (openhab_db), but the changed values won’t get written into the db!

I have the following cfgs


# The database URL, e.g. or .
# Defaults to:

# The name of the database user, e.g. openhab.
# Defaults to: openhab

# The password of the database user.

# The name of the database, e.g. openhab.
# Defaults to: openhab



Strategies {
    everyMinute : "0 * * * * ?"
    everyHour   : "0 0 * * * ?"
    everyDay    : "0 0 0 * * ?"
}

Items {
    schlafzimmer_Temperatur : strategy = everyMinute, everyChange
}

Number schlafzimmer_Temperatur "Temperatur" <temperature> ( gSchlafzimmer, gSchlafzimmerTemperatur) {channel="homematic:HG-MiSensorHT:1ecedb21:MI019CE235:1#TEMPERATURE"}

I activated the log for InfluxDB and it shows the following lines:

2017-11-19 18:34:07.855 [DEBUG] [.InfluxDBPersistenceServiceActivator] - InfluxDB persistence bundle has been started.
2017-11-19 18:34:07.950 [DEBUG] [.internal.InfluxDBPersistenceService] - influxdb persistence service activated
2017-11-19 18:34:10.137 [DEBUG] [.internal.InfluxDBPersistenceService] - database status is OK, version is 1.3.7
2017-11-19 18:34:10.185 [DEBUG] [org.openhab.persistence.influxdb    ] - ServiceEvent REGISTERED - {org.openhab.core.persistence.PersistenceService, org.openhab.core.persistence.QueryablePersistenceService}={$
2017-11-19 18:34:10.193 [DEBUG] [org.openhab.persistence.influxdb    ] - BundleEvent STARTED - org.openhab.persistence.influxdb

In Grafana I created an InfluxDB datasource like in all the tutorials. After that, I created a new graph, but in the query section I cannot find the temperature item :frowning:
I hope you can help me; I have been sitting here for about 7 hours now …

As the service is called influxdb, the persistence file has to be named influxdb.persist.


Is anybody seeing incredibly high CPU usage from these two? I need to do some more diagnostics to see whether it’s InfluxDB or Grafana causing the spike. I’m running on an Ubuntu Linux VM.

Not really. I persist ~200 item states on every change and I don’t see more than 1% CPU utilization from either of these two processes. Running Debian 8 on a medium-spec laptop.


Everything will depend on hardware. I am running on an RPi and I do see spikes. InfluxDB does some data storage optimization at regular intervals that causes this. A bigger concern would be memory, though.

I’ve just created a new VM for another task and that’s also using max CPU constantly. I can’t connect to either through SSH and the console is unresponsive. It must be an environment thing.

IIRC, InfluxDB writes log messages for every single metric it receives, which puts heavy load on the SD cards of RPis. There’s a setting to disable this, which helped me reduce load on the system (iowait).

Please specify: how did you do it?

I think that he is referring to something like:
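A sketch of the relevant influxdb.conf fragment (the exact section layout may differ per version; check your own config before changing it):

```
[http]
  # stop InfluxDB from logging every single HTTP write request
  log-enabled = false
```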

(not sure if @waitz_sebastian is using the same settings)


Thank you!
