InfluxDB+Grafana persistence and graphing


(Ham Wong) #407

Hi Josar,

Thanks for your response. Did you also set up port forwarding on your router for Grafana, so that it is reachable via your public IP? I am looking for a way to show Grafana when I connect to my home openHAB server through openHAB Cloud; so far I haven't gotten it to work.

Local access from mobile I have already solved, but of course your way is better.

(joris) #408

It would be great if myopenhab could function as a proxy not only for the pages served directly by openHAB, but also for some of the most commonly used supporting software, like Grafana. But it seems this is out of scope for myopenhab.

(davorf) #409


I'm not sure if this is the right place to ask, but I was wondering why the HABPanel iFrame refreshes Grafana graphs every 5 seconds, even though I've set the refresh interval inside the widget to 60 seconds (I've tried larger intervals too).

@ThomDietrich: I've seen you discussing retention policies in this topic, but I haven't found any example (it's a pretty big topic - honestly, I haven't read every post). Since I want to display CPU and memory usage, and I'm persisting it on every change, my database grows by approximately 1.2 MB per hour, which would be about 10 GB after one year. I looked into retention policy examples, and this is what I'm using right now (everything seems to be working OK):

When creating InfluxDB database, I’ve created it with non-default Retention Policy and Shard Group Duration:
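
In InfluxQL, that command looks roughly like this (a sketch reconstructed from the values mentioned in this post: 180-day duration, 1-day shard group duration, policy name OpenHAB; the database name openhab_db is made up):

CREATE DATABASE "openhab_db" WITH DURATION 180d REPLICATION 1 SHARD DURATION 1d NAME "OpenHAB"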


After that, you need to change retentionPolicy=autogen to retentionPolicy=OpenHAB in influxdb.cfg (and, of course, use OpenHAB instead of autogen/default in Grafana queries).

The way I understand it, the database will grow for 180 days (DURATION), and after that period it will start deleting the oldest shard groups (since the shard group duration is 1 day, it will delete the first day's data on the 181st day, the second day's data on the 182nd day, and so on).

Best regards,

( ) #410

The question regarding HABPanel might be better placed in the HABPanel category. I for one can't comment on that.

Regarding retention policies: I don't use any, and no one has posted a good example so far. As you've obviously done some research, would you be able to give a short overview? Something I could then copy into the first posting? A short intro and 2-3 examples?
What I don't "like" about your policy is that you are simply dropping values. I would love to present, as an example, a strategy that reduces data size over time…

(davorf) #411


I will ask that question in the HABPanel topic; I just thought it might be a Grafana problem.

Regarding retention policies: since I'm not so fluent in English, it will take some time to write a meaningful post on that. I'm at work right now, so I will probably write it over the weekend. I haven't done much research, but from what I've read, the only way to use DURATION is to let it delete rows from the database. I haven't found any way to shrink the database (especially to shrink it partially). I will take a more detailed look at the InfluxDB documentation and various posts on this topic, and if I find a way to do it, I will post it here.

Best regards,

( ) #412

No problem, any help is appreciated. Even if you can only dig up a few details, that would already be valuable. So, with the weekend idea in mind, let us know what you find in the time available; the next user will come along and add further details. Anything is better than nothing :wink:

(Marcin) #413

Guys, a simple question: how do I add a value from the past in InfluxDB 1.3?
I.e. in the CLI:

INSERT Gaz value=200 1490371853000000

I got:

ERR: {"error":"partial write: points beyond retention policy dropped=1"}

I have modified retention:

> show retention POLICIES
> name    duration  shardGroupDuration replicaN default
> ----    --------  ------------------ -------- -------
> autogen 9600h0m0s 168h0m0s           1        false
> default 9600h0m0s 480h0m0s           1        true

but it does not allow me to add older data… why?
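
Edit: a possible explanation (an assumption on my part): by default the influx CLI interprets timestamps in nanoseconds, so a microsecond value like 1490371853000000 is read as a date in January 1970 - far outside the 9600h retention window, hence the "beyond retention policy" error. Telling the CLI the intended precision first should avoid that:

precision u
INSERT Gaz value=200 1490371853000000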

(Dries) #414

Hi all,
I’m trying to find a way to send data from OH to InfluxDB to build a table in Grafana with multiple columns.
I mean something like this:

Timestamp Status User
29-10-17 19:35 Armed John
29-10-17 20:12 Disarmed William

I noticed that you can use JSON data in the Table Panel of Grafana. So I tried to save the information in an item like this:

postUpdate(Tex_Log_Alarm_Status_V3, "{ \"status\":\"Armed\", \"User\":\"John\" }")

But I overlooked the fact that the item is not stored as “pure” JSON, since timestamp information is added.
The end result is not what I hoped for:

So did anyone figure out how to create a table with multiple columns in Grafana with OH data?
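
Edit: one idea I might try (an untested sketch; the database and measurement names are made up): bypass item persistence for this and write a line-protocol point straight to InfluxDB from a rule, so that user and status end up as separate columns in Grafana:

sendHttpPostRequest("http://localhost:8086/write?db=openhab_db", "text/plain", 'alarm_events,user=John status="Armed"')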

(John Doe) #415

Unfortunately I cannot get an image displayed as an item; this is what I have:

Image KzChart <line> url="" label=Chart refresh=60000 (FF_Kz)

but it does not show up :frowning:

This is so annoying, once again… After setting up Grafana and InfluxDB, which are working very well now, the missing piece is once more getting openHAB (2) to cooperate…

I only want to add an image item to one of my groups; that shouldn't be a difficult task. But I've now been sitting here trying to figure out how to achieve that for longer than setting up Grafana and InfluxDB took…! This is all I want to achieve (photoshopped):

So, I finally managed to display the image on the first view by entering the line below into my home.sitemap:

Image item=ChartKz url="" refresh=60000

So how can I now add this picture to one of my groups? It makes no sense to display the chart on the first view when I open my dashboard. Now it looks like this:

but should be displayed here:

Group Küche
Group Schlafzimmer
Group Wohnzimmer
Group Kinderzimmer
-> here
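
Edit: from what I understand, sitemap elements other than Group can be nested, so something like this (a sketch; the label is a guess) might place the chart inside the Kinderzimmer view:

Text label="Kinderzimmer" {
    Image item=ChartKz url="" refresh=60000
}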

(davorf) #416

Hello @ThomDietrich!

Sorry for the delay; I haven't had any free time to write this until now.

I've done some research on shrinking an InfluxDB database and, if I understood what I read correctly, InfluxDB has used a number of storage engines over time (different from version to version - LevelDB, RocksDB, BoltDB), but the latest version (1.3 at the time of writing) uses the Time Structured Merge Tree (their own storage engine). This storage engine compacts data automatically, but only for shard groups that have already expired and haven't had any new data written to, or deleted from, them.

Now, let's get to the basics. An InfluxDB database is a collection of the following things:

  • data points
  • series
  • measurements
  • tags
  • retention policies

Besides that, there are shards and shard groups - you never actually deal with them directly; they are just the way data is stored within a database. A shard is one set of data (with all the stuff that defines it - data points, series, measurements…), and a shard group is a collection of separate shards that has its expiration, redundancy, and distribution defined by a retention policy.

From what I've read in the documentation, a database can have multiple retention policies defined (besides the default one, which applies to the database as a whole), but those additional retention policies have to be specified when writing or reading data, so I guess this cannot be used with the openHAB InfluxDB persistence.

For the openHAB use case, the retention policy should be defined when creating the database. An example of creating a database with a custom retention policy:
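
A sketch of such a statement, matching the values described below (the retention policy name OpenHAB is an assumption):

CREATE DATABASE "OpenHABDatabase" WITH DURATION 180d REPLICATION 1 SHARD DURATION 7d NAME "OpenHAB"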


In this example, we create OpenHABDatabase, which stores data in shard groups with a 7-day expiration and retains data for 180 days. Shard expiration means that, after the database is created, a new shard group is created; the database stores data (shards) in it for 7 days, and once that period (SHARD DURATION) is over, it creates a new shard group and starts storing data there. The DURATION parameter (180d) means that all data from the last 180 days is kept in the database. After that period, the database starts dropping (deleting) the oldest shard groups (first the first 7-day shard group, then the second, and so on). If the DURATION parameter is not supplied, the duration is set to infinity (all data is kept).

The important thing to mention here is the influxdb.cfg parameter retentionPolicy, which should be set to the name of the retention policy you created along with the database, instead of autogen.
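
For example, in services/influxdb.cfg (all values here are placeholders; adjust them to your setup):

url=http://127.0.0.1:8086
user=openhab
password=changeme
db=OpenHABDatabase
retentionPolicy=OpenHAB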

@ThomDietrich I hope this post makes sense. Of course, you can change it in any way you like if you wish to adapt it for the tutorial in the first post. If you have any additional questions, or some parts of this post don't make sense, feel free to ask and I will try to explain more thoroughly. There are also redundancy options (for data safety, in case a hard disk or something else fails) and distribution options (for speeding up reads and writes, in case your hardware can't handle them without slowing everything else down) when creating a retention policy, but right now I don't think I need them, so I haven't put much effort into researching them.

Best regards,

(Raven) #417

Wondering if anyone here can help me with a small problem.

I recently installed OH2 on my Synology as an SPK package and added InfluxDB and Grafana as Docker images to be able to build some nice graphs. I followed the tutorial in this post and everything is working so far.

Now I wanted to be able to access grafana.ini from my desktop, so I tried to map /etc/grafana from the Docker container to a folder on my NAS.
But if I restart the container afterwards, it starts toggling and the log file is flooded with

 Failed to parse /etc/grafana/grafana.ini, open /etc/grafana/grafana.ini: permission denied%!(EXTRA []interface {}=[]) 

I think this is because the Grafana process inside the container lacks permission on the mapped folder, but I have found no way to change this.

Can anyone help me?

(EH) #418

I'm not sure how to execute the script to inject data into the InfluxDB database. How do I access my openHAB 'host'? Can you provide a description or an example of an openHAB host?
Thanks much,

(Rich Koshak) #419

The host is the machine you are running openHAB on.

(EH) #420

Thank you for the reply. I was getting hung up on thinking I had to run the script from within the InfluxDB web interface, but I could no longer access that GUI - it wasn't working. I realize now that InfluxDB 1.3 no longer supports the web admin interface. I also re-read the Python tutorial and figured out how to run the Python script from an SSH terminal session on my host. So thanks again for taking the time to reply.

(Patrik) #421


I have some problems with OH2, InfluxDB, and Grafana.

My Setup:

  • newest Stable OH2
  • influx 1.3.7
  • grafana 4.6.2

I can connect to my DB (openhab_db) via the influx CLI, but the changed values won't get written into the DB!

I have the following configs:


# The database URL, e.g. or .
# Defaults to:

# The name of the database user, e.g. openhab.
# Defaults to: openhab

# The password of the database user.

# The name of the database, e.g. openhab.
# Defaults to: openhab



Strategies {
    everyMinute : "0 * * * * ?"
    everyHour   : "0 0 * * * ?"
    everyDay    : "0 0 0 * * ?"
}

Items {
    schlafzimmer_Temperatur : strategy = everyMinute, everyChange
}

Number schlafzimmer_Temperatur "Temperatur" <temperature> ( gSchlafzimmer, gSchlafzimmerTemperatur) {channel="homematic:HG-MiSensorHT:1ecedb21:MI019CE235:1#TEMPERATURE"}

I activated logging for InfluxDB and it shows the following lines:

2017-11-19 18:34:07.855 [DEBUG] [.InfluxDBPersistenceServiceActivator] - InfluxDB persistence bundle has been started.
2017-11-19 18:34:07.950 [DEBUG] [.internal.InfluxDBPersistenceService] - influxdb persistence service activated
2017-11-19 18:34:10.137 [DEBUG] [.internal.InfluxDBPersistenceService] - database status is OK, version is 1.3.7
2017-11-19 18:34:10.185 [DEBUG] [org.openhab.persistence.influxdb    ] - ServiceEvent REGISTERED - {org.openhab.core.persistence.PersistenceService, org.openhab.core.persistence.QueryablePersistenceService}={$
2017-11-19 18:34:10.193 [DEBUG] [org.openhab.persistence.influxdb    ] - BundleEvent STARTED - org.openhab.persistence.influxdb

In Grafana I created an InfluxDB datasource like in all the tutorials. After that, I created a new graph, but in the query section I cannot find the temperature item :frowning:
I hope you can help me; I've been sitting here for about 7 hours now…

(Udo Hartmann) #422

As the service is called influxdb, the persistence file has to be named influxdb.persist.
(Rob Pope) #423

Is anybody seeing incredibly high CPU usage from these two? I need to do some more diagnostics to see whether it’s Influx or Grafana causing the spike. I’m running on a Ubuntu Linux VM.

(Angelos) #424

Not really. I persist ~200 item states on every change and I don't see more than 1% CPU utilization from either of these two processes. Running Debian 8 on a medium-spec laptop.

(Mark Herwege) #425

Everything depends on the hardware. I am running on an RPi and I do see spikes. InfluxDB does some data storage optimization at regular intervals, which causes this. A bigger concern would be memory, though.

(Rob Pope) #426

I've just created a new VM for another task, and that one is also constantly maxing out the CPU. I can't connect to either through SSH and the console is unresponsive. It must be an environment thing.