InfluxDB + Grafana vs. MySQL + rrd4j


(beard_lionel) #8

Do you know if InfluxDB & Grafana are compatible with OH1?


( ) #9

They are, but wait a few more weeks and upgrade to openHAB 2.0 final. :wink:

(beard_lionel) #10

I will do, even if I am scared of the extra work to make my rules work again on OH 2 :slight_smile:

(Rich Koshak) #11

I’ll point you to this as it may be of some interest:

The crux of the matter is: use whichever database is best for what you want to do with the data.

I use a blanket “store everything” into MapDB for restoreOnStartup. I don’t see the need to save more than the most recent value for the bulk of my Items, but I do like the default to be restoring everything to its previous state; where needed, I have rules that update sensor values or recalculate state.
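A minimal sketch of such a mapdb.persist (treat it as illustrative rather than my exact file):

```
// persistence/mapdb.persist -- illustrative sketch
Strategies {
    default = everyChange
}

Items {
    // persist every Item on change and restore its last state at startup
    * : strategy = everyChange, restoreOnStartup
}
```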

I used to use rrd4j for keeping historic data because it is fixed size and therefore will never need maintenance. However, I’ve since refactored all of those cases out so no longer use rrd4j at all.

For charting I use InfluxDB and Grafana (dockerized). I find they do not consume many resources at all, and the number of Items I chart or track historically is small, so I’m not worried about having to do maintenance on the DB if it grows too large for many, many years.
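For reference, a dockerized InfluxDB + Grafana pair can be sketched with a compose file like this (image tags, ports, and volume paths here are assumptions, not my actual setup):

```yaml
# docker-compose.yml -- illustrative sketch of an InfluxDB + Grafana pair
version: "2"
services:
  influxdb:
    image: influxdb:1.2
    ports:
      - "8086:8086"          # HTTP API used by the openHAB persistence service
    volumes:
      - /opt/influxdb:/var/lib/influxdb
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"          # Grafana web UI
    volumes:
      - /opt/grafana:/var/lib/grafana
```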

I’m in the middle of a ground up rebuild of my entire home automation system (for the regulars, this is why I’ve been so absent lately) so I may change my mind as I go.

But if you are running on anything larger than a single-board computer, performance should not be an issue for you. For comparison, I’m running the following on an old Asus Bamboo laptop with an Intel i7 CPU but only 4 GB RAM, running Ubuntu 16:

  • nginx
  • samba shares: docker
  • gogs: docker
  • plexmediaserver: docker
  • mosquitto: docker
  • influxdb: docker
  • grafana: docker
  • sensorReporter
  • openhab: docker
  • zoneminder: docker (not yet reinstalled)
  • crashplan: (not yet reinstalled)
  • OPNsense: VirtualBox (not yet reinstalled, probably won’t)

The only time I ever saw performance problems was when I tried to watch something streamed from Plex that required transcoding (e.g. from one of the channels) at the same time CrashPlan was receiving a backup from one of my other machines.

And in relation to the rebuild: always make backups and make them frequently! Though this is giving me the opportunity to automate the build and config of my main server (I’m really quite enjoying Ansible) and I’m finding as I rewrite my rules I’m being more consistent and concise.

Once I get back to where I was functionally I may post some or all of them as examples.

(David Masshardt) #12

I just installed and configured InfluxDB and Grafana on my openHABian and I like it very much. But I don’t know much about InfluxDB. My main concern is that the database could grow too big on the small SD card of my Raspberry when I add too many Items or when Items store a lot of values. How exactly does this database manage the space it uses? Is there something (like in rrd4j) that deletes older values or reduces their resolution?

(Jürgen Baginski) #13

You should search for “retention policies” in the docs. Using those you can configure InfluxDB to behave like rrd4j in terms of growth.
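As a sketch (database and policy names here are made up), a retention policy that keeps data for, say, 30 days and automatically expires anything older looks like this in the influx shell:

```sql
-- keep data for 30 days, then let InfluxDB drop expired shards itself;
-- making it DEFAULT routes new writes to this policy
CREATE RETENTION POLICY "one_month" ON "openhab_db" DURATION 30d REPLICATION 1 DEFAULT
```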

(Hallo Ween) #14

Is it a good idea to run openHAB and persistence (InfluxDB or MySQL) on the same machine, or to install them on separate machines? I have an ESXi server running a Debian virtual machine for openHAB.

(Angelos) #15

From my experience (storing about 50 Items that update very often, for the past 3 months): my InfluxDB is only 2.2 MB:

root@homer:~# influxd backup -database openhab_db /backup/InfluxDB/openhab_db
root@homer:~# du -sk /backup/InfluxDB/openhab_db/
2216	/backup/InfluxDB/openhab_db/
root@homer:~# du -sk /var/lib/influxdb/data/openhab_db/autogen/
2248	/var/lib/influxdb/data/openhab_db/autogen/

The data are not deleted over time (based on the default Retention Policy = autogen).
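You can confirm that from the influx shell; autogen is created with an infinite duration (displayed as 0s), so nothing ever expires:

```sql
> SHOW RETENTION POLICIES ON "openhab_db"
-- autogen is listed with duration 0s (keep forever) and default = true
```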

They are stored as measurements:

root@homer:~# influx
Connected to http://localhost:8086 version 1.2.0
InfluxDB shell version: 1.2.0
> USE openhab_db
Using database openhab_db
> show measurements
name: measurements

> select * from FibEye01_Temp
name: FibEye01_Temp
time                value
----                -----
1479468237218000000 25
1479469139051000000 25.9
1479475461634000000 20.3
1479482692444000000 20.1
1479487210137000000 20
1479489921677000000 19.8
1479504384743000000 19.6
1479511618438000000 19.4

(Angelos) #16

It’s fine so far for me (OH2+InfluxDB+Grafana+other stuff on same host).

influxd uses less than 500 MB of RAM and less than 2% CPU (on a quad-core Intel laptop).
A Raspberry Pi 3 may struggle a bit with this setup…

(David Masshardt) #17

I’ve taken a look at the “retention policies” of InfluxDB and found a very good guide for this:

Basically, combining retention policies and continuous queries can result in storage like that of a round-robin database.

Did someone already build retention policies and continuous queries that match the default openHAB RRD policy? If not, I will try to build them myself.
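A sketch of that combination (database name, policy names, durations, and the 1h interval below are made up; they do not reproduce rrd4j’s exact archives): raw data lives in a short retention policy, and a continuous query periodically folds averaged values into a longer one:

```sql
-- keep raw samples for 7 days; make this the default write target
CREATE RETENTION POLICY "raw_week" ON "openhab_db" DURATION 7d REPLICATION 1 DEFAULT

-- keep hourly averages for a year
CREATE RETENTION POLICY "hourly_year" ON "openhab_db" DURATION 52w REPLICATION 1

-- every hour, fold the raw samples of every measurement into hourly means
CREATE CONTINUOUS QUERY "cq_hourly" ON "openhab_db"
BEGIN
  SELECT mean("value") AS "value"
  INTO "openhab_db"."hourly_year".:MEASUREMENT
  FROM /.*/
  GROUP BY time(1h), *
END
```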

( ) #18

If your home is smaller than a palace and you are not blindly persisting every item minutely, memory usage is not a problem. InfluxDB+Grafana persistence and graphing

Hence, my answer: No, no experience with retention policies.

(Jürgen Baginski) #19

I did a trial with InfluxDB and Grafana (if you look through the thread posted by @ThomDietrich above, you’ll find all my knowledge :wink:). I stopped using it and went back to rrd4j since the graphing provided by HABPanel (n3) is good enough for me.

(David Masshardt) #20

I’ve now added 42 Items to my InfluxDB. Now I’m getting a lot of these errors in the log:

2017-02-05 11:00:31.958 [ERROR] [org.influxdb.impl.BatchProcessor    ] - Batch could not be sent. Data will be lost
java.lang.RuntimeException: {"error":"timeout"}

	at org.influxdb.impl.InfluxDBErrorHandler.handleError([224:org.openhab.persistence.influxdb:1.9.0]
	at retrofit.RestAdapter$RestHandler.invoke([224:org.openhab.persistence.influxdb:1.9.0]
	at org.influxdb.impl.$Proxy135.writePoints(Unknown Source)[224:org.openhab.persistence.influxdb:1.9.0]
	at org.influxdb.impl.InfluxDBImpl.write([224:org.openhab.persistence.influxdb:1.9.0]
	at org.influxdb.impl.BatchProcessor.write([224:org.openhab.persistence.influxdb:1.9.0]
	at org.influxdb.impl.BatchProcessor$[224:org.openhab.persistence.influxdb:1.9.0]
	at java.util.concurrent.Executors$[:1.8.0_121]
	at java.util.concurrent.FutureTask.runAndReset([:1.8.0_121]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301([:1.8.0_121]
	at java.util.concurrent.ScheduledThreadPoolExecutor$[:1.8.0_121]
	at java.util.concurrent.ThreadPoolExecutor.runWorker([:1.8.0_121]
	at java.util.concurrent.ThreadPoolExecutor$[:1.8.0_121]

Any ideas how to solve this?

(Angelos) #21

It seems that the connection from OH2 to InfluxDB is timing out…
Post your /etc/openhab2/services/influxdb.cfg so we can check it.
Does anything get stored with occasional timeouts, or does it simply not work at all?

(David Masshardt) #22

I’ve got data in InfluxDB, but it seems as if the persistence service regularly runs into a timeout (about every 10 to 30 minutes, ~80 times last night).

This is my influxdb.cfg:

# The database URL, e.g. or .
# Defaults to:

# The name of the database user, e.g. openhab.
# Defaults to: openhab

# The password of the database user.

# The name of the database, e.g. openhab.
# Defaults to: openhab

Any idea what could cause this problem?

Edit: I just found the corresponding error in the InfluxDB log:

Feb  6 10:40:27 OpenHabian01 influxd[402]: [httpd] - openhab [06/Feb/2017:10:40:17 +0100] "POST /write?consistency=one&db=openhab_db&p=%5BREDACTED%5D&precision=n&rp=autogen&u=openhab HTTP/1.1" 500 44 "-" "okhttp/2.4.0" 44d639f5-ec50-11e6-9ddb-000000000000 10003606

Could the problem be the slow SD card write speed of my Raspberry Pi? Here is the output of my disk write speed test:

dd if=/dev/zero of=~/test.tmp bs=500K count=1024
1024+0 records in
1024+0 records out
524288000 bytes (524 MB) copied, 70.9448 s, 7.4 MB/s

(SiHui) #23

Did you ever solve that? Have the same problem …

(David Masshardt) #24

I think I solved the problem with a faster sd card, but I’m not sure anymore.

(SiHui) #25

Thx. I’m running from a USB SSD, maybe it’s too slow … :joy:

(Mark Herwege) #26

@TheNetStriker Did you end up building these retention policies and continuous queries? While disk space is not an issue, I run into memory issues on my RPi as the dataset grows. I am logging a very limited number of Items (like 30 to 50), but at a fairly high frequency. After running for a few days, my RPi becomes unresponsive and only a hard reset will bring it back. Rather than moving InfluxDB to a server that will also consume power, I want to see if I can solve the issue with some intelligent data pruning.
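One low-effort pruning option (the 90-day duration is an arbitrary example) is to give the default autogen policy a finite duration, so InfluxDB starts expiring old shards on its own:

```sql
-- cap the default policy; shards entirely older than 90 days get dropped
ALTER RETENTION POLICY "autogen" ON "openhab_db" DURATION 90d DEFAULT
```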

(David Masshardt) #27

I currently have no retention policy. My Raspberry currently has a RAM usage of 85%; the influxdb process is using 15% of the memory. I’m also tracking the memory usage of the Raspberry, and I saw a few times that it reached this level, but each time it dropped before hitting 100% and then began to rise again. I will monitor the memory usage over the next few days.