InfluxDB + Grafana vs. MySQL + rrd4j

You can visualize string items in Grafana using the Table panel :slight_smile:

I’m also/still a big fan of the InfluxDB+Grafana combo.
I’m not using anything else, not even MapDB for restoreOnStartup; I prefer to initialize items with fresh readings from sensors or through a startup rule.

One more aspect I want to point out: if you want to access your data with other tools, be sure they support the database you are using. InfluxDB is great, but if your statistical analysis tool only supports MySQL it’s pointless…

Hi,
Do you know if InfluxDB & Grafana are compatible with OH1?

Thanks

They are, but wait a few more weeks and upgrade to openHAB 2.0 final. :wink:

:slight_smile:
I will do, even if I am scared of the extra work to get my rules working again on OH2 :slight_smile:

I’ll point you to this as it may be of some interest:

The crux of the matter is: use whichever database is best for what you want to do with the data.

I use a blanket “store everything” approach with MapDB for restoreOnStartup. I don’t see the need for saving more than the most recent value for the bulk of my Items, but I do like the default to be to restore everything to its previous state, and then I have rules which update sensor values or recalculate state where needed.
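For anyone who wants to copy that approach, a minimal mapdb.persist sketch (the strategy choice here is just my convention, not a requirement):

// mapdb.persist — MapDB only keeps the most recent value per Item,
// so persisting everything on every change stays tiny
Strategies {
    default = everyChange
}
Items {
    // persist all Items and restore their last state on startup
    * : strategy = everyChange, restoreOnStartup
}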

I used to use rrd4j for keeping historic data because it is fixed size and therefore will never need maintenance. However, I’ve since refactored all of those cases out so no longer use rrd4j at all.

For charting I use InfluxDB and Grafana (dockerized). I find they do not consume many resources at all, and the number of Items I chart or track historic data for is small, so I’m not worried about having to do maintenance on the DB if it grows too large for many, many years.
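If you want to try the dockerized route, something along these lines is enough to get started (these are the official images and their default ports; the host paths are just examples):

# keep the data outside the containers so upgrades are painless
docker run -d --name influxdb -p 8086:8086 -v /opt/influxdb:/var/lib/influxdb influxdb
docker run -d --name grafana -p 3000:3000 -v /opt/grafana:/var/lib/grafana grafana/grafana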

I’m in the middle of a ground up rebuild of my entire home automation system (for the regulars, this is why I’ve been so absent lately) so I may change my mind as I go.

But if you are running on anything larger than a single-board computer, performance should not be an issue for you. For comparison, I’m running the following on an old Asus Bamboo laptop with an Intel i7 CPU but only 4 GB RAM, running Ubuntu 16:

  • nginx
  • samba shares: docker
  • gogs: docker
  • plexmediaserver: docker
  • mosquitto: docker
  • influxdb: docker
  • grafana: docker
  • sensorReporter
  • openhab: docker
  • zoneminder: docker (not yet reinstalled)
  • crashplan: (not yet reinstalled)
  • OPNsense: VirtualBox (not yet reinstalled, probably won’t)

The only time I ever saw performance problems was when I tried to watch something streamed from Plex that required transcoding (e.g. from one of the channels) at the same time CrashPlan was receiving a backup from one of my other machines.

And in relation to the rebuild: always make backups and make them frequently! Though this is giving me the opportunity to automate the build and config of my main server (I’m really quite enjoying Ansible) and I’m finding as I rewrite my rules I’m being more consistent and concise.

Once I get back to where I was functionally, I may post some or all of them as examples.

I just installed and configured InfluxDB and Grafana on my openHABian and I like it very much. But I don’t know much about InfluxDB. My main concern is that the database could grow too big on the small SD card of my Raspberry Pi when I add too many items or when items store a lot of values. How exactly does this database manage the space it uses? Is there something (like in rrd4j) that deletes older values or reduces their resolution?

You should search for “retention policies” in the docs. Using those you can configure InfluxDB to behave like rrd4j in terms of growth.
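For example, something like this from the influx shell caps raw data at 30 days (the policy name and the 30d duration are just illustrations, pick whatever suits you):

CREATE RETENTION POLICY "one_month" ON "openhab_db" DURATION 30d REPLICATION 1 DEFAULT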

Is it a good idea to run openHAB and persistence (InfluxDB or MySQL) on the same machine, or to install them on separate machines? I have an ESXi server running a Debian virtual machine for openHAB.

From my experience (storing about 50 items that update very often, for the past 3 months): my InfluxDB is only 2.2 MB:

root@homer:~# influxd backup -database openhab_db /backup/InfluxDB/openhab_db
root@homer:~# du -sk /backup/InfluxDB/openhab_db/
2216	/backup/InfluxDB/openhab_db/
root@homer:~# du -sk /var/lib/influxdb/data/openhab_db/autogen/
2248	/var/lib/influxdb/data/openhab_db/autogen/

The data are not deleted over time (based on the default retention policy, autogen).
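You can verify that from the influx shell; on a default install the output should look roughly like this (a duration of 0s means the data is kept forever):

> SHOW RETENTION POLICIES ON openhab_db
name    duration shardGroupDuration replicaN default
----    -------- ------------------ -------- -------
autogen 0s       168h0m0s           1        true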

They are stored as measurements:

root@homer:~# influx
Connected to http://localhost:8086 version 1.2.0
InfluxDB shell version: 1.2.0
> USE openhab_db
Using database openhab_db
> show measurements
name: measurements
name
----
FibEye01_Temp
FibEye02_Temp
[...]


> select * from FibEye01_Temp
name: FibEye01_Temp
time                value
----                -----
1479468237218000000 25
1479469139051000000 25.9
1479475461634000000 20.3
1479482692444000000 20.1
1479487210137000000 20
1479489921677000000 19.8
1479504384743000000 19.6
1479511618438000000 19.4
[...]

It’s fine so far for me (OH2+InfluxDB+Grafana+other stuff on same host).

influxd uses less than 500 MB of RAM and less than 2% CPU (on a quad-core Intel laptop).
A Raspberry Pi 3 may struggle a bit with this setup…

I’ve taken a look at the “retention policies” of InfluxDB and found a very good guide for this: https://docs.influxdata.com/influxdb/v1.2/guides/downsampling_and_retention/

Basically, combining retention policies and continuous queries could result in storage behaviour like that of a round-robin database.

Did someone already build retention policies and continuous queries that match the default openHAB rrd4j policy? If not, I will try to build them myself.
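I haven’t worked out the exact rrd4j equivalents yet, but the general shape would be a short default retention policy for raw data, a longer one for downsampled data, and a continuous query feeding the second from the first (all names and durations below are made up for illustration; the INTO/backreference pattern is the one from the guide above):

CREATE RETENTION POLICY "raw_week" ON "openhab_db" DURATION 7d REPLICATION 1 DEFAULT
CREATE RETENTION POLICY "hourly_year" ON "openhab_db" DURATION 52w REPLICATION 1
CREATE CONTINUOUS QUERY "cq_hourly_mean" ON "openhab_db" BEGIN
  SELECT mean(value) AS value INTO "openhab_db"."hourly_year".:MEASUREMENT FROM /.*/ GROUP BY time(1h), *
END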

If your home is smaller than a palace and you are not blindly persisting every item minutely, memory usage is not a problem. See: InfluxDB+Grafana persistence and graphing

Hence, my answer: No, no experience with retention policies.

I did a trial with InfluxDB and Grafana (if you look through the thread posted by @ThomDietrich above, you’ll find all my knowledge :wink:). I stopped using it and went back to rrd4j, since the graphing provided by HABPanel (n3) is good enough for me.

I’ve now added 42 items to my InfluxDB. Now I’m getting a lot of these errors in the log:

2017-02-05 11:00:31.958 [ERROR] [org.influxdb.impl.BatchProcessor    ] - Batch could not be sent. Data will be lost
java.lang.RuntimeException: {"error":"timeout"}

	at org.influxdb.impl.InfluxDBErrorHandler.handleError(InfluxDBErrorHandler.java:19)[224:org.openhab.persistence.influxdb:1.9.0]
	at retrofit.RestAdapter$RestHandler.invoke(RestAdapter.java:242)[224:org.openhab.persistence.influxdb:1.9.0]
	at org.influxdb.impl.$Proxy135.writePoints(Unknown Source)[224:org.openhab.persistence.influxdb:1.9.0]
	at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:151)[224:org.openhab.persistence.influxdb:1.9.0]
	at org.influxdb.impl.BatchProcessor.write(BatchProcessor.java:171)[224:org.openhab.persistence.influxdb:1.9.0]
	at org.influxdb.impl.BatchProcessor$1.run(BatchProcessor.java:144)[224:org.openhab.persistence.influxdb:1.9.0]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)[:1.8.0_121]
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)[:1.8.0_121]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)[:1.8.0_121]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)[:1.8.0_121]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)[:1.8.0_121]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)[:1.8.0_121]
	at java.lang.Thread.run(Thread.java:745)[:1.8.0_121]

Any ideas how to solve this?

It seems that the connection from OH2 to InfluxDB is timing out…
Post your /etc/openhab2/services/influxdb.cfg so we can check it.
Does anything get stored with only occasional timeouts, or does it simply not work at all?

I’ve got data in the InfluxDB, but it seems as if the persistence service regularly runs into a timeout (about every 10 to 30 minutes, ~80 times last night).

This is my influxdb.cfg:

# The database URL, e.g. http://127.0.0.1:8086 or https://127.0.0.1:8084 .
# Defaults to: http://127.0.0.1:8086
url=http://127.0.0.1:8086

# The name of the database user, e.g. openhab.
# Defaults to: openhab
user=openhab

# The password of the database user.
password=openhab

# The name of the database, e.g. openhab.
# Defaults to: openhab
db=openhab_db

Any idea what could cause this problem?

Edit: I just found the corresponding error in the InfluxDB log:

Feb  6 10:40:27 OpenHabian01 influxd[402]: [httpd] 127.0.0.1 - openhab [06/Feb/2017:10:40:17 +0100] "POST /write?consistency=one&db=openhab_db&p=%5BREDACTED%5D&precision=n&rp=autogen&u=openhab HTTP/1.1" 500 44 "-" "okhttp/2.4.0" 44d639f5-ec50-11e6-9ddb-000000000000 10003606

Could the problem be the slow SD card disk speed of my Raspberry Pi? Here is the output of my disk write speed test:

dd if=/dev/zero of=~/test.tmp bs=500K count=1024
1024+0 records in
1024+0 records out
524288000 bytes (524 MB) copied, 70.9448 s, 7.4 MB/s
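Side note: dd without a sync option mostly measures the Linux page cache rather than the card itself. Adding conv=fsync (standard GNU coreutils) flushes to disk before reporting and gives a more honest write figure:

dd if=/dev/zero of=~/test.tmp bs=500K count=1024 conv=fsync
rm ~/test.tmp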

Did you ever solve that? I have the same problem…

I think I solved the problem with a faster sd card, but I’m not sure anymore.

Thx. I’m running from a USB SSD, maybe it’s too slow… :joy: