InfluxDB + Grafana vs. MySQL + rrd4j

I’m a novice with openHAB (2) and would therefore like to hear other opinions on persistence services.

After reading the InfluxDB + Grafana tutorial:

…a question arose in my mind: why would somebody still use MySQL or rrd4j as a persistence service WITHOUT any burden of history? In other words, if you are completely new, would like to save item data for later analysis, and are starting with a clean table, could the InfluxDB/Grafana combo be all you need? Would MySQL or rrd4j give anything extra on top of those two?

Regarding processor demand, could it be that the InfluxDB/Grafana combo is much heavier on the processor compared to rrd4j? In my case I’m going to run openHAB on an HP Mini 5102 laptop (Linux Mint), so there should be some processor power available. But if you created the same charts with both, would rrd4j perform better?

As I understand it, Grafana is much more flexible, with a wider variety of options. Also, if I understood correctly, you can use InfluxDB to implement the restoreOnStartup feature.

All comments are highly appreciated. Thank you.

-Juha


Yes, the combo of InfluxDB and Grafana is more powerful and flexible compared to the “native” DBs from OH. And yes, this combo takes more of the system’s capacity; for a RaspPi that is IMHO a concern (because of that I reverted to rrd4j).
For restore on startup, OH specifically has MapDB, which keeps just one value per persisted item — it can’t get any easier!
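For reference, a minimal MapDB persistence configuration for this pattern could look like the sketch below (strategy choice and the wildcard item selector are just a common convention, not a prescribed setup):

```
// services/mapdb.persist — minimal sketch
Strategies {
    default = everyChange
}
Items {
    // store the latest value of every item on change,
    // and write it back into the item when OH starts
    * : strategy = everyChange, restoreOnStartup
}
```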

I may be wrong, but InfluxDB may not support strings (I did not try this).

Just to let you know, I recently also started experimenting with InfluxDB/Grafana, and can confirm that InfluxDB supports strings.
I migrated to OH2 last week, and the only persistence service I kept is InfluxDB, whereas I used MySQL and rrd4j on OH1.

Cool!

You can visualize string items in Grafana using the Table panel :slight_smile:

I’m also/still a big fan of the InfluxDB+Grafana combo.
I’m not using anything else, not even MapDB for restoreOnStartup; I prefer to initialize items with fresh readings from sensors or through a startup rule.
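For illustration, such a startup rule could look like this in the rules DSL (the item name and default value are hypothetical):

```
rule "Initialize items at startup"
when
    System started
then
    // hypothetical item: set a sane default until a fresh sensor reading arrives
    if (Livingroom_Temp.state == NULL) {
        Livingroom_Temp.postUpdate(20.0)
    }
end
```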

One more aspect I want to point to: if you want to access your data with other tools, be sure they support the database you are using. InfluxDB is great, but if your statistical analysis tool only supports MySQL, it’s pointless…
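That said, InfluxDB 1.x does expose a plain HTTP query endpoint, so many tools can reach it without a native driver. A quick sketch (database and measurement names taken from the examples later in this thread; assumes a local InfluxDB on the default port):

```shell
# Query hourly averages of the last week over the InfluxDB 1.x HTTP API
curl -G 'http://localhost:8086/query' \
  --data-urlencode "db=openhab_db" \
  --data-urlencode "q=SELECT mean(value) FROM FibEye01_Temp WHERE time > now() - 7d GROUP BY time(1h)"
```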

Hi,
Do you know if InfluxDB & Grafana are compatible with OH1?

Thanks

They are, but wait a few more weeks and upgrade to openHAB 2.0 final. :wink:

:slight_smile:
I will, even if I am scared of the extra work to make my rules work again on OH 2 :slight_smile:

I’ll point you to this, as it may be of some interest:

The crux of the matter is: use whichever database is best for what you want to do with the data.

I use a blanket “store everything” into MapDB for restoreOnStartup. I don’t see the need for saving more than the most recent value for the bulk of my Items but I do like the default option to be to restore everything to its previous state and then I have rules which update sensor values or recalculate state where needed.

I used to use rrd4j for keeping historic data because it is fixed size and therefore will never need maintenance. However, I’ve since refactored all of those cases out so no longer use rrd4j at all.

For charting I use InfluxDB and Grafana (dockerized). I find they do not consume many resources at all, and the few Items I chart or keep historic data for are a small enough set that I’m not worried about having to do maintenance on the DB if it grows too large for many, many years.

I’m in the middle of a ground up rebuild of my entire home automation system (for the regulars, this is why I’ve been so absent lately) so I may change my mind as I go.

But if you are running on anything larger than a single-board computer, performance should not be an issue for you. For comparison, I’m running the following on an old Asus Bamboo laptop with an Intel i7 CPU but only 4 GB RAM, running Ubuntu 16:

  • nginx
  • samba shares: docker
  • gogs: docker
  • plexmediaserver: docker
  • mosquitto: docker
  • influxdb: docker
  • grafana: docker
  • sensorReporter
  • openhab: docker
  • zoneminder: docker (not yet reinstalled)
  • crashplan: (not yet reinstalled)
  • OPNsense: VirtualBox (not yet reinstalled, probably won’t)

The only time I ever saw performance problems was when I tried to watch something streamed from Plex that required transcoding (e.g. from one of the channels) at the same time CrashPlan was receiving a backup from one of my other machines.

And in relation to the rebuild: always make backups and make them frequently! Though this is giving me the opportunity to automate the build and config of my main server (I’m really quite enjoying Ansible) and I’m finding as I rewrite my rules I’m being more consistent and concise.

Once I get back to where I was functionally, I may post some or all of them as examples.


I just installed and configured InfluxDB and Grafana on my openHABian and I like it very much. But I don’t know much about InfluxDB. My main concern is that the database will grow too big on my Raspberry’s small SD card when I add too many items or when items store a lot of values. How exactly does this database manage the space it uses? Is there something (like in rrd4j) that deletes or reduces older values?

You should search for “retention policies” in the docs. Using those, you can configure InfluxDB to behave like rrd4j in terms of growth.
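In InfluxDB 1.x a retention policy is created with InfluxQL; a minimal sketch (database name and duration are examples, not recommended values):

```sql
-- keep raw data for 30 days, then drop it automatically;
-- DEFAULT makes this the policy new writes go into
CREATE RETENTION POLICY "one_month" ON "openhab_db" DURATION 30d REPLICATION 1 DEFAULT
```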

Is it a good idea to run openHAB and persistence (InfluxDB or MySQL) on the same machine, or should they be installed on separate machines? I have an ESXi server running a virtual machine with Debian for openHAB.

From my experience (storing about 50 items that update very often for the past 3 months): my InfluxDB is only 2.2 MB:

root@homer:~# influxd backup -database openhab_db /backup/InfluxDB/openhab_db
root@homer:~# du -sk /backup/InfluxDB/openhab_db/
2216	/backup/InfluxDB/openhab_db/
root@homer:~# du -sk /var/lib/influxdb/data/openhab_db/autogen/
2248	/var/lib/influxdb/data/openhab_db/autogen/

The data are not deleted over time (based on the default Retention Policy = autogen).

They are stored as measurements:

root@homer:~# influx
Connected to http://localhost:8086 version 1.2.0
InfluxDB shell version: 1.2.0
> USE openhab_db
Using database openhab_db
> show measurements
name: measurements
name
----
FibEye01_Temp
FibEye02_Temp
[...]


> select * from FibEye01_Temp
name: FibEye01_Temp
time                value
----                -----
1479468237218000000 25
1479469139051000000 25.9
1479475461634000000 20.3
1479482692444000000 20.1
1479487210137000000 20
1479489921677000000 19.8
1479504384743000000 19.6
1479511618438000000 19.4
[...]

It’s fine so far for me (OH2+InfluxDB+Grafana+other stuff on same host).

influxd uses less than 500 MB of RAM and less than 2% CPU (on a quad-core Intel laptop).
A Raspberry Pi 3 may struggle a bit with this setup…

I’ve taken a look at the “retention policies” of InfluxDB and found a very good manual for this: https://docs.influxdata.com/influxdb/v1.2/guides/downsampling_and_retention/

Basically, combining retention policies and continuous queries could result in storage like that of a round-robin database.

Has someone already built retention policies and continuous queries that match the default openHAB rrd4j policy? If not, I will try to build them myself.
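As a sketch of what such a setup might look like in InfluxQL (database name, policy names, and intervals are just examples — not a verified match for the rrd4j defaults):

```sql
-- keep raw data for only 7 days
CREATE RETENTION POLICY "raw_week" ON "openhab_db" DURATION 7d REPLICATION 1 DEFAULT

-- keep hourly averages for a year
CREATE RETENTION POLICY "hourly_year" ON "openhab_db" DURATION 52w REPLICATION 1

-- continuously downsample every measurement into hourly means
-- stored under the longer-lived retention policy
CREATE CONTINUOUS QUERY "cq_hourly" ON "openhab_db" BEGIN
  SELECT mean("value") AS "value"
  INTO "openhab_db"."hourly_year".:MEASUREMENT
  FROM /.*/
  GROUP BY time(1h), *
END
```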

If your home is smaller than a palace and you are not blindly persisting every item every minute, memory usage is not a problem. InfluxDB+Grafana persistence and graphing

Hence, my answer: No, no experience with retention policies.

I did a trial with InfluxDB and Grafana (if you look through the thread posted by @ThomDietrich above, you’ll find all my knowledge :wink:). I stopped using it and went back to rrd4j, since the graphing provided by HABPanel (n3) is good enough for me.

I’ve now added 42 items to my InfluxDB. Now I’m getting a lot of these errors in the log:

2017-02-05 11:00:31.958 [ERROR] [org.influxdb.impl.BatchProcessor    ] - Batch could not be sent. Data will be lost
java.lang.RuntimeException: {"error":"timeout"}

	at org.influxdb.impl.InfluxDBErrorHandler.handleError(InfluxDBErrorHandler.java:19)[224:org.openhab.persistence.influxdb:1.9.0]
	at retrofit.RestAdapter$RestHandler.invoke(RestAdapter.java:242)[224:org.openhab.persistence.influxdb:1.9.0]
	at org.influxdb.impl.$Proxy135.writePoints(Unknown Source)[224:org.openhab.persistence.influxdb:1.9.0]
	at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:151)[224:org.openhab.persistence.influxdb:1.9.0]
	at org.influxdb.impl.BatchProcessor.write(BatchProcessor.java:171)[224:org.openhab.persistence.influxdb:1.9.0]
	at org.influxdb.impl.BatchProcessor$1.run(BatchProcessor.java:144)[224:org.openhab.persistence.influxdb:1.9.0]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)[:1.8.0_121]
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)[:1.8.0_121]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)[:1.8.0_121]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)[:1.8.0_121]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)[:1.8.0_121]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)[:1.8.0_121]
	at java.lang.Thread.run(Thread.java:745)[:1.8.0_121]

Any ideas how to solve this?