InfluxDB+Grafana persistence and graphing

This is how I’d do it; I’m not sure of another solution, as I don’t use tables much.

Is there a way to completely uninstall InfluxDB and Grafana from the system? I have caused an issue somewhere, and I’d really like to simply remove and reinstall from the openHABian menu rather than reflashing my SD card.

I’m using an RPi 3 with openHABian.

First of all @ThomDietrich, thank you for adding the InfluxDB and Grafana setup to openhabian-config. It’s made life a lot easier for beginners like me! Cheers.

I’m stuck on the sine wave data injection though. I’ve checked that I have Python installed, but when I download your program I don’t see the complete code. Is this how it should look?

#!/usr/bin/python

# A little script to send test data to an InfluxDB installation.
# Attention: the non-core library 'requests' is used. You'll need to install it first:
# http://docs.python-requests.org/en/master/user/install/

import math
import requests
import sys
from time import sleep

IP = "192.168.0.2"        # The IP of the machine hosting your influxdb instance
DB = "test"               # The database to write to, has to exist
USER = "user"             # The influxdb user to authenticate with
PASSWORD = "password123"  # The password of that user
TIME = 1                  # Delay in seconds between two consecutive updates
STATUS_MOD = 5            # The interval in which the updates count will be printed to your console

n = 0
while True:
    for d in range(0, 360):
        v = 'sine_wave value=%s' % math.sin(math.radians(d))
        ## without authentication
        #r = requests.post("http://%s:8086/write?db=%s" % (IP, DB), data=v)
        ## with authentication
        r = requests.post("http://%s:8086/write?db=%s" % (IP, DB), auth=(USER, PASSWORD), data=v)
        if r.status_code != 204:
            print('Failed to add point to influxdb (%d) - aborting.' % r.status_code)
            sys.exit(1)
        n += 1
        sleep(TIME)
        if n % STATUS_MOD == 0:
            print('%d points inserted.' % n)

Looks like my setup is OK so far (on the back of openHABian of course :slight_smile: )

I am having problems with the configuration.

openHAB 2 works fine.
InfluxDB is installed and tested with sine.py and Grafana; both work fine.

I am getting the following error:

2017-03-25 18:13:58.743 [DEBUG] [org.openhab.persistence.influxdb    ] - BundleEvent STARTING - org.openhab.persistence.influxdb
2017-03-25 18:13:58.745 [DEBUG] [.InfluxDBPersistenceServiceActivator] - InfluxDB persistence bundle has been started.
2017-03-25 18:13:58.749 [DEBUG] [org.openhab.persistence.influxdb    ] - BundleEvent STARTED - org.openhab.persistence.influxdb
2017-03-25 18:13:58.796 [DEBUG] [org.openhab.persistence.influxdb    ] - ServiceEvent REGISTERED - {org.openhab.core.persistence.PersistenceService, org.openhab.core.persistence.QueryablePersistenceService}={component.name=org.openhab.persistence.influxdb, url=http://192.168.1.61:8086, IP="192.168.1.61"        # The IP of the machine hosting your influxdb instance, retentionPolicy=oneday, USER=test, DB=test, service.pid=org.openhab.influxdb, component.id=185, PASSWORD=test, service.id=323, service.bundleid=192, service.scope=bundle} - org.openhab.persistence.influxdb
2017-03-25 18:13:58.801 [DEBUG] [.internal.InfluxDBPersistenceService] - influxdb persistence service activated
2017-03-25 18:13:58.803 [ERROR] [.internal.InfluxDBPersistenceService] - influxdb:password
2017-03-25 18:13:58.998 [DEBUG] [org.openhab.core.compat1x           ] - ServiceEvent REGISTERED - {org.eclipse.smarthome.core.persistence.PersistenceService}={service.id=324, service.bundleid=184, service.scope=singleton} - org.openhab.core.compat1x
2017-03-25 18:14:00.050 [WARN ] [.internal.InfluxDBPersistenceService] - Configuration for influxdb not yet loaded or broken.
2017-03-25 18:14:00.060 [WARN ] [.internal.InfluxDBPersistenceService] - Configuration for influxdb not yet loaded or broken.

In Karaf I have

Pid:            org.openhab.influxdb
BundleLocation: null
Properties:
   DB = test
   IP = "192.168.1.61"        # The IP of the machine hosting your influxdb instance
   PASSWORD = test
   USER = test
   retentionPolicy = oneday
   service.pid = org.openhab.influxdb
   url = http://192.168.1.61:8086

And

Pid:            org.eclipse.smarthome.persistence
BundleLocation: null
Properties:
   default = influxdb
   service.pid = org.eclipse.smarthome.persistence

Is it normal that BundleLocation is null?

I always get this error

2017-03-25 19:11:00.064 [WARN ] [.internal.InfluxDBPersistenceService] - Configuration for influxdb not yet loaded or broken.
2017-03-25 19:11:00.066 [WARN ] [.internal.InfluxDBPersistenceService] - Configuration for influxdb not yet loaded or broken.

My influxdb.persist file is:

Strategies {
	everyHour : "0 0 * * * ?"
	everyDay : "0 0 0 * * ?"
	everyMinute : "0 * * * * ?"
	default = everyMinute
}

Items {
	Xiaomi_temp_sensor_1_Temperature : strategy = everyChange, everyMinute
	WeatherInformation_Temperature : strategy = everyChange, everyMinute
}

Can anyone help me?

Hi there,

yesterday I installed Grafana and InfluxDB via openhabian-config and basically it works fine.
I turned on HTTPS with my self-signed certificate, but Grafana didn’t have permission to access the certificates, so I turned back to HTTP and tried again today.

The part in my grafana.ini looks like this:

[server]
# Protocol (http or https)
protocol = https

...

# https certs & key file
cert_file = /etc/ssl/private/local-grafana.pem
cert_key = /etc/ssl/private/local-grafana.key

This is the error I get from the Grafana log:

t=2017-03-26T09:32:37+0200 lvl=eror msg="Fail to start server" logger=server error="open /etc/ssl/private/local-grafana.pem: permission denied"

Does anyone know what I have to change (file ownership?) in order to get it working correctly?

Hey Craig,
adding the configuration of InfluxDB+Grafana as an automated part is long overdue :frowning:

The sine wave script is only there to test the InfluxDB->Grafana connection; you do not have to make it work. Still, it’s a nice check and should be pretty easy. Did you install Python (2.7) and the requests package?

Should be similar to this:

sudo apt update
sudo apt install python
wget https://bootstrap.pypa.io/get-pip.py
sudo python get-pip.py
sudo pip install requests
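As a quick sanity check (my addition, plain Python, nothing openHAB-specific), you can verify that requests is importable before running the sine wave script:

```python
# Verify the non-core 'requests' library is importable before running
# the sine-wave test script.
try:
    import requests
    HAVE_REQUESTS = True
    print("requests %s is installed" % requests.__version__)
except ImportError:
    HAVE_REQUESTS = False
    print("requests is missing - run 'sudo pip install requests' first")
```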

@Nathan_Wilcox sure, the same way you’d remove any installed package on Linux: use apt remove to remove the program, or apt purge to remove the program and all its configuration files. The second line below removes no-longer-needed dependencies:

sudo apt purge grafana influxdb
sudo apt-get autoremove

@Daniel_Dom you didn’t post your configuration file, and that seems to be exactly the problem behind “Configuration for influxdb not yet loaded or broken.”. What is “oneday”? I’d suggest trying the default autogen retention policy first.
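For reference (my addition, not from the original post), a minimal services/influxdb.cfg could look like the following. I’m assuming the lowercase keys the openHAB InfluxDB persistence service expects, with placeholder values taken from the logs above:

```
url=http://192.168.1.61:8086
user=test
password=test
db=test
retentionPolicy=autogen
```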

@Chris_si shame on me, my Grafana instance doesn’t run on HTTPS (local installation). The error is clear, however: your Grafana user is not permitted to read the path to your pem file. Execute the first command below to learn about the permissions involved; you’ll see that only root can access /etc/ssl/private. Change the permissions or move the file to another location, but please do not carelessly make your cert/key world-readable :wink: e.g. as below.

namei -molv /etc/ssl/private/local-grafana.pem

chmod o+rx /etc/ssl/private
chmod o-r /etc/ssl/private/*
chown grafana /etc/ssl/private/local-grafana.pem

@ThomDietrich

OK, so I was unaware of the purge command. I completed the steps you listed and rebooted the Pi.

I installed InfluxDB & Grafana via sudo openhabian-config,
started with your instructions again, and got this:

[14:04:10] openhabian@openHABianPi:~$ influx
Connected to http://localhost:8086 version 1.2.1
InfluxDB shell version: 1.2.1
> CREATE USER admin WITH PASSWORD 'SuperSecretPassword123+' WITH ALL PRIVILEGES
ERR: user already exists
Warning: It is possible this error is due to not setting a database.
Please set a database with the command "use <database>".

Any ideas why this is? I thought the remove commands would completely wipe the users out too?

Purge should have removed the database as well. Are you sure you didn’t see any errors while executing sudo apt-get purge influxdb?
You could always just remove the database content the “normal” way:
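As a sketch of that “normal” way (my addition, reusing the placeholder IP and credentials from the sine wave script): DROP DATABASE and CREATE DATABASE are standard InfluxQL statements, which can be sent to InfluxDB’s HTTP /query endpoint:

```python
# Sketch: wipe and recreate the 'test' database via InfluxDB's HTTP /query
# endpoint. IP, user, and password are the placeholders from the sine script.
try:
    from urllib.parse import urlencode  # Python 3
except ImportError:
    from urllib import urlencode        # Python 2

def influx_query_url(ip, query):
    """Build the URL for an InfluxQL admin query against the HTTP API."""
    return "http://%s:8086/query?%s" % (ip, urlencode({"q": query}))

# With 'requests' installed (as for the sine-wave script), one would run:
#   requests.post(influx_query_url("192.168.0.2", "DROP DATABASE test"),
#                 auth=("user", "password123"))
#   requests.post(influx_query_url("192.168.0.2", "CREATE DATABASE test"),
#                 auth=("user", "password123"))
print(influx_query_url("192.168.0.2", "DROP DATABASE test"))
```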

Unfortunately, there were no errors to explain it.

The old passwords seemed to work, so I simply carried on.

I cannot get any information to load into Grafana though. My logs tell me:

2017-03-27 16:07:07.252 [INFO ] [el.core.internal.ModelRepositoryImpl] - Loading model '._influxdb.persist'
2017-03-27 16:07:07.304 [INFO ] [el.core.internal.ModelRepositoryImpl] - Loading model 'influxdb.persist'
2017-03-27 16:07:33.715 [ERROR] [.internal.InfluxDBPersistenceService] - database connection failed
	at org.influxdb.impl.$Proxy118.ping(Unknown Source)
	at org.influxdb.impl.InfluxDBImpl.ping(InfluxDBImpl.java:114)
	at org.openhab.persistence.influxdb.internal.InfluxDBPersistenceService.checkConnection(InfluxDBPersistenceService.java:171)
	at org.openhab.persistence.influxdb.internal.InfluxDBPersistenceService.activate(InfluxDBPersistenceService.java:144)
2017-03-27 16:07:33.796 [ERROR] [.internal.InfluxDBPersistenceService] - database connection error unexpected url: http(s)://****MY IP****:8086/ping
2017-03-27 16:07:33.797 [ERROR] [.internal.InfluxDBPersistenceService] - database connection does not work for now, will retry to use the database.

My IP is correct. Where else should I look?

You should start by deleting ._influxdb.persist, then restart openHAB. The creation of these temporary files on a Samba share by your Mac (I suppose) is a problem, and it is blocked by Samba in one of the latest openHABian iterations. If you are interested in this, execute Update, Basic Setup and Samba from the openhabian-config menu.

I hadn’t realised there was an extra file inside the folder. It wasn’t visible either, but it was removed successfully: after I rebooted, the new logs didn’t mention the file.

I had run the new Samba updates the other day after receiving a commit update alert. I’ve also just run the updates again following the reboot, noticing there were 4 new updates.

After all this I now have the same errors, minus the ‘._influxdb.persist’ one:

2017-03-27 18:36:22.484 [INFO ] [el.core.internal.ModelRepositoryImpl] - Loading model 'influxdb.persist'
2017-03-27 18:37:07.965 [ERROR] [.internal.InfluxDBPersistenceService] - database connection failed
	at org.influxdb.impl.$Proxy120.ping(Unknown Source)
	at org.influxdb.impl.InfluxDBImpl.ping(InfluxDBImpl.java:114)
	at org.openhab.persistence.influxdb.internal.InfluxDBPersistenceService.checkConnection(InfluxDBPersistenceService.java:171)
	at org.openhab.persistence.influxdb.internal.InfluxDBPersistenceService.activate(InfluxDBPersistenceService.java:144)
2017-03-27 18:37:08.110 [ERROR] [.internal.InfluxDBPersistenceService] - database connection error unexpected url: http(s)://192.168.1.125:8086/ping
2017-03-27 18:37:08.112 [ERROR] [.internal.InfluxDBPersistenceService] - database connection does not work for now, will retry to use the database.

Sorry to keep pestering you with these errors :confused:

The permissions error is eliminated now, but I have a new problem: Grafana can’t parse my *.key file?!

t=2017-03-28T19:10:18+0200 lvl=eror msg="Fail to start server" logger=server error="tls: failed to parse private key"

Do you happen to know how to eliminate this problem as well? :blush:

Hi, @ThomDietrich

I keep getting timeout/500 errors from InfluxDB from time to time recently; any idea why?
Also, is there any way I can get InfluxDB to store all data in a buffer (RAM) and only write when the buffer is full or every 5 minutes? It runs on an RPi 3B with a flash card, so I’d rather InfluxDB not write every second, with only a few B to 1 KB per write.


18:44:42.666 [WARN ] [ore.internal.events.OSGiEventManager] - Dispatching event to subscriber 'org.eclipse.smarthome.io.monitor.internal.EventLogger@155f96b' takes more than 5000ms.
18:44:47.132 [ERROR] [org.influxdb.impl.BatchProcessor    ] - Batch could not be sent. Data will be lost
java.lang.RuntimeException: {"error":"timeout"}

        at org.influxdb.impl.InfluxDBErrorHandler.handleError(InfluxDBErrorHandler.java:19)[212:org.openhab.persistence.influxdb:1.9.0]
        at retrofit.RestAdapter$RestHandler.invoke(RestAdapter.java:242)[212:org.openhab.persistence.influxdb:1.9.0]
        at org.influxdb.impl.$Proxy117.writePoints(Unknown Source)[212:org.openhab.persistence.influxdb:1.9.0]
        at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:151)[212:org.openhab.persistence.influxdb:1.9.0]
        at org.influxdb.impl.BatchProcessor.write(BatchProcessor.java:171)[212:org.openhab.persistence.influxdb:1.9.0]
        at org.influxdb.impl.BatchProcessor$1.run(BatchProcessor.java:144)[212:org.openhab.persistence.influxdb:1.9.0]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)[:1.8.0_112]
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)[:1.8.0_112]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)[:1.8.0_112]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)[:1.8.0_112]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)[:1.8.0_112]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)[:1.8.0_112]
        at java.lang.Thread.run(Thread.java:745)[:1.8.0_112]
18:44:43.848 [WARN ] [ore.internal.events.OSGiEventManager] - Dispatching event to subscriber 'org.eclipse.smarthome.io.monitor.internal.EventLogger@155f96b' takes more than 5000ms.

Hey,

  1. My best bet would be performance restrictions on your RPi. Please check your system (e.g. with htop).

  2. Strategies for this are already in place in InfluxDB, though I couldn’t find many technical details: https://docs.influxdata.com/influxdb/v1.2/concepts/storage_engine

@theo would you be able to create a PR to handle the timeout error seen above properly?

Hi @ThomDietrich,

Thanks for the quick response. I did try top to monitor the CPU usage, and I am now running htop full time in another terminal to keep monitoring.
My average CPU usage is 1-3% across CPU1-4, with peaks of 2x-3x%; so far no timeout has popped up yet. I will update once I see the CPU load during a timeout. So Thom, before you turned to an SSD on your Pi, didn’t you do any optimization of the buffer settings?

One off-topic question: I really like the openHAB Pi case you showed on the forum. Where can I buy one?

Edit: I just got a timeout hit, and all 4 CPUs were at around 3-5% load.

I did not switch to an SSD; I’m not enthusiastic about that topic.
On the other hand, I’ve got InfluxDB and Grafana running on a big company backend server…

The case I showed isn’t mine; it’s a freely available design, e.g. https://www.thingiverse.com/thing:1859604
What you can do is download the 3D printing files and upload them to one of the commercial 3D printing services, e.g. https://www.shapeways.com


I wasn’t aware it’s built by 3D printing; 3D printing seems to have become very popular at your end. Thanks for the info, I will check out the price and delivery details.

I’m having some trouble getting openHAB 2 to put anything into InfluxDB; the only thing I can see in openhab.log is

2017-04-11 23:12:26.894 [INFO ] [el.core.internal.ModelRepositoryImpl] - Loading model 'influxdb.persist'

nothing else.

I have the

InfluxDB (v 1.0) Persistence
persistence-influxdb - 1.9.0.RC1

installed through Paper UI. Am I missing something? I am not sure where to look, or what to look for, since I’m not getting any error messages.

Using the sine wave script, I can see its data going into the database, but openHAB 2 does not put anything into it.

Did you configure $CONF/services/influxdb.cfg ?

I guess there may be an error/mistake/typo in your influxdb.persist.