InfluxDB+Grafana persistence and graphing

There is a menu option in openhabian-config, I believe, where you can change the passwords for InfluxDB, Mosquitto, etc. all in one go.

Ah, great - I will check it out.
Thanks Rich.
I am almost there (at least halfway through my start from scratch) :slight_smile:

Is there a way to limit the height of the rendered picture, e.g. set it to 20px? I haven't seen this option in Grafana or openHAB.

For sitemaps, yes, if you're using the Image type:

       Image refresh=60000 url="http://10.4.28.3:3000/render/d-solo/ipYJIpRRz/amanda-temperatur?panelId=2&orgId=1&from=now-12h&to=now&refresh=30s&width=1000&height=500"

Notice the last two arguments (width and height) in the URL.

EDIT - I haven't tried 20px though… Seems very small to me.

Ah, thanks.
I hadn't noticed that parameter - it's working now.

About the strategies section:

Note that a strategies section must be included (with a default defined), or the persistence services will not work.

That was exactly what prevented it from working for me. Maybe the OP could update the first post :slight_smile: You need to have that default strategy :slight_smile:
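For reference, a minimal sketch of an influxdb.persist with the required default strategy (the cron strategies and the catch-all Items entry are just examples, adjust to your setup):

    Strategies {
        everyMinute : "0 * * * * ?"
        everyHour   : "0 0 * * * ?"
        default = everyChange
    }

    Items {
        * : strategy = everyChange, everyHour, restoreOnStartup
    }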

BTW, this type of entry in the sitemap is ignored by the mobile/iPad app.
Is that intentional, or is it a known bug?
E.g. this works only when viewed via the web page.

No, it works fine on my phone using the openHAB app (Android).

Hmm, it does not work in the iOS app either.

Works fine on my iPhone as well… I just made a test and it went fine.

I down-sample values in two steps into a mid-term and a long-term retention policy. Here is a short explanation of how to do this:

First, create the retention policies (RP):

CREATE RETENTION POLICY "sevendays" ON openhab_db DURATION 7d REPLICATION 1 DEFAULT
CREATE RETENTION POLICY "oneyear"   ON openhab_db DURATION 8784h REPLICATION 1
CREATE RETENTION POLICY "forever"   ON openhab_db DURATION INF REPLICATION 1

This will create a default RP where data will get deleted after a week, another RP where data will get deleted after one year (366 days) and another one where data is never deleted.

I also deleted the "autogen" RP - you can also keep it and just reduce its DURATION. Before doing that and deleting any data: below I explain how to backfill old data into your new RPs.
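For reference, a sketch of the statements involved, run in the influx CLI: either drop the old default RP entirely (first statement) or keep it and just shorten its duration (second statement), then verify the result (adjust the 7d duration to your needs):

DROP RETENTION POLICY "autogen" ON openhab_db
ALTER RETENTION POLICY "autogen" ON openhab_db DURATION 7d
SHOW RETENTION POLICIES ON openhab_db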

Down-sampling values by using different RPs and continuous queries (CQ) is explained here https://docs.influxdata.com/influxdb/v1.7/guides/downsampling_and_retention/ - however, it does not explain using regular expressions to down-sample all measurements.

Regular expressions and backreferences are explained here https://docs.influxdata.com/influxdb/v1.7/query_language/continuous_queries/ - however, the examples there do not work with the (too) simple scheme that openHAB uses for InfluxDB persistence.

The problem with the openHAB scheme is that openHAB creates a new InfluxDB "measurement" for each item and stores the value in a field named "value". The datatype of this field is the datatype of the openHAB item.

If you also store items of datatype String, this leads to a problem with InfluxDB queries: a SELECT statement can only return values of the same type for fields having the same name - even across different measurements. (see details)

However, SELECT can include a type filter (which attempts a cast and ignores the value if it cannot be cast). This type filter does not work with "*", so you need to explicitly state the field name - which is quite simple for openHAB, because it's always "value":

So, to create the correct CQ, I used the following for down-sampling to 15-minute intervals:

CREATE CONTINUOUS QUERY cq_sevendays ON openhab_db BEGIN SELECT mean(value::float) AS value, min(value::float), max(value::float) INTO openhab_db.oneyear.:MEASUREMENT FROM openhab_db.sevendays./.*/ GROUP BY time(15m), * END

This selects the MEAN, but also the MIN and MAX values, for each 15-minute interval.
With "AS value" the mean value is stored under the field name "value" again, which is useful because your Grafana queries do not need to be changed to read from the new RP (except for selecting the correct RP - as of now, I'm not aware that Grafana is able to auto-select the RP depending on the current time scale).
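As an illustration, a Grafana panel query pointed at the down-sampled RP could look something like this (a sketch only - amanda_Temperature is just a placeholder measurement name, and $timeFilter/$__interval are Grafana's built-in InfluxDB variables):

SELECT mean("value") FROM "oneyear"."amanda_Temperature" WHERE $timeFilter GROUP BY time($__interval) fill(null)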

In a next step, I down-sample data even further:

CREATE CONTINUOUS QUERY cq_oneyear ON openhab_db BEGIN SELECT mean(value::float) AS value, min(min::float), max(max::float) INTO openhab_db.forever.:MEASUREMENT FROM openhab_db.oneyear./.*/ GROUP BY time(1d), * END

By using the min(min::float) and max(max::float) selectors, the absolute minimums/maximums are retained forever.

The CQs work from now on, but not on past data. To backfill old data, you can run the queries once manually. Use the SELECT as shown in the CQs, but add a WHERE time <= now() clause to select all past data:

SELECT mean(value::float) AS value, min(value::float), max(value::float) INTO openhab_db.oneyear.:MEASUREMENT FROM openhab_db.sevendays./.*/ WHERE time <= now() GROUP BY time(15m), *
SELECT mean(value::float) AS value, min(min::float), max(max::float) INTO openhab_db.forever.:MEASUREMENT FROM openhab_db.oneyear./.*/ WHERE time <= now() GROUP BY time(1d), *
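To sanity-check that the backfill actually landed in the new RPs, a quick count per measurement can help (just a suggestion, not part of the original procedure):

SELECT count(value) FROM openhab_db.oneyear./.*/
SELECT count(value) FROM openhab_db.forever./.*/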

Hmm, can you please provide the part of your sitemap where you have it?
I'd like to check whether I have the whole frame set up correctly or something… Thanks.

There really isn't much to it… But here is the frame:

    Frame label="Statistik" {
       Text item=amanda_Temperature {
          Image refresh=60000 url="http://10.4.28.3:3000/render/d-solo/ipYJIpRRz/amanda-temperatur?panelId=2&orgId=1&from=now-12h&to=now&refresh=30s&width=1000&height=500"
       }
    }

Hey guys, I recently had my openHAB install corrupted (long story, the whole server went kaput) and am rebuilding my openHAB install from scratch. Can someone please point me to the latest instructions on how to get MapDB and InfluxDB up and running? I came across several guides, but I think they might have been out of date, because they instructed me to run commands on Ubuntu to configure InfluxDB and I don't remember doing that when I had this set up before.

I am using Ubuntu 18.04 Server. I have already edited the addons.cfg file to install the persistence modules, installed InfluxDB + Grafana via openhabian-config, and created and edited the mapdb.persist and influxdb.persist files. I can't seem to get any items to show up in Grafana.

Any help is appreciated!

The OP above is, as far as I know, up to date.

I still get the error below and don't know how to identify the cause.
I read that it can be caused by too many writes to the DB from items changing very often (persisted on everyChange), but which items could those be?

2020-04-15 09:05:20.912 [ERROR] [org.influxdb.impl.BatchProcessor    ] - Batch could not be sent. Data will be lost
java.lang.RuntimeException: {"error":"timeout"}

        at org.influxdb.impl.InfluxDBErrorHandler.handleError(InfluxDBErrorHandler.java:19) ~[influxdb-java-2.2.jar:?]
        at retrofit.RestAdapter$RestHandler.invoke(RestAdapter.java:242) ~[retrofit-1.9.0.jar:?]
        at org.influxdb.impl.$Proxy207.writePoints(Unknown Source) ~[?:?]
        at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:151) ~[influxdb-java-2.2.jar:?]
        at org.influxdb.impl.BatchProcessor.write(BatchProcessor.java:171) [influxdb-java-2.2.jar:?]
        at org.influxdb.impl.BatchProcessor$1.run(BatchProcessor.java:144) [influxdb-java-2.2.jar:?]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_222]
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_222]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_222]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_222]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_222]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_222]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_222]

If you think that might be the cause, watch events.log. If you have an Item changing a lot really fast, or several persisted Items changing very close together, you should be able to see that in events.log - unless you are persisting on everyUpdate, as updates do not get logged in events.log.
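A rough way to spot the chattiest Items in events.log (a sketch only, assuming openHAB 2.x and the default openHABian log location):

    grep -oE "[A-Za-z0-9_]+ changed from" /var/log/openhab2/events.log | sort | uniq -c | sort -rn | head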

Again, you triggered the right synapse:

I remember that months ago I cleaned up events.log of annoying messages (CPU load and such every second, AFAIR) via org.ops4j.pax.logging.cfg.
I will revert this and check whether those items are persisted and might be the reason…
Thanks for pushing me in the right direction.

The OP is up to date, other than that openHABian creates the database and the usernames and passwords are set automatically during that installation.

I just had to copy that password and put it in the Grafana Data Source configuration.

Hi everyone,

Since the update to 7.0.0, I am facing the following issue:

[21:15:18] XX@XX:~$ sudo systemctl status grafana-server.service
ā— grafana-server.service - Grafana instance
   Loaded: loaded (/usr/lib/systemd/system/grafana-server.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Sat 2020-05-23 21:13:22 CEST; 1min 59s ago
     Docs: http://docs.grafana.org
  Process: 782 ExecStart=/usr/sbin/grafana-server --config=${CONF_FILE} --pidfile=${PID_FILE_DIR}/grafana-server.pid --pac
 Main PID: 782 (code=exited, status=1/FAILURE)
      CPU: 0

May 23 21:13:22 openHABianPi systemd[1]: grafana-server.service: Unit entered failed state.
May 23 21:13:22 openHABianPi systemd[1]: grafana-server.service: Failed with result 'exit-code'.
May 23 21:13:22 openHABianPi systemd[1]: grafana-server.service: Service hold-off time over, scheduling restart.
May 23 21:13:22 openHABianPi systemd[1]: Stopped Grafana instance.
May 23 21:13:22 openHABianPi systemd[1]: grafana-server.service: Start request repeated too quickly.
May 23 21:13:22 openHABianPi systemd[1]: Failed to start Grafana instance.
May 23 21:13:22 openHABianPi systemd[1]: grafana-server.service: Unit entered failed state.
May 23 21:13:22 openHABianPi systemd[1]: grafana-server.service: Failed with result 'exit-code'.

I am running openHABian and did an update yesterday, since which I have been facing this issue.
I did a restart, but still get the same messages. As mentioned, the problem started with the version upgrade. Is anybody else facing this problem?

Kindly,
Woogi