InfluxDB+Grafana persistence and graphing

I still get the error and don’t know how to track down its cause.
I read that it can come from too many writes to the DB caused by Items changing very often (persisted on everyChange), but which Item could that be?

2020-04-15 09:05:20.912 [ERROR] [org.influxdb.impl.BatchProcessor    ] - Batch could not be sent. Data will be lost
java.lang.RuntimeException: {"error":"timeout"}

        at org.influxdb.impl.InfluxDBErrorHandler.handleError(InfluxDBErrorHandler.java:19) ~[influxdb-java-2.2.jar:?]
        at retrofit.RestAdapter$RestHandler.invoke(RestAdapter.java:242) ~[retrofit-1.9.0.jar:?]
        at org.influxdb.impl.$Proxy207.writePoints(Unknown Source) ~[?:?]
        at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:151) ~[influxdb-java-2.2.jar:?]
        at org.influxdb.impl.BatchProcessor.write(BatchProcessor.java:171) [influxdb-java-2.2.jar:?]
        at org.influxdb.impl.BatchProcessor$1.run(BatchProcessor.java:144) [influxdb-java-2.2.jar:?]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_222]
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_222]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_222]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_222]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_222]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_222]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_222]

If you think that might be the cause, watch events.log. If you have an Item changing a lot really fast, or several persisted Items changing very close together, you should be able to see that in events.log. The exception is persisting on everyUpdate: updates do not get logged in events.log.
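If a specific Item turns out to be the culprit, you can throttle it in the persistence configuration instead of persisting it on every change. A hypothetical influxdb.persist (the Item name here is made up) could look roughly like this:

// influxdb.persist (sketch)
Strategies {
    everyMinute : "0 * * * * ?"
    default = everyChange
}

Items {
    // throttle the fast-changing Item to one write per minute
    CPU_Load : strategy = everyMinute
    // everything else is still written on every change
    * : strategy = everyChange, restoreOnStartup
}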

Again, you triggered the right synapse:

I remember that months ago I cleaned the events.log of annoying messages (CPU load and the like, every second AFAIR) via org.ops4j.pax.logging.cfg.
I will revert this and check whether those Items are persisted and might be the reason…
Thanks for pushing me in the right direction.
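For reference, what I added back then was something along these lines (quoting from memory, so the filter name and regex are only a sketch). It only hides the messages from events.log; the Items are of course still persisted.

# org.ops4j.pax.logging.cfg (excerpt, from memory)
# drop the noisy state changes from events.log via a RegexFilter
log4j2.appender.event.filter.noisyitems.type = RegexFilter
log4j2.appender.event.filter.noisyitems.regex = .*(CPU_Load|CPU_Temperature).*
log4j2.appender.event.filter.noisyitems.onMatch = DENY
log4j2.appender.event.filter.noisyitems.onMismatch = ACCEPT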

The OP is up to date, other than that openHABian now creates the database and sets the usernames and passwords for you automatically during the installation.

I just had to copy that password and put it into the Grafana data source configuration.

Hi everyone,

Since the update to 7.0.0 I have been facing the following issue:

[21:15:18] XX@XX:~$ sudo systemctl status grafana-server.service
● grafana-server.service - Grafana instance
   Loaded: loaded (/usr/lib/systemd/system/grafana-server.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Sat 2020-05-23 21:13:22 CEST; 1min 59s ago
     Docs: http://docs.grafana.org
  Process: 782 ExecStart=/usr/sbin/grafana-server --config=${CONF_FILE} --pidfile=${PID_FILE_DIR}/grafana-server.pid --pac
 Main PID: 782 (code=exited, status=1/FAILURE)
      CPU: 0

May 23 21:13:22 openHABianPi systemd[1]: grafana-server.service: Unit entered failed state.
May 23 21:13:22 openHABianPi systemd[1]: grafana-server.service: Failed with result 'exit-code'.
May 23 21:13:22 openHABianPi systemd[1]: grafana-server.service: Service hold-off time over, scheduling restart.
May 23 21:13:22 openHABianPi systemd[1]: Stopped Grafana instance.
May 23 21:13:22 openHABianPi systemd[1]: grafana-server.service: Start request repeated too quickly.
May 23 21:13:22 openHABianPi systemd[1]: Failed to start Grafana instance.
May 23 21:13:22 openHABianPi systemd[1]: grafana-server.service: Unit entered failed state.
May 23 21:13:22 openHABianPi systemd[1]: grafana-server.service: Failed with result 'exit-code'.

I am running openHABian and did an update yesterday, and I have been facing this issue ever since.
I restarted, but the error messages are still the same. As mentioned, the problem started with the version upgrade. Is anybody else facing this problem?

Kindly,
Woogi

@Woogi, what OS version is installed on your system - stretch?
See this post and the following ones.


Hi Wolfgang,

thx for your reply.

I am running

##   Release = Raspbian GNU/Linux 9 (stretch)
##    Kernel = Linux 4.14.79-v7+
##  Platform = Raspberry Pi 3 Model B Rev 1.2

I’m pretty sure you have to upgrade to buster.

Hello,

I am new to Grafana and need a little help.
I can insert values into InfluxDB and draw graphs in real time.
But how do I implement the following requirement?

An example (not a real-life one):
I have a temperature sensor which I can read at any time, and I insert its value into InfluxDB.
But I also have another temperature reading that arrives together with a date, and I want to insert it with that date so the graph stays accurate. When I simply insert the second sensor's value into InfluxDB, my graph is not accurate, because the data I receive is a little bit from the past.

Is this possible with openHAB?

Ed

Follow the tutorial in the original post.

openHAB does not support this directly. You will have to insert the value yourself, either directly into the database or by interacting with the openHAB REST API.


Hello,

That was my assumption. Where can I find an example in the original post?

Maybe it would be a good feature to implement: in the persistence file, allow not just a value but a date:value pair?

Ed

The first post in this thread is a complete example of how to set up InfluxDB and Grafana.


Hello,

I set it up and it works, but my problem is the time synchronization. Where can I find this example?

Ed

This is a bit special and openHAB does not support it. It's more of an SQL job, I think, so maybe another forum is a better fit (i.e. there are more people there who can help with this problem).

I’m using a simple bash script to import daily measurements from my photovoltaic system, but I store them in MariaDB and I don’t use these measurements in openHAB or Grafana…

I don’t think anyone has posted an example of this because it’s usually handled outside of openHAB. As Udo indicates, you’d write a script that inserts the data into InfluxDB instead of relying on openHAB to do it.
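A minimal sketch of such a script (this assumes the InfluxDB 1.x HTTP write API; the database name, credentials and measurement name are placeholders):

#!/bin/bash
# Write one reading with an explicit (past) timestamp to InfluxDB
# using the line protocol: <measurement> value=<value> <timestamp>
TS=$(date -d "2020-05-20 14:30" +%s)   # epoch seconds of the actual measurement
VALUE=21.5

curl -i -XPOST "http://localhost:8086/write?db=openhab_db&precision=s" \
     -u openhab:changeme \
     --data-binary "OutsideTemperature value=${VALUE} ${TS}"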

To clarify a bit, the openHAB persistence services are not intended for general purpose database access, and do not allow arbitrary timestamping.
For openHAB, persistence is about recording (and later reading) Item states only, a snapshot of the state “now”, if you will.

You can write directly to InfluxDB using sendHttpPostRequest and bypass persistence. Then you can send anything you want, including the timestamp. See the example here of a different retention policy.
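A rough sketch of such a rule (the database name, credentials and Item names are made up, and it assumes the timestamp of the reading is available as epoch seconds in a second Item):

rule "Write delayed reading straight to InfluxDB"
when
    Item DelayedTemperature received update
then
    // epoch seconds of the actual measurement, kept in a separate Item
    val ts = (DelayedTemperature_Time.state as Number).longValue
    val value = (DelayedTemperature.state as Number).doubleValue
    // line protocol: <measurement> value=<value> <timestamp>
    sendHttpPostRequest(
        "http://localhost:8086/write?db=openhab_db&precision=s&u=openhab&p=changeme",
        "text/plain",
        "DelayedTemperature value=" + value + " " + ts)
end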

Hi,
I installed via openHABian and changed /etc/grafana/grafana.ini according to the first post.
I can open the graph from a “not logged in” browser, but HABPanel (a frame with url = http://ip-address:3000/d-solo/3L-5BqzRk/openhab-dashboard?tab=query&orgId=1&from=now-48h&to=now&panelId=2) doesn’t show the graph:

ip-address refused the connection

Since I can see the graph in another browser where I’m not logged in, I think the error is with the HABPanel frame, right?

Edit: oh, and another thing:

To render a panel image, you must install the Grafana Image Renderer plugin. Please contact your Grafana administrator to install the plugin.

IIRC the image renderer was installed on a previous openHABian setup, wasn’t it?


Grafana deprecated that plugin quite some time ago. Now there is a separate service. See Grafana Image Renderer for a way to install and use the new separate service using Docker.
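Roughly, the Docker variant boils down to running the renderer container and pointing Grafana at it; the host names and ports below are just the defaults and may need adjusting for your setup:

# start the remote rendering service
docker run -d --name=renderer -p 8081:8081 grafana/grafana-image-renderer

# /etc/grafana/grafana.ini
[rendering]
server_url = http://localhost:8081/render
callback_url = http://localhost:3000/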

OK, I didn’t know this. Thanks for the link!
I tried the simple approach, but

Error: ✗ plugin is not supported on your architecture and OS.

I have not tried the manual installation, but at the moment I don’t necessarily need the image renderer.

Apparently this is needed in grafana.ini so that HABPanel can embed graphs:
allow_embedding = true
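For context, the setting goes into the [security] section of grafana.ini:

# /etc/grafana/grafana.ini
[security]
# allow panels to be embedded in an <iframe>, e.g. by HABPanel
allow_embedding = true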

Maybe @ThomDietrich could add it to the first post?
