Performance problem with a large chart

  • Platform information:
    • Hardware: 4 CPU / 6 GB Memory
    • OS: Debian 12 (Proxmox LXC)
    • Java Runtime Environment:
      openjdk version "17.0.13" 2024-10-15 LTS
      OpenJDK Runtime Environment Zulu17.54+21-CA (build 17.0.13+11-LTS)
      OpenJDK 64-Bit Server VM Zulu17.54+21-CA (build 17.0.13+11-LTS, mixed mode, sharing)
    • openHAB version: 4.3.1
    • InfluxDB version: 2.7.11 (dedicated LXC on the same host)

Problem Description
Hello, I have a problem with a chart that requests a relatively large amount of data from InfluxDB and causes the openHAB instance to hang completely the first time, or at the latest the second time, the chart is opened. After about 4-5 minutes at maximum memory utilization, the OH service restarts. The DB instance stays practically idle; only the OH instance escalates in CPU and memory usage. I previously had org.openhab and org.openhab.persistence.influxdb set to DEBUG logging, but unfortunately there are no errors or anything else useful in the 24 MB log.

  • Can I somehow optimize the chart and still display the data for the entire year?
  • Is there a different problem here?
  • Or is there just too much data and I have to reduce the chart to 6 months or less?

Systeminfo

Hardware - normal operations:
[screenshots: CPU and memory usage]
Hardware - problem operations:
[screenshots: CPU and memory usage]

Chart

config:
  chartType: year
  label: Mtl - Strom
  order: "301"
  sidebar: true
slots:
  grid:
    - component: oh-chart-grid
      config: {}
  legend:
    - component: oh-chart-legend
      config:
        orient: vertical
        right: "10"
        top: "100"
        type: scroll
  series:
    - component: oh-aggregate-series
      config:
        aggregationFunction: last
        dimension1: month
        gridIndex: 0
        item: Oh_Item_Power_Home_Consumption_Monthly
        name: Verbrauch
        type: bar
        xAxisIndex: 0
        yAxisIndex: 0
    - component: oh-aggregate-series
      config:
        aggregationFunction: last
        dimension1: month
        gridIndex: 0
        item: Oh_Item_Power_Grid_Consumption_Monthly
        name: Bezug
        type: bar
        xAxisIndex: 0
        yAxisIndex: 0
    - component: oh-aggregate-series
      config:
        aggregationFunction: last
        dimension1: month
        gridIndex: 0
        item: Oh_Item_Power_Production_Monthly
        name: Produktion
        type: bar
        xAxisIndex: 0
        yAxisIndex: 0
    - component: oh-aggregate-series
      config:
        aggregationFunction: last
        dimension1: month
        gridIndex: 0
        item: Oh_Item_Power_OwnConsumption_Monthly
        name: Eigenverbrauch
        type: bar
        xAxisIndex: 0
        yAxisIndex: 0
    - component: oh-aggregate-series
      config:
        aggregationFunction: last
        dimension1: month
        gridIndex: 0
        item: Oh_Item_Power_Surplus_Monthly
        name: Überschuss
        type: bar
        xAxisIndex: 0
        yAxisIndex: 0
  tooltip:
    - component: oh-chart-tooltip
      config:
        confine: true
        orient: vertical
  xAxis:
    - component: oh-category-axis
      config:
        categoryType: year
        gridIndex: 0
        monthFormat: short
        weekdayFormat: short
  yAxis:
    - component: oh-value-axis
      config:
        gridIndex: 0
        name: kWh

This — there's just too much data.

Decimate the data used to generate the chart in place (InfluxDB has ways to do this) or use a shorter time period. I suspect the problem is that OH needs to download each and every point in the DB for the full year, and then it needs to process each and every value to generate the chart. If you have something like one entry per minute or more, that's a lot of data, and you really don't need much more than one entry per day to generate a year-long chart…

Alternatively you can use rrd4j for this data and have openHAB generate the chart from rrd4j instead of InfluxDB. rrd4j decimates the data automatically. That will help you going forward, but it won't do much for the existing data.
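
Something like this in a .persist file would do it — a minimal sketch, with the item names taken from your chart above (rrd4j needs values persisted every minute for its default archives to work):

// rrd4j.persist — minimal sketch; rrd4j requires an everyMinute strategy
Strategies {
    everyMinute : "0 * * * * ?"
}

Items {
    Oh_Item_Power_Home_Consumption_Monthly,
    Oh_Item_Power_Grid_Consumption_Monthly,
    Oh_Item_Power_Production_Monthly,
    Oh_Item_Power_OwnConsumption_Monthly,
    Oh_Item_Power_Surplus_Monthly : strategy = everyMinute, restoreOnStartup
}

Then point the chart at rrd4j instead of InfluxDB; if I remember right, the series components take a service option (e.g. service: rrd4j) to select the persistence service.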

Okay, then it can be solved relatively quickly, and yes, there really is a lot of data. So far I have only set the retention policy to 27m. What would be the solution for reducing the data in InfluxDB?

I haven't used InfluxDB in years. All I know is there are (or at least were) ways to thin out the data as it gets older. Beyond that you'll need to look at the InfluxDB docs.

Yes, I have to admit that I simply record too much data. :see_no_evil: :laughing:

I have solved it and reduced the affected measurements.

Item = record count OLD -> NEW
Oh_Item_Power_Home_Consumption_Monthly = 410800 -> 35640
Oh_Item_Power_Grid_Consumption_Monthly = 320700 -> 2733
Oh_Item_Power_Production_Monthly = 297000 -> 30820
Oh_Item_Power_OwnConsumption_Monthly = 291700 -> 30830
Oh_Item_Power_Surplus_Monthly = 202700 -> 22050

Influx web console:
This query let me count the number of records per measurement:

from(bucket: "openHAB")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "Oh_Item_Power_Home_Consumption_Monthly")
  |> count()
  |> yield(name: "count")

Then I downsampled the data to 10-minute intervals and wrote it to a new bucket:

from(bucket: "openHAB")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "Oh_Item_Power_Home_Consumption_Monthly")
  |> aggregateWindow(every: 10m, fn: mean, createEmpty: false)
  |> to(bucket: "openHAB2025", org: "MyOrg")

Influx CLI
I created a config so that I don't always have to specify the token:

influx config create --config-name <config-name> \
  --host-url http://localhost:8086 \
  --org <your-org> \
  --token <your-api-token> \
  --active

And then I deleted the old _measurement:

influx delete --bucket openHAB --start 1970-01-01T00:00:00Z --stop $(date +"%Y-%m-%dT%H:%M:%SZ") --predicate '_measurement="Oh_Item_Power_Home_Consumption_Monthly"'

Influx web console:
And then I transferred the downsampled data from the new bucket back to the old bucket:

from(bucket: "openHAB2025")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "Oh_Item_Power_Home_Consumption_Monthly")
  |> to(bucket: "openHAB", org: "MyOrg")

openHAB
And for the future, I have created a new persistence strategy for the affected items, so that they are only persisted every 10 minutes.
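
For reference, here is roughly what that looks like as a file-based persistence config — a sketch, assuming the .persist file route (the file and strategy names are my own choice; the Quartz cron fires every 10 minutes):

// influxdb.persist — new 10-minute strategy for the affected items
Strategies {
    every10Minutes : "0 0/10 * * * ?"
}

Items {
    Oh_Item_Power_Home_Consumption_Monthly,
    Oh_Item_Power_Grid_Consumption_Monthly,
    Oh_Item_Power_Production_Monthly,
    Oh_Item_Power_OwnConsumption_Monthly,
    Oh_Item_Power_Surplus_Monthly : strategy = every10Minutes
}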