Performance problem with a large chart

Yes, I have to admit that I simply record too much data. :see_no_evil: :laughing:

I have solved it by reducing the number of stored records in the affected measurements.

| Item | Total OLD | Total NEW |
| --- | ---: | ---: |
| Oh_Item_Power_Home_Consumption_Monthly | 410800 | 35640 |
| Oh_Item_Power_Grid_Consumption_Monthly | 320700 | 2733 |
| Oh_Item_Power_Production_Monthly | 297000 | 30820 |
| Oh_Item_Power_OwnConsumption_Monthly | 291700 | 30830 |
| Oh_Item_Power_Surplus_Monthly | 202700 | 22050 |

Influx web console:
This query let me count the number of records per measurement:

from(bucket: "openHAB")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "Oh_Item_Power_Home_Consumption_Monthly")
  |> count()
  |> yield(name: "count")

Then I downsampled the data to 10-minute intervals and wrote it to a new bucket:

from(bucket: "openHAB")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => (r["_measurement"] == "Oh_Item_Power_Home_Consumption_Monthly"))
|> aggregateWindow(every: 10m, fn: mean, createEmpty: false)
|> to(bucket: "openHAB2025", org: "MyOrg")
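
The target bucket has to exist before to() can write into it. I created it beforehand; as a sketch, this can be done in the web console or via the CLI (using the config described in the next step), roughly like this:

influx bucket create --name openHAB2025 --org MyOrg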

Influx CLI:
I created a config so that I don't always have to specify the token:

influx config create --config-name <config-name> \
  --host-url http://localhost:8086 \
  --org <your-org> \
  --token <your-api-token> \
  --active
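
The active config can then be checked with something like:

influx config list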

And then I deleted the old _measurement:

influx delete --bucket openHAB --start 1970-01-01T00:00:00Z --stop $(date +"%Y-%m-%dT%H:%M:%SZ") --predicate '_measurement="Oh_Item_Power_Home_Consumption_Monthly"'
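
As a quick sanity check after the delete, the count query from above can also be run through the CLI (a sketch, assuming the active config from the previous step):

influx query 'from(bucket: "openHAB")
  |> range(start: 0)
  |> filter(fn: (r) => r["_measurement"] == "Oh_Item_Power_Home_Consumption_Monthly")
  |> count()'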

Influx web console:
Then I transferred the downsampled data from the new bucket back to the old bucket:

from(bucket: "openHAB2025")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => (r["_measurement"] == "Oh_Item_Power_Home_Consumption_Monthly"))
|> to(bucket: "openHAB", org: "MyOrg")

openHAB:
And for the future, I have created a new persistence strategy for the affected items, so that they are only persisted every 10 minutes (see the sketch below).
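
A minimal sketch of what such a configuration could look like, assuming the InfluxDB persistence add-on and a file like influxdb.persist (the strategy name every10Minutes is just an example):

Strategies {
    // Quartz cron expression: at second 0, every 10 minutes
    every10Minutes : "0 */10 * * * ?"
}

Items {
    Oh_Item_Power_Home_Consumption_Monthly,
    Oh_Item_Power_Grid_Consumption_Monthly,
    Oh_Item_Power_Production_Monthly,
    Oh_Item_Power_OwnConsumption_Monthly,
    Oh_Item_Power_Surplus_Monthly : strategy = every10Minutes
}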