Status update and caching issues with HTTP binding

I am running openHAB 4.0.3 on openHABian on an RPi 4 (8 GB). From there, I connect to a Rademacher HomePilot via the HTTP binding to talk to its JSON interface. This setup is described in various posts (e.g. here) and generally works.

Nevertheless, I encountered an issue I don’t have an explanation for.
My HomePilot and its attached smarthome system are… huge: 80+ physical and 200+ virtual devices. That is quite a job to read out, but it’s not too bad, because I found out that there are two status addresses which together contain more or less all the measurement data I need. Each of these addresses answers with a large JSON document. I’d like to update this every 5 seconds. Since I got HTTP timeouts, I added a delay of 100 ms, which made things work well at first. Here is the config:

UID: http:url:HomePilot
label: HomePilot
thingTypeUID: http:url
configuration:
  authMode: BASIC
  ignoreSSLErrors: true
  baseURL: http://<HomePilotURL>
  delay: 100
  stateMethod: GET
  refresh: 5
  commandMethod: PUT
  contentType: application/json
  timeout: 3000
  bufferSize: 2048

Here are some example channel configurations:

  - id: Heizung_Arbeitszimmer_Istwert
    channelTypeUID: http:number
    label: Heizung Arbeitszimmer Istwert
    description: null
    configuration:
      mode: READONLY
      escapedUrl: false
      stateExtension: /v4/devices
      stateTransformation: JSONPATH:$.devices[?(@.did==<myDID>)].statusesMap.acttemperatur∩JS:toDegrees.js
  - id: Heizung_Arbeitszimmer_Sollwert
    channelTypeUID: http:number
    label: Heizung Arbeitszimmer Sollwert
    description: null
    configuration:
      mode: READONLY
      escapedUrl: false
      stateExtension: /v4/devices
      stateTransformation: JSONPATH:$.devices[?(@.did==<myDID>)].statusesMap.Position∩JS:toDegrees.js

With this kind of configuration growing, especially after I saw that it was generally working and copy-pasted over 100 such channel configs into my .things file, I suddenly ended up with a warning in the logfile:

2023-09-23 21:45:05.444 [WARN ] [nding.http.internal.HttpThingHandler] - 134 channels in thing http:url:HomePilot with a delay of 100 incompatible with the configured refresh time. Refresh-Time increased to the minimum of 14

This surprised me quite a bit. There is one thing all of these many, many configurations have in common: they all use stateExtension: /v4/devices.
From all I have read in the forum and the docs, I would have expected this not to happen because of the automatic caching of the HTTP binding. It should query http://<HomePilotURL>/v4/devices once every five seconds, store the response in the cache and do the parsing for all channels with this stateExtension from the cached data.

I additionally checked that adding n channels with stateExtension: /v4/devices leads to the number in the log warning growing by n as well.

Of course I cannot tell whether I am doing something wrong, have misunderstood something, the pre-check that issues the warning is wrong, or the caching as such really does not work as expected. Therefore I would like to ask for help here.

If you are configuring the /v4/devices on the Channels instead of on the Thing then no, it’s going to hit that endpoint individually for each Channel. You’d need to set the Thing’s URL to the full http://<HomePilotURL>/v4/devices URL. At least that’s my understanding of how it works.
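If that is the fix, a minimal sketch of such a Thing could look like the following (untested; the Thing UID HomePilotDevices is mine, everything else is taken from the configs above). With the full status URL on the Thing, the channels no longer need a stateExtension:

UID: http:url:HomePilotDevices
label: HomePilot Devices
thingTypeUID: http:url
configuration:
  authMode: BASIC
  ignoreSSLErrors: true
  baseURL: http://<HomePilotURL>/v4/devices
  delay: 100
  stateMethod: GET
  refresh: 5
  contentType: application/json
  timeout: 3000
  bufferSize: 2048
channels:
  - id: Heizung_Arbeitszimmer_Istwert
    channelTypeUID: http:number
    label: Heizung Arbeitszimmer Istwert
    configuration:
      mode: READONLY
      escapedUrl: false
      stateTransformation: JSONPATH:$.devices[?(@.did==<myDID>)].statusesMap.acttemperatur∩JS:toDegrees.js

Since the original setup reads from two status addresses, this would mean one such Thing per address.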

Hmmm… to me it looks like it actually should work, at least according to what’s written in this post: “for each stateExtension”.

Actually, I had already posted a similar question in the linked thread some time ago, but as there was no reply, I considered it lost since the thread was quite old. Well, now there is an answer there :wink: sorry for being impatient.

It seems the warning does not check whether the stateExtensions match and simply counts the channels; that would also explain the number: 134 channels × 100 ms delay ≈ 13.4 s, rounded up to 14 s. If it only issued the warning, fine, but it does not only warn, it also changes the timing (the refresh interval), and so far I don’t know how to stop it from doing that…
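Assuming the minimum really is computed as channels × delay, one workaround (untested, my own assumption) might be to lower the delay far enough that the product stays below the configured refresh, e.g. 134 × 30 ms ≈ 4 s, which is under 5 s; whether the HomePilot then runs into timeouts again would have to be tried. Compared to the Thing config above, only the delay changes:

UID: http:url:HomePilot
label: HomePilot
thingTypeUID: http:url
configuration:
  authMode: BASIC
  ignoreSSLErrors: true
  baseURL: http://<HomePilotURL>
  delay: 30
  stateMethod: GET
  refresh: 5
  commandMethod: PUT
  contentType: application/json
  timeout: 3000
  bufferSize: 2048

Alternatively, splitting the channels across several such Things would reduce the per-Thing channel count that goes into the same calculation.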