4G data being wasted on openHAB Cloud connection

I have openHAB 3 on a Raspberry Pi, running as an access server for MQTT devices in my log house.

I can say it was partially my fault for messing with the sitemap, but the question is still worth asking.

Why does the openHAB Cloud connection, on rare occasions, keep streaming data after the sitemap file is reloaded? I only added one switch and some text fields. I may need to dig through some logs to prove the cause, but the behaviour is bad and wastes my limited 4G data.

To clarify the question: I restarted openHAB and the data hogging stopped. I did that after one day, and luckily it only wasted 550 MB of data. But it took me some hours of shutting down everything else (the NVR and some other networked devices) before I could finally dig into the OH server with iftop.

Is there a known bug I triggered with the sitemap reload that keeps the connection busy?
I am in a way revisiting a question I once had with OH 2 at home: Constant 4Mb/s upload traffic to openhab cloud

Back then the issue was that I had a camera feed on the main page, which locally also sometimes caused the page to buffer and never finish loading, even though everything showed up and worked. But here I don't have any camera feed in OH.

I used iftop to confirm the traffic, and the chart in OH that shows my data plan confirmed it was going down steadily by 25 MB every hour, where it is usually less than 1 MB/hour.
I have only 20 GB for 180 days (if nothing wastes it earlier), so I can use about 111 MB/day.
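For what it's worth, the budget math above works out like this (a quick sketch using the numbers from the post, taking the plan's 20 GB as 20000 MB):

```shell
# Data-budget arithmetic from the numbers above
budget_per_day=$((20000 / 180))   # 20 GB over 180 days -> ~111 MB/day allowed
leak_per_day=$((25 * 24))         # 600 MB/day at the observed 25 MB/hour
echo "budget: ${budget_per_day} MB/day, leak: ${leak_per_day} MB/day"
# prints: budget: 111 MB/day, leak: 600 MB/day
```

So at the leaking rate the connection burns through more than five days of budget every day.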

I may get more meaningful data tomorrow, when I have more time to collect some logs.

I am just trying to learn what to do and what not to do to avoid wasting 4G data, and to limit the manual interventions needed to prevent internet disconnects caused by data runaways.

These limited data plans are rubbish, but 120 €/year seems a bit much for unlimited data just for OpenHAB access and the NVR.
I once had the same issue with the NVR: I forgot I was logged in at home all day (I had auto-login enabled) and it wasted 1 GB without playing any video.

Thanks a lot

Why does the openHAB Cloud connection, on rare occasions, keep streaming data

Your openHAB will always keep a WebSocket connection open to openHAB Cloud, so there is always going to be some constant data usage from the keep-alive (heartbeat) traffic, which is part of the socket.io libraries that we use.

If you turn the logging up to TRACE on the cloud binding, you can see the requests and responses being exchanged between your openHAB and the cloud. To see the heartbeat messages, you would probably need to enable logging for the io.socket library as well, but I have not tried doing that yet.
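For reference, a sketch of how the TRACE logging can be enabled from the Karaf console (the logger name below is the one the openHAB 3 cloud connector uses; adjust it if your version differs):

```shell
# Open the openHAB console (or: ssh -p 8101 openhab@localhost)
openhab-cli console

# Inside the console:
log:set TRACE org.openhab.io.openhabcloud
log:tail                                   # watch the cloud traffic live

# Revert to the default level when done:
log:set DEFAULT org.openhab.io.openhabcloud
```

Leave TRACE on only as long as needed; it is verbose and the log files grow quickly.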


Just to add some more to it:

  1. if openHAB Cloud closes the connection, it's not working as it should
  2. the main purpose of openHAB Cloud is an "ever ready" live connection, which can trigger your local openHAB from "the internet"

It’s literally within the first two paragraphs of the docs:

That being said, the only "surprise" to me is half a gig of data in one day. But then again, there are constant heartbeats and perhaps some overhead and stuff within the socket…

Thanks for the responses.
I have to add that on a normal day it uses about 35 MB.
I guess the heartbeat plus a few notifications would account for that.
Is it possible that a syntax error in the sitemap could cause the sitemap connection to be kept open all day, with all the fields changing every few seconds? That would net maybe 450 MB in this case.
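A rough sanity check on that theory (the per-update size and interval below are guesses, not measurements):

```shell
# 450 MB spread over 24 hours is a sustained rate of:
bytes=$((450 * 1024 * 1024))
per_sec=$((bytes / 86400))
echo "~${per_sec} bytes/s"   # ~5461 bytes/s, i.e. ~5.3 KB/s
# If a field refresh pushed roughly 15 KB every 3 s (a guess), that would be
# ~5 KB/s -- the same order of magnitude, so the theory is at least plausible.
```

Only the TRACE log can show what is actually being sent, but the numbers do not rule the sitemap theory out.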

I use VS Code, which seems to recognise errors, but sometimes the error recognition does not work, and I may have made a mistake with frames or text fields without noticing it.

I like that I don't have to restart OH for every change, as HA users say they have to; it is nice to reduce downtime.
But it is interesting that a restart stopped the data wasting in my case. How can my config be the cause if the problem solved itself with nothing but a restart?

I can't complain about it fixing itself, but why couldn't it do so without a restart?

I may copy the config to the test Pi at home, enable the suggested logging, then mess with the sitemap and see if I can reproduce any pattern of bad behaviour.

@Matej_Kotnik I don't think you can make any more assumptions until you get TRACE logging enabled; this will definitely show you what data is being passed back and forth.

I have to add that on a normal day it uses about 35 MB.

That sounds about right for heartbeat messages + TCP overhead and any reconnects that may happen.