openHAB filling up memory and swap

Perhaps somebody needs to drill that into the devs over at Home Assistant :smiley:

Hi,
My Pi runs openHABian and openHABian only, with a Mosquitto MQTT server, so that isn’t the problem. And like I said in the beginning, I’m not an expert; I’m only trying to solve my problem. I had to reboot my Pi after 3 or 4 days because of the memory, even after I deleted every rule except for one to turn a light on and off, and after a clean install or two.
OK, I didn’t have to overclock, but this is my test, and if it breaks, so be it… I bought a Pi 3B+ and a 4B, so I’m going to keep on testing and hope that it will go on for years without rebooting :slightly_smiling_face:.

If you have a memory leak, see my post. If you don’t, there’s no need to reboot. If necessary, adjust the Xms/Xmx settings.
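
If you go that route: on a Debian-style install the heap limits usually live in EXTRA_JAVA_OPTS. The file path and numbers below are only an example to adapt to your own setup, not a recommendation:

```shell
# /etc/default/openhab2 (the path may differ depending on how openHAB was installed)
# Capping the heap means a slow leak takes longer to exhaust the Pi's RAM;
# tune the values to your hardware and add-on count.
EXTRA_JAVA_OPTS="-Xms192m -Xmx320m"
```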

I recommend moving as much out of the JVM as possible. Use the Mosquitto C++ MQTT broker instead of the embedded Java one. Use C++ deCONZ instead of Java Zigbee. Use the V8/JavaScript-based Node-RED instead of the Java rule engine. Use C++/Rust/Go-based MQTT bridges for other protocols.
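
For illustration, a minimal docker-compose sketch of that kind of split could look roughly like this; the image names are the public Docker Hub ones at the time of writing, and the volume paths are assumptions to adapt, not a tested setup:

```yaml
# docker-compose.yml: openHAB plus native services outside the JVM (sketch only)
version: "3"
services:
  mosquitto:
    image: eclipse-mosquitto          # standalone Mosquitto broker instead of the embedded one
    ports:
      - "1883:1883"
    volumes:
      - ./mosquitto:/mosquitto/config
  nodered:
    image: nodered/node-red           # JavaScript rules instead of the Java rule engine
    ports:
      - "1880:1880"
    volumes:
      - ./nodered:/data
  openhab:
    image: openhab/openhab            # UI, items and persistence stay here
    network_mode: host                # host networking helps bindings that rely on discovery
    volumes:
      - ./openhab/conf:/openhab/conf
      - ./openhab/userdata:/openhab/userdata
```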

Cheers, David

This raises the question: what is left, then? MQTT replaces the event bus. There are no bindings. No rules. So OH is just a UI and maybe persistence? With an approach like this, what value does OH bring?

1 Like

The value of openHAB is the addons, not the core, IMO. I once thought differently, but tbh my openHAB crashes often because of memory leaks. I have seen almost all of the core code. A lot of the leaks and high memory consumption are caused by dependencies, not openHAB code itself (think of Xtend, jUPnP, the old REST interface library we use, and so on).

But the openHAB code itself is also spaghetti in places and not well understood, as seen in the recent huge memory leak that was unintentionally introduced by Markus’s patch to fix a Thing-always-offline bug.
A binding can nowadays register actions for the NRE. I scratched my head when I saw the corresponding code in core.

It would be so much easier if openHAB were a composition of different processes and we could pinpoint exactly where the memory goes and where CPU time is spent.

Cheers, David

Easier for developers? Yes. Easier for testers? Yes. Easier for average users? Unless all of this composition of separate processes is accomplished completely transparently, no.

1 Like

But Rich: it is. Docker Compose, for example. Everything is started up, isolated, and secured with a single command.
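
As a usage sketch, assuming a docker-compose.yml like the one above already exists in the current directory:

```shell
docker-compose up -d    # start all services in the background with one command
docker-compose ps       # verify that every container is running
docker-compose down     # stop and remove them again
```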

I’m Joe user and I want to add a binding. Where is that docker compose thingie? What do I need to type in there? What was that command again?

I’m Joe user and I want to back up my config files. What the heck is a Docker volume? Where are my files? Where do I look? Why are they all owned by random users?

Getting a suite of processes started up addresses only a fraction of the usability concerns.

You forgot to add the overhead that Docker imposes on a limited system such as the Pi.

It’s really not that much overhead. And especially with the expanded RAM options on the RPi 4, I think the benefits of running in containers outweigh the slight additional overhead.

My main concern is that users need an interface that looks like a single thing they are interacting with. I can be convinced that such an interface is possible with a composable series of processes like the one proposed, but so far I’m not convinced.

1 Like

A C++, Go, Rust (or Java/GraalVM) process in a Docker (I should rather say OCI) container uses way, way less memory than an equivalent Java/OSGi/JVM process without a container, and at the same time provides process isolation that we can only dream of right now. Containers do not have much impact on the CPU; remember that containers are not virtual machines. They are just really good process isolation.
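
If you want numbers for your own Pi instead of arguing in the abstract, a quick way to look at the per-container cost (assuming Docker is already installed) is:

```shell
docker stats --no-stream   # one-shot snapshot of CPU and memory per running container
free -h                    # overall memory picture on the host for comparison
```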

I’m not really buying this argument.

Bindings would be Docker images with a run label (so that they know how to start themselves, which ports to expose, etc.) or with another metadata file attached (like a docker-compose file). I don’t see much difference from distributing a .jar file (which at the moment actually causes more issues than a Docker image would).
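
Purely as a hypothetical illustration of such a metadata file, with made-up image and label names (nothing like this exists in openHAB today):

```yaml
# hypothetical addon bundle: the image plus a compose fragment written by the addon author
services:
  hue-binding:
    image: example/oh-addon-hue:1.0          # made-up image name
    labels:
      org.example.addon.name: "Philips Hue"
      org.example.addon.provides: "things,channels"
    ports:
      - "8443:8443"                          # ports the addon declares up front
```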

Of course you can’t throw images at users and expect them to start them somehow. I agree.

You just described Home Assistant’s Hass.io and its addons.

I’d take it one step further: you can’t just have users editing docker-compose files by hand and expect them to be successful. And running everything in Docker is only a tiny part of the usability problem. You also need to give them a unified interface for accessing their data and configuring everything, including installing and removing “bindings”, and all the rest.

You previously said that openHAB’s bindings are its advantage, and now you are saying they all need to be rewritten in another language. So there is no benefit in the core and all the addons need to be rewritten… what’s left? Some UIs that are written against a REST API you want to replace too?

As best I can tell, you are proposing starting over completely from scratch, in which case is it really openHAB any more? It doesn’t seem like you’d be using anything in openHAB as it exists right now. I’m not saying that a project can’t scrap it all and start over anew. But that requires a pretty hefty buy-in from the developers and the users. Without that you are really just creating a new project.

Yes I know. Nice architecture, wouldn’t you say?

Why would they? I think you misunderstood. Addons are composed of a container image and a docker-compose file. A user does not need to edit that file; it is written by the addon developer. At most, a user sees which ports are required and which services are provided by an addon. The user just downloads such an addon bundle and runs it.

I’m not saying that. You can compile openHAB addons down to machine binaries with GraalVM. A shim is required that translates openHAB service calls into inter-process calls.
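
Roughly, the GraalVM step could look like the sketch below, assuming the addon is already packaged as a self-contained jar; real addons would also need reflection/resource configuration, and the shim itself is hypothetical.

```shell
# compile a self-contained addon jar down to a native binary with GraalVM's native-image
native-image -jar myaddon.jar myaddon-native
# the result is a standalone executable that needs no JVM at runtime
./myaddon-native
```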

New addons might as well be written in other languages. I don’t see many advantages in Java as a programming language. Allowing a heterogeneous addon environment is superior, IMO.

Each container runs its own Linux OS, often Alpine Linux (minus the kernel).
Of course, you will say that does not add to the load on my Pi 3B+ either. :roll_eyes:

I cannot comment on the architecture, except to say that in spite of having a full-time paid developer, they cannot keep it running well.

This is half the truth. What happens is that a binary started from within a container uses the container’s user-land libraries. No daemons, no services, no kernel.

It behaves like a statically linked executable: it cannot share dynamic libraries with other processes. That’s why it uses more memory.

This is an issue for cloud computing companies, where hundreds of users might execute the same binary (like the V8 JavaScript runtime for Node.js). They would need maybe half the memory without containers (if it were all about efficiency and not security).

But for a home automation system, why would you run the same or similar services multiple times? The Mosquitto broker does not share much with the OpenZWave daemon except libc.
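
You can check this on a running system, for example with pmap from procps; the exact library paths and numbers will of course differ per install:

```shell
pmap -x $(pidof mosquitto) | grep '\.so' | head   # the shared libraries the broker maps
pmap -x $(pidof mosquitto) | tail -n 1            # its total RSS for comparison
```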

If openHAB ran well on a Pi 3, this topic would not exist, I guess.

I didn’t want to drift away too much here. I just wanted to back up my recommendation from above and provide some background.

Unlike you, apparently, I ran Hass.io on my Pi 3B+. It took longer to start up and appeared to hang on shutdown. Overall, it was less reliable than just running in a Python venv, even with the same code versions.

BTW, there are daemons running in the container to mount volumes, and shell processes to execute code.

Why you mention OpenZWave, I do not know. OH does not use OpenZWave. Home Assistant includes its own fork of it in its Docker container, should you wish to run it that way.

I can’t comment on Hass.io, but from what I know of containers in general, I agree with David. The increase in RAM and CPU is minimal. Do you know whether the problems you encountered were with containers specifically, or with how Hass.io was using containers? There are more ways to do containers badly than there are to do them well.

This is an example of using containers badly. Each container should have only one running process in it. Mounting volumes is handled by the Docker daemon (or whatever container engine you are using). There shouldn’t be any shell processes running except the main process. If you need more than one, you are supposed to spin that out into a separate container.
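
A hypothetical sketch of that rule, with made-up names: instead of one image that starts an app plus a helper daemon from a shell script, each gets its own single-process service.

```yaml
services:
  app:
    image: example/app        # made-up image; runs only the app binary as PID 1
  helper:
    image: example/helper     # made-up image; runs only the helper daemon
    depends_on:
      - app
```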

That’s not to say that doing stuff like this is uncommon. Heck, I even saw a Calibre Docker image once that included a full X environment and an RDP server to access its GUI. But that’s abusing containers and trying to use them like VMs.

Overall, I don’t necessarily have a problem with the architectural approach proposed, as long as it is implemented well. My concerns are:

  • all the container stuff being hidden from the users, who get one unified interface
  • community and maintainer buy-in, since this would pretty much be a throw-it-all-out-and-start-from-scratch effort

It would have to be something that more than one developer pushes for. In fact, the bulk of the developers and maintainers would have to be behind it, or there would have to be a fork, which would fracture the community.

From what I know of the devs there, I would expect it to be implemented badly.
They also have their own custom hypervisor image that they are pushing heavily. When I ran it in a VM, it kept losing its network connectivity every few days, so I gave up on that hypervisor early on.