I’m running a few Raspberry Pi 3Bs for friends and family, and I was wondering if I could reduce the memory footprint of openHAB.
There is not much room in memory (30 MB free, 101 MB available, as reported by free -m), and a little bit of swap (50-100 MB) is in use (as reported by FireMotD at login). On an SD card, that is not a good idea.
My environment:
openHABian 32-bit image (a recent version, from January or so)
OS: Trixie, 32-bit
Latest openHAB 4.3.9 release
Only 1 or 2 Zigbee sensors
ZRAM in use (default config from openHABian)
Everything configured using configuration files
No UI in use except OH Android/iOS app
I reduced the system services for Samba (smbd, nmbd, winbind)
I also once tried an openHAB container, and that uses only 124 MB (797 MB virtual), as reported by docker container list. That sounds like a lot less (or is this comparison not fair?).
Yes, I know, this is already an old environment with hardly any support left. 64-bit and OH 5.0 are the future, and I will have to switch at some point. In the meantime, these old Pis run quite reliably. But I was also curious from a technical point of view.
The very first question you should answer is: why?
Is there any problem with your system(s) you think you would resolve that way?
And long story short, the answer is no.
openHABian already optimizes RAM usage to the extent possible while staying reliable.
If it had been possible to fit current OH’s 64-bit footprint into the 1 GB of a RPi3, we would have done it.
I think it’s an apples to oranges comparison. Typically, a container is going to require more RAM overall because it needs to run a bunch of stuff inside the container environment that is already available and running on the host. And that doesn’t even include the resources used by running Docker’s daemon and other support services (note Podman should be lighter weight as it’s daemonless, but it still has some overhead).
Don’t touch it, you’ll break it.
Everything is going to be a trade-off. As @mstormi indicates, openHABian is pretty well optimized for most openHAB deployments. However, if you wanted to optimize everything for your deployments, here is what I would do.
Review your ZRAM usage. If the ZRAM file systems have a lot of free space, consider reducing their size. This won’t necessarily save RAM, but it limits how far these file systems can grow before they fill up and crash the system, ensuring the rest of the system has more RAM to work with. Of course, OH won’t be very functional when that happens.
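As an illustration: openHABian configures ZRAM through /etc/ztab, so shrinking the allocations might look like the snippet below. The column layout and the values shown are examples only and may differ on your image; check your existing /etc/ztab before changing anything.

```
# /etc/ztab (sketch -- sizes are illustrative, not recommendations)
# swap  alg   mem_limit  disk_size  swap_priority  page-cluster  swappiness
swap    lzo   150M       450M       75             0             90
# dir   alg   mem_limit  disk_size  target_dir     bind_dir
dir     lzo   100M       300M       /var/log       /log.bind
```

The mem_limit column caps how much compressed data may sit in RAM; disk_size is the uncompressed size the file system advertises.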
Review your ZRAM logging and reduce how much is written out. Every byte written to a file in ZRAM consumes some amount of RAM. Change the loggers to log less, keep fewer old log files (the default is 7), and reduce the maximum size of a log file before it rolls over. I’d turn off events.log at a minimum. Of course, with less logging you have less information to diagnose problems when things go wrong.
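For example, events.log can be silenced by setting the event logger’s level to OFF in userdata/etc/log4j2.xml. A sketch of the relevant entry (your file already contains a Logger for openhab.event; only the level attribute needs to change, and the exact attributes may differ between OH versions):

```
<!-- userdata/etc/log4j2.xml: turn off the events.log logger (sketch) -->
<Logger level="OFF" name="openhab.event" additivity="false">
    <AppenderRef ref="EVENT"/>
</Logger>
```

The same file holds the RollingFile appenders where you can lower the rollover size and the number of retained files.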
Also optimize your persistence. Only save exactly what you need, exactly as often as you need to. Where possible, don’t save the same data in more than one place (e.g. a Number Item in both rrd4j and MapDB).
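A minimal persistence config along these lines could look like the following rrd4j.persist sketch (the Item name is made up for illustration; note that rrd4j needs an everyMinute strategy to keep its fixed-interval archives filled):

```
// persistence/rrd4j.persist -- sketch; Temperature_Living is a hypothetical Item
Strategies {
    // rrd4j stores fixed-interval samples, so persist every minute
    everyMinute : "0 * * * * ?"
}
Items {
    // persist only this one Item instead of a wildcard group
    Temperature_Living : strategy = everyMinute, restoreOnStartup
}
```

The point is to list specific Items rather than persisting everything with a `*` entry.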
Turn off ZRAM entirely. Of course, that means your SD card is at much greater risk of wearing out, because OH does a lot of writes under normal operating conditions. Items 1 and 2 above can reduce the number of writes.
Use Rules DSL. Fewer add-ons means less RAM. It won’t save much though.
Turn off unused bundles in the Karaf console. For example, if you are not actively developing Rules DSL with the VS Code openHAB extension, you likely are not using the LSP service. I’m sure there are other such bundles active in OH, but it will take a lot of research to figure out what each one does and whether you need it. And it’s not going to save a lot.
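In the Karaf console that could look like the session below. The bundle ID varies per install, so look it up first; `<ID>` is a placeholder you need to fill in yourself:

```
openhab> bundle:list | grep -i lsp    # find the bundle and note its ID
openhab> bundle:stop <ID>             # stop that bundle
```

Verify afterwards with `bundle:list` that only the bundle you intended changed state.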
You can install the Doctor Binding, which will monitor and report on your heap usage. If you are not using much of the allocated heap, you can reduce the amount of memory Java acquires by adding the -Xms<size> and -Xmx<size> options to the EXTRA_JAVA_OPTS variable in /etc/default/openhab. -Xms causes OH to acquire less RAM initially when it starts up, and -Xmx makes it throw out-of-memory exceptions when OH needs more than allowed.
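For instance, the entry in /etc/default/openhab might look like this. The sizes are arbitrary examples, not recommendations; measure your actual heap use (e.g. with the Doctor Binding) before picking values, because a too-small -Xmx will crash OH with OutOfMemoryErrors:

```
# /etc/default/openhab -- example heap bounds, values illustrative only
EXTRA_JAVA_OPTS="-Xms64m -Xmx384m"
```

Restart the openhab service after editing for the options to take effect.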
You definitely have things you can do to really optimize your installs, but the cost is pretty high. It’ll be a lot of work and trial and error. In the end, you’ll definitely end up with a system that is more brittle, harder to maintain, and harder to support.
Thanks for all your ideas and expertise here. I will certainly try the Doctor Binding; I was not aware of it. For learning purposes, it is very interesting. And I’ll keep the old Pis running, and for new projects leave the memory balancing to openHABian (which is actually doing a very stable job, imo!).