openhabian-config resets /etc/default/openhab

Whenever I update openHAB using openhabian-config, it overwrites my existing /etc/default/openhab.

This is quite an issue for me, because with the standard settings openHAB does not have enough memory and the GC consumes all available resources, making OH very unresponsive.
I am now used to immediately restoring the “correct” configuration and then restarting openHAB, but it is so unresponsive that it can take several minutes before it eventually shuts down.

Is there a way to preserve at least the memory settings when installing openhab using openhabian-config?

I would also be interested if there is a way, because I face the exact same issue.

I think linux.parameters could do what you want:
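For reference, the override might look like this (a sketch using the values discussed later in this thread; per the test below, values in /etc/openhab/linux.parameters take precedence over /etc/default/openhab):

```shell
# /etc/openhab/linux.parameters -- values here win over /etc/default/openhab
# and survive openhabian-config upgrades
EXTRA_JAVA_OPTS="-Xms1024m -Xmx4096m"
```

After editing the file, restart openHAB (e.g. `sudo systemctl restart openhab`) for the new options to take effect.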


@sihui - Thank you for pointing me to linux.parameters! This is exactly what I needed.

@Benjy - Many thanks for implementing this in both OH4 and OH5!

Test Results - CONFIRMED WORKING :white_check_mark:

I just tested the override mechanism and can confirm it works perfectly:

Test Setup:

  1. Set /etc/default/openhab to default values: -Xmx768m
  2. Created /etc/openhab/linux.parameters with custom values: -Xmx4096m
  3. Restarted openHAB

Results:

# /etc/default/openhab contains:
EXTRA_JAVA_OPTS="-Xms192m -Xmx768m -XX:-TieredCompilation -XX:TieredStopAtLevel=1"

# /etc/openhab/linux.parameters contains:
EXTRA_JAVA_OPTS="-Xms1024m -Xmx4096m -XX:+UseG1GC ..."

# Running process uses:
$ ps aux | grep java | grep -oP -- '-Xmx\d+m'
-Xmx4096m

:white_check_mark: The override works! The value from linux.parameters takes precedence over /etc/default/openhab.


Great, that was exactly what I was looking for!

Thanks!

I don’t think so.
The -Xmx parameter, if exceeded, will make Java crash, and the openHAB service will restart it, which takes quite some time and may feel sluggish.
But actually it’s not GC, it’s a full restart.

768 MB is A LOT that you should never exceed unless you have some memory leak. But if so, that’s what you should fix. Sure, raising the limit will make it crash later, but it’s not the right remedy.
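One way to tell a genuine heap-exhaustion crash apart from mere GC sluggishness is to look for OutOfMemoryError entries in the logs. A minimal sketch (`count_oom` is a made-up helper name, and the log path assumes a standard openHABian install):

```shell
# Count OOM events in a log stream; a crash from exceeding -Xmx logs
# java.lang.OutOfMemoryError shortly before the service restarts.
count_oom() { grep -c 'java.lang.OutOfMemoryError'; }

# On a real box (path assumes a standard openHABian install):
#   count_oom < /var/log/openhab/openhab.log

# Demonstration with sample log lines:
printf 'long GC pause\njava.lang.OutOfMemoryError: Java heap space\n' | count_oom
```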

What hardware are you on? When did you install openHABian and which image did you use?

Hi @mstormi Thanks for the input! Let me provide some context about my setup:

Hardware:

  • Raspberry Pi 5 (8 GB RAM)
  • openHABian installation

Installation size:

  • ~950 threads
  • 50,000+ classes loaded
  • 8 IP cameras (Frigate NVR binding)
  • 20+ active bindings

Memory analysis:

With 768 MB heap:

Heap: 748 MB / 768 MB (97% usage)
G1 Old Gen: 453 seconds / 825 collections

After increasing to 4 GB:

Heap: ~1 GB / 4 GB (25% usage)
G1 Old Gen: 0 collections
System stable for several days at ~1 GB

If this were a memory leak, heap usage would continue growing. Instead, it consistently stays around 1 GB, which suggests this is the actual memory requirement for my setup.
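As a quick sanity check, the heap-pressure figures above can be reproduced with simple arithmetic (a sketch; `heap_pct` is a hypothetical helper):

```shell
# Heap usage as a percentage of the configured maximum.
heap_pct() { awk -v used="$1" -v max="$2" 'BEGIN { printf "%d\n", used * 100 / max }'; }

heap_pct 748 768    # with the 768 MB heap: 97 (constant pressure)
heap_pct 1024 4096  # after raising to 4 GB: 25 (comfortable headroom)
```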

Rationale for 4 GB:

  • G1GC works best with headroom (typically 3-4x actual usage)
  • String deduplication needs space to be effective
  • With 8 GB total RAM, using 4 GB for openHAB leaves 4 GB for the OS

I understand that 768 MB works fine for smaller installations, but with this many components, more heap seems necessary for stable operation.

I get what you mean. openHAB should never use that much memory!

So, let me talk about my configuration.

  • I use the latest openHAB on a Raspberry Pi 4 with 4 GB RAM.
  • Most of my integrations run through MQTT, and all the other services are on the same box (Zigbee2MQTT, Z-Wave JS, …), but use relatively little memory.
  • There are about 100 things, 500 items, and 130 rules; the vast majority of things and items are defined in .things/.items files, while all rules are in automation/ruby.

I started to notice the memory issue a few months ago, when I migrated to 5.0.
However, at about the same time I also migrated all my rules to Ruby, and that seems to be the major problem.

Initially, I noticed the problem when updating rules “a bit too frequently”, as if the compilation was leaking some memory. Essentially, after the compilation the GC threads would start using 100% CPU, and never stop (or at least, not stop in a reasonable amount of time).
Once I increased the maximum memory for openHAB, memory usage was stable: it might increase temporarily, but would then drop back to the usual (very high!) level. The GC threads still run after compilation, but only for a few seconds (which is still a lot!) and then terminate.

My impression so far was that Ruby uses a lot of memory, but the experience was so much better that I was happy to trade that memory for it.

But maybe there’s some other issue?

@rliffredo Yes, 5.0 and with it the move to 64-bit Java increased memory usage by roughly 50%. But what you describe should not happen.
Ruby is not in widespread use, so I cannot comment on its memory requirements. It could be the reason, or not.

@Anpro
Sure, if you raise or remove -Xmx, you’ll end up with OH using as much as it wants, 1 GB in your case, but that doesn’t mean you have a real NEED for 1+ GB of heap. Never ever.
Did your OH Java also crash?

I’m adding a new veryhighmem config to openHABian that omits -Xms and -Xmx on boxes with more than 3 GB. If I’m not mistaken, the Java default is to use 1/4 of physical memory, so you should be getting 1 or 2 GB on 4/8 GB boxes once the code is active.
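That 1/4-of-RAM heuristic can be sketched as follows (the fraction is an assumption about the JVM’s ergonomic default; verify on your own box with `java -XX:+PrintFlagsFinal -version | grep MaxHeapSize`):

```shell
# Estimate the JVM's default max heap (typically 1/4 of physical RAM)
# from a MemTotal value in kB, as reported in /proc/meminfo.
default_heap_mb() { awk -v kb="$1" 'BEGIN { print int(kb / 4 / 1024) }'; }

default_heap_mb 4194304   # 4 GB box: 1024 (1 GB)
default_heap_mb 8388608   # 8 GB box: 2048 (2 GB)
```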

@mstormi Thanks for adding the veryhighmem config – that will definitely help users with larger installations :+1:
And just to be clear up front: I fully understand (and agree with) your intention to improve the defaults so users don’t have to manually tweak -Xms/-Xmx at all in the future. My post is not meant to contradict that approach, but rather to explain why I adjusted the heap settings in my specific setup.

Context for my setup

This is a fairly large installation:

  • 286 Things
  • 2,182 Items
  • 1,278 Rules (mix of DSL and JS Scripting)
  • A number of bindings, including camera-related ones

Hardware-wise this runs on a Raspberry Pi 5 with 8 GB RAM.
Besides openHAB, the only other process is openV (legacy Viessmann heating control), which is negligible in terms of resource usage.

Background / why I started looking at memory at all

The system had always been on the slower side during startup, but it was stable for months (basically since January).
In November, after extending my camera setup and related automation logic, the system began to behave differently.

From that point on:

  • openHAB started crashing regularly, sometimes once per day
  • no meaningful logs were written
  • first suspicion was SD card wear, so I replaced the card → no improvement

At that point I started looking more closely at overall memory pressure.

Situation with the default / small heap

With a ~768 MB heap the system appeared to be under constant pressure:

  • Heap usage around ~97%
  • frequent G1 Old Gen activity
  • increasing instability

Given the number of Things, Items, Rules, and the amount of event-driven processing, this didn’t look completely unexpected to me.

What I changed (temporarily / explicitly)

I adjusted EXTRA_JAVA_OPTS mainly to give the JVM more headroom while observing the system behavior:

-Xms1024m
-Xmx4096m
-XX:+UseG1GC
-XX:MaxGCPauseMillis=500
-XX:G1ReservePercent=20
-XX:InitiatingHeapOccupancyPercent=35
-XX:+ParallelRefProcEnabled
-XX:+UseStringDeduplication
-XX:MaxMetaspaceSize=512m
-XX:+ExitOnOutOfMemoryError
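Combined into a single line, the flags above would go into one EXTRA_JAVA_OPTS assignment (a sketch; placing it in /etc/openhab/linux.parameters keeps it safe from openhabian-config upgrades, as discussed earlier in this thread):

```shell
# All flags from the list above on one EXTRA_JAVA_OPTS line
EXTRA_JAVA_OPTS="-Xms1024m -Xmx4096m -XX:+UseG1GC -XX:MaxGCPauseMillis=500 \
  -XX:G1ReservePercent=20 -XX:InitiatingHeapOccupancyPercent=35 \
  -XX:+ParallelRefProcEnabled -XX:+UseStringDeduplication \
  -XX:MaxMetaspaceSize=512m -XX:+ExitOnOutOfMemoryError"
```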

Result so far

Since that change:

  • :white_check_mark: No crashes
  • :white_check_mark: Faster startup
  • :white_check_mark: More responsive overall behavior
  • :white_check_mark: Zero Old Gen collections so far

Current state after a recent restart:

Memory: heap: 801.0 MiB / 4.0 GiB
G1 Old Generation: 0 col. / 0.000 s
Threads: 798
Classes: ~49k loaded

So the JVM is clearly not using the full 4 GB, but the additional headroom seems to have eliminated the pathological behavior I was seeing before and made the system stable again.

Rules / processing context

To give a better idea of the workload involved: part of this setup includes fairly complex JS rules that process frequent camera events, fetch images, handle caching, notifications, and multiple delivery paths (Cloud + Telegram).
I’ve shared one of these rules in the forum here, in case it’s useful for context:

:backhand_index_pointing_right: New comprehensive Frigate binding - #126 by Anpro

It’s very possible that the observed behavior is caused by the combination of rules, event frequency, and bindings rather than any single component on its own.

Regarding defaults / future improvements

I’m not claiming that everyone needs 4 GB, nor that 4 GB should be the new default.
If openHAB can automatically scale heap usage more intelligently in the future (avoiding manual Xms/Xmx tuning altogether), that would be ideal.

My main point was simply:

  • with this installation size
  • on this hardware
  • with this kind of event-driven automation

the small default heap became a real stability issue, and additional heap resolved it immediately, allowing the system to run reliably again.

If you think testing something like 2 GB (¼ of RAM) would be a more representative comparison point, I’m happy to try that as well and report back.

Thanks again for the work you’re putting into improving this area – it’s definitely appreciated.

Just to add a very short follow-up: increasing the heap wasn’t the only change I made.

In parallel, I also adjusted persistence and network settings to better handle bursts from cameras and many parallel connections.

MapDB persistence:

commitinterval=120
commitsamestate=false
maxdeferredevents=200000

Network (Linux sysctl):

net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.ipv4.tcp_rmem=4096 87380 16777216
net.ipv4.tcp_wmem=4096 65536 16777216
net.netfilter.nf_conntrack_max=262144
net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_fin_timeout=20
net.ipv4.ip_local_port_range=1024 65535
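To make the sysctl values above persist across reboots, they would typically go into a drop-in file (a sketch; the filename is arbitrary):

```shell
# /etc/sysctl.d/99-openhab-network.conf (hypothetical filename)
# -- the settings listed above go here, one per line, e.g.:
net.core.rmem_max=16777216
net.core.wmem_max=16777216

# Apply without a reboot:
#   sudo sysctl --system
```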

Since applying these changes together, the system has been stable again and feels much more responsive.

Testing with the Java defaults, i.e. without -Xmx/-Xms, would make sense.
Make sure to find out which values your Java actually applies then, and let us know.

It ain’t helpful to throw everything into the mix like you did; rather the opposite. In order to know which memory settings are most beneficial, you should revert those other modifications and test with the default/increased RAM limits only.
We need to know which settings make the most relevant difference.