@mstormi Thanks for adding the veryhighmem config – that will definitely help users with larger installations.
And just to be clear up front: I fully understand (and agree with) your intention to improve the defaults so users don’t have to manually tweak -Xms/-Xmx at all in the future. My post is not meant to contradict that approach, but rather to explain why I adjusted the heap settings in my specific setup.
**Context for my setup**
This is a fairly large installation:
- 286 Things
- 2,182 Items
- 1,278 Rules (mix of DSL and JS Scripting)
- A number of bindings, including camera-related ones
Hardware-wise this runs on a Raspberry Pi 5 with 8 GB RAM.
Besides openHAB, the only other process is openV (legacy Viessmann heating control), which is negligible in terms of resource usage.
**Background / why I started looking at memory at all**
The system had always been on the slower side during startup, but it was stable for months (basically since January).
In November, after extending my camera setup and related automation logic, the system began to behave differently.
From that point on:
- openHAB started crashing regularly, sometimes once per day
- no meaningful logs were written
- first suspicion was SD card wear, so I replaced the card → no improvement
At that point I started looking more closely at overall memory pressure.
**Situation with the default / small heap**
With a ~768 MB heap the system appeared to be under constant pressure:
- Heap usage around ~97%
- frequent G1 Old Gen activity
- increasing instability
Given the number of Things, Items, Rules, and the amount of event-driven processing, this didn’t look completely unexpected to me.
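For anyone who wants to reproduce this kind of check: old-generation occupancy can be read from `jstat -gcutil <pid>`. A minimal sketch that pulls the Old-space column (`O`) out of one such line – the sample line below is illustrative, not a capture from my system:

```shell
# One illustrative output line of `jstat -gcutil <pid>`; columns are
# S0 S1 E O M CCS YGC YGCT FGC FGCT GCT, where O is old-gen occupancy in %
sample='  0.00  97.02  63.10  96.80  92.41  88.20   1042    9.310    12    4.200   13.510'

# Column 4 (O) is the old-generation occupancy in percent
old_pct=$(echo "$sample" | awk '{print $4}')
echo "Old generation occupancy: ${old_pct}%"
```

Watching that number sit in the high 90s for long stretches was what convinced me the heap itself was the bottleneck rather than any single binding.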
**What I changed (temporarily / explicitly)**
I adjusted EXTRA_JAVA_OPTS mainly to give the JVM more headroom while observing the system behavior:
```
-Xms1024m
-Xmx4096m
-XX:+UseG1GC
-XX:MaxGCPauseMillis=500
-XX:G1ReservePercent=20
-XX:InitiatingHeapOccupancyPercent=35
-XX:+ParallelRefProcEnabled
-XX:+UseStringDeduplication
-XX:MaxMetaspaceSize=512m
-XX:+ExitOnOutOfMemoryError
```
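For reference, on a deb/openHABian install these options go into `EXTRA_JAVA_OPTS`; the path below is what my install uses and may differ for other installation methods. A sketch of the resulting config:

```shell
# /etc/default/openhab  (path assumed for apt/openHABian installs)
EXTRA_JAVA_OPTS="-Xms1024m -Xmx4096m \
  -XX:+UseG1GC -XX:MaxGCPauseMillis=500 \
  -XX:G1ReservePercent=20 -XX:InitiatingHeapOccupancyPercent=35 \
  -XX:+ParallelRefProcEnabled -XX:+UseStringDeduplication \
  -XX:MaxMetaspaceSize=512m -XX:+ExitOnOutOfMemoryError"
```

A restart of the openhab service is needed for the options to take effect.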
**Result so far**
Since that change:
- No crashes
- Faster startup
- More responsive overall behavior
- Zero Old Gen collections so far
Current state after a recent restart:
- Memory: heap: 801.0 MiB / 4.0 GiB
- G1 Old Generation: 0 col. / 0.000 s
- Threads: 798
- Classes: ~49k loaded
So the JVM is clearly not using the full 4 GB, but the additional headroom seems to have eliminated the pathological behavior I was seeing before and made the system stable again.
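A quick back-of-the-envelope check with the figures above (801 MiB used of a 4096 MiB ceiling) puts current heap occupancy around 20%, versus ~97% before the change:

```shell
used_mib=801    # heap used, from the stats above
max_mib=4096    # the -Xmx4096m ceiling
pct=$(awk -v u="$used_mib" -v m="$max_mib" 'BEGIN { printf "%.1f", 100 * u / m }')
echo "Heap occupancy: ${pct}%"    # ~19.6% of the configured maximum
```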
**Rules / processing context**
To give a better idea of the workload involved: part of this setup includes fairly complex JS rules that process frequent camera events, fetch images, and handle caching, notifications, and multiple delivery paths (Cloud + Telegram).
I’ve shared one of these rules in the forum here, in case it’s useful for context:
New comprehensive Frigate binding - #126 by Anpro
It’s very possible that the observed behavior is caused by the combination of rules, event frequency, and bindings rather than any single component on its own.
**Regarding defaults / future improvements**
I’m not claiming that everyone needs 4 GB, nor that 4 GB should be the new default.
If openHAB can automatically scale heap usage more intelligently in the future (avoiding manual -Xms/-Xmx tuning altogether), that would be ideal.
My main point was simply:
- with this installation size
- on this hardware
- with this kind of event-driven automation
the small default heap became a real stability issue, and additional heap resolved it immediately, allowing the system to run reliably again.
If you think testing something like 2 GB (¼ of RAM) would be a more representative comparison point, I’m happy to try that as well and report back.
Thanks again for the work you’re putting into improving this area – it’s definitely appreciated.