This is why I raised the value. I have a number of rules which do calculations on power usage / generation. Since the usage fluctuates a lot (it shifts by 1-2 W between readings, which are less than 5 seconds apart), these rules fire often.
I measure power usage, solar generation and a number of circuits in the house. Based on this, these rules were always running and I figured I would increase the cache size.
Was this right or wrong?
Considering I now have an RPi4 with 4 GB of RAM, is there a better value? Why not just peg them both (Xms and Xmx) at 1 GB, assuming there is nothing else running on the RPi?
Based on what @matt1 says, I am just about right? (But I am interested in hearing @mstormi's view too, as it also makes sense.)
It’s quite simple:
Start with the defaults and increase gradually if you encounter lags in rules execution. If you do not, stay with your settings.
For openHABian, the Xms/Xmx defaults are 250/350 MB, which is fine for a 1 GB ARM system; on x86 or ARM boxes with more memory, those defaults probably aren't harmful either.
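For anyone who wants to experiment with these values: on an apt-based openHAB install they are typically set via `EXTRA_JAVA_OPTS`. The file path below is an assumption that depends on your openHAB version and install method (`/etc/default/openhab` for OH3+, `/etc/default/openhab2` for OH2), so treat this as a sketch:

```shell
# /etc/default/openhab  -- exact path varies with openHAB version/install.
# Sketch using the openHABian-style defaults mentioned above; restart the
# openhab service afterwards for the change to take effect.
EXTRA_JAVA_OPTS="-Xms250m -Xmx350m"
```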
If you run on 64-bit x86 you need more, as code and data are usually up to 4 times larger. But I don't use x86, so I cannot and won't recommend anything there.
Increasing the heap beyond what's needed results in more paging, which is harmful to SD cards. If you don't run on an SD card you don't need to care, but even an RPi4 still uses SD, so I would.
And it's very unlikely you'd see a benefit. As you can imagine, I've got a large number of rules, and they all execute without lagging, so they fit into the Xms=250 MB I use.
Because there are no single optimal settings: they depend on many more things, such as hardware, the programs you run beyond OH, and more. It's not a beginner topic.
Beginners should use openHABian, so for an RPi2/3 they get those values anyway.
Type free -h in the terminal and check how much swap is used. If you are not using swap, then your max heap space is fine, provided it also passes the test in point 2. Increasing the heap too much takes RAM away from non-Java apps and also from the filesystem cache. Cache is great, as it may mean an SD card read is not needed because the file is already in RAM. Setting the heap too big or too small also impacts the GC.
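As a side note, if you want the swap figure without eyeballing free's column layout, you can read it straight from /proc/meminfo (a Linux-only sketch; the field names are standard):

```shell
#!/bin/sh
# Compute used swap in kB from /proc/meminfo (present on any Linux system).
swap_total=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
swap_free=$(awk '/^SwapFree:/ {print $2}' /proc/meminfo)
echo "swap used: $((swap_total - swap_free)) kB"
```

If this prints 0 kB, nothing has been pushed to swap and your settings pass this check.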
Look at what your current heap size grows and shrinks to; you only need a rough min and max value. If the min value is <15% of your Xmx value, you may want to consider dropping Xmx lower. This is for GC reasons, and your system can probably make better use of that RAM as cache.
Make both Xmx and Xms the same value so the system does not keep resizing the heap; you will see this in the 'committed heap size' value if it keeps changing. Keeping them the same keeps things simple, but you probably won't notice either way.
If you press the up arrow on your keyboard, it will re-show the last command you used, so you can watch the values this way. There are also more advanced tools for watching the contents of the heap in real time, which I often use to look for memory leaks. Everything that runs in Java uses the heap (bindings, rules, the openHAB backend and more), and everything outside of Java uses the RAM that is left over.
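For example, instead of pressing the up arrow repeatedly, a small loop can sample the JVM's memory over time. This is only a sketch: the pgrep pattern is an assumption to adjust for your install, and VmRSS is the whole process's resident memory (heap plus stacks, JIT code and native buffers), so it is an upper bound on the heap rather than the heap itself:

```shell
#!/bin/sh
# Sample the JVM's resident memory a few times (increase count/interval
# to taste). The 'openhab' pgrep pattern is an assumption; adjust it.
pid=$(pgrep -f openhab | head -n 1)
pid=${pid:-$$}   # fall back to this shell's PID so the sketch still runs
for i in 1 2 3; do
    # VmRSS is the resident set size of the process, in kB
    awk -v t="$(date +%T)" '/^VmRSS:/ {print t, $2, $3}' "/proc/$pid/status"
    sleep 1
done
```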
I totally agree, and when you have 4 GB of RAM it is FAR easier to find a value that works compared to the juggling act on a 1 GB limited setup.
Sounds like that will be fine, but it would be interesting to know what range your heap size shrinks and grows between, rather than the static snapshot in your pic. There is no perfect number; you just need to make sure you don't run out of heap space and also don't set it excessively high.
I read this thread as it might be the reason for OH 4.0 slowing down.
What I try to understand:
If Xms is the pre-allocated RAM, hence the minimum, why do we not e.g. set it to 1m and Xmx, as the max value, to e.g. 50% of RAM, like 4 GB on my RPi4?
Is there a chance OH recognizes when RAM is full? If so, a log error would be extremely helpful from my point of view.
Does it make sense to maybe have some standard configs available (maybe even selectable in openHABian), as RPi2, 3 and 4 with different RAM sizes are probably widely used? One could also have a config for running only OH (plus some basics like mosquitto etc.), while a parallel config could just use e.g. max 50%.
Besides that: how do I find out if this is the root cause, and what Xms and Xmx values are best for my setup?
Thanks all for the feedback so far.
I extended Xms to 512 MB and Xmx to 3 GB. Actually, when trying 4 GB for Xmx, openHAB does not start any more!?
What I observed with the old as well as the new values: if I change a rule, once in a while a System started trigger is also fired, which seems wrong, because reloading the rule file (by saving) does not restart the system.
Anything is possible if someone is willing to put in the work. But I don't think a percentage of available memory is going to work across the board. On an RPi 3, 2/3 of 1 GB is probably not enough. On an RPi 4 with 8 GB of RAM, 2/3 is probably way too much.
That would be a false assumption. I suspect it’s a tiny minority of users who only run OH. If that’s the audience, it would be guidance for almost no one.
Well, the operating system and everything else on the machine needs some RAM to run in too. It won’t let you claim it all for OH alone.
This is a different discussion and it’s an old behavior from OH 2 that was restored in OH 4. In order to avoid having system started rules fail to run because of timing during startup, whenever a rule is loaded and the start level that triggers that rule has passed, the rule will trigger.
That means that if you change the rule, it will trigger upon save, because that start level has already been reached.
I had almost 7 GB of RAM available at that time, which does not make sense to me, especially as 4 GB (or the 3 GB I use now) is the maximum, not the pre-allocated value. Maybe this points towards the real issue somehow?
If I understand correctly, it will only fire if the rule file I change something in also has the System started trigger, but not the rules in other files, correct?
Have I overlooked a changelog or documentation entry mentioning this?