Today I upgraded to OH 2.5.3, as the previous 2.5.2 wasn’t stable due to high memory consumption.
I decided to open this thread because, even though growing memory consumption was observed before in different releases, it was either solved or not related to the current software. I think a new thread will give a clean view of it.
I’m running OH on an RPi 3, and apart from OH there isn’t much else deployed there.
It was rock solid until 2.4.x, and then after the 2.5.2 upgrade it started causing issues (I think even 2.5.1 was OK, but as I’m not 100% sure I don’t want to mess around). Currently, even after restarts, memory for some reason grows to the following levels:
pi@dom-pi:/usr/share/openhab2/misc $ top
top - 10:36:23 up 28 min, 1 user, load average: 0.18, 0.53, 1.34
Tasks: 116 total, 1 running, 115 sleeping, 0 stopped, 0 zombie
%Cpu(s): 1.9 us, 1.1 sy, 0.0 ni, 96.8 id, 0.2 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 926.1 total, 35.0 free, 421.3 used, 469.7 buff/cache
MiB Swap: 100.0 total, 98.7 free, 1.2 used. 437.9 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
585 openhab 20 0 586712 309372 15216 S 5.9 32.6 6:33.27 java
Free memory then floats between 35 and 50 MB, leaving the system barely responsive. Sometimes there is a usage peak during which the CLI freezes even on simple commands.
Is there anything I can improve in the configuration here, or is this rather a software issue in the latest release?
When OH stops the free memory is about 450MB, the rest is distributed across used and buff/cache.
Then during OH startup I observed in ‘top’ the memory changing in the following way:
buff/cache quickly grows to 500–600 MB while used increases only moderately (from ~100 MB to ~270 MB); then buff/cache drops to about 480 MB and used grows to 405 MB, leaving free at around 35–37 MB. In that case startup takes very long (10 minutes or so).
It doesn’t change in the mid-term; maybe after hours it would differ, but I doubt it.
I don’t know why buff/cache grows that much.
In my case it is related to the version upgrade, though I don’t know what exactly causes it.
I’d obviously like to understand which component is causing it so I can remove or disable it, as in the long term (about a week) it causes OH to crash due to low memory.
How can I check consumption per add-on?
I don’t see anything unusual about your memory usage.
It’s actually even a pretty small instance. Larger setups consume more (mine uses ~750 MB virtual with ~550 MB resident, also on a Pi 3).
Don’t stare at the ‘free’ value; it is effectively meaningless, as the OS uses all but a defined minimum of the available capacity for caching, but will release it immediately whenever an application requests memory.
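That caching behaviour can be checked directly; a minimal sketch using a standard Linux interface (nothing openHAB-specific is assumed here):

```shell
# The kernel's own estimate of how much memory applications can still
# claim without swapping (MemAvailable, kernels 3.14+) -- a far better
# health signal than the "free" column in top:
awk '/^MemAvailable/ {printf "%.1f MiB available\n", $2/1024}' /proc/meminfo
```

If that value stays comfortably high while the system feels slow, memory is probably not the bottleneck.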
As you say you encounter freezes, first find out whether they’re due to memory. Run iostat 2 or the like to see if the system starts paging when it gets slow. The freezes could well have other causes, such as SD-card corruption.
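If iostat isn’t installed, the paging activity can also be sampled with vmstat (shipped with procps, so usually already present); this is just one option among several:

```shell
# Five samples, 2 s apart; nonzero si/so (swap-in / swap-out KiB/s)
# while the box feels sluggish means the system is paging.
vmstat 2 5
```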
Get the openHAB java process ID (using ps). Then run the following (replacing process-id with that PID). It will report memory usage every 5 seconds. If something in openHAB is leaking memory, you’ll see evidence of it here.
pidstat -r -p process-id 5
Note: if pidstat is not found, you may need to install the sysstat package.
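Putting the two steps together as a sketch (the `openhab` user name is taken from the top output earlier in the thread; adjust if yours differs):

```shell
# Find the java process owned by the openhab user, then sample its
# memory use every 5 seconds with pidstat (from sysstat).
PID=$(ps -eo pid,user,comm | awk '$2=="openhab" && $3=="java" {print $1; exit}')
if [ -n "$PID" ]; then
  pidstat -r -p "$PID" 5
else
  echo "no java process owned by openhab found"
fi
```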
Thanks for your efforts above. Sorry for not mentioning the OS, I completely missed it. I’m on Raspbian Buster.
My output (IMHO 30% used by OH now is a lot more than it used to be, but as said I don’t have stats from before the upgrade and didn’t check while there was no noticeable problem):
It looks different than before (I understood your point about not watching ‘free’ religiously), although it took a decent time to get to this point. Responsiveness is much better now; maybe it was some temporary, incidental issue just after the upgrade. Waiting for your comments on whether I should worry about 30% memory usage by java, and maybe this deserves closing…
If it stays close to 30.4 after running pidstat for a couple of hours, you can likely conclude that openHAB is not leaking memory.
The one thing I know of that causes OH to leak memory is an invalid binding in addons.cfg or addons.config. You would see evidence of that in pidstat as a blip in page faults (minflt/s), along with a slight increase in RSS and %MEM, every minute.
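One way to make that minute-by-minute drift easy to spot is to log pidstat and pull out just the RSS figures afterwards. A sketch, with assumptions: the log path is arbitrary, and RSS sits in column 7 of the sysstat 12.x layout, so check the header line of your own output first:

```shell
# Collect, e.g. one sample per minute for a few hours:
#   pidstat -r -p <openhab-java-pid> 60 >> /tmp/openhab-mem.log
# Then print timestamp and RSS per sample; a leak shows up as RSS
# creeping steadily upward across the samples.
if [ -f /tmp/openhab-mem.log ]; then
  awk '/java/ {print $1, $7}' /tmp/openhab-mem.log
fi
```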
Thanks @mhilbush, I will keep observing; for the moment it doesn’t leak. Apparently there was a spike in usage after the upgrade (I don’t know the exact mechanisms of OH, but it looked like the new software building some cache or updating other data), which flattened out after some time. Thanks everyone involved for your hints!
For me the Pi 3B+ was not enough in terms of hardware, as I ran other (heavier) tasks on it alongside openHAB.
However, I also think your memory usage is not that bad; maybe other things are causing the freezing (which is not strongly related to the amount of memory).
Also as others said:
I think all operating systems will use (or will appear to use) most of the memory to run everything as smoothly as they can. You can see the same behavior on a client machine: my computer has 16 GB of RAM, and right after booting it will use 8–9 GB, while Windows can also run with 2–4 GB…
openHAB’s memory usage will always keep climbing toward the predefined maximum. This is because in languages like Java, unused values and objects (hope that makes sense to you) are reclaimed by the garbage collector rather than instantly, often not until the heap is nearly full…
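That ceiling can be lowered on a small box like a Pi. A hedged example: the EXTRA_JAVA_OPTS hook and the /etc/default/openhab2 path are the ones used by the apt/deb install (adjust for other install methods), and the heap sizes are purely illustrative, not a recommendation:

```shell
# /etc/default/openhab2 -- cap the JVM heap so "climbing until max"
# tops out well below the Pi's 1 GB (example values only):
EXTRA_JAVA_OPTS="-Xms192m -Xmx384m"
```

A smaller -Xmx trades more frequent garbage collection for a lower resident footprint, so test before settling on values.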
Thanks guys for your efforts
Edit, after 2 days: memory seems to be quite stable, so there wasn’t a real problem. The issue apparently appeared right after the version change, when OH slowed down significantly, but it then recovered.