I assume that should be sooner rather than later.
But that would be you then. There are no free developers at the moment, because of the Eclipse SmartHome reintegration and the 50 new binding pull requests of the last 4 weeks (each > 1000 lines).
I can't… I haven't got the knowledge.
I understand things are highly complicated at the moment with a lot of stuff to do. But what worries me is whether e.g. bindings have a higher priority than this kind of problem.
A binding developer is a potential new core contributor, and openHAB is in need of more core developers ^^.
Each person is donating their time on what THEY see is important for getting their own system running. For me a binding is more important, but that will change as more of my hardware is operating.
Yes, I see binding development as a way to play and learn before doing some bug fixes on the core to learn how it operates, and then moving on to adding new features to the core later, once I know the framework and how things work under the hood.
OK, I see… looks like I will have to wait for a fix for this issue. As I already said, running Basic UI in Fully Kiosk Browser is an acceptable temporary workaround for me. I sometimes miss the left nav bar and the dark theme, but the main thing is that the system stays stable, which it does since I'm avoiding permanent use of the openHAB Android app.
Is there any news about this issue and a fix? I am facing the same problem with OH 2.4.0.
Same problem here. I'm using the 2.4.0 stable version. I need to reboot every two weeks. It consumes approx. 200 KB per hour. I'm doing nothing complicated (some rules and MQTT).
I have the same problem with openHAB 2.4. After 2 or 3 weeks there is no more memory available and the system crashes on a Raspberry Pi 3. Is there a way to monitor the individual memory consumption of every used binding and the openHAB core?
I'm using the Android app with the remote myopenhab URL to get notifications from openHAB.
Unfortunately not. openHAB is a so-called monolith running in a single JVM (Java Virtual Machine).
There have already been other reports about the Android app. Apparently it calls an API very frequently that causes a memory leak in the core. The REST API part of openHAB uses a 5-year-old unmaintained library, so that is actually quite likely.
When your system has been running for a while you could make a heap dump and analyze where all the memory is consumed with Eclipse Memory Analyzer Tool (MAT). If you configure the runtime such that it will always write a heap dump on OOM errors you can always analyze them after they just occurred. I am very interested in the results.
Create a heap dump from the console with the dev:dump-create command. This may take several minutes when you run it on a Raspberry Pi.
Add -XX:+HeapDumpOnOutOfMemoryError to the EXTRA_JAVA_OPTS in the /etc/default/openhab2 file to get automatic heap dumps in case of an error.
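As a sketch of what the advice above amounts to (the heap dump path is an example; adjust it to a location with enough free disk space):

```shell
# /etc/default/openhab2 -- sketch, assuming a default openHABian install.
# -XX:+HeapDumpOnOutOfMemoryError makes the JVM write a .hprof file when it
# runs out of heap; -XX:HeapDumpPath says where the dump should land.
EXTRA_JAVA_OPTS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/openhab2/heapdump.hprof"
```

After editing the file, restart openHAB so the new JVM options take effect. The resulting .hprof file is what you would open in MAT or VisualVM.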
I use VisualVM to look for leaks while openHAB is running. You can see how fast memory is used and trigger a garbage collection at any time to see what gets left behind. The graphs in this article are all from the VisualVM tool.
Ok, so yesterday it got me too.
I’ve created a dump with dev:dump-create and a ZIP got created. Then I downloaded the Eclipse Memory Analyzer Tool (MAT) for Mac, but it asked me to install the old Java 6 (WTF?), so I gave up.
Can anybody do something with my dump to get this memory leak fixed, or should I keep it to myself for security reasons?
I have this issue too. I have never had an OH setup that was more stable than staying up for a few days - I thought this was normal!
I have a rule set up that warns me when available memory gets below a set point - I then reboot the Pi (before it dies).
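The check behind such a rule could look roughly like this (a minimal sketch, assuming a Linux host such as openHABian; the threshold and the action taken are examples, not what the poster actually uses):

```shell
#!/bin/sh
# Low-memory watchdog sketch. MemAvailable is the kernel's estimate of how
# much memory is available for new work without swapping heavily.
THRESHOLD_KB=50000   # example threshold: warn below ~50 MB

avail_kb() {
    awk '/MemAvailable/ {print $2}' /proc/meminfo
}

if [ "$(avail_kb)" -lt "$THRESHOLD_KB" ]; then
    echo "Warning: available memory below ${THRESHOLD_KB} KB" >&2
    # e.g. send a notification, or schedule a controlled reboot here
fi
```

Run it from cron (or call it from a rule via executeCommandLine) every few minutes; the point is to act before the JVM hits its heap limit.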
I find it difficult to understand how an issue like this has gone un-fixed for years (while at the same time claiming that OH is stable)!
I’m sorry that I don’t have the skills to help resolve this, but in my opinion, getting OH to be truly stable should be a priority.
A complex matter; here are some simple answers: 1) only very few people have this problem, 2) no user has provided the input needed to debug it (no one has tracked it down to the use of a particular binding or other component, which would be required for a developer to start looking into it), and (well, sorry) 3) many people like you only ever complain but don’t think of helping instead.
This is also often misunderstood. Sure, there can be real memory leaks at times, but most of the time there are default heap size limits in use which, if reached, make Java terminate (“crash”).
But regular use with a fair amount of bindings can already require more memory than this limit will allow for.
That’s not an OH problem but a user problem then - the use of inadequate system settings.
BTW: the default maximum heap on ARM/Zulu is 256M.
Anyone (and in particular anyone who has a ‘crash’ problem) should raise the limit rather than shout at OH.
Ok, I hear what you are saying. Given that I don’t have the skills to debug the issue, what can I do to help someone who does have the skills, find the solution?
I am very willing to help in any way I can. If it is a case of providing dumps, or other things to help track it down - I just need to know.
I’m not a linux person, so I simply downloaded openHABian image and went to it from there. If there are settings that need to be changed to help make the system stable, please let me know and I will do it. Or at least point me in the direction.
I don’t honestly think we are doing openHAB any favours by saying “it’s not an openHAB problem, it’s a user problem”.
Not all of us have the skills to debug this type of very complex issue, no worries! Christoph suggested some ways a user could help, but my eyes glazed over about halfway through reading his suggestions; they are beyond someone who isn’t a Linux ‘power user’ or developer. But even just reporting an issue is useful to the developers and a small contribution. As a community this is our strength.
*shrug* well it is like it is.
The major part of the problem is to properly describe when it happens and to dissect it.
There’s the openHAB core, and there’s the specific bindings, actions and eventually specific config a user runs (an infinite number of combinations), and there’s the system (OS) this all runs on.
Potentially there are memory leaks in bindings and in the core/API. It’s the user’s task to nail them down on their machine, with their config. No developer can do that for you, because every OH installation is unique, so the leak usually only shows up in your combination of bindings and config and does not show if someone else tries to reproduce it.
You say you already use openHABian, so check and if necessary raise the limit (the EXTRA_JAVA_OPTS line in /etc/default/openhab2).
That should at least increase the time it takes until the limit is reached and Java crashes.
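Concretely, raising the limit might look like this (the values are examples, not a recommendation for every setup; pick a maximum that leaves enough RAM for the OS):

```shell
# /etc/default/openhab2 -- example values only.
# -Xmx raises the maximum heap above the 256M ARM/Zulu default mentioned
# above; 512M is a common choice on a Raspberry Pi 3 with 1 GB of RAM.
# -Xms sets the initial heap so the JVM doesn't have to grow it repeatedly.
EXTRA_JAVA_OPTS="-Xms192m -Xmx512m"
```

A restart of openHAB is needed for the change to apply; if the process still hits the new limit after a while, that points back at a real leak rather than an undersized heap.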
To find the leaking entity, you need to run OH with a reduced set of bindings and functionality and gradually add more of them (no need to restart; just change the config via Paper UI or files).
Do all of this while watching memory consumption (with tools like htop, or by generating memory dumps every now and then) until you notice a substantial increase in memory use over time (without any actions of yours that might explain it). Then your last change probably includes the component with the memory leak; at least those are the candidates you should have a closer look at next. Try running with and without them, and try identifying actions (device discovery, API calls you made, …?) that result in an increased memory footprint.
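A simple way to make a slow upward trend visible between config changes is to log the resident set size of the openHAB JVM over time (a sketch; the pgrep pattern is an assumption, adjust it to match your java process):

```shell
#!/bin/sh
# Log the RSS (resident set size, in KB) of a process once a minute,
# so creeping growth after enabling a binding stands out in the log.

rss_kb() {
    # RSS in KB of the given PID, as reported by ps
    ps -o rss= -p "$1" | tr -d ' '
}

watch_pid() {
    pid=$1
    while kill -0 "$pid" 2>/dev/null; do
        printf '%s %s\n' "$(date '+%F %T')" "$(rss_kb "$pid")"
        sleep 60
    done
}

# Example invocation (pattern is a guess, verify with `pgrep -af java`):
# watch_pid "$(pgrep -f 'openhab' | head -n1)" >> /tmp/oh-rss.log
```

Note that RSS growth alone is not proof of a leak (the JVM grows the heap up to -Xmx by design); it just tells you when to take a heap dump and look closer.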
Once you identify a component, config or action that shows this problem, you can open a GitHub issue and we can (ideally) assign it to the component’s developer. But take care to describe well the exact conditions that need to be present to make the problem show up.
Thanks for your advice. I have been, and continue to record memory use regularly. I am also working my way through bindings and add-ons in an effort to figure out what, or what combinations are causing the issue.
It has also been suggested that I increase the swap (currently set at 100M), do you think that is a good plan?
Yes, having more swap ready never hurts. With just 100 MB you may already be hitting the virtual process size limit. Make it 2× the RAM size.
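On Raspbian/openHABian, swap is typically managed by dphys-swapfile, so enlarging it might look like this (a sketch; 2048 MB assumes a 1 GB Pi, following the 2× RAM suggestion above):

```shell
# Enlarge the swap file managed by dphys-swapfile (Raspbian/openHABian).
sudo sed -i 's/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=2048/' /etc/dphys-swapfile
sudo systemctl restart dphys-swapfile   # recreates and re-enables the swap file
free -m                                 # verify: the Swap row should reflect the new size
```

Keep in mind that heavy swapping on an SD card is slow and wears the card; the extra swap is there to keep the process-size limit away, not to be used constantly.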