If you are a Java programmer, hook up a profiler and see which part of the system has the memory leak.
If not, you can remove your add-ons one at a time until the OOM error goes away; then you will know which add-on is causing the problem. If none of them turn out to be the cause, you know the bug is in the core.
I’ve seen a couple of other OOM errors posted to this forum over the past couple of weeks, so there may be a bug people are running into.
Another possibility: use the JVM -XX:+HeapDumpOnOutOfMemoryError command-line option (you must edit the openHAB startup script). More information is here. The article also describes several other ways to manually request a heap dump of a running Java process. Depending on your operating system and scripting skills, you could externally monitor the process’s memory usage and trigger a heap dump based on some criterion.
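As a sketch (the dump directory and the pgrep pattern are assumptions; adjust them for your installation), the flag and a manual dump request look like this:

```shell
# JVM flags to add to the openHAB startup script; the dump directory is an
# assumption and must be writable by the user running openHAB:
#   -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/openhab2/heapdumps

# Manually request a heap dump of a running JVM (jcmd ships with the JDK):
jcmd "$(pgrep -f openhab | head -n 1)" GC.heap_dump /tmp/openhab-heap.hprof
```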
After the OH2 JSR223 feature is merged and released, you’ll be able to write a script that monitors the JVM memory usage and triggers a heap dump using JMX MBeans. I’ve used this technique to diagnose a Sonos-related OOM in OH1. I then used the Eclipse Memory Analyzer for the heap analysis.
Also watch for high thread counts. They can cause an OOM from native code (due to thread stack memory usage) even if the JVM doesn’t appear to be using excessive heap. The Sonos binding bug, for example, turned out to be runaway thread creation.
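You can watch the native thread count from outside the JVM with standard tools. A minimal watchdog sketch (the pgrep pattern is an assumption about how your openHAB process is named):

```shell
#!/bin/sh
# Log the openHAB JVM's native thread count once a minute.
PID=$(pgrep -f openhab | head -n 1)   # assumption: matches the openHAB java process
while kill -0 "$PID" 2>/dev/null; do
    # NLWP = "number of light-weight processes", i.e. native threads
    printf '%s threads=%s\n' "$(date '+%F %T')" "$(ps -o nlwp= -p "$PID" | tr -d ' ')"
    sleep 60
done
```

If the count climbs steadily instead of leveling off, some binding is leaking threads.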
Is there a /etc/default/openhab2 file on your system? If so, you can add extra JVM arguments in that file. It is processed before the openHAB process is started. There’s some related information here.
That’s where I put my JSR223-related arguments. I’m hoping that the apt-get installation doesn’t overwrite it. (If anybody knows of a better place to put custom JVM arguments that won’t be overwritten by apt-get, let us know!)
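For example, a line like the following in /etc/default/openhab2 (my understanding is that the packaged start script reads EXTRA_JAVA_OPTS from this file; the flag values here are only an illustration):

```shell
# /etc/default/openhab2 — extra JVM arguments picked up by the start script.
# Append to, rather than replace, anything you already have here.
EXTRA_JAVA_OPTS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/openhab2"
```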
@kohlsalem You can also let it run for some time and then create a heap dump from the openHAB console with the dev:dump-create command. This may take several minutes on a Raspberry Pi.
This will create a ZIP file (e.g. 2017-03-13_221953.zip) with diagnostic data in your userdata directory. The ZIP also contains the heap dump as heapdump.txt. The extension of this file should actually be .hprof; after renaming it, you can open it in any program that reads .hprof files. I also usually analyze the heap dump with the Eclipse Memory Analyzer.
Might be unrelated, but:
I had my system going bananas here the other day, terminating with an out of memory error.
OH2 restarts and even a PC reboot still had OH2 hogging 400% CPU.
It turned out to be a Chrome browser tab left open on Basic UI during a restart/reboot.
Once I closed the browser tab, things settled.
The connection was from my workplace through nginx, but I think I have seen it once on a direct LAN connection as well.
It doesn’t solve memory leaks. It just creates a .hprof file containing a heap dump whenever the JVM runs out of memory. That way you can analyze the memory contents in a program like Eclipse MAT.