Hi,
since switching to a containerized version of OH with Docker, I have been encountering frequent out-of-memory situations.
While this might be related to JavaScript memory/stability issues - #2 by J-N-K, the fix by florian-h05 should have eliminated that cause of the memory leak in M6.
- Platform information:
- Hardware: Raspi 4, 2GB
- OS: Debian GNU/Linux 11 (bullseye)
- Java Runtime Environment: Docker container
- openHAB version: 3.4.0.M6 (Build #3227)
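For reference, the container currently runs with the default JVM heap settings. The official openHAB Docker image supports passing JVM flags via the `EXTRA_JAVA_OPTS` environment variable, so capping the heap and capturing a dump on OOM could look roughly like this (the values below are illustrative for a 2 GB Pi, not my actual configuration):

```
# docker-compose.yml fragment (illustrative, not my exact setup):
services:
  openhab:
    image: "openhab/openhab:3.4.0"
    environment:
      EXTRA_JAVA_OPTS: >-
        -Xms192m -Xmx512m
        -XX:+HeapDumpOnOutOfMemoryError
        -XX:HeapDumpPath=/openhab/userdata/heapdump.hprof
```

The heap dump written on OOM can then be analyzed offline (e.g. with Eclipse MAT) to see which objects accumulate.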
Add-ons:
150 │ Active │ 80 │ 3.4.0.202212150739 │ openHAB Core :: Bundles :: Marketplace Add-on Services
151 │ Active │ 80 │ 3.4.0.202212150740 │ openHAB Core :: Bundles :: Community Marketplace Add-on Service :: Karaf
251 │ Active │ 80 │ 3.4.0.202212160328 │ openHAB Add-ons :: Bundles :: Automation :: JavaScript Scripting
252 │ Active │ 80 │ 3.4.0.202212160331 │ openHAB Add-ons :: Bundles :: Astro Binding
253 │ Active │ 80 │ 3.4.0.202212160333 │ openHAB Add-ons :: Bundles :: Chromecast Binding
254 │ Active │ 80 │ 3.4.0.202212160334 │ openHAB Add-ons :: Bundles :: Daikin Binding
255 │ Active │ 80 │ 3.4.0.202212160335 │ openHAB Add-ons :: Bundles :: Dresden Elektronik deCONZ Binding
256 │ Active │ 80 │ 3.4.0.202212160336 │ openHAB Add-ons :: Bundles :: Doorbird Binding
258 │ Active │ 80 │ 3.4.0.202212160343 │ openHAB Add-ons :: Bundles :: Homematic Binding
259 │ Active │ 80 │ 3.4.0.202212160343 │ openHAB Add-ons :: Bundles :: HTTP Binding
261 │ Active │ 80 │ 3.4.0.202212160351 │ openHAB Add-ons :: Bundles :: Mail Binding
263 │ Active │ 80 │ 3.4.0.202212160356 │ openHAB Add-ons :: Bundles :: Netatmo Binding
264 │ Active │ 80 │ 3.4.0.202212160356 │ openHAB Add-ons :: Bundles :: Network Binding
265 │ Active │ 80 │ 3.4.0.202212160356 │ openHAB Add-ons :: Bundles :: Network UPS Tools Binding
266 │ Active │ 80 │ 3.4.0.202212160359 │ openHAB Add-ons :: Bundles :: OpenWeatherMap Binding
267 │ Active │ 80 │ 3.4.0.202212160408 │ openHAB Add-ons :: Bundles :: Sonos Binding
268 │ Active │ 80 │ 3.4.0.202212160409 │ openHAB Add-ons :: Bundles :: Telegram Binding
271 │ Active │ 80 │ 3.4.0.202212160417 │ openHAB Add-ons :: Bundles :: IO :: openHAB Cloud Connector
272 │ Active │ 80 │ 3.4.0.202212160418 │ openHAB Add-ons :: Bundles :: Persistence Service :: MapDB
273 │ Active │ 80 │ 3.4.0.202212160418 │ openHAB Add-ons :: Bundles :: Persistence Service :: RRD4j
274 │ Active │ 75 │ 3.4.0.202212160418 │ openHAB Add-ons :: Bundles :: Transformation Service :: JavaScript
275 │ Active │ 75 │ 3.4.0.202212160419 │ openHAB Add-ons :: Bundles :: Transformation Service :: JSonPath
276 │ Active │ 75 │ 3.4.0.202212160340 │ openHAB Add-ons :: Bundles :: Transformation Service :: Map
277 │ Active │ 75 │ 3.4.0.202212160419 │ openHAB Add-ons :: Bundles :: Transformation Service :: RegEx
280 │ Active │ 80 │ 3.4.0.202212160421 │ openHAB Add-ons :: Bundles :: Voice :: VoiceRSS Text-to-Speech
Logs:
Below are some log messages that appear several times before the out-of-memory state is reached and the OH instance stops doing anything:
The HTTP binding encounters timeouts:
2022-12-18 17:09:16.161 [WARN ] [p.internal.http.HttpResponseListener] - Requesting 'http://pizero/api/0C894FF2A2/lights/9' (method='GET', content='null') failed: java.util.concurrent.TimeoutException: Total timeout 5000 ms elapsed
2022-12-18 17:09:28.246 [WARN ] [p.internal.http.HttpResponseListener] - Requesting 'http://pizero/api/0C894FF2A2/lights/12' (method='GET', content='null') failed: java.util.concurrent.TimeoutException: Total timeout 5000 ms elapsed
Cloud connection breaks (logged about every 3 minutes):
2022-12-18 17:10:11.236 [INFO ] [io.openhabcloud.internal.CloudClient] - Disconnected from the openHAB Cloud service (UUID = db...50, base URL = http://localhost:8080)
2022-12-18 17:11:16.132 [INFO ] [io.openhabcloud.internal.CloudClient] - Connected to the openHAB Cloud service (UUID = db...50, base URL = http://localhost:8080)
2022-12-18 17:11:35.124 [WARN ] [io.openhabcloud.internal.CloudClient] - Error during communication: EngineIOException xhr poll error
2022-12-18 17:11:37.173 [WARN ] [io.openhabcloud.internal.CloudClient] - Socket.IO disconnected: transport error
2022-12-18 17:11:37.229 [INFO ] [io.openhabcloud.internal.CloudClient] - Disconnected from the openHAB Cloud service (UUID = db...50, base URL = http://localhost:8080)
This indicates the beginning of the end:
2022-12-18 17:13:48.425 [WARN ] [ab.core.internal.events.EventHandler] - Dispatching event to subscriber 'org.openhab.core.internal.items.ItemUpdater@72739a31' takes more than 5000ms.
2022-12-18 17:14:01.240 [WARN ] [ab.core.internal.events.EventHandler] - Dispatching event to subscriber 'org.openhab.core.internal.items.ItemUpdater@72739a31' takes more than 5000ms.
Not sure about this one:
2022-12-18 17:24:04.525 [ERROR] [nternal.DiscoveryServiceRegistryImpl] - Cannot notify the DiscoveryListener 'org.openhab.core.config.discovery.internal.PersistentInbox' on Thing discovered event!
Timers from rules fail (unfortunately I cannot see which rule); this is logged quite frequently:
.
at java.util.Timer.sched(Timer.java:398) ~[?:?]
at java.util.Timer.schedule(Timer.java:194) ~[?:?]
at org.openhab.core.storage.json.internal.JsonStorage.deferredCommit(JsonStorage.java:414) ~[?:?]
at org.openhab.core.storage.json.internal.JsonStorage.put(JsonStorage.java:160) ~[?:?]
at org.openhab.core.config.discovery.internal.PersistentInbox.internalAdd(PersistentInbox.java:290) ~[bundleFile:?]
at org.openhab.core.config.discovery.internal.PersistentInbox.add(PersistentInbox.java:241) ~[bundleFile:?]
at org.openhab.core.config.discovery.internal.PersistentInbox.thingDiscovered(PersistentInbox.java:422) ~[bundleFile:?]
at org.openhab.core.config.discovery.internal.DiscoveryServiceRegistryImpl$1.run(DiscoveryServiceRegistryImpl.java:260) ~[bundleFile:?]
at org.openhab.core.config.discovery.internal.DiscoveryServiceRegistryImpl$1.run(DiscoveryServiceRegistryImpl.java:1) ~[bundleFile:?]
at java.security.AccessController.doPrivileged(Native Method) ~[?:?]
at org.openhab.core.config.discovery.internal.DiscoveryServiceRegistryImpl.thingDiscovered(DiscoveryServiceRegistryImpl.java:257) [bundleFile:?]
at org.openhab.core.config.discovery.AbstractDiscoveryService.thingDiscovered(AbstractDiscoveryService.java:251) [bundleFile:?]
at org.openhab.core.config.discovery.mdns.internal.MDNSDiscoveryService.createDiscoveryResult(MDNSDiscoveryService.java:227) [bundleFile:?]
at org.openhab.core.config.discovery.mdns.internal.MDNSDiscoveryService.considerService(MDNSDiscoveryService.java:214) [bundleFile:?]
at org.openhab.core.config.discovery.mdns.internal.MDNSDiscoveryService.serviceResolved(MDNSDiscoveryService.java:207) [bundleFile:?]
at javax.jmdns.impl.ListenerStatus$ServiceListenerStatus.serviceResolved(ListenerStatus.java:106) [bundleFile:3.5.8]
at javax.jmdns.impl.JmDNSImpl$1.run(JmDNSImpl.java:911) [bundleFile:3.5.8]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
The first time the heap space is mentioned:
2022-12-18 17:28:25.711 [WARN ] [mmon.WrappedScheduledExecutorService] - Scheduled runnable ended with an exception:
java.lang.OutOfMemoryError: Java heap space
What follows are various errors caused by the exhausted heap space, logged very frequently towards the end:
2022-12-18 18:03:02.091 [ERROR] [internal.handler.ScriptActionHandler] - Script execution of rule with UID 'airConditioners-5' failed: An error occurred during the script execution: Java heap space in airConditioners
...
2022-12-18 18:13:15.958 [WARN ] [mmon.WrappedScheduledExecutorService] - Scheduled runnable ended with an exception:
java.lang.OutOfMemoryError: Java heap space
...
2022-12-18 18:24:24.624 [ERROR] [internal.handler.ScriptActionHandler] - Script execution of rule with UID 'deconz-1' failed: Java heap space in deconz
After that, nothing happens anymore: no logs are written and no item states are updated (neither via automation nor via HABPanel or the mobile app).
Any help is greatly appreciated, as it drives me crazy to have to restart OH every time. Sometimes even the whole Raspberry Pi becomes unresponsive and I cannot SSH into it anymore, so I have to pull the plug to get it working again.
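In case it helps with diagnosing, here is roughly how I plan to watch the heap before it fills up next time. This is a sketch, assuming the container is named `openhab` and the JDK tools are available inside it (the image may ship only a JRE, in which case jcmd/jmap are missing; `habopen` is the default Karaf console password):

```
# From the host: live memory usage of the container
docker stats --no-stream openhab

# Karaf console inside the container: shell:info prints JVM heap usage
docker exec -it openhab /openhab/runtime/bin/client -p habopen shell:info

# If a JDK is available inside the container (the Java PID may differ):
docker exec openhab sh -c 'jcmd "$(pgrep -f java | head -n1)" GC.heap_info'
```

Comparing the heap numbers from `shell:info` over a few hours should show whether usage grows steadily (a leak) or jumps suddenly (a runaway rule or binding).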