openHABian became very slow and unresponsive after an update

  • Platform information:
    • Hardware: Rpi 3B+
    • OS: openHABian latest (as of the time of writing)
    • Java Runtime Environment: default version in openhabian
    • openHAB version: 2.4
  • Issue of the topic:
    After months of running untouched, I decided to update openHABian in late July 2019: I updated openhabian-config and ran apt-get update && apt-get upgrade. After that everything became sluggish, and a restart left the SD card corrupted. So I bought a new SD card, restored openhab_conf and openhab_userdata from my daily Bitbucket backup, and reinstalled the unofficial Broadlink binding add-on from here.

Now it runs very slowly: Paper UI responds to my commands in about 10 seconds, and SSH to the system has a 0.5-2 s delay between typing and the characters appearing. When htop is started it often shows 100% on a single (random) core, then that core settles at 2-3%, and RAM usage is always around 550-600 MB.

  • Please post configurations (if applicable):
    Possible causes, though I'm not sure, since these config changes were made months before things became sluggish:
    openhab.log moved to /tmp/openhab.log
    reverse proxy enabled
    built-in MQTT broker of openHAB 2.4
  • If logs were generated please post these here using code fences:
    How long should I wait for a DEBUG log to be worth posting here? For now, here is what I get from frontail, captured right before posting this thread:
    (htop still shows that RAM usage is about 560 MB)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255) [84:org.eclipse.jetty.server:9.4.11.v20180605]

	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317) [84:org.eclipse.jetty.server:9.4.11.v20180605]

	at org.ops4j.pax.web.service.jetty.internal.HttpServiceContext.doHandle(HttpServiceContext.java:293) [188:org.ops4j.pax.web.pax-web-jetty:7.2.3]

	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203) [84:org.eclipse.jetty.server:9.4.11.v20180605]

	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473) [85:org.eclipse.jetty.servlet:9.4.11.v20180605]

	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564) [84:org.eclipse.jetty.server:9.4.11.v20180605]

	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201) [84:org.eclipse.jetty.server:9.4.11.v20180605]

	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219) [84:org.eclipse.jetty.server:9.4.11.v20180605]

	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144) [84:org.eclipse.jetty.server:9.4.11.v20180605]

	at org.ops4j.pax.web.service.jetty.internal.JettyServerHandlerCollection.handle(JettyServerHandlerCollection.java:80) [188:org.ops4j.pax.web.pax-web-jetty:7.2.3]

	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) [84:org.eclipse.jetty.server:9.4.11.v20180605]

	at org.eclipse.jetty.server.Server.handle(Server.java:531) [84:org.eclipse.jetty.server:9.4.11.v20180605]

	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352) [84:org.eclipse.jetty.server:9.4.11.v20180605]

	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260) [84:org.eclipse.jetty.server:9.4.11.v20180605]

	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281) [75:org.eclipse.jetty.io:9.4.11.v20180605]

	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102) [75:org.eclipse.jetty.io:9.4.11.v20180605]

	at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118) [75:org.eclipse.jetty.io:9.4.11.v20180605]

	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333) [87:org.eclipse.jetty.util:9.4.11.v20180605]

	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310) [87:org.eclipse.jetty.util:9.4.11.v20180605]

	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168) [87:org.eclipse.jetty.util:9.4.11.v20180605]

	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126) [87:org.eclipse.jetty.util:9.4.11.v20180605]

	at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366) [87:org.eclipse.jetty.util:9.4.11.v20180605]

	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:762) [87:org.eclipse.jetty.util:9.4.11.v20180605]

	at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:680) [87:org.eclipse.jetty.util:9.4.11.v20180605]

	at java.lang.Thread.run(Thread.java:748) [?:?]

Caused by: org.eclipse.jetty.io.EofException

	at org.eclipse.jetty.io.ChannelEndPoint.flush(ChannelEndPoint.java:286) ~[75:org.eclipse.jetty.io:9.4.11.v20180605]

	at org.eclipse.jetty.io.WriteFlusher.flush(WriteFlusher.java:429) ~[75:org.eclipse.jetty.io:9.4.11.v20180605]

	at org.eclipse.jetty.io.WriteFlusher.write(WriteFlusher.java:323) ~[75:org.eclipse.jetty.io:9.4.11.v20180605]

	at org.eclipse.jetty.io.AbstractEndPoint.write(AbstractEndPoint.java:380) ~[75:org.eclipse.jetty.io:9.4.11.v20180605]

	at org.eclipse.jetty.server.HttpConnection$SendCallback.process(HttpConnection.java:798) ~[84:org.eclipse.jetty.server:9.4.11.v20180605]

	at org.eclipse.jetty.util.IteratingCallback.processing(IteratingCallback.java:241) ~[87:org.eclipse.jetty.util:9.4.11.v20180605]

	at org.eclipse.jetty.util.IteratingCallback.iterate(IteratingCallback.java:224) ~[87:org.eclipse.jetty.util:9.4.11.v20180605]

	at org.eclipse.jetty.server.HttpConnection.send(HttpConnection.java:538) ~[84:org.eclipse.jetty.server:9.4.11.v20180605]

	at org.eclipse.jetty.server.HttpChannel.sendResponse(HttpChannel.java:790) ~[84:org.eclipse.jetty.server:9.4.11.v20180605]

	at org.eclipse.jetty.server.HttpChannel.write(HttpChannel.java:846) ~[84:org.eclipse.jetty.server:9.4.11.v20180605]

	at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:240) ~[84:org.eclipse.jetty.server:9.4.11.v20180605]

	at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:216) ~[84:org.eclipse.jetty.server:9.4.11.v20180605]

	at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:515) ~[84:org.eclipse.jetty.server:9.4.11.v20180605]

	at org.glassfish.jersey.servlet.internal.ResponseWriter$NonCloseableOutputStreamWrapper.write(ResponseWriter.java:325) ~[?:?]

	at org.glassfish.jersey.message.internal.CommittingOutputStream.write(CommittingOutputStream.java:229) ~[?:?]

	at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$UnCloseableOutputStream.write(WriterInterceptorExecutor.java:299) ~[?:?]

	at org.glassfish.jersey.message.internal.ReaderWriter.writeTo(ReaderWriter.java:116) ~[?:?]

	at org.glassfish.jersey.message.internal.AbstractMessageReaderWriterProvider.writeTo(AbstractMessageReaderWriterProvider.java:79) ~[?:?]

	at org.glassfish.jersey.message.internal.InputStreamProvider.writeTo(InputStreamProvider.java:105) ~[?:?]

	at org.glassfish.jersey.message.internal.InputStreamProvider.writeTo(InputStreamProvider.java:60) ~[?:?]

	at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.invokeWriteTo(WriterInterceptorExecutor.java:265) ~[?:?]

	at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.aroundWriteTo(WriterInterceptorExecutor.java:250) ~[?:?]

	at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162) ~[?:?]

	at org.glassfish.jersey.server.internal.JsonWithPaddingInterceptor.aroundWriteTo(JsonWithPaddingInterceptor.java:106) ~[?:?]

	at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162) ~[?:?]

	at org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundWriteTo(MappableExceptionWrapperInterceptor.java:86) ~[?:?]

	... 53 more

Caused by: java.io.IOException: Broken pipe

	at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[?:?]

	at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) ~[?:?]

	at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) ~[?:?]

	at sun.nio.ch.IOUtil.write(IOUtil.java:51) ~[?:?]

	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471) ~[?:?]

	at org.eclipse.jetty.io.ChannelEndPoint.flush(ChannelEndPoint.java:264) ~[75:org.eclipse.jetty.io:9.4.11.v20180605]

	at org.eclipse.jetty.io.WriteFlusher.flush(WriteFlusher.java:429) ~[75:org.eclipse.jetty.io:9.4.11.v20180605]

	at org.eclipse.jetty.io.WriteFlusher.write(WriteFlusher.java:323) ~[75:org.eclipse.jetty.io:9.4.11.v20180605]

	at org.eclipse.jetty.io.AbstractEndPoint.write(AbstractEndPoint.java:380) ~[75:org.eclipse.jetty.io:9.4.11.v20180605]

	at org.eclipse.jetty.server.HttpConnection$SendCallback.process(HttpConnection.java:798) ~[84:org.eclipse.jetty.server:9.4.11.v20180605]

	at org.eclipse.jetty.util.IteratingCallback.processing(IteratingCallback.java:241) ~[87:org.eclipse.jetty.util:9.4.11.v20180605]

	at org.eclipse.jetty.util.IteratingCallback.iterate(IteratingCallback.java:224) ~[87:org.eclipse.jetty.util:9.4.11.v20180605]

	at org.eclipse.jetty.server.HttpConnection.send(HttpConnection.java:538) ~[84:org.eclipse.jetty.server:9.4.11.v20180605]

	at org.eclipse.jetty.server.HttpChannel.sendResponse(HttpChannel.java:790) ~[84:org.eclipse.jetty.server:9.4.11.v20180605]

	at org.eclipse.jetty.server.HttpChannel.write(HttpChannel.java:846) ~[84:org.eclipse.jetty.server:9.4.11.v20180605]

	at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:240) ~[84:org.eclipse.jetty.server:9.4.11.v20180605]

	at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:216) ~[84:org.eclipse.jetty.server:9.4.11.v20180605]

	at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:515) ~[84:org.eclipse.jetty.server:9.4.11.v20180605]

	at org.glassfish.jersey.servlet.internal.ResponseWriter$NonCloseableOutputStreamWrapper.write(ResponseWriter.java:325) ~[?:?]

	at org.glassfish.jersey.message.internal.CommittingOutputStream.write(CommittingOutputStream.java:229) ~[?:?]

	at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$UnCloseableOutputStream.write(WriterInterceptorExecutor.java:299) ~[?:?]

	at org.glassfish.jersey.message.internal.ReaderWriter.writeTo(ReaderWriter.java:116) ~[?:?]

	at org.glassfish.jersey.message.internal.AbstractMessageReaderWriterProvider.writeTo(AbstractMessageReaderWriterProvider.java:79) ~[?:?]

	at org.glassfish.jersey.message.internal.InputStreamProvider.writeTo(InputStreamProvider.java:105) ~[?:?]

	at org.glassfish.jersey.message.internal.InputStreamProvider.writeTo(InputStreamProvider.java:60) ~[?:?]

	at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.invokeWriteTo(WriterInterceptorExecutor.java:265) ~[?:?]

	at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.aroundWriteTo(WriterInterceptorExecutor.java:250) ~[?:?]

	at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162) ~[?:?]

	at org.glassfish.jersey.server.internal.JsonWithPaddingInterceptor.aroundWriteTo(JsonWithPaddingInterceptor.java:106) ~[?:?]

	at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162) ~[?:?]

	at org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundWriteTo(MappableExceptionWrapperInterceptor.java:86) ~[?:?]

	... 53 more

2019-08-26 07:58:05.616 [ERROR] [handler.BroadlinkRemoteModel2Handler] - rm2:b4-43-0d-c3-0f-05[^]: updateItemStatus: Online -> Offline

2019-08-26 07:58:27.426 [WARN ] [mmon.WrappedScheduledExecutorService] - Scheduled runnable ended with an exception: 

java.lang.OutOfMemoryError: Java heap space

2019-08-26 08:20:45.170 [ERROR] [io.openhabcloud.internal.CloudClient] - Socket.IO error: {}

io.socket.engineio.client.EngineIOException: websocket error

	at io.socket.engineio.client.Transport.onError(Transport.java:71) [224:org.openhab.io.openhabcloud:2.4.0]

	at io.socket.engineio.client.transports.WebSocket.access$500(WebSocket.java:28) [224:org.openhab.io.openhabcloud:2.4.0]

	at io.socket.engineio.client.transports.WebSocket$2$5.run(WebSocket.java:150) [224:org.openhab.io.openhabcloud:2.4.0]

	at io.socket.thread.EventThread$2.run(EventThread.java:80) [224:org.openhab.io.openhabcloud:2.4.0]

	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:?]

	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:?]

	at java.lang.Thread.run(Thread.java:748) [?:?]

Caused by: java.io.EOFException

	at okio.RealBufferedSource.require(RealBufferedSource.java:59) ~[?:?]

	at okio.RealBufferedSource.readByte(RealBufferedSource.java:72) ~[?:?]

	at okhttp3.internal.ws.WebSocketReader.readHeader(WebSocketReader.java:115) ~[?:?]

	at okhttp3.internal.ws.WebSocketReader.processNextFrame(WebSocketReader.java:104) ~[?:?]

	at okhttp3.internal.ws.RealWebSocket.readMessage(RealWebSocket.java:97) ~[?:?]

	at okhttp3.ws.WebSocketCall.createWebSocket(WebSocketCall.java:152) ~[?:?]

	at okhttp3.ws.WebSocketCall.access$000(WebSocketCall.java:41) ~[?:?]

	at okhttp3.ws.WebSocketCall$1.onResponse(WebSocketCall.java:97) ~[?:?]

	at okhttp3.RealCall$AsyncCall.execute(RealCall.java:126) ~[?:?]

	at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32) ~[?:?]

	... 3 more

2019-08-26 08:25:24.312 [ERROR] [org.apache.felix.fileinstall        ] - In main loop, we have serious trouble

java.lang.OutOfMemoryError: Java heap space

2019-08-26 08:25:28.229 [WARN ] [mmon.WrappedScheduledExecutorService] - Scheduled runnable ended with an exception: 

java.lang.OutOfMemoryError: Java heap space

2019-08-26 08:34:13.578 [WARN ] [mmon.WrappedScheduledExecutorService] - Scheduled runnable ended with an exception: 

java.lang.OutOfMemoryError: Java heap space

2019-08-26 08:47:04.008 [WARN ] [ommon.io.AbstractWebSocketConnection] - 

org.eclipse.jetty.websocket.api.WebSocketException: java.lang.OutOfMemoryError: Java heap space

	at org.eclipse.jetty.websocket.common.Parser.notifyFrame(Parser.java:228) ~[92:org.eclipse.jetty.websocket.common:9.4.11.v20180605]

	at org.eclipse.jetty.websocket.common.Parser.parse(Parser.java:245) ~[92:org.eclipse.jetty.websocket.common:9.4.11.v20180605]

	at org.eclipse.jetty.websocket.common.io.AbstractWebSocketConnection.readParse(AbstractWebSocketConnection.java:560) [92:org.eclipse.jetty.websocket.common:9.4.11.v20180605]

	at org.eclipse.jetty.websocket.common.io.AbstractWebSocketConnection.onFillable(AbstractWebSocketConnection.java:391) [92:org.eclipse.jetty.websocket.common:9.4.11.v20180605]

	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281) [75:org.eclipse.jetty.io:9.4.11.v20180605]

	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102) [75:org.eclipse.jetty.io:9.4.11.v20180605]

	at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118) [75:org.eclipse.jetty.io:9.4.11.v20180605]

	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333) [87:org.eclipse.jetty.util:9.4.11.v20180605]

	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310) [87:org.eclipse.jetty.util:9.4.11.v20180605]

	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168) [87:org.eclipse.jetty.util:9.4.11.v20180605]

	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126) [87:org.eclipse.jetty.util:9.4.11.v20180605]

	at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366) [87:org.eclipse.jetty.util:9.4.11.v20180605]

	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:762) [87:org.eclipse.jetty.util:9.4.11.v20180605]

	at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:680) [87:org.eclipse.jetty.util:9.4.11.v20180605]

	at java.lang.Thread.run(Thread.java:748) [?:?]

Caused by: java.lang.OutOfMemoryError: Java heap space

2019-08-26 08:47:49.370 [WARN ] [e.jetty.util.thread.QueuedThreadPool] - Unexpected thread death: org.eclipse.jetty.util.thread.QueuedThreadPool$2@23ffd9 in QueuedThreadPool[ServletModel-14]@13c6605{STARTED,8<=8<=8,i=5,q=0}[ReservedThreadExecutor@1e355e9{s=0/1,p=0}]

2019-08-26 08:47:58.648 [WARN ] [org.apache.karaf.services.eventadmin] - EventAdmin: Exception during event dispatch [org.osgi.service.event.Event [topic=smarthome] {topic=smarthome/things/mihome:bridge:7811dc64e075/status, type=ThingStatusInfoEvent, payload={"status":"ONLINE","statusDetail":"NONE"}, timestamp=1566783534151} | {org.osgi.service.event.EventHandler, org.apache.karaf.event.service.EventCollector}={service.id=246, service.bundleid=47, service.scope=singleton, event.topics=*} | Bundle(org.apache.karaf.event_4.2.1 [47])]

java.lang.OutOfMemoryError: Java heap space

2019-08-26 09:35:52.732 [WARN ] [diClientSocket$KodiWebSocketListener] - Unhandled Error (closing connection)

java.lang.OutOfMemoryError: Java heap space

2019-08-26 09:37:46.636 [WARN ] [mmon.WrappedScheduledExecutorService] - Scheduled runnable ended with an exception: 

java.lang.OutOfMemoryError: Java heap space

2019-08-26 09:46:50.138 [WARN ] [mmon.WrappedScheduledExecutorService] - Scheduled runnable ended with an exception: 

java.lang.OutOfMemoryError: Java heap space

2019-08-26 09:50:07.775 [WARN ] [mmon.WrappedScheduledExecutorService] - Scheduled runnable ended with an exception: 

java.lang.OutOfMemoryError: Java heap space

2019-08-26 09:52:52.072 [WARN ] [.util.thread.strategy.EatWhatYouKill] - 

java.lang.OutOfMemoryError: Java heap space

2019-08-26 09:53:04.337 [ERROR] [org.apache.felix.fileinstall        ] - In main loop, we have serious trouble

java.lang.OutOfMemoryError: Java heap space

2019-08-26 09:53:00.379 [WARN ] [e.jetty.util.thread.QueuedThreadPool] - Unexpected thread death: org.eclipse.jetty.util.thread.QueuedThreadPool$2@1ce9e23 in QueuedThreadPool[qtp1907246]@1d1a2e{STARTED,2<=9<=50,i=1,q=0}[ReservedThreadExecutor@cc10be{s=1/4,p=0}]

2019-08-26 10:58:08.278 [WARN ] [mmon.WrappedScheduledExecutorService] - Scheduled runnable ended with an exception: 

java.lang.OutOfMemoryError: Java heap space

2019-08-26 11:04:35.557 [WARN ] [mmon.WrappedScheduledExecutorService] - Scheduled runnable ended with an exception: 

java.lang.OutOfMemoryError: Java heap space

2019-08-26 11:04:48.358 [WARN ] [mmon.WrappedScheduledExecutorService] - Scheduled runnable ended with an exception: 

java.lang.OutOfMemoryError: Java heap space

2019-08-26 11:56:04.712 [WARN ] [mmon.WrappedScheduledExecutorService] - Scheduled runnable ended with an exception: 

java.lang.OutOfMemoryError: Java heap space

2019-08-26 12:04:10.196 [WARN ] [mmon.WrappedScheduledExecutorService] - Scheduled runnable ended with an exception: 

java.lang.OutOfMemoryError: Java heap space

2019-08-26 12:04:10.198 [WARN ] [mmon.WrappedScheduledExecutorService] - Scheduled runnable ended with an exception: 

java.lang.OutOfMemoryError: Java heap space

2019-08-26 12:04:19.210 [WARN ] [mmon.WrappedScheduledExecutorService] - Scheduled runnable ended with an exception: 

java.lang.OutOfMemoryError: Java heap space

2019-08-26 12:04:10.197 [WARN ] [e.jetty.util.thread.QueuedThreadPool] - 

java.lang.OutOfMemoryError: Java heap space

2019-08-26 12:05:04.398 [WARN ] [e.jetty.util.thread.QueuedThreadPool] - Unexpected thread death: org.eclipse.jetty.util.thread.QueuedThreadPool$2@9ff59c in QueuedThreadPool[ESH-httpClient-common]@327e48{STARTED,10<=10<=40,i=5,q=0}[ReservedThreadExecutor@d31700{s=0/4,p=0}]

2019-08-27 00:16:41.455 [ERROR] [org.apache.felix.fileinstall        ] - In main loop, we have serious trouble

java.lang.OutOfMemoryError: Java heap space

2019-08-28 00:07:07.010 [WARN ] [mmon.WrappedScheduledExecutorService] - Scheduled runnable ended with an exception: 

java.lang.OutOfMemoryError: Java heap space

2019-08-28 00:07:07.010 [WARN ] [mmon.WrappedScheduledExecutorService] - Scheduled runnable ended with an exception: 

java.lang.OutOfMemoryError: Java heap space

2019-08-28 00:09:01.694 [WARN ] [e.jetty.util.thread.QueuedThreadPool] - Unexpected thread death: org.eclipse.jetty.util.thread.QueuedThreadPool$2@1bc95ed in QueuedThreadPool[jupnp-jetty-client]@1c598e2{STARTED,5<=5<=20,i=2,q=0}[ReservedThreadExecutor@9f7718{s=0/2,p=0}]

2019-08-31 06:47:37.113 [WARN ] [org.eclipse.jetty.server.HttpChannel] - /start/index

java.lang.OutOfMemoryError: Java heap space

2019-09-02 23:21:33.392 [WARN ] [.util.thread.strategy.EatWhatYouKill] - 

java.lang.OutOfMemoryError: Java heap space

2019-09-02 23:21:29.429 [WARN ] [mmon.WrappedScheduledExecutorService] - Scheduled runnable ended with an exception: 

java.lang.OutOfMemoryError: Java heap space

2019-09-02 23:29:32.212 [WARN ] [e.jetty.util.thread.QueuedThreadPool] - 

java.lang.OutOfMemoryError: Java heap space

2019-09-02 23:29:16.555 [WARN ] [e.jetty.util.thread.QueuedThreadPool] - 

java.lang.OutOfMemoryError: Java heap space

2019-09-02 23:30:50.419 [WARN ] [.util.thread.strategy.EatWhatYouKill] - 

java.lang.OutOfMemoryError: Java heap space

2019-09-02 23:30:54.493 [WARN ] [e.jetty.util.thread.QueuedThreadPool] - Unexpected thread death: org.eclipse.jetty.util.thread.QueuedThreadPool$2@1ce9e23 in QueuedThreadPool[qtp1907246]@1d1a2e{STARTED,2<=11<=50,i=2,q=0}[ReservedThreadExecutor@cc10be{s=0/4,p=0}]

2019-09-02 23:26:07.718 [WARN ] [mmon.WrappedScheduledExecutorService] - Scheduled runnable ended with an exception: 

java.lang.OutOfMemoryError: Java heap space

2019-09-02 23:30:59.426 [WARN ] [e.jetty.util.thread.QueuedThreadPool] - Unexpected thread death: org.eclipse.jetty.util.thread.QueuedThreadPool$2@9ff59c in QueuedThreadPool[ESH-httpClient-common]@327e48{STARTED,10<=10<=40,i=5,q=0}[ReservedThreadExecutor@d31700{s=0/4,p=0}]

2019-09-02 23:52:40.950 [WARN ] [org.eclipse.jetty.server.HttpChannel] - /start/index

java.lang.OutOfMemoryError: Java heap space

2019-09-02 23:56:13.643 [WARN ] [org.eclipse.jetty.server.HttpChannel] - /start/index

java.lang.OutOfMemoryError: Java heap space

2019-09-02 23:58:10.508 [WARN ] [org.eclipse.jetty.server.HttpChannel] - /start/index

java.lang.OutOfMemoryError: Java heap space

2019-09-03 00:00:19.061 [WARN ] [org.eclipse.jetty.server.HttpChannel] - /start/index

java.lang.OutOfMemoryError: Java heap space

2019-09-03 00:06:19.930 [WARN ] [org.eclipse.jetty.server.HttpChannel] - /start/index

java.lang.OutOfMemoryError: Java heap space

/tmp is in RAM, so depending on how large your logs grew (and when) this is a potential explanation
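A quick way to check that is to look at how big /tmp actually is and what is inside it, e.g.:

# size of the tmpfs mount and how much of it is currently used
df -h /tmp
# what inside /tmp actually takes the space
du -sh /tmp/* 2>/dev/null | sort -h
# tmpfs contents live in RAM and show up under buff/cache
free -h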

Did you just apt-get upgrade, or did you also switch to buster and do a dist-upgrade?

I suggest you download and install the latest openHABian image (which is buster based).
If you want to go easy on your SD card (reduce writes to keep it from wearing out), enable the new zram feature.
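If you want to see whether zram is already active (and to confirm it afterwards), the usual tools work; the exact menu wording below is from memory, so treat it as approximate:

# list active zram devices (no output means zram is not in use)
zramctl
# zram-backed swap, if configured, also shows up here
cat /proc/swaps
# on openHABian it is enabled through the config tool
sudo openhabian-config    # look for the zram option in the system-related menu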


Putting logs in RAM is exactly why I put them on /tmp.
RAM is still in a reasonable range, below 600 MB with about 500 MB used by Java, and it stays like that from reboot until weeks later. However, even right after a reboot, OH is still slow.

I put the logs on /tmp months before and it ran normally. Everything changed after updating from the repository.

RAM usage depends on many factors beyond just Java. And the world keeps turning: any OS or OH update means a potential change in resource usage. Solutions for a past situation are not necessarily still good solutions today.
As I said, check out openHABian and the zram feature. It's built/configured specifically for low-memory, SD-based boxes such as RPis so that it doesn't conflict with OH or OS settings.

I also started to have issues with heap space, and I'm running openHABian on an Intel NUC with 16 GB. Previously the issues were connected to the ipcamera binding and refreshing images from the cameras, but I've turned it off and openHAB still runs out of memory after 24 hours. I've already set 8 GB for openHAB in /etc/default/openhab2 and it doesn't help.
Now, how can I determine what causes this issue? I know there is a possibility to dump the memory to a file, but I have no idea how to analyse that file. Can anybody help?
EDIT: Found this site https://dzone.com/articles/java-heap-dump-analyzer-1
also
https://dzone.com/articles/finding-java-memory-leaks-from-a-heap-dump
@WeirdlyChosen, you can check whether increasing memory helps you. Try modifying /etc/default/openhab2 and increasing the default value of -Xmx.
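For reference, on an apt-based install the heap flags usually go into EXTRA_JAVA_OPTS in /etc/default/openhab2, roughly like this (the values are only placeholders, pick what fits your box, and restart the service afterwards):

# /etc/default/openhab2 (excerpt)
# -Xms sets the initial heap size, -Xmx the maximum the JVM may grow to
EXTRA_JAVA_OPTS="-Xms512m -Xmx2g"

sudo systemctl restart openhab2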

That won't help with his problem (sluggishness). If he were to hit the heap size limit (which is what you increase by using -Xmx), his OH would crash at that moment.

Understood. Does anybody have a working solution for monitoring heap size from openHAB? Since this problem is usually not easy to resolve quickly, I would like a way to restart openHAB itself when it gets close to the limit, gather some statistics, etc. I know this is not a real solution, but my family is getting exhausted by situations where openHAB stops working at all, especially when it comes to the Satel alarm system.
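What I have in mind is roughly something like this (a crude sketch that keys off the java process's resident memory rather than the real heap, and assumes the systemd unit is called openhab2):

#!/bin/bash
# Restart openHAB when its java process grows beyond a resident-memory threshold.
LIMIT_KB=$((700 * 1024))                                  # 700 MB, adjust to taste
PID=$(systemctl show -p MainPID openhab2 | cut -d= -f2)
RSS_KB=$(ps -o rss= -p "$PID" 2>/dev/null | tr -d ' ')
if [ -n "$RSS_KB" ] && [ "$RSS_KB" -gt "$LIMIT_KB" ]; then
    logger "openhab watchdog: RSS ${RSS_KB} kB exceeds ${LIMIT_KB} kB, restarting"
    systemctl restart openhab2
fi

Run from root's crontab every few minutes, e.g. */5 * * * * /usr/local/bin/oh-watchdog.sh. Crude, but it would keep the house running until the real leak is found.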

See Openhab filling up Memory and Swap - #90 by rlkoshak which should get you started.

I stopped writing openhab.log entirely to test performance, but it looks like RAM is still being consumed by cache. This is what I see in top a few minutes after restarting the openHAB service:

top - 01:52:42 up 2 days, 11:40,  1 user,  load average: 1.00, 1.05, 1.06
Tasks: 110 total,   1 running,  65 sleeping,   0 stopped,   0 zombie
%Cpu(s): 25.2 us,  0.2 sy,  0.0 ni, 74.7 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem :   999036 total,    77924 free,   547868 used,   373244 buff/cache
KiB Swap:   102396 total,   100604 free,     1792 used.   388160 avail Mem 

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                            
15441 openhab   20   0  655716 483868  15004 S 100.0 48.4  55:03.93 java                                               
18325 openhab+  20   0    9732   3236   2784 R   0.7  0.3   0:00.14 top                                                
    1 root      20   0   26980   6060   4932 S   0.0  0.6   0:06.85 systemd                                            
    2 root      20   0       0      0      0 S   0.0  0.0   0:00.27 kthreadd                                           
    3 root       0 -20       0      0      0 I   0.0  0.0   0:00.00 rcu_gp                                             
    4 root       0 -20       0      0      0 I   0.0  0.0   0:00.00 rcu_par_gp                                         
    8 root       0 -20       0      0      0 I   0.0  0.0   0:00.00 mm_percpu_wq                                       
    9 root      20   0       0      0      0 S   0.0  0.0   0:06.29 ksoftirqd/0   

Is there any clue as to what I should look for? Could the daily cron job that commits and pushes the conf and userdata folders to git take up that much cache? Does openHAB use a cache that big?

This is what is inside the /tmp folder:

openhabian@ohpi:/tmp$ ls
hsperfdata_openhab  systemd-private-771418155355416f9b1a523aa7594cb6-systemd-timesyncd.service-gdY6jv

Your Java is running at full speed (100%) for a longer time (load averages all >1). Every time you change (or just touch) files in the items and rules directories, OH will reprocess those, or even all rules (if you touch items files). This results in changes queuing up. That's what I suspect is your problem.
Either you are manually editing files in place, or some script, cron job or other process is modifying your config files.
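One way to verify that is to look for config files that were modified (or merely touched) recently, and for the reload messages openHAB writes when it picks a file up; a sketch assuming the default openHABian paths:

# anything under the config dir touched in the last hour?
find /etc/openhab2 -type f -mmin -60 -ls
# each (re)load of an .items/.rules file leaves a "Loading model" line in openhab.log
grep "Loading model" /var/log/openhab2/openhab.log | tail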

That may be the case. I crammed all items into one file and all groups into a separate file, to reduce typos and to let me track which group contains which items in a spreadsheet. It comes to less than 500 lines including comments.

I have not touched the items file in months, but I will try to split it tonight to see if it helps.

After splitting those items into 2 files, with 79 and 100 lines of items including comments, it does withstand the RAM eater longer. But it still runs out of RAM.

###############################################################################
###############  ohpi  ########################################################
###############################################################################
##        Ip = 192.168.1.79
##   Release = Raspbian GNU/Linux 9 (stretch)
##    Kernel = Linux 4.19.66-v7+
##  Platform = Raspberry Pi 3 Model B Rev 1.2
##    Uptime = 7 day(s). 19:49:9
## CPU Usage = 2% avg over 4 cpu(s) (4 core(s) x 1 socket(s))
##  CPU Load = 1m: 0.07, 5m: 0.02, 15m: 0.00
##    Memory = Free: 0.08GB (9%), Used: 0.86GB (91%), Total: 0.95GB
##      Swap = Free: 0.09GB (98%), Used: 0.00GB (2%), Total: 0.09GB
##      Root = Free: 11.44GB (83%), Used: 2.21GB (17%), Total: 14.26GB
##   Updates = 21 apt updates available.
##  Sessions = 1 session(s)
## Processes = 114 running processes of 32768 maximum processes
###############################################################################

This is the top output:

top - 10:04:08 up 7 days, 19:51,  1 user,  load average: 0.14, 0.09, 0.03
Tasks: 110 total,   1 running,  65 sleeping,   0 stopped,   0 zombie
%Cpu(s):  1.9 us,  0.6 sy,  0.0 ni, 97.4 id,  0.0 wa,  0.0 hi,  0.2 si,  0.0 st
KiB Mem :   999036 total,    93604 free,   555000 used,   350432 buff/cache
KiB Swap:   102396 total,   100860 free,     1536 used.   375288 avail Mem 

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                    
 4667 openhab   20   0  659928 492728  15068 S   5.9 49.3 410:34.57 java                       
13306 openhab+  20   0   43204  14144  11832 S   2.3  1.4   0:02.59 smbd                       
13820 openhab+  20   0    9732   3232   2772 R   1.3  0.3   0:00.16 top                        
 8590 root      20   0       0      0      0 I   0.3  0.0   0:02.60 kworker/0:2-eve            
13227 root      20   0       0      0      0 I   0.3  0.0   0:00.04 kworker/2:0-eve            
    1 root      20   0   26980   6056   4928 S   0.0  0.6   0:14.76 systemd                    
    2 root      20   0       0      0      0 S   0.0  0.0   0:00.82 kthreadd                   
    3 root       0 -20       0      0      0 I   0.0  0.0   0:00.00 rcu_gp                     
    4 root       0 -20       0      0      0 I   0.0  0.0   0:00.00 rcu_par_gp                 
    8 root       0 -20       0      0      0 I   0.0  0.0   0:00.00 mm_percpu_wq               
    9 root      20   0       0      0      0 S   0.0  0.0   0:19.79 ksoftirqd/0    

And this is the htop result:

With these results, I cannot access openhabian.local, nor can the openHAB mobile app open its sitemaps.

Is there something that I overlooked?

So Java no longer eats CPU, so OH should not be sluggish any more.
What do you mean by "it still runs out of RAM"? Those are normal numbers.
How much RAM did you give to the tmpfs /tmp? As I said, move to zram instead.

When I launch htop it still shows 100% usage on one core for a second, then it goes back to normal. I wonder why my uploaded screenshot is not displaying.

You can see on the SSH welcome page that free memory is only 9% while Java uses only 49%. I suspect something is using cache to drain my RAM, but I do not know how to track it.
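Would something like this be the right way to narrow it down?

# biggest resident-memory consumers
ps aux --sort=-rss | head -n 10
# how much of the "used" memory is actually reclaimable cache
free -h
grep -E 'MemAvailable|^Cached|Buffers' /proc/meminfo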

It is the openHABian default. I have not been writing logs to /tmp since last week.

I really want to try zram, but I prefer my system to be stable before trying something new, so I can track down the source of my problem.