Since installing version 2.5.4-1 I've been having issues running it on my Raspberry Pi 3. The only workaround I've found is to reboot the Raspberry Pi. After approximately 13 hours, memory usage reaches about 98%, everything runs very slowly, and within a few more hours OH2 stops responding.
I'd like to ask if anyone else has faced this problem, and whether there's a solution other than buying a new Raspberry Pi with more memory.
Paulo
#######################################################################
######################## raspberrypi ####################### 7.05.170509
#######################################################################
Ip = 192.168.15.20
Release = Raspbian GNU/Linux 8 (jessie)
Kernel = Linux 4.9.35-v7+
Platform = Raspberry Pi 3 Model B Rev 1.2
Uptime = 0 day(s). 13:6:15
CPU Usage = 32.7 % avg over 4 cpu(s) (4 core(s) x 1 socket(s))
I've been looking at the logs and they show only routine events. I use a single sitemap and about 30 devices configured in my .items and .things files.
I'll post the logs below FYI, and I'll try openHABian to see if any further issue shows up.
I'll also flash another SD card, but I've been upgrading OH since version 1 and it ran flawlessly for two years, so there may be a real issue here that I'd like to find before resorting to a more radical solution.
Paulo
==> /var/log/openhab2/events.log <==
2020-04-27 10:00:19.813 [vent.ItemStateChangedEvent] - MotionSensor_LastMotion changed from 2020-04-27T09:35:25.773-0300 to 2020-04-27T10:00:19.777-0300
2020-04-27 10:00:40.789 [vent.ItemStateChangedEvent] - MotionSensor_LastMotion2 changed from 2020-04-27T09:59:39.337-0300 to 2020-04-27T10:00:40.774-0300
2020-04-27 10:00:59.533 [vent.ItemStateChangedEvent] - SunElevation changed from 40.79564239053302 to 41.62171581714554
2020-04-27 10:00:59.580 [vent.ItemStateChangedEvent] - MoonElevation changed from -5.33826828758231 to -4.267754456742366
2020-04-27 10:01:00.374 [vent.ItemStateChangedEvent] - CurrDateTime changed from 2020-04-27T09:57:54.160-0300 to 2020-04-27T10:01:00.338-0300
2020-04-27 10:01:00.376 [vent.ItemStateChangedEvent] - LocalTime_Date changed from 2020-04-27 09:57:54 BRT to 2020-04-27 10:01:00 BRT
2020-04-27 10:01:00.379 [vent.ItemStateChangedEvent] - Date changed from 2020-04-27T09:57:54.180-0300 to 2020-04-27T10:01:00.357-0300
2020-04-27 10:01:43.319 [vent.ItemStateChangedEvent] - MotionSensor_LastMotion2 changed from 2020-04-27T10:00:40.774-0300 to 2020-04-27T10:01:43.298-0300
2020-04-27 10:02:19.799 [vent.ItemStateChangedEvent] - MotionSensor_MotionStatus changed from ON to OFF
2020-04-27 10:02:43.576 [vent.ItemStateChangedEvent] - MotionSensor_LastMotion2 changed from 2020-04-27T10:01:43.298-0300 to 2020-04-27T10:02:43.558-0300
==> /var/log/openhab2/openhab.log <==
2020-04-27 10:03:39.207 [INFO ] [g.miio.internal.cloud.CloudConnector] - No Xiaomi cloud credentials. Cloud connectivity disabled
==> /var/log/openhab2/events.log <==
2020-04-27 10:03:49.217 [vent.ItemStateChangedEvent] - MotionSensor_LastMotion2 changed from 2020-04-27T10:02:43.558-0300 to 2020-04-27T10:03:49.197-0300
2020-04-27 10:04:05.754 [vent.ItemStateChangedEvent] - CurrDateTime changed from 2020-04-27T10:01:00.338-0300 to 2020-04-27T10:04:05.723-0300
2020-04-27 10:04:05.761 [vent.ItemStateChangedEvent] - LocalTime_Date changed from 2020-04-27 10:01:00 BRT to 2020-04-27 10:04:05 BRT
2020-04-27 10:04:05.768 [vent.ItemStateChangedEvent] - Date changed from 2020-04-27T10:01:00.357-0300 to 2020-04-27T10:04:05.745-0300
2020-04-27 10:04:55.091 [vent.ItemStateChangedEvent] - MotionSensor_LastMotion2 changed from 2020-04-27T10:03:49.197-0300 to 2020-04-27T10:04:55.068-0300
2020-04-27 10:05:29.969 [vent.ItemStateChangedEvent] - SunElevation changed from 41.62171581714554 to 42.308286609922554
2020-04-27 10:05:30.011 [vent.ItemStateChangedEvent] - MoonElevation changed from -4.267754456742366 to -3.3596854485490386
2020-04-27 10:06:06.340 [vent.ItemStateChangedEvent] - MotionSensor_LastMotion2 changed from 2020-04-27T10:04:55.068-0300 to 2020-04-27T10:06:06.317-0300
I'm probably stating the obvious, but you are using Raspbian Jessie, which is rather old! Mstormi's suggestion is a good one: that way you would start from a known working condition, with the updated Raspbian Buster.
Then you would need to add one binding at a time and see what happens. I use the Systeminfo binding and keep track of the memory usage.
If you only have items and things, it should be easy to start from a fresh installation.
Once you find the binding (or thing) causing the memory leak, you may need to increase the logging level for that binding via the Karaf console.
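If you want a quick check outside openHAB while setting that up, a plain shell sampler shows the same trend. A minimal sketch, assuming a Linux kernel new enough to expose `MemAvailable` in `/proc/meminfo` (the log path is just an example); run it from cron every minute or so:

```shell
# Append a timestamped MemAvailable sample to a file, so a slow leak
# shows up as a clear downward trend over the hours before the crash.
sample_mem() {
  awk -v t="$(date '+%F %T')" \
    '/MemAvailable/ {printf "%s %.0f MB available\n", t, $2/1024}' /proc/meminfo
}
sample_mem >> /tmp/mem-trend.log   # /tmp path is an arbitrary choice
tail -n 1 /tmp/mem-trend.log
```

Plotting or just eyeballing that file tells you whether the drop is gradual (leak) or sudden (one misbehaving event).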
Thanks for the answers. I'm planning to start everything from scratch, installing the bindings one at a time. I'll try the Systeminfo binding to watch the memory usage. I'll keep you updated…
Just a quick update: I reinstalled the whole system, this time using openHABian, and then simply restored my configuration with the restore command.
When it first started, the system was really light, with fewer processes and less memory used. But the log messages showed the available memory steadily dropping, and after some hours the logs showed the following. The only fix I found was to restart the whole system. I'm using an RPi 3 with 1 GB of RAM; maybe switching to an RPi 4 would solve the problem?
Paulo
2020-05-12 17:20:32.888 [WARN ] [org.eclipse.jetty.server.HttpChannel] - /rest/sitemaps
javax.servlet.ServletException: javax.servlet.ServletException: org.glassfish.jersey.server.ContainerException: java.lang.OutOfMemoryError: unable to create new native thread
at org.ops4j.pax.web.service.jetty.internal.JettyServerHandlerCollection.handle(JettyServerHandlerCollection.java:88) ~[bundleFile:?]
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) ~[bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.server.Server.handle(Server.java:494) ~[bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:374) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:268) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint.onFillable(SslConnection.java:426) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:320) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.io.ssl.SslConnection$2.succeeded(SslConnection.java:158) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:367) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:782) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:918) [bundleFile:9.4.20.v20190813]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_252]
Caused by: javax.servlet.ServletException: org.glassfish.jersey.server.ContainerException: java.lang.OutOfMemoryError: unable to create new native thread
No, the problem is inside openHAB, not in openHABian or the hardware.
You could also try restarting OH right at the beginning.
Set the log level to DEBUG on org.apache.karaf to get some information during startup that might point you at possible problems.
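For reference, the Karaf console is reachable over SSH on port 8101 with user openhab (those are the openHAB defaults; verify them for your install), and the log level can be raised from the host shell; a sketch:

```shell
# Open the Karaf console and raise Karaf's own log level for the next startup.
# Port 8101 and user "openhab" are openHAB defaults -- verify for your install.
ssh -p 8101 openhab@localhost "log:set DEBUG org.apache.karaf"
ssh -p 8101 openhab@localhost "log:tail"   # follow the log live (Ctrl+C to stop)
```

Remember to set it back to the default level afterwards, or the log volume itself will weigh on a 1 GB Pi.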
Hello @mstormi, I've been following the logs all day to see if there's a message that would give me a clue where to look.
Just now openHAB started to behave unstably after this log entry:
2020-05-13 16:44:34.277 [WARN ] [.util.thread.strategy.EatWhatYouKill] -
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method) ~[?:1.8.0_252]
at java.lang.Thread.start(Thread.java:717) ~[?:1.8.0_252]
at org.eclipse.jetty.util.thread.QueuedThreadPool.startThread(QueuedThreadPool.java:641) ~[bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.util.thread.QueuedThreadPool.execute(QueuedThreadPool.java:525) ~[bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.io.SelectorManager.execute(SelectorManager.java:163) ~[bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.io.ManagedSelector.execute(ManagedSelector.java:208) ~[bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.io.ManagedSelector.destroyEndPoint(ManagedSelector.java:281) ~[bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.io.ChannelEndPoint.onClose(ChannelEndPoint.java:219) ~[bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.io.AbstractEndPoint.doOnClose(AbstractEndPoint.java:225) ~[bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.io.AbstractEndPoint.shutdownInput(AbstractEndPoint.java:107) ~[bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.io.ChannelEndPoint.fill(ChannelEndPoint.java:237) ~[bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.server.HttpConnection.fillRequestBuffer(HttpConnection.java:341) ~[bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251) ~[bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311) ~[bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) ~[bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117) ~[bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:367) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:782) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:918) [bundleFile:9.4.20.v20190813]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_252]
I entered the console to check the memory load and, surprisingly, the free RAM looks very good.
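That would actually fit the error: "unable to create new native thread" usually means the JVM hit a thread or ulimit ceiling, not that the heap ran out, so free RAM can look fine. A quick sketch to watch the JVM's thread count (the pgrep pattern "openhab" is an assumption; adjust to your install):

```shell
# Count native threads of the openHAB JVM; a count that climbs steadily while
# free RAM stays healthy points at a thread leak rather than heap exhaustion.
pid=$(pgrep -f openhab 2>/dev/null | head -n 1)
pid=${pid:-$$}                         # fall back to this shell so the command always runs
nthreads=$(ls "/proc/$pid/task" | wc -l)
echo "pid=$pid threads=$nthreads"
```

Sampling that alongside the memory log should show which binding's threads pile up; `ulimit -u` shows the per-user process/thread limit it eventually hits.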
I've had the same issue here for some weeks. I've run openHAB for 3 years now with no problems. I uninstalled some bindings, which did not help. I also see the log entries about Jetty out-of-memory errors. Updating all components has not helped so far.
@mstormi I just enabled ZRAM; I'll monitor the RPi for the next few hours to see if the problem recurs… if that doesn't work, I'll try increasing swap…
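To confirm ZRAM is actually active, a quick check (a sketch; reads only standard /proc files):

```shell
# List active swap devices; after enabling ZRAM, /dev/zram* should appear here.
cat /proc/swaps
# Summarize swap totals from /proc/meminfo (portable across procps versions)
awk '/SwapTotal|SwapFree/ {printf "%s %d MB\n", $1, $2/1024}' /proc/meminfo
```

If `/proc/swaps` still shows only the SD-card swap file, ZRAM didn't come up and a reboot or service check is needed.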