Memory overload on openHAB 2.5.4-1


After installing version 2.5.4-1, I've been having issues running it on my Raspberry Pi 3. The only way I found to work around the problem is to reboot the Raspberry Pi. After approximately 13 hours the memory usage reaches about 98%, everything runs very slowly, and within a few hours OH2 stops responding.

I'd like to ask if anyone has faced this problem and whether there's a solution other than buying a new Raspberry Pi with more memory.

Paulo

###############  raspberrypi  ##  7.05.170509  ####################
##        Ip = 192.168.15.20
##   Release = Raspbian GNU/Linux 8 (jessie)
##    Kernel = Linux 4.9.35-v7+
##  Platform = Raspberry Pi 3 Model B Rev 1.2
##    Uptime = 0 day(s). 13:6:15
## CPU Usage = 32.7% avg over 4 cpu(s) (4 core(s) x 1 socket(s))
##  CPU Load = 1m: 9.38, 5m: 7.25, 15m: 8.16
##    Memory = Free: 0.02GB (2%), Used: 0.87GB (98%), Total: 0.90GB
##      Swap = Free: 0.09GB (100%), Used: 0.00GB (0%), Total: 0.09GB
##      Root = Free: 20.41GB (77%), Used: 5.87GB (23%), Total: 27.71GB
##   Updates = 0 apt-get updates available.
##  Sessions = 3 sessions
## Processes = 161 running processes of 32768 maximum processes
###################################################################

You haven't given several details one would need to help you, such as your config and debug logs, as requested here.

I'd suggest you migrate to openHABian first:

  • Flash another SD card and use openhab-cli backup/restore to get your OH config over.
  • Enable ZRAM.
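A minimal sketch of the backup/restore step, assuming the default openHABian setup (the archive path is just an example):

```shell
# On the old installation: stop openHAB and create a backup archive
sudo systemctl stop openhab2
sudo openhab-cli backup /tmp/openhab-backup.zip

# Copy the archive to the freshly flashed openHABian card, then restore it there
sudo systemctl stop openhab2
sudo openhab-cli restore /tmp/openhab-backup.zip
sudo systemctl start openhab2
```

The backup archive contains both /etc/openhab2 and the userdata, so sitemaps, items, and things all come across in one go.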


I've been looking at the logs and they show only general activity. I use only one sitemap and about 30 devices configured in .items and .things files.

I'll post the logs FYI, and I'll try openHABian to see if there's any further issue.

I'll also flash another SD card. But I've been upgrading OH since version 1 and using it flawlessly for 2 years; there may be an underlying issue here, and I'd like to find it before resorting to a more radical solution.

Paulo

==> /var/log/openhab2/events.log <==
2020-04-27 10:00:19.813 [vent.ItemStateChangedEvent] - MotionSensor_LastMotion changed from 2020-04-27T09:35:25.773-0300 to 2020-04-27T10:00:19.777-0300
2020-04-27 10:00:40.789 [vent.ItemStateChangedEvent] - MotionSensor_LastMotion2 changed from 2020-04-27T09:59:39.337-0300 to 2020-04-27T10:00:40.774-0300
2020-04-27 10:00:59.533 [vent.ItemStateChangedEvent] - SunElevation changed from 40.79564239053302 to 41.62171581714554
2020-04-27 10:00:59.580 [vent.ItemStateChangedEvent] - MoonElevation changed from -5.33826828758231 to -4.267754456742366
2020-04-27 10:01:00.374 [vent.ItemStateChangedEvent] - CurrDateTime changed from 2020-04-27T09:57:54.160-0300 to 2020-04-27T10:01:00.338-0300
2020-04-27 10:01:00.376 [vent.ItemStateChangedEvent] - LocalTime_Date changed from 2020-04-27 09:57:54 BRT to 2020-04-27 10:01:00 BRT
2020-04-27 10:01:00.379 [vent.ItemStateChangedEvent] - Date changed from 2020-04-27T09:57:54.180-0300 to 2020-04-27T10:01:00.357-0300
2020-04-27 10:01:43.319 [vent.ItemStateChangedEvent] - MotionSensor_LastMotion2 changed from 2020-04-27T10:00:40.774-0300 to 2020-04-27T10:01:43.298-0300
2020-04-27 10:02:19.799 [vent.ItemStateChangedEvent] - MotionSensor_MotionStatus changed from ON to OFF
2020-04-27 10:02:43.576 [vent.ItemStateChangedEvent] - MotionSensor_LastMotion2 changed from 2020-04-27T10:01:43.298-0300 to 2020-04-27T10:02:43.558-0300
==> /var/log/openhab2/openhab.log <==
2020-04-27 10:03:39.207 [INFO ] [g.miio.internal.cloud.CloudConnector] - No Xiaomi cloud credentials. Cloud connectivity disabled
==> /var/log/openhab2/events.log <==
2020-04-27 10:03:49.217 [vent.ItemStateChangedEvent] - MotionSensor_LastMotion2 changed from 2020-04-27T10:02:43.558-0300 to 2020-04-27T10:03:49.197-0300
2020-04-27 10:04:05.754 [vent.ItemStateChangedEvent] - CurrDateTime changed from 2020-04-27T10:01:00.338-0300 to 2020-04-27T10:04:05.723-0300
2020-04-27 10:04:05.761 [vent.ItemStateChangedEvent] - LocalTime_Date changed from 2020-04-27 10:01:00 BRT to 2020-04-27 10:04:05 BRT
2020-04-27 10:04:05.768 [vent.ItemStateChangedEvent] - Date changed from 2020-04-27T10:01:00.357-0300 to 2020-04-27T10:04:05.745-0300
2020-04-27 10:04:55.091 [vent.ItemStateChangedEvent] - MotionSensor_LastMotion2 changed from 2020-04-27T10:03:49.197-0300 to 2020-04-27T10:04:55.068-0300
2020-04-27 10:05:29.969 [vent.ItemStateChangedEvent] - SunElevation changed from 41.62171581714554 to 42.308286609922554
2020-04-27 10:05:30.011 [vent.ItemStateChangedEvent] - MoonElevation changed from -4.267754456742366 to -3.3596854485490386
2020-04-27 10:06:06.340 [vent.ItemStateChangedEvent] - MotionSensor_LastMotion2 changed from 2020-04-27T10:04:55.068-0300 to 2020-04-27T10:06:06.317-0300

I'm probably stating the obvious, but you are using Raspbian jessie, which is rather old! Mstormi's suggestion is a good one: that way you would start from a known working state, with the updated Raspbian buster.

Then add one binding at a time and see what happens. I use the systeminfo binding to keep track of memory usage.
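For reference, tracking memory with the systeminfo binding only takes a couple of Items; this is a sketch of an .items fragment, assuming a systeminfo Thing with UID systeminfo:computer:local (the Thing UID and channel names may differ depending on your setup and binding version):

```
// Hypothetical .items fragment for the systeminfo binding
Number Sysinfo_Memory_Available    "Available memory [%.1f MB]"  { channel="systeminfo:computer:local:memory#available" }
Number Sysinfo_Memory_AvailPct     "Available memory [%.1f %%]"  { channel="systeminfo:computer:local:memory#availablePercent" }
```

With persistence enabled on these Items you can chart the memory trend and see which binding installation makes it start dropping.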

If you only have Items and Things, it should be easy to start from a fresh installation.

Once you find the binding (or Thing) causing the memory leak, you may have to increase the logging level for that binding via the Karaf console.
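The Karaf console is reachable from the host via SSH on port 8101 (the default openHABian credentials apply); the binding name below is only an example, substitute the one you suspect:

```shell
ssh -p 8101 openhab@localhost                # open the Karaf console

# inside the console:
log:set DEBUG org.openhab.binding.miio       # raise the log level for one binding
log:tail                                     # follow the log live; Ctrl+C to stop
```

log:set takes effect immediately, no restart needed, so you can flip a binding to DEBUG only while you are watching it.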


Thanks for the answers. I'm planning to start everything from scratch, installing the bindings one at a time. I'll try the systeminfo binding to watch the memory usage. I'll keep you updated…

Just a quick update: I reinstalled the whole system, this time using openHABian. Then I simply restored the system using the restore command.

When I first started, the system was really light, with fewer processes and less memory used. But the log messages showed the available memory dropping, and after some hours the logs showed the following. The only fix I found was to restart the whole system. I'm using an RPi 3 with 1GB RAM; would switching to an RPi 4 maybe solve the problem?

Paulo

2020-05-12 17:20:32.888 [WARN ] [org.eclipse.jetty.server.HttpChannel] - /rest/sitemaps
javax.servlet.ServletException: javax.servlet.ServletException: org.glassfish.jersey.server.ContainerException: java.lang.OutOfMemoryError: unable to create new native thread
    at org.ops4j.pax.web.service.jetty.internal.JettyServerHandlerCollection.handle(JettyServerHandlerCollection.java:88) ~[bundleFile:?]
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) ~[bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.server.Server.handle(Server.java:494) ~[bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:374) [bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:268) [bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311) [bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) [bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint.onFillable(SslConnection.java:426) [bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:320) [bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.io.ssl.SslConnection$2.succeeded(SslConnection.java:158) [bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) [bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117) [bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336) [bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313) [bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171) [bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129) [bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:367) [bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:782) [bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:918) [bundleFile:9.4.20.v20190813]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_252]
Caused by: javax.servlet.ServletException: org.glassfish.jersey.server.ContainerException: java.lang.OutOfMemoryError: unable to create new native thread

Just FYI, this is my first screen after rebooting:

###############  openhab  #########################################
##        Ip = 192.168.15.20
##   Release = Raspbian GNU/Linux 10 (buster)
##    Kernel = Linux 4.19.97-v7+
##  Platform = Raspberry Pi 3 Model B Rev 1.2
##    Uptime = 0 day(s). 0:6:45
## CPU Usage = 2.28% avg over 4 cpu(s) (4 core(s) x 1 socket(s))
##  CPU Load = 1m: 0.05, 5m: 0.27, 15m: 0.18
##    Memory = Free: 0.38GB (41%), Used: 0.56GB (59%), Total: 0.95GB
##      Swap = Free: 0.09GB (100%), Used: 0.00GB (0%), Total: 0.09GB
##      Root = Free: 109.70GB (97%), Used: 2.52GB (3%), Total: 117.01GB
##   Updates = 0 apt updates available.
##  Sessions = 1 session(s)
## Processes = 112 running processes of 32768 maximum processes

No, the problem is inside openHAB, not in openHABian or the hardware.
If necessary, restart OH right at the beginning.
Set log level DEBUG on org.apache.karaf to get some information during startup that might point you at possible problems.


I've set the DEBUG level as suggested, but I haven't seen anything unusual so far:

2020-05-13 10:25:19.265 [DEBUG] [mpl.info.InfoBundleTrackerCustomizer] - Ignore incorrect info null provided by bundle org.openhab.io.openhabcloud
2020-05-13 10:25:19.307 [DEBUG] [mpl.info.InfoBundleTrackerCustomizer] - Ignore incorrect info null provided by bundle org.openhab.io.webaudio
2020-05-13 10:25:20.113 [DEBUG] [mpl.info.InfoBundleTrackerCustomizer] - Ignore incorrect info null provided by bundle org.openhab.ui.basic
2020-05-13 10:25:20.658 [DEBUG] [mpl.info.InfoBundleTrackerCustomizer] - Ignore incorrect info null provided by bundle org.openhab.ui.classic
2020-05-13 10:25:20.988 [INFO ] [ui.habmin.internal.servlet.HABminApp] - Started HABmin servlet at /habmin
2020-05-13 10:25:21.018 [DEBUG] [mpl.info.InfoBundleTrackerCustomizer] - Ignore incorrect info null provided by bundle org.openhab.ui.habmin
2020-05-13 10:25:21.245 [INFO ] [panel.internal.HABPanelDashboardTile] - Started HABPanel at /habpanel
2020-05-13 10:25:21.263 [DEBUG] [mpl.info.InfoBundleTrackerCustomizer] - Ignore incorrect info null provided by bundle org.openhab.ui.habpanel
2020-05-13 10:25:21.332 [DEBUG] [mpl.info.InfoBundleTrackerCustomizer] - Ignore incorrect info null provided by bundle org.openhab.ui.iconset.classic
2020-05-13 10:25:21.559 [INFO ] [openhab.ui.paper.internal.PaperUIApp] - Started Paper UI at /paperui
2020-05-13 10:25:21.571 [DEBUG] [mpl.info.InfoBundleTrackerCustomizer] - Ignore incorrect info null provided by bundle org.openhab.ui.paper
2020-05-13 10:25:21.579 [DEBUG] [mpl.info.InfoBundleTrackerCustomizer] - Ignore incorrect info null provided by bundle org.reactivestreams.reactive-streams
2020-05-13 10:25:21.648 [DEBUG] [mpl.info.InfoBundleTrackerCustomizer] - Ignore incorrect info null provided by bundle org.openhab.binding.mqtt.generic
2020-05-13 10:25:22.001 [DEBUG] [mpl.info.InfoBundleTrackerCustomizer] - Ignore incorrect info null provided by bundle org.openhab.binding.mqtt.homeassistant
2020-05-13 10:25:22.143 [DEBUG] [mpl.info.InfoBundleTrackerCustomizer] - Ignore incorrect info null provided by bundle org.openhab.binding.mqtt.homie

Hello @mstormi, I've been following the logs all day long to see if there's a message that would give me a clue where to look.

Just now openHAB started to behave unstably after this log entry:

2020-05-13 16:44:34.277 [WARN ] [.util.thread.strategy.EatWhatYouKill] -
java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method) ~[?:1.8.0_252]
    at java.lang.Thread.start(Thread.java:717) ~[?:1.8.0_252]
    at org.eclipse.jetty.util.thread.QueuedThreadPool.startThread(QueuedThreadPool.java:641) ~[bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.util.thread.QueuedThreadPool.execute(QueuedThreadPool.java:525) ~[bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.io.SelectorManager.execute(SelectorManager.java:163) ~[bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.io.ManagedSelector.execute(ManagedSelector.java:208) ~[bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.io.ManagedSelector.destroyEndPoint(ManagedSelector.java:281) ~[bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.io.ChannelEndPoint.onClose(ChannelEndPoint.java:219) ~[bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.io.AbstractEndPoint.doOnClose(AbstractEndPoint.java:225) ~[bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.io.AbstractEndPoint.shutdownInput(AbstractEndPoint.java:107) ~[bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.io.ChannelEndPoint.fill(ChannelEndPoint.java:237) ~[bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.server.HttpConnection.fillRequestBuffer(HttpConnection.java:341) ~[bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251) ~[bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311) ~[bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) ~[bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117) ~[bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336) [bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313) [bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171) [bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129) [bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:367) [bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:782) [bundleFile:9.4.20.v20190813]
    at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:918) [bundleFile:9.4.20.v20190813]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_252]
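The key hint in this trace is "unable to create new native thread": that is thread exhaustion (the JVM cannot start another OS thread), which can happen even while plenty of RAM is still free. A quick way to check, assuming the openHABian default where openHAB runs as the openhab user:

```shell
# Find the openHAB JVM and count its threads (nlwp = number of lightweight processes)
OH_PID=$(pgrep -u openhab java | head -n 1)
ps -o nlwp= -p "$OH_PID"

# Compare against the per-user process/thread limit (run as the openhab user for its limit)
ulimit -u
```

If the thread count keeps climbing over the hours until it approaches the limit, some binding is leaking threads rather than heap.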

I went to the console to check the memory load and, surprisingly, the amount of free RAM is quite good:

###################################################################
##        Ip = 192.168.15.20
##   Release = Raspbian GNU/Linux 10 (buster)
##    Kernel = Linux 4.19.97-v7+
##  Platform = Raspberry Pi 3 Model B Rev 1.2
##    Uptime = 0 day(s). 6:20:22
## CPU Usage = 26.95% avg over 4 cpu(s) (4 core(s) x 1 socket(s))
##  CPU Load = 1m: 1.87, 5m: 2.10, 15m: 1.75
##    Memory = Free: 0.29GB (31%), Used: 0.65GB (69%), Total: 0.95GB
##      Swap = Free: 0.09GB (100%), Used: 0.00GB (0%), Total: 0.09GB
##      Root = Free: 109.66GB (97%), Used: 2.56GB (3%), Total: 117.01GB
##   Updates = 0 apt updates available.
##  Sessions = 2 session(s)
## Processes = 110 running processes of 32768 maximum processes
###################################################################

I have no idea how to proceed; any help will be appreciated!

Paulo

Hmm, it seems Jetty (the web server) is running out of memory.
Double-check that

  • java runs with -Xms and -Xmx (ps -ef)
  • swap is on (swapon)

Clear the cache again.

Enable ZRAM if you have not already.
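A sketch of those checks as shell commands, assuming the default openhab2 service name:

```shell
# 1. Check which -Xms/-Xmx heap flags the JVM was started with
ps -ef | grep -o '\-Xm[sx][0-9]*[kKmMgG]' | sort -u

# 2. Verify swap is active
swapon --show

# 3. Clear the openHAB cache (stop the service first)
sudo systemctl stop openhab2
sudo openhab-cli clean-cache
sudo systemctl start openhab2
```

After clean-cache the first startup is slow while bundles are re-resolved; that is expected.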


I've had the same issue for some weeks now. I've run openHAB for 3 years with no problems. Uninstalling some bindings did not help. I also see the log entries about Jetty getting out-of-memory errors. Updating all components hasn't helped so far.


@snapjack this issue started after I upgraded OH to version 2.5.4-1; I have to reboot every 9-10 hours… still struggling with this problem…

Did you attempt my proposals?
Things to try other than those:

  • move to a snapshot build
  • identify the component that probably causes the memory leak by deactivating some of the bindings
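Deactivating bindings one at a time for testing can be done from the Karaf console without uninstalling anything; the bundle ID below is hypothetical, use whatever ID bundle:list prints for the suspect:

```shell
# inside the Karaf console (ssh -p 8101 openhab@localhost)
bundle:list | grep -i binding     # find the numeric ID of each installed binding
bundle:stop 123                   # hypothetical ID; stops that binding
bundle:start 123                  # re-enable it once you've observed the effect
```

Stop one binding, watch the memory trend for a few hours, restart it, and move to the next: the leak should disappear with exactly one of them stopped.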

Do you have the miio binding enabled?

@Simsal, yes I do!

@mstormi I didn't enable ZRAM because of all the warnings in openhabian-config, but I'll do it right away.

swapon shows that the /var/swap file has a size of 100M with 0 used.

  • I listed the processes (ps -ef), but I couldn't find the -Xms/-Xmx information for java:

    UID        PID  PPID  C STIME TTY          TIME CMD
    root         1     0  0 06:19 ?        00:00:06 /sbin/init
    root         2     0  0 06:19 ?        00:00:00 [kthreadd]
    root         3     2  0 06:19 ?        00:00:00 [rcu_gp]
    root         4     2  0 06:19 ?        00:00:00 [rcu_par_gp]
    root         8     2  0 06:19 ?        00:00:00 [mm_percpu_wq]
    root         9     2  0 06:19 ?        00:00:02 [ksoftirqd/0]
    root        10     2  0 06:19 ?        00:00:16 [rcu_sched]
    root        11     2  0 06:19 ?        00:00:00 [rcu_bh]
    root        12     2  0 06:19 ?        00:00:00 [migration/0]
    root        13     2  0 06:19 ?        00:00:00 [cpuhp/0]
    root        14     2  0 06:19 ?        00:00:00 [cpuhp/1]
    root        15     2  0 06:19 ?        00:00:00 [migration/1]
    root        16     2  0 06:19 ?        00:00:00 [ksoftirqd/1]
    root        19     2  0 06:19 ?        00:00:00 [cpuhp/2]
    root        20     2  0 06:19 ?        00:00:00 [migration/2]
    root        21     2  0 06:19 ?        00:00:00 [ksoftirqd/2]
    root        24     2  0 06:19 ?        00:00:00 [cpuhp/3]
    root        25     2  0 06:19 ?        00:00:00 [migration/3]
    root        26     2  0 06:19 ?        00:00:01 [ksoftirqd/3]
    root        29     2  0 06:19 ?        00:00:00 [kdevtmpfs]
    root        30     2  0 06:19 ?        00:00:00 [netns]
    root        32     2  0 06:19 ?        00:00:03 [kworker/1:1-events]
    root        34     2  0 06:19 ?        00:00:00 [khungtaskd]
    root        35     2  0 06:19 ?        00:00:00 [oom_reaper]
    root        36     2  0 06:19 ?        00:00:00 [writeback]
    root        37     2  0 06:19 ?        00:00:00 [kcompactd0]
    root        38     2  0 06:19 ?        00:00:00 [crypto]
    root        39     2  0 06:19 ?        00:00:00 [kblockd]
    root        40     2  0 06:19 ?        00:00:00 [watchdogd]
    root        42     2  0 06:19 ?        00:00:00 [rpciod]
    root        43     2  0 06:19 ?        00:00:00 [kworker/u9:0-hci0]
    root        44     2  0 06:19 ?        00:00:00 [xprtiod]
    root        47     2  0 06:19 ?        00:00:00 [kswapd0]
    root        48     2  0 06:19 ?        00:00:00 [nfsiod]
    root        59     2  0 06:19 ?        00:00:00 [kthrotld]
    root        60     2  0 06:19 ?        00:00:00 [iscsi_eh]
    root        61     2  0 06:19 ?        00:00:00 [dwc_otg]
    root        62     2  0 06:19 ?        00:00:00 [DWC Notificatio]
    root        63     2  0 06:19 ?        00:00:00 [vchiq-slot/0]
    root        64     2  0 06:19 ?        00:00:00 [vchiq-recy/0]
    root        65     2  0 06:19 ?        00:00:00 [vchiq-sync/0]
    root        66     2  0 06:19 ?        00:00:00 [vchiq-keep/0]
    root        68     2  0 06:19 ?        00:00:00 [irq/86-mmc1]
    root        71     2  0 06:19 ?        00:00:00 [mmc_complete]
    root        75     2  0 06:19 ?        00:00:01 [jbd2/mmcblk0p2-]
    root        76     2  0 06:19 ?        00:00:00 [ext4-rsv-conver]
    root        77     2  0 06:19 ?        00:00:00 [kworker/1:2H-kblockd]
    root        78     2  0 06:19 ?        00:00:01 [kworker/2:1H-kblockd]
    root        80     2  0 06:19 ?        00:00:00 [ipv6_addrconf]
    root        96     2  0 06:19 ?        00:00:00 [kworker/3:2H-kblockd]
    root        97     2  0 06:19 ?        00:00:01 [kworker/0:2H+kblockd]
    root       102     1  0 06:19 ?        00:00:01 /lib/systemd/systemd-journald
    root       146     1  0 06:19 ?        00:00:01 /lib/systemd/systemd-udevd
    root       209     2  0 06:19 ?        00:00:00 [cfg80211]
    root       211     2  0 06:19 ?        00:00:00 [brcmf_wq/mmc1:0]
    root       213     2  0 06:19 ?        00:00:00 [brcmf_wdog/mmc1]
    systemd+   252     1  0 06:19 ?        00:00:00 /lib/systemd/systemd-timesyncd
    root       295     1  0 06:19 ?        00:00:00 /lib/systemd/systemd-logind
    message+   296     1  0 06:19 ?        00:00:00 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
    root       297     1  0 06:19 ?        00:00:00 /usr/sbin/alsactl -E HOME=/run/alsa -s -n 19 -c rdaemon
    avahi      298     1  0 06:19 ?        00:00:07 avahi-daemon: running [openhabian.local]
    root       304     1  0 06:19 ?        00:00:00 /sbin/wpa_supplicant -u -s -O /run/wpa_supplicant
    root       309     1  0 06:19 ?        00:00:00 /usr/sbin/cron -f
    root       314     1  0 06:19 ?        00:00:09 /usr/sbin/rngd -r /dev/hwrng
    avahi      315   298  0 06:19 ?        00:00:00 avahi-daemon: chroot helper
    root       323     1  0 06:19 ?        00:00:00 /usr/sbin/rsyslogd -n -iNONE
    root       386     1  0 06:19 ?        00:00:00 wpa_supplicant -B -c/etc/wpa_supplicant/wpa_supplicant.conf -iwlan0 -Dnl80211,wext
    root       407     1  0 06:20 ?        00:00:00 /usr/bin/hciattach /dev/serial1 bcm43xx 921600 noflow - b8:27:eb:81:e2:a7
    root       411     2  0 06:20 ?        00:00:00 [kworker/u9:2-hci0]
    root       412     1  0 06:20 ?        00:00:00 /usr/lib/bluetooth/bluetoothd
    root       482     1  0 06:20 ?        00:00:06 /sbin/dhcpcd -q -w
    root       485     1  0 06:20 ?        00:00:00 /usr/bin/python3 /usr/share/unattended-upgrades/unattended-upgrade-shutdown --wait-for-signal
    openhab    486     1 27 06:20 ?        02:23:30 /usr/bin/java -Dopenhab.home=/usr/share/openhab2 -Dopenhab.conf=/etc/openhab2 -Dopenhab.runtime=/usr/share/openhab2/runt
    openhab    487     1  0 06:20 ?        00:00:11 node /usr/lib/node_modules/frontail/bin/frontail --ui-highlight --ui-highlight-preset /usr/lib/node_modules/frontail/pre
    root       488     1  0 06:20 ?        00:00:09 /usr/sbin/nmbd --foreground --no-process-group
    mosquit+   489     1  0 06:20 ?        00:00:28 /usr/sbin/mosquitto -c /etc/mosquitto/mosquitto.conf
    root       497     1  0 06:20 tty1     00:00:00 /sbin/agetty -o -p -- \u --noclear tty1 linux
    root       535     1  0 06:20 ?        00:00:00 /usr/sbin/sshd -D
    root       628     1  0 06:20 ?        00:00:00 /usr/sbin/smbd --foreground --no-process-group
    root       654   628  0 06:20 ?        00:00:00 /usr/sbin/smbd --foreground --no-process-group
    root       655   628  0 06:20 ?        00:00:00 /usr/sbin/smbd --foreground --no-process-group
    root       660   628  0 06:20 ?        00:00:00 /usr/sbin/smbd --foreground --no-process-group
    openhab    724   487  0 06:20 ?        00:00:00 tail -n 200 -F /var/log/openhab2/openhab.log /var/log/openhab2/events.log
    root       824     2  0 13:58 ?        00:00:00 [kworker/0:1H]
    root      1332     2  0 14:05 ?        00:00:00 [kworker/1:1H]
    root      1400     2  0 14:06 ?        00:00:00 [kworker/u8:1-events_unbound]
    root      1803     2  0 14:12 ?        00:00:00 [kworker/2:0-mm_percpu_wq]
    root      1818     2  0 14:12 ?        00:00:00 [kworker/1:2-cgroup_destroy]
    root      1983     2  0 14:14 ?        00:00:01 [kworker/0:0-events]
    root      2416     2  0 14:21 ?        00:00:00 [kworker/3:1H]
    root      3177     2  0 14:33 ?        00:00:00 [kworker/0:2-events_power_efficient]
    root      4554     2  0 14:54 ?        00:00:00 [kworker/3:0-mm_percpu_wq]
    root      4615     2  0 14:55 ?        00:00:00 [kworker/2:0H]
    root      4927     2  0 14:59 ?        00:00:00 [kworker/3:1-mm_percpu_wq]
    root      5235     2  0 15:05 ?        00:00:00 [kworker/u8:0]
    root      5273     2  0 15:05 ?        00:00:00 [kworker/2:2H]
    root      5277   535  1 15:05 ?        00:00:00 sshd: openhabian [priv]
    root      5300     2  0 15:05 ?        00:00:00 [kworker/3:2-events]
    root      5308     2  0 15:05 ?        00:00:00 [kworker/2:2-rcu_gp]
    openhab+  5309     1  2 15:05 ?        00:00:00 /lib/systemd/systemd --user
    openhab+  5312  5309  0 15:05 ?        00:00:00 (sd-pam)
    openhab+  5326  5277  0 15:05 ?        00:00:00 sshd: openhabian@pts/0
    openhab+  5329  5326  3 15:05 pts/0    00:00:00 -bash
    root      5727     2  0 15:05 ?        00:00:00 [kworker/0:1]
    root      5730     2  0 15:05 ?        00:00:00 [kworker/0:0H]
    openhab+  5731  5329  0 15:05 pts/0    00:00:00 ps -ef
    root     12855     1  0 09:09 ?        00:00:00 /usr/lib/policykit-1/polkitd --no-debug
    root     15152     2  0 09:24 ?        00:00:01 [kworker/2:1-rcu_gp]
    root     32684     2  0 13:51 ?        00:00:00 [kworker/u8:2-events_unbound]
    

I'll post the results as soon as I get any updates.

Thanks in advance!

Paulo

Duh, so you've got about 1.1GB in total (RAM plus swap)… that's not enough. Increase swap if you can (you can add another swap file). But ZRAM is even better.
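On Raspbian/openHABian the swap file is managed by dphys-swapfile, so growing it is a one-line config change (512 MB here is just an example size):

```shell
# Raise CONF_SWAPSIZE (in MB) in /etc/dphys-swapfile, then rebuild the swap file
sudo sed -i 's/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=512/' /etc/dphys-swapfile
sudo systemctl restart dphys-swapfile
swapon --show      # confirm the new size is active
```

Note that swap on an SD card is slow and wears the card, which is part of why ZRAM (compressed swap in RAM) is preferred here.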


Please read Marcel’s recommendations.
Another user tracked it down to the new miio cloud connection and a faulty configuration.


@mstormi I just enabled ZRAM; I'll be monitoring the RPi for the next few hours to see if there's any problem… if it doesn't work, I'll try increasing swap…