High CPU usage after migration to OH4

What language are your rules written in? File based or UI rules?
How big is your set up? How many Things, Items and rules?
What version did you upgrade from?


My rules are file based. My setup has 144 Things, 876 Items and 7 rules files. It is a bit strange: at the beginning OH has a usage of about 10%, but after a while it increases to 70%.

best regards René


Can you change the display option in htop to show the thread names?
That makes it easier to figure out what is causing the load.
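If htop is not handy, per-thread CPU usage can also be inspected from a plain shell. A minimal sketch (the `pgrep` pattern is an assumption about how the openHAB process is named on your system; thread names like upnp-main-queue come from later posts in this thread):

```shell
# Show per-thread CPU usage for the openHAB JVM.
# In htop itself: F2 -> Display options -> enable "Show custom thread names".
PID=$(pgrep -f openhab | head -n 1)
PID=${PID:-$$}   # fall back to the current shell if openHAB isn't running

# -L lists individual threads; comm holds the (possibly custom) thread name.
# Sort by the per-thread CPU column to see the busiest threads first.
ps -L -o tid,comm,pcpu -p "$PID" | sort -k3 -rn | head -n 15
```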


check out this thread:

Sounds pretty similar to what I encountered and resolved.


I already have the latest openHABian version installed, so the Java option EXTRA_JAVA_OPTS="-Xms192m -Xmx768m" was already present in the openhab file. I have changed the htop configuration. The main processes are now SocketListener and, since yesterday, also upnp-main-queue. With the old Java 11 and OH 3.4.4 I had an average load of 4% this year.

best regards René


I think you have the same issue as @laursen in:


Yes, at least the same OH4 process. So as I understand it, there is currently no fix available for this? I am running the 4.0.1 stable release.

best regards René

There’s no fix yet. What add-ons do you have installed?


binding = astro,harmonyhub,homeconnect,hue,http,icalendar,knx,kostalinverter,logreader,livisismarthome,luxtronikheatpump,mail,miio,mqtt,nanoleaf,netatmo,ntp,openweathermap,opensprinkler,renault,sonos,spotify,systeminfo,tankerkoenig,tr064,yamahareceiver

misc = openhabcloud

persistence = jdbc-mysql

transformation = jsonpath,map,xslt

ui = basic,cometvisu,habpanel,habot

best regards René

From that list I also have astro, harmonyhub, hue, http, mqtt, netatmo, sonos.

And from this list I would mostly suspect sonos. I also have wemo using a similar UPnP implementation.

I myself use astro, homeconnect, netatmo, ntp, openweathermap, sonos.
As my CPU usage is still very low on an RPi 3, I would exclude all of them.
I also use the hue binding, but with the old API, so I would not exclude it in case you are using the new API v2.

With UPnP, remember that the size of the UPnP network is very important, not necessarily the size of what's configured. jupnp tracks ALL UPnP devices on the network once it's active, even if they aren't configured as Things. It may be prudent to dial up the jupnp thread pools to see if there's a contention issue.

Someone stated that the high CPU could be due to a bug in the JS script engine.
If you use JSScripting, it is worth uninstalling it temporarily to confirm whether that solves the problem.
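If your add-ons are managed through config files rather than the UI, uninstalling amounts to removing the entry from the automation line in $OPENHAB_CONF/services/addons.cfg. A sketch (jrubyscripting here is just a placeholder for whatever other automation add-ons you have installed):

```
# $OPENHAB_CONF/services/addons.cfg
# Before: automation = jsscripting, jrubyscripting
# After, with JSScripting removed:
automation = jrubyscripting
```

Saving the file makes openHAB uninstall the removed add-on; adding it back later reinstalls it.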

FWIW I don’t seem to have this problem all the time. I had it in the afternoon when checking, but somehow it resolved itself - for now (without a restart). So maybe you are excluding them too fast, I’m not sure.

The reason why I mentioned sonos specifically is because of the thread name upnp-main-queue and the usage of openhab-transport-upnp in sonos, which I remembered.

Here is the full list:

  • autelis
  • avmfritz
  • deconz
  • fsinternetradio
  • heos
  • homematic
  • hue
  • kodi
  • konnected
  • lametrictime
  • lgwebos
  • loxone
  • magentatv
  • miele
  • onkyo
  • openwebnet
  • pioneeravr
  • pulseaudio
  • samsungtv
  • sonos
  • sonyaudio
  • squeezebox
  • upnpcontrol
  • wemo
  • yamahamusiccast
  • yamahareceiver
  • hueemulation

You are right about hue, it’s also on the list, and common between installed bindings for @rene54321 and me. I very recently migrated to API v2. However, I think hue only uses UPnP for discovery, so there shouldn’t be any difference between using API v1 and v2?

From the full list I use deconz, hue, kodi, lgwebos, miele, samsungtv, sonos, squeezebox and wemo.

Start by removing samsungtv; it is known to be problematic.

Okay, I could temporarily modify some rules in order to remove dependencies on channels from this binding, but out of curiosity: can you share a bit more on this? I might have a closer look at this binding then.

I’ve been battling this issue since April, first reported here and then in the safeCall thread here.

At one point I found a JRuby rule with a syntax error. At that point the CPU was pinning at 100% every half hour or 45 minutes. Disabling that rule calmed things down a little, but it would still happen once or twice a day. I’ve since disabled most of my rules, but it still happens.
For a while I was setting safeCall to 100 as described here by Cody, but it did not survive reboots and seemed only to put the problem off for a while, not cure it.
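To make a safeCall pool-size change survive reboots, openHAB's thread pool sizes can in principle be set through configuration instead of at runtime. A sketch, assuming the thread pool configuration PID is org.openhab.threadpool (that PID name is from memory; verify it against the commented template in your runtime.cfg before relying on it):

```
# $OPENHAB_CONF/services/runtime.cfg
# Raise the safeCall thread pool size persistently.
# The PID "org.openhab.threadpool" is an assumption -- check your runtime.cfg template.
org.openhab.threadpool:safeCall=100
```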
Jan also mentioned that a bug in the RRD persistence service (since fixed) might be the culprit, but I’ve disabled all persistence services and the problem persists.
I use the following bindings:
Amazon Echo
IP camera
Hue (still on V1)

Apt-installed OH on a Dell desktop with Linux Mint 19, i3, 8 GB RAM, 1 TB spinning HD.
DSL and JRuby rules, all UI based, no file-configured anything.
Heck, I just got home from work and it’s pinned right now!

I have read that it is only spikes, meaning the CPU is at 100% at only a few moments in the day. So maybe I also have it but do not see it.

Could it be worth disabling all persistence services, in case the problem comes from one of them?

What is the easiest solution to monitor CPU usage with chart? Is it to install the systeminfo binding?
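One option is indeed the systeminfo binding plus persistence. A minimal sketch of an Items file (the Thing id local and the channel id cpu#load are assumptions from memory; check the systeminfo binding docs for the exact ids in your binding version):

```
// $OPENHAB_CONF/items/sysmon.items
// CPU load in percent, linked to a systeminfo "computer" Thing.
// Thing id "local" and channel "cpu#load" are assumed -- verify in the binding docs.
Number CPU_Load "CPU load [%.1f %%]" { channel="systeminfo:computer:local:cpu#load" }
```

With your existing jdbc-mysql persistence storing CPU_Load (e.g. everyMinute), the Item can then be charted from the UI's analyze view.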

For me that is not the case. When it happens, the CPU pins at 100% and stays there until I do something about it: either I restart openHAB or I run a script Cody wrote, found in this post.

Here is the script. I just use the run now button to run it

# Expose ThreadPoolManager's private "pools" field to JRuby
org.openhab.core.common.ThreadPoolManager.field_reader :pools
tp = org.openhab.core.common.ThreadPoolManager.pools["safeCall"]

# Flood the safeCall pool so it spins up fresh workers and drains its backlog
def unblock_thread_pool(tp)
  (tp.maximum_pool_size + 1).times do
    tp.submit { sleep 1 }
  end
end

unblock_thread_pool(tp)