"Low configured threads" warning

Installing the latest snapshot #1369 (on Ubuntu using the official package), I get this warning on startup:

2018-09-23 11:49:07.668 [WARN ] [e.jetty.util.thread.ThreadPoolBudget] - Low configured threads: (max=8 - required=1)=7 < warnAt=8 for QueuedThreadPool[ServletModel-14]@2a7e782c{STARTING,8<=0<=8,i=0,q=0}[ReservedThreadExecutor@3e416fb8{s=0/1,p=0}]

Does anyone know what it means, or what I should do about it?

thanks,

Dan


Don’t know, but #1370 just came out and includes quite a few ESH changes.
I suggest you try that first. I also had some issues with #1369.

EDIT: I saw this on #1370 startup, too. I guess it could be related to the Karaf update. Any idea, @wborn?


I noticed the following in /var/lib/openhab2/etc/org.ops4j.pax.logging.cfg.dpkg-dist.
I (and probably you as well) refused to let the install routine replace my org.ops4j.pax.logging.cfg.

# Filters warnings about small thread pools.
# The thread pool is kept small intentionally for supporting resource constrained hardware.
log4j2.logger.threadpoolbudget.name = org.eclipse.jetty.util.thread.ThreadPoolBudget
log4j2.logger.threadpoolbudget.level = ERROR
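
For anyone else who kept their own file: a minimal way to pick up just this change (assuming the default apt layout, where the active file sits next to the .dpkg-dist copy) is to append the same logger stanza to /var/lib/openhab2/etc/org.ops4j.pax.logging.cfg and restart openHAB:

# Filters warnings about small thread pools (copied from the .dpkg-dist file).
log4j2.logger.threadpoolbudget.name = org.eclipse.jetty.util.thread.ThreadPoolBudget
log4j2.logger.threadpoolbudget.level = ERROR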

We also noticed the warning while upgrading Karaf in #761 and opted to suppress it with the logging changes you refused to apply. :wink:


You are both exactly right. Some time ago I got bored of comparing the packaged version of the log config with my own, and stopped checking altogether. My bad!

Given that my PC isn’t resource constrained, how would I go about increasing the thread pool size, and to what value?

OK, thanks. But I’m glad I refused; isn’t suppressing the warning a sort of head-in-the-sand policy?

We don’t use HTTP/2, which is what requires more threads, and there are also false positives of this warning, but the fix for those is not part of the Jetty version integrated in Karaf 4.2.1. (Reading the log line: the pool allows max=8 threads, Jetty components require 1 of them, leaving 7 available, which is below the warning threshold of warnAt=8.)

See: ThreadPoolBudget logs WARN when minThreads == maxThreads (was: Reasoning behind ThreadPoolBudget warning logic change on 3/5/18) · Issue #2798 · jetty/jetty.project · GitHub

I had the same error some time ago (I did not see it today).
I’m not sure I understand the thread. Should I do anything?
Or should I just ignore it (that seems to be what this thread is saying)?

What type of hardware are you running on? If it has many cores, see this.

I’m running it on a Pine64. If I remember correctly, that’s a quad core,
so four cores is not more than 16.

So I assume this won’t work for me.

Probably not.

After upgrading my openHAB 2.3 stable installation (RPi 3B) to 2.4 stable, I see this message in my startup log:

2018-12-23 10:52:37.373 [WARN ] [e.jetty.util.thread.ThreadPoolBudget] - Low configured threads: (max=8 - required=1)=7 < warnAt=8 for QueuedThreadPool[ServletModel-18]@9c5fbb{STARTING,8<=0<=8,i=0,q=0}[ReservedThreadExecutor@1627520{s=0/1,p=0}]

In addition, I get a lot of “503 jersey is not ready yet” messages when using Paper UI since moving to 2.4 (I never had these with 2.3). Maybe this is related to the warning?

I followed the tips from

I also see this message after upgrading to 2.4 stable (without further impact on the system).
I tried to increase the thread pool size via runtime.cfg:

org.eclipse.smarthome.webclient:minThreadsShared=20
org.eclipse.smarthome.webclient:maxThreadsShared=40
org.eclipse.smarthome.webclient:minThreadsCustom=15
org.eclipse.smarthome.webclient:maxThreadsCustom=25

I restarted openHAB, but the warning is still there.
If I understand it correctly, though (and since this is just a warning), it should be nothing to worry about, should it?

This false positive warning is specific to openHAB 2.4.0. (Note that the org.eclipse.smarthome.webclient settings size the shared HTTP client thread pools, not the Jetty server-side pool that logs this warning, which is why changing them made no difference.) If you didn’t apply the suggested logging configuration changes while upgrading, you can also suppress it by raising the log level in the Console with:

log:set error org.eclipse.jetty.util.thread.ThreadPoolBudget
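
To undo this later, log:set DEFAULT org.eclipse.jetty.util.thread.ThreadPoolBudget should reset the logger to its inherited level (assuming the usual Karaf console behaviour).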

It doesn’t help with Fix for Jetty error when running on host with many cores, which is a separate issue.

The false positive warning seems to be fixed by the upgrade to Karaf 4.2.2, which uses Jetty 9.4.12.v20180830 and is available in openHAB 2.5.0-SNAPSHOT Build #1482 (see PR #834).


I’m concerned this is less of a false positive and more of an actual issue on systems with big configurations. When I upgraded from 2.3.0 to the 2.4.0 maintenance releases (I believe I went up at M4), I noticed that my bindings and things would take a very long time to load when starting OH. According to the logs, all of my rules and items load with no issues. As soon as that happens, the ThreadPoolBudget warning comes up and everything goes sideways. I’ve seen it take as long as 10 minutes after that for my things and bindings to load.

This system has a substantial backend, so I’m not worried about resources. I’ve read through these threads and applied about 10 different configurations in runtime.cfg and quartz.properties, which fixed some of the issues, but this still happens. Everything eventually loads and the messages calm down. While the errors are flying I can see bindings loading one at a time, random things showing up in the inbox (presumably before the thing configuration loaded), and then the things attaching to them when they finally load.

The only explanation that comes to mind is that the system is starved for threads in the pool and it’s taking a long time to load everything. Are there any other thread pool configurations that could be added to mitigate this? To note, I’m also seeing the 503 errors noted by adahmen above.


Maybe you can test whether it’s solved with a 2.5.0-SNAPSHOT then? It no longer shows the warning for me after the Jetty upgrade.


I’ll cut over to 2.5 when I get some time to take the plunge. Inevitably those leaps break something, and I don’t have time to get into the weeds of fixing things right now. I was hoping to do a quick test of any additional thread pool configs before making that leap.

For reference, these are the parameters I’ve configured so far in /etc/openhab2/services/runtime.cfg:
org.eclipse.smarthome.threadpool:thingHandler
org.eclipse.smarthome.threadpool:discovery
org.eclipse.smarthome.threadpool:safeCall
org.eclipse.smarthome.threadpool:ruleEngine

org.eclipse.smarthome.webclient:minThreadsShared
org.eclipse.smarthome.webclient:maxThreadsShared
org.eclipse.smarthome.webclient:minThreadsCustom
org.eclipse.smarthome.webclient:maxThreadsCustom

And in /usr/share/openhab2/runtime/etc/quartz.properties (only because I can’t figure out the correct syntax for /etc/openhab2/services/runtime.cfg)
org.quartz.threadPool.threadCount

EDIT: Go down about 10 replies for the longer list.

Do you care to share your settings for these parameters?

I’ve tested several different variations, from very low numbers (<100) to insanely high (up to 10000), and I’ve come to the conclusion that “it depends on what you have running” is mostly the answer. Right now I believe I have the first 4 and quartz set to 1000, and the second 4 set to 50 and 500 respectively for min/max. The only negative I’ve seen with going insanely high is more memory utilization. While 1000 is definitely overkill, I’ve not otherwise been negatively affected by having it that high, and it ensures I won’t jam up during load. The OH install runs as a VM on an ESXi host with considerable resources, so memory usage was not a concern of mine throughout the testing. I’ve dialed them up and down, and the answer has effectively been “as long as you have enough threads, you don’t jam up”.
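
Put together, here is a sketch of what that might look like (using the exact values from this post; treat them as a starting point for your own setup, not a recommendation) in /etc/openhab2/services/runtime.cfg:

org.eclipse.smarthome.threadpool:thingHandler=1000
org.eclipse.smarthome.threadpool:discovery=1000
org.eclipse.smarthome.threadpool:safeCall=1000
org.eclipse.smarthome.threadpool:ruleEngine=1000
org.eclipse.smarthome.webclient:minThreadsShared=50
org.eclipse.smarthome.webclient:maxThreadsShared=500
org.eclipse.smarthome.webclient:minThreadsCustom=50
org.eclipse.smarthome.webclient:maxThreadsCustom=500

And in /usr/share/openhab2/runtime/etc/quartz.properties:

org.quartz.threadPool.threadCount = 1000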

I bit the bullet last night and pushed to 2.5.0-SNAPSHOT (Build 1486), and while the error message went away, the startup behavior has not changed. I’ve always been a fan of the motto that if someone spent the time to code the error/warning, there was probably a reason, even if that reason isn’t immediately apparent. At this point, my belief is that I’m hitting some sort of race condition or thread starvation that is causing the bindings and things to load in a congested manner. I would like to figure out what was creating the original thread warning (and setting me to 8 threads) and manually set that number higher. ESXi handles threads a little differently than a physical host would, and I can utilize additional threads through some magic voodoo that they have.
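
One untested guess for that: if the 8-thread pool in the warning is the pax-web Jetty server pool (the QueuedThreadPool[ServletModel-…] name in the log line suggests it is), pax-web has its own thread settings in etc/org.ops4j.pax.web.cfg. The property names below are pax-web’s, but I haven’t verified that they govern the per-ServletModel pools from the warning, and the values are purely illustrative:

org.ops4j.pax.web.server.minThreads = 8
org.ops4j.pax.web.server.maxThreads = 32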
