I noticed the following in /var/lib/openhab2/etc/org.ops4j.pax.logging.cfg.dpkg-dist.
I (and probably you as well) refused to let the install routine replace my org.ops4j.pax.logging.cfg.
# Filters warnings about small thread pools.
# The thread pool is kept small intentionally for supporting resource constrained hardware.
log4j2.logger.threadpoolbudget.name = org.eclipse.jetty.util.thread.ThreadPoolBudget
log4j2.logger.threadpoolbudget.level = ERROR
You are both exactly right. I got bored of comparing the package version of the log cfg with my version some time ago and stopped checking altogether. My bad!
Given that my PC isn’t resource-constrained, how would I go about increasing the ThreadPoolBudget, and to what value?
We don’t use HTTP/2, which requires more threads, and there are known false positives of this warning, but the fix for those is not part of the Jetty version integrated in Karaf 4.2.1.
I had the same error some time ago. (I did not see it today)
I’m not sure I understand the thread. Should I do anything?
Should I just ignore it? (That seems to be what this thread is saying.)
In addition, I get a lot of “503 jersey is not ready yet” messages when using PaperUI since I’m on 2.4 (I never had these with 2.3). Maybe this is related to the warning?
I restarted openHAB, but the warning is still there.
But if I got it right (and as this is just a warning) this should be nothing to worry about, should it?
This false positive warning is specific to openHAB 2.4.0. If you didn’t apply the suggested logging configuration changes (while upgrading), you can also suppress it by raising the log level in the Console with:
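Something along these lines should do it, reusing the logger name from the snippet above (this is the standard Karaf log:set syntax, so double-check it against your console's help if it complains):

openhab> log:set ERROR org.eclipse.jetty.util.thread.ThreadPoolBudget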
The false positive warning seems to be fixed by the upgrade to Karaf 4.2.2, which uses Jetty 9.4.12.v20180830 and is available in openHAB 2.5.0-SNAPSHOT Build #1482 (see PR #834).
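If you want to verify which Jetty version your install actually runs, the Karaf console can show it; bundle:list with -t 0 includes the low start-level bundles, and the exact output format varies a bit between Karaf versions:

openhab> bundle:list -t 0 | grep -i jetty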
I’m concerned this is less of a false positive and more of an actual issue on systems with big configurations. When I upgraded from 2.3.0 to the 2.4.0 maintenance releases (I believe I went up at M4) I noticed that my bindings and things would take a very long time to load when starting OH.

According to the logs, all of my rules and items load with no issues. As soon as that happens, the ThreadPoolBudget warning comes up and everything goes sideways. I’ve seen it take as long as 10 minutes after that for my things and bindings to load. This system has a substantial backend, so I’m not worried about resources.

I’ve read through these threads and applied about 10 different configurations in runtime.cfg and quartz.properties, which fixed some of the issues, but this still happens. Everything eventually loads and the messages calm down. While the errors are flying I can see bindings loading one at a time, random things showing up in the inbox (presumably before the thing configuration loaded), and then the things attaching to them when they finally load.

The only thing that comes to mind is that the system is starved for threads in the pool and it’s taking a long time to load everything. Are there any other thread pool configurations that could be added to mitigate this? To note, I’m also seeing the 503 errors noted by adahmen above.
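If it is thread starvation, one rough way to check (plain JDK tooling, nothing openHAB-specific; the “ESH-” thread name prefix is my assumption about how the SmartHome pools name their worker threads) is to take a thread dump while startup is crawling and count threads per pool:

# dump the openHAB JVM's threads; jstack ships with the JDK, adjust the pgrep pattern to your setup
jstack $(pgrep -f openhab) > /tmp/oh-threads.txt
# count how many worker threads each pool currently has
grep -c "ESH-thingHandler" /tmp/oh-threads.txt
grep -c "ESH-discovery" /tmp/oh-threads.txt
grep -c "ESH-safeCall" /tmp/oh-threads.txt

If those counts sit at the configured pool maximum while things are still loading, the pool really is saturated; if not, the bottleneck is somewhere else.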
I’ll cut over to 2.5 when I get some time to take the plunge. Inevitably those leaps break something, and I don’t have time to start getting into the weeds of fixing things right now. I was hoping to do a quick test of any additional thread pool configs before making that leap.
For reference, these are the parameters I’ve configured so far in /etc/openhab2/services/runtime.cfg:
org.eclipse.smarthome.threadpool:thingHandler
org.eclipse.smarthome.threadpool:discovery
org.eclipse.smarthome.threadpool:safeCall
org.eclipse.smarthome.threadpool:ruleEngine
And in /usr/share/openhab2/runtime/etc/quartz.properties (only because I can’t figure out the correct syntax for this one in /etc/openhab2/services/runtime.cfg; see the example syntax below):
org.quartz.threadPool.threadCount
EDIT: Go down about 10 replies for the longer list.
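For reference, the format I’m using is plain key=value pairs in runtime.cfg, with quartz keeping its own properties format (assuming I’ve remembered the exact syntax right; the numbers below are placeholders for the post, not the values I actually run):

# /etc/openhab2/services/runtime.cfg
org.eclipse.smarthome.threadpool:thingHandler=50
org.eclipse.smarthome.threadpool:discovery=50
org.eclipse.smarthome.threadpool:safeCall=50
org.eclipse.smarthome.threadpool:ruleEngine=50

# /usr/share/openhab2/runtime/etc/quartz.properties
org.quartz.threadPool.threadCount = 50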
I’ve tested several different variations, from very low numbers (<100) to insanely high (up to 10000), and I’ve come to the conclusion that “it depends on what you have running” is mostly the answer. Right now I believe I have the first 4 and quartz set to 1000, and the second 4 set to 50 and 500 respectively for min/max.

The only negative I’ve seen with going insanely high is more memory utilization. While 1000 is definitely overkill, I’ve not otherwise been negatively affected by having it that high, and it ensures I won’t jam up during load. The OH install runs as a VM on an ESXi host with considerable resources, so memory usage was not a concern of mine through the testing. I’ve dialed them up and down and the answer has effectively been “as long as you have enough threads, you don’t jam up”.
I bit the bullet last night and pushed to 2.5.0-SNAPSHOT (Build 1486), and while the error message went away, the startup behavior has not changed. I’ve always been a fan of the motto that “if someone spent the time to code the error/warning, there was probably a reason they did it, even if that’s not immediately apparent.”

At this point, my belief is that I’m in some sort of race condition or thread starvation which is causing the bindings and things to load in a congested manner. I would like to figure out what was creating the original thread warning (and setting me to 8 threads) and manually set that number higher. ESXi handles threads a little differently than a physical host would, and I can utilize additional threads through some magic voodoo that they have.
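If the warning really comes from the Jetty server pool that Pax Web manages, Pax Web exposes min/max thread properties that might be worth experimenting with. This is only a sketch: whether this openHAB build reads these properties, and whether this is the pool behind the original “8 threads” sizing, are assumptions on my part:

# /var/lib/openhab2/etc/org.ops4j.pax.web.cfg
# Jetty server thread pool sizing via Pax Web (illustrative values, not recommendations)
org.ops4j.pax.web.server.minThreads = 8
org.ops4j.pax.web.server.maxThreads = 32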