I noticed the following in /var/lib/openhab2/etc/org.ops4j.pax.logging.cfg.dpkg-dist.
Like probably many of you, I refused to let the install routine replace my org.ops4j.pax.logging.cfg.
# Filters warnings about small thread pools.
# The thread pool is kept small intentionally for supporting resource constrained hardware.
log4j2.logger.threadpoolbudget.name = org.eclipse.jetty.util.thread.ThreadPoolBudget
log4j2.logger.threadpoolbudget.level = ERROR
This false positive warning is specific to openHAB 2.4.0. If you didn’t apply the suggested logging configuration changes while upgrading, you can also suppress it by raising the log level in the Console with:
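The command itself wasn’t quoted in the post; assuming the standard Karaf `log:set` syntax and the logger name from the snippet above, it would be something like:

```
log:set ERROR org.eclipse.jetty.util.thread.ThreadPoolBudget
```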
I’m concerned this is less of a false positive and more of an actual issue on systems with big configurations. When I upgraded from 2.3.0 to the 2.4.0 maintenance releases (I believe I went up at M4), I noticed that my bindings and things would take a very long time to load when starting OH. According to the logs, all of my rules and items load with no issues. As soon as that happens, the ThreadPoolBudget warning comes up and everything goes sideways. I’ve seen it take as long as 10 minutes after that for my things and bindings to load. This system has a substantial backend, so I’m not worried about resources.

I’ve read through these threads and applied about 10 different configurations in runtime.cfg and quartz.properties, which fixed some of the issues, but this still happens. Everything eventually loads and the messages calm down. While the errors are flying I can see bindings loading one at a time, random things showing up in the inbox (presumably before the thing configuration loaded), and then the things attaching to them when they finally load.

The only thing that comes to mind is that the system is starved for threads in the pool and it’s taking a long time to load everything. Are there any other thread pool configurations that could be added to mitigate this? To note, I’m also seeing the 503 errors noted by adahmen above.
I’ll cut over to 2.5 when I get some time to take the plunge. Inevitably those leaps break something, and I don’t have time to start getting into the weeds of fixing things right now. I was hoping to do a quick test of any additional thread pool configs before making that leap.
For reference, these are the parameters I’ve configured so far in /etc/openhab2/services/runtime.cfg:
I’ve tested several different variations, from very low numbers (<100) to insanely high (up to 10000), and I’ve come to the conclusion that “it depends on what you have running” is mostly the answer. Right now I believe I have the first 4 and quartz set to 1000, and the second 4 set to 50 and 500 respectively for min/max. The only negative I’ve seen with going insanely high is more memory utilization. While 1000 is definitely overkill, I’ve not otherwise been negatively affected by having it that high, and it ensures I won’t jam up during load. The OH install runs as a VM on an ESXi host with considerable resources, so memory usage was not a concern of mine through the testing. I’ve dialed them up and down and the answer has effectively been “as long as you have enough threads, you don’t jam up”.
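For anyone wanting to experiment with the same knobs: I won’t claim these are my exact values, but the keys involved (names as documented for openHAB 2; the numbers below are purely illustrative, not a recommendation) look roughly like this:

```
# /etc/openhab2/services/runtime.cfg - ESH common thread pools
org.eclipse.smarthome.threadpool:thingHandler=1000
org.eclipse.smarthome.threadpool:discovery=1000
org.eclipse.smarthome.threadpool:safeCall=1000
org.eclipse.smarthome.threadpool:ruleEngine=1000

# /var/lib/openhab2/etc/quartz.properties - Quartz scheduler pool size
org.quartz.threadPool.threadCount = 1000
```

Changes to quartz.properties require a restart to take effect, and as noted above, oversized pools mainly cost you memory.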
I bit the bullet last night and pushed to 2.5.0-SNAPSHOT (Build 1486), and while the error message went away, the startup behavior has not changed. I’ve always been a fan of the motto that “if someone spent the time to code the error/warning, there was probably a reason they did it even if that’s not immediately apparent.”

At this point, my belief is that I’m in some sort of race condition or thread starvation which is causing the bindings and things to load in a congested manner. I would like to figure out what was creating the original thread warning (and limiting me to 8 threads) and manually set that number higher. ESXi handles threads a little differently than a physical host would, and I can utilize additional threads through some magic voodoo that they have.