Hi to all.
For the last few days I have been searching for system settings suited to bigger machines. I think that, after installing openHAB, I am still using the default settings meant for a Pi.
These settings are particularly interesting to me:
The thread pool, for example:
and the swap file size:
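(The pasted values appear to be missing here.) For orientation, swap behaviour on Ubuntu is typically tuned via sysctl; the value below is purely illustrative, not a recommendation:

```ini
# /etc/sysctl.conf -- how eagerly Linux swaps (0-100); illustrative value only
vm.swappiness=10
```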
My system is an Intel NUC i3 (7th gen) with 8 GB RAM running Ubuntu 16.04 LTS Server.
But all the settings I am using now are test values, chosen without any calculation behind them.
Before, I used the default values. I think I should be able to use the full power of my system, but without overdoing it. Are these settings okay? At the moment everything is running well.
Which settings do you use and why?
Thanks and greetings,
Default settings. As long as I don’t get any errors, this should be good enough.
I’m using the default settings as well, on a 4-core, 8 GB, M.2 SSD Intel system that runs OH 2.5.0.M1 and a test version of the SNAPSHOT (in containers), as well as Grafana, InfluxDB, Mosquitto and a few ‘scripts’ that interact with my heating system, solar panel inverter, etc. No issues for me.
I have ~600 items and ~100 rules (still DSL currently).
You shouldn’t mess with OH settings such as thread pools and webclients unless you actually have a problem with them (I haven’t heard of anyone who has, though).
You also shouldn’t mess with OS settings if you have no problem (and I have never heard of anyone on x86 having a performance problem).
-Xms and -Xmx have nothing to do with swap. They are Java options, and sensible values are hardware specific. The openHABian defaults are 250/350 MB for a 1 GB ARM system. x86 uses (I think) twice as much on 32-bit, and four times as much on 64-bit.
I don’t know the default on your HW/OS, but it is reasonable to quadruple the ARM values if you have the hardware capacity available, so what you use should be fine.
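Quadrupling the 250/350 MB ARM defaults would look something like this (the file path is the Debian-package/openHABian default location; the values are illustrative, not a recommendation):

```ini
# /etc/default/openhab2 -- heap sizes at 4x the 1 GB ARM defaults (illustrative)
EXTRA_JAVA_OPTS="-Xms1000m -Xmx1400m"
```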
I run the following settings on an overclocked Intel Core i5 with 8 GB.
My current openHAB config is
- 123 things
- 1558 items
- 276 rules
# Configuration of thread pool sizes
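The concrete values of that runtime.cfg snippet seem to be missing above. For reference, the keys in question live in conf/services/runtime.cfg and look like this; the numbers here are hypothetical placeholders, not the poster’s actual values:

```ini
# conf/services/runtime.cfg -- thread pool sizes (hypothetical values)
org.eclipse.smarthome.threadpool:thingHandler=10
org.eclipse.smarthome.threadpool:discovery=5
org.eclipse.smarthome.threadpool:safeCall=20
```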
FWIW, I also just recently did some serious event processing performance testing using those settings. I had 24 load generator things, each generating 20 item changes every 350 ms, which resulted in an event processing rate of 1370 events per second. I’m not so sure the default thingHandler and safeCall settings would’ve allowed that event rate (although, admittedly, I didn’t test the default config). This was on snapshot build 1618 running on a 4 core 1.6 GHz box with 4 GB memory. CPU averaged about 40% per core throughout the test.
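As a quick sanity check on the reported rate, the load-test setup described above can be multiplied out directly:

```python
# Event rate from the load test described above:
# 24 load-generator things, each emitting 20 item changes every 350 ms.
things = 24
changes_per_cycle = 20
cycle_seconds = 0.350

events_per_second = things * changes_per_cycle / cycle_seconds
print(round(events_per_second))  # -> 1371, matching the reported ~1370 events/s
```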
I don’t think the default settings are specific to running on an RPi. I use the default settings on a relatively powerful VM without complaint. I would only change those settings if you encounter some problem and it is clear that upping the thread pool will correct the problem, and there isn’t a better solution to the problem (e.g. avoid long running rules).
Increasing the size of your thread pools will not make OH use your machine more efficiently. For the most part it will just cause OH to use more RAM.
In rare cases you might have a configuration that needs to process events at a very high rate, in which case OH may not be the best tool anyway; that is the situation where upping the thread pools would have an impact.
Thanks for your answers.
When I change the pool size, it doesn’t change anything noticeable for me, that’s right. And the system runs just as stably as before.
But if I change the Xms and Xmx values, then after a restart the first parsing of the rule files goes much faster. I would say you can’t even tell that it is the first time.
Yes, the RAM usage is higher, but why shouldn’t I use it?
That’s right, you won’t notice a difference unless your system is under heavy load. In my testing, the load needed to be VERY heavy before those thread pool settings came into play.