I’m not having any issues with the aforementioned setup. I’ve got about 67 rules and 150 items and it doesn’t seem to be breaking a sweat. But I was curious: theoretically, what would the limits be for the number of rules or items before a person should consider upgrading?
Just for fun, I wrote a test script to create lots of items.
- I have 128GB of RAM, but some of it is used by other stuff. It’s safe to say I have at least 64GB free.
- I tried creating 10,000 items (in addition to my existing ~1,000 items).
- I noticed it started getting a bit slower after 5,000 to 6,000 items, but it was still fine all the way up to 10,000. The system isn’t swapping.
- I then tried creating 100,000 items. Beyond 10,000, creating each additional item gets even slower. Still not swapping. It’s still chugging along trying to create items; it’s at 38,000 now, but progress is slow. I can still restart openHAB gracefully though.
I noticed this warning in the log:
02:03:16.067 [WARN ] [hab.core.internal.events.EventHandler] - The queue for a subscriber of type 'class org.openhab.core.automation.internal.module.handler.GroupStateTriggerHandler' exceeds 5000 elements. System may be unstable.
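For context, that warning appears when events are published to a subscriber faster than its handler can drain its queue. A minimal plain-Ruby sketch of the same producer-outpaces-consumer situation (the threshold constant and the simulated rates are illustrative, not openHAB internals):

```ruby
# Illustrative only: a fast producer and a slow consumer sharing a queue,
# mirroring the 5000-element threshold from the openHAB log message.
WARN_THRESHOLD = 5_000

queue = Thread::Queue.new

# Producer: publishes 20,000 "events" as fast as it can.
producer = Thread.new { 20_000.times { |i| queue << i } }

# Consumer: a slow subscriber that only manages to handle 100 of them.
consumer = Thread.new { 100.times { queue.pop } }

producer.join
consumer.join

puts "queue backlog: #{queue.size}"
warn "queue exceeds #{WARN_THRESHOLD} elements; system may be unstable" if queue.size > WARN_THRESHOLD
```

Once the backlog crosses the threshold, the subscriber is permanently behind unless the event rate drops, which is why the log calls the system "unstable".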
I know this doesn’t really help you figure out the limits for a 4GB RPi, but maybe try writing a similar script.
Here’s my script in JRuby:
MAX = 10_000

items.build do
  1.upto(MAX) do |i|
    switch_item "TestAA#{i}", "TestAA#{i}"
    logger.info "#{i} items created" if (i % 100).zero?
  end
end
sweet!
But for real, the benchmark isn’t the number of items or rules, but what kind of load they generate. You can have 100 light switches or 100 PV/inverter items: the lights will only change state once in a while, whereas inverter items tend to change every second.
Meaning: if your system depends on “real-time” events, it’ll generate more load than if you have more or less static states.
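To put rough numbers on that difference (the per-item update rates below are illustrative assumptions, not measurements):

```ruby
# Back-of-envelope event load for the same item count at different update rates.
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

light_switches = 100
inverter_items = 100

# Assumption: a light switch changes state ~10 times a day.
light_events_per_day = light_switches * 10

# Assumption: an inverter channel updates once per second, all day.
inverter_events_per_day = inverter_items * SECONDS_PER_DAY

puts "100 light switches: #{light_events_per_day} events/day"    # 1000
puts "100 inverter items: #{inverter_events_per_day} events/day" # 8640000
```

Same item count, several orders of magnitude more events, and every event fans out to persistence, rules, and the UI.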
My setup is currently at 1,200 items and 93 rules (some four-liners, some >200 lines), my events.log gets rotated daily, and my Pi4 still idles at 90%. It’s mostly running OH stuff, but also a netdata instance, a zabbix agent, promtail shipping every log entry to a Grafana service, and some more small services.
top - 17:17:02 up 99 days, 23:17, 1 user, load average: 0.81, 0.63, 0.50
Tasks: 163 total, 1 running, 162 sleeping, 0 stopped, 0 zombie
%Cpu(s): 9.6 us, 6.6 sy, 1.5 ni, 79.9 id, 2.1 wa, 0.0 hi, 0.3 si, 0.0 st
MiB Mem : 3844.2 total, 496.2 free, 1559.4 used, 1788.7 buff/cache
MiB Swap: 3072.0 total, 3072.0 free, 0.0 used. 2201.5 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
18079 openhab 20 0 1306760 1.0g 12568 S 58.8 27.9 7834:14 java
17675 netdata 39 19 340972 78132 11360 S 5.0 2.0 21:33.23 netdata
17997 netdata 39 19 3000 2332 328 S 2.7 0.1 18:09.32 apps.plug+
15729 root 20 0 854732 68464 51980 S 1.7 1.7 237:52.74 promtail
15 root 20 0 0 0 0 I 0.3 0.0 162:11.85 rcu_preem+
361 avahi 20 0 7468 3760 2720 S 0.3 0.1 272:11.24 avahi-dae+
363 message+ 20 0 7856 3832 3164 S 0.3 0.1 133:52.11 dbus-daem+
393 root 20 0 13044 5916 5252 S 0.3 0.2 58:03.37 systemd-l+
2675 netdata 39 19 1616 1268 976 S 0.3 0.0 0:05.57 bash
15986 root 0 -20 0 0 0 I 0.3 0.0 0:02.07 kworker/3+
17944 netdata 39 19 727604 44820 35060 S 0.3 1.1 3:44.83 go.d.plug+
17993 netdata 39 19 32172 19764 7672 S 0.3 0.5 0:51.17 python3
18004 root 39 19 8864 3416 1976 S 0.3 0.1 0:47.16 ebpf.plug+
1 root 20 0 35116 9224 6932 S 0.0 0.2 48:34.54 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:22.76 kthreadd
3 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_gp
4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_par_gp
And that’s not even near one of the bigger OH installations! There are far more complex instances running out there!
That is very true; most of my rules are pretty simple “if this is true, do this” type rules. I think the longest script I have, apart from the Expire Timer Update template, is around 12 lines.
That’s impressive! Obviously a Raspberry Pi wouldn’t be able to handle that many, but I don’t foresee myself ever needing that many items either!
That’s an RPi4, 4GB.
The limitation is almost never your OH config. As @JimT demonstrates, you really need to exceed what’s reasonable in a home automation context before you’ll start to encounter resource problems from OH alone.
Usually what matters is all the other stuff one chooses to run alongside OH on the same machine: InfluxDB, Grafana, NodeRed, Frontail, etc. Each of those takes up its own resources, sometimes a lot of them.
Hehe, but I think that’s not because of the high number of items, but the load while creating them.
Thanks to this little experiment, I found a piece of code in the jruby library that needed optimising. With that fix, creating 10,000 items took 6 seconds and 100,000 items took 40 seconds. Much faster than before.
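The post doesn’t say what was optimised, but a classic cause of “each additional item gets slower” is a linear scan on every insert, which makes bulk creation quadratic overall. A purely illustrative sketch of that pattern in plain Ruby (not the actual library change):

```ruby
require "set"
require "benchmark"

names = (1..5_000).map { |i| "TestAA#{i}" }

# Quadratic: Array#include? scans the whole array on every insert,
# so each additional item costs a little more than the last.
slow = []
t_slow = Benchmark.realtime { names.each { |n| slow << n unless slow.include?(n) } }

# Linear: Set membership checks are O(1) on average, so per-item cost stays flat.
fast = Set.new
t_fast = Benchmark.realtime { names.each { |n| fast << n unless fast.include?(n) } }

puts format("array: %.4fs  set: %.4fs", t_slow, t_fast)
```

Both loops build the same collection; only the duplicate check differs, and the gap widens fast as the item count grows.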
This topic was automatically closed 41 days after the last reply. New replies are no longer allowed.