I would like to know what, in your opinion, is the fastest (and affordable) hardware available for openHAB 4 (plus InfluxDB, Grafana, code-server)?
I started with openHAB 2 on a Raspberry Pi with an SD card, moved to a Raspberry Pi with an SSD, and moved again to a Docker/container setup on a Synology NAS.
All moves increased the speed and I was happy.
The requirements and the WAF keep increasing … and with more than 150 things and 1000 items by now, I am looking for some hardware improvements again.
I am looking forward to your ideas and recommendations.
Thanks,
best, Kai
Well, I don’t see why speed would be of utmost importance, but if you believe it is, you can run on any x86 server.
But if automated well and programmed right, any recent Raspi (4 or 5) should do just as well, and sticking with one has major advantages in terms of reliability. Think redundancy and quick replacement when needed. Reliability has much more impact on WAF than speed does.
1000 items is not uncommon. You would be the first who needs bigger hardware because of that.
I’ll second the comment that your config is relatively modest. It should be performant on pretty much any supported machine (i.e. RPi 4+ or better).
But what exactly do you mean by “increased the speed”? The speed of what? OH restart times? Rule execution times? The time between pressing a button on a UI and the device responding to the command?
Performance can be defined and measured in many ways, and sometimes increasing performance in one area reduces performance in other areas.
Sometimes lack of performance in one area indicates some other problem, and throwing more server hardware at it won’t do anything at all (e.g. your Z-Wave mesh isn’t very well connected, so some messages need to take a long route to the device or never get there).
For a system of this size, given no other information, an RPi 4/5 with at least 2 GB of RAM (I’d shoot for 4 GB) would be more than performant. Any multicore x86 machine with a similar amount of RAM should also be more than performant.
For about the same cost as an RPi with all the extra stuff you need to buy (SD card, power supply, case, etc.), there are several Intel N100 mini PCs available which have more RAM and pretty performant CPUs. They make a good compromise between cost and power. I’ve recently purchased a Beelink S12 Pro that I’m pretty happy with.
I don’t run OH on this machine. I desperately needed something that could run the software for my 3D printer while I wait for my new laptop, which is a whole tragedy of a saga of its own, ending with me being scammed out of $2k by a supposedly reputable company. But this Beelink, even running Windows as a daily driver accessed through RDP, is holding up.
I recently moved my network to Opnsense running on a Protectli Vault. Protectli units are pretty common in the Opnsense space due to their reliability. I liked it enough to get another one and run OpenHAB on it. Probably overkill, as it just runs OH and zwave-js-ui, but I trust my network with one, so why not OH. For reference, my previous server was a Zotac unit, and it ran perfectly for 6-7 years (actually still in operation in another location). Point is, as Rich says, there are good options in the mini PC space for similar or not much more cost.
One of the resource-hungry apps you named is code-server. To avoid impacting your other running software too much, move code-server to another machine and configure it to use Samba shares for the openHAB config files you want to edit.
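For illustration only, a minimal sketch of such a share. The host name `openhab-host`, the share name `openhab-conf`, the paths, the user and the credentials file are all example values, not anything from this thread; adjust them to your setup:

```
# /etc/samba/smb.conf on the openHAB host (example share definition)
[openhab-conf]
   path = /etc/openhab
   valid users = openhab
   read only = no
   browseable = yes

# /etc/fstab on the code-server machine (hypothetical host, mount point and credentials file)
//openhab-host/openhab-conf  /srv/openhab-conf  cifs  credentials=/root/.smbcred,uid=1000,gid=1000  0  0
```

code-server then opens /srv/openhab-conf like any local folder, and the openHAB host only has to expose its config directory.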
I’m using a ZimaBoard 832 in combination with NVMe SSDs to host the complete range of servers for my smart home, including openHAB, frontail, InfluxDB, Node-RED, MQTT, Grafana, Pi-hole, Syncthing, Watchtower and some others.
The system runs Ubuntu with a Portainer/Docker setup and separate volume areas for containers, data and config (see the sketch below). This allows very quick updates and tests, and a restricted backup of only the relevant and needed resources.
Until now I have never felt any performance issues; the system is very handy and space-saving!
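Not my actual files, just a hypothetical docker-compose excerpt to illustrate that kind of volume separation, using the directory layout of the official openhab/openhab image (host paths, time zone and version tag are made up):

```yaml
# Hypothetical excerpt: one service with config, data and add-ons on separate host paths
services:
  openhab:
    image: openhab/openhab:4.2.1
    restart: unless-stopped
    network_mode: host                                # needed for discovery protocols like UPnP/mDNS
    volumes:
      - /srv/config/openhab/conf:/openhab/conf        # text configuration: small, backed up frequently
      - /srv/data/openhab/userdata:/openhab/userdata  # runtime state and persistence databases
      - /srv/data/openhab/addons:/openhab/addons      # manually dropped-in add-ons
    environment:
      - TZ=Europe/Berlin
```

Backing up only /srv/config and /srv/data then covers everything needed for a restore, while the container images themselves stay disposable.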