Super fast hardware for openHAB?

Hi,

I would like to know: what, in your opinion, is the fastest (and affordable) hardware available for openHAB 4 (plus InfluxDB, Grafana and code-server)?

I started with openHAB 2 on a Raspberry Pi with an SD card, moved to a Raspberry Pi with an SSD, and then moved again to a Docker/container setup on a Synology NAS.
Each move increased the speed and I was happy.
The requirements and the WAF are increasing constantly … and with more than 150 things and 1000 items by now, I am looking for some hardware improvements again :innocent:

I am looking forward to your ideas and recommendations.
Thanks,
best, Kai

Well, I don’t quite see why speed should be of utmost importance, but if you believe it is, you can run on any x86 server, NUC-sized or larger.
But if automated well and programmed right, any Raspi 4 or 5 should do just as well, and sticking with it has major advantages in terms of reliability. Think redundancy and quick replacement when needed. Reliability has much more impact on WAF than speed does.
1000 items are not uncommon. You would be the first to need bigger hardware for that.

4 Likes

I’ll second the comment that your config is relatively modest. It should be performant on pretty much any supported machine (i.e. an RPi 4 or better).

But what exactly do you mean by “increased the speed”? The speed of what? OH restart times? Rule execution times? The time between pressing a button in a UI and the device responding to the command?

Performance can be defined and measured in many ways, and sometimes increasing performance in one area reduces performance in other areas.

Sometimes a lack of performance in one area indicates some other problem, and throwing more server hardware at it won’t do anything at all (e.g. your Z-Wave mesh isn’t very well connected, so some messages need to take a long route to the device or never get there).

For a system of this size, given no other information, an RPi 4/5 with at least 2 GB RAM (I’d shoot for 4 GB) would be more than performant. Any multi-core x86 machine with a similar amount of RAM should also be more than performant.

For about the same cost as an RPi with all the extra stuff you need to buy (SD card, power, case, etc.), there are several Intel N100 mini PCs available which have more RAM and pretty performant CPUs. They make a good compromise between cost and power. I’ve recently purchased a Beelink S12 Pro that I’m pretty happy with.

I don’t run OH on this machine. I desperately needed something that could run the software I need to drive my 3D printer while I wait for my new laptop, which is a whole tragedy of a saga of its own, the end being that I was scammed out of $2k by a supposedly reputable company. But this Beelink is running Windows, and even as a daily driver accessed through RDP it’s holding up.

2 Likes

I recently moved my network to OPNsense running on a Protectli Vault. Protectli units are pretty common in the OPNsense space due to their reliability. I liked it enough to get another and run openHAB on it. Probably overkill, as it just runs OH and zwave-js-ui, but I trust my network with one, so why not OH. For reference, my previous server was a Zotac unit, and it ran perfectly for 6-7 years (it’s actually still in operation in another location). Point is, as Rich says, there are good options in the mini-PC space for similar or not much more cost.

1 Like

I got a noticeable speed improvement on my sitemap by upgrading the SD card in my Pi 4 to one with higher read/write speeds.

  • 728 items
  • 227 things
  • 111 transformations
  • 186 rules
  • 27 scripts (Python and shell)
  • 15 bindings
1 Like

One of the resource-hungry apps you named is code-server. To avoid impacting your other running software too much, move code-server to another machine and configure it to access the openHAB config files you want to edit via Samba shares.
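For example, a rough sketch of what that can look like on the code-server machine is an /etc/fstab entry such as this one (NAS name, share, credentials file and mount point are placeholders you have to adapt):

  # placeholder NAS/share/paths - adapt to your setup and user IDs
  //mynas/openhab-conf  /srv/openhab-conf  cifs  credentials=/etc/openhab-share.cred,uid=1000,gid=1000,vers=3.0,_netdev  0  0

Then point the code-server workspace at /srv/openhab-conf.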

3 Likes

I’m using a ZimaBoard 832 in combination with NVMe SSDs to host the complete range of smart-home servers, including openHAB, frontail, InfluxDB, Node-RED, MQTT, Grafana, Pi-hole, Syncthing, Watchtower and some others.

The system runs under Ubuntu with a Portainer/Docker setup and separate volume areas for containers, data and config. This allows very quick updates and tests, and backups restricted to only the relevant and needed resources.
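Purely as an illustration (not my exact setup), the openHAB container in such a layout can be started with separate volumes roughly like this; the host paths, timezone and version tag are placeholders:

  # sketch: openHAB container with separate conf/userdata/addons volumes
  # host paths under /srv/openhab and the image tag are examples only
  docker run -d --name openhab --net=host \
    -v /srv/openhab/conf:/openhab/conf \
    -v /srv/openhab/userdata:/openhab/userdata \
    -v /srv/openhab/addons:/openhab/addons \
    -e TZ=Europe/Berlin \
    openhab/openhab:4.2.1

Backing up then means copying those three host directories (plus the stack definitions), which keeps the backup small and restores quick.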

Until now I have never felt any performance issues; the system is very handy and space-saving!

2 Likes

I’m in the middle of migrating away from my rPi4 OH3 instance to OH4 on an x86 VM running on my Xeon server. I’m not expecting any performance issues :sunglasses:

(Note I wasn’t having any performance issues on the rPi; the migration is intended for consolidation so I can eliminate one rPi from my network and reduce points of failure. Just thought I’d weigh in…)

I am running mine on a Xeon server also, in Proxmox, and it works fine. I can only put 32 GB of RAM in the old server, which I have done. No issues. Running the latest OH version.

Ouch. Wrong turn in this discussion.
Home automation needs to be available 24x7 unless you consider yours to be a playground only.
This applies to automation design as well as to the server hardware openHAB is running on so it’s related to the speed question.
However, while speed isn’t the most important factor, reliability and resilience are.

Consolidating stuff now will result in the opposite of improving reliability, resiliency and availability.

If your big-iron hardware breaks, you’ll be in trouble (unless you have a warm-standby replacement, which I assume you do not, as Xeons are expensive).
How much would it cost you to have spare hardware for everything available on site?
How long would it take you to order a replacement? Will you even get one? Will it be compatible with what you have? How long will it take you to re-install it? Will restoration work?
(How do you test that your restoration procedure works when you don’t have a second set of hardware?)

If your VM software layer has a problem, you will have one, too (and not only with your home automation, so should that happen you’ll probably be fighting multiple fires at the same time).
Availability is MTBF / (MTBF + MTTR): mean time between failures divided by the sum of mean time between failures and mean time to repair (quick worked example below).
Adding complexity (such as moving to VMs) lowers MTBF, increases MTTR and vastly widens your range of impact.
There’s no such layer that can fail on dedicated boxes like RPis.
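A quick worked example with made-up numbers (purely illustrative) shows how the two interact:

  Availability = MTBF / (MTBF + MTTR)
  MTBF = 8760 h (one year), MTTR = 24 h  ->  8760 / 8784  ≈ 99.73 %
  MTBF = 8760 h (one year), MTTR =  1 h  ->  8760 / 8761  ≈ 99.99 %

Both knobs matter, which is why quick replacement (a spare RPi, a mirrored SD card) counts for so much.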

Long story short: be honest with yourself and do the math right.
Unless you’re a professional data center operator with many systems you can fall back on in case of need, you’re better off using an RPi or a cheap NUC for production, and having spares on site.

1 Like

Throwing in my 2 cents:

my current setup includes:

  • 27 bindings
  • 118 things
    some with >100 channels
  • 1239 items
    a whole bunch of those changing every second at least
  • 128 rules and scripts
    some firing onChange on those rapidly changing items

Besides the OH 4.2.1 release, the same machine runs zabbix2-agent, netstat and a bunch of minor system tools including backup strategies - all of that runs smoothly on a Pi 4/4GB.

I copied my config into a Docker image running on a NUC for testing, and in my experience listing items in the UI is a biiit slower, but bindings, rules and automations don’t seem to lag.
What I avoid is running more than openHAB on the Pi; e.g. persistence (except MapDB) runs on a dedicated MySQL server in my network, as does my MQTT broker, etc.
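For anyone wanting to do the same with the JDBC persistence add-on, pointing openHAB at the external MySQL box is only a few lines in services/jdbc.cfg (host, database and credentials are placeholders):

  # services/jdbc.cfg - persistence on an external MySQL server (placeholder values)
  url=jdbc:mysql://192.168.1.20:3306/openhab
  user=openhab
  password=changeme

(plus the JDBC MySQL persistence add-on installed and your usual .persist strategy file).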

So for @April_Wexler’s initial question: moving Influx/Grafana to other hardware is one way to reduce load on a Pi and thus boost performance.

Whereas if you know how to set up Proxmox/Docker/… for virtualisation and how to make snapshots and/or backups of your containers and their configuration, I see no downside in centralizing services on one piece of hardware - except that with openHABian you can rely on full support here in the forum.
That’s why I went from dedicated Pis (I had up to 7 running in parallel) for distributed services to one home server running everything in containers - including persistence and other services relevant for openHAB. I still run openHAB on a dedicated Pi, simply because of openHABian and its advantages.

1 Like

Probably still fits in the overkill category a year later, but I am still running on the hardware I listed in this post from last year, and now have a reasonable EmonCMS workload running on there too.

All the other comments I made in this post are still valid more than a year down the track, other than it’s now had the chance to prove itself reliable for over a year.

This is not an endorsement of the specific unit/brand I purchased; there are plenty of other options on AliExpress or eBay. It’s more about the general class of PC…

1 Like

I have 4 Xeon servers I can use; I got them for free (Dell 210RII), along with 2 APC UPSes.
I only use one at a time, but I back up Proxmox from one to another.
I use Proxmox for the firewall, ZoneMinder and a DMZ web server, so the one server does a lot more than just openHAB.
I use real hard drives, not SD cards.
Anyway, to each their own. I do have a Pi 4, but haven’t used it in over 3 years.
If the main server fails I just fire up the backup server and continue, then fix the failed server and swap back again. That has happened once, when a hard drive failed.

3 Likes

… like I conceded: unless you’re a data center operator.
You could offer OH hosting but mind the ‘green power efficiency’ competition, or do you also own a windmill? :rofl:

1 Like

No windmill but do have solar.
The Xeon usually runs about 30 watts.

2 Likes

Not to get too off topic, but there are lots of ways to have adequate MTTR/MTBF, and what constitutes “adequate” can be quite a long time. That’s the whole “build escalators instead of elevators” discussion.

I purposely build my home automations in such a way that it’s not that huge of a deal if OH goes offline. It’s inconvenient, but it’s not the end of the world if it takes a few hours or even days to fix. So one way to address the problem is to make it so the MTTR can be really long and it doesn’t really matter all that much.

If you can be offline for weeks, redundant hardware isn’t that big of a deal. You have the time to order replacements if you need them.

I’m not saying this is the only approach but it is a viable approach.

For the record, in addition to building my home automation so that a long MTTR is acceptable, I also deploy using automated scripts and containers, and those automated scripts handle restoring the most recent backup. So if I needed to rebuild a VM or machine from scratch, it wouldn’t take too much time. Moving a single service takes minutes. With WireGuard I can do all of this remotely too, even from my phone thanks to JuiceSSH.

Thus, if my main Proxmox server were ever to completely die, I could move the 14 services I have running there (only three of which are home-automation specific) out to one of my other machines temporarily through a couple of changes to my Ansible inventory and a run of my root playbook.

I think my primary point here is that yes, it is important to consider MTTR and MTBF when deploying one’s home automation, but having identical spare hardware is not the only viable approach.

3 Likes

Granted, but the more unknowns, the higher the risk that something goes wrong.
So spare standard hardware probably is the safest of options.
And the price tag and market availability on that make me come back to my RPi recommendation.

Uptime on my Xeon is currently 1485 days, and it’s not higher mostly because I got bored early in the pandemic and decided to re-wire the rack to better organize it. rPis are awesome devices, but they are toys compared to big-iron hardware. I know that’s opinion rather than fact, but does it need debating? Datacentres (for the most part) don’t run arrays of rPis or similar hardware, and they absolutely would if it were better…

I’m not running full hyper-converged infrastructure (yet?), but as all services are provided through VMs, the host hardware and OS are mostly irrelevant. I have two physical machines that host VMs; they are heterogeneous, but I could migrate VMs from one to the other easily.

One Xeon draws 105 W (long-term average); the other is only metered along with the routers on the same UPS, but that whole setup pulls 80 W. I have 16 kW of solar on my roof.

I concur with your math on Availability = MTBF / (MTBF + MTTR), but the MTBF variable includes aspects of redundancy that you have not mentioned (ECC memory, redundant PSUs – I actually run two separate power circuits back to the main panel, with a separate UPS on each). And the VM/hypervisor structure VASTLY improves MTTR, as you just spin up the VM on whatever other host has enough free resources. I could run it on my laptop temporarily if I wanted. I used to have hypervisor-related issues in the 1990s and early 2000s, but we are 20 years past that.

Anyway, I didn’t mean to derail this conversation. There are MANY successful ways to deploy IT solutions, including openHAB. I absolutely recommend an rPi & openHABian for OH deployment at this time. If you happen to also have a server/homelab (reddit r/homelab shoutout!), then I also hope to eventually recommend OH convergence via virtualization, once I’ve had a few years of experience running it daily.

2 Likes

Err, no. In terms of quick rollback of software issues, okay.
But this statement was about the hardware part.
Plus, there are other equally quick options, such as SD mirroring in openHABian.

In terms of hardware, however, no, it doesn’t improve MTTR if you properly compare apples to apples. Adding a VM layer to a single host doesn’t buy you anything but additional potential points of failure. And yes, things have improved since 2000, but even nowadays vSphere still shows its purple screen of death at times. And with non-identical spare hardware, chances are you’ll run into even more issues at the very moment of a failure.
As said, do the math right. Don’t let anecdotal availability experiences fool you.
For the full-blown solution, you need to become a data center operator and have one or more unused spare hosts paid for, installed and ready as part of your cluster.

But then that’s more than an order of magnitude away from being efficient, like 20x or more in energy as well as financial terms.
Plus, don’t forget to factor in the additional environmental requirements (space, power/UPS runtime etc.) and the knowledge it takes.
Virtualization is clearly not a recommendation for the vast majority of openHAB users, which is who this thread is about.

Any 10-year-old can swap SD cards or exchange the RPi in just minutes with me on the phone telling them how to. Hell, I could even have them reinstall everything remotely in just one hour, because it’s automated. That is the MTTR I’m talking about.

I second the Raspberry Pi. I had openHAB running on a Debian Linux machine which crashed every other week. At that time I had not put as many control functions into openHAB as I have now. I then moved openHAB, InfluxDB and Grafana to an RPi 4 with 2 GB. I estimate I had about 200 things at the time, mostly Homematic. When I changed to an RPi 4 with 8 GB, restarting openHAB became significantly faster (5 minutes versus 15 minutes). The RPi runs on a high-endurance SD card which is backed up every night to a file server. This way I can reinstall the system pretty easily, which lets me sleep much better.
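The nightly backup itself doesn’t need to be anything fancy; a root crontab entry along these lines copies the card’s contents to a mounted share (hostname and paths are placeholders, adapt to your file server):

  # 03:00 every night: sync the running system to the file-server share mounted at /mnt/backup
  0 3 * * *  rsync -aAX --delete --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run --exclude=/tmp --exclude=/mnt / /mnt/backup/openhab-pi/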

2 Likes