Advice for High-Performance HA openHAB Server Hardware

It doesn’t matter whether this is a Wi-Fi relay or, for example, a Z-Wave relay; the principle is the same. The active RPi continuously runs a monitoring rule that checks for periodic updates from Z-Wave sensors (e.g. I assume the Z-Wave stick has died if I stop receiving them; MQTT connections to check that Ethernet is up, etc.) and regularly toggles a watchdog relay over Wi-Fi. If the Wi-Fi signal disappears, the changeover occurs.
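The staleness check behind such a monitoring rule can be sketched in plain shell. This is a hypothetical illustration, not the poster's actual rule: the threshold, the heartbeat source, and the relay action are all made-up placeholders.

```shell
#!/bin/sh
# Hypothetical watchdog sketch: the primary is considered dead if the
# last heartbeat (an epoch timestamp updated whenever a Z-Wave sensor
# reports in) is older than MAX_AGE seconds. Relay toggling is a stub.
MAX_AGE=300

heartbeat_stale() {
    # $1 = epoch seconds of last heartbeat, $2 = current epoch seconds
    [ $(( $2 - $1 )) -gt "$MAX_AGE" ]
}

now=$(date +%s)
last=$(( now - 600 ))           # pretend the last update was 10 min ago
if heartbeat_stale "$last" "$now"; then
    echo "primary stale: toggle watchdog relay here"
else
    echo "primary ok"
fi
```

In a real setup the `last` timestamp would be written by whatever subscribes to the sensor updates, and the `echo` replaced by the Wi-Fi relay call.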

And how do you ensure that changes are synchronized to the backup (particularly with two ZWave controllers)?

When I update the Z-Wave network (which now happens infrequently) I clone the changes to the backup Z-Wave controller.
But… you know what? Recently I stopped caring about synchronizing, because the main function of the backup controller is to keep the most critical things running. Furthermore, when it starts up, it immediately launches an annoying email notification service: “Backup controller is active! Check your home!” So if a couple of newly added Z-Wave nodes are not controllable by the backup controller, who cares? You have more serious trouble to check.


Yes, I had such a solution before, but since we go on vacations etc., I just automated this process.


That’s a really good point to emphasize. I don’t worry about having an immediate backup because I don’t have any critical automation, and I suspect that this would be the case for many users. However, if someone does need a backup, it makes sense to focus on the bare necessities.

I agree; that is essentially what I suggest too when I recommend creating a spare SD card and having spares ready for every HW component. Read the Amanda docs I already linked to.

Stop focusing on server HW, that’s pretty low risk.
Invest in a reliable and fast restore procedure instead.


For my home setup I used to have a cold-spare RPi ready to power up. But to use it you need physical access to the site, unless you go for some hand-crafted active-standby solution. The move to a VM was such a relief, as now I can recover everything from a remote interface. Drawing a conclusion from this thread, I definitely want to go with some version of a virtualized environment, as that also solves the issue of reliable disk/storage that SBCs generally lack.

@Bruce_Osborne For my home setup, where I do have Z-Wave, I got it working reliably in a VM environment by using an additional RPi with a Z-Wave dongle. It runs a port server that the VM connects to. There is a separate thread detailing that.

For this enterprise customer I will only use MQTT for everything.


Enterprise is a whole different thing. Next time you start a thread, please review this tutorial first and present the needed information in the initial post.

How to ask a good question / Help Us Help You - Tutorials & Examples - openHAB Community

I use Proxmox and have two similar servers with ZFS to replicate content. Of course I have some redundancy built into the server itself, but if something breaks, the spare will boot up and start any VM that is not running on the main server from the last snapshot it received (in my case, nightly). I can recommend that; it’s wife-proof. But I need to move the USB dongle to a networked solution…


Hi, can you please share how you achieved this? Not step by step, but in general.

You can have Proxmox running on a single node, managing your VMs.
Extra nodes can be added at a later stage, so that you can utilise the VM replication mechanism. The minimum replication period is one minute, so in case of a node failover you have a pretty much identical state of the VM when it is started on a working node.

The real problem is the availability of the USB-connected devices, i.e. in my case the Zigbee and Modbus dongles. They would have to be doubled and connected to both Proxmox nodes.

I tried it :slight_smile:
Then I decided to shut down one server and got a quorum error.
Proxmox does not like it when the other node goes to sleep.

I’m now reading through the Proxmox documentation, and it looks like a very nice solution. I have two questions I can’t find a straight answer to.

  1. If I have a 3-node cluster, is the cluster manager running on one of the nodes, or does it run on all three so that, if one server fails, it keeps running on the other two?
  2. Do I really need shared networked storage for such a config? Would I still need a NAS for that, or can it be done with Ceph, holding the VM images/backups within the 3-node cluster?
  1. I’m not entirely sure about the internal mechanism Proxmox uses to decide which cluster node is the current manager, or about its other fencing mechanisms, but you can connect to any node via the web interface (:8006) and control the cluster from there. I haven’t tried power-cycling one of the nodes to see how my “cluster” behaves, but it remained operational during one of the server reboots. (Although only the VMs on the node that was not rebooted remained operational, as I didn’t migrate them.)
  2. From my experience, you don’t need shared storage. Provided that you are not moving massive amounts of data, a VM migration simply moves the virtual disk from local storage on node A to node B in a couple of minutes without pausing the VM (at least when ZFS is used for local storage).
    As far as I remember, all mount points have to be available on all nodes. So if you have a single physical disk, like I do for backups, connected to node B, you can export it using NFS and have node A and node B mount it. I would say Ceph is overkill for a home network :slight_smile:
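As a sketch, exporting that backup disk from node B over NFS could look like the fragment below. The path and subnet are examples, not taken from the post.

```
# /etc/exports on node B (path and allowed subnet are hypothetical)
/backup  192.168.1.0/24(rw,sync,no_subtree_check)
```

After editing the file, `exportfs -ra` on node B reloads the exports, and node A can mount the share with something like `mount -t nfs nodeB:/backup /backup`.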

I have a step-by-step guide in the works, but before everybody jumps on the wagon, here is a preview. Let me start with the overview; that is my Proxmox machine.

As you can see in the middle, this is a very cheap CPU without much power or RAM, so the total power consumption is about 15 W, but it is not busy at all.
I have

  • openhab (don’t need explain that :smiley: )
  • TIG is Telegraf, InfluxDB, Grafana.
  • Shinobi does my IP-camera monitoring
  • UnifiController (Network Manager)
  • Fileserver (just a plain ubuntu with SMB share)
  • Nextcloud (this is where I share my documents/pictures between my devices/people)
  • Mosquitto (MQTT broker)
  • OPNSense (Firewall etc. )

They all share the same host, running on a RAID-1 pair of 2 TB HDDs with a 500 GB NVMe stick. The stick serves as read/write cache, which is easily done with ZFS.

I haven’t had this problem, but I had other problems, so I decided NOT to use a Proxmox cluster. I just have two separate Proxmox hosts with two different IPs; each has two LAN ports (one goes to the router, one to the LAN). No “sharing”; that was too complicated.

So because I have two separate machines, I can run the simple command
zfs send … | ssh … | zfs receive …
on one side, and it will move the snapshot to a different machine. If that is combined with zfs-auto-snapshot (a script that manages when snapshots are taken, e.g. every 15 min keep 8, hourly keep 24, daily keep 7, weekly keep 4) and zfs-backup (which calculates the difference between the newest snapshot on the target and the newest snapshot on the host, then sends this incremental data to the target), you have two very similar machines. In my case, if one machine breaks, I will continue to have all those services from last night’s snapshot.

And for the failover part, I have a couple of simple scripts that run as cron jobs: they ping a service and, if it is down, WOL the spare machine; on the spare host, another cron job pings the services and starts the VMs (on that spare host).
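Such a cron job can be sketched as below. This is a hypothetical version, not the author's script: the host name and MAC address are placeholders, and the actual wake-on-LAN call is only printed.

```shell
#!/bin/sh
# Hypothetical failover check run from cron: treat the service as down
# after three failed pings, then wake the spare host via wake-on-LAN.
host_down() {
    fails=0
    for i in 1 2 3; do
        ping -c 1 -W 2 "$1" >/dev/null 2>&1 || fails=$((fails + 1))
    done
    [ "$fails" -eq 3 ]
}

if host_down openhab.example.lan; then
    # placeholder MAC; in a real script this would actually run,
    # e.g. using the 'wakeonlan' package
    echo "wakeonlan aa:bb:cc:dd:ee:ff"
fi
```

Requiring all three pings to fail avoids waking the spare on a single dropped packet.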

Indeed, one of the next steps is to move the USB devices (Z-Wave and power meter) to a little Raspberry Pi that either channels them as virtual USB devices to the hosts… or I just install openHAB on that Raspberry Pi and use some MQTT or HTTP scripting to proxy the commands.

Hope that helps to get you started, or you can wait a couple of days for my more detailed step-by-step guide.


Here is the solution that works flawlessly. I have never had a problem with Z-Wave since, as it also helped solve a range problem (the server room is in the basement).

Getting back to Proxmox, thank you for the guide. I still have one more question. Does a Proxmox VM in a cluster work in a way that it is hard-assigned to one of the node machines, so that when that node fails the VM needs to be recovered or migrated from backup to another node?
Or can it be “distributed” among more than a single node for HA fail protection, so that when a node dies there is no downtime, just maybe reduced performance?

By the time I got to the end of the thread, it had taken a different turn, toward virtualization being the answer. I don’t really agree with that, but it seems to be the way the OP is currently leaning.

Even in that route, the Zwave (or whatever other) USB stick(s) seem to be a sticking point. Just one of many problems I would imagine going that way. I am not sure what the answer is to that, and I guess I won’t bother throwing out any other options as the point seems moot now.

But as I had read all the way from beginning, I still wanted to include some replies to earlier posts.

The only ARM boards I am aware of with ECC (not talking about data-center stuff here, but things readily available to us mere mortals) are the Kobol Helios4 (no longer available) and their new version, the Helios64, which is soon to start shipping (it may already have started). But even the first batches of the Helios64 will not have ECC; that will come in a later version.

Guess which board I am anxiously awaiting as my next purchase? :wink:

And yet, it gets recommended, over and over, on this very forum.

Sorry, Russ. I realize you may have heard all the arguments before in other threads. And I have new appreciation for your distaste for strife. However the debate is extremely relevant to the topic. Not only for OP, but anyone else coming along later, as well.

No, not “SBC in general”, only RPi are lacking this. I guess you must have completely missed this post?

(a moot point by now, perhaps)

My point was that the Pi is not broken. It was not designed primarily for the purpose that person wants. It does not mean the Pi is flawed, just not the optimal tool for the task.

Debating the technical merits of the RPi with respect to the OP’s question is relevant. That’s not the direction the conversation was going, and I think @igorp was right to point out that we’re off topic and try to get it back on track.

And to be clear, if people can say why they like RPis, then people can say why they dislike them. I’ve actually been more inclined toward saying that while many users are happy to use RPis, some users have concerns about them. And if an OP asks what those concerns are, I can point them to conversations where it’s been discussed at length. Or better yet, introduce them to people with those perspectives.

Now we’re definitely off topic…:wink:


Just to put my 2 cents into the RPi thing… I love RPis, and I’m probably at the high end of the number of Pis running anyone’s house, 14 being the number :slight_smile:
But they are not HA, and the SD card will fail sooner or later. Even a high-write-endurance DVR SD card failed in a Pi after 1.5 years of 1-second persistence writing.

That’s why I was asking for opinions on HA clustering; if that could be done with 4–5 RPis, it could be a solution (things like cloverPI are being developed). I would also love it to be a more or less industrial solution for a rack or DIN rail.

I think that for research purposes I will invest in a trio of Odroid H2+ units, as they have more reliable NVMe storage, and someone has designed a nice rack case for up to 8 blades. Still, it would be a home-made solution. My original post’s intention was to ask whether such a turnkey solution with redundant PSUs and maybe an integrated switch is already commercially available, and the conclusion seems to be that it is not.


And that’s why I told you to check the ZRAM implementation in openHABian, which greatly reduces the number of writes, hence increasing SD lifetime by about the same factor.
Pragmatic and to the point.


I use a USB-to-SATA adapter that cost only 8 GBP with an RPi 3 B+ and a PoE HAT. This has been working flawlessly for around two years. I have connected a passive four-way USB hub to the Pi, into which I have plugged a Z-Wave stick, an RFXcom 433 MHz stick and an 868 MHz CUL stick, all served out across the network using ser2net.
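For reference, a classic (pre-4.x) ser2net configuration line for one such stick could look like this; the TCP port, device path, and baud rate are examples, not the poster’s actual settings.

```
# /etc/ser2net.conf on the Pi: expose one serial stick as raw TCP port 3333
# format: <TCP port>:<state>:<timeout>:<device>:<options>
3333:raw:0:/dev/ttyACM0:115200 8DATABITS NONE 1STOPBIT
```

On the consuming machine, socat can turn that TCP port back into a local serial device, e.g. `socat pty,link=/dev/ttyNET0,raw tcp:pi.lan:3333` (device name and host again hypothetical).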

The USB-to-SATA adapter allowed me to connect a spare SSD to the Pi, so I don’t need to worry about SD cards failing.

The PoE hat allowed me to locate the Pi on top of a cupboard where there was no mains supply for a PSU. I could install a single Cat5e cable instead of routing a mains cable from the nearest socket.