Thoughts on Best Practice for Federating Multiple OH Installations

Background: I’ve been running openHAB since the early days of 2.x, and have made a considerable investment in hardware over the years. I recently updated to 4.x, but two bindings that are critical for me were never migrated to 4.x. To be honest, they weren’t officially supported in 3.x either; the last official release was for OH 2.5.12. However, they were updated by several users and ran fine when placed in the addons folder of OH 3.x.

My OH installs run in Docker containers on a Linux server (Unraid). To get around the lack of OH 4.x support for these critical bindings, I am using the excellent remoteOpenHAB binding, which I believe was developed and is supported by @Lolodomo, to federate an OH 4.0.2 Docker install with an OH 3.4.3 Docker install. Other than a race condition that I inadvertently created between the two installations, and subsequently solved, this is working fine for me. My legacy bindings run in the OH 3.4.3 install, and the OH 4.0.2 install has everything else, including the UI, rules, etc.
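For anyone unfamiliar with that setup, a minimal sketch of the text-based configuration follows; the Thing ID, host IP, and item name here are hypothetical, and the remoteOpenHAB binding dynamically creates one channel per remote Item:

```
// .things file on the OH 4.x instance, pointing at the OH 3.x instance
Thing remoteopenhab:server:oh3 "Legacy OH 3.4.3" [ host="192.168.1.50", port=8080 ]
```

```
// .items file on the OH 4.x instance, mirroring a remote Item of the same name
Switch LivingRoom_Light "Living Room Light" { channel="remoteopenhab:server:oh3:LivingRoom_Light" }
```

Commands sent to the mirrored Item on OH 4.x are relayed to the OH 3.x Item, and state updates flow back the other way.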

My question: Is there a best practice, or are there strong opinions, as to where rules should reside when running multiple linked/coordinated OH installations? Should the rules reside in the installation where the bindings are installed, should they reside primarily with the UI and the more modern code base, or doesn’t it matter as long as it works? All of my rules and items are text based, and at the moment the rules reside almost exclusively in the OH 4.0.2 installation. I say almost exclusively because I found a few rules that I needed to relocate to the OH 3.4.3 install to avoid the race condition I mentioned earlier. Perhaps with some creative thought I could rewrite those rules to avoid the race condition and relocate them to OH 4.x, but it works for now, so I am content to leave it as is for the time being.

I would imagine that as OH advances and bindings are left stranded, this type of install will become more of the norm, so I am interested to hear users’ thoughts on what might be a best practice for where rules should reside when federating multiple OH installations.

What are these critical bindings?

The critical bindings are ISY/Insteon and Panasonic TV. I know there is an Insteon binding that I could transition to, but to be honest it isn’t a substitute for ISY, and I like the idea of maintaining a separate microcontroller for my Insteon installation, as this covers all of my lighting, gates, garage openers, smoke detectors, etc.

The Panasonic TV binding is used for my reference video displays, which are plasma panels and of course no longer produced, but they continue to serve me very well.

Why is that? If there is anything missing, why not file a feature request?

There was work on the PanasonicTV binding as well.

I can’t claim “best practice”, but here is some related input. I have an RPi 3 (SD card, basement level, Z-Wave with 25 devices) and an RPi 4 2GB with SSD (main floor: 4 IP cameras, Z-Wave with 34 devices, MySQL, the MQTT broker for both Pis, remoteOpenHAB, and, to your point, most of the rules). Both devices run OH 4.0.2.

I started with the RPi 3 about six years ago, but when the IP camera binding was developed a few years back, I bought the RPi 4. Originally I retired the RPi 3, but I returned it to service to create the basement-level Z-Wave network after I bought a number of energy monitors. To keep within the limits of the RPi 3, all readings get sent to the RPi 4 for MySQL storage (I have about 1.5 years of energy readings), and the SSD serves as a cheap NVR capturing short GIFs of camera activity. I have all my graphs and most of my rules on the RPi 4. I do not notice delays, but most of the data from the RPi 3 is not super time sensitive.

Related to another post (that is too long to post on), my memory use on the RPi 3, including ZRAM, runs about 500 MB with OH 4.0.2.

The only rules that I left on the RPi 3 were “Refresh” rules, as they did not work backward through remoteOpenHAB.

Best practice is to have a single controller, so keep rules on OH4 only. That should also avoid race conditions in the first place.
No, I don’t think this is getting more common; there may be a temporary need to support obsolete or abandoned hardware, but no more than that. It’s an edge case.
I’d suggest you create a bounty for getting those bindings updated to 4.x.

Yes, the Panasonic TV binding was updated, but that work/maintenance stopped as of OH 3.1. That is the binding I am currently using, and it works well through OH 3.4.3.

As for the Insteon binding, the developers are well aware of the differences between using an ISY microcontroller interfaced to OH and using OH alone to configure Insteon devices. Further, I like the idea of redundant systems, so if my OH system isn’t available for some reason, I have a fallback to the ISY microcontroller. So while the Insteon binding remains an option, I prefer to stick with ISY for my use case. Unfortunately for me, the ISY/Insteon binding’s last update was also for OH 3.1, and like the Panasonic TV binding it was never officially supported past OH 2.5.12.

I don’t believe it would take much to get them working with OH 4.x. I tried myself by setting up a development environment to update the headers from OH 3.x to OH 4.x, which I believe is likely all that is needed, but I didn’t get very far in recompiling. My programming skills are limited, and my understanding of the IDE is even more limited. I was able to load both bindings using the addons folder in 4.0.x, and Things were discovered, but as expected the binding itself was not available in the UI. Without the bindings being updated for 4.0.x, I didn’t feel comfortable that this was a long-term solution, so I am pursuing the multiple Docker installs of OH as my best, most reliable option going forward.

I would minimize what runs on the old OH 3 instance to just the Things and Items needed from those legacy bindings. Everything else should be on the new OH 4 instance including rules and UI. There are of course edge cases (like your race condition) where a rule might need to be deployed to the older OH instance, but that rule should be to address a problem or work around a limitation local to that OH instance (e.g. debounce a noisy sensor, timing issues, etc.).
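As an illustration of the kind of instance-local rule mentioned above, a debounce on the legacy instance might look like this in Rules DSL (the MotionSensor_* Items and the 2-second window are hypothetical, just a sketch of the pattern):

```
var Timer debounceTimer = null

rule "Debounce noisy motion sensor"
when
    Item MotionSensor_Raw changed
then
    // Restart the timer on every change; only forward the state
    // once it has been stable for 2 seconds
    debounceTimer?.cancel
    debounceTimer = createTimer(now.plusSeconds(2), [ |
        MotionSensor_Debounced.postUpdate(MotionSensor_Raw.state)
        debounceTimer = null
    ])
end
```

Only the debounced Item would then need to be exposed to the OH 4.x instance via remoteOpenHAB.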

However, there are different types of federation. If the remote OH instance is supposed to largely operate autonomously (e.g. it’s controlling another house) any rules that only have to do with that separate instance should reside on that instance. If the link between the two should break, you’d want these two instances to still be able to operate independently. But that’s not the use case you’ve laid out here.

If that’s the only reason for connecting the two, you’d probably be better off severing the remote openHAB connection entirely and setting up the RPi 3’s persistence to save directly to the RPi 4’s MySQL. No connection between the two is required.
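If the RPi 3 writes straight to the RPi 4’s MySQL as suggested, the JDBC persistence service on the RPi 3 only needs the remote URL. A sketch, assuming the JDBC MySQL persistence add-on is installed; the host IP, credentials, and group name are hypothetical:

```
# services/jdbc.cfg on the RPi 3
url=jdbc:mysql://192.168.1.40:3306/openhab
user=openhab
password=secret
```

```
// persistence/jdbc.persist on the RPi 3
Strategies {
    default = everyChange
}
Items {
    gEnergy* : strategy = everyChange
}
```

Here gEnergy is assumed to be a Group containing the energy-monitor Items, so every member gets persisted on change.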

Note: this works because, as of the last time I helped someone with a related problem, your OH doesn’t necessarily have to have the Item declared in order to chart it in MainUI, as long as the data is in the DB.

I believe that’s the case for all add-ons added that way. But even if they do not show up under Settings → Bindings, they should still show up in the Inbox and let you create bridges manually and scan for Things that can be autodiscovered. Maybe it’s working and you just didn’t know where to look?


Thank you to all who took the time to provide input and guidance. It is unanimous that rules should be implemented on the most modern version of OH, and not alongside the bindings in the legacy OH installation. This is the path I had taken, as I outlined in my original post. I just wanted some additional validation that there wasn’t a better approach, or that I wasn’t missing something. It’s only been a short time, but I’m happy to say everything is working well.

Is there any default feature that would allow creating a multi-host setup as described here?

At least for me that page comes up blank, so :person_shrugging:.

What OH offers is the Remote openHAB add-on, which will let you mirror the Items defined in one OH instance in another instance.

It also has the MQTT Event Bus, which will publish and subscribe Item states and commands to a structured MQTT topic hierarchy that another OH instance (or something else entirely) can publish/subscribe to/from.
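For context, the event-bus topic layout conventionally looks something like the following; the exact topic names depend on how the event-bus rule template or broker bridge is configured, so treat these as illustrative only:

```
openhab/out/<ItemName>/state      # states this instance publishes
openhab/out/<ItemName>/command    # commands this instance publishes
openhab/in/<ItemName>/state       # states this instance subscribes to
openhab/in/<ItemName>/command     # commands this instance subscribes to
```

Because the payloads are plain MQTT messages, any MQTT-capable system can participate, not just another OH instance.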

If you are looking for something like load balancing or automatic failover, OH provides nothing out of the box to support that.

Thanks for the clarification and sharing :slightly_smiling_face: