Binding for some sort of "intelligent" bridging of multiple OH2 instances

Basically I would like an easy way to bridge multiple instances of OpenHAB together. I am aware of the MQTT event bus bridging facility, which works excellently (I am actually using it). My problem is that I want to control exactly what I send to my MQTT bus, so I have ended up with a lot of MQTT mappings for each Item that I would like to share between different applications and/or OpenHAB instances.
In other words, I would like to manage what is sent to my MQTT server, both the naming on the MQTT bus and what is relevant to share. I have been wondering if I am the only one who has this need?

My overall idea is to obtain this by implementing an MQTT Bridge binding. Basically I would then use an OpenHAB group to control which Items should be broadcast to different “channels” on my MQTT server. Each group would then represent a bridge type in the binding, and each member of the group could then be auto-discovered.
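
Just to make the idea concrete: outside of a real binding, the concept could be prototyped with a few lines against the REST API, where the group decides what gets published. The group name, broker address and topic prefix below are only assumptions for this sketch, not part of any existing binding:

import json
import urllib.request

import paho.mqtt.client as mqtt

OH_HOST = "http://localhost:8080"     # openHAB instance to read from (assumption)
GROUP = "gBridge"                     # hypothetical group that defines what gets bridged
TOPIC_PREFIX = "mqtt-eventbus/slave"  # hypothetical topic "channel" for this group

def fetch_group_members(host, group):
    # Read the group item via the openHAB REST API and return its members.
    with urllib.request.urlopen(f"{host}/rest/items/{group}") as resp:
        return json.load(resp).get("members", [])

client = mqtt.Client()
client.connect("localhost", 1883)

# Publish the current state of every group member to its own topic, so that
# only Items placed in the group ever reach the MQTT server.
for member in fetch_group_members(OH_HOST, GROUP):
    client.publish(f"{TOPIC_PREFIX}/{member['name']}/state", member["state"], retain=True)

client.disconnect()

A binding would of course subscribe to item events instead of polling, but the group-as-filter idea is the same.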

I could use this both for making my OpenHAB setup redundant (one idea is that the binding could be set up to handle a primary and a secondary OpenHAB instance, though I haven’t qualified that idea yet), and for a more obvious case (for me at least): having different OpenHAB instances for retrieving data. E.g. I could have one Raspberry Pi with some proprietary sensor (e.g. a DHT11 humidity sensor), and then bridge an instance of OH on this Raspberry Pi to my main OpenHAB instance.

There might be some easier way of obtaining this? But I really miss a UI approach to it, instead of how it is done now, which works really well but looks too much like OH1 :slight_smile:

Hm, I do understand your motivation. I’ve spent quite some thought on this too, but I was ambivalent about it for quite some time and finally went back to preferring a single OH master instance. I wouldn’t mind if someone built that MQTT instance bridge for you and me, but I imagine the potential authors have also made up their minds on this and are likely to share my view.

OH works based on ‘shared memory’ (a.k.a. a single event bus), i.e. any state is known at any time and any switch can be triggered from any rule. If you go for distributed data and, ultimately, a multi-master command model, this will complicate the programming logic a lot. Lots of errors are still to be made and lots of tedious debugging work is hiding here … Also, you need to duplicate (and maintain in a coherent fashion) a lot of configuration data.
Doing the overall math, it’s possibly not worth the effort.

So why not stick with a master-slave system: run a simple script like this one (thanks @ben_jones12) on your remote Pi to map in and out pins directly to MQTT, with no remote OH instance in between. Or use Arduino-based hardware instead of the Pi to also talk MQTT to the outside (= a single, central OH instance). There are even cheap Chinese devices coming up that have WiFi and simple MQTT-based control built in.
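
For reference, the core of such a script is only a handful of lines. This is a minimal sketch of the idea (not ben_jones12’s actual script); the pin numbers, broker address and topics are assumptions:

import RPi.GPIO as GPIO
import paho.mqtt.client as mqtt

IN_PIN, OUT_PIN = 17, 27          # hypothetical input/output pins
BROKER = "192.168.1.10"           # hypothetical MQTT broker address

GPIO.setmode(GPIO.BCM)
GPIO.setup(IN_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(OUT_PIN, GPIO.OUT)

client = mqtt.Client()

def on_connect(client, userdata, flags, rc):
    # Let the central OH instance drive the output pin via MQTT commands.
    client.subscribe("pi1/out/27/command")

def on_message(client, userdata, msg):
    GPIO.output(OUT_PIN, msg.payload == b"ON")

def report(channel):
    # Publish every change of the input pin as a state for the central OH instance.
    client.publish("pi1/in/17/state", "OPEN" if GPIO.input(channel) else "CLOSED")

client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883)
GPIO.add_event_detect(IN_PIN, GPIO.BOTH, callback=report, bouncetime=200)
client.loop_forever()

On the central OH instance the matching items then just use the plain MQTT binding, with no second OH instance involved.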

If you really need multiple OH instances (hard to imagine a single one isn’t sufficient), you can still use partial item state and command MQTT bridging for the relevant items only, as you already do.

I am fully aware of Ben’s work; the “problem” is that it requires a script to run on another Raspberry. Seen from my perspective it sure will work, but it will just clutter things, making it hard(er) to identify problems. My thinking is inspired by Node-RED, which is more or less the same idea as OpenHAB, i.e. some infrastructure to bind things together.

The main reason I want the distributed infrastructure is that I could then have one Raspberry Pi mounted somewhere acting as a gateway to my Z-Wave network, another one could have a One-Wire interface, and all inputs could be routed to my main instance of OH. The thought about redundancy is just something extra, not the reason for it. The basic idea here is that I could split up my OpenHAB installation for whatever reason I might have:

  • Conflicting hardware (two different bindings requiring the same hardware)
  • Distribute load
  • Monitor devices where it isn’t possible (or is difficult) to run wires to the same unit.
  • Using OpenHAB as the basic integration platform everywhere, to streamline administration.

You feel it’s simpler to run another OH instance instead of a simple script? I think that quite the opposite is true.
Rules only ever run on the central instance, and all OH config is only ever done there. You will be cluttering things if you duplicate the OH instance or parts thereof.

That may be a reason in theory, but I haven’t yet met a case where this cannot be solved. You can define the interfaces to be used for each binding.

Any Pi (2 or 3) should not ever be a bottleneck (unless you’re doing something substantially wrong in your rules). So there is no need for, or benefit in, distributing load. Worse even, having multiple OH instances talk to each other will add overhead, i.e. increase load.

That you can do in both setups. From within central OH it’s done via MQTT to the remote Pi, and if you feel that’s not sufficient, you can complement it with local monitoring scripts.

Exactly my point: why operate and sync TWO or more OH instances when a single one does fine.

Not exactly. I think of using OpenHAB as the platform (i.e. the first choice). In my world it makes things simpler to use the same platform everywhere. I still have to decide what my main instance is (for all the reasons you mention), but I wouldn’t have to look for five different scripts on different Raspberries; I would always know that on each machine everything I integrate with is represented by a binding. Each instance is then bound together using MQTT.

I am not sure I fully agree on that point, but to some extent yes. It depends on how you ask them to talk together. My point is exactly that if I use the event bus strategy it will add overhead, because a lot of data has to be transferred without any need.

I actually have both an OH1 and an OH2 instance running, because I don’t want to migrate fully to OH2 yet. Some of my sitemaps were causing issues, and persistence seems to cause issues. But I really needed the Z-Wave improvements from OH2, so I have actually created this bridge manually. It runs perfectly on the same Raspberry Pi. But I need the bridge.

I don’t have anything near load issues, but I have heard/read of others that have split their OH for that reason.

That is what I really want to avoid, as mentioned above. I agree it could tend to become a sort of religion. But I believe that making things uniform also makes it more obvious what is going on. It is hard to guess which Python script delivers which information a year later when things break down.

Well, sending MQTT messages isn’t exactly rocket science.
But I guess we have quite different views on how our architecture should be. I would like to use OH as a generic integration platform for all my devices, thus having a number of OpenHAB instances each contributing some data to the main instance (with rules and so on). I have already obtained that by mapping each and every item to an in and/or out MQTT queue. I just wanted to see if some binding could be defined to help here.

You think my solution is cluttered, whereas I think your script approach will clutter things. Each one has its own pros and cons. In my case I believe the OpenHAB-as-first-choice approach is best, whereas you believe the lightweight approach is best.

Agreed. Didn’t want to start an argument on that, it’s just that I feel those few people capable of programming that type of inter-OH binding you’re looking for haven’t done it for some of the same reasons as those I gave, i.e. their view on architecture is similar to mine. Anyway, just guessing, and good luck, maybe you still find someone to build it.

I can actually build it myself, but I don’t want to build something just because I can…

Interesting, and a different justification for the bridge.
Do you use a single stick? Or is the 2nd one linked to OH2 a SUC?

One stick, but one of the reasons I did think of redundancy was the master / slave possibility with the stick.
Basically I just want to avoid a lot of individual mapping, e.g. a graphical approach to the eventbus.

And does it work to have two different Java processes concurrently lock and use the same USB device?

On ZWave redundancy, that’s a complicated one, see also this thread. Not much point in having two OH servers but a single ZWave controller/network only.

Oh, sorry, I have misunderstood you. No, that wouldn’t work.
In my case I have Z-Wave in my OH2 and sitemaps, rules etc. in OH1.
I have a file called zwave.items in both OH1 and OH2.

In OH2 one item looks like this:

Switch switchLightSmallSpot	<light> {channel="zwave:device:68ecb8f1:node6:switch_binary1", mqtt="<[oh2-eventbus-in:mqtt-eventbus/slave/outdoor/switch/gableswitch/state1:command:default], >[oh2-eventbus-out:mqtt-eventbus/master/outdoor/switch/gableswitch/state1:state:*:default]" }

In OH1 the same item looks like:

Switch switchLightSmallSpot	<light> {mqtt="<[oh1-eventbus:mqtt-eventbus/master/outdoor/switch/gableswitch/state1:state:default], >[oh1-eventbus:mqtt-eventbus/slave/outdoor/switch/gableswitch/state1:command:*:default]"}

I like the idea. I do think, though, that there is some work going on in this direction in the 2.0 MQTT binding. I remember reading something about it. It is worth a look.

I’d second this and point you to another python script which supports more than just GPIO (I’m partial since I largely wrote it). Options are good.

There are use cases where it makes a lot of sense to have multiple instances. The one that comes to mind is watou’s setup, where he has remote instances that monitor his aging parent and feed into his central OH server. It is admittedly an edge case, but for me a fascinating one that really is a great use of home automation.

Why is this any different from running some program on another Raspberry (i.e. openHAB)? I personally find debugging and finding problems in these simple Python scripts WAY easier than debugging any sort of problem in OH. And when you combine multiple OH instances… that is cluttering things up.

In this scenario wouldn’t you want to send everything? The “master” has to know everything and be able to talk to everything. Though clearly the slave OH instances only need to receive those messages they care about/control so I see the use in that direction.

One case I can think of is where you have two wireless devices which interfere with each other’s signals. In that case, one solution could be to put that device on a separate computer, though in that case I’d probably go with socat instead of a separate OH.

With my script, I have four instances of the same script each configured differently on three different Pis and computers. And even with this configuration it is WAY easier to update, debug, and implement new capabilities than it would be to deal with the MQTT Event Bus. It’s just one anecdote but I don’t think it is an uncommon one.

And just for the record, this script:

  • executes shell commands (I run OH in Docker so Exec binding isn’t terribly useful)
  • reports state on GPIO pins
  • activates GPIO pins based on MQTT command from OH
  • reports presence or absence of nearby BT devices
  • reports Amazon Dash button presses
  • acts as a bridge to/from my RFM69 wireless network
  • queries the network for and reports the addresses of my Rokus

And despite these scripts doing all of that in different combinations across four machines, updating and maintaining them is dead simple, usually just editing a .ini file and checking it in to my git server.

Debugging is localized, network traffic is light, and because I’m using MQTT I even have a built-in health and status feature that tells me when a node goes down.

At least in my case, a simple cat chimera.ini tells me exactly what information my script is delivering, and how, on the machine I named chimera, assuming I’d forgotten (which hasn’t happened yet).

No, but dealing with circular messages and infinite feedback loops can become quite challenging even with careful selection of what gets published.
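
For illustration, the usual symptom is a state that gets re-published as soon as it is received and then bounces between the instances. One common way to damp that is to remember the last value received per item and never publish it back. A sketch only, loosely modeled on the topics shown earlier in the thread; exact names and addresses are assumptions:

import paho.mqtt.client as mqtt

last_received = {}   # last value we got from the other instance, per item name

def on_message(client, userdata, msg):
    # State pushed by the other instance; remember it so we don't echo it back.
    item = msg.topic.split("/")[-2]
    last_received[item] = msg.payload.decode()

def publish_state(client, item, value):
    if last_received.get(item) == value:
        return                      # just the echo of what we received, drop it
    client.publish(f"mqtt-eventbus/master/{item}/state", value)

client = mqtt.Client()
client.on_message = on_message
client.connect("192.168.1.10", 1883)
client.subscribe("mqtt-eventbus/slave/+/state")
client.loop_start()

publish_state(client, "switchLightSmallSpot", "ON")

And that only covers the simple cases; command/state loops that pass through rules are harder to untangle.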

I think what you may be missing is that both @mstormi and I have actually tried to do this. And I’ve tried to help lots of others deal with this. We are not talking theoretics. We have direct experience and battle scars. We KNOW it is more complicated. OH simply was not architected to run like this, the way Node-RED was. This is why I wrote my script in the first place.

I know it feels like it would be better and easier to have everything be the same, but when everything is OH, based on our experience, it is most definitely not easier.

But note, both of us have said we like the idea of this binding and see lots of applicable uses for it.

Yes and no. If I have, let’s say, three Raspberries, where A is my master and B and C are secondary instances, then I don’t want B and C to receive ANY MQTT messages. I am not looking for redundancy; I just mentioned that as a possibility. But I want B to report states to A and C to report states to A. I agree that could be done just using the MQTT event bus, but I would also like to be able to filter what I publish. Some of my Z-Wave things are used in my home-grown “alarm system” while others aren’t. For this reason I have a program running that acts as an alarm monitor (basically it alerts if a sensor is triggered and nobody is at home). I want those Items to be published on an Alarm topic in MQTT; other things go on another MQTT topic.

As for the scripts, I would dare to say that they have grown in the OH1 world, because that was the possibility there. But I believe OH2 tends to move a little more towards a GUI, thus making it more user friendly for more people. I would never claim that the scripts don’t work, but I believe the other approach has some strengths, just as scripts have theirs.

I don’t want this to be a religious discussion about the two approaches, but I would like to know if anybody is interested in such a binding, and/or if there is some work going on in OH Core that would eventually support my requirement.

Like I said, I believe there is some work in this direction in the MQTT 2.0 native binding. And both mstormi and I expressed interest in the binding.

It does, but it isn’t quite good enough yet to be the exclusive approach to configuring OH in all respects (it is getting better day by day). However, the problems we are foreseeing and have experienced are independent of the UIs. They are technical and logical problems that arise when trying to set up a master/slave distributed system using a program that was built and architected as a centralized controller. Something like Node-RED was architected and built from the start to support this sort of architecture.

And while there is a lot that could be done to make this sort of thing easier to do in OH, it will always be a bolted on capability and will always work less well than something built from the start with this as a requirement.

That sounds absolutely great, since I believe that some of my ideas might not fit into the binding concept.

Great, and by the way, thanks for pinpointing the pros and cons. It is always good to have these kinds of discussions, since they help make things more usable. I will try to think through the usage, and eventually describe a possible binding in more detail. I guess it won’t be an all-in-one Swiss army knife binding :slight_smile:

I’m confused though. The MQTT 2.0 binding IS a binding.

What parts would not fit?

My ideas of how MQTT could be used :grinning:

This work seems to be a bit stuck, see Eclipse Community Forums: Eclipse SmartHome » MQTT binding for ESH.
Would be great if it could be revived, I’d love to see an MQTT 2.0 binding (hint at @marcel_verpaalen :-)).

Seems like quite interesting work is going on here.
I saw some discussion about auto-detection from some sort of template/config file. That seems to be exactly what I need :slight_smile:

I think for now I can live with the manual mapping so I will wait and see …

I am running three instances of OpenHAB 2. Two are Raspberry Pis (with the aim of having one as a redundant standby) and the third is a laptop. I use MQTT. Essentially all three run MQTT, but the MQTT server is referenced by IP address, so all data is going to/from 192.168.1.10. In addition to the real-world I/O, the other two OHs also reference that IP address, so their data is always up to date.

My aim is to make the OH2 servers change IP address in the event of a problem, i.e. 192.168.1.10 goes to 192.168.1.30 (and stops handling MQTT) and 192.168.1.50 goes to 192.168.1.10 (and starts handling MQTT). When everything is going well this is simple to do; the issue is how to achieve it when a server fails (or how to detect failure early enough to allow either a shutdown or an IP address change).

I have existing rules which determine whether the OH instance is “in control”. Only one OH handles watchdogs for data and RTC updating/syncing.
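
For what it’s worth, one possible building block for the detection half, assuming the MQTT broker itself outlives the failed server (e.g. it runs on a third machine or is made redundant separately): the active instance registers an MQTT Last Will on a status topic, and the standby reacts when the broker publishes it. Topic names, client IDs and addresses below are assumptions, not my actual setup:

import paho.mqtt.client as mqtt

STATUS_TOPIC = "openhab/primary/status"   # hypothetical status topic
BROKER = "192.168.1.10"

# --- on the primary server ---
primary = mqtt.Client(client_id="oh-primary")
# The broker publishes "OFFLINE" for us if this client dies ungracefully.
primary.will_set(STATUS_TOPIC, "OFFLINE", qos=1, retain=True)
primary.connect(BROKER, 1883)
primary.publish(STATUS_TOPIC, "ONLINE", qos=1, retain=True)
primary.loop_start()

# --- on the standby server ---
def take_over():
    # Hypothetical hook: claim 192.168.1.10 and start handling MQTT here.
    print("primary is gone, promoting myself")

def on_message(client, userdata, msg):
    if msg.payload == b"OFFLINE":
        take_over()

standby = mqtt.Client(client_id="oh-standby")
standby.on_message = on_message
standby.connect(BROKER, 1883)
standby.subscribe(STATUS_TOPIC, qos=1)
standby.loop_forever()

The IP takeover itself, and avoiding a split brain when the old server comes back, remains the hard part.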