Connecting distributed openHAB instances

@hollladiewaldfee You could try looking into Node-RED for your distributed Raspi instances. It supports using the GPIOs out of the box and can speak back to openHAB via MQTT.

Hi Daniel,

I’m interested in this possibility too and have made a basic MQTT configuration, but I’m not sure whether the two instances are already connected to each other. I would appreciate it if you could share more details on how you realized it.



So far I have installed an MQTT broker and connected the event bus of both instances to it.
The openHAB configuration is the same on both: items, sitemaps, rules, etc.

This is the openhab.cfg:

MQTT general:

# on the 1st instance:
mqtt:pi2.clientId=pi1

# on the 2nd instance:
mqtt:pi2.clientId=pi2

MQTT event bus:

case 1:
I configured both instances to Publish/Subscribe to the same topic, like so:
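The original config block didn’t survive, but a minimal sketch of such a setup with the openHAB 1.x mqtt-eventbus binding could look like this (the topic layout is an illustration modelled on the mosquitto_sub output below, not the poster’s actual config; the broker alias pi2 matches the clientId lines above):

```
# identical on both instances:
mqtt-eventbus:broker=pi2
mqtt-eventbus:commandPublishTopic=home/openHAB/pi1/${item}/command
mqtt-eventbus:commandSubscribeTopic=home/openHAB/pi1/${item}/command
mqtt-eventbus:statePublishTopic=home/openHAB/pi1/${item}/state
mqtt-eventbus:stateSubscribeTopic=home/openHAB/pi1/${item}/state
```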


Both instances publish their updates to the MQTT broker, which can be seen with:

mosquitto_sub -v -t 'home/openHAB/pi1/mySwitch/command'
home/openHAB/pi1/mySwitch/command OFF
home/openHAB/pi1/mySwitch/command ON

but the other instance is not updated via MQTT.

I configured the instances to Publish/Subscribe vice versa:
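A sketch of this crossed ("vice versa") setup, again assuming the mqtt-eventbus binding and illustrative topic names:

```
# on pi1: publish own events, subscribe to pi2's
mqtt-eventbus:commandPublishTopic=home/openHAB/pi1/${item}/command
mqtt-eventbus:commandSubscribeTopic=home/openHAB/pi2/${item}/command

# on pi2: publish own events, subscribe to pi1's
mqtt-eventbus:commandPublishTopic=home/openHAB/pi2/${item}/command
mqtt-eventbus:commandSubscribeTopic=home/openHAB/pi1/${item}/command
```

With this layout each instance receives the other’s events, re-emits them on its own event bus, and publishes them again, so every update circulates forever.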





the result is that the first update on the bus ends up in a loop.
Pi1 updates one item, which is published to the MQTT broker, which sends it to the Pi2, which sends it back to the broker, which sends it back to Pi1…

Does anybody see the problem?

I see the problem, but I’m not sure I know the solution. The problem is that EVERYTHING gets published to the bus: when Pi1 (for example) reads in and processes an event from Pi2, it publishes that event back to the bus, and around and around we go.

I can see four ways to potentially do this:

  1. Rather than trying to keep the states of items in the two openHAB instances in sync (i.e. the same Items with the same states in both instances) instead keep them separate. For those items that Pi1 needs from Pi2, name them differently so when the events from the bus are processed it doesn’t get republished to the bus.

  2. Add some custom logic to the system to either keep openHAB from republishing the events that came from the other Pi or to filter the events so that the second time around they are not processed and the loop stops. I see hints in the back of my mind that this might be possible through the rules but I suspect it would require changes to OH’s core code.

  3. Abandon the eventbus approach and implement the publish and subscribe of the items between the two Pis directly through Items using the MQTT binding.

  4. Abandon one of the OH instances and implement the sensor reporting and actuator work through some other means (e.g. Python, Node-Red, etc).

This is not working either. I removed all items, sitemaps, scripts and rules from Pi2 and configured the event bus on both Pis to publish/subscribe to the same topics. As soon as any item on Pi1 receives a command, the loop starts again.

The next approach was to publish all eventbus messages from the “fence Pi” to one MQTT topic and subscribe the other Pi to it.
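A sketch of this one-directional setup (topic name is illustrative; only the fence Pi publishes, and only the other Pi subscribes):

```
# openhab.cfg on the "fence Pi" — publish only:
mqtt-eventbus:broker=pi2
mqtt-eventbus:statePublishTopic=home/openHAB/fence/${item}/state

# openhab.cfg on the other Pi — subscribe only:
mqtt-eventbus:broker=pi2
mqtt-eventbus:stateSubscribeTopic=home/openHAB/fence/${item}/state
```

No loop is possible here, because neither instance subscribes to a topic it also publishes to.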


This works if you want to push all updates from one OH instance to the other, but I want to push some item updates in one direction and some in the other direction.

Therefore I’m now using the MQTT binding and have added outbound configuration strings to the doorbell item on the “fence Pi” and inbound strings on the other Pi.

// on the "fence Pi" (outbound):
Switch doorbell {mqtt=">[pi2:/openhab/doorbell:command:ON:ON],>[pi2:/openhab/doorbell:command:OFF:OFF]"}

// on the other Pi (inbound):
Switch doorbell {mqtt="<[pi2:/openhab/doorbell:command:ON:ON],<[pi2:/openhab/doorbell:command:OFF:OFF]"}

Regarding the PiCam outside: I would consider a camera designed for such an environment; I would expect it to have infrared for night vision and a housing that withstands the elements. Or perhaps go for an IP-based door phone…

@JjS: Your thoughts are good, but not really related to the topic discussed here: the MQTT communication setup between multiple instances of openHAB.

Has anybody solved the loop problem?
Might it work if we just send commands instead of states?

Hi Alberto,

I also thought about that, but I think it would lose the whole idea that way. Thinking from the point of view of the broker, I created multiple incoming queues and only one outgoing queue, so that all MQTT clients get the same states back. But I’m not yet in a halfway usable production environment, so I can’t say yet whether it works or not.


Here is one approach:

Hello Gentlemen,

I have my openHAB barely running in my workshop, but my plan is to build a system for my home as well later.
Since my workshop and my home are half a town away from each other, it is obvious I’ll need two separate systems running (alarms and remote gate openers are involved, etc.).
I wonder what the right configuration is to access both from my single smartphone (or one system from the other location), while giving only limited access to my workmates.

Watou’s method seems reliable, but does any of you have an idea how to figure this out?

Thank you in advance.

@Gorgo, I think a lot depends on all of your use cases and requirements as to what approach would be the best.

For example, given that you want to give some access to your workmates but not give unlimited access (presumably you would not want them to be able to access your house controls) I don’t think watou’s approach would be appropriate (without some additions) because openHAB really doesn’t have a mechanism for providing limited access to the sitemap based on user/password.

I’ve not done this myself so everything I’m about to say is notional and theoretical. I need to move to Theory, everything works there. :smiley:

Anyway, what I would do is apply @watou’s technique to get everything controllable from one central OH instance. Then create separate sitemaps, one for you and one for your workmates. Finally, and this is the special sauce, set up a reverse proxy (nginx, Apache, etc.) and implement your user authentication there. With this setup you can have separate authentication on two URLs, and the reverse proxy forwards the web requests to the appropriate OH sitemap. That way you should be able to limit the workmates to the restricted sitemap while you can access the master sitemap.
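As an untested sketch of the reverse-proxy idea in nginx (hostnames, ports, paths and location names are all placeholders, not a recommendation of specific values):

```
# nginx: two URLs with separate credentials, one openHAB instance behind both
server {
    listen 80;
    server_name oh.example.com;

    # workmates: their own credentials, forwarded to the limited sitemap
    location /limited/ {
        auth_basic "openHAB (workmates)";
        auth_basic_user_file /etc/nginx/htpasswd-workmates;
        proxy_pass http://127.0.0.1:8080/;
    }

    # owner: separate credentials, forwarded to the master sitemap
    location /master/ {
        auth_basic "openHAB (owner)";
        auth_basic_user_file /etc/nginx/htpasswd-owner;
        proxy_pass http://127.0.0.1:8080/;
    }
}
```

Note this only separates authentication per URL; since openHAB itself has no per-user restrictions, truly preventing the workmates from reaching the master sitemap would need additional location rules on the proxy.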

Well, I think the first and main issue is that I’d like to have a controller set up locally at each location.
Somehow I wouldn’t find it too convincing to have the controller at the very other corner of the town. For me it means that if my net connection failed, I wouldn’t be able to turn on the light in my kitchen… no way.
As I explained the other day (talking about kitchen lights), I want my system built up from standalone nodes (wall switch in the kitchen -> local node -> relay; as a bonus, the state is reported to the controller and remote controlling through it is possible), instead of giving the controller full control (wall switch -> node -> controller -> node -> relay).

All I want is to access the two systems.

Filtering access and security is the second step ahead.

Perhaps I should move towards Node-RED?

In watou’s solution you do not have to have a master/slave relationship with the controllers. A controller is set up for each location and I think watou centralized all of his rules to make maintenance of the rules easier. The controllers just share/duplicate their state and allow for them to interact with each other across a message bus.

Like I said, you don’t have to have one master controller. And even if you do, you can host a local sitemap on each instance and should the network between them go down you have a backup. But given your desire to keep your switches from being completely dependent on the controller it seems like this would be a non-issue. No controller, no problem, just use the local node relay directly.

I haven’t seen your other posts on this so I don’t yet know your reasoning or what risk you are trying to mitigate with your stand-alone nodes. But I will comment that many many devices follow this pattern. For example, all of my zwave devices can be controlled locally and it is only when I want to control them through OH (i.e. through the sitemap or through a rule) that the “controller” comes into the mix.

The limitation in OH’s phone apps is that they can only connect to one OH instance at a time, and there is no way to limit what users can see and do based on login. By setting up a reverse proxy you address both issues: you apply separate authentication on a per-URL basis and forward the web requests from each URL to the appropriate URL on your OH instances. But this itself introduces a single point of failure, as the reverse proxy needs to be hosted at one location.

Again, it depends on your use case but you might try something like the following:

At your workshop, set up the reverse proxy to do the authentication for your workmates and you. Also set it up so you can bring up your home sitemap when you are at the workshop. The reverse proxy’s URL should be configured as your local URL in the app. So when you are at the workshop you are accessing your OH instance there locally and are not dependent on the network between your workshop and your house, unless you want to get to your house’s HA, in which case the reverse proxy forwards you there.

At your house do the same only in reverse (i.e. the proxy forwards the traffic to the workshop instead of the house’s HA). Make sure the two URLs are the same on the two local networks.

Finally, pick one to be your main server for when you are not at either location and configure that URL as the Remote URL.

With this configuration when you are at a location you are only dependent on the network between locations when you try to remotely connect or when you are not at either location. The OH instances are also completely independent of each other. Finally, because you are using the same local URL you are able to access both locations from the same app without needing a Master Controller.

About two minutes of Google shows that you are likely to have the same problems with NodeRed.

You must be right, so I guess I’ve misunderstood you at some point.
I’ll do my research to find out whether this setup is operational or not.
Thank you for your help and explaining your idea.

It was in another forum, where members argued about whether or not to give the controller total control, regarding ESP8266/Arduino-based nodes.

I suppose both approaches are appropriate for some contexts (e.g. Hue Bulbs physically do not lend themselves to being controlled in any way except through the controller), but my personal HA philosophy is that if you have to resort to a User Interface it is an HA failure. And if you do have to resort to human interaction (e.g. triggering a garage door opener) it should be as easy or easier to trigger than the traditional non-automated way.

So, to continue the garage door opener, the automation better be as simple as pressing a button on the remote strapped to your sun visor or it is a regression. I handle this through some automation on my phone which senses when I approach the house and opens a dialog asking if I want to open the garage. BUT this doesn’t work for my wife’s iPhone so I’m still in search of a button. I’ll probably end up just replacing the ancient garage door so I can get working remotes again.

Given this, I only use my OH sitemap for debugging purposes or to have the option to set/control stuff when I’m not home. And any time we need to manually interact with something that is automated by the controller, we do so through the traditional means (e.g. flip a light switch).

So I would probably fall on the same side as you regarding whether everything should go through the controller or not.

Couldn’t agree more.

For my property’s gate I will keep using the same old RF remote transmitter, since it is still hanging on the keyfob, AND migrate its functionality to the controller through a node, a couple of relays, and reed switches to monitor its state.

A little late to this but I have connected two instances of OpenHab 1.8 using MQTT. Though I am only in early testing the results have been encouraging.

A little background. I use Insteon for lights and garage door control. For one reason or another I have an extremely difficult time getting Insteon commands into my garage. It is very intermittent. (Yes I messed with phase connections for days and all works except the garage.) Finally I picked up a hub and put it and a RPi in my garage. On the RPi I run my “slave” OpenHab and my original OH is on a PC in my den.

Each instance has the same set of items but the master does not have Insteon bindings for those controlled by the slave and vice-versa. I tried to omit the devices that the slave really doesn’t care about but I saw Java exceptions when I did that.
Only the master has any rules defined.

I connected them via MQTT. The slave in the garage only publishes states to MQTT and subscribes to commands and conversely the master only publishes commands to MQTT and subscribes to states so I end up with this in my openhab.cfg files:
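The config block itself did not survive, but based on this description and the topic prefixes visible in the mosquitto_sub one-liner further down (master/… and slave/slaveGarage/…), the split presumably looked something like the following (broker alias and exact topic names are assumptions):

```
# openhab.cfg on the master (den PC): publish commands, subscribe to states
mqtt-eventbus:broker=mosquitto
mqtt-eventbus:commandPublishTopic=master/${item}/command
mqtt-eventbus:stateSubscribeTopic=slave/slaveGarage/${item}/state

# openhab.cfg on the slave (garage RPi): publish states, subscribe to commands
mqtt-eventbus:broker=mosquitto
mqtt-eventbus:statePublishTopic=slave/slaveGarage/${item}/state
mqtt-eventbus:commandSubscribeTopic=master/${item}/command
```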




I also wrote a one line script that proved very helpful in debugging these issues. Basically it subscribes to the MQTT messages but additionally time stamps them and colors them. Here it is for what it’s worth:

mosquitto_sub -t master/\#  -t slave/slaveGarage/\# -v | sed -e "s/\(master.*\)/`tput setaf 4 & tput bold`\1`tput op`/g" -e "s/\(slave.*\)/`tput setaf 1`\1`tput op`/g" -e "s/^/$(date +%F\ %T) /"

This also addresses the question I’ve seen in a variety of places asking if it is possible to bridge Insteon over TCP/IP. It does just that. However, I currently only have a small number of devices. I don’t know how well it will scale should I install 100+ Insteon devices. Time will tell I guess.


I’m using a similar master/slave approach between a 1.8.x OH instance and an OH 2 instance. This way I can incrementally test and integrate OH2 extensions into my system.
