Connecting distributed openHAB instances

Hi,

I have a RasPi with openHAB 1.7 in the basement which controls the jalousies using the Weather binding and the Exec binding running a Python script.

Now I’m planning to install a 2nd Pi next to the fence to realize a doorbell and a camera so I can see who is outside the house.
For this I want to use openHAB with the GPIO binding and maybe some others, but I have to connect the two instances of openHAB.

From the wiki: openHAB Wiki

Nonetheless, as the OSGi EventAdmin service can also be used as a remote service, it is possible to connect several distributed openHAB instances via the Event Bus.

  1. Has anybody ever done this? I can’t find any documentation.
  2. Any advice on how to use the PiCam?

br
Daniel

I know I read somewhere that you could connect the Event Bus but could never find it again. However, as the rest of that paragraph says (emphasis mine):

It is important to note that openHAB is not meant to reside on (or near) actual hardware devices which would then have to remotely communicate with many other distributed openHAB instances. Instead, openHAB serves as an integration hub between such devices and as a mediator between different protocols that are spoken between these devices. In a typical installation there will therefore be usually just one instance of openHAB running on some central server.

I did some brief searching, and while it is theoretically possible to combine the event services, I can find no way to configure openHAB to actually do this out of the box. You may need to edit some code to make this happen. See this posting on the OSGi site for further information.

Because openHAB is not designed to work in this way, I believe most people install an MQTT broker (Mosquitto seems popular) and have the remote devices communicate with openHAB via MQTT, rather than trying to combine the Event Buses of two or more instances of openHAB. Another approach is to web-enable the remote devices and have openHAB access them through a REST API call using the HTTP binding.
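As a rough sketch of what that looks like in practice (broker alias, topics, IP addresses and item names below are placeholders, not taken from this thread): the remote device simply publishes to the broker, and the central openHAB instance binds items to those topics with the MQTT binding, or drives a web-enabled device with the HTTP binding (the REST URL here assumes something WebIOPi-like).

openhab.cfg on the central instance:

mqtt:mosquitto.url=tcp://192.168.1.10:1883

Items on the central instance:

Switch Remote_Doorbell "Doorbell at the fence" { mqtt="<[mosquitto:home/fence/doorbell:command:default]" }
Switch Remote_Relay "Relay at the fence" { http=">[ON:POST:http://192.168.1.20:8000/GPIO/17/value/1] >[OFF:POST:http://192.168.1.20:8000/GPIO/17/value/0]" }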

I personally have a hybrid approach where my sensors communicate via MQTT using a Python script and my actuators (in this case a couple of relays) are exposed via a REST API using WebIOPi. Eventually I plan on moving the WebIOPi stuff off and just using MQTT for everything, but I’ve not gotten around to that yet.
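On the sensor side, a publisher along these lines is all it takes. This is only a sketch, assuming the paho-mqtt and RPi.GPIO libraries; the pin, broker address and topic are made up and need to match your own setup.

import time
import RPi.GPIO as GPIO
import paho.mqtt.client as mqtt

BUTTON_PIN = 17                       # placeholder GPIO pin for the doorbell button
TOPIC = "home/fence/doorbell"         # must match the topic bound to the openHAB item

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

client = mqtt.Client("fence-pi")
client.connect("192.168.1.10", 1883)  # placeholder broker address
client.loop_start()

try:
    while True:
        # The button pulls the pin low when pressed: publish ON, wait for release, publish OFF.
        if GPIO.input(BUTTON_PIN) == GPIO.LOW:
            client.publish(TOPIC, "ON")
            while GPIO.input(BUTTON_PIN) == GPIO.LOW:
                time.sleep(0.05)
            client.publish(TOPIC, "OFF")
        time.sleep(0.05)
finally:
    GPIO.cleanup()

The topic it publishes to is the one the central openHAB item subscribes to, so no second openHAB instance is needed on the sensor Pi.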

I’ve never used a PiCam so I’ve no advice there.

Hi Rich,

thanks for the MQTT tip; the Event Bus binding is what I was looking for!

I must be tired. I looked for an event bus binding on the wiki and didn’t see it. Good luck.


@hollladiewaldfee You could try looking into Node-RED (http://nodered.org) for your distributed RasPi instances. It supports the GPIOs out of the box and can talk back to openHAB via MQTT.

Hi Daniel,

I’m interested in this possibility too and have made a basic MQTT config, but I’m not sure whether both instances are now actually connected. I would appreciate it if you could share more details on how you realized it.

Thanks
Stefan

Hi,

So far I have installed an MQTT broker and connected the event bus of both instances to it.
The openHAB configuration (items, sitemaps, rules, …) is the same on both instances.

This is the openhab.cfg:

MQTT general

mqtt:pi2.url=tcp://someIP:1883
mqtt:pi2.clientId=pi1   (on the 1st Pi)
mqtt:pi2.clientId=pi2   (on the 2nd Pi)

MQTT event bus:

Case 1:
I configured both instances with the same publish/subscribe topics, like so:

mqtt-eventbus:broker=pi2
mqtt-eventbus:commandPublishTopic=home/openHAB/pi1/${item}/command
mqtt-eventbus:commandSubscribeTopic=home/openHAB/pi2/${item}/command
mqtt-eventbus:statePublishTopic=home/openHAB/pi1/${item}/state
mqtt-eventbus:stateSubscribeTopic=home/openHAB/pi2/${item}/state

Both instances are updating the MQTT broker, which can be seen with:

mosquitto_sub -v -t 'home/openHAB/pi1/mySwitch/command'
home/openHAB/pi1/mySwitch/command OFF
home/openHAB/pi1/mySwitch/command ON
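A wildcard subscription is handy to watch everything both instances put onto that topic tree at once:

mosquitto_sub -v -t 'home/openHAB/#'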

However, the other instance is not updated via MQTT.

Case 2:
I configured the instances to publish/subscribe vice versa:

Pi1:

mqtt-eventbus:broker=pi2
mqtt-eventbus:commandPublishTopic=home/openHAB/pi1/${item}/command
mqtt-eventbus:commandSubscribeTopic=home/openHAB/pi2/${item}/command
mqtt-eventbus:statePublishTopic=home/openHAB/pi1/${item}/state
mqtt-eventbus:stateSubscribeTopic=home/openHAB/pi2/${item}/state

Pi2:

mqtt-eventbus:broker=pi2
mqtt-eventbus:commandPublishTopic=home/openHAB/pi2/${item}/command
mqtt-eventbus:commandSubscribeTopic=home/openHAB/pi1/${item}/command
mqtt-eventbus:statePublishTopic=home/openHAB/pi2/${item}/state
mqtt-eventbus:stateSubscribeTopic=home/openHAB/pi1/${item}/state

The result is that the first update on the bus ends up in a loop:
Pi1 updates one item, which is published to the MQTT broker, which sends it to Pi2, which sends it back to the broker, which sends it back to Pi1, and so on.

Does anybody see the problem?

I see the problem but I’m not sure I know the solution. The problem is that EVERYTHING gets published to the bus: when Pi1 (for example) reads in and processes an event from Pi2, it publishes that event right back to the bus, and around and around we go.

I can see four ways to potentially do this:

  1. Rather than trying to keep the states of Items in the two openHAB instances in sync (i.e. the same Items with the same states in both instances), keep them separate. For those Items that Pi1 needs from Pi2, name them differently so that when the events from the bus are processed they don’t get republished to the bus (see the sketch after this list).

  2. Add some custom logic to the system to either keep openHAB from republishing the events that came from the other Pi, or to filter the events so that the second time around they are not processed and the loop stops. I have a vague recollection that this might be possible through rules, but I suspect it would require changes to OH’s core code.

  3. Abandon the eventbus approach and implement the publish and subscribe of the items between the two Pis directly through Items using the MQTT binding.

  4. Abandon one of the OH instances and implement the sensor reporting and actuator work through some other means (e.g. Python, Node-Red, etc).
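To make option 1 concrete, here is a minimal sketch (broker alias and topic tree are placeholders): let the fence Pi only publish and the house Pi only subscribe, and give the fence items names (e.g. Fence_Doorbell) that the house Pi only uses as receive-only counterparts, never for anything it publishes itself.

Fence Pi, openhab.cfg:

mqtt-eventbus:broker=pi2
mqtt-eventbus:commandPublishTopic=home/openHAB/fence/${item}/command
mqtt-eventbus:statePublishTopic=home/openHAB/fence/${item}/state

House Pi, openhab.cfg:

mqtt-eventbus:broker=pi2
mqtt-eventbus:commandSubscribeTopic=home/openHAB/fence/${item}/command
mqtt-eventbus:stateSubscribeTopic=home/openHAB/fence/${item}/state

Because the house Pi never publishes anything back onto that topic tree, there is nothing for the loop to feed on.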

This is not working either. I removed all items, sitemaps, scripts and rules from Pi2 and configured the event bus on both Pis to publish/subscribe to the same topics. As soon as any item on Pi1 receives a command, the loop starts again.

The next approach was to publish all eventbus messages from the “fence Pi” to one MQTT topic and subscribe the other Pi to it.

mqtt-eventbus:commandPublishTopic=home/openHAB/${item}/command
mqtt-eventbus:commandSubscribeTopic=home/openHAB/${item}/command

This works if you want to push all updates from one OH to the other, but I want to push some item updates in one direction and some in the other direction.

Therefore I’m now using the MQTT binding and added an outbound binding string to the doorbell item on the “fence Pi” and an inbound binding string on the other Pi.

On the “fence Pi”:

Switch doorbell {mqtt=">[pi2:/openhab/doorbell:command:ON:ON],>[pi2:/openhab/doorbell:command:OFF:OFF]"}

On the other Pi:

Switch doorbell {mqtt="<[pi2:/openhab/doorbell:command:ON:ON],<[pi2:/openhab/doorbell:command:OFF:OFF]"}
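On the receiving Pi, a plain rule can then react to the incoming command; Hall_Light here is just a made-up example item:

rule "Doorbell pressed"
when
    Item doorbell received command ON
then
    // hypothetical reaction: turn on a light, send a notification, etc.
    sendCommand(Hall_Light, ON)
end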

Regarding a PiCam outdoors: I would consider a camera that was designed for such an environment; I would expect it to have infrared for night vision and a housing that withstands the elements. Or perhaps go for an IP-based door phone…

@JjS: Your thoughts are good but not really related to the discussed topic of setting up MQTT communication between multiple instances of openHAB.

Has anyone solved the loop problem?
Might it work if we just send commands instead of states?

Hi Alberto,

I also thought about that, but I think that would lose the whole idea. Thinking from the broker’s point of view, I created multiple incoming queues and only one outgoing queue, so that all MQTT clients get the same states back. But I’m not yet in a halfway usable production environment, and therefore I can’t say whether it works or not.

Regards
Stefan

Here is one approach:

Hello Gentlemen,

I already have openHAB running in my workshop, but my plan is to build a system for my home as well later.
Since my workshop and my home are half a town away from each other, it is obvious I’ll need two separate systems running (alarms and remote gate openers are involved, etc.).
I wonder what the right configuration is to access both from my single smartphone (or one system from the other location), while also giving limited access to my workmates.

watou’s method seems reliable, but do any of you have an idea how to figure this out?

Thank you in advance.

@Gorgo, I think a lot depends on all of your use cases and requirements as to what approach would be the best.

For example, given that you want to give some access to your workmates but not give unlimited access (presumably you would not want them to be able to access your house controls) I don’t think watou’s approach would be appropriate (without some additions) because openHAB really doesn’t have a mechanism for providing limited access to the sitemap based on user/password.

I’ve not done this myself so everything I’m about to say is notional and theoretical. I need to move to Theory, everything works there. :smiley:

Anyway, what I would do is apply @watou’s technique to get everything controllable from one central OH instance. Then create separate sitemaps, one for you and one for your workmates. Finally, and this is the special sauce, you need to set up a reverse proxy (nginx, Apache, etc.) and implement your user authentication there. With this setup you can have separate authentication on two URLs, and the reverse proxy forwards the web requests to the appropriate OH sitemap. With this you should be able to limit the workmates to only the limited sitemap while you can access the master sitemap.
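A rough nginx sketch of that idea (host names, port and htpasswd paths are invented, and this only puts authentication in front of openHAB; keeping a logged-in workmate away from the other sitemap would still need location rules matching the sitemap URLs you want to expose):

server {
    listen 80;
    server_name home.example.org;
    location / {
        auth_basic "openHAB";
        auth_basic_user_file /etc/nginx/htpasswd-owner;
        proxy_pass http://127.0.0.1:8080;
    }
}

server {
    listen 80;
    server_name workmates.example.org;
    location / {
        auth_basic "openHAB workshop";
        auth_basic_user_file /etc/nginx/htpasswd-workmates;
        proxy_pass http://127.0.0.1:8080;
    }
}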

Well, I think the first and main issue is that I’d like to have a controller set up locally for each location.
Somehow I wouldn’t find it too convincing to have the controller at the very other corner of the town. For me it means that if my net connection failed, I wouldn’t be able to turn on the light in my kitchen… no way.
As I explained the other day (talking about kitchen lights), I want to have my system built up from standalone nodes (wall switch in the kitchen - local node - relay; bonus: state reported to the controller and possible remote control through it), instead of giving the controller full control (like wall switch - node - controller - node - relay).

All I want is to access the two systems.

Filtering access and security is the second step ahead.

Perhaps I should move towards nodered?

In watou’s solution you do not have to have a master/slave relationship between the controllers. A controller is set up for each location, and I think watou centralized all of his rules to make maintenance of the rules easier. The controllers just share/duplicate their state, which allows them to interact with each other across a message bus.

Like I said, you don’t have to have one master controller. And even if you do, you can host a local sitemap on each instance and should the network between them go down you have a backup. But given your desire to keep your switches from being completely dependent on the controller it seems like this would be a non-issue. No controller, no problem, just use the local node relay directly.

I haven’t seen your other posts on this so I don’t yet know your reasoning or what risk you are trying to mitigate with your stand-alone nodes. But I will comment that many many devices follow this pattern. For example, all of my zwave devices can be controlled locally and it is only when I want to control them through OH (i.e. through the sitemap or through a rule) that the “controller” comes into the mix.

The limitation in OH’s phone apps is that they can only connect to one OH instance at a time, and there is no way to limit what users can see and do based on login. So by setting up a reverse proxy you are addressing both issues: applying separate authentication on a per-URL basis and forwarding the web access from one URL to the appropriate URL on your OH instances. But this itself introduces a single point of failure, as the reverse proxy needs to be hosted at one location.

Again, it depends on your use case but you might try something like the following:

At your workshop, set up the reverse proxy to do the authentication for your workmates and you. Also set it up so you can bring up your home sitemap when you are at the workshop. The reverse proxy’s URL should be configured as your local URL in the app. So when you are at the workshop you are accessing your OH instance there locally and are not dependent on the network between your workshop and your house, unless you want to get to your house’s HA, which the reverse proxy will forward you to.

At your house do the same only in reverse (i.e. the proxy forwards the traffic to the workshop instead of the house’s HA). Make sure the two URLs are the same on the two local networks.

Finally, pick one to be your main server for when you are not at either location and configure that URL as the Remote URL.

With this configuration when you are at a location you are only dependent on the network between locations when you try to remotely connect or when you are not at either location. The OH instances are also completely independent of each other. Finally, because you are using the same local URL you are able to access both locations from the same app without needing a Master Controller.

About two minutes of Googling shows that you are likely to have the same problems with Node-RED.

You must be right, so I guess I’ve misunderstood you at some point.
I’ll do my research to find out whether this setup is operational or not.
Thank you for your help and explaining your idea.

It was in another forum, where members argued about whether or not to give the controller total control, mentioning ESP8266/Arduino-based nodes.