Multi-homed openHAB

Hi there,

First of all I would like to thank the entire group of highly dedicated creators and supporters of this fascinating software! I never thought it would get a grip on me that fast, but the endless possibilities …

Now my question — although I am beyond the "first hour" this section is meant for, I didn't find a more appropriate place to post it:
I have two homes to automate, my regular house and a cabin. What are the best options to have all automation in one UI?
So far I have openHAB on a raspi 3 in my house, and a few sensors connected to another raspi 3 running pilight (my 1st experiment with HA) in my cabin. This binding basically works better than expected, but I feel sooner or later it will be difficult to have the same type of control over everything connected indirectly through pilight.
Any opinions?
An ‘openHAB Binding’ that provides access to all things of another instance could solve this … :wink:
For just DS1820 sensors it should be possible to share /sys/bus/w1/devices via SMB and let openHAB read it directly, but this would not work for other things I guess.

BTW, there is one problem with the pilight binding. Both locations are VNCed together. As usual in Germany, both DSL routers are disconnected once a day. Every time this happens, I have to restart openhab2.service, as the binding seems to lock up. Basic UI continues to work, but all sensor readings from pilight are frozen.


Hi Schwabix,

I’m also a beginner with openHAB, but I had some ideas while reading your post and will try my best to share them:

  1. As you are using Fritzboxes, you might be able to bring those together in a Fritzbox federation using a VPN.

  2. As I haven’t worked with pilight so far, I can only think of a workaround, which you might already use: you can schedule when the Fritzbox should do the disconnect. Using the Fritzbox TR-064 binding you can detect when you got reconnected and then issue a CLI command to restart the service. I know that’s not a very nice solution, but at least you wouldn’t have to touch your systems…
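The restart idea above could be wired up as a rule; here is a rough sketch in Rules DSL, assuming an Item (hypothetically named FritzExternalIP) linked to the TR-064 binding’s external IP channel, and that the openHAB user may run this command via passwordless sudo:

```
rule "Restart openHAB after DSL reconnect"
when
    Item FritzExternalIP changed
then
    // Hypothetical workaround: a new external IP means the daily
    // disconnect happened, so restart the service to unfreeze pilight
    executeCommandLine("sudo systemctl restart openhab2.service")
end
```

Restarting the whole service from inside itself is heavy-handed; restarting just the pilight bundle from the Karaf console would be gentler, if that works for you.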


There are a number of ways to do this, but in general they fall into two overall approaches:

  1. Somehow give one openHAB instance access to all the devices in both locations.

  2. Run two openHAB instances and federate the two using MQTT 2.5 Event Bus.

In both cases you need to provide some way for the two locations to reach each other over the Internet. My recommendation would be a VPN. I assume that is what you are already doing and “VNCed” is a typo.

The first approach works best for cloud services. If you have local-only control devices like Z-Wave or Zigbee, you can use something like the “Share Z-Wave dongle over IP (USB over IP using ser2net/socat)” guide to make the USB device on the remote machine appear as if it’s plugged into the local machine. Some people have had great luck with this approach; others have problems.
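The ser2net/socat guide boils down to something like the following sketch; the device path, port number and the host address 192.168.2.10 are placeholders for illustration:

```shell
# On the remote Pi that has the Z-Wave stick plugged in, /etc/ser2net.conf:
#   3333:raw:0:/dev/ttyACM0:115200 8DATABITS NONE 1STOPBIT

# On the main openHAB machine (inside the VPN), create a virtual serial
# port that tunnels to the remote stick; point the Z-Wave binding at it:
socat pty,link=/dev/ttyUSB99,raw,group=dialout,mode=660 tcp:192.168.2.10:3333
```

In practice you would run the socat command under a supervisor (e.g. a systemd unit) so the tunnel is re-established after the daily DSL reconnect.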

The second approach has you install an MQTT broker. You then keep a copy of all the Items defined on your remote openHAB on your “main” openHAB instance. With the event bus, an update and/or command on the duplicated Item gets sent to the remote OH’s version of that Item, and likewise in the other direction.
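To give a flavor of what such a duplicated Item looks like, a proxy Switch on the main instance might be bound to MQTT roughly like this with the MQTT 2.x binding; the broker address and topic names here are placeholders, not part of the event bus tutorial itself:

```
// main-instance .things file
Bridge mqtt:broker:cabin "Cabin Broker" [ host="10.8.0.2", secure=false ] {
    Thing topic fooProxy "Remote Foo" {
        Channels:
            Type switch : state [ stateTopic="cabin/out/Foo/state",
                                  commandTopic="cabin/in/Foo/command" ]
    }
}

// main-instance .items file
Switch Foo "Cabin Foo" { channel="mqtt:topic:cabin:fooProxy:state" }
```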

Thank you for this fast response.

Understood, #1 requires remote access to the I/O channel, like GPIO or even direct WiFi (tunneled through VPN). Good to know local RF technologies like ZigBee, Z-Wave or FS20 fail here.

#2 adds some extra complexity, although I’ve heard MQTT is not too hard.

Any idea about the frozen pilight binding? (Should I open a different thread for this?)

I have multiple locations running multiple OH and have used MQTT to do federation. It works, but the configuration is a fair amount of work and testing – it is relatively easy to get yourself into a “race” condition when a physical device has multiple logical controllers, whether they are proxy control interfaces running on the same OH instance or MQTT-connected OH instances.

If you only have two locations, and what you want is a UI that lets you get to both easily but you are not doing cross-location automation, there is a pretty simple trick you can do using the openHAB iPhone app.

  • Set up myopenhab.org on your primary home, exporting whatever you want in the configuration
  • In the openHAB iPhone app, set the local URL to the address of the CABIN’s OH instance
  • In the openHAB iPhone app, set the remote URL to the myopenhab.org address of your primary HOME.

If you do this, when you are at the CABIN the iPhone app will default to the cabin configuration. When you are not at the CABIN (i.e. at home or anywhere else), the iPhone app will fall through to the remote myopenhab.org configuration pointing at your home.

I do this as a handy way to have the openHAB app pointed at my immediately local instance when I go back and forth between home and my in-laws’. With my phone’s Wi-Fi on, it automatically talks to my in-laws’ OH instance when I am there. When at my in-laws’, if I want to take a peek at my home, all I have to do is turn off Wi-Fi on my phone for a second and the iPhone app is talking to my home OH instance. If I am anyplace other than my in-laws’, it falls through to my home.

I use MQTT extensively for federating metrics but unless you need cross-OH automation the headache of debugging federated control over MQTT event bus is not worth it (obligatory IMHO goes here).

ADDENDUM: if both home and cabin routers support a named instance in the URL, or you have the same NAT IP subnetting at both locations, this can be even simpler, because your local openHAB iPhone URL can be the same across locations. You would then be talking to your home openHAB instance when you are at home, and myopenhab.org would only be involved when you are remote from your home. It would be mildly faster when you are at home, but either way myopenhab.org is generally fast enough for human-speed actions.


Neat trick to use the backup URL … however …

My original intent for HA is to have remote access to the cabin’s heating sensor and switches. Started with the somewhat limited pilight, then found OH, and got lots of ideas what could be added, from presence activated WiFi speakers to fill-level monitoring of my pellets heating.

So far the VPN is between two otherwise independent subnets. It was always on my to-do list to dive into the pros and cons of one shared name server, static routing, and whatever else might be involved. During setup of the VPN the Fritzbox insisted on a separate subnet. I guess that’s easier for the average user, as otherwise you would have to deal with the default name server handed out by your DHCP server, or go to static configs…
Sounds like I’ll postpone this until I outgrow the capabilities of the pilight binding.

If you use an Android phone, you can install the release and beta versions of the Android app and point them at different myopenhab accounts, so that you can monitor both instances at all times. I’m not sure if you can do this with iOS.



I think you might be misunderstanding Bob’s message.

Are you looking to create automation which depends on sensors in both locations, or would like to combine the data into a unified UI? Or is having UI access to both locations independently sufficient?

The first scenario is relatively complex. The others much less so. Make sure you have identified your problem.

If you are relatively new to Openhab and/or home automation, taking on true multi-home automation is a huge step.

Hi Craig,

I don’t see where a misunderstanding might have been. From Bob’s suggestions, MQTT sounded non-trivial, and the fallback URL is a neat trick that does not help me.

Although I have no dependencies between Items of both locations (yet?), a combined UI would be ideal, to allow charts showing data from both homes. If I only use Things accessible through the network (i.e. no RF), I could get along with only one OH instance.

I still think it might be an interesting project (and possibly useful for some) to create an openHAB binding that mirrors Items from one instance into another. Maybe it is too challenging at my level of experience, but from scanning the documentation it might not be too far off, if you combine a binding with the OH REST API.
This openHAB Remote Binding would expose the Things and Items it gets reported through the REST API of the remote instance as if they were local. State queries and even subscriptions could be used to keep state in sync. All state changes from local UIs would have to be communicated back through the REST API to the remote instance.
Is my idea simplifying too much?
[Ok, this is no longer a Thread for the Beginner section :wink:]
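As a first step toward that binding idea, the remote instance’s server-sent event stream can be decoded with a few lines of Python. The sketch below assumes the openHAB 2.x event format (topics like smarthome/items/&lt;name&gt;/state, with the payload as JSON-in-a-string); the function name is mine, not an openHAB API:

```python
import json

def parse_item_state_event(raw):
    """Extract (item_name, state) from an openHAB 2.x SSE item event,
    or return None for event types we don't mirror."""
    event = json.loads(raw)
    if event.get("type") not in ("ItemStateEvent", "ItemStateChangedEvent"):
        return None
    item_name = event["topic"].split("/")[2]       # smarthome/items/<name>/...
    state = json.loads(event["payload"])["value"]  # payload is JSON-in-a-string
    return item_name, state

raw = ('{"topic": "smarthome/items/CabinTemp/state", '
       '"payload": "{\\"type\\": \\"Decimal\\", \\"value\\": \\"21.5\\"}", '
       '"type": "ItemStateEvent"}')
print(parse_item_state_event(raw))  # -> ('CabinTemp', '21.5')
```

Each parsed pair would then be pushed into the local instance as an Item update, and local commands would go the other way via the remote REST API.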

For my 2c

I think you need to decide on a few key points.

  • Exactly what data do you need from the remote location

  • Do you want a 24/7 live 2 way connection between the two properties, with 1 UI seeing everything

  • Would on-change only messages be enough?

My reasoning being: if I monitor all the bus data here, I realise that there is a lot of redundant data that is important to the protocol but unused in a UI.

If I were to approach this, I would consider filtering the statuses/data that I want to see remotely and just proactively send that to the main site.

Then any changes I wanted to make at the remote location would be sent on demand.

Either via a lightweight TCP packet over a VPN, or some more secure method over an open pair of ports.
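The “on-change only” idea above is cheap to implement; here is a minimal sketch (my own, not from any binding) of a change filter that decides which readings are worth forwarding:

```python
def changes_only(readings, last_sent=None):
    """Yield (item, value) pairs only when the value differs from what
    was last forwarded, suppressing redundant bus traffic."""
    last_sent = {} if last_sent is None else last_sent
    for item, value in readings:
        if last_sent.get(item) != value:
            last_sent[item] = value
            yield item, value

stream = [("Temp", 21.5), ("Temp", 21.5), ("Temp", 21.6), ("Door", "OPEN")]
print(list(changes_only(stream)))
# -> [('Temp', 21.5), ('Temp', 21.6), ('Door', 'OPEN')]
```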


Did you know that Node-RED can connect to multiple openHAB 2 instances?

Also, the charting and control in the Node-RED Dashboard is rather good.

There are many ways to skin a cat, it just depends on the tools at hand.


You are correct, my characterization of it as a misunderstanding was presumptive. I think I just saw warning bells.

I think at the end of the day I just wanted to stress that having the locations synced up is a complex scenario.

There are options, such as a cloud-accessible InfluxDB along with Grafana, which could provide you unified dashboards. It just depends a lot on what you want to do.

I actually have multiple locations with openHAB installs myself, and have found no real need to connect them. So maybe that is biasing my suggestion to be wary of attempting to connect them.

Good luck, and enjoy the journey.

I wrote the Event Bus rules and configuration above specifically to avoid this sort of problem. As long as you use the Rules as written, you will not encounter a loop; all you have to do is copy the code over and add the Items to the right Groups. Indeed, if you “roll your own” MQTT federation you need to pay very special attention to avoiding loops, but I’ve already done that for you in the link above.

MQTT really isn’t that hard and you would be reusing code that has already taken the concerns Bob has into account. In short, it’s not a problem you would have to solve or be worried about in a typical setup (i.e. you have proxy Items representing the remote OH that are not linked to anything in your main OH instance).

I believe the intended approach for OH 3 will be to use the MQTT Event Bus posted above. That post was used as part of the justification for dropping support for OH 1.x bindings in OH 3, at least. I have it on my todo list to convert it to a Rule Template so you can just import it, add stuff to the right Groups, and be done, but I’m waiting for OH 3 to firm up so I know the best way to do it.

It is true, it takes a good deal of work if you want to set this up manually. But I wrote the Event Bus code above so that you don’t have to do that work. All you have to do is “install the Rules”, set up the Channels, and add the Items to the right Groups.

No. Except for the REST API part, this is exactly what the MQTT Event Bus above does. The advantage of using MQTT instead of the REST API is that it provides some additional features that the REST API can’t:

  • very clean reconnection when networking is lost between the two instances
  • notification when the remote OH instance goes offline via the LWT message
  • uses retained messages for updates, so if the main OH goes offline, it gets the latest states of all the Items from the remote OH by default
  • you can pick and choose which Items get exposed on the event bus; it’s not all or nothing
  • you can pick and choose what sorts of events to put on the event bus (e.g. maybe you only want updates from the remote OH instance and no commands).

If you have the VPN set up, they don’t need to be cloud accessible. There are lots of options.

I have not used Rich’s new MQTT bus rules — I did try using the unified MQTT bus configuration from earlier versions a year or so back and found it essentially unworkable.

It was in that light that I realized that I was hard-pressed to think of a use case where I needed near real-time event-driven automation across OH instances. I could maybe think of a few “lazy-time” automation cases, but mostly what I cared about was unifying measurements rather than a unified event bus. So all my instances propagate, via MQTT queues, the current counts of ON/OFF across various proxy interfaces (I use ten or so proxy I/Fs per device) to a unified measurement server. (I did a lot of operant psych lab research in grad school, so I am a little obsessed with the signal paths that get taken in switching a device – YMMV.)


Having helped you some with your Rules in the past, I think I can safely say that your overall approach is somewhat unique. I can’t comment on whether my event bus would work for you but I have pretty thoroughly tested it for the more typical way most of us create our configurations.

For an example, let’s say we have a Switch named Foo on the remote OH instance that we want to see the status of and control from a local OH.

First of all, it’s important to understand that the key assumption is that the remote OH is autonomous. The Foo on the local OH instance is merely a proxy for an Item on the remote OH; the “real” action takes place on the remote OH instance. Therefore, we do not want to process commands on local Foo that originate from the remote OH instance. Similarly, we don’t want to send updates on local Foo to the remote OH. This is what avoids the infinite loop (I may need to make this clearer in the Event Bus tutorial posting).

Remote OH:

  • publishes all updates to the Switch Item to a well defined (and configurable) topic as a retained message: remote/out/Foo/state
  • subscribes for commands on that Item at remote/in/Foo/command

(NOTE: the subscriptions are done using a Channel trigger on the MQTT Broker Thing with a wildcard subscription to remote/in/#; messages trigger a Rule, and the Item name is parsed out of the topic and the Item updated or commanded based on the message.)

If you are using the Rules I wrote, to configure an Item to be shared this way simply add that Item to the right Group on the remote OH. Depending on the Group membership, you can set this up to be one-way (e.g. the remote only publishes) and to handle only updates or only events.

Local OH:

  • publishes all commands to Foo to remote/in/Foo/command
  • subscribes to updates to Foo from remote/out/Foo/state
  • has a proxy Item named Foo of the same type as Foo on the remote OH

Again, add Foo to the right Group on the local OH. It’s a good idea to configure local Foo with autoupdate=false as well, though it’s not required. That way, when you send local Foo a command, it will not change state until the remote OH processes the command and publishes its update.

So if Foo on the local OH receives a command ON:

  1. local Foo receives command ON
  2. “ON” is published to remote/in/Foo/command
  3. remote OH receives ON message and triggers event bus Rule
  4. event bus Rule parses “Foo” from the topic and calls sendCommand(“Foo”, “ON”)
  5. autoupdate, or the device connected to Foo updates the Item to ON
  6. the event bus Rule triggers from the update and publishes ON to remote/out/Foo/state
  7. local OH gets the message and triggers the event bus Rule
  8. event bus Rule parses “Foo” from the topic and calls postUpdate(“Foo”, “ON”)

Thus there are no loops and the local OH Item can control and remain in sync with the remote OH instance. Configuring which Items are published and subscribed to is all controlled by Group membership.
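The topic-parsing step in the flow above (steps 4 and 8) amounts to pulling the Item name and event kind out of the wildcard-matched topic. In Jython/Python terms it is roughly the following; the function name is mine, and the topic layout is the one described above:

```python
def item_from_topic(topic):
    """Given 'remote/in/Foo/command' or 'remote/out/Foo/state',
    return the Item name ('Foo') and the event kind."""
    parts = topic.split("/")
    return parts[2], parts[3]

print(item_from_topic("remote/in/Foo/command"))  # -> ('Foo', 'command')
print(item_from_topic("remote/out/Foo/state"))   # -> ('Foo', 'state')
```

In the real Rule, the returned name would then be fed to sendCommand or postUpdate as appropriate.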

The lack of an easy way to control which Items publish and subscribe to which updates and events was, IMHO, the big reason why the event bus configuration with the MQTT 1 binding was unworkable and easily resulted in infinite loops.

If, for some reason, you do want to process commands from the remote OH on the local OH, or you want to receive updates from the local OH on the remote OH instance, then you will need to modify the event bus Rules to not update or command an Item that is already in a state matching the received message. That too should avoid infinite loops, though it won’t work for all Item types, as there are some commands that do not translate into an Item’s state.


I could suggest that the openHAB developers look at the ioBroker multihost implementation.
It is as if a “slave” openHAB instance could send Item changes directly to the main openHAB instance.

The advantages of it:
You have only one GUI for both instances, one set of logic, one historian module, and only a small piece of code on the slave to get the data from GPIOs/serial ports and send it to the master.

Of course the network connection still must exist. :slight_smile:

That covers only one of the use cases for why someone may want multiple OH instances. The more common case is where one has two or more physically separated homes (in my case the remote OH is 100 miles away, autonomously controlling my dad’s house). These remote instances are themselves full installations of OH, intended to control that home. But for monitoring and/or interacting with the house from afar, some or all of the sensors and actuators need to be made available on the local OH. So having two UIs, two sets of Rules, etc. is a requirement, not a complication to try to eliminate.

For your case I would really use the MQTT connection.
But believe me, there are many use cases where you must place the RasPi on the roof, because of physical proximity to the measured instance, while the main server is in the basement.

I agree that is a use case too. But the MQTT Event Bus supports both use cases; the ioBroker approach only supports the one.

For your case, ioBroker has MQTT broker and MQTT client modules. You can activate, via the GUI on the client side, the items that should be sent to the broker, and consume them on the broker side.

Not sure about “unique” actually.

Canonical case for multiple proxies is a switch (i.e. ON/OFF), where you have the physical switch itself, OH primary device for that switch, and various proxies that are called for (1) ON due to motion detection (2) ON due to timer action (3) ON due to cron action (time-of-day etc), (4) ON due to user switching via iPhone app, (5) ON due to switching from myopenhab, (6) ON as a conditional downstream from some other RULE/SCRIPT action (7) ON due to a Hey Google interception, (8) ON due to an Alexa intercept…etc etc. By looking at the distribution of ONs across the defined proxies vs the total ON operations (including physically manipulating the switch) I can make a determination of how saturated the automation is (or is not) on the particular device.

Physical manipulation, and proxies 4,5,7,8 are all user-initiated…the others (1,2,3,6) are embedded automation where other sensor combinations are handling the automation activity. A pattern of manual user-initiated ONs indicates an OPPORTUNITY for potential further embedded automation. Conversely, a predominance of embedded ONs is more indicative of a more “baked” set of use-cases. (Embedded ONs quickly followed by user-initiated OFFs is overfitted automation.)