Multiple Openhab servers and MQTT v2

I am planning a new smart-home installation. I am aiming at having one OpenHab 2 server (“agent”) running in a specific subnet (DMZ), integrating through X number of bindings. Then, through an embedded MQTT broker, expose all items and values (both “read” and “write”). This broker will be “hardened” and made available to a server on a different, more secure subnet.

Then, from the separate subnet, connect a Mosquitto broker using its “bridge” feature to pick up all events and/or publish commands to the agent running in the DMZ through its exposed MQTT broker.

This pattern will be repeated a couple of times (into different “DMZs”) to enforce strict separation and proper security around what is worth protecting, forming a “secure” core MQTT broker with all items available from all “agents”.

So… Is this even possible? I was hoping that the new MQTT binding would make it possible without setting up each and every item, or otherwise be bogged down in hours of tedious work with hundreds of channels…

I know this question has been discussed before, but I cannot see any recent updates bringing specific instructions on how this is actually set up, especially concerning recent developments in the MQTT binding. So here goes…

Definitely doable. It has also been done by yours truly in the shared setup between my home and our Fablab.

This really sounds like something you want to configure textually, via things / items files. Maybe even write a few scripts so you can auto-create the appropriate items for each location / DMZ segment. This is basically replacing auto-discovery with an external setup.
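The auto-generation idea can be sketched like this. It is only an illustration: the device names, the item naming scheme and the channel UID pattern (`mqtt:topic:myBroker:...`) are all invented, and you would substitute whatever convention your segments actually use.

```python
# Sketch: auto-create the .items lines for one DMZ segment.
# Device names and the channel UID pattern are made up for illustration.
devices = ["Chromecast_Livingroom", "Sonos_Kitchen"]

lines = [
    f'String {name}_State "{name}" '
    f'{{ channel="mqtt:topic:myBroker:{name}:state" }}'
    for name in devices
]

# Print the generated items file content; redirect into
# a .items file per segment.
print("\n".join(lines))
```

Run once per segment with that segment's device list, and you avoid hand-writing hundreds of item definitions.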

You are planning a pretty complex network setup, so I assume that you also have the proper Unix / shell / scripting knowledge to do the above. If you go this way, maybe even think about creating a full-blown docker image for each DMZ segment (which includes the items / things for that segment) as I am pretty sure you will have to reconfigure / restart each installation many many times until everything is just right.


I think I’m a little confused. I’d put the MQTT servers in the DMZ and isolate and protect the OH server as much as possible (i.e. put it on a more secure network instead of an exposed DMZ network). OH is where the mischief can take place, and sadly it isn’t all that secure on its own, so I’d want to protect it instead of exposing it in a DMZ.

I don’t know what you are trying to do with this setup specifically so I could be off base.

I’d put an MQTT broker and perhaps an instance of openHAB Cloud or a reverse proxy in the DMZ, hardened and configured with ACLs and such of course. I’d put OH in a more protected and isolated network behind the DMZ and only allow specific connections to/from the DMZ.

But like I said, I don’t know why you are looking for this approach so there may be something else driving the approach.

There is theoretical support for this in the MQTT 2 binding. See https://www.openhab.org/addons/bindings/mqtt.generic/#synchronise-two-instances. The MQTT 1.x binding has an event bus binding that will allow you to federate multiple instances of OH.


I second @rlkoshak, not sure I understand what you are doing. His proposal is where I would head should I want such a setup.

Thank you guys, and I find the functionality @rlkoshak points at interesting. I will look into that.

Just to clarify; the reason for doing it this way is security. I may have rushed out a description that is a bit fuzzy; let me clarify.

The less privileged network
The thing is that I wish to have an “agent” running in a less privileged network. The network is a client net consisting of “edge” devices such as laptops, phones and things that need to broadcast/multicast to devices like Chromecasts and Sonos. The reason is usability and availability through existing apps. The “agent” is a hardened OpenHab instance that has just a few relevant integrations towards Chromecast and Sonos, nothing more. By running an instance of an MQTT broker on that very network (as part of the “agent”), no ports will be opened into a more privileged network, but a well-understood service will be made available for more privileged services to easily consume.

One can discuss whether the hardening of agents on this network is necessary, as they integrate with services that are already open. There are no secrets, and the only reason would be to limit the number of attack vectors. There is no good security on this network anyway. It’s a “DMZ”. Sort of.

The cloud network
This is another network, but with client isolation and internet access only. This is where I have my cloud-only devices; they don’t see each other. Examples are cloud-connected heaters, ovens, washing machines etc. As a security and integration “benefit” I run an “agent” on this network as well, integrating with the mentioned devices’ cloud services. The reason is that I have no grounds for trust towards the producers or the integration providers of OpenHab, so this network acts as a “DMZ” as well, exposing only an MQTT server as described for the less privileged network above. Why not have these on the less privileged network?

1. They need secrets, and I don’t want secrets on the less privileged network.
2. I don’t want them to be able to contact or do anything harmful if they are compromised. In here they are isolated from each other and everything else.
3. I don’t need discovery for these devices through broadcast or multicast; it is done via the cloud and custom services/apps.

The more privileged network
This network contains an MQTT server that has privileges to bridge against the MQTT servers on the two less privileged networks as described above. An OpenHab server runs here as well, picking up integrations from the agents through MQTT in addition to doing integrations itself (like Z-wave or Ikea or custom-built devices).

We will end up with a protected central OpenHab installation that is unreachable from less safe areas, and that is secure enough to integrate against locks, alarms and other more security-sensitive devices. This and only this instance will contain the scripts, automations etc. that really are worth protecting.

What about the GUI and controls from a user’s perspective? This is made available through my own secured API, with proper authentication, firewall rules etc. Another story :slight_smile: I hope it makes a bit more sense now? Thanks again!

@rlkoshak Btw, can you help me finish https://www.openhab.org/addons/bindings/mqtt.generic/#synchronise-two-instances?

There is still

rule "Publish all"
when
    Channel "mqtt:broker:myUnsecureBroker:myTriggerChannel" triggered
then
    // TODO:
    // 1. Get the value of the trigger channel
    // 2. Split it by the "#" character into data[0], data[1]
    // 3. Update (command) the item named in data[0] with the value in data[1]
end
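For context, the trigger channel that rule listens on is defined on the broker Thing in a .things file. Roughly like this, from my memory of the binding docs (the host, topic and names are placeholders):

```
Thing mqtt:broker:myUnsecureBroker [ host="192.168.0.42", secure=false ]
{
    Channels:
        Type publishTrigger : myTriggerChannel [ stateTopic="allItems/#", separator="#" ]
}
```

The `separator` is what produces the `name#value` payload the rule is supposed to split.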

I don’t see the distinction. In order to make available a well-understood service to the more privileged network you need to open ports for that service. Unless you are using a next gen firewall or something that is more protocol aware.

If the services in the cloud network can only integrate through cloud services (i.e. OH cannot talk to the devices directly, only through a cloud service) then it really doesn’t matter where you put the OH, so long as the OH has access to the internet. I wouldn’t put these devices on the instance of OH in the DMZ either, but there is no security benefit, and there is significant extra complexity and work, in hosting a separate OH in this network. If security is the only thing driving putting an OH here then I might reconsider.

Why not just have the OH instances in the other networks use this MQTT server, or have the privileged network use one of the others? In order to bridge brokers you will have to open the MQTT ports between the two networks anyway. You don’t have to have the username/password of the protected broker on the less privileged networks, but you can assign a different username/password to the OH instance in each network and change that periodically, and get about the same level of risk mitigation with much less cost in time and complexity.
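To sketch the per-network credentials idea with Mosquitto: each OH instance gets its own user, scoped down with an ACL. The usernames, topics and file paths below are invented for illustration.

```
# mosquitto.conf (relevant lines only)
listener 1883
password_file /etc/mosquitto/passwd
acl_file /etc/mosquitto/acl

# Contents of /etc/mosquitto/acl:
# the DMZ instance may only touch its own event-bus subtree
user oh-dmz
topic readwrite eventbus/dmz/#

# the privileged instance sees everything
user oh-core
topic readwrite eventbus/#
```

Users are added to the password file with `mosquitto_passwd`, and rotating a credential means updating one file plus one OH broker Thing.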

Unless your ultimate goal is to learn how to do this type of stuff, you need to assess the cost of the mitigation compared to the impact of the risks you are mitigating. The standard formula is: risk = likelihood of an attack/event occurring × impact should the attack/event be successful.

Any mitigation should cost less than the result of that formula. Usually, money is used to quantify the impact so often you need to convert things like your time to set it up and long term maintenance into a monetary amount.
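To make the formula concrete, here is a back-of-the-envelope version. The numbers are entirely made up; plug in your own estimates.

```python
# Illustrative numbers only.
likelihood_per_year = 0.05   # chance the threat is realized in a given year
impact = 2000.0              # monetary loss if it is realized

annual_risk = likelihood_per_year * impact   # expected annual loss

# Amortized yearly cost of the mitigation (mostly your time)
mitigation_cost_per_year = 500.0

worth_doing = mitigation_cost_per_year < annual_risk
print(annual_risk, worth_doing)  # 100.0 False
```

With these numbers the mitigation costs five times the expected loss, which is exactly the kind of result the formula is meant to surface.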

For a home system, this looks to me like it is going to cost way more than the risks it addresses. So I would only recommend pursuing this architecture if you have some other goal over just security, such as a learning exercise. Of course, perhaps you are personally the target of a state actor or the like, in which case don’t do home automation in the first place.

It is high on my to-do list actually. I have an MQTT 1 event bus configuration set up between OH instances (one at my home and one at my dad’s). I intend to update the docs once I get a chance to figure it out.

The code would look something like the following (not tested yet):

    // 1. Get the value of the trigger channel
    val event = receivedEvent // I think this is just the event as a String, which is what we need to split

    // 2. Split by "#"
    val parts = event.split("#")

    // 3. Command the Item named in parts.get(0) with the value in parts.get(1)
    sendCommand(parts.get(0), parts.get(1))
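The parsing step itself can be sketched in Python to make the payload handling explicit. The payload format (`<item name>#<state>`) follows the trigger-channel separator convention discussed above; one small difference from the DSL sketch is splitting only on the first separator, so a state that itself contains "#" survives intact.

```python
def parse_event(payload: str):
    """Split an event-bus payload of the form '<item>#<state>'.

    Splitting with maxsplit=1 keeps any '#' characters
    that appear inside the state value.
    """
    item, state = payload.split("#", 1)
    return item, state

print(parse_event("LivingRoom_Lamp#ON"))  # → ('LivingRoom_Lamp', 'ON')
```

Whether the DSL's plain `split("#")` is enough depends on whether any of your item states can contain the separator character.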

I have created a PR (https://github.com/openhab/openhab2-addons/pull/4936). If it requires small changes, we can do that in a follow-up PR.


The distinction lies in MQTT using TCP and me having a stateful firewall. Opening a hole in the firewall DMZ->Internal is less preferable than doing it the other way around, Internal->DMZ, for obvious reasons. If you weigh investing time in setting up a proxy on the internal network against setting up an MQTT broker in the DMZ, the sum is in favor of the latter. To my knowledge, MQTT bridging uses a single TCP port on the remote broker, in this case the agent; the central broker acts like a client. So it’s a one-way street with two-way traffic.
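That "one-way street" maps directly onto Mosquitto's bridge configuration on the central broker: it opens one outbound TCP connection to the DMZ broker, and messages flow both ways over it. A minimal sketch, with the connection name, address, topic and credentials all invented:

```
# mosquitto.conf on the central ("more privileged") broker
connection dmz-agent
address 10.0.1.10:1883

# Mirror the agent's event-bus subtree in both directions at QoS 0.
# "both" means events come in and commands go out over the single
# outbound connection this broker initiates; no DMZ->Internal hole needed.
topic eventbus/# both 0

remote_username central
remote_password changeme
```

The firewall then only needs one Internal->DMZ rule for the broker port.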

Yes, you are right. But again, the fact that the network has client isolation makes it a tad safer when it comes to the storage of secrets. Separation gains security. This is also more for the pure fun of it, since the technical solutions are the same as above. So no real cost, but a small benefit. Nothing big.

For me, the cost in infrastructure is minimal as I already have a full range of Ubiquiti Unifi devices with central management of VLANs etc. Provisioning a new topology is quite easy (everything is relative). Part of this is actually doing it the paranoid way, just to learn more about the practical work involved, and the practical limitations/possibilities in the end.

Well, I don’t agree with you there. Subnets are non-negotiable. The details of which way the port “doors” are opened don’t really make a huge difference to the work involved.

I’m all for doing crazy things to learn. That will always trump the risk calculation.

I never said the subnets are overkill. But the number of copies of the same service you have running that all need to be maintained and updated separately is an ongoing cost to your time. Setting them all up in the first place is a cost to your time. Is it worth that cost? Obviously I can’t answer that because I don’t know what threats you are trying to mitigate, the likelihood that the threats will become realized, and the impact to you should they become realized.

It’s probably because I have to deal with this all the time, but “Secure all the things!” is rarely a good enough argument. Secure them against what? How much is it going to cost? How much will we lose if the threat is realized? Only if the amortized cost is less than the impact is a mitigation worth doing.

Without knowing what threats you are trying to mitigate against and their impacts, I can’t say whether this is for sure overkill or not. But for a home system it sure seems like it.


Well, I appreciate your input, and I will play around with a couple of different alternatives, including this one, to see what I end up with. The described setup is not necessarily the safest either, depending on a few factors, but it is a fun one that handles discoverability out of the box.

In any case, having sub-instances of Openhab put together in a network is probably something that is supported through MQTT, and that is what I asked for. Thanks.

BTW: when fiddling with one of the alternative paths here, I sorta remembered one of the original motivations for this: getting Sonos to work over separate subnets. Is this expected to work out of the box in OpenHab over two subnets? (Not relevant to MQTT, but relevant to the rest of the discussion.)

Depends on your router, I guess. TCP is not using broadcast mechanisms, so subnet boundaries are no problem, as long as the IPs are routed. The MQTT binding does not limit this in any way.

Looking into the binding docs and the protocols used, I found that setting Sonos up relies on multicast/broadcast, specifically SSDP. The docs even specify that TTL manipulation or relays must be used to make this work across two subnets. So… my solution wasn’t that alien after all :slight_smile: