MQTT vs Binding

With the rapid adoption of MQTT, it almost seems that this will replace the need for some (many) bindings. This would solve some of the 1.x issues (i.e. Insteon).


How are MQTT and Insteon related? Can you elaborate on that statement?

A binding will always provide better support and an easier setup for a newcomer to openHAB. If a device uses 10-bit resolution, a binding can perfectly hide this from the device setup and seamlessly map it to the 0-100% controls of openHAB. If the device needs a log scale for the movement, the binding can take care of that as well. A binding may be more complex, but that is its strength: it can do far more and hence give a better result to the end user.
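As a rough illustration of the kind of translation a binding can hide, here is a sketch; the function names and the log curve are invented for this example, not taken from any real binding:

```python
# Sketch: hide a device's 10-bit resolution behind openHAB's 0-100 % scale.
# The names and the log curve are made up for illustration.
import math

def raw_to_percent(raw: int, bits: int = 10) -> float:
    """Linear map from an n-bit raw reading (0..2^n-1) to 0-100 %."""
    return 100.0 * raw / ((1 << bits) - 1)

def percent_to_raw_log(percent: float, bits: int = 10) -> int:
    """Map 0-100 % to a raw value on a log curve, so equal slider
    steps feel perceptually even (0 % maps to 0)."""
    if percent <= 0:
        return 0
    max_raw = (1 << bits) - 1
    return round(max_raw * math.log(1 + percent) / math.log(101))

print(raw_to_percent(512))        # roughly mid-scale
print(percent_to_raw_log(100))    # full-scale raw value
```

The point is simply that the user never sees the 0-1023 range or the curve; they just get a Dimmer Item.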

There is a stand-alone Insteon-to-MQTT bridge.

Additionally, many other HA devices now have MQTT bridges. This could largely eliminate the need for plugins to support different devices.

At first glance, I would agree. However, if an MQTT bridge implements the HA or Homie standard, that would eliminate a lot of setup issues. And many of these HA technologies, like Insteon and Z-Wave, require software that manages those networks.

Yes, this is true of the MQTT-to-Insteon bridge. The issue I had when I checked it out was that it didn’t support the Insteon Hub, only the PLM. Not sure if that has changed. However, I prefer to have a more direct solution when available.

Personally, I don’t see them necessarily replacing bindings any time soon, if ever. The biggest problem is that support for various technologies ends up being outsourced to whoever writes these external bridges. It can be a great way to provide support for technologies that are not yet supported well by OH, and perhaps in those cases a native binding will never be built.

But the requirement to install some other third-party bridge program, maintain it, and, if the bridge doesn’t implement Homie, manually create all the MQTT connections in OH is far less usable than installing a binding and checking the Inbox.


@rlkoshak, Maybe.

It would provide a [relatively] easy-out if the outcome of the Removal of the OH 1.x Compatibility Layer thread is to have the OH 1.x Add-ons rewritten.

If that’s the collective choice taken, then there’ll already be a whole bunch of code-changes, testing, documentation, and a significant user-transition impact.

… and no guarantee that the same fate won’t befall OH 2.x Add-ons in the next few yrs.

This will be especially true for the larger (Bridge-style) Add-ons like mine (MiOS) that pull in hundreds of Items. I have 600+ Items bridged to a MiOS/Vera unit and, based upon the (generated) Items files I’ve seen, that’s fairly typical for openHAB/MiOS users.

So if we end up having to rewrite these larger Add-ons, then might as well go all-in and use something like MQTT, with the existing (Homie) convention(s), and completely bypass the OH framework.

Federating in this manner provides a few benefits for Add-on Authors and Consumers:

  • a) a versioned, and language-neutral, contract for the Add-on Author (which is what I was looking for)
  • b) greater functionality sharing across the different HA/Controller environments (write once, in anything, use everywhere)
  • c) cross-Controller migration enablement (e.g. from/to openHAB, Hubitat, Hass, SmartThings, Homie, Vera etc)
  • d) decoupling Java as the coding/language requirement
  • e) decoupling Eclipse as the editor (pseudo-requirement)
  • f) decoupling/modularizing/componentizing the other build components

Apart from pure process-management, there’s a lot to be said for writing the larger Add-ons in this manner, and using the discovery components of OH’s MQTT v2 to discover, and long-term support, the contract.
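To make the Homie side of this concrete: the convention publishes a device’s structure as retained attribute topics that a consumer (such as openHAB 2.4’s MQTT binding) can discover. A sketch with made-up device/node/property names, following the Homie 3.x topic layout:

```
homie/energy-monitor/$homie                → 3.0
homie/energy-monitor/$name                 → Energy Monitor
homie/energy-monitor/$state                → ready
homie/energy-monitor/$nodes                → mains
homie/energy-monitor/mains/$name           → Mains Feed
homie/energy-monitor/mains/$type           → power-meter
homie/energy-monitor/mains/$properties     → power
homie/energy-monitor/mains/power/$name     → Power
homie/energy-monitor/mains/power/$datatype → float
homie/energy-monitor/mains/power/$unit     → W
homie/energy-monitor/mains/power           → 243.7
```

Anything that can read these retained topics can reconstruct the device model, which is what makes the contract language-neutral.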

Forward-looking, I’d imagine (language-specific) frameworks popping up to do the Application-protocol lifting. Once framework’d, Add-ons written in this manner could be exposed through other Protocol conventions (like Weave’s binary form) and not just MQTT/Homie (or hass)

For those who’ve seen CORBA, or any of the since-then variants (XML-RPC, JSON variants, etc), this is all nothing new.

Side-bar: having recently attempted to get a fresh Eclipse environment up and building, I wouldn’t wish that on anyone… it’s clearly in transition, given the hours wasted there. In its current state it’s a significant deterrent to entry unless you’re a daily user… but that’s another topic :stuck_out_tongue:


I’m down to 90 errors. It’s getting better every week xD I’m still refraining from doing core development atm, but a build system change is never easy. At least the new build system theoretically works in IntelliJ as well.


That’s definitely good to hear. At the time I tried it, there were also errors unless you had a very specific (i.e. not the December) version of Eclipse… and even then it needed to be touched/fixed if you restarted Eclipse.

Luckily all documented as work-arounds by various community members here, but full of landmines at that time.

For someone like me, who mostly works in other programming languages these days, that setup is a massive barrier to entry. I’m glad to hear that its simplification is progressing.

Don’t get me wrong, I’m not against these MQTT bridges by any means. As a bridge developer, coding it as a stand-alone X-to-MQTT bridge has a lot of attraction, including that you can instantly support users of other home automation hubs. It is also indeed an option for those faced with the OH 1.x Compatibility Layer issue, though the current workaround for that (federating the event buses) is essentially an OH 1.x-to-MQTT bridge in and of itself. So if that is where we go, why go through any extra effort at all instead of letting OH implement the bridge part for you?

But speaking from a strictly OH-user perspective, it will forever be more intuitive, less work, and less to keep up with to have the capability as a binding instead of a separate bridge. If the average user has 10 bindings (a totally made-up number, not based on anything), do we really think it’s easier, or even acceptable, for them to have to install and manage, let’s say, 5 (half, again a totally made-up number) separate X-to-MQTT bridges from 5 different sources with 5 different configuration and management approaches?

That’s all I’m really saying. It’s good for the developer, perhaps, but that comes at the expense of the user.

But for anyone considering building their own X-to-MQTT bridge, :+1: to Homie! :smiley:

If the consensus solution is for compat to be dropped, then that’s the route I’d take, if it’s maintained moving forward. If it’s built in, then grand, I’ll leave well enough alone (aka “ain’t broke, don’t fix it”).

Put simply, MQTT + Discovery (e.g. via Homie’s style) + Filtering is equivalent to the core openHAB Message Bus, just with more of an “out of VM” experience and more data-marshalling costs.

I have a pipeline of other things I’d like to write so, for those, it’s very likely I’ll do them in the style outlined. I have a few objectives specific to those:

  • a) I have multiple Controllers, and want to be able to switch and/or federate across
  • b) ultimately consolidate to a commercial controller to run the bulk of the house
  • c) prototype functionality quickly (so…, it’s other languages with less overhead)
  • d) minimize exposure to future compat issues, partially by relying upon more standards-based stuff

For (b), it’s the eternal discussion in HA: “could I hand this off to the next house-owner?” … unless you pay for/maintain a Control4 system, of course :wink:

OH seems to be getting further away from that as we speak… starting with the loss of ESH, but time will tell.

That may be the way it is, but not necessarily the way it needs to be. There are multiple installation and containerization frameworks that could be used to install, and run (often securely), 3rd-party “bundles” from somewhere.

e.g. all the way from TR-069 through Docker, and the component primitives along the way: lxc, runc, apt, etc.

In some ways, OH has this problem with things like its (licensed?) Java dependencies and/or OS-level dependencies; at least it will when it becomes more appliance-like.

It also provides 3rd parties with an opportunity to “sell hardware” (which they love) by baking this code into something that is remote from OH and performs auto-updates. Since it would be purpose-built, it’ll be a lot easier to get right, without interference from the peer software running on the machine/JVM.

I have many small, purpose-made, devices like that in my Alarm Panels (Paradox, DSC), Energy Monitoring (Custom MQTT), Log server (RaspPI), etc. In many ways, they’re simpler to manage than a single large-hairball deployment (which I also run as a series of VM’s on a NUC :slight_smile: )

You or I can manage that, but the average Joe user isn’t going to. And I’m not saying they can’t, though it may be beyond some, but they won’t.

And it isn’t so much that the consensus is to drop the compat layer as it is that in the five years it’s been around, only one developer has ever maintained it, and he didn’t want to anymore.

I suspect if anyone at all were to step up and take on its maintenance it’d be around for a good long time. Without that, what can we expect?

Don’t get me wrong: in some use cases Homie is great, and I am looking at it in depth at the moment to use in a project, so I do like the idea it is trying to achieve. In other use cases, though, using human-readable MQTT as a “man in the middle” is a shocking idea. Consider DMX as an extreme example: it takes only 23 ms to transmit a full 512 light states, which is very fast. Using human-friendly, readable MQTT messages would make it at least 140 times slower, which cripples DMX for what it is designed for. The link below to the Home Assistant forum also touches on this increased overhead…
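To put a rough number on that overhead, here is a back-of-envelope sketch; the Homie-style topic, payload size, and MQTT header size are all assumptions, so the exact ratio will vary:

```python
# Back-of-envelope: per-update bytes for per-channel, human-readable MQTT
# messages vs one raw DMX512 frame. Topic/payload sizes are assumptions.
DMX_CHANNELS = 512
dmx_frame_bytes = DMX_CHANNELS  # one byte per channel in a DMX512 frame

topic = "homie/dmx-bridge/channel-255/value"  # hypothetical Homie-style topic
payload = "255"                               # channel level as ASCII digits
mqtt_overhead = 2 + 2                         # minimal fixed header + topic length prefix
per_message = len(topic) + len(payload) + mqtt_overhead

mqtt_total = DMX_CHANNELS * per_message
ratio = mqtt_total / dmx_frame_bytes
print(f"~{mqtt_total} bytes via MQTT vs {dmx_frame_bytes} bytes of DMX "
      f"(~{ratio:.0f}x more on the wire)")
```

Bytes on the wire aren’t the whole story (per-message broker round-trips add latency too), but the order of magnitude matches the concern above.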

Each piece of hardware needs to be considered to determine the best approach: in some cases using MQTT is fantastic, and in others it is not. This is why different protocols exist, so a hardware designer can choose what is going to give the end user the best experience within the goals of the project.

As someone who has written an MQTT-based binding for an MQTT bridge (see espmilighthub), I can tell you first-hand that if I were to throw it all away and start from scratch, or if openHAB 3 or 4 forces me to rewrite, I would still choose the same path and use a binding, unless the status quo changes.

In this case the hub/bridge is written and maintained mostly by HASS users. Creating an alternative firmware using Homie is unlikely in the short term and far more work than writing a simple binding. I actually save huge amounts of time compared to creating a separate firmware, so the choice is easy. It is about getting Homie accepted and implemented in more places than openHAB before it can take off. Hopefully the license for Homie, among other factors, allows it to take off. There are many examples from the past of great protocols that never took off.

Here is an example discussion.

… and without a UI, using Karaf is a PITA, and the average user can’t do anything.

If the model above can be proven to work, then the components just need to be wrapped into another KAR service deployment… for openHAB.

I’m guessing this is similar to how the C-based Serial Driver is being packaged/delivered today, given that it shows up in the OSGi console but clearly has C libraries at its core.

Anyhow, I’ll cross that bridge when I get to it…

Fair enough; for whatever reason, the layer is not carried forward. Ultimately every project has its tech-debt, based upon decisions that were right at the time. Working out how, and when, to eliminate those has to factor in a bunch of different issues.

But that’s for the other thread…

BTW, it’s about 4 yrs since it was in a semi-working state:

@watou and I were some of its earliest (external) users.

That’s partly why I reference Weave above. It has the binary-compressed format (TLV) to handle compact transmissions.

I’ll start with Homie’s format (and maybe hass’s), both to prove out the model and because there are existing entities (notably openHAB 2.4 and Hubitat) that know how to process it. In the case of Homie, it seems like it’s missing a bunch of entities, so the mapping might be lossy for broader usage.

Anyhow, availability of lib’s and clients for Weave isn’t there just yet.

Ultimately, there will be a need to support multiple of these pseudo-standards… which is why I make the framework comments above.

An “X” larger wire format doesn’t necessarily translate to the same (“X”) ratio of slowness… it depends upon a lot of other scale/perf/resource factors. But I get what you mean.

Many of these formats are compact for a bunch of PHY-layer constraints, or the tech of the time.

Some concrete examples…
My Alarm Panel uses ~16-char codes for its events, mostly because its bus (Paradox COMBUS) is capped at 56k and it’s attempting to limit bus queueing whilst remaining responsive. I source this data from my Vera, via my Binding, which expands it into (compressed) JSON that I currently pull into openHAB.

So while I could have an issue in theory, at the rate of household “alarm panel events” in a real-world deployment it’s not an issue in terms of bandwidth, and openHAB is very responsive to what’s going on with the motion/contact sensors in the house. Some of these are themselves RF; luckily most are wired :wink:

The bigger challenge here is that I cannot stream the processing in the overall pipeline, as the (JSON) format used isn’t amenable to that. Each node in the processing chain is effectively adding latency.

Similarly, I use MQTT for my Energy data. That’s ~100 channels (Power, Energy and Current across ~35 channels + Volts) every 5s (~72,000 metrics/hour). The wire format of the data-stream is similar to what Homie uses, and I batch the MQTT pushes (using paho.mqtt.publish.multiple) for efficiency.
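For anyone curious, the batching mentioned above can be done with paho-mqtt’s publish.multiple(), which sends a whole list of messages over a single broker connection. A minimal sketch; the topic names, values, and broker hostname are placeholders, and the publish is guarded so the snippet runs without a broker:

```python
# Sketch of batching metrics with paho-mqtt's publish.multiple(), which
# sends a whole list of messages over one broker connection. Topic names,
# values, and the broker hostname are placeholders.
DRY_RUN = True  # set to False to actually publish to a broker

readings = {"mains/power": 243.7, "mains/current": 1.05, "volts": 241.9}

msgs = [
    {"topic": f"energy/{name}", "payload": str(value), "qos": 0, "retain": False}
    for name, value in readings.items()
]

if not DRY_RUN:
    import paho.mqtt.publish as publish
    publish.multiple(msgs, hostname="broker.local")  # one connection, N messages

print(f"batched {len(msgs)} messages")
```

One connection for the whole batch is what keeps the per-metric overhead low at this publish rate.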

It’s not using a material amount of bandwidth, nor CPU, and the stream isn’t (yet!) compressed. The device this data is sourced from also uses a much more compact data format (Brultech GEM/ECM-1240). It’s running BTMon on a really old RaspPi B (1st gen), although it has fewer requirements on real-time data… I pump it into a LAN-based InfluxDB for analysis.

It won’t be a stretch, or a size increase, to make a MQTT-Homie variant, with about the same message sizing.

Cool, thanks for the pointer!

@mjcumming Let me know if this is off-kilter with your intent for this thread. I think it’s all in keeping, but happy to split it out if it’s not what you intended.


@guessed, great discussion.

@rlkoshak, I understand the desire for dedicated bindings, but there are several advantages to using MQTT, as enumerated above. For instance, Insteon and Z-Wave both require a means to manage their networks (at least Insteon does, especially if you are using more than 10-20 devices). Duplicating that functionality in OH is an enormous amount of work. The 2 primary bindings I use are not being actively developed anymore; they have occasional problems and can bring down OH when they fail. An MQTT bridge would in theory have a larger user base and wouldn’t bring down OH if it failed. Additionally, there are the further benefits enumerated above, such as being able to develop in different languages.

It would also free resources for OH developers to work on the bigger opportunity of making home automation easier (rules, interfaces, voice, intelligent learning, etc.).

I agree that external applications are not a bad idea if they have an interface that openHAB can understand.

Personally I’m using the deCONZ software for example, which is abstracting zigbee. deCONZ speaks the Hue protocol (+ a real-time extension for sensors) and I could easily integrate it into my openHAB setup. (Maybe not the best example, because I ended up writing a binding to support the real-time extension :sweat_smile:)


I wish I had been smart enough to put it that way on the other thread.

I’m not arguing against the benefits listed for external bridges like the ones described. I’m just pointing out that for a sizable portion of the current user base, having to run an external application to get support for a given technology would be a deal killer. So moving to an “external bridges only” approach would essentially exclude those users from considering OH.

There is absolutely nothing preventing anyone who wants to use this X to MQTT bridge sort of approach from doing so now. Just look at the population using zigbee2mqtt. But also look at some of the comments on the removal of the compatibility layer thread. There are lots of threats to leave OH if the solution to continue to support OH 1.x bindings is by running something separate from OH itself.

To repeat the same comment I’ve used several times over the past week, why should your preference trump the preference of others when both approaches can be supported?

I’m not trying to push any preference, just pointing out that there really has been an evolutionary change (maybe not that dramatic) with MQTT and either HA or Homie implementations. It can remove the need to pick HA software based on device support, letting you instead pick a solution that does great automation through its user interface, automations, etc. In some ways MQTT has changed, or will change, the playing field for HA solutions that have previously relied on saying “we support xxx protocols”. While I don’t use a lot of MQTT in my home, that is rapidly changing, and I could drop most of the OH bridges I use and move to MQTT only.
