It uses HTTP and the same REST API used by the user interfaces. I understand it also receives events from the OH2 server.
From a dev perspective, for others looking at doing this, the theory of replacing the SmartThings API is relatively simple.
Let’s say you have 100 OpenHAB devices, each of them on a VPN with a reachable IP. As long as you know the IPs and can identify which device is which, you can map user accounts with them. So in creating an API, you need your users to be associated with, or have permissions to, one or more devices - each with a JWT token to authenticate with.
If the VPN IP range of our devices is 10.0.0.1 - 10.0.0.100, the calls are easy:
GET http://10.0.0.1:8080/rest/items/My_Item --> user A has permissions on this
GET http://10.0.0.2:8080/rest/items/Another_Item --> user A has permissions on this also
GET http://10.0.0.3:8080/rest/items/Other_Item --> user B has permissions on this
GET http://10.0.0.4:8080/rest/items/Random_Item --> user C has permissions on this
// and so on
All that’s required is to know which host URL to call, as the parent API is a wrapper around the OpenHAB API.
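To make that concrete, a minimal sketch of such a wrapper in Python: a user-to-device permission map plus a forwarder to each device's own openHAB REST API. The IPs, user names, and per-device tokens are all invented; a real service would load these from a database.

```python
# Sketch of the parent API: check permissions, then forward the call
# to the right device's openHAB REST API with that device's token.
import urllib.request

DEVICES = {
    "10.0.0.1": {"token": "jwt-for-device-1"},
    "10.0.0.2": {"token": "jwt-for-device-2"},
    "10.0.0.3": {"token": "jwt-for-device-3"},
}
PERMISSIONS = {
    "userA": {"10.0.0.1", "10.0.0.2"},
    "userB": {"10.0.0.3"},
}

def resolve(user: str, host: str) -> str:
    """Return the device's REST base URL if `user` may access `host`."""
    if host not in PERMISSIONS.get(user, set()):
        raise PermissionError(f"{user} has no access to {host}")
    return f"http://{host}:8080/rest"

def get_item_state(user: str, host: str, item: str) -> str:
    # The parent API just forwards, attaching the device's own token.
    req = urllib.request.Request(
        f"{resolve(user, host)}/items/{item}/state",
        headers={"Authorization": f"Bearer {DEVICES[host]['token']}"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()
```

The only stateful part is the mapping itself; everything else is a stateless pass-through to whichever box the VPN routes you to.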
Tailscale, a big tech company??
It’s a couple of nerds trying to turn their contribution to the Open Source world into some money. The identity provider is for your admin ID only. Just create a functional account and you’re set there.
HTTPS is now supported.
As mentioned in the Git issue, DEBUG logs could probably help the analysis.
Not Tailscale. Google, Microsoft, etc. I’m all for what Tailscale is doing. Just don’t want to deal with big “identity providers” like these now their true colours are on full display. I’m actively removing them from projects we work on and services we use.
It appears you are using this in the enterprise.
There are people here (developers) that offer enterprise-level services apart from this forum; some use OH-based solutions. PM me if you want me to connect you with a couple of them.
There are very few here on the forum experienced with using smart home technologies in a commercial setting.
Not yet, but soon. We have a lab set up to toy with some of this to figure out better ways to do things. The offer is appreciated though!
Interesting article for those interested in doing more with OH:
“Scaling Home Automation to Public Buildings: A Distributed Multiuser Setup for OpenHAB 2” by Florian Heimgaertner, Stefan Hettich, Oliver Kohlbacher, and Michael Menth
(University of Tuebingen, Department of Computer Science, Tuebingen, Germany)
Looking at the university site, it was envisioned to interconnect several campus buildings, so the public Internet was not involved from a security perspective.
The principle should be the same though if the boxes are configured to only communicate over a VPN. That solves the issue of being able to access multiple device REST APIs from one application, although it may not necessarily be the best option in the long term (a distributed or federated bus of some kind is going to be essential if you push past 10,000 nodes).
Not saying it’s a blueprint, but the comments earlier have pointed out there’s not a huge amount of data related to building multiple instances out into a wider network.
I’ve shared it elsewhere and even have posted some tutorials. At a high level:
I’ve two OH instances, one at my home and one at my dad’s home.
I run pfSense as my firewall and use the OpenVPN server built into that. If I were not running that, I'd probably set up a WireGuard service on a VPS somewhere using one of the several scripts that make that easy. So I do have a port exposed to the internet on this pfSense box so that I can connect to the VPN when not at home, which means I also have a bunch of other stuff running to monitor it and generate alerts. I also require certificate-based authentication.
My remote openHAB RPi machine connects to this VPN at boot time. The routing is configured so that all the machines connected to the pfSense machine and all the machines connected to the VPN can see each other. Therefore, for all intents and purposes, that remote openHAB instance is on my LAN.
All our phones are configured with an openVPN client and using Tasker they automatically connect to the VPN when not on the home network. Therefore even the phones can see both instances of openHAB as if they were local.
However, to get push notifications the openHAB instances either need to be connected to myopenhab.org or one needs to use something other than the built in notifications (email, email to SMS gateway, Telegram, etc.).
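If you go the Telegram route from a script rather than the add-on, the Bot API is a single HTTP call. A minimal Python sketch; the token and chat id are placeholders you would get from @BotFather and your own chat.

```python
# Send a push notification through the Telegram Bot API.
# BOT_TOKEN and CHAT_ID below are fake placeholders.
import json
import urllib.parse
import urllib.request

BOT_TOKEN = "123456:ABC-your-bot-token"
CHAT_ID = "987654321"

def build_request(text: str) -> urllib.request.Request:
    # sendMessage is the Bot API method for plain text messages.
    url = f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage"
    data = urllib.parse.urlencode({"chat_id": CHAT_ID, "text": text}).encode()
    return urllib.request.Request(url, data=data)

def notify(text: str) -> None:
    with urllib.request.urlopen(build_request(text)) as resp:
        json.load(resp)  # raises on malformed responses

if __name__ == "__main__":
    notify("openHAB: garage door left open")
```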
Not possible. openHAB has text-file-based configs which are stored in /etc/openhab (for OH 3, $OH_CONF) and /var/lib/openhab (for OH 3, $OH_USERDATA). A lot of people, myself included, check all of $OH_CONF and most of $OH_USERDATA into source control.
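If it helps, a sketch of a $OH_USERDATA/.gitignore along those lines, keeping the meaningful state and skipping regenerable data (directory names assume a standard install):

```
# $OH_USERDATA/.gitignore - skip ephemeral/regenerable data
cache/
tmp/
logs/
backups/
# keep jsondb/, persistence config, uuid, etc. under version control
```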
As an aside, if you haven’t yet, you should look into Ansible for building and deploying all this stuff.
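For example, a sketch of a playbook that pushes one shared config tree to every box and restarts the service; the host group, paths, and service name assume an OH2 apt/openHABian install.

```yaml
# Deploy the same items/rules/things files to all openHAB boxes.
- hosts: openhab_boxes
  become: yes
  tasks:
    - name: Deploy shared config tree
      ansible.builtin.copy:
        src: files/openhab/conf/
        dest: /etc/openhab2/
        owner: openhab
        group: openhab
      notify: restart openhab
  handlers:
    - name: restart openhab
      ansible.builtin.service:
        name: openhab2
        state: restarted
```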
openHAB can persist its Item states for analysis and charting to any supported database that can be reached on the network the openHAB instance is connected to.
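For example, a minimal .persist file; the filename selects the persistence service, and the item pattern here is illustrative:

```
// influxdb.persist (or rrd4j.persist, etc.)
Strategies {
    everyMinute : "0 * * * * ?"
    default = everyChange
}
Items {
    // store every Temperature_* item on change and once a minute,
    // and restore its last state at startup
    Temperature_* : strategy = everyChange, everyMinute, restoreOnStartup
}
```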
openHAB uses log4j2 as its logger. There are a number of appenders that can be configured.
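For instance, a sketch of an extra appender in $OH_USERDATA/etc/log4j2.xml that routes one binding's DEBUG output to its own rolling file; the names and paths are illustrative.

```xml
<!-- In the <Appenders> section -->
<RollingFile name="ZWAVE" fileName="${sys:openhab.logdir}/zwave.log"
             filePattern="${sys:openhab.logdir}/zwave.log.%i">
    <PatternLayout pattern="%d{HH:mm:ss.SSS} [%-5.5p] [%-36.36c] - %m%n"/>
    <Policies>
        <SizeBasedTriggeringPolicy size="16MB"/>
    </Policies>
</RollingFile>

<!-- In the <Loggers> section -->
<Logger name="org.openhab.binding.zwave" level="DEBUG" additivity="false">
    <AppenderRef ref="ZWAVE"/>
</Logger>
```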
I’ve been using the GoControl HUSZB-1 for years for both zwave and zigbee. I’ve never had a complaint. It works and is rock solid in my experience. Keep in mind that zigbee is prone to interference from WiFi. In an apartment situation that could become more of a problem.
That journal article went nowhere. The authors did not work with us and did not contribute anything back to the community. It's a dead end, and it was based on a version of openHAB that was already old when the paper was published. It would have been wonderful, but once they got their publication they disappeared. They didn't even try to contribute it back.
Some potentially relevant tutorials:
- Remote Access: pfSense + HAProxy + LetsEncrypt
- A quick intro to Ansible
- An Ansible 'Getting Started' Guide
- Ansible Revisited
- Guacamole for remote access to your machines
- MQTT 2.5 Event Bus
- HOW To setup remote logging (ELK stack) and reduce MicroSD writes
- New Add-on bundle for Prometheus health Metrics
- Git based non-public versioning and deploy workflow
You can log into Tailscale with your email. No need to feed the big data leechers. [or what do you call them in English? Krakens?]
I wish! --> https://tailscale.com/kb/1013/sso-providers
Can I sign up with an email address?
We don’t support sign-up with email addresses. By design Tailscale is not an identity provider: there are no Tailscale passwords, account recovery, etc.
You missed the gmail.com part of the page:
gmail.com addresses are treated specially: they always authenticate through Google without needing to be configured first.
Legendary reply - thank you! I know Ansible very well. As fun as it will be to get into the detail of OpenHAB, we’re looking at a fairly straightforward template we can use for multiple OpenHABian devices and generally work upstream. The battle is to get the boxes reporting their data and receiving commands.
It seems https://github.com/mp911de/logstash-gelf/ can connect log4j2 to Graylog and Seq, but there’s no native appender for the latter: https://docs.datalust.co/discuss/577edbb1c6a4f00e0052de58
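For Graylog that would look roughly like this in log4j2.xml; the host, port, and origin hostname are placeholders, and the Gelf appender comes from the logstash-gelf jar, which has to be on openHAB's classpath.

```xml
<!-- GELF appender from logstash-gelf (biz.paluch.logging),
     shipping logs to Graylog over UDP -->
<Gelf name="graylog" host="udp:graylog.example.org" port="12201"
      version="1.1" extractStackTrace="true" originHost="openhab-box-01"/>

<!-- ...then reference it from a logger -->
<Root level="INFO">
    <AppenderRef ref="graylog"/>
</Root>
```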
I’ve found Z-Wave on the HUSZB-1 is generally OK, but the Zigbee side just won’t work; it’s stubborn about not working on openHABian. It’s also ugly and slightly prone to being knocked over with that thing sticking out of it. That said, I’m now on our second RaZberry, because with the first we’ve exhausted every avenue other than a hardware fault. What I’d love is a component shield for the v4 Pi which did what the HUSZB-1 does, right out of the box without needing configuration.
In terms of MQTT, I’m having a bit of trouble configuring a local Pi to replicate SmartThings’ webhook events (the post I added is here: Logging changes/events to remote MQTT server). The trick is to get the 20 boxes sending their activity up to the cloud server so a bunch of other things can subscribe to the data stream. Then we have a MongoDB cluster, a load-balanced web app to read from it, and a Centrifugo server (https://github.com/centrifugal/centrifugo) with a REST API to talk to API clients. The part which is proving difficult is connecting these things together to ensure scalability as we go.
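One sketch of the "boxes sending up their activity" half, assuming each box exposes openHAB's /rest/events server-sent-events stream: a small Python tailer that decodes each event, ready to be handed to an MQTT publish. The host is a placeholder.

```python
# Tail a box's openHAB SSE event stream and yield decoded events.
import json
import urllib.request

def parse_sse_line(line: str):
    """Return the decoded event dict for a `data:` line, else None."""
    if not line.startswith("data:"):
        return None
    body = line[len("data:"):].strip()
    return json.loads(body) if body else None

def stream_events(base="http://10.0.0.1:8080"):
    # /rest/events is openHAB's SSE endpoint; this blocks forever.
    with urllib.request.urlopen(f"{base}/rest/events") as resp:
        for raw in resp:
            event = parse_sse_line(raw.decode("utf-8"))
            if event is not None:
                yield event

if __name__ == "__main__":
    for ev in stream_events():
        # in practice: client.publish(ev["topic"], ev["payload"])
        print(ev.get("topic"), ev.get("payload"))
```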
HiveMQ looks good for joint brokerage and storage, but there’s no MongoDB plugin openly available yet for cloud persistence. And there’s no reliable FOSS CLI tool I can find to subscribe to the MQTT channels and forward them as HTTP POST payloads to a web app (mosquitto_sub + curl is a no-go because of the concurrency).
Another concern I have is breaking changes in versioning. Is there a release date for the stable branch of OH3? It looks good, but I’m wondering about the overhead of re-flashing a bunch of Pi boxes onto something new.
As I suggested, then: get a dummy account. Or explicitly choose one of those smaller ID providers and help them fight big G.
I can’t imagine it would take more than a few hours to code something up in Python. I’m a little concerned that you are building this relatively massive and complex system and the lack of a FOSS script to publish MQTT messages to Mongo is the sticking point.
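For what it's worth, a sketch of that glue: one persistent MQTT connection (via the paho-mqtt package), each message forwarded as a JSON POST. The broker, topic, and endpoint are all placeholders.

```python
# Subscribe-and-POST bridge: forward each MQTT message to a web app.
import json
import urllib.request

ENDPOINT = "https://app.example.org/ingest"   # hypothetical web app

def to_post_body(topic: str, payload: bytes) -> bytes:
    """Wrap an MQTT message as the JSON body the web app would ingest."""
    return json.dumps({
        "topic": topic,
        "payload": payload.decode("utf-8", "replace"),
    }).encode()

def forward(topic: str, payload: bytes) -> None:
    req = urllib.request.Request(
        ENDPOINT, data=to_post_body(topic, payload),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

def main():
    import paho.mqtt.client as mqtt   # pip install paho-mqtt
    client = mqtt.Client()
    client.on_message = lambda c, userdata, msg: forward(msg.topic, msg.payload)
    client.connect("broker.example.org", 1883)
    client.subscribe("openhab/+/events/#")
    client.loop_forever()

if __name__ == "__main__":
    main()
```

Swapping the POST for a pymongo insert would give you the MQTT-to-Mongo version of the same thing.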
Last I saw, it was December, but "released" doesn’t mean bug-free, and it doesn’t necessarily mean complete either.
Appreciated! Self-hosted OpenVPN seems to be the way to go here, even if Tailscale claim WireGuard performance is better. They seem very enterprise-oriented (understandably) rather than M2M.
OpenVPN is a lot more up-front config, but there seems to be more long-term control. The other providers are just as expensive: deploy 100 boxes and you’re looking at $200/month, or you’re sharing accounts across devices you’ll have trouble separating. OpenVPN Cloud itself is $75/month for 10 user accounts.
With that kind of money, you’re better off spending it on a few VPS nodes you can have IT people monitoring.
We’re tinkering with the architecture, not building out just yet. We could easily build something in Go or Python, naturally, but it would be easier to have a CLI tool which takes a nice .conf file on boot and runs without much interference; the first rule is never to build for yourself what someone else has already built. It’s a really simple thing, which is why it’s surprising there is no CLI tool for it! We had a similar issue when playing with Crossbar (crossbar.io), but it includes a REST bridge: the problem there was authentication between the browser WebSocket client and the API. It was just a maze.
HiveMQ takes care of a lot of that (e.g. https://github.com/ckurze/mongodb-hivemq-iot-demo), since if we can get the MQTT messages into cloud storage, we can simply read from it as we go. The issue is separation between the backend and frontend processes, so the whole thing doesn’t go down if there’s a network problem. What we’re sending to a WebSocket client, or what they’re reading from the API, is different from what the devices are streaming to the bus/DB.