openHAB runs in a docker container, so is OS agnostic, yes?

I’m about to jump into this home automation thing. I’ve been an embedded programmer for 20 years and am getting into IoT at work, so the myriad of solutions to home automation is bewildering. I already can’t see the forest for the trees as it is! This thread is mostly helping me to see the obvious.

While reading the forums and documentation, it becomes clear there are several intertwined topics that need to be addressed but are not overtly stated:

  1. What and why Dockers
  2. openHAB is installed via Docker container for version and dependency control
  3. Which OS and why
  4. How the openHAB docker interacts with the OS and the network
  5. Communication Protocols, what they are and which brands use what
  6. Probably more obvious things I’m overlooking

Question 1:
I’ve got an Intel Atom mini-ITX computer lying around that I’m going to use, and I see no problems with running openHAB on it since it can run Windows and almost any Linux I want, right? I’ve had a few of these things; another one was the pfSense firewall at work.

  • Platform information:
    • Hardware: Intel Atom Cedar Trail-D Processor D2550, 4GB ddr3
    • OS: the latest Ubuntu (but will try Clear first)
    • Java Runtime Environment: will install latest needed for new install
    • openHAB version: latest

Question 2:
Is Ubuntu the OS of choice because it is a Linux and probably the easiest one with the most online support? I’m thinking about trying Clear Linux (I use it at work, etc.) and if it can install all the software I need, I think it will be great. Thus, if it will run the openHAB Docker image, there isn’t any direct reason this wouldn’t work, yes?

Question 3:
Looking through the support requests, I’m seeing that the majority of the issues are getting a software package, something plugged into the host computer, or a service to work correctly (which I consider an OS topic), or getting a protocol or specific piece of hardware to interact (which is hacking things to talk to each other). Is that generally correct?

My first steps:

  • I have a SmartLife power strip that I’d like to control locally including scheduling and automation.
We were gifted a bunch of Feit bulbs and I’d really like to automate their color changing on a schedule or via a button, in software or physically. The threads on the Feit bulbs give me hope.


Actually any Debian-based Linux is the OS of choice since OH distributes packages compatible with those OSs.

Docker is not as well supported currently, in my opinion. I have recently started testing Docker installations hoping to improve the documentation and perhaps the experience.

I am not familiar with the devices you wish to start with so it is difficult to comment much further. I would recommend you also be familiar with Docker & containers before using it here. There will likely be times you need to work from within the container.

The file paths are different for Docker installs than for Debian package-based installations, so that would present some challenges in translating some solutions found here.

I personally prefer just a Debian CLI environment with a Linux installation of openHABian… It handles some of the initial configuration complexity, and I am lazy. :wink:

Understanding openHAB concepts helps relieve some of the issues. Most issues lately have been related to changes caused by the recent upgrades. I do not expect that to be an issue for a new user for a year at least.


Now that @Bruce_Osborne has covered the OS question well, I’ll add some thoughts.

This is generally correct, in that you’ve covered the vast majority of things people want to do with home automation. :wink: I don’t think I’d consider “service to work correctly” to be an OS topic, but I might be misinterpreting you.

Focusing on newer users (which includes me), I think the way I’d characterize requests for help is:

  • Getting OH running…and then keeping it running
  • Getting devices working with OH
  • Getting external services (e.g. Google Assistant, Alexa, DarkSky) working with OH
  • Implementing rules to automate devices
  • Controlling devices (e.g. UIs, voice assistants, apps)
  • Getting data from devices (e.g. charts, graphs)

Many new users jump in too quickly without fully understanding what they’re doing, so they get frustrated when things don’t “just work”. So, it’s great that you’re asking questions up front.

I’m not familiar with the SmartLife brand or the Feit bulbs, but from other posts it sounds like they’re both re-brands of Tuya devices. If so, they might not be the first things you want to work with, since there’s no binding for them. I’d suggest starting with a device that has an OH binding available, so that you can see how things typically work. I like to recommend TP-Link Kasa smart plugs, because they’re inexpensive, reliable, and work well in OH (but any device with an OH binding will do).

Seems a reasonable machine. OH only requires specs equivalent to an RPi 2, and this machine is well beyond that.

I don’t think the Atom processor would be a problem.

It’s one of the more popular distros. I run it because it’s easiest to find support for stuff. You can always find an article or the like telling you how to do or fix something in Ubuntu. But Ubuntu is based on Debian, so pretty much anything written about Ubuntu applies to any Debian-based distro. I don’t know about Clear Linux (just read about it for the first time today), but if it’s Debian based you should be fine. If not, you may need to do some translation. It’s worth noting a lot of OH users run OH on CentOS/Fedora as well. It kind of doesn’t matter, really.

As for Docker, it is most certainly not OS agnostic. I’m actually not at all certain that the OH Docker image will work on Windows, for example. But if you are running it on Linux, you can kind of think of it as OS agnostic, though I’m not sure that’s the best way to think about it. What it is, is like a great big chroot on steroids (if you know Linux, that sentence makes sense; if not, don’t worry about it). It provides a way for a bunch of separate environments to share the same kernel. Where you draw the line on where the OS starts and stops will probably dictate whether you treat a container as OS agnostic.

I think a better way to think about it is that the container comes with its own OS except for the kernel, which it shares with the host.

In what ways? I’ve been running Docker and supporting Docker troubleshooting requests, and have even contributed a little bit to the code, for years. What may be missing is that we don’t teach you to use Docker. That’s outside the scope, so you need to go “learn you some Docker” on your own.

It’s the manual install with “the directory you unzipped OH to” being /openhab2 (maybe without the 2, I don’t remember), so, for example, $OH_CONF would be /openhab2/conf/. But that only really matters if you are trying to use paths from inside the container (e.g. running executeCommandLine or the like), and if that’s a requirement, Docker probably isn’t the best choice anyway. Any time you interact with the files, you usually do so from wherever they exist on your host; I recommend using the default manual installation location (i.e. /opt/openhab2/conf and /opt/openhab2/userdata).
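To make that host-vs-container path split concrete, here is a small sketch. The host-side location is my choice and the in-container prefix follows the /openhab2 layout described above, so treat both as assumptions and check the image docs:

```shell
#!/bin/sh
# Sketch: how host paths line up with in-container paths for a Docker
# openHAB install. Host-side locations are an assumption; adjust to taste.
OH_HOST=/opt/openhab2   # where the files live on the host
OH_CONT=/openhab2       # where the container sees them (per the post above)

# Emit the -v flags you would hand to `docker run`:
for d in conf userdata addons; do
  echo "-v $OH_HOST/$d:$OH_CONT/$d"
done
```

The point is that anything referencing paths from inside OH (executeCommandLine and friends) must use the right-hand side, while you edit the files at the left-hand side on the host.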

Anyway, back to the main topic, @Chris_K, definitely spend some time with the docs and see How to get started (there is no step-by-step tutorial) for a bunch of other getting started resources.

If you want to use a bunch of commonly used third party tools like Mosquitto, InfluxDB, Grafana, etc., you might consider the manual installation of openHABian on a Debian-based distro (Ubuntu works). This will get you more than just OH. The Docker image is just OH.


With Home Assistant, for instance, it is one simple command. No making users & directories.

docker run --init -d --name="home-assistant" -e "TZ=America/New_York" -v /PATH_TO_YOUR_CONFIG:/config --net=host homeassistant/home-assistant:stable

No, the official instructions install in /opt/openhab outside the container. I just did that on a Raspberry Pi 3B+ for testing and it was unusably slow.

Go with the distro you are already familiar with, unless you want to learn using a new distro.

I would highly suggest running OpenHAB inside a docker. It is the cleanest, easiest way to do it. There are so many benefits to using docker, particularly the ease of upgrade / downgrade. I am not aware of any significant downside.

If you use Docker, you do not need to install a JRE. But if you don’t want to use Docker, you’ll need JRE 8, not the “latest” Java 11, 12, 13 or whatever.
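A quick way to sanity-check this is to inspect the `java -version` banner. A sketch as a small shell helper (the banner strings are the usual OpenJDK format, shown here as an assumption):

```shell
#!/bin/sh
# Sketch: test whether a `java -version` banner is Java 8. openHAB 2.x
# needs Java 8 for a non-Docker install; Java 11/12/13 will not work.
is_java8() {
  case "$1" in
    *'"1.8'*) return 0 ;;   # Java 8 version strings look like "1.8.0_232"
    *)        return 1 ;;
  esac
}

# On a real system you would feed it the actual banner:
#   is_java8 "$(java -version 2>&1 | head -n1)"
is_java8 'openjdk version "1.8.0_232"' && echo "Java 8: OK"
```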

I would be willing to bet that you can use Docker on Clear Linux.

My personal preference is to use docker-compose to make it even easier, so your docker configuration is stored in a docker-compose.yaml. I’ll be happy to share my docker-compose.yaml so you can use it as an example.

My preference would be to flash it with Tasmota and control it locally with openHAB (no cloud connection). And if you want to control it from your phone, you can look at myopenhab and the openHAB Android/iOS app. This way you will no longer have any link to the third party / Tuya cloud.

The majority of issues, I would guess, come from a lack of understanding. This can apply at any level, from beginner to advanced. Many people here are happy to help.

Just jump in and tweak/learn as you go.


That’s not a Docker thing. That would be an openHABian thing if openHABian used Docker. But for various well discussed reasons, openHABian decided not to go the Docker route.

The folder you unzip to is irrelevant with a manual install. Some people manually “install” to their home directories. Doesn’t make it any less of a manual install. Inside the container, OH is installed using the manual instructions to the /openhab2 directory.

There is no reason for OH to perform any differently with a manual install versus an apt or yum install. It’s the same software running the same way; the only difference is the location of some of the files, and some of the configuration is done for you. Similarly, OH runs just as fast when running with Docker, but it does require significantly more RAM, which is why openHABian doesn’t use it.

It is literally running as a process like any other. You can see the Java process in top just like when it runs outside the container. It’s running on “bare metal” just like when it’s directly installed. The only difference is that it’s isolated from the host, so it can’t share some of the host’s resources, hence the need for more RAM and the unavailability of stuff like Python.
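You can see this for yourself. A sketch (the docker lines stay as comments because they need a running Docker daemon; the alpine image and sleep command are arbitrary examples):

```shell
#!/bin/sh
# Sketch: a containerized process shares the host kernel and shows up as
# an ordinary host process. With Docker available you could verify:
#   docker run -d --name demo alpine sleep 300
#   ps -ef | grep 'sleep 300'    # visible from the host like any process
#   docker exec demo uname -r    # prints the *host's* kernel version
# The host side of that comparison:
uname -r
```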

More RAM on a machine with too little RAM to begin with, like an RPi, is a good reason. The OP should have plenty, really, though. Another good reason is if one wants to take advantage of openHABian and get some of the stuff it does with third party apps too.

openHABian is only for Raspberry Pi though?

For a Raspberry Pi, of course, Docker isn’t ideal unless you use a 4GB RPi 4 or something, and even then that’s for advanced users, since openHABian would be so much easier.

For a generic linux computer with 4GB+ RAM, docker is ideal.

I’m curious, what other stuff does it do?

That is what I am toying with. Seeing if it makes sense to automate and simplify Docker installs.

I did try the smaller Alpine Linux image. Perhaps that was an issue.

The image is, but you can install openHABian on any Linux, I believe. I know it works on Debian. That is what I use in a VM on my primary system.

It simplifies installing Frontail for log viewing and Samba for external file access. I think it has options for MQTT and a few other things I do not use.

It also handles Zulu Java installation and other dependencies well.

It does work on Windows or macOS, though there are some limitations, e.g. host networking does not work. Using Docker Desktop on Windows you can choose whether you want to run Linux or Windows based containers. You can use the same Docker commands on the command line as you’re used to on Linux. I tried it myself recently. :wink:

Everyone, thanks for your welcome and your input! And it seems I stumbled into the near-religious debate about Docker vs not Docker. From my reading, it wasn’t obvious there wasn’t a Docker solution. I say this to highlight to “y’all” that as the absolute new-to-openHAB visitor, there is opportunity for improved clarity.

Ah, the advice of running Debian/Ubuntu for the apt install makes sense. I might lean towards the Docker install more because I tend to neglect my Linux installs and don’t do much maintenance on the OS. The formality of Docker means openHAB will work even if I totally screw up the OS and need a rush install of the latest Linux.

@JimT thanks for your answer and advice - it’s info like “flash with Tasmota” that I need right now. My problem is how much I don’t know and that I don’t even know how much I don’t know.

@rlkoshak Clear is not Debian based. It is a systemd-based distro that is totally homegrown by Intel. It has astounding performance because it doesn’t try to be anything other than efficient. It also has a nearly empty /etc folder because they don’t scatter samples of default files all over the filesystem. I’m pretty jaded with technology, so the many things I see in Clear make me happy. But Clear isn’t totally easy like Ubuntu/Debian are. Still, I switched to it on my work server and have been able to get anything done that I’ve needed.

I’ll check back with an update. Might be a minute, I’m trying to sort out another computer then will look at the openHAB box. :slight_smile:



At the risk of telling you what you already know, here goes:

Regarding Tasmota: I’m sure you can find a lot of info on it. Tasmota is a custom firmware that can be flashed onto any ESP8266-based IoT device. The flashing can usually be done “over the air” using “tuya-convert” (if you Google it, you’ll end up at their GitHub page with further instructions). tuya-convert is a utility to flash custom firmware onto IoT devices originally made by the company called Tuya (who created the SmartLife ecosystem). tuya-convert is quite handy in that it bundles the Tasmota firmware with it, so you don’t have to hunt for the firmware separately on your own.

I would suggest running tuya-convert on a Raspberry Pi, but do whatever works for you.
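For orientation, the usual tuya-convert flow looks roughly like this. This is a sketch from the project’s README as I remember it; script names may change, so verify against the repo before running anything:

```shell
# Sketch only: run on a Linux machine with wifi (an RPi works well).
# Verify each step against the tuya-convert README before trusting it.
#   git clone https://github.com/ct-Open-Source/tuya-convert
#   cd tuya-convert
#   ./install_prereq.sh      # installs the dependencies
#   sudo ./start_flash.sh    # follow the prompts; put the device in pairing mode
```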

Once you’ve got Tasmota up and running on your power strip, you’ll need to connect it to your wifi, then to your MQTT broker. Oh yes, you need to set up an MQTT broker too. It is a separate component from openHAB. You might come across an “embedded broker” in openHAB (in the add-ons section). Don’t use it. Instead, install Mosquitto separately, either directly on your Clear Linux installation or inside a Docker container.

If you aren’t familiar with MQTT I’d suggest getting up to speed with it. It’s not a big deal really once you understand it.

So it will go like this
Openhab <----> MQTT Broker (mosquitto) <----> Your powerstrip running Tasmota

In other words, openHAB and your power strip (running Tasmota) communicate via MQTT. The broker is the “communication hub” for MQTT messages. Everyone connects to the broker, not directly to each other.
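Concretely, Tasmota’s default topic layout uses cmnd/stat/tele prefixes. A sketch of the two topics openHAB cares about for a relay (the device topic "powerstrip" is an assumption, it’s whatever you configure in Tasmota’s MQTT settings):

```shell
#!/bin/sh
# Sketch of Tasmota's default MQTT topics for relay 1 of a device whose
# Topic is set to "powerstrip" (the name is an assumption).
DEVICE=powerstrip
CMND="cmnd/$DEVICE/POWER1"   # openHAB publishes ON/OFF here
STAT="stat/$DEVICE/POWER1"   # Tasmota reports the resulting state here
echo "command topic: $CMND"
echo "state topic:   $STAT"

# With Mosquitto running locally you could poke it by hand:
#   mosquitto_sub -h localhost -t "$STAT" &
#   mosquitto_pub -h localhost -t "$CMND" -m ON
```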

Going from there, you’ll need to install the MQTT (version 2) binding in openHAB. Then you set up your Things and Items.
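A sketch of what that can look like in file-based config. The broker host, topic names, and item name here are all assumptions; the MQTT v2 binding docs cover the exact syntax:

```
// .things file — a sketch, not verified against your setup
Bridge mqtt:broker:myBroker [ host="192.168.1.10", secure=false ]
{
    Thing topic powerstrip "Power Strip" {
    Channels:
        Type switch : outlet1 "Outlet 1" [
            stateTopic="stat/powerstrip/POWER1",
            commandTopic="cmnd/powerstrip/POWER1",
            on="ON", off="OFF" ]
    }
}

// .items file
Switch PowerStrip_Outlet1 "Outlet 1" { channel="mqtt:topic:myBroker:powerstrip:outlet1" }
```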

I know it’s a lot to take in, and it will be quite a struggle to do all the above at first, but I promise you once you’ve done it once, adding more stuff is very quick and easy.

Then you’ll start your journey into the fun part of writing rules… then you’ll be constantly thinking, what else can I add…


You can follow the manual instructions to install openHABian; it works on just about any Debian-based distro. There are a few options you wouldn’t want to run, like zram and moving the root fs, but all the installation steps should work. Of course, YMMV.

Install and configure to work with OH:
- Mosquitto MQTT broker
- InfluxDB time series database
- Grafana charting
- MiFlora service for cheap BT sensors
- Frontail to look at logs through the browser
- FireMotD to see system status when you log in via SSH
- zram to move folders with lots of writes to RAM
- Amanda for incremental backups
- Reverse proxy using nginx to add authentication
- Let’s Encrypt in case you want to expose your OH to the internet through the reverse proxy instead of using

I’m sure there are some things I’m missing. With Docker all you get is OH.

Shouldn’t be. If you were out of RAM and had swap enabled, that could explain it. If you were mounting volumes that reside on a network share, that could make a difference too, since you are accessing the files over the network. But as long as the architecture is right (i.e. you are not trying to run an ARM image on an x86 machine), the performance of the running program should be the same.

The differences between the various images have to do with which libraries, operating system files, and third party programs are part of the image. For example, the Debian base image uses the Debian versions of the OS libraries and programs that are included. Alpine uses the Alpine versions of the libraries and such.

Ideally, you want the image to contain just enough to run the one program. In practice the images will have a pretty generic base with a full POSIX environment and bash and a bunch of other stuff as well.

But when this is all running, it is running directly on the CPU, not through a virtualization layer. So there isn’t anything to slow things down beyond the very quick check in the kernel to see if the container is allowed to perform the operation, which is a check I think it does even without containers.


Good to know. I figured it’d work on macOS, but I’ve encountered many Linux containers that don’t even start on Windows.

The exact same setup works with the default Ubuntu image, though. The Alpine image uses OpenJDK instead of Zulu.

That would explain it, but that’s not caused by Docker but is caused by OpenJDK.


Hi @Chris_K,

welcome to the rabbit-hole that is “Home Automation” :wink:

I would like to give my two cents about the setup I am running locally and the experience I have gathered over the last 1.5 years of running openHAB to control my home (and I went all in with this apartment, even removing all switches from the wall, only letting me turn the lights on via OH2).

My setup was initially on an RPi 3+ “bare metal” on Raspbian. I had a sub-acceptable experience due to the performance of the RPi (I have some very heavy rules for my light control, which executed very poorly from time to time). I switched to a Xeon-based home server, so I am not sure about your Atom computer, but if it is 20%-50% faster than the RPi 3+ you should have enough headroom.

When switching hardware I also went to an all-Docker setup for every application/service running on the server, for one main reason: the ability to move my server apps and back up ALL relevant data easily. As you probably know, apt-based installations hide the complexity and file paths of the installation, so catching every relevant file in a backup is hard. Docker lets you specify the relevant directories, map them to your host, and keep all necessary data there for a move/backup.

version: '2.2'
services:
  openhab:
    image: openhab/openhab:latest
    container_name: home_openhab
    restart: unless-stopped
    ports:
      - "9125:9125" # XML-RPC callback
      - "9126:9126" # BIN-RPC callback
    volumes:
      - "/etc/localtime:/etc/localtime:ro"
      - "/etc/timezone:/etc/timezone:ro"
      - "/opt/steilerGroup-Docker/home/volumes/openhab/conf:/openhab/conf"
      - "/opt/steilerGroup-Docker/home/volumes/openhab/userdata:/openhab/userdata"
      - "/opt/steilerGroup-Docker/home/volumes/openhab/addons:/openhab/addons"
    environment:
      OPENHAB_HTTP_PORT: "8080"
      OPENHAB_HTTPS_PORT: "8443"
      EXTRA_JAVA_OPTS: "-Duser.timezone=Europe/Berlin"

As you can see in my docker-compose.yml, all relevant OH data (/openhab/conf, /openhab/userdata, /openhab/addons) is transparently mounted into my directory of apps. One command (docker-compose up -d) and my instance is running.
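The backup story then boils down to archiving one directory. A runnable sketch on a throwaway stand-in directory (the real path would be the volumes directory from the compose file above, and stopping the container first is wise; the docker-compose lines stay as comments):

```shell
#!/bin/sh
# Sketch: with bind mounts, backing up openHAB is archiving one host dir.
# Demonstrated on a temp directory standing in for the real volume path.
set -e
BASE=$(mktemp -d)
mkdir -p "$BASE/openhab/conf" "$BASE/openhab/userdata" "$BASE/openhab/addons"
echo "demo" > "$BASE/openhab/conf/demo.items"

# For the real thing, stop first:  docker-compose stop openhab
tar czf "$BASE/openhab-backup.tar.gz" -C "$BASE" openhab
# ...and restart afterwards:       docker-compose start openhab

# List the archive contents to confirm everything was captured:
tar tzf "$BASE/openhab-backup.tar.gz" | sort
```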

You should definitely read up on docker and docker-compose before proceeding, but understanding the benefits will improve your personal setup drastically!

I personally would not agree with @Bruce_Osborne, since in my experience the Docker install has never had any specific problems compared to an apt install or even a manual install.

I hope that helped, otherwise please ignore :wink:


Well, on a side note to add to the confusion, openHABian actually is using (or going to use, PR in the pipe) Docker builds for its automated testing. But that’s not visible to end users and not on RPi, for obvious reasons. openHABian keeps targeting SBCs as these are cheap and energy efficient, thus ideal for a 24/7 home automation server.
Docker to me is a tool to automate and scale cloud deployments.
Then again, openHABian on Docker on Windows :thinking: … nah.

I don’t know if I would dismiss it entirely. Given that OH in Docker works on Windows, I wonder if a Docker Compose based deployment, with perhaps some other scripts that run at first start to do the config stuff, might be an elegant way to support a more comprehensive deployment on platforms that wouldn’t be targeted by openHABian, like Windows, macOS, non-apt Linux distros, the various BSDs, etc.

If anyone likes the idea, feel free to run with it. If not, I’ll pick it up when I have time and see how it goes.

Just my 2 cents on the topic.
I’ve been running OH in Docker for half a year now in a production environment, and I’ve had 0 outages/issues/whatever. It just works.
And it’s WAY better than any openHABian fully dedicated RPi will ever be, for the simple fact that you usually run Docker on more powerful machines that aren’t vulnerable to SD/USB issues. Let’s not count those attempts to run OH in Docker on an RPi, which do not make much sense.

From the original run instructions on Docker Hub, I’ve tweaked some parameters slightly to suit my preferences, but that’s about it.
All settings/add-ons and such live outside the container, so updating to a new version is painless and flawless, as is managing backups/editing files and so on.

Docker, for me, is the best solution to run and manage a production OH instance.