[SOLVED] Openhab Docker or not?

Here is the Repo:


Hey man, that’s very nice of you, thanks.

Going to try this out very soon.

For those who are interested, here is the weblog I was talking about for streaming an IP camera to Chromecast, and the specific Docker image.

Please report back if you run into problems.


I used the host networking mode; the network mode needs to be host, otherwise the Docker container won’t be able to scan the local network for devices such as, for example, a Philips Hue.
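For illustration, here is a minimal docker-compose sketch with host networking; the image tag is only an example, so adjust it to the version you actually run:

```yaml
# Host networking gives the container direct access to the LAN,
# which openHAB's discovery (multicast/UPnP, e.g. a Hue bridge) relies on.
version: "2.4"
services:
  openhab:
    image: "openhab/openhab:2.5.0"   # example tag only
    network_mode: host               # port mappings are not used in host mode
    restart: always
```

Note that with `network_mode: host` no `ports:` section is needed, since the container shares the host’s network stack directly.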

I use Docker on an Ubuntu server I built for openHAB and several other services. I decided to run openHAB inside Docker alongside some other lightweight services like Mosquitto. I run Node-RED and the Shinobi CCTV software outside of Docker, however (Node-RED for access to the command line, and Shinobi for easier access to the GPU for hardware acceleration).

I’m by no means an expert. I struggled at first with some of the Docker concepts, but it came to me eventually. I think it’s good to keep track of the docker run commands you use, for easy editing later. I also recommend Portainer or something similar to keep track of your containers.
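For reference, Portainer itself runs as a container; this is just a sketch using the commonly shown defaults (image name, port, and data volume), so check the current Portainer docs before relying on it:

```yaml
# Portainer sketch: the web UI lists and manages the containers on
# this host by talking to the Docker daemon through its socket.
version: "2.4"
services:
  portainer:
    image: portainer/portainer-ce
    ports:
      - "9000:9000"                              # web UI
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # access to the Docker daemon
      - portainer_data:/data                       # Portainer's own settings
    restart: always
volumes:
  portainer_data:
```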


Hard to answer, because it depends on your preferences, knowledge and hardware.
The main advantage is being able to roll out, roll forward and roll back OH and other server instances and OH versions quickly.
This helps minimize software dependencies if you want to colocate OH with other services on a shared server, but it comes at a price. Some system-near stuff such as the Exec binding or GPU access won’t work, and it consumes quite some resources, so I wouldn’t want to run it on an RPi2/3, which is otherwise fine as a (dedicated) OH server.
The standard recommendation is an RPi with openHABian. I, too, would separate OH and your RTSP-whatever converter and run them on dedicated hardware.


It’s not so much that it doesn’t support the Exec binding, but that the Exec binding isn’t all that useful in a container.

The whole purpose of a container is to provide everything that a single program needs to run in one package. In some ways, you can look at a container as a wholly separate running operating system.

But the container also includes only what the program running inside it needs. There are no extras available, like Python; some containers don’t even have bash. So most of the things you will want to access from the Exec binding won’t be there. And even if something like Python were there, it would be limited in what it can do outside of the container.

Personally, I run everything in containers. It makes installation, upgrade, and backup pretty simple. I don’t have to worry about incompatibilities with libraries or ports or the like.

Or outsource that stuff to some other app that you can command more indirectly, such as through MQTT.

With one exception (Calibre, which doesn’t offer an official Docker container), I always use the “official” images from the software provider, unchanged.

You might look into Ansible. I have all of this (minus Node-RED) running in containers installed and managed by Ansible playbooks.

If you have the option to install this software outside of a container, you will probably get better performance. Converting media streams tends to be RAM- and CPU-intensive, and containers do increase the amount of RAM needed to run a set of software.

On a Pi1 that could make a real difference.


Absolutely. Especially because you can integrate your config files there as well and put everything into a git repository for versioning.

What I assume is that Docker could get more complicated with USB-related stuff. At least, some years ago I tried that on my Synology (running Docker and using USB for Z-Wave) and failed.

I don’t find it to be all that complicated, but perhaps my approach inadvertently avoided problems. I always make sure there is a user on the host equivalent to the user the program inside my containers runs as (i.e. my host has an openhab user with the same uid as the openhab user inside the container), and I add the host’s openhab user to the proper groups for read/write access to the devices. Then I just need to pass the device into the container with the right arguments. In Ansible that’s supplying values for “devices”.

- name: Update openHAB docker
  docker_container:
    detach: True
    devices:
      - "/dev/ttyUSB0:/dev/ttyUSB0:rwm"
      - "/dev/ttyUSB1:/dev/ttyUSB1:rwm"
    env:
      EXTRA_JAVA_OPTS: "-Xbootclasspath/a:/openhab/userdata/jython/jython.jar -Dpython.home=/openhab/userdata/jython -Dpython.path=/openhab/userdata/lib/python"
      CRYPTO_POLICY: unlimited
    hostname: argus.koshak.net
    image: "{{ openhab_version }}"
    log_driver: syslog
    name: openhab
    network_mode: host
    pull: True
    recreate: True
    restart: True
    restart_policy: always
    tty: yes
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - "{{ openhab_data }}/conf:/openhab/conf"
      - "{{ openhab_data }}/userdata:/openhab/userdata"
      - "{{ openhab_data }}/addons:/openhab/addons"

Nice. Thanks for the hint!

You can spin up a HABApp docker container alongside and try out easy and sane automation :wink:

Have you tried this binding? It can provide the HLS stream, and I have my cameras streaming to my Chromecast-enabled Sony TV. Sorry if I have missed something special about your use case, as I have not used Docker before; as far as I know you can run openHAB in Docker and then use this binding to do what you wish. The binding does need access to ffmpeg…

Hi All,

I am reading up on the documentation for openHAB in Docker, as I would like to set it up when I get my hands on an RPi4. Just wondering, is there any way to set up everything via docker-compose and config files?

I read that you can have bindings installed by specifying them in the config file, and I have all my items/sitemap/rules in text files as well.

However, one thing I am uncertain about is Things. For my current setup (non-Docker), I add Things using Paper UI, then use the channel ID in my Items. Is there a way to automate adding Things, and then link them to Items?


Of course. Just mount a conf folder over /openhab/conf inside the container, and the OH in the container will use those config files. You also need to mount a userdata folder over /openhab/userdata to preserve anything you may do through PaperUI, the embedded persistence (mapdb, rrd4j), and any binding-specific persisted data.

This is why you want to mount a userdata folder into the container as well. All of this gets stored in the JSONDB, which lives in files under $OH_USERDATA/jsondb.

If you mount both conf and userdata, then you can use openHAB however you want, be it through PaperUI or through text configs.

I run this way and have done so for years.

The automation for adding Things is automatic discovery, which you do through the Inbox. You can define Things using .things files, but then there is no automatic discovery; you will have to research each binding and figure out how to define its Things properly in a .things file.

There is no way to automatically link them to an Item, because how would OH know which Item to link to? You can put the link on the Item in your .items file; is that what you are asking?

Anyway, the tl;dr is: just mount volumes for both userdata and conf, and use OH however you are used to.
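That tl;dr can be sketched as a compose file; the host-side paths and the image tag here are illustrative, not prescriptive:

```yaml
# Sketch: mount conf and userdata (plus addons) so both text configs
# and anything done through PaperUI survive container recreation.
version: "2.4"
services:
  openhab:
    image: "openhab/openhab:2.5.0"   # example tag only
    network_mode: host               # needed for automatic discovery
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./openhab/conf:/openhab/conf          # .items, .things, .rules, ...
      - ./openhab/userdata:/openhab/userdata  # JSONDB, persistence, cache
      - ./openhab/addons:/openhab/addons      # manually installed add-ons
    restart: always
```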

I will also try to use OH in Docker on an RPi 4. I am a noob in Docker, but there is a lot of information and I think I will manage it. Recommendations are welcome. :wink:

Are there any recommendations for backing up the SSD after installation?

I’d refrain from doing so. There’s no general benefit in using Docker that would justify this.
And you’re using almost everything new here: Docker is new to you, and openHABian on RPi4/buster is not yet implemented, so it’s untested. Seems like you are asking for trouble.

I’m currently working on getting openHABian to run inside Docker and there’s quite a number of issues.

Sure, Amanda is part of openHABian and is the recommended tool for all backup purposes.

Thank you for your answers. I thought it would be a good way to prepare my system for the future: run OH2.4 in a Docker container, later OH3, have a container for milestone builds, …

I just started informing myself about Docker, and the first bits of info looked good to me.
After reading your post I think I have to decide whether I will do it or not. It looks like a big workload, especially with no experience on my side.

Puh, hard to decide.:thinking:

If you plan on running other services on the RPi, there are indeed some benefits. And with the higher-RAM options on the RPi 4, running the services in Docker is a perfectly reasonable way to run them in a controlled and isolated manner.

I wouldn’t do so on anything with less than 2 GB of RAM, because containers do increase the RAM requirements slightly.

Now this prompts a question: why? That really isn’t how containers are supposed to work. I could see changing it to install the services (OH, Mosquitto, frontail, etc.) as containers, but I can’t imagine how running openHABian in a container itself makes sense. As a VM, yes, but not as a container.

If you are not planning on using this machine for any other purpose than to run the stuff that openHABian installs, the benefits of running in a container are minimal. If you will be running other stuff, the isolation between processes offered by containers is useful.

Ok, this wasn’t meant to be a general statement.
But the benefit-cost ratio isn’t great especially if you’re a Docker newbie like the OP. And you can’t run openHABian then.

Now to contradict myself in the next statement …

It’s for the purpose of automated testing of the openHABian code. Clearly not targeted at end users.

:+1: that’s a good use for a container, though I can imagine it’s very difficult; a typical container doesn’t even have an init daemon like systemd running. For testing purposes, have you considered using Vagrant and a VM? It might be easier overall.


Why? Just start with a Debian container.