[SOLVED] openHAB in Docker or not?

It’s not so much that it doesn’t support the Exec binding, but that the Exec binding isn’t all that useful from inside a container.

The whole purpose of a container is to provide everything that a single program needs to run in one package. In some ways, you can look at a container as a wholly separate, minimal operating system.

But a container also includes only what the program running inside it needs. There are no extras available, like Python; some containers don’t even have bash. So most of the things you will want to access from the Exec binding won’t be there. And even if something like Python were there, it would be limited in what it can do outside of the container.

Personally, I run everything in containers. It makes installation, upgrade, and backup pretty simple. I don’t have to worry about incompatibilities with libraries or ports or the like.

Or outsource that stuff to some other app you can command more indirectly such as through MQTT.
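As a hedged sketch of that idea, an MQTT broker can run in its own container next to openHAB (eclipse-mosquitto is the official broker image; the tag, port mapping, and host paths here are assumptions):

```yaml
# Broker container: scripts on the host (or anywhere else) publish MQTT
# messages, and openHAB reacts via the MQTT binding instead of the Exec binding.
services:
  mosquitto:
    image: "eclipse-mosquitto:1.6"   # tag is an assumption, pick your own
    restart: always
    ports:
      - "1883:1883"                  # default unencrypted MQTT port
    volumes:
      - ./mosquitto/config:/mosquitto/config   # assumed host path
      - ./mosquitto/data:/mosquitto/data
```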

With one exception (Calibre, which doesn’t offer an official Docker container), I always use the “official” images from the software provider unchanged.

You might look into Ansible. I have all of this (minus NodeRed) running in containers installed and managed by Ansible playbooks.

If you have the option to install this software outside a container, you will probably get better performance. Converting media streams tends to be RAM and CPU intensive, and containers do increase the amount of RAM needed to run a given set of software.

On a Pi1 that could make a real difference.


Absolutely. Especially because you can integrate your config files there as well and put everything into a git repository for versioning.

What I assume is that Docker can get more complicated with USB-related stuff. At least, some years ago I tried that on my Synology (running Docker and using USB for Z-Wave) and I failed.

I don’t find it to be all that complicated, but perhaps my approach inadvertently avoided problems. I always make sure there is a user on the host equivalent to the user the program inside the container runs as (i.e. my host has an openhab user with the same uid as the openhab user inside the container), and I add the host’s openhab user to the proper groups for read/write access to the devices. Then I just need to pass the device into the container with the right arguments. In Ansible, that’s supplying values for “devices”.

```yaml
- name: Update openHAB docker
  docker_container:
    detach: True
    devices:
      - "/dev/ttyUSB0:/dev/ttyUSB0:rwm"
      - "/dev/ttyUSB1:/dev/ttyUSB1:rwm"
    env:
      EXTRA_JAVA_OPTS: "-Xbootclasspath/a:/openhab/userdata/jython/jython.jar -Dpython.home=/openhab/userdata/jython -Dpython.path=/openhab/userdata/lib/python"
      CRYPTO_POLICY: unlimited
    hostname: argus.koshak.net
    image: "{{ openhab_version }}"
    log_driver: syslog
    name: openhab
    network_mode: host
    pull: True
    recreate: True
    restart: True
    restart_policy: always
    tty: yes
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - "{{ openhab_data }}/conf:/openhab/conf"
      - "{{ openhab_data }}/userdata:/openhab/userdata"
      - "{{ openhab_data }}/addons:/openhab/addons"
```
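For those not using Ansible, the same device pass-through idea can be sketched in docker-compose form (the image tag, device path, and uid/gid values here are assumptions for illustration):

```yaml
# Hypothetical docker-compose fragment showing the same approach:
# pass the serial stick through and run as a uid that also exists on the host.
services:
  openhab:
    image: "openhab/openhab:2.5.0"   # pick your own tag
    network_mode: host
    devices:
      - "/dev/ttyUSB0:/dev/ttyUSB0:rwm"
    environment:
      USER_ID: "999"    # assumed uid of the host's openhab user
      GROUP_ID: "994"   # assumed gid; must match a group with device access
```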

Nice. Thanks for the hint!

You can spin up a HABApp docker container alongside and try out easy and sane automation :wink:

Have you tried this binding? It can provide the HLS stream, and I have my cameras streaming to my Chromecast-enabled Sony TV. Sorry if I have missed something special about your use case, as I have not used Docker myself, but as far as I know you can run openHAB in Docker and then use this binding to do what you wish. The binding does need access to ffmpeg…

Hi All,

I am reading up on the documentation for openHAB in Docker, as I would like to set it up when I get my hands on a RPi4. Just wondering if there is any way to set everything up via docker-compose and config files?

I read that you can have bindings installed by specifying in the config file, and I have all the items/sitemap/rules in text files as well.

However, one thing I am uncertain about is Things. For my current setup (non-Docker), I add Things using Paper UI, then use the channel ID in my Items. Is there a way to automate adding Things? And then link them to Items?


Of course. Just mount a conf folder over /openhab/conf inside the container and the OH in the container will use those config files. You also need to mount a userdata folder over /openhab/userdata to preserve anything you may do through PaperUI, embedded persistence (MapDB, rrd4j), and any binding-specific persisted data.

This is why you want to mount a userdata folder into the container as well. All of this gets stored in the JSONDB, which lives in files under $OH_USERDATA/jsondb.

If you mount both conf and userdata, then you can use openHAB however you want, be it through PaperUI or through text configs.

I run this way and have done so for years.

The automation for adding Things is automatic discovery, which you do through the Inbox. You can define Things using .things files, but then there will be no automatic discovery; you will have to research each binding and figure out how to define its Things properly in a .things file.

There is no way to automatically link them to an Item, because how would OH know which Item to link to? You can put the link on the Item in your .items file; is that what you are asking?

Anyway, the tl;dr is: just mount volumes for both userdata and conf and use OH however you are used to.
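To the docker-compose part of the question, a minimal sketch (the image tag and host paths are assumptions; adjust them to your own layout):

```yaml
version: "2.4"
services:
  openhab:
    image: "openhab/openhab:2.5.0"   # tag is an assumption
    restart: always
    network_mode: host               # simplest option for discovery (UPnP, mDNS)
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./openhab/conf:/openhab/conf           # your .items/.things/.rules files
      - ./openhab/userdata:/openhab/userdata   # JSONDB, persistence, cache
      - ./openhab/addons:/openhab/addons
```

With something like that in place, `docker-compose up -d` brings openHAB up, and the text configs in the mounted conf folder are picked up as usual.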

I will also try to use OH in Docker on a RPi 4. I am a noob in Docker, but there is a lot of information and I think I will manage it. Recommendations are welcome. :wink:

Are there any recommendations for backing up the SSD after installation?

I’d refrain from doing so. There’s no general benefit in using Docker that would justify this.
And you’re using almost everything which is new (Docker is new to you) and untested (openHABian on RPi4/buster is not yet implemented). Seems like you are asking for trouble.

I’m currently working on getting openHABian to run inside Docker and there’s quite a number of issues.

Sure, Amanda is part of openHABian and recommended to use for all backup purposes.

Thank you for your answers. I thought it would be a good way to prepare my system for the future. Run OH2.4 in a Docker, Later OH3, have a Docker for Milestone Builds, …

I just started informing myself about Docker, and the first bits of info looked good to me.
After reading your post I think I have to decide whether to do it or not. It looks like a big workload, especially with no experience on my side.

Puh, hard to decide.:thinking:

If you plan on running other services on the RPi there are indeed some benefits. And with the more ram options on the RPi 4, running the services in Docker is a perfectly reasonable way to run these services in a controlled and isolated manner.

I wouldn’t do so on anything with less than 2 GB of RAM, because containers do increase the RAM requirements slightly.

Now this prompts a question: why? That really isn’t how containers are supposed to work. I could see changing it to install the services (OH, Mosquitto, frontail, etc.) as containers, but I can’t imagine how running openHABian in a container itself makes sense. As a VM, yes, but not as a container.

If you are not planning on using this machine for any purpose other than running the stuff that openHABian installs, the benefits of running in a container are minimal. If you will be running other stuff, the isolation between processes offered by containers is useful.

Ok, this wasn’t meant to be a general statement.
But the benefit-cost ratio isn’t great especially if you’re a Docker newbie like the OP. And you can’t run openHABian then.

Now to contradict myself in the next statement …

It’s for the purpose of automated testing of the openHABian code. Clearly not targeted at end users.

:+1: that’s a good use for a container. Though I can imagine it’s very difficult; a typical container doesn’t even have an initialization daemon like systemd running. For testing purposes, have you considered using Vagrant and a VM? It might be easier overall.


Why? Just start with a Debian container.

Indeed, and systemd needs to run as PID 1 else systemctl spits errors. But I think I’m getting there.

Container != Virtual Machine. That Debian image has just enough of the OS to run a single process in the foreground. That is how containers are meant to be used: one process per container.

It’s actually a lot of work, and not at all easy, to get a container to run more than one process, because many of the supporting system daemons like systemd are not available.


Thank you for your suggestions. As the RPi 4 runs hotter than I would like, I spun up a VM to dockerize openHAB and other stuff.

It is taking longer than I expected, mainly because I would like to decouple openHAB and Python, so I need to rework all my Exec binding usage.

You could look into LXC or LXD containers. Those give you more of a ‘VM-like’ experience and still give you the quick startup times of a container, because they share the kernel of the host. You can create snapshots as well, so you can easily revert back.

Many moons ago I was using Xen on Ubuntu Server 6.06 LTS (because it supported PCI forwarding) and later KVM to run VMs. I then learned about and switched to LXC containers because they are much lighter than VMs. Then Docker came along.
I plan to reinstall my server next week (Ubuntu Server 14.04 LTS is EOL since April) and intend to use a number of LXD containers and run Docker, Kubernetes and OKD (OpenShift) in them. Not sure yet if that is going to work. If not, I can always spin up a KVM VM as a last resort.

Anyway, this is all a bit off-topic of course.

It would be nice to read about your setup and the experience you made, once you are done.
A setup based on LXD/LXC, Docker, Kubernetes and OKD sounds interesting.

If you got the time maybe create a post about this setup :slight_smile: