Move installation from VM to Docker


My current openHAB setup looks like this:
An Ubuntu server running VirtualBox.
One VM runs openHAB, MySQL and InfluxDB for persistence, Grafana for visualisation, and Mosquitto.

I would like to get rid of VirtualBox and migrate all components to separate Docker containers that run directly on the server.

So I have two questions:
1.) Are there any disadvantages to running openHAB in a Docker container that are not mentioned in the GitHub documentation?

2.) What is the easiest way to migrate all configuration data?
(I would expect that migrating the MySQL and InfluxDB data is no problem, right?)

Thanks for the help!

I don’t remember what is listed on GitHub, but here are my experiences:

  • it used to be that you couldn’t use some of the advanced features of the Network binding, such as dhcplisten, but I think that has been fixed and there are now instructions for how to do it
  • the Exec binding is basically useless
  • I’ve had some problems using the HTTP actions, but I never researched the cause and it could have been unrelated to running in Docker
  • updates do not work very cleanly; you essentially either need to delete your userdata/cache and userdata/tmp folders every time you update the container, or you have to go into the Karaf console and upgrade each add-on individually.
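As a sketch of the first option, a small helper can clear those two folders before starting the updated container. The path is a placeholder, not from the post; point it at whatever host directory you mount as userdata.

```shell
#!/bin/sh
# Sketch: clear openHAB's cache and tmp folders before starting an
# updated container. Argument: the host-side userdata directory.
clear_openhab_cache() {
  userdata="$1"
  rm -rf "$userdata/cache" "$userdata/tmp"
  # openHAB recreates the contents on startup; recreating the empty
  # folders here keeps bind mounts happy.
  mkdir -p "$userdata/cache" "$userdata/tmp"
}
```

Run it while the container is stopped, then start the new image as usual.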

For OH 2, I would first move to using addons.cfg to define your installed add-ons rather than installing them through PaperUI. Then it is a simple matter of mounting your conf folder and a userdata folder into the container. There is one additional folder you need to mount if you are using the Nest binding.
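A minimal sketch of that mounting, assuming the official openhab/openhab image; the host paths and the image tag are examples, not values from this thread:

```shell
# Sketch: run openHAB with conf and userdata bind-mounted from the host,
# so the container is disposable and the configuration is not.
# /opt/openhab/* and the :2.4.0 tag are placeholders — adjust to taste.
docker run -d --name openhab \
  --net=host \
  -v /opt/openhab/conf:/openhab/conf \
  -v /opt/openhab/userdata:/openhab/userdata \
  -v /opt/openhab/addons:/openhab/addons \
  openhab/openhab:2.4.0
```

With addons.cfg living under the mounted conf folder, replacing the container does not lose the add-on list.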

I’ve no experience with MySQL or InfluxDB in Docker, but I can say you might have some issues with user IDs. The problem is, for example, that the MySQL user inside your Docker container may not match the user ID of the MySQL user on your host (if you even have one). So managing file ownership and permissions can be a challenge.

Some containers have a nice way to deal with this by mounting /etc/passwd into the container which will cause the container to use the uids of your host rather than its internally configured uids, but that doesn’t always work.

The first naive approach would be to just mount your existing /var/lib/mysql and /var/lib/influxdb folders into the respective containers, after either making the respective user and group IDs match those on your host or changing the ownership of those folders to match the user and group IDs in the container.
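A sketch of the second variant, changing ownership of the host folders to match the container. The image tags and paths are examples; querying the container for its own uid/gid avoids hard-coding a guess:

```shell
# Sketch: align host folder ownership with the uid/gid the mysql user
# has *inside* the container. Tags and paths are placeholders.
MYSQL_UID=$(docker run --rm mysql:5.7 id -u mysql)
MYSQL_GID=$(docker run --rm mysql:5.7 id -g mysql)
sudo chown -R "$MYSQL_UID:$MYSQL_GID" /var/lib/mysql

# Then mount the existing data directory into the container:
docker run -d --name mysql \
  -v /var/lib/mysql:/var/lib/mysql \
  mysql:5.7
```

The same pattern applies to the InfluxDB container and /var/lib/influxdb.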


Sorry for my absence, and thanks for the answer.

I also think that this was fixed.

Why is it useless?
For example, I want to use a script to control the Broadlink controller.

What’s the problem with deleting the mentioned folders?

Is it possible to export data from the old installation and import it into the new installation?
(Just like moving from one DB server to another.)

Perhaps I will first try to move the data to another DB instance, without touching the openHAB installation.
And I will create a backup of my VM…

Any other tips before I crash my installation…? :wink:


Because the container, by definition, has no access to the software and hardware on your host unless you give it access, and the container comes configured with the bare minimum necessary to run OH, nothing more.

Almost everything interesting you may want to do with the Exec binding will require exposing more hardware resources to the container (e.g. USB dongles) and/or installing new software into the Docker image (e.g. you would need to install Python). This will require creating, building, and maintaining your own custom Docker Image.
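For illustration, such a custom image might be built like this. The base tag, the Debian-based variant, and the package choice are all assumptions, not something from this thread:

```shell
# Sketch: extend the official image (Debian-based variant assumed) with
# Python so Exec-binding scripts have an interpreter. Tag is an example.
cat > Dockerfile <<'EOF'
FROM openhab/openhab:2.4.0
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 && \
    rm -rf /var/lib/apt/lists/*
EOF
docker build -t my-openhab .
```

You then deploy `my-openhab` instead of the stock image — and rebuild it yourself after every upstream release, which is exactly the maintenance burden described above.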

By the time you figure out and implement everything needed to customize the Docker image, most of the benefits (simplicity of deployment, ease of upgrade, ease of migration) are gone, and you would spend less time and effort just installing via apt-get.

Then you will need to create a custom Docker image that includes the scripting language you want to use, add the Broadlink controller device to the container, or make sure the networking is configured so it can reach the controller. I don’t know how these controllers work. It may be as simple as using --net=host and writing bash shell scripts. Or it could be a nightmare.

You lose the configurations of your bindings. The big one for me is Z-Wave. When you delete the userdata/cache and userdata/tmp folders you will need to recreate the Z-Wave serial device. And if you are not careful to give the serial device the same name, all of your Thing and Channel IDs will change, so you will need to update all of your Items. All of your Z-Wave devices will then need to be reimported from the Inbox. You will need to reinstall any binding that was installed through PaperUI (which is why I use addons.cfg). It can be a lot of work. It is much easier to upgrade the bindings in place through Karaf and not lose your configs and bindings.

You mount a volume into a container to keep the data. So moving the database from one container to the next is as simple as mounting this same volume into the new container.
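A sketch of that hand-over with a named volume; container names, tag and password are placeholders:

```shell
# Sketch: a named volume outlives the container, so "moving" the database
# means attaching the same volume to a new container.
docker volume create mysql-data
docker run -d --name mysql-old \
  -v mysql-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=secret mysql:5.7

# Later: retire the old container, reuse the volume in the new one.
docker stop mysql-old && docker rm mysql-old
docker run -d --name mysql-new \
  -v mysql-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
```

The data directory never moves; only the container around it changes.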

There is no magic here. You will have to manually export and import the data if you want to change from MySQL to InfluxDB. They are fundamentally different programs and store data in completely different ways.

And your question doesn’t really have anything to do with the point of my quote above, which is that your file ownership and groups will look weird and be hard to deal with, because the user and group IDs in the container will not match those on the host.


After many days of working on this topic it’s time for a short report:

I moved many base services from the VM into separate Docker containers, so that now only openHAB itself is running in the VM.

I started Docker instances for MySQL and InfluxDB, and after stopping the openHAB instance I could export the old databases and import them into the Docker versions.

So I only had to change the database configuration in openHAB and start up the service.
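The export/import step might look roughly like this. Host names, database name and credentials are placeholders, not the poster’s actual values:

```shell
# Sketch: dump the database from the old VM, load it into the new
# MySQL container (named "mysql" here). All names are examples.
mysqldump -h old-vm -u openhab -p openhab > openhab.sql
docker exec -i mysql mysql -u openhab -psecret openhab < openhab.sql

# InfluxDB 1.5+ has an analogous pair of tools:
#   influxd backup -portable /backup   and   influxd restore -portable /backup
```

After the import, pointing openHAB’s persistence configuration at the new host/port is the only change needed.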

The same for Grafana and the MQTT broker.

Instead of using the Exec binding, I use MQTT to communicate with a script that controls the Broadlink RM3 Mini.
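The poster doesn’t show the script, but a minimal MQTT-to-script bridge could be sketched like this. The topic and the `broadlink_cli` invocation are hypothetical; substitute whatever tool actually drives your RM3:

```shell
# Sketch: subscribe to a command topic and hand each payload to a
# Broadlink CLI tool. Topic, host and the CLI call are placeholders.
mosquitto_sub -h localhost -t 'home/broadlink/send' | \
while IFS= read -r code; do
  broadlink_cli --send "$code"   # hypothetical CLI call
done
```

openHAB then only needs the MQTT binding to publish to that topic, keeping all host-side software out of the openHAB container.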

So far everything is fine…

The next step will be to test an openHAB installation in a Docker container.
But before I can do this, I think I have to move the item configurations from the PaperUI definitions to the .cfg files.

I will give a new report if I am successful…


I don’t think this is true. If you mount your existing userdata and conf folders into the Docker container, all your PaperUI configurations will be preserved.

You may want to go through the upgrade process first, which entails deleting a certain set of files from userdata. See the Windows Installation/Upgrade instructions for the full list of files to delete.

@nasi_be - your config sounds nearly identical to mine - even your bindings (Broadlink, Exec), add-ons and persistence (InfluxDB, Grafana) and host platform (VirtualBox) are identical.

That said, I’m toying with trying Docker (for no real reason other than to learn something new)

How did you get on? Did you keep using Docker and eventually manage to get everything shifted across? And in retrospect, would you still go with Docker if you were to rebuild?

I have a lot of automation wrapped up in my current solution… I would be very unhappy to make a mess of it.