Fresh Installation does not start up


I am becoming desperate:

  • I wanted to do a fresh installation on my Raspberry Pi and downloaded the latest version of openHABian.
  • Beforehand I made a backup of my userdata and config data.
  • After installing openHABian I directly restored my backup files (stopping the openHAB service first).
  • When starting openHAB for the first time I see the following errors in the logs:

2017-07-31 23:48:41.937 [WARN ] [url.mvn.internal.AetherBasedResolver] - Error resolving artifact org.openhab.core:org.openhab.ui.paperui:jar:2.1.0: [Could not find artifact org.openhab.core:org.openhab.ui.paperui:jar:2.1.0 in openhab (]

It is obvious that the 2.1 jars are not in the 2.0 repo, but why is it searching in the 2.0 repo?

Is something in the backup files pointing to 2.0, which I need to remove?

Thx in advance!

Best Regards,


First, look in the OH runtime folder for an upgrade script (I don’t remember exactly what it is called). I don’t know when it was released, so it may not be available in the version you are using. If there is one, stop OH and run that script.

If there isn’t, try to stop OH and remove all the files listed here from /var/lib/openhab2. Then restart OH.

You’d need to replace those files with fresh versions. Simply deleting them won’t work.

Did you install using apt-get? The above won’t work in that case. Instead you can force a reinstall (it will keep your settings) with:

sudo apt-get install --reinstall openhab2

Hi Benjy,

that has worked so far for the Basic UI! In Paper UI, however, everything is empty and I get 404 errors.

Thank you very much!

All in all, I am not sure I have understood the configuration concepts in OH2. In OH1 everything was more explicit, without so much magic around the Paper UI. It feels strange that restoring configurations does not work as smoothly as in OH1.

Or did I miss good tutorials / best practices around these topics?

Best Regards,


So if deleting those files is not sufficient, how should those of us using Docker properly upgrade?

The upgrade procedure for any installation is to completely remove and replace the runtime folder with the runtime folder in the new version. The select files you mentioned above are the only files that need to be replaced in the userdata folder. The runtime won’t add them back if they’re not found on restart.
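To make that concrete, here is a rough sketch of the procedure using throwaway demo paths (`OH_DIR`, `NEW_DIST`, and the sample file names are placeholders, not real openHAB locations; substitute your actual install paths and the real list of version-specific files):

```shell
#!/bin/sh
# Sketch of the manual upgrade: replace runtime wholesale, replace only the
# version-specific userdata files, then clear cache and tmp.
set -e
OH_DIR=/tmp/oh-demo       # stands in for the openHAB install root
NEW_DIST=/tmp/oh-newdist  # stands in for the unpacked new distribution

# --- demo scaffolding so the sketch runs standalone ---
rm -rf "$OH_DIR" "$NEW_DIST"
mkdir -p "$OH_DIR/runtime" "$OH_DIR/userdata/etc" \
         "$OH_DIR/userdata/cache" "$OH_DIR/userdata/tmp"
mkdir -p "$NEW_DIST/runtime" "$NEW_DIST/userdata/etc"
echo old   > "$OH_DIR/runtime/version"
echo stale > "$OH_DIR/userdata/cache/bundle.info"
echo new   > "$NEW_DIST/runtime/version"
echo new   > "$NEW_DIST/userdata/etc/profile.cfg"

# 1. Stop openHAB first (e.g. sudo systemctl stop openhab2).
# 2. Completely replace the runtime folder with the new version's.
rm -rf "$OH_DIR/runtime"
cp -a "$NEW_DIST/runtime" "$OH_DIR/runtime"
# 3. Replace only the version-specific files under userdata/etc.
cp -a "$NEW_DIST/userdata/etc/profile.cfg" "$OH_DIR/userdata/etc/"
# 4. Clear cache and tmp so stale bundles are not reused.
rm -rf "$OH_DIR/userdata/cache/"* "$OH_DIR/userdata/tmp/"*
# 5. Start openHAB again.
```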

I believe the goal in the future is to minimise the files in the userdata folder.

@tempo32, what version did you backup from and what are you on now?

Thx for helping!

The backup I did was on OH version 2.0:


What I have freshly installed now is OH 2.1. I did not update/upgrade to 2.1; I completely set up my Raspberry Pi from scratch with 2.1 and restored my backup files afterwards.


Alright, that shouldn’t be a problem now that you’ve reinstalled, but let’s make sure openHAB has the correct permissions. I would:

sudo systemctl stop openhab2
sudo rm -rf /var/lib/openhab2/cache/*
sudo rm -rf /var/lib/openhab2/tmp/*

sudo chown -R openhab:openhab /var/lib/openhab2
sudo chown -R openhab:openhab /etc/openhab2
sudo systemctl start openhab2

Then have a look at the logs again. Any difference?

These procedures are impossible, or at least impractical with Docker.

So for Docker, we are still stuck deleting the entire contents of the userdata folder so the container with the new version can recreate it, and then recreating the automatically discovered Things. :frowning: I’ve not tried it in a while, but just copying back the JSON database folders does not fully restore the Things. For example, the last time I tried I still had to create a new Serial Thing for the Z-Wave binding, and then all my Z-Wave devices were rediscovered and the old Things were redundant and had to be deleted.

The problem is that userdata is completely separate (i.e. a folder on the host) from the environment where OH itself runs (i.e. in the container) and userdata gets mounted over the “default” userdata. So there is essentially no viable upgrade path for Docker except to start over from scratch on each upgrade. If the userdata folder is mounted into the container, the new versions of the userdata files do not exist in the container. If the userdata folder is not mounted into the container then the new versions of the userdata files exist but there is no way to copy them out of the container and these files do not get persisted when the container needs to be recreated (e.g. to upgrade to a new version).

Luckily, if one uses text-based configs for the majority of the settings, the amount of rework is relatively minimal. But it still feels like a broken upgrade process, and I suppose I assumed/hoped that the new upgrade script and such addressed this problem. That explains why cniweb hasn’t written upgrade procedures for Docker yet.

Hi Benjy,

yes, it has worked out!

THX a lot!

But still it feels that the restore behavior is a bit bumpy.



@tempo32, glad you’re up and running! The plan is to include backup and restore scripts inside the distribution. See this github issue for more information about it:

@rlkoshak I’m afraid I have no knowledge of Docker. I’ve had a quick read about it and from what I gather, Docker containers are immutable, but you can specify Docker volumes for configuration files? I hope that soon it will be possible to have everything related to updates in the runtime folder of the distribution (see @Kai’s comment here). That way the Docker container would contain the runtime folder (and perhaps a Java instance?) and it would specify Docker volumes for the conf and userdata folders without you ever needing to worry about overwrites in them.

Currently, the only way I see it working (and again I may have completely misunderstood the Docker concepts) is to have docker leave userdata alone, and have a post docker update script that changes those suspect 12-15 files within the userdata/etc folder and then deletes the tmp and cache directories.

With regards to restoring the userdata folder for zwave, bindings can have their own folders inside userdata. zwave .xml information is stored inside userdata/zwave.

Not quite correct. It might be better to say a docker container is ephemeral.

With Docker one has an Image. The Image is what one downloads from docker hub or builds and the Image contains the full runtime environment including the operating system and the installed and configured program(s) that run in that container. If you are more familiar with VMs, a Docker Image is like a VM template.

When one calls docker run a new Container is created which represents the running program and the program’s environment. It is basically an instantiation of the Image and is like the running VM in the simile with VMs. One can create and run more than one Container from one Image.

The programs running in the Container can write and make changes as much as they want to the Container. You can even bring up a shell and make changes in the Container. But the problem is if you upgrade the Image (e.g. upgrade the OS, upgrade the main program) you have to create a new Container from the Image. Consequently, all the changes that were made to the Container get lost.

Obviously, this is not acceptable in most cases so Docker provides a way to map a file or folder from your host file system into the running Container. This lets you replace the defaults of certain files in the Container with customized ones and gives the Container a place to write out data that needs to be persisted even when a new Container is created.

For OH that means at least conf and userdata (it can also include .java if you are using the Nest binding). So that means that I have access to all the files in conf and userdata on my host machine and these folders “hide” the defaults of these files that exist in the Container.
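For anyone unfamiliar with the mechanics, mounting those folders looks roughly like this (the image tag and host paths here are illustrative assumptions, not the official documented layout):

```shell
# Hypothetical example: bind-mount conf and userdata from the host so the
# host copies "hide" the defaults baked into the image.
docker run --name openhab \
  -v /opt/openhab/conf:/openhab/conf \
  -v /opt/openhab/userdata:/openhab/userdata \
  openhab/openhab:2.1.0
```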

So, because the Image comes already fully installed and configured with the updates an upgrade for a dockerized OH would be roughly equivalent to creating a brand new SD card for your Pi, installing the new OH, and then plopping your existing conf and userdata folders over to this brand new environment.

The point I’m trying to get across is that the upgrade of the runtime has already been done before the Image even gets downloaded to my machine. So there really is no way to delete some files and copy them over the ones that exist because:

  • I don’t have easy access to those updated files to copy over into userdata/etc
  • I don’t have a way to tell the entry script that launches OH that this is an upgrade rather than just a creation of a new Container of the same version so it would know to copy over the updated config files in userdata/etc.

Right now, the entrypoint script checks to see if userdata is empty. If it is empty it will populate userdata with all the default files for the version of OH running in the Container. Thus, the only way I have to get at the updated files in /etc (and elsewhere) is to mount an empty userdata volume into the Container so the start script can recreate it from scratch.

Then I can try to restore my config, but thus far I’ve been unsuccessful in doing that. In particular, when I restore the JSONDB folder, my Things don’t come back; the Z-Wave serial Thing that represents the controller needs to be recreated, and that leads to a recreation of all the other Z-Wave Things to get back to a working environment. I only have a handful of Z-Wave devices and literally everything else I do is through text-based configs, so this isn’t a really big deal, but if I had dozens or more Z-Wave devices the process would be broken for me.

These problems with Docker will also be problems for QNAP and Synology based installs I suspect.

Unfortunately, that won’t work because the update of the OH runtime occurs before I even download the Image. So in essence, the “upgrade” is really a brand new install in a virgin environment. Then, when I create and run a Container from that Image, I “dirty” that pristine environment with the files on my host that I mount into the Container.

So there needs to be a way, from a script, to determine whether the userdata that it sees was created by the version of OH within the container or by a different one. If it was a different one, it needs to copy over the current versions of the appropriate files and delete cache and tmp. But I’m not sure how I can determine that from the shell script that kicks off OH within the container. I’m sure it might be possible.
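One way this could work, sketched under the assumption that the entrypoint persists a version marker file in userdata (the marker name, `OH_VERSION`, and the demo paths are all made up, not anything the real script does today):

```shell
#!/bin/sh
# Detect an upgrade by comparing a marker file in userdata against the
# version baked into the image, then do the version-specific cleanup.
set -e
USERDATA=/tmp/demo-userdata
OH_VERSION="2.1.0"                             # version shipped in the image

# --- demo scaffolding: simulate userdata left over from an older version ---
rm -rf "$USERDATA"
mkdir -p "$USERDATA/cache" "$USERDATA/tmp"
echo "2.0.0" > "$USERDATA/installed.version"
echo stale   > "$USERDATA/cache/bundle.info"

if [ "$(cat "$USERDATA/installed.version" 2>/dev/null)" != "$OH_VERSION" ]; then
  echo "Upgrade detected."
  # ...copy the version-specific files from userdata.dist/etc here...
  rm -rf "$USERDATA/cache/"* "$USERDATA/tmp/"*  # clear cache and tmp
  echo "$OH_VERSION" > "$USERDATA/installed.version"
fi
```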

But as it is right now, I have to provide an empty userdata folder so the script knows to recreate it. And there is something wrong with the “backup and restore” procedures that prevents me from restoring everything I configured through PaperUI, like my Things, during this process.

That’s a lot to get my head round, but I think I understand, at least I hope I’m on the right track.

It looks as though this line is responsible for replacing userdata if it is not found.

What it should be doing is that, AND then, even if the folder exists, copy over the problem files in userdata/etc.

That part of the function would therefore look something like this:

      # Initialize empty host volumes
      if [ -z "$(ls -A "${APPDIR}/userdata")" ]; then
        # Copy userdata dir for version 2.0.0
        echo "No userdata found... initializing."
        cp -av "${APPDIR}/userdata.dist/." "${APPDIR}/userdata/"
      else
        # userdata was found, but version specific files need replacing...
        cp -avu "${APPDIR}/userdata.dist/etc/all.policy"                "${APPDIR}/userdata/etc/"
        cp -avu "${APPDIR}/userdata.dist/etc/"        "${APPDIR}/userdata/etc/"
        cp -avu "${APPDIR}/userdata.dist/etc/"         "${APPDIR}/userdata/etc/"
        cp -avu "${APPDIR}/userdata.dist/etc/"         "${APPDIR}/userdata/etc/"
        cp -avu "${APPDIR}/userdata.dist/etc/"            "${APPDIR}/userdata/etc/"
        cp -avu "${APPDIR}/userdata.dist/etc/org.apache.karaf.*"        "${APPDIR}/userdata/etc/"
        cp -avu "${APPDIR}/userdata.dist/etc/org.ops4j.pax.url.mvn.cfg" "${APPDIR}/userdata/etc/"
        cp -avu "${APPDIR}/userdata.dist/etc/profile.cfg"               "${APPDIR}/userdata/etc/"
        cp -avu "${APPDIR}/userdata.dist/etc/"       "${APPDIR}/userdata/etc/"
        cp -avu "${APPDIR}/userdata.dist/etc/"   "${APPDIR}/userdata/etc/"
        cp -avu "${APPDIR}/userdata.dist/etc/"         "${APPDIR}/userdata/etc/"
        cp -avu "${APPDIR}/userdata.dist/etc/"        "${APPDIR}/userdata/etc/"
        cp -avu "${APPDIR}/userdata.dist/etc/"  "${APPDIR}/userdata/etc/"
        cp -avu "${APPDIR}/userdata.dist/etc/"         "${APPDIR}/userdata/etc/"
      fi
      if [ -z "$(ls -A "${APPDIR}/conf")" ]; then
        # Copy conf dir for version 2.0.0
        echo "No configuration found... initializing."
        cp -av "${APPDIR}/conf.dist/." "${APPDIR}/conf/"
      fi

That is how I read the script too.

Well, we have to be a little careful here. Would there be any problems caused by replacing these files every time OH starts? That is essentially what that change would do. I can’t think of anything offhand, but I am not fully knowledgeable about all the files in question.

But that does seem promising.

We still need to come up with a strategy for dealing with cache and tmp. I’m not sure we want to delete both every time OH restarts, though that might be fine to do as well. It would make OH take longer to come back up as it redoes all the stuff in cache and tmp, but I personally can live with that. It would, though, add some risk of something going wrong during a restart.

@cniweb, do you have any thoughts on this?

Probably not, but the -u flag for cp makes it only update a file if the source is newer, so it really shouldn’t be a problem. I have edited the above.

I always forget about -u. So long as the clocks are not way off between the host and the container that should work like a charm.
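For anyone following along, a quick demonstration of what -u does (the paths and file names here are throwaway examples):

```shell
#!/bin/sh
# cp -u only overwrites the destination when the source is newer (or the
# destination is missing); otherwise the copy is skipped.
set -e
rm -rf /tmp/cpu-src /tmp/cpu-dst
mkdir -p /tmp/cpu-src /tmp/cpu-dst

echo "new" > /tmp/cpu-src/a.cfg
echo "old" > /tmp/cpu-dst/a.cfg
touch -d '2000-01-01' /tmp/cpu-dst/a.cfg   # destination is older
cp -u /tmp/cpu-src/a.cfg /tmp/cpu-dst/     # source newer -> copied

echo "stale"   > /tmp/cpu-src/b.cfg
touch -d '2000-01-01' /tmp/cpu-src/b.cfg   # source is older
echo "current" > /tmp/cpu-dst/b.cfg
cp -u /tmp/cpu-src/b.cfg /tmp/cpu-dst/     # source older -> skipped
```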

Please make a pull request, thank you!

Done. I actually had some time to work on it today.