New Docker Upgrade Ability

Thanks to lots of help from @cniweb, @wborn, @martinvw, @Benjy, and @andrey-yantsen the OH Docker Images will now automatically perform an upgrade on your userdata folder for you. No longer do we have to use the manual process of letting the container build a new userdata and copying our configs back over. Upgrading is now as simple as it is for all other installation methods.

The way it works: the entrypoint compares userdata/etc/version.properties inside the image with the one in the userdata folder mapped into the container. If they differ, it assumes an upgrade needs to be performed.

First, it will create a full backup of your entire userdata folder, placing it as a dated tar file into userdata/backup.

Next, it copies the necessary files to userdata/etc, the same files that the apt-get/yum upgrade process and the upgrade script copy.

Finally, it clears the cache and tmp folders.

In short, except for doing the backup differently and not redownloading OH itself, it does the same steps as the upgrade script.
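The steps above can be sketched in shell. This is a rough illustration of the logic, not the actual entrypoint.sh: the paths, file layout, and file contents below are assumptions modeled on a typical openHAB container, and a temp dir stands in for the real volume so the sketch can run anywhere.

```shell
#!/bin/sh
# Simulated layout: "userdata" is the mapped volume, "dist" is the image's copy.
OPENHAB_HOME=$(mktemp -d)
mkdir -p "$OPENHAB_HOME/userdata/etc" "$OPENHAB_HOME/userdata/cache" \
         "$OPENHAB_HOME/userdata/tmp" "$OPENHAB_HOME/userdata/backup" \
         "$OPENHAB_HOME/dist/userdata/etc"
echo "openhab-distro: 2.2.0" > "$OPENHAB_HOME/userdata/etc/version.properties"
echo "openhab-distro: 2.3.0" > "$OPENHAB_HOME/dist/userdata/etc/version.properties"
touch "$OPENHAB_HOME/userdata/cache/stale" "$OPENHAB_HOME/userdata/tmp/stale"

# 1. A differing version.properties signals that an upgrade is needed
if ! cmp -s "$OPENHAB_HOME/userdata/etc/version.properties" \
            "$OPENHAB_HOME/dist/userdata/etc/version.properties"; then
    # 2. Full dated tar backup of userdata, placed into userdata/backup
    backup_file="userdata-$(date +%Y%m%dT%H%M%S).tar"
    tar -cf "$OPENHAB_HOME/userdata/backup/$backup_file" \
        --exclude="userdata/backup" -C "$OPENHAB_HOME" userdata
    # 3. Copy the image's new etc files over the mapped ones
    cp "$OPENHAB_HOME/dist/userdata/etc/version.properties" \
       "$OPENHAB_HOME/userdata/etc/"
    # 4. Clear cache and tmp
    rm -rf "$OPENHAB_HOME/userdata/cache"/* "$OPENHAB_HOME/userdata/tmp"/*
fi
```

After this runs, the backup tar exists, the mapped version.properties matches the image's, and cache/tmp are empty, which is exactly the state the real entrypoint leaves you in before OH starts.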

This means users can now take advantage of tools like Watchtower to keep their openHAB Image up to date. It will also aid those who like to keep up with the nightly builds.
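For example, a docker-compose setup pairing openHAB with Watchtower might look like the sketch below. The service names, volume paths, and the Watchtower image name are assumptions (verify the current image name before use); Watchtower watches the Docker socket and recreates containers when a newer image is published.

```yaml
# docker-compose.yml sketch; names and paths are examples, not prescriptions
version: '2'
services:
  openhab:
    image: "openhab/openhab:2.2.0-amd64-debian"
    volumes:
      - ./conf:/openhab/conf
      - ./userdata:/openhab/userdata
      - ./addons:/openhab/addons
  watchtower:
    image: "v2tec/watchtower"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: --interval 86400   # check for new images once a day
```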

It should work from any 2.0 version forward, and of course, if something goes wrong you have the full backup of your userdata folder.

It was just merged in so this should become available in the next build on dockerhub.

Happy upgrading!

NOTE: A PR with updated upgrade instructions has been submitted to openhab-docs.


Thank you too @rlkoshak! It was my no. 1 annoyance with the OH Docker container. :smile:

Mine too. It’s why I stayed on build #920 something until starting work on the PR. :slight_smile:

I tried to upgrade from 2.1.0 to 2.3.0 (openhab/openhab:2.3.0-snapshot-amd64-debian). It looks good, apart from logging. I found this

https://community.openhab.org/t/logs-not-working/33374/4

but I don't have an org.ops4j.pax.logging.cfg.dpkg-dist file.

I tried copying the content from there, but logging doesn't work. Any idea?

Do all the lines in your existing org.ops4j.pax.logging.cfg start with “log4j2”?
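A quick way to answer that question is to print every non-comment, non-blank line that does not start with "log4j2". The sample file and its contents below are invented for illustration; on a real install you would point this at userdata/etc/org.ops4j.pax.logging.cfg.

```shell
# Create a small stand-in config file mixing new (log4j2) and old (log4j 1.x) syntax
cfg=$(mktemp)
printf '%s\n' \
  '# Root logger' \
  'log4j2.rootLogger.level = WARN' \
  'log4j.rootLogger = WARN, out, osgi:*' > "$cfg"

# Anything this prints is pre-2.2 log4j 1.x syntax that needs migrating
leftovers=$(grep -Ev '^(log4j2|#|[[:space:]]*$)' "$cfg")
echo "$leftovers"
```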

Tail syslog or use journalctl to see if something is spit out to standard error or standard out during startup that tells us anything.

You can filter the logs by grepping for the container id.
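As an illustration of that filtering, the snippet below greps sample syslog-style lines for a container id. The id and log lines are invented; on a real host you would grep the output of journalctl or tail /var/log/syslog instead of a sample file.

```shell
# Simulate a few host log lines and filter them by (short) container id
log=$(mktemp)
cid=4f2a1b9c0d3e   # short container id, e.g. from `docker ps`
printf '%s\n' \
  "Dec 18 19:59:36 host dockerd[312]: container $cid terminal warning" \
  "Dec 18 19:59:37 host kernel: unrelated message" \
  "Dec 18 19:59:39 host dockerd[312]: container $cid activator error" > "$log"

matches=$(grep -c "$cid" "$log")
echo "$matches lines from container $cid"
```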

Thanks Master, finally I downloaded this version of the file and logging works nicely:

https://github.com/openhab/openhab-distro/blob/master/distributions/openhab/src/main/resources/userdata/etc/org.ops4j.pax.logging.cfg

Great stuff, this really makes using Docker for openHAB even more interesting. Thanks!!

Hi, this is great and worked well, apart from the logging config which I had to copy across.

Would it be possible to version the docker containers?
Having multiple things released with the same version number is generally not great.

Something like

openhab/openhab:2.2-1-amd64-debian
openhab/openhab:2.2-2-amd64-debian
openhab/openhab:2.2-3-amd64-debian

You can file a feature request on GitHub.

Though I will say that the images are already versioned. The only images that are not versioned are the SNAPSHOTs because, frankly, it is not tenable to create a new version label for every single nightly build, nor is it all that helpful. If you are running on the SNAPSHOT, you should be running on the latest SNAPSHOT, not some specific build from some specific day.

The only version of 2.2.0 you should be running is the release which is 2.2.0.

openhab/openhab:<version>-<architecture>-<distributions>

Version:

  • 1.8.3 Stable openHAB 1.8 version
  • 2.0.0 Stable openHAB 2.0 version
  • 2.1.0 Stable openHAB 2.1 version
  • 2.2.0 Stable openHAB 2.2 version
  • 2.3.0-snapshot Experimental openHAB 2.3 SNAPSHOT version

Architecture:

  • amd64 for most desktop computers (e.g. x64, x86-64, x86_64)
  • armhf for 32-bit ARMv7 devices (e.g. most RaspberryPi 1/2/3)
  • arm64 for 64-bit ARMv8 devices (not RaspberryPi 3)

Distributions:

  • debian for debian stretch
  • alpine for alpine 3.7

When/if there is ever a backport of some bug fixes to 2.2, there will be a new 2.2.1 version added.

hi :slight_smile:


I think what I’d like is an indication of the version of the Dockerfile and scripts, because if you pulled the docker image openhab:2.2 at different times you would get different things. This means that there is no way to have reproducible tests. If I ran some tests a couple of months ago with the older image, an upgrade from 2.1 wouldn’t work; if I run it now, it will work.

Maybe something like

openhab/openhab:&lt;version&gt;-&lt;architecture&gt;-&lt;distribution&gt;-&lt;build&gt;

openhab/openhab:2.2.0-amd64-debian-1
openhab/openhab:2.2.0-amd64-debian-2
openhab/openhab:2.2.0-amd64-debian-3

This would also make it easier to diagnose issues as you know exactly what image is running.

Honestly, the number of tags that are there already violates Docker’s best practices for versioning images. Adding yet another variable will only make that worse.

Personally, if this is a need for you, you should check out the repo as of a certain date and build the image yourself, even if yet another version tag were added to the images.

But as I said, you can open an issue or, even better, submit a PR.

what is it exactly that I should do with etc/version.properties?

I changed the online repo to: https://dl.bintray.com/openhab/mvn/online-repo/2.3
and changed 2.2.0 to 2.3.0 everywhere I saw it, saved it, and restarted the container, but version.properties reverts back to 2.2.

You need to provide more context. If you are using the built-in upgrade capability (which is part of entrypoint.sh), you shouldn’t be messing with version.properties or anything else.

If you are not using the upgrade capabilities built into entrypoint.sh (i.e. you created your own Dockerfile), I recommend looking at entrypoint.sh in the official GitHub repo for the steps you need to take to upgrade a container. IIRC, what I used to do is backup then delete the contents of userdata, run the new container, and entrypoint.sh would see that userdata was empty and repopulate it. Then OH would load addons.cfg and install all the needed bindings.

I don’t know what you are trying to do so it is hard to answer specifically.

Hi, I’ve got a 2.2 setup via the docker Debian image that’s been going fine for months now. Looking for an easy way to get to 2.3. Would it be easiest to kill the 2.2 container, keep the volumes and reattach those volumes to a new 2.3 container?

Obviously back up your volumes first. But yes, since the addition of the upgrade ability this entire thread is about, just mount the old volumes to the new container and the entrypoint.sh will see that the version in userdata doesn’t match the software in the container and update and delete everything in userdata necessary for the upgrade.

Essentially entrypoint.sh now does the same thing that apt-get does during an upgrade and that the upgrade script in the OH bin directory does for manual installs.

So the tl;dr is start a new container with the old volumes and everything will be taken care of for you.
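With docker-compose, for example, the whole procedure reduces to bumping the image tag and recreating the container. The service name and volume paths below are assumptions modeled on a common setup:

```yaml
services:
  openhab:
    # was openhab/openhab:2.2.0-amd64-debian; bump the tag, then run
    # `docker-compose up -d` to recreate the container with the old volumes
    image: "openhab/openhab:2.3.0-amd64-debian"
    volumes:
      - ./conf:/openhab/conf
      - ./userdata:/openhab/userdata
      - ./addons:/openhab/addons
```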


Thanks, that was much easier than expected.

Does this still work? I tried it on a 2.3 basic install, then told the container to run version 2.4, but I see it cycling.

2018-12-18 19:59:36.076 [WARN ] [org.jline                           ] - Unable to create a system terminal, creating a dumb terminal (enable debug logging for more information)
2018-12-18 19:59:39.419 [WARN ] [raf.features.internal.osgi.Activator] - Error starting activator
java.lang.IllegalStateException: BundleContext is no longer valid
        at org.eclipse.osgi.internal.framework.BundleContextImpl.checkValid(BundleContextImpl.java:989) ~[?:?]
        at org.eclipse.osgi.internal.framework.BundleContextImpl.registerService(BundleContextImpl.java:468) ~[?:?]
        at org.eclipse.osgi.internal.framework.BundleContextImpl.registerService(BundleContextImpl.java:487) ~[?:?]
        at org.eclipse.osgi.internal.framework.BundleContextImpl.registerService(BundleContextImpl.java:1004) ~[?:?]
        at org.apache.karaf.util.tracker.BaseActivator.register(BaseActivator.java:388) ~[11:org.apache.karaf.features.core:4.2.1]
        at org.apache.karaf.util.tracker.BaseActivator.register(BaseActivator.java:376) ~[11:org.apache.karaf.features.core:4.2.1]
        at org.apache.karaf.features.internal.osgi.Activator.doStart(Activator.java:180) ~[11:org.apache.karaf.features.core:4.2.1]
        at org.apache.karaf.util.tracker.BaseActivator.run(BaseActivator.java:275) [11:org.apache.karaf.features.core:4.2.1]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:?]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:?]
        at java.lang.Thread.run(Thread.java:748) [?:?]

I just upgraded to OH 2.4 release yesterday and it worked without a hitch.

I’ve never seen that particular exception before. My first thing to try would be Clear the Cache.

Yeah, I did clear the cache and it made no difference. What ended up fixing it for me was using docker-compose to push it back to 2.3, then updating the version number in Docker to 2.4, and it upgraded.

The first time I tried doing it in Portainer. No idea what the difference was.