My plan for today was to crack the mystery of successfully upgrading OH from 2.2 to 2.3.0-SNAPSHOT.
I run OH in a docker container, so my first try was to simply recreate the container with the newest 2.3 snapshot image without changing any of my userdata.
And wouldn’t you know… it just worked.
Or so I thought.
Weirdly enough, when I log into the Karaf console, it shows an old build number. So I created a new container without linking in my userdata volumes (i.e. a completely clean new installation). That one did show the updated build number in the Karaf console.
So, I thought, I actually have to follow the manual update procedure described in the Docker readme. Who could have known?
So I went back to the container that now runs the newest image but still has the old userdata, and deleted the /userdata/etc/ and /userdata/cache/ folders.
After doing this, openHAB no longer starts successfully inside the container.
So I created another 2.3.0-SNAPSHOT Docker container and restored a backup of my userdata, which I linked into the volume. Again it shows the old version in the Karaf console, as is to be expected. My next attempt was to recreate the upgrade procedure described in this post: Docker update script / docker
But instead of creating a new container for the upgrade, I simply wanted to execute the upgrade in the current container, then throw that container away and use the newly updated userdata for a new one.
So I used the /openhab/runtime/bin/update script and updated my installation.
The result is once again that openHAB does not start up.
The console output is this:
2017-12-28 14:26:18.450 [WARN ] [raf.features.internal.osgi.Activator] - Error starting activator
java.io.IOException: Unexpected end of input at 1:1
at org.apache.karaf.features.internal.util.JsonReader.error(JsonReader.java:337) [9:org.apache.karaf.features.core:4.1.3]
at org.apache.karaf.features.internal.util.JsonReader.expected(JsonReader.java:331) [9:org.apache.karaf.features.core:4.1.3]
at org.apache.karaf.features.internal.util.JsonReader.readValue(JsonReader.java:93) [9:org.apache.karaf.features.core:4.1.3]
at org.apache.karaf.features.internal.util.JsonReader.parse(JsonReader.java:58) [9:org.apache.karaf.features.core:4.1.3]
at org.apache.karaf.features.internal.util.JsonReader.read(JsonReader.java:52) [9:org.apache.karaf.features.core:4.1.3]
at org.apache.karaf.features.internal.region.DigraphHelper.readDigraph(DigraphHelper.java:90) [9:org.apache.karaf.features.core:4.1.3]
at org.apache.karaf.features.internal.region.DigraphHelper.loadDigraph(DigraphHelper.java:70) [9:org.apache.karaf.features.core:4.1.3]
at org.apache.karaf.features.internal.osgi.Activator.doStart(Activator.java:131) [9:org.apache.karaf.features.core:4.1.3]
at org.apache.karaf.util.tracker.BaseActivator.run(BaseActivator.java:242) [9:org.apache.karaf.features.core:4.1.3]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:?]
at java.lang.Thread.run(Thread.java:748) [?:?]
Any help on how to actually update OH successfully would be appreciated. It seems to me that this issue is not exactly Docker specific.
Bad move. There are a lot of changes that get made to userdata during a normal upgrade that the Docker container is incapable of doing at this time.
That is because the build number comes from userdata/etc/version.properties; since you kept the same old userdata folder around, that is one of the files that didn't get updated.
Unfortunately, that won't work either. The way the container works is that on first start the entrypoint.sh script checks whether userdata is empty, and it copies over the default userdata files only if the folder is completely empty. Since you only deleted userdata/etc and userdata/cache, you basically borked your install, as the script won't copy over the new etc files.
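To illustrate, here is a minimal sketch of that "seed only when empty" check. This is not the actual entrypoint.sh; the function and variable names are made up for illustration:

```shell
# Sketch only: mimics the "copy defaults only when userdata is completely
# empty" behaviour described above. Not the real entrypoint.sh.
initialize_userdata() {
  userdata="$1"   # the mounted userdata volume
  dist="$2"       # the image's pristine default userdata
  if [ -z "$(ls -A "$userdata" 2>/dev/null)" ]; then
    cp -a "$dist/." "$userdata/"   # completely empty: seed the defaults
    echo "seeded"
  else
    echo "skipped"                 # anything present at all: no copy
  fi
}
```

This is exactly why deleting only userdata/etc and userdata/cache breaks the install: the folder is non-empty, so the check takes the "skipped" branch and no fresh etc files ever get copied in.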
Actually, the issue is very Docker specific. I’ve an idea for improvements to the entrypoint.sh but cniweb seems hesitant to pursue it.
Anyway, my recommended procedure until something better gets implemented is:
1. stop the container
2. make a backup of userdata
3. delete the contents of userdata; the folder must be completely empty
4. run the updated image
5. wait for userdata to be repopulated and for OH to finish coming up
6. stop the container
7. copy userdata/jsondb back from the backup and re-edit any files that you modified in userdata/etc (don't just blindly copy them over, as some of those files change formats and contents between versions)
8. restart the container
9. if you are not using addons.cfg to install bindings, reinstall your add-ons through PaperUI
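The backup, wipe, and restore-jsondb steps above can be sketched as shell helpers. The paths and the container name "openhab" are assumptions; the Docker start/stop steps are left as comments so you can adapt them to your setup:

```shell
# Sketch of the backup / wipe / restore-jsondb steps described above.
# Paths and the container name "openhab" are assumptions, not a fixed layout.

backup_userdata() {            # make a backup of userdata before touching anything
  cp -a "$1" "$1.backup"
}

wipe_userdata() {              # delete the contents; the folder must be empty
  rm -rf "$1"/* "$1"/.[!.]* 2>/dev/null || true
}

restore_jsondb() {             # copy jsondb back from the backup
  cp -a "$2/jsondb" "$1/"
}

# Usage sketch:
#   docker stop openhab
#   backup_userdata /opt/openhab/userdata
#   wipe_userdata /opt/openhab/userdata
#   docker start openhab            # entrypoint repopulates userdata
#   (wait for OH to finish coming up, then)
#   docker stop openhab
#   restore_jsondb /opt/openhab/userdata /opt/openhab/userdata.backup
#   docker start openhab
```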
You should be back up and running at this point. I’ve not done this procedure in a long time so I can’t be certain. You might need to recreate and/or deal with Things in your Inbox, though I think all that should be sorted out based on your jsondb restoration.
And for completeness, in case you or someone else has the time and skills: my and Benjy's proposal for making this work a little better for Docker is to modify entrypoint.sh to check whether the version in version.properties differs between userdata and userdata.dist, and if so, perform all the usual upgrade steps that a typical apt-get/yum upgrade would do (copy over the new userdata/etc, delete the contents of cache and tmp, etc.).
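A minimal sketch of that proposed check, assuming both trees carry an etc/version.properties file. Comparing the files byte-for-byte stands in for a real version comparison here; a real implementation might parse the version key instead:

```shell
# Sketch only: "upgrade needed" when version.properties differs between
# the live userdata and the image's userdata.dist. The byte comparison
# is a simplifying assumption.
needs_upgrade() {
  cur="$1/etc/version.properties"
  new="$2/etc/version.properties"
  if [ ! -f "$cur" ]; then
    return 0               # no existing install: treat like a fresh seed
  fi
  ! cmp -s "$cur" "$new"   # files differ => versions differ => upgrade
}
```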
@rlkoshak Thank you very much for your detailed reply.
I had a hunch that I had already nuked my install by using the old data with a new image. The treacherous part is that this seemed to work fine at first, so I removed the old backup and created a new one from the userdata that was by then already broken. So I had to start over with an empty userdata folder. It always takes a while to add everything back in through PaperUI.
I didn’t know about the jsondb step, that will help in the future, so thanks for that.
It seems like updating your Docker container is far from convenient at the moment.
Just to verify: the method I described, where I run the update script inside the original container and then create a new container with the newest snapshot image and the userdata from the previous one, should work, right?
Modifying the entrypoint.sh seems like a proper way to solve this.
Currently I am using the software 'Watchtower' to automatically upgrade containers when their image changes on Docker Hub. Sadly, it seems this is not possible with openHAB at the moment.
Another question in this regard: since my inability to reuse the userdata folder has forced me to re-add basically everything by hand whenever I wanted to update openHAB, I have already added all necessary bindings and such to the addons.cfg file. One thing that would be great, though, is exporting the Things I add via PaperUI to *.things files. Is that possible somehow, without having to define them manually?
Also, another question: is it possible to keep the secret and UUID for openHAB Cloud in the conf directory somehow, so that the copying around of files in userdata does not always have to be done?
It depends on what changed between the version you are running and the new version. For example, if I were to run my existing userdata on the new 2.2 release, my logging would be broken because of the switch to log4j2. Sometimes you can get away with it and sometimes you can't.
Also, I’m not certain that the bindings will be updated to the newer versions if you just use your old userdata folder without modification.
I see no reason why that wouldn’t work. I’ve not tried it though.
I’m going to have to look into this. I currently use Ansible scripts but am moving away from that for updates and upgrades right now. This tool might be useful.
That is the whole point of backing up and restoring the jsondb folders. PaperUI-defined Things and Items get stored there.
No, you will have to back up and restore those files, just as you should do with jsondb, or update the values on myopenhab.org after each upgrade.
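For reference, a sketch of carrying those files across an upgrade. The paths userdata/uuid and userdata/openhabcloud/secret are assumptions based on a typical OH2 layout; verify them on your own install before relying on this:

```shell
# Sketch only: save and restore the openHAB Cloud identity files around
# an upgrade. The paths (userdata/uuid, userdata/openhabcloud/secret)
# are assumptions; check your installation first.
save_cloud_identity() {
  mkdir -p "$2"
  cp "$1/uuid" "$2/uuid"
  cp "$1/openhabcloud/secret" "$2/secret"
}

restore_cloud_identity() {
  mkdir -p "$1/openhabcloud"
  cp "$2/uuid" "$1/uuid"
  cp "$2/secret" "$1/openhabcloud/secret"
}
```

Run save_cloud_identity before wiping userdata and restore_cloud_identity after the new image has repopulated it, so myopenhab.org keeps recognising the instance.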