New Docker Upgrade Ability

You need to provide more context. If you are using the built-in upgrade capability (which is part of entrypoint.sh), you shouldn’t be messing with version.properties or anything else.

If you are not using the upgrade capabilities built into entrypoint.sh (i.e. you created your own Dockerfile), I recommend looking at entrypoint.sh in the official GitHub repo for the steps you need to take to upgrade a container. IIRC, what I used to do was back up and then delete the contents of userdata, then run the new container; entrypoint.sh would see that userdata was empty and repopulate it. Then OH would load addons.cfg and install all the needed bindings.

I don’t know what you are trying to do so it is hard to answer specifically.

Hi, I’ve got a 2.2 setup via the Docker Debian image that’s been running fine for months now. I’m looking for an easy way to get to 2.3. Would it be easiest to kill the 2.2 container, keep the volumes, and reattach those volumes to a new 2.3 container?

Obviously back up your volumes first. But yes, since the addition of the upgrade ability this entire thread is about, just mount the old volumes into the new container and entrypoint.sh will see that the version in userdata doesn’t match the software in the container, then update or delete whatever in userdata needs to change for the upgrade.

Essentially entrypoint.sh now does the same thing that apt-get does during an upgrade and that the upgrade script in the OH bin directory does for manual installs.
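
Roughly, the check works like the sketch below. This is a simplified reconstruction, not the actual entrypoint.sh (see the official GitHub repo for that); the version.properties locations and the grep pattern are assumptions on my part.

# sketch: compare the version recorded in userdata with the version shipped in the image
OLD_VERSION=$(grep openhab-distro /openhab/userdata/etc/version.properties)   # assumed location/format
NEW_VERSION=$(grep openhab-distro /openhab/dist/version.properties)           # hypothetical copy baked into the image

if [ "$OLD_VERSION" != "$NEW_VERSION" ]; then
  # back up everything in userdata except earlier backups
  mkdir -p /openhab/userdata/backup
  tar --exclude=/openhab/userdata/backup -c -f \
    "/openhab/userdata/backup/userdata-$(date +%Y-%m-%dT%H-%M-%S).tar" /openhab/userdata
  # clear cache/tmp and refresh the runtime files that belong to the new version
  rm -rf /openhab/userdata/cache/* /openhab/userdata/tmp/*
fi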

So the tl;dr is start a new container with the old volumes and everything will be taken care of for you.
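
For example, with a plain Docker setup the upgrade boils down to something like this (container name, host paths, and image tag below are just placeholders for whatever your setup uses):

# stop and remove the old 2.2 container; the bind-mounted volumes stay on disk
docker stop openhab
docker rm openhab

# start a new container from the 2.3 image with the same mounts;
# entrypoint.sh notices the version mismatch and runs the upgrade on first start
docker run -d --name openhab \
  -v /opt/openhab/conf:/openhab/conf \
  -v /opt/openhab/userdata:/openhab/userdata \
  -v /opt/openhab/addons:/openhab/addons \
  --net=host \
  openhab/openhab:2.3.0-amd64-debian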


Thanks, that worked and was much easier than expected.

Does this still work? I tried on a 2.3 basic install, then told the container to run version 2.4, but I see it cycling.

2018-12-18 19:59:36.076 [WARN ] [org.jline                           ] - Unable to create a system terminal, creating a dumb terminal (enable debug logging for more information)
2018-12-18 19:59:39.419 [WARN ] [raf.features.internal.osgi.Activator] - Error starting activator
java.lang.IllegalStateException: BundleContext is no longer valid
        at org.eclipse.osgi.internal.framework.BundleContextImpl.checkValid(BundleContextImpl.java:989) ~[?:?]
        at org.eclipse.osgi.internal.framework.BundleContextImpl.registerService(BundleContextImpl.java:468) ~[?:?]
        at org.eclipse.osgi.internal.framework.BundleContextImpl.registerService(BundleContextImpl.java:487) ~[?:?]
        at org.eclipse.osgi.internal.framework.BundleContextImpl.registerService(BundleContextImpl.java:1004) ~[?:?]
        at org.apache.karaf.util.tracker.BaseActivator.register(BaseActivator.java:388) ~[11:org.apache.karaf.features.core:4.2.1]
        at org.apache.karaf.util.tracker.BaseActivator.register(BaseActivator.java:376) ~[11:org.apache.karaf.features.core:4.2.1]
        at org.apache.karaf.features.internal.osgi.Activator.doStart(Activator.java:180) ~[11:org.apache.karaf.features.core:4.2.1]
        at org.apache.karaf.util.tracker.BaseActivator.run(BaseActivator.java:275) [11:org.apache.karaf.features.core:4.2.1]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:?]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:?]
        at java.lang.Thread.run(Thread.java:748) [?:?]

I just upgraded to OH 2.4 release yesterday and it worked without a hitch.

I’ve never seen that particular exception before. The first thing I would try is clearing the cache.
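
For a Docker install, clearing the cache just means stopping the container and wiping the cache and tmp folders inside the mounted userdata, roughly like this (host path and container name are placeholders):

docker stop openhab
# cached/unpacked bundles are rebuilt on the next start
rm -rf /opt/openhab/userdata/cache/* /opt/openhab/userdata/tmp/*
docker start openhab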

Yeah, I did clear the cache and it made no difference. What ended up fixing it for me was using docker-compose to push it back to 2.3, then updating the version number to 2.4, and it upgraded.

The first time I tried doing it in Portainer. No idea what the difference was.

I use Docker commands directly to upgrade (well, I use Ansible to bring up the containers) so maybe there is something going on with that difference.

There have been some recent changes to automatically detect and build the correct architecture so we don’t have to specify that in the image tag anymore. Maybe something there went wrong.

What command do you use to start the container @psyciknz?

It was an edit of an existing 2.3 container, redeployed in Portainer… so some of the environment variables etc. would have been retained…

Whereas with the docker-compose file, it only specifies the volumes, user IDs and image name… most of the other values must be set by the Dockerfile.
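
For reference, the compose file is basically just along these lines (host paths, IDs and tag here are example values, not my exact file):

version: "2"
services:
  openhab:
    image: "openhab/openhab:2.4.0-amd64-debian"
    restart: always
    network_mode: host
    environment:
      USER_ID: "1000"     # run openHAB in the container as this host UID
      GROUP_ID: "1000"
    volumes:
      - "/opt/openhab/conf:/openhab/conf"
      - "/opt/openhab/userdata:/openhab/userdata"
      - "/opt/openhab/addons:/openhab/addons"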

It’s a bit hard to reproduce without any specific information. :mage:

No, I think the issue was entirely mine. Editing an existing container is probably not the way to go.

Just throw away the existing container :whale: and create a new one :whale2: when upgrading.
As long as you map your data directories/volumes in the new container it should work! :smiley:


I have a feeling that something is going wrong here.
When upgrading from 2.4.0 to 2.5.0-snapshot I get 1.6 GB of files in the backup folder:

$ ls -l /datavol/openhab/openhab_userdata/backup/
total 1579480
-rw-r--r-- 1 root root 61132800 Jan 15 20:08 userdata-2019-01-15T20-07-36.tar
-rw-r--r-- 1 root root 61132800 Jan 15 20:08 userdata-2019-01-15T20-08-14.tar
-rw-r--r-- 1 root root 61132800 Jan 15 20:09 userdata-2019-01-15T20-08-54.tar
-rw-r--r-- 1 root root 61132800 Jan 15 20:09 userdata-2019-01-15T20-09-27.tar
-rw-r--r-- 1 root root 61132800 Jan 15 20:10 userdata-2019-01-15T20-10-08.tar
-rw-r--r-- 1 root root 61132800 Jan 15 20:11 userdata-2019-01-15T20-11-02.tar
-rw-r--r-- 1 root root 61132800 Jan 15 20:12 userdata-2019-01-15T20-11-47.tar
-rw-r--r-- 1 root root 61132800 Jan 15 20:12 userdata-2019-01-15T20-12-23.tar
-rw-r--r-- 1 root root 61132800 Jan 15 20:13 userdata-2019-01-15T20-13-11.tar
-rw-r--r-- 1 root root 61132800 Jan 15 20:14 userdata-2019-01-15T20-14-18.tar
-rw-r--r-- 1 root root 61132800 Jan 15 20:15 userdata-2019-01-15T20-15-02.tar
-rw-r--r-- 1 root root 61132800 Jan 15 20:16 userdata-2019-01-15T20-15-51.tar
-rw-r--r-- 1 root root 61132800 Jan 15 20:16 userdata-2019-01-15T20-16-28.tar
-rw-r--r-- 1 root root 61132800 Jan 15 20:17 userdata-2019-01-15T20-17-09.tar
-rw-r--r-- 1 root root 61132800 Jan 15 20:18 userdata-2019-01-15T20-18-04.tar
-rw-r--r-- 1 root root 61132800 Jan 15 20:19 userdata-2019-01-15T20-18-54.tar
-rw-r--r-- 1 root root 61132800 Jan 15 20:20 userdata-2019-01-15T20-19-29.tar
-rw-r--r-- 1 root root 61132800 Jan 15 20:21 userdata-2019-01-15T20-20-48.tar
-rw-r--r-- 1 root root 61132800 Jan 15 20:22 userdata-2019-01-15T20-21-50.tar
-rw-r--r-- 1 root root 61132800 Jan 15 20:22 userdata-2019-01-15T20-22-30.tar
-rw-r--r-- 1 root root 61132800 Jan 15 20:23 userdata-2019-01-15T20-23-12.tar
-rw-r--r-- 1 root root 61132800 Jan 15 20:24 userdata-2019-01-15T20-23-46.tar
-rw-r--r-- 1 root root 61132800 Jan 15 20:24 userdata-2019-01-15T20-24-37.tar
-rw-r--r-- 1 root root 61132800 Jan 15 20:25 userdata-2019-01-15T20-25-15.tar
-rw-r--r-- 1 root root 61132800 Jan 15 20:26 userdata-2019-01-15T20-26-04.tar
-rw-r--r-- 1 root root 61132800 Jan 15 20:27 userdata-2019-01-15T20-27-14.tar
-rw-r--r-- 1 root root 27934720 Jan 15 20:28 userdata-2019-01-15T20-28-35.tar

$ du -hs /datavol/openhab/openhab_userdata/backup/
1.5G	/datavol/openhab/openhab_userdata/backup/

All folders together before the backup are about 132 MB:

$ du -sh /datavol/openhab-initial/
132M	/datavol/openhab-initial/

What is being backed up by these scripts?

You should open a new thread for this.

You can open up the tar files and have a look. I just did a quick table of contents of the most recent backup I have (from this morning) and it looks like it is everything in userdata.

How big is your userdata folder (minus the backups folder of course, we don’t need recursive backups)?
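
For example, with the paths from your listing above, something like this shows both — the contents of the newest backup and the size of userdata without the backup folder:

# table of contents of the newest backup, without extracting it
tar -tvf /datavol/openhab/openhab_userdata/backup/userdata-2019-01-15T20-28-35.tar

# size of userdata excluding the backup folder (GNU du)
du -sh --exclude=backup /datavol/openhab/openhab_userdata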

On my system the backups are all in the half gig range which tracks with the overall size of the userdata folder.

rich@argus:/o/o/u/backup (master) ✗   ls -lh
total 7.1G
-rw-rw-r-- 1 openhab openhab 404M May  7  2018 userdata-2018-05-07T13-28-13.tar
-rw-rw-r-- 1 openhab openhab 430M May  9  2018 userdata-2018-05-09T11-03-12.tar
-rw-rw-r-- 1 openhab openhab 437M May 30  2018 userdata-2018-05-30T14-08-32.tar
-rw-rw-r-- 1 openhab openhab 475M Aug 15 15:31 userdata-2018-08-15T15-31-41.tar
-rw-rw-r-- 1 openhab openhab 479M Sep 14 14:48 userdata-2018-09-14T14-47-35.tar
-rw-rw-r-- 1 openhab openhab 485M Oct  4 12:08 userdata-2018-10-04T12-08-12.tar
-rw-rw-r-- 1 openhab openhab 483M Oct 24 16:04 userdata-2018-10-24T16-04-25.tar
-rw-rw-r-- 1 openhab openhab 489M Oct 30 11:49 userdata-2018-10-30T11-49-20.tar
-rw-rw-r-- 1 openhab openhab 514M Nov 13 11:24 userdata-2018-11-13T11-24-49.tar
-rw-rw-r-- 1 openhab openhab 519M Nov 14 14:42 userdata-2018-11-14T14-42-22.tar
-rw-rw-r-- 1 openhab openhab 521M Nov 23 09:27 userdata-2018-11-23T09-26-40.tar
-rw-rw-r-- 1 openhab openhab 518M Dec 17 13:38 userdata-2018-12-17T13-37-50.tar
-rw-rw-r-- 1 openhab openhab 361M Jan 14 10:28 userdata-2019-01-14T10-28-11.tar
-rw-rw-r-- 1 openhab openhab 551M Jan 14 10:30 userdata-2019-01-14T10-29-02.tar
-rw-rw-r-- 1 openhab openhab 531M Jan 15 08:14 userdata-2019-01-15T08-14-21.tar

And I just realized that I don’t have this folder in my .gitignore. Off to git to clean this up.

userdata is only 131 MB

Look at the backup dates. You’ve made ~30 backups in just 15 minutes. Guess your container failed to start while upgrading?

+ mkdir /openhab/userdata/backup
+ tar --exclude=/openhab/userdata/backup -c -f /openhab/userdata/backup/userdata-2019-01-16T09-55-47.tar /openhab/userdata
tar: Removing leading `/' from member names

The issue might be that I put the volume on a folder which is managed by a GlusterFS distributed FS.

I’ll have to check the Docker service log next time.
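
Probably with something like this (container/service names are placeholders for whatever the deployment uses):

# plain container
docker logs --tail 200 openhab

# swarm service
docker service logs --tail 200 openhab_openhab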

I ran the upgrade from 2.4.0 to 2.5.0-snapshot and I’m getting stuck in a loop:

openhab_openhab.1.gmit3677awkc@vevedock-02    | + '[' '!' -d /openhab/userdata/backup ']'
openhab_openhab.1.gmit3677awkc@vevedock-02    | + tar --exclude=/openhab/userdata/backup -c -f /openhab/userdata/backup/userdata-2019-01-17T12-46-54.tar /openhab/userdata
openhab_openhab.1.gmit3677awkc@vevedock-02    | tar: Removing leading `/' from member names
openhab_openhab.1.gmit3677awkc@vevedock-02    | tar: /openhab/userdata/cache/org.eclipse.osgi: file changed as we read it
openhab_openhab.1.m1n68nyy2l0i@vevedock-02    | ++ test -t 0
openhab_openhab.1.m1n68nyy2l0i@vevedock-02    | ++ echo false

Tar is giving a warning that a file changed while it was being read. I don’t know why that is, because at the next line it starts again from the beginning.

Are the times synchronized between your container and the host? I don’t know why that would matter but I’ve no other ideas.