You need to provide more context. If you are using the built-in upgrade capability (which is part of entrypoint.sh), you shouldn't be messing with version.properties or anything else.
If you are not using the upgrade capabilities built into entrypoint.sh (i.e. you created your own Dockerfile), I recommend looking at entrypoint.sh in the official GitHub repo for the steps you need to take to upgrade a container. IIRC, what I used to do was back up and then delete the contents of userdata, then run the new container; entrypoint.sh would see that userdata was empty and repopulate it. Then OH would load addons.cfg and install all the needed bindings.
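For reference, the manual steps described above might look roughly like this. This is a sketch, not the official procedure: the bind-mount paths, container name, and image tag are all illustrative, so adjust them to your setup.

```shell
# Manual upgrade sketch (assumes userdata/ and conf/ are bind mounts
# in the current directory; container name and image tag are illustrative).
mkdir -p userdata conf                      # ensure the mount dirs exist for this sketch

# 1. Back up userdata before touching anything
tar -cf userdata-backup.tar userdata

# 2. Stop and remove the old container (ignore errors if it is absent)
docker stop openhab 2>/dev/null || true
docker rm openhab   2>/dev/null || true

# 3. Empty userdata so entrypoint.sh repopulates it on next start
rm -rf userdata/*

# 4. Start the new container; entrypoint.sh sees the empty userdata,
#    repopulates it, and OH then installs the bindings from addons.cfg
docker run -d --name openhab \
  -v "$PWD/userdata:/openhab/userdata" \
  -v "$PWD/conf:/openhab/conf" \
  openhab/openhab:2.3.0 2>/dev/null || true
```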
I don’t know what you are trying to do so it is hard to answer specifically.
Hi, I’ve got a 2.2 setup via the docker Debian image that’s been going fine for months now. Looking for an easy way to get to 2.3. Would it be easiest to kill the 2.2 container, keep the volumes and reattach those volumes to a new 2.3 container?
Obviously back up your volumes first. But yes, since the addition of the upgrade ability this entire thread is about, just mount the old volumes to the new container and entrypoint.sh will see that the version in userdata doesn't match the software in the container, then delete and update whatever in userdata is necessary for the upgrade.
Essentially entrypoint.sh now does the same thing that apt-get does during an upgrade and that the upgrade script in the OH bin directory does for manual installs.
So the tl;dr is start a new container with the old volumes and everything will be taken care of for you.
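That tl;dr as commands might look like the following sketch. The volume names, container name, and tag are illustrative assumptions; the point is simply that the old volumes are mounted into a container running the newer image, and entrypoint.sh does the version check on startup.

```shell
# Sketch: reuse the existing named volumes with a newer image tag.
# Volume/container names and the tag are illustrative.
TAG=2.3.0
docker stop openhab 2>/dev/null || true
docker rm openhab   2>/dev/null || true
docker run -d --name openhab \
  -v openhab_conf:/openhab/conf \
  -v openhab_userdata:/openhab/userdata \
  "openhab/openhab:${TAG}" 2>/dev/null || true
echo "started openhab/openhab:${TAG} with existing volumes"
```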
Does this still work? I tried it on a basic 2.3 install, then told the container to run version 2.4, but I see the container restart-cycling.
2018-12-18 19:59:36.076 [WARN ] [org.jline ] - Unable to create a system terminal, creating a dumb terminal (enable debug logging for more information)
2018-12-18 19:59:39.419 [WARN ] [raf.features.internal.osgi.Activator] - Error starting activator
java.lang.IllegalStateException: BundleContext is no longer valid
at org.eclipse.osgi.internal.framework.BundleContextImpl.checkValid(BundleContextImpl.java:989) ~[?:?]
at org.eclipse.osgi.internal.framework.BundleContextImpl.registerService(BundleContextImpl.java:468) ~[?:?]
at org.eclipse.osgi.internal.framework.BundleContextImpl.registerService(BundleContextImpl.java:487) ~[?:?]
at org.eclipse.osgi.internal.framework.BundleContextImpl.registerService(BundleContextImpl.java:1004) ~[?:?]
at org.apache.karaf.util.tracker.BaseActivator.register(BaseActivator.java:388) ~[11:org.apache.karaf.features.core:4.2.1]
at org.apache.karaf.util.tracker.BaseActivator.register(BaseActivator.java:376) ~[11:org.apache.karaf.features.core:4.2.1]
at org.apache.karaf.features.internal.osgi.Activator.doStart(Activator.java:180) ~[11:org.apache.karaf.features.core:4.2.1]
at org.apache.karaf.util.tracker.BaseActivator.run(BaseActivator.java:275) [11:org.apache.karaf.features.core:4.2.1]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:?]
at java.lang.Thread.run(Thread.java:748) [?:?]
Yeah, I did clear the cache and it made no difference. What ended up fixing it for me was using docker-compose to push it back to 2.3, then I updated the image version to 2.4 and it upgraded.
The first time I tried doing it in Portainer. No idea what the difference was.
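For anyone following along, the roll-back-then-upgrade flow described above could be sketched with a compose file like the one below. The service name, paths, and tags are illustrative, not taken from the poster's actual setup.

```yaml
# docker-compose.yml sketch; service name, volumes, and tags are illustrative.
# Step 1: pin the image to the known-good version and run `docker-compose up -d`.
# Step 2: change the tag to 2.4.0 and run `docker-compose up -d` again;
#         compose recreates the container and entrypoint.sh performs the upgrade.
version: "2"
services:
  openhab:
    image: "openhab/openhab:2.3.0"   # change to 2.4.0 for the upgrade
    restart: always
    network_mode: host
    volumes:
      - ./conf:/openhab/conf
      - ./userdata:/openhab/userdata
```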
I use Docker commands directly to upgrade (well, I use Ansible to bring up the containers) so maybe there is something going on with that difference.
There have been some recent changes to automatically detect and build for the correct architecture so we don't have to specify the architecture in the image tag anymore. Maybe something there went wrong.
Just throw away the existing container and create a new one when upgrading.
As long as you map your data directories/volumes in the new container it should work!
You can open up the tar files and have a look. I just did a quick table of contents of the most recent backup I have (from this morning) and it looks like it is everything in userdata.
How big is your userdata folder (minus the backups folder of course, we don’t need recursive backups)?
On my system the backups are all in the half gig range which tracks with the overall size of the userdata folder.
rich@argus:/o/o/u/backup (master) ✗ ls -lh
total 7.1G
-rw-rw-r-- 1 openhab openhab 404M May 7 2018 userdata-2018-05-07T13-28-13.tar
-rw-rw-r-- 1 openhab openhab 430M May 9 2018 userdata-2018-05-09T11-03-12.tar
-rw-rw-r-- 1 openhab openhab 437M May 30 2018 userdata-2018-05-30T14-08-32.tar
-rw-rw-r-- 1 openhab openhab 475M Aug 15 15:31 userdata-2018-08-15T15-31-41.tar
-rw-rw-r-- 1 openhab openhab 479M Sep 14 14:48 userdata-2018-09-14T14-47-35.tar
-rw-rw-r-- 1 openhab openhab 485M Oct 4 12:08 userdata-2018-10-04T12-08-12.tar
-rw-rw-r-- 1 openhab openhab 483M Oct 24 16:04 userdata-2018-10-24T16-04-25.tar
-rw-rw-r-- 1 openhab openhab 489M Oct 30 11:49 userdata-2018-10-30T11-49-20.tar
-rw-rw-r-- 1 openhab openhab 514M Nov 13 11:24 userdata-2018-11-13T11-24-49.tar
-rw-rw-r-- 1 openhab openhab 519M Nov 14 14:42 userdata-2018-11-14T14-42-22.tar
-rw-rw-r-- 1 openhab openhab 521M Nov 23 09:27 userdata-2018-11-23T09-26-40.tar
-rw-rw-r-- 1 openhab openhab 518M Dec 17 13:38 userdata-2018-12-17T13-37-50.tar
-rw-rw-r-- 1 openhab openhab 361M Jan 14 10:28 userdata-2019-01-14T10-28-11.tar
-rw-rw-r-- 1 openhab openhab 551M Jan 14 10:30 userdata-2019-01-14T10-29-02.tar
-rw-rw-r-- 1 openhab openhab 531M Jan 15 08:14 userdata-2019-01-15T08-14-21.tar
And I just realized that I don’t have this folder in my .gitignore. Off to git to clean this up.
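To answer the "how big is userdata minus the backups folder" question on your own system, GNU du's --exclude option works; the default path below is an assumption, so point USERDATA at your actual mount.

```shell
# Report the size of userdata while excluding the backup folder.
# The default path is illustrative; set USERDATA to your own mount.
USERDATA="${USERDATA:-/openhab/userdata}"
du -sh --exclude=backup "$USERDATA" 2>/dev/null || \
  echo "could not read $USERDATA; set USERDATA to your userdata path"
```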
+ mkdir /openhab/userdata/backup
+ tar --exclude=/openhab/userdata/backup -c -f /openhab/userdata/backup/userdata-2019-01-16T09-55-47.tar /openhab/userdata
tar: Removing leading `/' from member names
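If you want to verify what one of those backups actually captured, tar can print a table of contents without extracting anything. The filename below is taken from the log above and is only illustrative; set BACKUP to whichever tar you want to inspect.

```shell
# Print the table of contents of a backup tar without extracting it.
# The default filename comes from the log above and is illustrative.
BACKUP="${BACKUP:-/openhab/userdata/backup/userdata-2019-01-16T09-55-47.tar}"
tar -tf "$BACKUP" 2>/dev/null || echo "backup not found at $BACKUP"
```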
The issue might be that I put the volume on a folder which is managed by a GlusterFS distributed filesystem.