openHAB Docker | Upgrade 3.2 to 3.3

Check your environment variables.
You probably have one there stating the version.
If it says 3.2, then that's your issue. Delete it and try again.
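A quick way to check from the host is to list the environment variables the container was created with. This is a sketch; the container name `oh33test2` is taken from the compose file later in this thread, so adjust it to yours:

```shell
# List all environment variables a running container was created with
# (the container name "oh33test2" is just an example):
docker inspect --format '{{range .Config.Env}}{{println .}}{{end}}' oh33test2

# Or look only for openHAB-related variables from inside the container:
docker exec oh33test2 env | grep -i openhab
```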

Where can I do this? Remember, the Docker container should be independent from the others. And the Docker image is set correctly …

Are you using Portainer?

I could. But I use docker-compose.

Created a new plain test … all directories were created by Docker. I didn't have any addons / conf / userdata folders set up for the last test.

$ cat docker-compose.yml 
version: '3'

services:

  openhab33:
#    image: openhab/openhab:latest
    image: openhab/openhab:3.3.0
    container_name: oh33test2
    restart: unless-stopped
    network_mode: host
    cap_add:
      - NET_ADMIN
      - NET_RAW
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./openhab/addons:/openhab/addons
      - ./openhab/conf:/openhab/conf
      - ./openhab/userdata:/openhab/userdata
    environment:
      OPENHAB_HTTP_PORT: "8082"
      OPENHAB_HTTPS_PORT: "8445"
      USER_ID: "9001"
      GROUP_ID: "9001"
      EXTRA_JAVA_OPTS: "-Duser.timezone=Europe/Berlin"
    # The command node is very important. It overrides
    # the "gosu openhab tini -s ./start.sh" command from the Dockerfile and runs as root!
    command: "tini -s ./start.sh server"

The OH3.3 image is used but OH3.2 is inside?

pi@pi4:~ $ docker ps | grep test2
17bbb5cedf5d   openhab/openhab:3.3.0           "/entrypoint tini -s…"   2 hours ago    Up 2 hours (healthy)                                                                    oh33test2

pi@pi4:~ $ docker exec -ti oh33test2 /openhab/runtime/bin/client
Logging in as openhab
Password:  

                           _   _     _     ____  
   ___   ___   ___   ___  | | | |   / \   | __ ) 
  / _ \ / _ \ / _ \ / _ \ | |_| |  / _ \  |  _ \ 
 | (_) | (_) |  __/| | | ||  _  | / ___ \ | |_) )
  \___/|  __/ \___/|_| |_||_| |_|/_/   \_\|____/ 
       |_|       3.2.0 - Release Build

Use '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
To exit, use '<ctrl-d>' or 'logout'.

openhab> list -s org.openhab.core
START LEVEL 100 , List Threshold: 50
 ID │ State  │ Lvl │ Version │ Symbolic name
────┼────────┼─────┼─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
151 │ Active │  80 │ 3.2.0   │ org.openhab.core

Something is wrong here. Are you 100% positive that you are not still running the 3.2 container somewhere?

The only way you could be running the 3.3 image and have it tell you it's 3.2 when you log in to the Karaf console is if you are really actually running the 3.2 image after all, or the upgrade failed when starting the 3.3 image and userdata/etc/version.properties didn't get replaced. But if that were the case, the bundle would still say 3.3 even if the login welcome says 3.2.

So either something is seriously wrong with the official Docker image, which is unlikely or else others would be complaining about the same problem, or you are actually running the 3.2 image somehow.

Note, if OH 3.2 is still running it will have grabbed port 8101 (going from memory). And since both would be running with net=host, even if you are inside the 3.3 container, you'll be connecting to the 3.2 instance since it will have grabbed the SSH port first.

If you also change the container name with every version upgrade, it will probably not remove the old container.

After you've pulled the new image, did you do docker-compose up -d? Without this, it will keep using the old image.
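The full sequence would look something like this (a sketch; run it in the directory containing your docker-compose.yml):

```shell
# Fetch the image referenced in docker-compose.yml, then recreate any
# container whose image or configuration changed:
docker-compose pull
docker-compose up -d

# Confirm which image each running container actually uses:
docker ps --format '{{.Names}}\t{{.Image}}'
```

If `up -d` reports the container as "up-to-date" even though you expected a new image, the compose file is probably still pinned to the old tag.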

During my current test another container is running OH 3.2; it's used for the current production version.

But this should be independent, because all Docker containers are independent from the others, right? Btw, I also use different folders and different docker-compose files.

And … in my first attempt I tried upgrading everything at once: kill the old container, use only the new one. But that's where the trouble started … :smiley:

Now I'm going to find out what the reason can be. Therefore I try to use a different setup for each test.

Different "projects" … different docker-compose files. I change the container name so as not to affect the old / other container.

Sure, have a look at the output. The OH 3.3 image has been used.

Not when you are using net=host:

That makes it so that there is no network isolation between the container and the host. And if you have two containers running with net=host, that means there's no network isolation between those containers either.

Since both use port 8101 for SSH to the Karaf console, whichever one comes up first will work and the second one will fail. You should be seeing "Bind" exceptions in the OH 3.3 instance's logs when it attempts to bind to the SSH port (8101), the LSP port, and the broadcast port, because your OH 3.2 instance is already bound to those ports.
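One way to check this on the host (a sketch; 8101 is Karaf's default SSH port and the container name `oh33test2` is an example):

```shell
# Show which process currently holds the Karaf SSH port on the host:
ss -tlnp | grep 8101

# Grep the new container's logs for bind failures:
docker logs oh33test2 2>&1 | grep -i 'bind\|address already in use'
```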

Ok. You are right. I now tried 3.3.0 and 3.4.0.M5 in separate containers without net=host and it seems to work. I will duplicate my OH 3.2 to a new container to test the automatic upgrade.

Additionally I found that Shelly Plus devices are only supported starting with 3.4.0.M5. I need to check when OH 3.4 will officially be released.

I still need to use net=host for Amazon Dash Buttons, though. But I guess for the moment I will try this stand-alone.

A couple of weeks from now.


I have shut down the OH 3.2 instance and also both of my tests.

I have copied the openHAB data to the test folder and started a new OH 3.4.0.M5 test. But it's the same… the update will not run… bindings are not able to download, again. Internet from inside the container is available.

$ docker exec -it oh33test /bin/bash 
root@f2cf4162ecc3:/openhab# ping google.de
PING google.de (142.251.36.3) 56(84) bytes of data.
64 bytes from ams15s44-in-f3.1e100.net (142.251.36.3): icmp_seq=1 ttl=118 time=9.73 ms
64 bytes from ams15s44-in-f3.1e100.net (142.251.36.3): icmp_seq=2 ttl=118 time=9.39 ms
^C
--- google.de ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 9.385/9.555/9.726/0.170 ms
root@f2cf4162ecc3:/openhab# 

==> openhab/userdata/logs/update.log <==
Replacing userdata system files with newer versions...
Clearing cache...

Performing post-update tasks for version 3.3.0:

Performing post-update tasks for version 3.4.0:


SUCCESS: openHAB updated from 3.2.0 to 3.4.0.M5


==> openhab/userdata/logs/openhab.log <==
2022-12-05 21:54:32.182 [ERROR] [core.karaf.internal.FeatureInstaller] - Failed installing 'openhab-package-standard': Error:
	Error downloading mvn:org.eclipse.jetty.websocket/websocket-common/9.4.43.v20210629
	Error downloading mvn:org.eclipse.jetty.websocket/websocket-client/9.4.43.v20210629
	Error downloading mvn:org.eclipse.jetty/jetty-util/9.4.43.v20210629
	Error downloading mvn:org.eclipse.jetty.websocket/websocket-api/9.4.43.v20210629
	Error downloading mvn:org.eclipse.jetty/jetty-http/9.4.43.v20210629
	Error downloading mvn:org.eclipse.jetty/jetty-io/9.4.43.v20210629
	Error downloading mvn:org.eclipse.jetty/jetty-proxy/9.4.43.v20210629
	Error downloading mvn:org.eclipse.jetty/jetty-client/9.4.43.v20210629
2022-12-05 21:54:33.691 [ERROR] [core.karaf.internal.FeatureInstaller] - Failed installing 'openhab-binding-hue, openhab-binding-modbus, openhab-persistence-mapdb, openhab-binding-webthing, openhab-binding-amazondashbutton, openhab-misc-openhabcloud, openhab-binding-tplinksmarthome, openhab-transformation-javascript, openhab-persistence-influxdb, openhab-transformation-regex, openhab-ui-habpanel, openhab-transformation-jsonpath, openhab-automation-jsscripting, openhab-binding-shelly, openhab-binding-mqtt, openhab-persistence-rrd4j, openhab-transformation-map, openhab-ui-basic, openhab-binding-smartmeter, openhab-binding-astro, openhab-binding-telegram, openhab-transformation-jinja': Error:
	Error downloading mvn:org.eclipse.jetty.websocket/websocket-client/9.4.43.v20210629
	Error downloading mvn:org.eclipse.jetty/jetty-http/9.4.43.v20210629
	Error downloading mvn:org.eclipse.jetty.websocket/websocket-api/9.4.43.v20210629
	Error downloading mvn:org.eclipse.jetty.websocket/websocket-common/9.4.43.v20210629
	Error downloading mvn:org.eclipse.jetty/jetty-util/9.4.43.v20210629
	Error downloading mvn:org.eclipse.jetty/jetty-proxy/9.4.43.v20210629
	Error downloading mvn:org.eclipse.jetty/jetty-client/9.4.43.v20210629
	Error downloading mvn:org.eclipse.jetty/jetty-io/9.4.43.v20210629
2022-12-05 21:54:35.269 [ERROR] [core.karaf.internal.FeatureInstaller] - Failed to refresh bundles after processing config update
org.apache.karaf.features.internal.util.MultiException: Error:
	Error downloading mvn:org.eclipse.jetty.websocket/websocket-api/9.4.43.v20210629
	Error downloading mvn:org.eclipse.jetty/jetty-client/9.4.43.v20210629
	Error downloading mvn:org.eclipse.jetty.websocket/websocket-common/9.4.43.v20210629
	Error downloading mvn:org.eclipse.jetty/jetty-io/9.4.43.v20210629
	Error downloading mvn:org.eclipse.jetty.websocket/websocket-client/9.4.43.v20210629
	Error downloading mvn:org.eclipse.jetty/jetty-http/9.4.43.v20210629
	Error downloading mvn:org.eclipse.jetty/jetty-util/9.4.43.v20210629
	Error downloading mvn:org.eclipse.jetty/jetty-proxy/9.4.43.v20210629
	at org.apache.karaf.features.internal.download.impl.MavenDownloadManager$MavenDownloader.<init>(MavenDownloadManager.java:91) ~[?:?]
	at org.apache.karaf.features.internal.download.impl.MavenDownloadManager.createDownloader(MavenDownloadManager.java:72) ~[?:?]
	at org.apache.karaf.features.internal.region.Subsystem.downloadBundles(Subsystem.java:457) ~[?:?]
	at org.apache.karaf.features.internal.region.Subsystem.downloadBundles(Subsystem.java:452) ~[?:?]
	at org.apache.karaf.features.internal.region.SubsystemResolver.resolve(SubsystemResolver.java:224) ~[?:?]
	at org.apache.karaf.features.internal.service.Deployer.deploy(Deployer.java:399) ~[?:?]
	at org.apache.karaf.features.internal.service.FeaturesServiceImpl.doProvision(FeaturesServiceImpl.java:1069) ~[?:?]
	at org.apache.karaf.features.internal.service.FeaturesServiceImpl.lambda$doProvisionInThread$13(FeaturesServiceImpl.java:1004) ~[?:?]
	at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:829) [?:?]
	Suppressed: java.io.IOException: Error downloading mvn:org.eclipse.jetty.websocket/websocket-api/9.4.43.v20210629
		at org.apache.karaf.features.internal.download.impl.AbstractRetryableDownloadTask.run(AbstractRetryableDownloadTask.java:77) ~[?:?]
		at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
		at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
		at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
		at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
		at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
		at java.lang.Thread.run(Thread.java:829) [?:?]
	Caused by: java.io.IOException: Error resolving artifact org.eclipse.jetty.websocket:websocket-api:jar:9.4.43.v20210629: [Could not find artifact org.eclipse.jetty.websocket:websocket-api:jar:9.4.43.v20210629 in openhab (https://openhab.jfrog.io/openhab/libs-milestone/)]
		at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.configureIOException(AetherBasedResolver.java:803) ~[?:?]
		at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:774) ~[?:?]
		at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:657) ~[?:?]
		at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:598) ~[?:?]
		at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:565) ~[?:?]
		at org.apache.karaf.features.internal.download.impl.MavenDownloadTask.download(MavenDownloadTask.java:52) ~[?:?]
		at org.apache.karaf.features.internal.download.impl.AbstractRetryableDownloadTask.run(AbstractRetryableDownloadTask.java:60) ~[?:?]
		... 6 more
		Suppressed: shaded.org.eclipse.aether.transfer.ArtifactNotFoundException: Could not find artifact org.eclipse.jetty.websocket:websocket-api:jar:9.4.43.v20210629 in openhab (https://openhab.jfrog.io/openhab/libs-milestone/)
			at shaded.org.eclipse.aether.connector.basic.ArtifactTransportListener.transferFailed(ArtifactTransportListener.java:48) ~[?:?]
			at shaded.org.eclipse.aether.connector.basic.BasicRepositoryConnector$TaskRunner.run(BasicRepositoryConnector.java:368) ~[?:?]
			at shaded.org.eclipse.aether.util.concurrency.RunnableErrorForwarder$1.run(RunnableErrorForwarder.java:75) ~[?:?]
			at shaded.org.eclipse.aether.connector.basic.BasicRepositoryConnector$DirectExecutor.execute(BasicRepositoryConnector.java:642) ~[?:?]
			at shaded.org.eclipse.aether.connector.basic.BasicRepositoryConnector.get(BasicRepositoryConnector.java:262) ~[?:?]
			at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.performDownloads(DefaultArtifactResolver.java:489) ~[?:?]
			at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:390) ~[?:?]
			at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts(DefaultArtifactResolver.java:215) ~[?:?]
			at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifact(DefaultArtifactResolver.java:192) ~[?:?]
			at shaded.org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveArtifact(DefaultRepositorySystem.java:247) ~[?:?]
			at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:767) ~[?:?]
			at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:657) ~[?:?]
			at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:598) ~[?:?]
			at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:565) ~[?:?]
			at org.apache.karaf.features.internal.download.impl.MavenDownloadTask.download(MavenDownloadTask.java:52) ~[?:?]
			at org.apache.karaf.features.internal.download.impl.AbstractRetryableDownloadTask.run(AbstractRetryableDownloadTask.java:60) ~[?:?]
			at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
			at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
			at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
			at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
			at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
			at java.lang.Thread.run(Thread.java:829) [?:?]
	Caused by: shaded.org.eclipse.aether.resolution.ArtifactResolutionException: Error resolving artifact org.eclipse.jetty.websocket:websocket-api:jar:9.4.43.v20210629
		at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:413) ~[?:?]
		at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts(DefaultArtifactResolver.java:215) ~[?:?]
		at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifact(DefaultArtifactResolver.java:192) ~[?:?]
		at shaded.org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveArtifact(DefaultRepositorySystem.java:247) ~[?:?]
		at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:767) ~[?:?]
		at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:657) ~[?:?]
		at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:598) ~[?:?]
		at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:565) ~[?:?]
		at org.apache.karaf.features.internal.download.impl.MavenDownloadTask.download(MavenDownloadTask.java:52) ~[?:?]
		at org.apache.karaf.features.internal.download.impl.AbstractRetryableDownloadTask.run(AbstractRetryableDownloadTask.java:60) ~[?:?]
		... 6 more

Can you ping https://mvnrepository.com/? I think that's where it's trying to download the requirements some of your add-ons depend upon. Maybe there is something funky going on with your firewall, DNS, etc.

Yes. Ping is working …

$ docker exec -it oh33test /bin/bash 
root@f413dad7c239:/openhab# ping mvnrepository.com
PING mvnrepository.com (172.67.28.102) 56(84) bytes of data.
64 bytes from 172.67.28.102 (172.67.28.102): icmp_seq=1 ttl=55 time=7.24 ms
64 bytes from 172.67.28.102 (172.67.28.102): icmp_seq=2 ttl=55 time=7.13 ms
64 bytes from 172.67.28.102 (172.67.28.102): icmp_seq=3 ttl=55 time=7.46 ms
^C
--- mvnrepository.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 7.129/7.276/7.460/0.137 ms
root@f413dad7c239:/openhab# 

I tried now another test … stopped all OH containers.

  • Copied the current "oh" data to the test (3.4.0.M5).
  • Added the .kar file to the addons folder.
  • chown openhab:openhab on all files/folders.

The same trouble is going on… Should I remove some data in another folder, too?
Cache cleaning is done by the upgrade script.

If I open this URL (JFrog [release] or JFrog [M5]) in a browser I need to log in… If I use the mouse back button I can see the listing. But first I get the login screen. Can this also be a reason? Or is OH using a (public) key to avoid the login screen?

Only if the version reported in userdata/etc/version.properties is different from the version in the image. In this case, since it's the same version, no upgrade occurs.
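You can check this yourself: the entrypoint trace further down in this thread shows it running `cmp` on the two copies of `version.properties` and only upgrading when they differ. A sketch, assuming the bind-mount paths from the compose file above and the example container name:

```shell
# Version recorded in the mounted userdata (host side):
grep -i version ./openhab/userdata/etc/version.properties

# Compare it against the copy shipped inside the image. No output and
# exit code 0 mean the files are identical, so no upgrade will run:
docker exec oh33test2 cmp /openhab/userdata/etc/version.properties \
    /openhab/dist/userdata/etc/version.properties
```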

There's an "x" in the upper right corner to close the login screen. Click that and you'll get the file listing. I'm not sure why JFrog added that annoying login screen.

It’s all built into the web page. It won’t prevent OH from downloading.

And thus far you are the only one reporting anything like these sorts of problems so we have to assume there is something unique about your environment that is causing the problems.

Ok. I tried downloading a file directly from inside the container… this worked… It has the same filesize as the one I had already put into this path from outside.

root@f413dad7c239:/openhab/addons# wget https://openhab.jfrog.io/artifactory/libs-milestone/org/openhab/distro/openhab-addons/3.4.0.M5/openhab-addons-3.4.0.M5.kar
--2022-12-05 22:52:36--  https://openhab.jfrog.io/artifactory/libs-milestone/org/openhab/distro/openhab-addons/3.4.0.M5/openhab-addons-3.4.0.M5.kar
Resolving openhab.jfrog.io (openhab.jfrog.io)... 34.139.10.89
Connecting to openhab.jfrog.io (openhab.jfrog.io)|34.139.10.89|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 371638583 (354M) [application/octet-stream]
Saving to: ‘openhab-addons-3.4.0.M5.kar.1’

openhab-addons-3.4.0.M5.kar.1                                      100%[================================================================================================================================================================>] 354.42M  14.8MB/s    in 25s     

2022-12-05 22:53:01 (14.3 MB/s) - ‘openhab-addons-3.4.0.M5.kar.1’ saved [371638583/371638583]

root@f413dad7c239:/openhab/addons# ls -la
total 725880
drwxr-xr-x 2 openhab openhab      4096 Dec  5 22:52 .
drwxr-xr-x 1 openhab openhab      4096 Dec  5 22:50 ..
-rw-r--r-- 1 openhab openhab 371638583 Nov 27 18:46 openhab-addons-3.4.0.M5.kar
-rw-r--r-- 1 root    root    371638583 Nov 27 18:28 openhab-addons-3.4.0.M5.kar.1
root@f413dad7c239:/openhab/addons# 

I also restarted the container again. Maybe sometimes things happen after a 2nd start… But no success.

oh33test     | uuid' ']'
oh33test     | ++ cmp /openhab/userdata/etc/version.properties /openhab/dist/userdata/etc/version.properties
oh33test     | + '[' '!' -z ']'
oh33test     | + chown -R openhab:openhab /openhab
oh33test     | + sync
oh33test     | + '[' -d /etc/cont-init.d ']'
oh33test     | + sync
oh33test     | + '[' false == false ']'
oh33test     | ++ IFS=' '
oh33test     | ++ echo tini -s ./start.sh server
oh33test     | + '[' 'tini -s ./start.sh server' == 'gosu openhab tini -s ./start.sh' ']'
oh33test     | + exec tini -s ./start.sh server
oh33test     | Launching the openHAB runtime...
oh33test     | org.apache.karaf.features.internal.util.MultiException: Error:
oh33test     | 	Error downloading mvn:org.eclipse.jetty.websocket/websocket-api/9.4.43.v20210629
oh33test     | 	Error downloading mvn:org.eclipse.jetty/jetty-proxy/9.4.43.v20210629
oh33test     | 	Error downloading mvn:org.eclipse.jetty/jetty-http/9.4.43.v20210629
oh33test     | 	Error downloading mvn:org.eclipse.jetty/jetty-client/9.4.43.v20210629
oh33test     | 	Error downloading mvn:org.eclipse.jetty.websocket/websocket-common/9.4.43.v20210629
oh33test     | 	Error downloading mvn:org.eclipse.jetty/jetty-util/9.4.43.v20210629
oh33test     | 	Error downloading mvn:org.eclipse.jetty.websocket/websocket-client/9.4.43.v20210629
oh33test     | 	Error downloading mvn:org.eclipse.jetty/jetty-io/9.4.43.v20210629
oh33test     | 	at org.apache.karaf.features.internal.download.impl.MavenDownloadManager$MavenDownloader.<init>(MavenDownloadManager.java:91)
oh33test     | 	at org.apache.karaf.features.internal.download.impl.MavenDownloadManager.createDownloader(MavenDownloadManager.java:72)
oh33test     | 	at org.apache.karaf.features.internal.region.Subsystem.downloadBundles(Subsystem.java:457)
oh33test     | 	at org.apache.karaf.features.internal.region.Subsystem.downloadBundles(Subsystem.java:452)
oh33test     | 	at org.apache.karaf.features.internal.region.SubsystemResolver.resolve(SubsystemResolver.java:224)
oh33test     | 	at org.apache.karaf.features.internal.service.Deployer.deploy(Deployer.java:399)
oh33test     | 	at org.apache.karaf.features.internal.service.FeaturesServiceImpl.doProvision(FeaturesServiceImpl.java:1069)
oh33test     | 	at org.apache.karaf.features.internal.service.FeaturesServiceImpl.lambda$doProvisionInThread$13(FeaturesServiceImpl.java:1004)
oh33test     | 	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
oh33test     | 	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
oh33test     | 	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
oh33test     | 	at java.base/java.lang.Thread.run(Thread.java:829)
oh33test     | 	Suppressed: java.io.IOException: Error downloading mvn:org.eclipse.jetty.websocket/websocket-api/9.4.43.v20210629
oh33test     | 		at org.apache.karaf.features.internal.download.impl.AbstractRetryableDownloadTask.run(AbstractRetryableDownloadTask.java:77)
oh33test     | 		at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
oh33test     | 		at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
oh33test     | 		at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
oh33test     | 		... 3 more
oh33test     | 	Caused by: java.io.IOException: Error resolving artifact org.eclipse.jetty.websocket:websocket-api:jar:9.4.43.v20210629: [Could not find artifact org.eclipse.jetty.websocket:websocket-api:jar:9.4.43.v20210629 in openhab (https://openhab.jfrog.io/openhab/libs-milestone/)]

I’m totally confused …

So am I.

Hmmmm. I tried to find that websocket-api library at JFrog and it's not there! There's a different version, 9.4.38.v20210224.

This isn't a Docker-specific issue. And I have to believe some people are still installing 3.3, so it's weird you are the only one having the problem.

My recommendation at this point is to reply on the OH 3.3 release thread to see if maybe someone failed to publish a library there sometime after the release.

Then see what happens with the latest 3.4 milestone release (M5, I think). If it works, you can upgrade to the 3.4 release in a couple of weeks with minimal effort, as there won't be that many changes to deal with for such a small jump.

Yeah. Maybe there is something specific at my end I don’t know …

Is there a way to export all data?
If yes, I'd import it into a new Docker container. Yes, that doesn't make much sense… But in case there is something broken historically in the background, it would go away …

Do you mean a specific thread?
Btw … the last test was made with 3.4.0.M5 … but it results in the same issue as 3.3.0.

All the data is stored in conf and userdata. You already have both of those folders so you already have all the data.
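So a backup before any upgrade attempt can be as simple as archiving those two folders while the container is stopped. A sketch, assuming the host-side bind-mount paths from the compose file earlier in the thread:

```shell
# Stop the container first so userdata is quiescent, then archive both
# bind-mounted folders (paths assume the compose file shown earlier):
docker-compose stop
tar czf "openhab-backup-$(date +%F).tar.gz" ./openhab/conf ./openhab/userdata
docker-compose start
```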

If it’s still happening with 3.4, use this thread.

Short update: I'm now running OH 3.4 without any major issues during the migration.