Jetty update, Karaf 4.1.3 upgrade and full LSP support

This might be a bit more of an issue when running in docker containers, as even when reusing/restarting the same container, OH doesn’t seem to start up. The only way so far that I’ve managed to get it start is by opening a shell into the container, and manually starting OH twice. But again things stop working as soon as I exit the shell and restart the same container.
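
For clarity, the manual start I mean is something like this (a sketch; the container name and paths are my assumptions based on how the official image lays things out):

docker exec -it openhab /bin/bash   # open a shell inside the (restarted) container
cd /openhab                         # OPENHAB_HOME in the official image, as far as I know
./start.sh                          # start openHAB by hand; I had to do this twice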

I have the same problem with the artifact resolver:

2017-11-19 13:47:46.198 [ERROR] [core.karaf.internal.FeatureInstaller] - Failed installing 'openhab-package-"standard"': No matching features for openhab-package-"standard"/0.0.0
2017-11-19 13:47:47.462 [WARN ] [url.mvn.internal.AetherBasedResolver] - Error resolving artifact org.openhab.binding:org.openhab.binding.homematic:jar:2.2.0-SNAPSHOT: [Could not find artifact org.openhab.binding:org.openhab.binding.homematic:jar:2.2.0-SNAPSHOT]
java.io.IOException: Error resolving artifact org.openhab.binding:org.openhab.binding.homematic:jar:2.2.0-SNAPSHOT: [Could not find artifact org.openhab.binding:org.openhab.binding.homematic:jar:2.2.0-SNAPSHOT]
        at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:720) [4:org.ops4j.pax.url.mvn:2.5.3]
        at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:659) [4:org.ops4j.pax.url.mvn:2.5.3]
        at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:600) [4:org.ops4j.pax.url.mvn:2.5.3]
        at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:567) [4:org.ops4j.pax.url.mvn:2.5.3]
        at org.apache.karaf.features.internal.download.impl.MavenDownloadTask.download(MavenDownloadTask.java:47) [9:org.apache.karaf.features.core:4.1.3]
        at org.apache.karaf.features.internal.download.impl.AbstractRetryableDownloadTask.run(AbstractRetryableDownloadTask.java:60) [9:org.apache.karaf.features.core:4.1.3]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:?]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:?]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [?:?]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [?:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:?]
        at java.lang.Thread.run(Thread.java:748) [?:?]
        Suppressed: shaded.org.eclipse.aether.transfer.ArtifactNotFoundException: Could not find artifact org.openhab.binding:org.openhab.binding.homematic:jar:2.2.0-SNAPSHOT
                at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:434) [4:org.ops4j.pax.url.mvn:2.5.3]
                at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts(DefaultArtifactResolver.java:246) [4:org.ops4j.pax.url.mvn:2.5.3]
                at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifact(DefaultArtifactResolver.java:223) [4:org.ops4j.pax.url.mvn:2.5.3]
                at shaded.org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveArtifact(DefaultRepositorySystem.java:294) [4:org.ops4j.pax.url.mvn:2.5.3]
                at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:705) [4:org.ops4j.pax.url.mvn:2.5.3]
                at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:659) [4:org.ops4j.pax.url.mvn:2.5.3]
                at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:600) [4:org.ops4j.pax.url.mvn:2.5.3]
                at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:567) [4:org.ops4j.pax.url.mvn:2.5.3]
                at org.apache.karaf.features.internal.download.impl.MavenDownloadTask.download(MavenDownloadTask.java:47) [9:org.apache.karaf.features.core:4.1.3]
                at org.apache.karaf.features.internal.download.impl.AbstractRetryableDownloadTask.run(AbstractRetryableDownloadTask.java:60) [9:org.apache.karaf.features.core:4.1.3]
                at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:?]
                at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:?]
                at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [?:?]
                at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [?:?]
                at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:?]
                at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:?]
                at java.lang.Thread.run(Thread.java:748) [?:?]
Caused by: shaded.org.eclipse.aether.resolution.ArtifactResolutionException: Error resolving artifact org.openhab.binding:org.openhab.binding.homematic:jar:2.2.0-SNAPSHOT
        at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:444) ~[?:?]
        at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts(DefaultArtifactResolver.java:246) ~[?:?]
        at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifact(DefaultArtifactResolver.java:223) ~[?:?]
        at shaded.org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveArtifact(DefaultRepositorySystem.java:294) ~[?:?]
        at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:705) ~[?:?]
        ... 12 more
2017-11-19 13:47:47.524 [ERROR] [core.karaf.internal.FeatureInstaller] - Failed installing 'openhab-persistence-mapdb", openhab-persistence-"influxdb, openhab-transformation-map", openhab-binding-"expire1, openhab-action-"pushover", openhab-transformation-jsonpath, openhab-binding-mqtt1, openhab-ui-"basic, openhab-misc-"openhabcloud, openhab-binding-tankerkoenig, openhab-misc-lsp", openhab-ui-habpanel", openhab-binding-sonos", openhab-transformation-"javascript, openhab-binding-astro, openhab-binding-homematic, openhab-ui-paper': Error:
        Error downloading mvn:org.openhab.binding/org.openhab.binding.homematic/2.2.0-SNAPSHOT

So none of the bindings are loading; openHAB is dead :frowning:

It’s super easy. So easy that I’ve immediately submitted PR #2857 to add it to more bindings! :smile:

1 Like

I think I’ll need to downgrade too :frowning:

2017-11-19 14:35:44.134 [ERROR] [org.eclipse.smarthome.io.rest.sse   ] - FrameworkEvent ERROR - org.eclipse.smarthome.io.rest.sse
org.osgi.framework.BundleException: Exception in org.eclipse.smarthome.io.rest.sse.internal.SseActivator.start() of bundle org.eclipse.smarthome.io.rest.sse.
	at org.eclipse.osgi.internal.framework.BundleContextImpl.startActivator(BundleContextImpl.java:795) [?:?]
	at org.eclipse.osgi.internal.framework.BundleContextImpl.start(BundleContextImpl.java:724) [?:?]
	at org.eclipse.osgi.internal.framework.EquinoxBundle.startWorker0(EquinoxBundle.java:932) [?:?]
	at org.eclipse.osgi.internal.framework.EquinoxBundle$EquinoxModule.startWorker(EquinoxBundle.java:309) [?:?]
	at org.eclipse.osgi.container.Module.doStart(Module.java:581) [?:?]
	at org.eclipse.osgi.container.Module.start(Module.java:449) [?:?]
	at org.eclipse.osgi.framework.util.SecureAction.start(SecureAction.java:470) [?:?]
	at org.eclipse.osgi.container.ModuleContainer.start(ModuleContainer.java:736) [?:?]
	at org.eclipse.osgi.container.ModuleContainer.applyDelta(ModuleContainer.java:727) [?:?]
	at org.eclipse.osgi.container.ModuleContainer.resolveAndApply(ModuleContainer.java:497) [?:?]
	at org.eclipse.osgi.container.ModuleContainer.resolve(ModuleContainer.java:443) [?:?]
	at org.eclipse.osgi.container.ModuleContainer.refresh(ModuleContainer.java:987) [?:?]
	at org.eclipse.osgi.container.ModuleContainer$ContainerWiring.dispatchEvent(ModuleContainer.java:1368) [?:?]
	at org.eclipse.osgi.container.ModuleContainer$ContainerWiring.dispatchEvent(ModuleContainer.java:1) [?:?]
	at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230) [?:?]
	at org.eclipse.osgi.framework.eventmgr.EventManager$EventThread.run(EventManager.java:340) [?:?]
Caused by: java.lang.LinkageError: ClassCastException: attempting to castbundleresource://41.fwk15105546/javax/ws/rs/ext/RuntimeDelegate.class to bundleresource://41.fwk15105546/javax/ws/rs/ext/RuntimeDelegate.class
	at javax.ws.rs.ext.RuntimeDelegate.findDelegate(RuntimeDelegate.java:146) ~[?:?]
	at javax.ws.rs.ext.RuntimeDelegate.getInstance(RuntimeDelegate.java:120) ~[?:?]
	at javax.ws.rs.core.MediaType.valueOf(MediaType.java:179) ~[?:?]
	at org.glassfish.jersey.media.sse.SseFeature.<clinit>(SseFeature.java:62) ~[?:?]
	at org.eclipse.smarthome.io.rest.sse.internal.SseActivator.start(SseActivator.java:44) ~[?:?]
	at org.eclipse.osgi.internal.framework.BundleContextImpl$3.run(BundleContextImpl.java:774) ~[?:?]
	at org.eclipse.osgi.internal.framework.BundleContextImpl$3.run(BundleContextImpl.java:1) ~[?:?]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:?]
	at org.eclipse.osgi.internal.framework.BundleContextImpl.startActivator(BundleContextImpl.java:767) ~[?:?]
	... 15 more
Exception in org.eclipse.smarthome.io.rest.sse.internal.SseActivator.start() of bundle org.eclipse.smarthome.io.rest.sse.

I’ve already removed /tmp/ and /cache/ as @lipp_markus showed. I’ve also restarted the openHAB service several times.
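
For reference, the clearing I mean looks roughly like this (a sketch; paths assume an apt-based install where userdata is /var/lib/openhab2):

sudo systemctl stop openhab2
sudo rm -rf /var/lib/openhab2/cache/* /var/lib/openhab2/tmp/*   # clear the Karaf cache and tmp so bundles get re-resolved
sudo systemctl start openhab2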

Besides, none of my Z-Wave devices respond, and no events are logged for my Item updates.

I also started the downgrade to #1075, but unfortunately the bindings are still not working, as they do not get downgraded.

But I have now found a temporary solution: downgrade/revert to #1078. Everything is working fine for me again. The start took pretty long, but after another restart everything normalized.

Btw, I am running stretch in an LXC container -…

I downgraded to #1082 (apt-get install openhab2=2.2.0~20171116035510-1). Downgrading to an earlier release produced other errors (I guess some of the new bindings depend on newer parts of OH). I’m OK now (except for the Sonos issue).
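
For anyone else wanting to pin a specific build, the version strings the repo still offers can be listed first (a sketch, using the package name from above):

apt-cache madison openhab2                               # list the version strings available in the repo
sudo apt-get install openhab2=2.2.0~20171116035510-1     # install the one matching the build you want (here: #1082)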

@toxiroxi, @Dries, @Dibbler42 and others with issues: please help analyse the issues you are seeing, as a plain new installation does not seem to show any of these problems. It would be great if you could try to reproduce your issues on a separate installation from scratch and create an issue with a step-by-step explanation of how to reproduce them.

@Kai I’m also having problems with my Docker install, and I kept it separate until I accidentally removed my production container :cry:

So my house just went dark, which is not good for the WAF…

I was curious: you mention #1083, but my Docker container is running (or trying to run) #1084.

2017-11-19 15:16:25.630 [SEVERE] [org.apache.karaf.main.Main] - Could not launch framework
java.lang.RuntimeException: Error installing bundle listed in startup.properties with url: mvn:org.apache.karaf.features/org.apache.karaf.features.extension/4.1.2 and startlevel: 1
        at org.apache.karaf.main.Main.installAndStartBundles(Main.java:540)
        at org.apache.karaf.main.Main.launch(Main.java:273)
        at org.apache.karaf.main.Main.main(Main.java:179)
Caused by: java.lang.RuntimeException: Could not resolve mvn:org.apache.karaf.features/org.apache.karaf.features.extension/4.1.2
        at org.apache.karaf.main.util.SimpleMavenResolver.resolve(SimpleMavenResolver.java:59)
        at org.apache.karaf.main.Main.installAndStartBundles(Main.java:532)
        ... 2 more

Hello Kai,

I will duplicate my container and do those steps again. But I just upgraded the package as always, coming from #1078 to the new version #1083.
I never experienced such issues before; I will see if I can find anything that gives you a hint as to why the upgrade is not working.
I am happy for now that my production instance is working again as usual after the downgrade (#1083 => #1075 => #1078).

I will keep you posted whenever I am able to find a reason for it. Generally the errors always come out of the Karaf module, which I don’t really have any knowledge about.

Here are the PRs for Zigbee and Z-Wave bindings.
CC: @chris

2 Likes

This clearly hints at the problem: the startup.properties file of the new distro references version 4.1.3, not 4.1.2.
So this seems to be a bug in how the Docker upgrades are done. Could you check?
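
A quick way to verify what the persisted file actually references (a sketch; the path assumes the Docker userdata volume is mounted at /openhab/userdata):

docker exec -it openhab grep features.extension /openhab/userdata/etc/startup.properties
# a stale upgrade would still print .../org.apache.karaf.features.extension/4.1.2 here instead of 4.1.3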

I found that one; it’s one of the folders that is not cleared when upgrading. I’m looking further.

@toxiroxi Sounds as if you are using Docker as well, so you might be suffering from the same issue as @martinvw.

I will mention that when doing Docker upgrades, the only way I’ve found to make them work consistently is to:

  • Make a backup of userdata
  • Delete the contents of userdata
  • Start the container
  • After it finishes coming up as a fresh install, stop OH
  • Copy over jsondb and any changed files in userdata/etc (do not blindly copy everything over, only files you have actually changed). Do a diff to make sure the format or other settings in those files have not changed. (Roughly sketched below.)

There is no automation built into the Docker image for upgrades, and as we all should know, you can’t just use the userdata from an old OH in a new build, particularly when there is a major update in the core like the switch to log4j2 or this latest update.
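
In shell terms, the list above amounts to roughly the following (a sketch only; it assumes the userdata volume lives at ./userdata on the host and the container is named openhab):

cp -a userdata userdata.bak           # 1. back up userdata
sudo rm -rf userdata/*                # 2. delete its contents
docker start openhab                  # 3. start the container and let it come up as a fresh install
docker stop openhab                   # 4. once startup has finished, stop OH again
cp -a userdata.bak/jsondb userdata/   # 5. restore jsondb ...
# ... then copy back only the files in userdata/etc you actually changed,
#     diffing each one against the freshly generated version first
docker start openhab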

I just removed the complete userdata folder (after making a backup); the first result was that I now get errors for all the add-ons in my addons folder. After disabling that folder as well, my log remains completely empty.

I hope my Docker setups are more broken than other installs, but even the most basic Docker setup is broken:

docker run -it --name openhab --net=host openhab/openhab:2.2.0-snapshot-amd64-debian

It first boots to an openHAB prompt, and then a newline or something is echoed and it stops responding. However, the java process keeps waiting in some futex according to strace. It also does not claim any of the regular ports, but it has a few streams open (see below). Are we having problems with the server we download our additional parts from? (@kai ??)

tcp6       0      0 127.0.0.1:42736         :::*                    LISTEN      19194/java      
unix  3      [ ]         STREAM     CONNECTED     6479442  19194/java          
unix  3      [ ]         STREAM     CONNECTED     6479441  19194/java          
unix  2      [ ]         STREAM     CONNECTED     6479429  19194/java          
unix  2      [ ]         STREAM     CONNECTED     6479436  19194/java          
unix  3      [ ]         STREAM     CONNECTED     6479439  19194/java          
unix  3      [ ]         STREAM     CONNECTED     6479440  19194/java        

I also do not recognize port 42736; it seems to vary, as now I have port 39818.

strace: Process 19194 attached
futex(0x7fac98c069d0, FUTEX_WAIT, 171, NULL
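
For reference, output like the above can be gathered along these lines (a sketch; since the container runs with --net=host, this is done on the host, and the PID is just an example):

sudo netstat -lpn | grep java   # list the sockets the java process has open
sudo strace -f -p 19194         # attach to the java process and watch what it is waiting on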

— edit

After a lot of patience, and the stream error already mentioned by @kai, the basic docker command succeeded. I’m now switching back to reviving my normal container…

I am not running Docker, but I am in an LXC container.

I could reproduce the error within the copy I made, but there are no error logs or anything being shown beyond what I have posted so far. I am not sure if I can really help here; I just did the upgrade via apt-get and nothing more.

Here: Debian stretch (x64), Java™ SE Runtime Environment (build 1.8.0_151-b12).
Build #1084, same error ("Internal error, please look at the server's logs." in VS Code).

The most interesting part is 'file:///q%3A/', which should be q:\ if I get it right.

Another question: the LSP port is configurable in VS Code, but where is it configured in openHAB 2?

20:57:29.408 [INFO ] [del.lsp.internal.MappingUriExtensions] - Path mapping could not be done for 'file:///q%3A/', leaving it untouched
20:57:29.443 [ERROR] [.eclipse.lsp4j.jsonrpc.RemoteEndpoint] - Internal error: java.lang.reflect.InvocationTargetException
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
        at org.eclipse.lsp4j.jsonrpc.services.GenericEndpoint.lambda$null$0(GenericEndpoint.java:53) ~[102:org.eclipse.lsp4j.jsonrpc:0.2.1.v20170706-0855]
        at org.eclipse.lsp4j.jsonrpc.services.GenericEndpoint.request(GenericEndpoint.java:105) [102:org.eclipse.lsp4j.jsonrpc:0.2.1.v20170706-0855]
        at org.eclipse.lsp4j.jsonrpc.RemoteEndpoint.handleRequest(RemoteEndpoint.java:203) [102:org.eclipse.lsp4j.jsonrpc:0.2.1.v20170706-0855]
        at org.eclipse.lsp4j.jsonrpc.RemoteEndpoint.consume(RemoteEndpoint.java:139) [102:org.eclipse.lsp4j.jsonrpc:0.2.1.v20170706-0855]
        at org.eclipse.lsp4j.jsonrpc.json.StreamMessageProducer.handleMessage(StreamMessageProducer.java:149) [102:org.eclipse.lsp4j.jsonrpc:0.2.1.v20170706-0855]
        at org.eclipse.lsp4j.jsonrpc.json.StreamMessageProducer.listen(StreamMessageProducer.java:77) [102:org.eclipse.lsp4j.jsonrpc:0.2.1.v20170706-0855]
        at org.eclipse.lsp4j.jsonrpc.json.ConcurrentMessageProcessor.run(ConcurrentMessageProcessor.java:84) [102:org.eclipse.lsp4j.jsonrpc:0.2.1.v20170706-0855]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:?]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:?]
        at java.lang.Thread.run(Thread.java:748) [?:?]
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[?:?]
        at org.eclipse.lsp4j.jsonrpc.services.GenericEndpoint.lambda$null$0(GenericEndpoint.java:51) ~[?:?]
        ... 11 more
Caused by: java.lang.NullPointerException
        at java.util.concurrent.ConcurrentHashMap.putVal(ConcurrentHashMap.java:1011) ~[?:?]
        at java.util.concurrent.ConcurrentHashMap.put(ConcurrentHashMap.java:1006) ~[?:?]
        at org.eclipse.xtext.resource.impl.ChunkedResourceDescriptions.setContainer(ChunkedResourceDescriptions.java:121) ~[?:?]
        at org.eclipse.xtext.ide.server.ProjectManager.lambda$createFreshResourceSet$5(ProjectManager.java:152) ~[?:?]
        at org.eclipse.xtext.xbase.lib.ObjectExtensions.operator_doubleArrow(ObjectExtensions.java:139) ~[?:?]
        at org.eclipse.xtext.ide.server.ProjectManager.createFreshResourceSet(ProjectManager.java:155) ~[?:?]
        at org.eclipse.xtext.ide.server.ProjectManager.lambda$newBuildRequest$4(ProjectManager.java:130) ~[?:?]
        at org.eclipse.xtext.xbase.lib.ObjectExtensions.operator_doubleArrow(ObjectExtensions.java:139) ~[?:?]
        at org.eclipse.xtext.ide.server.ProjectManager.newBuildRequest(ProjectManager.java:141) ~[?:?]
        at org.eclipse.xtext.ide.server.ProjectManager.doBuild(ProjectManager.java:111) ~[?:?]
        at org.eclipse.xtext.ide.server.ProjectManager.doInitialBuild(ProjectManager.java:107) ~[?:?]
        at org.eclipse.xtext.ide.server.BuildManager.doInitialBuild(BuildManager.java:138) ~[?:?]
        at org.eclipse.xtext.ide.server.WorkspaceManager.refreshWorkspaceConfig(WorkspaceManager.java:142) ~[?:?]
        at org.eclipse.xtext.ide.server.WorkspaceManager.initialize(WorkspaceManager.java:113) ~[?:?]
        at org.eclipse.xtext.ide.server.LanguageServerImpl.lambda$initialize$3(LanguageServerImpl.java:213) ~[?:?]
        at org.eclipse.xtext.ide.server.concurrent.RequestManager.runWrite(RequestManager.java:71) ~[?:?]
        at org.eclipse.xtext.ide.server.LanguageServerImpl.initialize(LanguageServerImpl.java:216) ~[?:?]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[?:?]
        at org.eclipse.lsp4j.jsonrpc.services.GenericEndpoint.lambda$null$0(GenericEndpoint.java:51) ~[?:?]
        ... 11 more

Udo, are you running stretch natively, or in a container/Docker/whatever environment?
Thanks,

Ok, tonight I gave it another try, with success.
In the file “org.apache.felix.eventadmin.impl.EventAdmin.cfg” (stored in userdata\openhab2\etc) I added a new parameter:

org.apache.felix.eventadmin.Timeout=0

If you don’t have that parameter (as in my case), it defaults to 5000 ms (source). If you set it to “0” there is no timeout. In my case, the OH core got “blacklisted” because: “If an event handler takes longer than the configured timeout to process an event, it is blacklisted. Once a handler is in a blacklist, it doesn’t get sent any events anymore.”
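
For an apt-based install, the same change can be scripted like this (a sketch; there userdata is /var/lib/openhab2, so adjust the path to your own layout):

CFG=/var/lib/openhab2/etc/org.apache.felix.eventadmin.impl.EventAdmin.cfg
echo "org.apache.felix.eventadmin.Timeout=0" | sudo tee -a "$CFG"   # 0 = no timeout; default is 5000 ms when the property is absent
sudo systemctl restart openhab2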

After setting this parameter, I upgraded back to #1084. On first boot, it took a very long time for OH to start. After 10 minutes, I initiated a restart. After that, OH started pretty quickly, without any errors.

So now I have a working #1084 build. :slight_smile:

I’m not sure if setting that parameter is “best practice”, so I wanted to check: do any of you have a specific value set for this parameter?

3 Likes

Sorry, I forgot to mention… I’m on a Xen VM. But I’m pretty sure that this detail is not the source of the strange path error.

I also tested with the URI set in VS Code instead of using the mapped Samba share ("\\ip.of.my.openhab\openhab-conf\"), and the message is slightly different: openHAB then complains about the URI containing authentication [string?].
And yes, the Samba share certainly is using authentication :slight_smile: