@toxiroxi Sounds as if you are using Docker as well, so you might be suffering from the same issue as @martinvw.
I will mention that when doing Docker upgrades, the only way I've found to make it work consistently is to:
- Make a backup of userdata
- Delete the contents of userdata
- Start the container
- After it finishes coming up as a fresh install, stop OH
- Copy over jsondb and any changed files in userdata/etc (do not blindly copy everything over, only files you have actually changed). Do a diff to make sure the format or other settings in those files have not changed.
There is no automation built into the Docker image for upgrades and, as we all should know, you can't just use the userdata from an old OH in a new build, particularly when there is a major update in the core like the switch to log4j2 and this latest update.
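For reference, the steps above could be sketched roughly like this. This is only a hedged sketch, not an official procedure: the userdata path and the container name `openhab` are assumptions for a typical volume-mounted setup, so adjust them to your own.

```shell
# Hedged sketch of the manual Docker upgrade steps above; paths and the
# container name are assumptions, not the official upgrade procedure.
USERDATA="${USERDATA:-/opt/openhab/userdata}"
BACKUP="${BACKUP:-/opt/openhab/userdata.bak}"

backup_and_wipe() {
  cp -a "$USERDATA" "$BACKUP"    # 1. make a backup of userdata
  rm -rf "${USERDATA:?}"/*       # 2. empty userdata (:? guards an unset var)
}

restore_changes() {
  # 5. copy back only jsondb and the etc files you actually changed,
  #    then diff to spot format or setting changes in the new version.
  cp -a "$BACKUP/jsondb" "$USERDATA/"
  diff -r "$BACKUP/etc" "$USERDATA/etc" || true
}

# Between the two functions:
#   docker start openhab   # 3. start; let it come up as a fresh install
#   docker stop openhab    # 4. then stop OH before restoring
```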
I just removed the complete userdata folder (after making a backup), the first result was that I now get errors for all the addons which are in my addons folder. After disabling that folder as well, my log remains completely empty.
I hope the Docker setups are more broken than others, but even the most basic Docker setup is broken:
docker run -it --name openhab --net=host openhab/openhab:2.2.0-snapshot-amd64-debian
It first boots to an openhab prompt, then a newline or something is echoed and it stops responding; the java process, however, keeps waiting on some futex according to strace. It also does not claim any of the regular ports, but it has some streams open. Are we having problems with the server we download our additional parts from? (@kai ??)
tcp6 0 0 127.0.0.1:42736 :::* LISTEN 19194/java
unix 3 [ ] STREAM CONNECTED 6479442 19194/java
unix 3 [ ] STREAM CONNECTED 6479441 19194/java
unix 2 [ ] STREAM CONNECTED 6479429 19194/java
unix 2 [ ] STREAM CONNECTED 6479436 19194/java
unix 3 [ ] STREAM CONNECTED 6479439 19194/java
unix 3 [ ] STREAM CONNECTED 6479440 19194/java
I also do not recognize port 42736; this seems to vary, as now I have port 39818.
strace: Process 19194 attached
futex(0x7fac98c069d0, FUTEX_WAIT, 171, NULL
Edit:
After a lot of patience and the stream error already mentioned by @kai, the basic docker command succeeded. I'm now switching back to reviving my normal container…
I am not running Docker, but I am on an LXC container.
I could reproduce the error within the copy I made, but there are no more error logs or anything being shown beyond what I have posted so far. I am not sure if I can really help here; I just did the upgrade via apt-get and nothing more.
Here: Debian Stretch (x64), Java™ SE Runtime Environment (build 1.8.0_151-b12)
Build #1084, same error ("Internal error, please look at the server's logs." in VS Code).
The most interesting part is 'file:///q%3A/', which should be q:\ if I get it right.
Another question: the LSP port is configurable in VS Code, but where can it be configured in openHAB2?
20:57:29.408 [INFO ] [del.lsp.internal.MappingUriExtensions] - Path mapping could not be done for 'file:///q%3A/', leaving it untouched
20:57:29.443 [ERROR] [.eclipse.lsp4j.jsonrpc.RemoteEndpoint] - Internal error: java.lang.reflect.InvocationTargetException
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
at org.eclipse.lsp4j.jsonrpc.services.GenericEndpoint.lambda$null$0(GenericEndpoint.java:53) ~[102:org.eclipse.lsp4j.jsonrpc:0.2.1.v20170706-0855]
at org.eclipse.lsp4j.jsonrpc.services.GenericEndpoint.request(GenericEndpoint.java:105) [102:org.eclipse.lsp4j.jsonrpc:0.2.1.v20170706-0855]
at org.eclipse.lsp4j.jsonrpc.RemoteEndpoint.handleRequest(RemoteEndpoint.java:203) [102:org.eclipse.lsp4j.jsonrpc:0.2.1.v20170706-0855]
at org.eclipse.lsp4j.jsonrpc.RemoteEndpoint.consume(RemoteEndpoint.java:139) [102:org.eclipse.lsp4j.jsonrpc:0.2.1.v20170706-0855]
at org.eclipse.lsp4j.jsonrpc.json.StreamMessageProducer.handleMessage(StreamMessageProducer.java:149) [102:org.eclipse.lsp4j.jsonrpc:0.2.1.v20170706-0855]
at org.eclipse.lsp4j.jsonrpc.json.StreamMessageProducer.listen(StreamMessageProducer.java:77) [102:org.eclipse.lsp4j.jsonrpc:0.2.1.v20170706-0855]
at org.eclipse.lsp4j.jsonrpc.json.ConcurrentMessageProcessor.run(ConcurrentMessageProcessor.java:84) [102:org.eclipse.lsp4j.jsonrpc:0.2.1.v20170706-0855]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:?]
at java.lang.Thread.run(Thread.java:748) [?:?]
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:?]
at org.eclipse.lsp4j.jsonrpc.services.GenericEndpoint.lambda$null$0(GenericEndpoint.java:51) ~[?:?]
... 11 more
Caused by: java.lang.NullPointerException
at java.util.concurrent.ConcurrentHashMap.putVal(ConcurrentHashMap.java:1011) ~[?:?]
at java.util.concurrent.ConcurrentHashMap.put(ConcurrentHashMap.java:1006) ~[?:?]
at org.eclipse.xtext.resource.impl.ChunkedResourceDescriptions.setContainer(ChunkedResourceDescriptions.java:121) ~[?:?]
at org.eclipse.xtext.ide.server.ProjectManager.lambda$createFreshResourceSet$5(ProjectManager.java:152) ~[?:?]
at org.eclipse.xtext.xbase.lib.ObjectExtensions.operator_doubleArrow(ObjectExtensions.java:139) ~[?:?]
at org.eclipse.xtext.ide.server.ProjectManager.createFreshResourceSet(ProjectManager.java:155) ~[?:?]
at org.eclipse.xtext.ide.server.ProjectManager.lambda$newBuildRequest$4(ProjectManager.java:130) ~[?:?]
at org.eclipse.xtext.xbase.lib.ObjectExtensions.operator_doubleArrow(ObjectExtensions.java:139) ~[?:?]
at org.eclipse.xtext.ide.server.ProjectManager.newBuildRequest(ProjectManager.java:141) ~[?:?]
at org.eclipse.xtext.ide.server.ProjectManager.doBuild(ProjectManager.java:111) ~[?:?]
at org.eclipse.xtext.ide.server.ProjectManager.doInitialBuild(ProjectManager.java:107) ~[?:?]
at org.eclipse.xtext.ide.server.BuildManager.doInitialBuild(BuildManager.java:138) ~[?:?]
at org.eclipse.xtext.ide.server.WorkspaceManager.refreshWorkspaceConfig(WorkspaceManager.java:142) ~[?:?]
at org.eclipse.xtext.ide.server.WorkspaceManager.initialize(WorkspaceManager.java:113) ~[?:?]
at org.eclipse.xtext.ide.server.LanguageServerImpl.lambda$initialize$3(LanguageServerImpl.java:213) ~[?:?]
at org.eclipse.xtext.ide.server.concurrent.RequestManager.runWrite(RequestManager.java:71) ~[?:?]
at org.eclipse.xtext.ide.server.LanguageServerImpl.initialize(LanguageServerImpl.java:216) ~[?:?]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:?]
at org.eclipse.lsp4j.jsonrpc.services.GenericEndpoint.lambda$null$0(GenericEndpoint.java:51) ~[?:?]
... 11 more
Udo, are you running Stretch natively, or in a container/Docker or some other environment?
Thanks,
Ok, tonight I gave it another try, with success.
In the file "org.apache.felix.eventadmin.impl.EventAdmin.cfg" (stored in userdata\openhab2\etc) I added a new parameter:
org.apache.felix.eventadmin.Timeout=0
If you don't have that parameter (as in my case), it defaults to 5000ms (source). If you set it to "0" there is no timeout. In my case, the OH core got "blacklisted" because: "If an event handler takes longer than the configured timeout to process an event, it is blacklisted. Once a handler is in a blacklist, it doesn't get sent any events anymore."
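If the parameter is missing, it can be appended idempotently. A small hedged sketch; the file path follows the post above and the helper name is made up:

```shell
# Append the EventAdmin timeout override only if it is not already present.
# A value of 0 disables the timeout; the default is 5000 (ms).
set_eventadmin_timeout() {
  cfg="$1"   # e.g. userdata/etc/org.apache.felix.eventadmin.impl.EventAdmin.cfg
  grep -q '^org\.apache\.felix\.eventadmin\.Timeout=' "$cfg" ||
    echo 'org.apache.felix.eventadmin.Timeout=0' >> "$cfg"
}
```

Running it twice leaves only one entry, so it is safe to re-run after upgrades.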
After setting this parameter, I upgraded back to #1084. On first boot, it took very long for OH to start. After 10 minutes, I initiated a restart. After that, OH started pretty quickly without any errors.
So now I have a working #1084 build.
Not sure if setting that parameter is "best practice". So I wanted to check: does one of you guys have a specific value set for this parameter?
Sorry, I forgot to mention… I'm on a Xen VM. But I'm pretty sure that this detail is not the source of the strange path error.
I also tested with the URI set in VS Code instead of using the mapped Samba share ("\\ip.of.my.openhab\openhab-conf\"), and the message is slightly different: openHAB then complains about the URI containing authentication [string?]
And yes, the Samba share is certainly using authentication.
Could you please enter an issue at Issues · eclipse-archived/smarthome · GitHub?
Another question: the LSP port is configurable in VS Code, but where can it be configured in openHAB2?
Not possible yet. I have created an issue for it.
Thanks, nice workaround, that got my #1084 working.
I had been stumbling over that for hours.
Btw, I am on Raspbian Jessie on a RPi3 …
For me, openHAB #1084 also fails to start properly. Besides the message that Paper UI couldn't be downloaded, there are other errors coming before that, so here is my log when starting:
2017-11-20 11:33:48.341 [ERROR] [core.karaf.internal.FeatureInstaller] - Failed installing 'openhab-package-"standard"': No matching features for openhab-package-"standard"/0.0.0
2017-11-20 11:33:49.056 [WARN ] [url.mvn.internal.AetherBasedResolver] - Error resolving artifact org.openhab.core:org.openhab.ui.paperui:jar:2.2.0-SNAPSHOT: [Could not find artifact org.openhab.core:org.openhab.ui.paperui:jar:2.2.0-SNAPSHOT]
java.io.IOException: Error resolving artifact org.openhab.core:org.openhab.ui.paperui:jar:2.2.0-SNAPSHOT: [Could not find artifact org.openhab.core:org.openhab.ui.paperui:jar:2.2.0-SNAPSHOT]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:720) [4:org.ops4j.pax.url.mvn:2.5.3]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:659) [4:org.ops4j.pax.url.mvn:2.5.3]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:600) [4:org.ops4j.pax.url.mvn:2.5.3]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:567) [4:org.ops4j.pax.url.mvn:2.5.3]
at org.apache.karaf.features.internal.download.impl.MavenDownloadTask.download(MavenDownloadTask.java:47) [9:org.apache.karaf.features.core:4.1.3]
at org.apache.karaf.features.internal.download.impl.AbstractRetryableDownloadTask.run(AbstractRetryableDownloadTask.java:60) [9:org.apache.karaf.features.core:4.1.3]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:?]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [?:?]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:?]
at java.lang.Thread.run(Thread.java:745) [?:?]
Suppressed: shaded.org.eclipse.aether.transfer.ArtifactNotFoundException: Could not find artifact org.openhab.core:org.openhab.ui.paperui:jar:2.2.0-SNAPSHOT
at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:434) [4:org.ops4j.pax.url.mvn:2.5.3]
at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts(DefaultArtifactResolver.java:246) [4:org.ops4j.pax.url.mvn:2.5.3]
at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifact(DefaultArtifactResolver.java:223) [4:org.ops4j.pax.url.mvn:2.5.3]
at shaded.org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveArtifact(DefaultRepositorySystem.java:294) [4:org.ops4j.pax.url.mvn:2.5.3]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:705) [4:org.ops4j.pax.url.mvn:2.5.3]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:659) [4:org.ops4j.pax.url.mvn:2.5.3]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:600) [4:org.ops4j.pax.url.mvn:2.5.3]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:567) [4:org.ops4j.pax.url.mvn:2.5.3]
at org.apache.karaf.features.internal.download.impl.MavenDownloadTask.download(MavenDownloadTask.java:47) [9:org.apache.karaf.features.core:4.1.3]
at org.apache.karaf.features.internal.download.impl.AbstractRetryableDownloadTask.run(AbstractRetryableDownloadTask.java:60) [9:org.apache.karaf.features.core:4.1.3]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:?]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [?:?]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:?]
at java.lang.Thread.run(Thread.java:745) [?:?]
Caused by: shaded.org.eclipse.aether.resolution.ArtifactResolutionException: Error resolving artifact org.openhab.core:org.openhab.ui.paperui:jar:2.2.0-SNAPSHOT
at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:444) ~[?:?]
at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts(DefaultArtifactResolver.java:246) ~[?:?]
at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifact(DefaultArtifactResolver.java:223) ~[?:?]
at shaded.org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveArtifact(DefaultRepositorySystem.java:294) ~[?:?]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:705) ~[?:?]
... 12 more
2017-11-20 11:33:49.116 [ERROR] [core.karaf.internal.FeatureInstaller] - Failed installing 'openhab-binding-astro", openhab-ui-"basic, openhab-persistence-"influxdb", openhab-ui-habmin", openhab-binding-"zwave, openhab-ui-habpanel, openhab-ui-paper, openhab-misc-"restdocs"': Error:
Error downloading mvn:org.openhab.core/org.openhab.ui.paperui/2.2.0-SNAPSHOT
2017-11-20 11:33:56.840 [INFO ] [el.core.internal.ModelRepositoryImpl] - Loading model 'influxdb.persist'
2017-11-20 11:34:04.320 [INFO ] [el.core.internal.ModelRepositoryImpl] - Loading model 'abwesenheit.rules'
2017-11-20 11:34:04.630 [INFO ] [thome.model.lsp.internal.ModelServer] - Language Server started on port 5007
2017-11-20 11:34:05.177 [INFO ] [el.core.internal.ModelRepositoryImpl] - Loading model 'test.sitemap'
2017-11-20 11:34:05.567 [INFO ] [el.core.internal.ModelRepositoryImpl] - Loading model 'astro.things'
2017-11-20 11:34:07.760 [INFO ] [.dashboard.internal.DashboardService] - Started dashboard at http://192.168.0.22:8080
2017-11-20 11:34:07.774 [INFO ] [.dashboard.internal.DashboardService] - Started dashboard at https://192.168.0.22:8443
I am running openHAB on a RPi3 as well. The versions I got are:
$ cat /var/lib/openhab2/etc/version.properties
openHAB Distribution Version Information
----------------------------------------
build-no : Build #1084
online-repo : https://openhab.jfrog.io/openhab/online-repo-snapshot/2.2
Repository Version
----------------------------------------
openhab-distro : 2.2.0-SNAPSHOT
smarthome : 0.9.0-SNAPSHOT
openhab-core : 2.2.0-SNAPSHOT
openhab1-addons : 1.11.0-SNAPSHOT
openhab2-addons : 2.2.0-SNAPSHOT
karaf : 4.1.3
It definitely is NOT a best practice. When any event processing takes longer than 5s, something is wrong and some piece of code blocks the thread, which is a bug. Changing this parameter and thus not setting a timeout merely ignores this bug, but as you say that it takes 10 minutes to start, it is still obvious that this is not the intended behavior. Unfortunately, we cannot see from the log which part of the code blocks. OSGiEventManager
only delegates, so it itself is most likely not the culprit. I have created an issue to address this in the future.
I added an issue for the "Error executing command: java.io.IOException: Stream Closed" error.
This is the part of my log (only info, not debug) where my #1084 system stopped working several times:
2017-11-20 10:42:37.929 [WARN ] [ore.internal.events.OSGiEventManager] - Dispatching event to subscriber 'org.eclipse.smarthome.io.monitor.internal.EventLogger@a0334a' takes more than 5000ms.
2017-11-20 10:42:37.931 [WARN ] [ore.internal.events.OSGiEventManager] - Dispatching event to subscriber 'org.eclipse.smarthome.io.monitor.internal.EventLogger@a0334a' takes more than 5000ms.
2017-11-20 10:42:37.932 [WARN ] [org.apache.karaf.services.eventadmin] - EventAdmin: Blacklisting ServiceReference [{org.osgi.service.event.EventHandler, org.eclipse.smarthome.core.events.EventPublisher}={event.topics=smarthome, component.name=org.eclipse.smarthome.core.internal.events.OSGiEventManager, component.id=62, service.id=158, service.bundleid=110, service.scope=bundle} | Bundle(org.eclipse.smarthome.core_0.9.0.201711182209 [110])] due to timeout!
2017-11-20 10:42:41.162 [WARN ] [org.apache.karaf.services.eventadmin] - EventAdmin: Blacklisting ServiceReference [{org.osgi.service.event.EventHandler, org.eclipse.smarthome.core.events.EventPublisher}={event.topics=smarthome, component.name=org.eclipse.smarthome.core.internal.events.OSGiEventManager, component.id=62, service.id=158, service.bundleid=110, service.scope=bundle} | Bundle(org.eclipse.smarthome.core_0.9.0.201711182209 [110])] due to timeout!
2017-11-20 10:42:41.174 [WARN ] [org.apache.karaf.services.eventadmin] - EventAdmin: Blacklisting ServiceReference [{org.osgi.service.event.EventHandler, org.eclipse.smarthome.core.events.EventPublisher}={event.topics=smarthome, component.name=org.eclipse.smarthome.core.internal.events.OSGiEventManager, component.id=62, service.id=158, service.bundleid=110, service.scope=bundle} | Bundle(org.eclipse.smarthome.core_0.9.0.201711182209 [110])] due to timeout!
2017-11-20 10:42:41.179 [WARN ] [org.apache.karaf.services.eventadmin] - EventAdmin: Blacklisting ServiceReference [{org.osgi.service.event.EventHandler, org.eclipse.smarthome.core.events.EventPublisher}={event.topics=smarthome, component.name=org.eclipse.smarthome.core.internal.events.OSGiEventManager, component.id=62, service.id=158, service.bundleid=110, service.scope=bundle} | Bundle(org.eclipse.smarthome.core_0.9.0.201711182209 [110])] due to timeout!
Yes, that's what I mean: these logs do not tell us anything. We need to do some proper tracking within the framework in order to be able to produce better logs in the future.
Seeing all those misplaced quotes makes me wonder whether something in the config parser is broken. My /var/lib/openhab2/etc/org.openhab.addons.cfg looks like this:
package = "standard"
ui = "basic,paper,habpanel,habmin"
remote = B"true"
persistence = "influxdb"
misc = "restdocs"
binding = "zwave,astro"
Looking at the samples in /etc/openhab2/services/addons.cfg led me to try removing the quotes from the file, but that just changed the error messages (no more quotes); the actual errors are still the same.
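For anyone else doing this by hand, the quotes can be stripped in one pass. A hedged sketch (it keeps a .bak backup; a stray value like `B"true"` would still need fixing manually, since it becomes `Btrue`):

```shell
# Strip all double quotes from a config file in place, keeping a backup.
strip_quotes() {
  sed -i.bak 's/"//g' "$1"
}

# Usage (the path assumes an apt-based openHAB 2 install):
#   strip_quotes /var/lib/openhab2/etc/org.openhab.addons.cfg
```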
Is there anything I can try to help fix this?
The quotes are definitely wrong (I upgraded to #1088 as well, and there are no quotes at all in my addons.cfg).
I think you have to delete the cache:
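A hedged sketch of clearing the cache. The paths assume a default apt-based openHAB 2 install with userdata under /var/lib/openhab2, and the service name openhab2 is an assumption too; stop openHAB before cleaning.

```shell
# Remove cached bundles and temp files; openHAB rebuilds them on next start.
OPENHAB_USERDATA="${OPENHAB_USERDATA:-/var/lib/openhab2}"

clean_cache() {
  rm -rf "${OPENHAB_USERDATA:?}/cache"/* "${OPENHAB_USERDATA:?}/tmp"/*
}

# Typical sequence (service name assumed):
#   sudo systemctl stop openhab2 && clean_cache && sudo systemctl start openhab2
```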
Thanks for reminding me. I already tried that a few times, but it seems I never did it at the same time as removing the quotes. So I always had either a bad config or cached data coming from the bad config. Doing both (clearing the cache and removing the quotes) got rid of the error about openhab-package-standard
not being installed, but I still have this left:
2017-11-21 19:18:43.016 [INFO ] [el.core.internal.ModelRepositoryImpl] - Loading model 'abwesenheit.rules'
2017-11-21 19:18:44.785 [INFO ] [.dashboard.internal.DashboardService] - Started dashboard at http://192.168.0.22:8080
2017-11-21 19:18:44.802 [INFO ] [.dashboard.internal.DashboardService] - Started dashboard at https://192.168.0.22:8443
2017-11-21 19:18:50.333 [INFO ] [el.core.internal.ModelRepositoryImpl] - Loading model 'influxdb.persist'
2017-11-21 19:18:51.191 [INFO ] [el.core.internal.ModelRepositoryImpl] - Loading model 'test.sitemap'
2017-11-21 19:18:57.012 [INFO ] [el.core.internal.ModelRepositoryImpl] - Loading model 'astro.things'
2017-11-21 19:18:57.191 [INFO ] [thome.model.lsp.internal.ModelServer] - Language Server started on port 5007
2017-11-21 19:19:03.081 [WARN ] [url.mvn.internal.AetherBasedResolver] - Error resolving artifact org.openhab.ui:org.openhab.ui.habmin:jar:2.2.0-SNAPSHOT: [Could not find artifact org.openhab.ui:org.openhab.ui.habmin:jar:2.2.0-SNAPSHOT]
java.io.IOException: Error resolving artifact org.openhab.ui:org.openhab.ui.habmin:jar:2.2.0-SNAPSHOT: [Could not find artifact org.openhab.ui:org.openhab.ui.habmin:jar:2.2.0-SNAPSHOT]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:720) [4:org.ops4j.pax.url.mvn:2.5.3]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:659) [4:org.ops4j.pax.url.mvn:2.5.3]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:600) [4:org.ops4j.pax.url.mvn:2.5.3]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:567) [4:org.ops4j.pax.url.mvn:2.5.3]
at org.apache.karaf.features.internal.download.impl.MavenDownloadTask.download(MavenDownloadTask.java:47) [9:org.apache.karaf.features.core:4.1.3]
at org.apache.karaf.features.internal.download.impl.AbstractRetryableDownloadTask.run(AbstractRetryableDownloadTask.java:60) [9:org.apache.karaf.features.core:4.1.3]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:?]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [?:?]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:?]
at java.lang.Thread.run(Thread.java:745) [?:?]
Suppressed: shaded.org.eclipse.aether.transfer.ArtifactNotFoundException: Could not find artifact org.openhab.ui:org.openhab.ui.habmin:jar:2.2.0-SNAPSHOT
at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:434) [4:org.ops4j.pax.url.mvn:2.5.3]
at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts(DefaultArtifactResolver.java:246) [4:org.ops4j.pax.url.mvn:2.5.3]
at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifact(DefaultArtifactResolver.java:223) [4:org.ops4j.pax.url.mvn:2.5.3]
at shaded.org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveArtifact(DefaultRepositorySystem.java:294) [4:org.ops4j.pax.url.mvn:2.5.3]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:705) [4:org.ops4j.pax.url.mvn:2.5.3]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:659) [4:org.ops4j.pax.url.mvn:2.5.3]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:600) [4:org.ops4j.pax.url.mvn:2.5.3]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:567) [4:org.ops4j.pax.url.mvn:2.5.3]
at org.apache.karaf.features.internal.download.impl.MavenDownloadTask.download(MavenDownloadTask.java:47) [9:org.apache.karaf.features.core:4.1.3]
at org.apache.karaf.features.internal.download.impl.AbstractRetryableDownloadTask.run(AbstractRetryableDownloadTask.java:60) [9:org.apache.karaf.features.core:4.1.3]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:?]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [?:?]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:?]
at java.lang.Thread.run(Thread.java:745) [?:?]
Caused by: shaded.org.eclipse.aether.resolution.ArtifactResolutionException: Error resolving artifact org.openhab.ui:org.openhab.ui.habmin:jar:2.2.0-SNAPSHOT
at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:444) ~[?:?]
at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts(DefaultArtifactResolver.java:246) ~[?:?]
at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifact(DefaultArtifactResolver.java:223) ~[?:?]
at shaded.org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveArtifact(DefaultRepositorySystem.java:294) ~[?:?]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:705) ~[?:?]
... 12 more
2017-11-21 19:19:03.136 [ERROR] [core.karaf.internal.FeatureInstaller] - Failed installing 'openhab-ui-basic, openhab-misc-restdocs, openhab-binding-astro, openhab-ui-habmin, openhab-persistence-influxdb, openhab-ui-habpanel, openhab-ui-paper, openhab-binding-zwave': Error:
Error downloading mvn:org.openhab.ui/org.openhab.ui.habmin/2.2.0-SNAPSHOT
The config now looks like this:
package = standard
ui = basic,paper,habpanel,habmin
remote = B"true"
persistence = influxdb
misc = restdocs
binding = zwave,astro
Any hint on how to proceed? I didn't write that config myself, openHAB did. So I don't know how any wrong quotes got in there in the first place.
This doesn't look right. Typo, or is that actually in your file? I have remote = true, not "B", and no quotes.
I don't know if this is related to your current problem.
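For comparison, assuming both the stray B and the quotes are typos, a cleaned-up version of that addons.cfg would read:

```
package = standard
ui = basic,paper,habpanel,habmin
remote = true
persistence = influxdb
misc = restdocs
binding = zwave,astro
```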