There seems to be some kind of issue (just a warning, though) with my item definition. I have two or three channels linked to a single item. Is this use case deprecated?
2018-11-22 19:51:01.838 [WARN ] [ore.common.registry.AbstractRegistry] - Cannot add "Metadata" with key "channel:aLivingCeilingBulb". It exists already from provider "GenericMetadataProvider"! Failed to add a second with the same UID from provider "GenericMetadataProvider"!
2018-11-22 19:51:01.839 [WARN ] [ore.common.registry.AbstractRegistry] - Cannot add "Metadata" with key "channel:aLivingCeilingBulb". It exists already from provider "GenericMetadataProvider"! Failed to add a second with the same UID from provider "GenericMetadataProvider"!
2018-11-22 19:51:01.844 [WARN ] [ore.common.registry.AbstractRegistry] - Cannot add "Metadata" with key "channel:aLivingFloorRightBulbs". It exists already from provider "GenericMetadataProvider"! Failed to add a second with the same UID from provider "GenericMetadataProvider"!
2018-11-22 19:51:01.845 [WARN ] [ore.common.registry.AbstractRegistry] - Cannot add "Metadata" with key "channel:aLivingFloorRightBulbs_switch". It exists already from provider "GenericMetadataProvider"! Failed to add a second with the same UID from provider "GenericMetadataProvider"!
2018-11-22 19:51:01.846 [WARN ] [ore.common.registry.AbstractRegistry] - Cannot add "Metadata" with key "channel:aLivingFloorLeftBulbs". It exists already from provider "GenericMetadataProvider"! Failed to add a second with the same UID from provider "GenericMetadataProvider"!
2018-11-22 19:51:01.847 [WARN ] [ore.common.registry.AbstractRegistry] - Cannot add "Metadata" with key "channel:aLivingFloorLeftBulbs_switch". It exists already from provider "GenericMetadataProvider"! Failed to add a second with the same UID from provider "GenericMetadataProvider"!
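For reference, this is the kind of definition I mean — an items-file sketch where one item is linked to two channels at once (the channel UIDs here are made up for illustration, not my real ones):

```
// Items-file sketch: a single item bound to two channels.
// Hypothetical Hue channel UIDs, for illustration only.
Dimmer aLivingCeilingBulb "Ceiling Bulb" <light>
    { channel="hue:0210:bridge:bulb1:brightness, hue:0210:bridge:bulb1:color_temperature" }
```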
I’m running 2.4 SNAPSHOT Build 1436 on Synology DSM (just upgraded from 2.3.0), and it is very slow (it was already slow on 2.3 when I started a month ago). I faced no upgrade issues, but I have now started investigating why it is so slow, and I want to share the first two strange behaviours I observed.
Here is my configuration:
openhab> info
Karaf
Karaf version 4.2.1
OSGi Framework org.eclipse.osgi-3.12.100.v20180210-1608
JVM
Java Virtual Machine Java HotSpot™ 64-Bit Server VM version 25.191-b12
Version 1.8.0_191
Vendor Oracle Corporation
Uptime 1 hour 6 minutes
Process CPU time 25 minutes
Process CPU load 0.06
System CPU load 0.11
Open file descriptors 259
Max file descriptors 4,096
Total compile time 47 minutes
Threads
Live threads 232
Daemon threads 101
Peak 251
Total started 4381
Memory
Current heap size 125,028 kbytes
Maximum heap size 160,384 kbytes
Committed heap size 160,384 kbytes
Pending objects 0
Garbage collector Name = 'Copy', Collections = 394, Time = 1 minute
Garbage collector Name = 'MarkSweepCompact', Collections = 34, Time = 24 minutes
Classes
Current classes loaded 20,931
Total classes loaded 22,800
Total classes unloaded 1,869
Operating system
Name Linux version 4.4.59+
Architecture aarch64
Processors 4
Quite rapidly after startup (and even after a while), it hangs and hits timeouts left and right, for example:
[ERROR] [.smarthome.model.script.actions.HTTP] - Fatal transport error: java.util.concurrent.TimeoutException: Total timeout 5000 ms elapsed
for HTTP clients, typically for the iCloud and CalDav bindings:
[DEBUG] [rg.eclipse.jetty.client.HttpExchange] - Failed HttpExchange@5385495f req=COMPLETED/java.util.concurrent.TimeoutException: Total timeout 10000 ms
for myopenHAB cloud reconnections:
[WARN ] [l.handler.ICloudAccountBridgeHandler] - Unable to refresh device data
java.io.IOException: java.util.concurrent.TimeoutException: Total timeout 10000 ms elapsed
and:
[INFO ] [io.openhabcloud.internal.CloudClient] - Disconnected from the openHAB Cloud service (UUID = 93001014-b2ca-43d5-b3e0-c2abdee9cfac, base URL = http://localhost:8080)
for pools (MariaDB with JDBC?):
[WARN ] [com.zaxxer.hikari.pool.HikariPool ] - 2m15s967ms724μs102ns - Thread starvation or clock leap detected (housekeeper delta=yank-default).
and for events dispatched to the logger:
[WARN ] [me.core.internal.events.EventHandler] - Dispatching event to subscriber 'org.eclipse.smarthome.io.monitor.internal.EventLogger@6668705f' takes more than 5000ms.
I don’t really know the origin of these problems, so anyone with insight on where to start the investigation is welcome. The only two strange behaviours I could clearly identify are as follows:
1/ OH endlessly tries to fetch a maven-metadata.xml that does not exist (instead of the maven-metadata-local.xml at https://openhab.jfrog.io:443/openhab/online-repo-snapshot/2.4/org/openhab/distro/openhab-addons-legacy/2.4.0-SNAPSHOT/)
This thread is the most CPU-consuming one. Here is the stack trace of features-3-thread-1:
openhab> threads 174
Thread 174 features-3-thread-1 WAITING
Stacktrace:
java.lang.Object.wait line: -2
java.lang.Object.wait line: 502
org.apache.karaf.features.internal.download.impl.MavenDownloadManager$MavenDownloader.await line: 101
org.apache.karaf.features.internal.region.Subsystem.downloadBundles line: 537
org.apache.karaf.features.internal.region.Subsystem.downloadBundles line: 452
org.apache.karaf.features.internal.region.SubsystemResolver.resolve line: 224
org.apache.karaf.features.internal.service.Deployer.deploy line: 388
org.apache.karaf.features.internal.service.FeaturesServiceImpl.doProvision line: 1025
org.apache.karaf.features.internal.service.FeaturesServiceImpl.lambda$doProvisionInThread$13 line: 964
org.apache.karaf.features.internal.service.FeaturesServiceImpl$$Lambda$379/1169167538.call line: -1
java.util.concurrent.FutureTask.run line: 266
java.util.concurrent.ThreadPoolExecutor.runWorker line: 1149
java.util.concurrent.ThreadPoolExecutor$Worker.run line: 624
java.lang.Thread.run line: 748
--> The underlying error (404 File Not Found) is only visible at DEBUG log level.
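For anyone who wants to see the underlying 404 themselves, the log level can be raised from the Karaf console (a sketch; I am assuming the Karaf features logger is the relevant one here):

```
openhab> log:set DEBUG org.apache.karaf.features
openhab> log:tail                                    # watch for the 404 on maven-metadata.xml
openhab> log:set DEFAULT org.apache.karaf.features   # restore the level afterwards
```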
2/ There are regular entries in the log about an nma action I don’t know anything about.
[DEBUG] [core.karaf.internal.FeatureInstaller] - Installed 'openhab-action-nma'
Does it reinstall the binding all the time? Why?
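To see whether the feature is really installed (rather than being re-installed on every run), this can be checked from the Karaf console — a sketch, exact output format may differ:

```
openhab> feature:list -i | grep nma
# Also worth checking: does an "action" line in $OPENHAB_CONF/services/addons.cfg
# (or an old config in userdata) still request nma?
```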
This is my first post, so my apologies if I am not in the right place and/or if the post is not very well formatted.
I hope these two points help improve the next deliveries.
You’re doing a great job, I love it!
This is related to the performance improvements in https://github.com/eclipse/smarthome/pull/6438. I think this needs some more research into how this use case can be supported. Nice find.
Would definitely need to be fixed for the 2.4.0 release (actually already for ESH 0.10.0, which is planned for Dec 7). Hope you can come up with a fix!
Hello,
I’ve given the M6 milestone a try. Good work from the team, as always!
Here are some findings after launching openHAB (or did I simply miss a step?):
17:46:08.433 [WARN ] [.internal.service.FeaturesServiceImpl] - Can’t load features repository mvn:org.openhab.distro/distro/2.4.0-SNAPSHOT/xml/features
java.lang.RuntimeException: Error resolving artifact org.openhab.distro:distro:xml:features:2.4.0-SNAPSHOT: [Could not find artifact org.openhab.distro:distro:xml:features:2.4.0-SNAPSHOT in openhab (https://openhab.jfrog.io/openhab/online-repo-snapshot/2.4/)] : mvn:org.openhab.distro/distro/2.4.0-SNAPSHOT/xml/features
at org.apache.karaf.features.internal.service.RepositoryImpl.load(RepositoryImpl.java:116) ~[?:?]
at org.apache.karaf.features.internal.service.RepositoryImpl.(RepositoryImpl.java:50) ~[?:?]
at org.apache.karaf.features.internal.service.RepositoryCacheImpl.create(RepositoryCacheImpl.java:51) ~[?:?]
at org.apache.karaf.features.internal.service.FeaturesServiceImpl.getFeatureCache(FeaturesServiceImpl.java:593) ~[?:?]
at org.apache.karaf.features.internal.service.FeaturesServiceImpl.listInstalledFeatures(FeaturesServiceImpl.java:643) ~[?:?]
at org.openhab.core.karaf.internal.FeatureInstaller.installFeature(FeatureInstaller.java:450) ~[?:?]
at org.openhab.core.karaf.internal.FeatureInstaller.installPackage(FeatureInstaller.java:491) ~[?:?]
at org.openhab.core.karaf.internal.FeatureInstaller.lambda$2(FeatureInstaller.java:163) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:?]
at java.lang.Thread.run(Thread.java:748) [?:?]
Caused by: java.io.IOException: Error resolving artifact org.openhab.distro:distro:xml:features:2.4.0-SNAPSHOT: [Could not find artifact org.openhab.distro:distro:xml:features:2.4.0-SNAPSHOT in openhab (https://openhab.jfrog.io/openhab/online-repo-snapshot/2.4/)]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:720) ~[?:?]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:659) ~[?:?]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:600) ~[?:?]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:567) ~[?:?]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:557) ~[?:?]
at org.ops4j.pax.url.mvn.internal.Connection.getInputStream(Connection.java:123) ~[?:?]
at java.net.URL.openStream(URL.java:1045) ~[?:?]
at org.apache.karaf.features.internal.service.RepositoryImpl.load(RepositoryImpl.java:113) ~[?:?]
… 10 more
Suppressed: shaded.org.eclipse.aether.transfer.ArtifactNotFoundException: Could not find artifact org.openhab.distro:distro:xml:features:2.4.0-SNAPSHOT in openhab (https://openhab.jfrog.io/openhab/online-repo-snapshot/2.4/)
at shaded.org.eclipse.aether.connector.basic.ArtifactTransportListener.transferFailed(ArtifactTransportListener.java:39) ~[?:?]
at shaded.org.eclipse.aether.connector.basic.BasicRepositoryConnector$TaskRunner.run(BasicRepositoryConnector.java:355) ~[?:?]
at shaded.org.eclipse.aether.util.concurrency.RunnableErrorForwarder$1.run(RunnableErrorForwarder.java:67) ~[?:?]
at shaded.org.eclipse.aether.connector.basic.BasicRepositoryConnector$DirectExecutor.execute(BasicRepositoryConnector.java:581) ~[?:?]
at shaded.org.eclipse.aether.connector.basic.BasicRepositoryConnector.get(BasicRepositoryConnector.java:249) ~[?:?]
at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.performDownloads(DefaultArtifactResolver.java:520) ~[?:?]
at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:421) ~[?:?]
at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts(DefaultArtifactResolver.java:246) ~[?:?]
at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifact(DefaultArtifactResolver.java:223) ~[?:?]
at shaded.org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveArtifact(DefaultRepositorySystem.java:294) ~[?:?]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:705) ~[?:?]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:659) ~[?:?]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:600) ~[?:?]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:567) ~[?:?]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:557) ~[?:?]
at org.ops4j.pax.url.mvn.internal.Connection.getInputStream(Connection.java:123) ~[?:?]
at java.net.URL.openStream(URL.java:1045) ~[?:?]
at org.apache.karaf.features.internal.service.RepositoryImpl.load(RepositoryImpl.java:113) ~[?:?]
at org.apache.karaf.features.internal.service.RepositoryImpl.(RepositoryImpl.java:50) ~[?:?]
at org.apache.karaf.features.internal.service.RepositoryCacheImpl.create(RepositoryCacheImpl.java:51) ~[?:?]
at org.apache.karaf.features.internal.service.FeaturesServiceImpl.getFeatureCache(FeaturesServiceImpl.java:593) ~[?:?]
at org.apache.karaf.features.internal.service.FeaturesServiceImpl.listInstalledFeatures(FeaturesServiceImpl.java:643) ~[?:?]
at org.openhab.core.karaf.internal.FeatureInstaller.installFeature(FeatureInstaller.java:450) ~[?:?]
at org.openhab.core.karaf.internal.FeatureInstaller.installPackage(FeatureInstaller.java:491) ~[?:?]
at org.openhab.core.karaf.internal.FeatureInstaller.lambda$2(FeatureInstaller.java:163) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:?]
at java.lang.Thread.run(Thread.java:748) [?:?]
Caused by: shaded.org.eclipse.aether.resolution.ArtifactResolutionException: Error resolving artifact org.openhab.distro:distro:xml:features:2.4.0-SNAPSHOT
at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:444) ~[?:?]
at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts(DefaultArtifactResolver.java:246) ~[?:?]
at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifact(DefaultArtifactResolver.java:223) ~[?:?]
at shaded.org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveArtifact(DefaultRepositorySystem.java:294) ~[?:?]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:705) ~[?:?]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:659) ~[?:?]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:600) ~[?:?]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:567) ~[?:?]
at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:557) ~[?:?]
at org.ops4j.pax.url.mvn.internal.Connection.getInputStream(Connection.java:123) ~[?:?]
at java.net.URL.openStream(URL.java:1045) ~[?:?]
at org.apache.karaf.features.internal.service.RepositoryImpl.load(RepositoryImpl.java:113) ~[?:?]
… 10 more
17:46:08.537 [ERROR] [.core.karaf.internal.FeatureInstaller] - Failed installing 'openhab-package-expert': No matching features for openhab-package-expert/0
and also
17:46:21.221 [ERROR] [.core.karaf.internal.FeatureInstaller] - Failed installing 'openhab-misc-restdocs': Unable to resolve root: missing requirement [root] osgi.identity; osgi.identity=openhab-runtime-base; type=karaf.feature; version="[2.4.0.SNAPSHOT,2.4.0.SNAPSHOT]"; filter:="(&(osgi.identity=openhab-runtime-base)(type=karaf.feature)(version>=2.4.0.SNAPSHOT)(version<=2.4.0.SNAPSHOT))"
17:46:22.808 [WARN ] [ome.core.internal.events.EventHandler] - Dispatching event to subscriber 'org.eclipse.smarthome.io.monitor.internal.EventLogger@231e47de' takes more than 5000ms.
my system info:
openhab> info
Karaf
Karaf version 4.2.1
Karaf home /volume1/@appstore/openHAB/runtime
Karaf base /volume1/public/openHAB/userdata
OSGi Framework org.eclipse.osgi-3.12.100.v20180210-1608
JVM
Java Virtual Machine Java HotSpot(TM) 64-Bit Server VM version 25.191-b12
Version 1.8.0_191
Vendor Oracle Corporation
Pid 11016
Uptime 22 minutes
Process CPU time 7 minutes
Process CPU load 0.04
System CPU load 0.31
Open file descriptors 268
Max file descriptors 4,096
Total compile time 4 minutes
Threads
Live threads 181
Daemon threads 91
Peak 232
Total started 1851
Memory
Current heap size 88,614 kbytes
Maximum heap size 2,023,424 kbytes
Committed heap size 126,976 kbytes
Pending objects 0
Garbage collector Name = 'G1 Young Generation', Collections = 222, Time = 5.294 seconds
Garbage collector Name = 'G1 Old Generation', Collections = 0, Time = 0.000 seconds
Classes
Current classes loaded 18,354
Total classes loaded 18,405
Total classes unloaded 51
Operating system
Name Linux version 3.10.105
Architecture amd64
Processors 4
This is the third time I have seen this (once on a regular Linux host, twice on DSM).
For the DSM users: you need to check the files in userdata to see why there are old configs there pointing to snapshot repos.
It seems that following 2.3.0.5 you deployed 2.4 snapshots using various methods, since the spk packages were not being renewed.
Now you are trying to use the same methods to deploy 2.4.0.M6 (understandable, since you want the latest and greatest and can’t wait for an "official" release), but I don’t know what could have gone wrong during the upgrade process; I don’t own a Synology NAS to test this stuff out.
I would try to clean up the userdata subfolders as much as possible and re-deploy 2.4.0.M6.
I tried to reproduce your problem, but I can’t. It might be specific to the Hue binding, but I can’t test that. I suspect the context in which these messages appear doesn’t cause any actual problems. It looks like it tries to register the channel as metadata. Metadata should have a unique key, so the message itself seems correct, but I don’t think it should be doing anything with the channel, and I don’t know why it does. Therefore, could you give some more feedback:
Did you upgrade from another openHAB version, and if so, which one?
If you upgraded, can you clear your cache and restart to see if the message persists?
If the message keeps showing when starting (or, I guess, when reloading the items page), can you enable TRACE log level for org.eclipse.smarthome.core and provide me the log?
Maybe this will give some more insight into why this happens.
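For the record, the usual way to do the cache-clearing and log-level steps above — a sketch, assuming a package-based Linux install (paths and service name may differ on your setup):

```
# Clear the cache (stop openHAB first)
sudo systemctl stop openhab2
sudo openhab-cli clean-cache      # or manually empty userdata/cache and userdata/tmp
sudo systemctl start openhab2

# Then raise the log level from the Karaf console:
#   openhab> log:set TRACE org.eclipse.smarthome.core
```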
Fast implementation! Love it. (Off-topic for M6, since I am also running the snapshot.)
root@homer:/var/log/openhab2# apt-get upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
openhab2
1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 73.5 MB of archives.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 https://openhab.jfrog.io/openhab/openhab-linuxpkg/ unstable/main openhab2 all 2.4.0~S1442-1 [73.5 MB]
Fetched 73.5 MB in 7s (9,788 kB/s)
Reading changelogs... Done
(Reading database ... 65897 files and directories currently installed.)
Preparing to unpack .../openhab2_2.4.0~S1442-1_all.deb ...
Unpacking openhab2 (2.4.0~S1442-1) over (2.4.0~S1441-1) ...
Processing triggers for systemd (215-17+deb8u8) ...
Setting up openhab2 (2.4.0~S1442-1) ...
Listing important changes for version 2.4.0:
Warning: Astro Binding: The 'kilometer' and 'miles' channels have been replaced by a new 'distance' channel
Warning: Jeelink Binding: The 'currentWatt' and 'maxWatt' channels have been replaced with 'currentPower' and 'maxPower' channels
Warning: WeatherUnderground Binding: A bridge has been added on top of the current things, you need to add a bridge containing your api-key.
Warning: ZWave Binding: Major changes have been merged to support features such as security. All things must be deleted and re-added. Refer to https://community.openhab.org/t/zwave-binding-updates/51080 for further information.
Warning: Synop Binding is now using UoM. 'wind-speed-ms' and 'wind-speed-knots' channels have been replaced by a single 'wind-speed' channel.
root@homer:/var/log/openhab2# apt-cache madison openhab2
openhab2 | 2.4.0~S1442-1 | https://openhab.jfrog.io/openhab/openhab-linuxpkg/ unstable/main amd64 Packages
openhab2 | 2.4.0~S1441-1 | https://openhab.jfrog.io/openhab/openhab-linuxpkg/ unstable/main amd64 Packages
openhab2 | 2.4.0~S1440-1 | https://openhab.jfrog.io/openhab/openhab-linuxpkg/ unstable/main amd64 Packages
openhab2 | 2.4.0~20181122162220-1 | https://openhab.jfrog.io/openhab/openhab-linuxpkg/ unstable/main amd64 Packages
openhab2 | 2.4.0~20181122141057-1 | https://openhab.jfrog.io/openhab/openhab-linuxpkg/ unstable/main amd64 Packages
openhab2 | 2.4.0~20181122121219-1 | https://openhab.jfrog.io/openhab/openhab-linuxpkg/ unstable/main amd64 Packages
openhab2 | 2.4.0~20181122105226-1 | https://openhab.jfrog.io/openhab/openhab-linuxpkg/ unstable/main amd64 Packages
openhab2 | 2.4.0~20181122034436-1 | https://openhab.jfrog.io/openhab/openhab-linuxpkg/ unstable/main amd64 Packages
openhab2 | 2.4.0~20181121231425-1 | https://openhab.jfrog.io/openhab/openhab-linuxpkg/ unstable/main amd64 Packages
openhab2 | 2.4.0~20181121003414-1 | https://openhab.jfrog.io/openhab/openhab-linuxpkg/ unstable/main amd64 Packages
openhab2 | 2.4.0~20181119183855-1 | https://openhab.jfrog.io/openhab/openhab-linuxpkg/ unstable/main amd64 Packages
openhab2 | 2.4.0~20181118113913-1 | https://openhab.jfrog.io/openhab/openhab-linuxpkg/ unstable/main amd64 Packages
openhab2 | 2.4.0~20181117214013-1 | https://openhab.jfrog.io/openhab/openhab-linuxpkg/ unstable/main amd64 Packages
openhab2 | 2.4.0~20181117181319-1 | https://openhab.jfrog.io/openhab/openhab-linuxpkg/ unstable/main amd64 Packages
openhab2 | 2.4.0~20181117002418-1 | https://openhab.jfrog.io/openhab/openhab-linuxpkg/ unstable/main amd64 Packages
openhab2 | 2.4.0~20181115123158-1 | https://openhab.jfrog.io/openhab/openhab-linuxpkg/ unstable/main amd64 Packages
openhab2 | 2.3.0-1 | https://openhab.jfrog.io/openhab/openhab-linuxpkg/ unstable/main amd64 Packages
openhab2 | 2.3.0~RC2-1 | https://openhab.jfrog.io/openhab/openhab-linuxpkg/ unstable/main amd64 Packages
openhab2 | 2.3.0~RC1-1 | https://openhab.jfrog.io/openhab/openhab-linuxpkg/ unstable/main amd64 Packages
openhab2 | 2.2.0-1 | https://openhab.jfrog.io/openhab/openhab-linuxpkg/ unstable/main amd64 Packages
openhab2 | 2.1.0-2 | https://openhab.jfrog.io/openhab/openhab-linuxpkg/ unstable/main amd64 Packages
root@homer:/var/log/openhab2#
PS: @Benjy, why keep the old stuff from 2.1, 2.2, and the 2.3 RCs in the unstable repo?
Hi to all.
I hope I can ask this here.
After changing from openHAB 2.3 stable to 2.4 M5, I got this error and a crash:
org.openhab.io.net.http.HttpUtil ] - Fatal transport error: java.net.SocketException: Too many open files
2018-11-04 02:08:31.717 [ERROR] [ab.binding.http.internal.HttpBinding] - No response received from:.
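A quick way to check whether the process is actually close to its file-descriptor limit before it crashes — just a sketch, assuming a Linux host and that the process command line matches "openhab" (adjust the pattern as needed):

```shell
# Shell's own soft limit for open files (new processes inherit something similar)
ulimit -n

# Open descriptors vs. limit for the running openHAB process, if any
PID=$(pgrep -f openhab | head -n1)
if [ -n "$PID" ]; then
  echo "open fds: $(ls /proc/"$PID"/fd 2>/dev/null | wc -l)"
  echo "limit:    $(awk '/Max open files/ {print $4}' /proc/"$PID"/limits)"
else
  echo "no openhab process found"
fi
```

If the open-fd count keeps creeping toward the limit, something is leaking sockets or file handles.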
I am happy that I could downgrade again, and 2.3 runs great. But I am very interested in some of the new options.
Now the question is: was there a change in the HTTP binding, or a fix for this error, that is now in M6?
Thanks for help and greetings,
Markus.
Hi Dim.
Yes, the first error comes from a Hue bridge. This one has motion sensors.
The Java version is Runtime Environment (Zulu 8.33.0.1-linux64) (build 1.8.0_192-b01).