Were any of you using the Preview and Beta: Amazon Echo Control version of the binding (amazonechocontrol_2.5.0.-2019-10-23.jar)? The “amazonechocontrol-2.5.6-fixleaks.jar” version doesn’t seem to allow any of my Amazon/Echo “Smart Home Devices” to function as they normally do with the 2019-10-23 Preview and Beta .jar file.
I’m running amazonechocontrol:2.5.0.Beta_07_Preview_1 jar on OH 2.4 (Build #1412). Like you found, the new “fixleaks” jar does not work in my installation. If you’ve found a workaround then please share the details.
The memory leak that occurs with this bug has a big appetite. I’m losing about 34MB per hour on my OH 2.4 running amazonechocontrol:2.5.0.Beta_07_Preview_1.
It takes about half a day to consume all the available memory on my RPi. I’m tempted to run a cron-initiated reboot three times a day to buy some time while this problem is sorted out. Easy to do, but this kind of workaround would never win any awards for elegance.
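For anyone tempted by the same stopgap, a root crontab entry along these lines would do it. This is a sketch only: the reboot times and the /sbin/reboot path are my assumptions, and it would be added via `sudo crontab -e`.

```shell
# Blunt workaround, not a fix: reboot at 06:00, 14:00 and 22:00
# to reclaim the memory the binding leaks.
0 6,14,22 * * * /sbin/reboot
```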
After applying the latest Amazon .jar workaround, the following shows up in my logs. It’s a different error message; maybe related, maybe helpful?
2020-06-16 22:43:43.450 [WARN ] [trol.internal.handler.AccountHandler] - handling of websockets fails
org.openhab.binding.amazonechocontrol.internal.HttpException: GET url 'https://alexa.amazon.com/api/activities?startTime=1592304223045&size=10&offset=1' failed: Bad Request
at org.openhab.binding.amazonechocontrol.internal.Connection.makeRequest(Connection.java:644) ~[bundleFile:?]
at org.openhab.binding.amazonechocontrol.internal.Connection.makeRequestAndReturnString(Connection.java:499) ~[bundleFile:?]
at org.openhab.binding.amazonechocontrol.internal.Connection.makeRequestAndReturnString(Connection.java:494) ~[bundleFile:?]
at org.openhab.binding.amazonechocontrol.internal.Connection.getActivities(Connection.java:938) ~[bundleFile:?]
at org.openhab.binding.amazonechocontrol.internal.handler.AccountHandler.handlePushActivity(AccountHandler.java:773) ~[bundleFile:?]
at org.openhab.binding.amazonechocontrol.internal.handler.AccountHandler.handleWebsocketCommand(AccountHandler.java:718) ~[bundleFile:?]
at org.openhab.binding.amazonechocontrol.internal.handler.AccountHandler.webSocketCommandReceived(AccountHandler.java:706) [bundleFile:?]
at org.openhab.binding.amazonechocontrol.internal.WebSocketConnection$AmazonEchoControlWebSocket.onWebSocketBinary(WebSocketConnection.java:417) [bundleFile:?]
at sun.reflect.GeneratedMethodAccessor112.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_201]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_201]
at org.eclipse.jetty.websocket.common.events.annotated.CallableMethod.call(CallableMethod.java:70) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.websocket.common.events.annotated.OptionalSessionCallableMethod.call(OptionalSessionCallableMethod.java:72) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.websocket.common.events.JettyAnnotatedEventDriver.onBinaryMessage(JettyAnnotatedEventDriver.java:133) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.websocket.common.message.SimpleBinaryMessage.messageComplete(SimpleBinaryMessage.java:68) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.websocket.common.events.AbstractEventDriver.appendMessage(AbstractEventDriver.java:65) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.websocket.common.events.JettyAnnotatedEventDriver.onBinaryFrame(JettyAnnotatedEventDriver.java:125) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.websocket.common.events.AbstractEventDriver.incomingFrame(AbstractEventDriver.java:145) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.websocket.common.WebSocketSession.incomingFrame(WebSocketSession.java:321) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.websocket.common.extensions.ExtensionStack.incomingFrame(ExtensionStack.java:202) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.websocket.common.Parser.notifyFrame(Parser.java:226) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.websocket.common.Parser.parseSingleFrame(Parser.java:262) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.websocket.common.io.AbstractWebSocketConnection.readParse(AbstractWebSocketConnection.java:582) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.websocket.common.io.AbstractWebSocketConnection.onFillable(AbstractWebSocketConnection.java:441) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.websocket.common.io.AbstractWebSocketConnection.onFillable(AbstractWebSocketConnection.java:428) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint.onFillable(SslConnection.java:426) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:320) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.io.ssl.SslConnection$2.succeeded(SslConnection.java:158) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:367) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:782) [bundleFile:9.4.20.v20190813]
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:918) [bundleFile:9.4.20.v20190813]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_201]
NOTE: When it was originally installed it would not start because of a dependency issue. I had to run the following before it would actually start and become active. This might have been because OH 2.4 didn’t have what was needed by the latest Echo Control .jar file? I was running a very old version prior to loading this one.
From the image above, it would appear that I now have two libs offering the same services. Not sure if that’s a problem, but all appears to be OK and all of my automation is functioning as normal. The fact that the new one is showing “Resolved” and not “Active” like the other version (2.7.0) might suggest that 2.7.0 could still be providing the necessary services while 2.8.5 is lying dormant, just satisfying the requirement for the new Echo Control .jar file to be brought online?
Hope some of this helps you. My system has now been stable with no obvious errors in the logs and no noticeable memory leaks.
Same here. I can confirm a relationship between the amazonechocontrol binding and an “out of memory” issue after a couple of hours. I also lost Things and had to restore from backup, which I believe was due to an irregular logout from openHAB; a regular logout was impossible after the “out of memory” issue appeared. Windows 10, openHAB 2.5.5 snapshot #111. I have changed nothing here during the period of occurrence, which started a couple of days ago.
I had exactly the same problem.
About 5000+ threads after some hours, all hanging, nearly nothing working (timeouts everywhere):
After uninstalling the Amazon binding first and then installing the mentioned updated add-on, everything worked.
ONE IMPORTANT NOTE TO ADD:
You need to clean the cache after uninstalling the amazonechocontrol binding and before installing the new one!
sudo openhab-cli clean-cache
Since then it has stayed stable at about ~300 threads for hours.
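If you want to check the figure yourself, `ps` can print a process’s thread count via its nlwp column. A minimal sketch; the pgrep pattern for finding the openHAB PID is an assumption, and the current shell’s own PID stands in here so the line runs anywhere:

```shell
# nlwp = number of lightweight processes (threads) the process holds.
# For openHAB, substitute e.g. "$(pgrep -f openhab | head -n1)" for "$$".
ps -o nlwp= -p "$$"
```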
I tried uninstalling the binding from Paper UI, then installing it via the CLI, but all my Things and Items were no longer matched with the binding. I have 19 Echos and didn’t want to recreate them. Is there a fix/workaround for this?
@dotelpenguin
On my headless RPi running Raspbian (not openHABian) I too found the binding reverting to the release version after a restart of OH.
My fix was:
1. Stop the release version of the binding from within the console.
2. Uninstall the release binding from the console.
3. Stop OH.
4. Download the new .jar into the addons folder. I did this from an SSH terminal session on a Windows PC, changing directory to /usr/share/openhab2/addons and then using wget (sudo wget) followed by the full URL of the new .jar.
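In shell terms the download step looks roughly like this. A sketch only: the path assumes an apt-based openHAB 2 install, and the URL is the fix-leaks-fix-connection jar posted later in this thread.

```shell
# Run after stopping OH: fetch the patched binding into the addons folder,
# from where openHAB will pick it up on the next start.
cd /usr/share/openhab2/addons
sudo wget https://janessa.me/esh/org.openhab.binding.amazonechocontrol-2.5.6-fix-leaks-fix-connection.jar
```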
3rd day, 3rd attempt. Today I removed the add-on from Paper UI, stopped the OH service, cleaned the cache, placed the .jar in the addons folder, and rebooted.
Earlier, I tried bundle:update, first to org.openhab.binding.amazonechocontrol-2.5.6-fixleaks.jar and then to org.openhab.binding.amazonechocontrol-2.5.6-fix-leaks-fix-connection.jar (shown as 2.5.6.202006152137 in bundle:list). I also rebooted and re-verified that the correct version of the add-on was running, but was still experiencing memory leaks.
I’ll track this today and see if it’s any better.
Thanks for all the work on getting this resolved!
*Editing to add that I’m already seeing low memory on my Pi. Before, I was starting around 240MB free; after this update I’m already down to 35. What do most folks run for free memory?
openhabian@openHABianPi:~ $ free -h
total used free shared buff/cache available
Mem: 975M 578M 35M 6.5M 362M 339M
Swap: 99M 0B 99M
openhabian@openHABianPi:~ $
Well, I did a second reboot and it seems to have corrected the original memory issue. I figured the cache cleanup may have grabbed extra memory. I’ve got 274MB free currently, and I’ve been running on a 3B+ for about 1.5 years now too.
openhabian@openHABianPi:~ $ free -h
total used free shared buff/cache available
Mem: 975M 470M 274M 12M 231M 443M
Swap: 99M 0B 99M
openhabian@openHABianPi:~ $
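If it helps anyone else watching for the leak, the free figure can be logged with a timestamp rather than eyeballed from `free -h`. A minimal sketch; cron it every few minutes and redirect the output to a file of your choice:

```shell
# One timestamped reading of free memory in MB, parsed from `free -m`
echo "$(date '+%F %T') $(free -m | awk '/^Mem:/ {print $4}') MB free"
```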
For me, the following gave a solution right after J-N-K posted the latest fix (I only added “bundle:stop xxx” to the post from binderth above):
Open a console with > 100 columns.
(I just learned that Karaf cuts lines at the console window’s end, so if “grep -i …” doesn’t show a result, your console window may not be wide enough.)
and run openhab-cli:
sudo openhab-cli console
... some password and message stuff follows ...
openhab> bundle:list |grep -i amazon
228 │ Active │ 80 │ 2.5.5 │ openHAB Add-ons :: Bundles :: Amazon Echo Control Binding
openhab> bundle:stop 228
openhab> bundle:update 228 https://janessa.me/esh/org.openhab.binding.amazonechocontrol-2.5.6-fix-leaks-fix-connection.jar
openhab> bundle:start 228
openhab> bundle:list 228
START LEVEL 100 , List Threshold: 50
ID │ State │ Lvl │ Version │ Name
────┼────────┼─────┼────────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────
228 │ Active │ 80 │ 2.5.6.202006152137 │ openHAB Add-ons :: Bundles :: Amazon Echo Control Binding
openhab> logout
The binding version has stuck since then.
The OOM is gone.
Older WARNings and unrelated ERRORs still occur in the logs.
Features of the beta version that some people used are not implemented.
From what people are writing, it seems Amazon had activated a stricter policy for checking requests on their side for a while. This made the OOM occur and helped to find the error in the binding code (a bit the hard way, so to say). Amazon seems to have withdrawn the stricter policy yesterday(?) for amazon.de(?).
That is my understanding, at least; it may be incorrect.
Thank you for the help. I was able to get the revised echo control binding (2.5.6-fix-leaks-fix-connection) to run on my RPI OpenHAB 2.4 after updating the Gson library, per your instructions. I’m currently running the new binding and the error logs are clean (no web sock errors).
Echo Control functions well, but the memory leak persists. My RPi has lost over 200MB since the last reboot about three hours ago. It’s hovering around 100MB free at the moment. I’m watching to see how long it takes to run out of memory; I suspect that will occur soon.
Edit 1: After running four hours, the declining free memory jumped back up to 250MB (it was down to 100MB). No action from me; it did it on its own. So things are looking more hopeful. I will update this post in a few hours.
@richaardvark, Karl2 posted some tips on installing the new echo control binding on OpenHAB 2.4. See this post:
I’ve got it working but am currently monitoring for memory leaks. Fingers crossed that J-N-K’s patched Echo Control binding has solved the OOM issue on my installation.