Web Socket Error on AmazonEchoControl binding every 65 seconds

Using the jar provided in this thread, it is working with amazon.com.au as the hub site.

Thank you, my capability is now restored.

Paul

Same here. I can confirm a relationship between the amazonechocontrol binding and an "out of memory" issue after a couple of hours. I also lost Things and had to restore from backup, which I believe was due to an irregular logout from openHAB; a regular logout was impossible after the "out of memory" issue appeared. Windows 10, openHAB 2.5.5 snapshot #111. I have changed nothing here during the period of occurrence, which was a couple of days ago.

The"fix leaks" .jar also works out for me.

I updated my bundle, but after a few hours I ran out of heap again. When I looked, my bundle had reverted back to 2.5.5. Am I missing a step?

I updated the bundle and verified it was running, then BOOM, it was back to 2.5.5 later.

I had exactly the same problem:
about 5000+ threads after some hours, all hanging, with nearly nothing working (timeouts everywhere).
After uninstalling the Amazon binding first and then updating to the add-on mentioned above, everything worked.

ONE IMPORTANT NOTE TO ADD:
You need to clean the cache after uninstalling the amazonechocontrol binding and before installing the new one!
sudo openhab-cli clean-cache

Since then it has been staying stable at about ~300 threads for hours.
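
If you want to check the thread count yourself, here is a quick sketch (it assumes openHAB is the only java process on the box, as on a typical openHABian install):

# current number of threads (LWPs) of the openHAB Java process
ps -o nlwp= -p "$(pidof java)"

# or refresh it every minute and watch it settle
watch -n 60 'ps -o nlwp= -p "$(pidof java)"'

Before the fix this climbed past 5000 here; with the updated jar it stays around ~300.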

I tried uninstalling the binding from PaperUI, then installing it via the CLI, but all my Things and Items were no longer matched with the binding. I have 19 Echos and didn’t want to recreate them. Is there a fix/workaround for this?

I would suggest restoring a backup, rebooting, uninstalling, then installing from the CLI.
No other idea from my side :frowning:

@dotelpenguin
On my headless RPi running Raspbian (not openHABian), I too found the binding reverting to the release version after a restart of OH.

My fix was (a console/shell sketch follows the list):

  1. Stop the release version of the binding from within the console
  2. Uninstall the release binding from the console
  3. Stop OH
  4. Download the new jar into the addons folder. I did this from an SSH terminal session on a Windows PC, changing directory to /usr/share/openhab2/addons and then using wget (sudo wget) followed by the full URL of the new jar
  5. Restart OH
  6. From the console, check that the active bundle is the new one.
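
Roughly, the same steps as commands (a sketch only: 228 stands for whatever ID your own bundle:list shows, the jar URL is the one posted elsewhere in this thread, and the paths assume a Debian/openHABian install):

# steps 1 and 2: stop and uninstall the release binding in the Karaf console
sudo openhab-cli console
openhab> bundle:list | grep -i amazon
openhab> bundle:stop 228
openhab> bundle:uninstall 228
openhab> logout

# step 3: stop openHAB
sudo systemctl stop openhab2

# step 4: download the new jar into the addons folder
cd /usr/share/openhab2/addons
sudo wget https://janessa.me/esh/org.openhab.binding.amazonechocontrol-2.5.6-fix-leaks-fix-connection.jar

# step 5: restart openHAB
sudo systemctl start openhab2

# step 6: in the console again, the amazon line should now show the 2.5.6 fix version
sudo openhab-cli console
openhab> bundle:list | grep -i amazon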

3rd day, 3rd attempt. Today I removed the add-on from PaperUI, stopped the OH service, cleaned the cache, placed the .jar in the addons folder and rebooted.

Earlier, I tried bundle:update, first to org.openhab.binding.amazonechocontrol-2.5.6-fixleaks.jar and then to org.openhab.binding.amazonechocontrol-2.5.6-fix-leaks-fix-connection.jar (shown as 2.5.6.202006152137 in bundle:list). I also rebooted and re-verified that the correct version of the add-on was running, but was still experiencing memory leaks.

I’ll track this today and see if it’s any better.

Thanks for all the work on getting this resolved!

Editing to add that I’m already seeing low memory on my Pi. Before, I was starting with around 240 MB free; after this update I’m already down to 35 MB. What do most folks run for free memory?

openhabian@openHABianPi:~ $ free -h
              total        used        free      shared  buff/cache   available
Mem:           975M        578M         35M        6.5M        362M        339M
Swap:           99M          0B         99M
openhabian@openHABianPi:~ $

Well, I did a second reboot and it seems to have corrected the original memory issue; I figure the cache cleanup may have grabbed extra memory. I’ve got 274 MB free currently, and I've been running on a 3B+ for about 1.5 years now.

openhabian@openHABianPi:~ $ free -h
              total        used        free      shared  buff/cache   available
Mem:           975M        470M        274M         12M        231M        443M
Swap:           99M          0B         99M
openhabian@openHABianPi:~ $
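
For anyone who wants to track this over time instead of running free by hand, a small loop like the one below does the job (a plain-shell sketch, nothing openHAB-specific; $7 is the "available" column of free -m):

# append a timestamped "available memory" sample to ~/mem.log once a minute
while true; do
  echo "$(date '+%F %T') $(free -m | awk '/^Mem:/{print $7 " MB available"}')" >> ~/mem.log
  sleep 60
done

Left running in a screen or tmux session, the trend over a day shows quickly whether the leak is really gone.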

For me, the following lines gave me a solution right after J-N-K posted the latest fix (I only added "bundle:stop xxx" to the post from binderth above):

Open a console with > 100 columns (I just learned that Karaf cuts lines at the edge of the console window, so if "grep -i …" doesn’t show a result, your console window may not be wide enough) and run openhab-cli:

sudo openhab-cli console
... some password and message stuff follows ...

openhab> bundle:list |grep -i amazon
228 │ Active │  80 │ 2.5.5              │ openHAB Add-ons :: Bundles :: Amazon Echo Control Binding
openhab> bundle:stop 228
openhab> bundle:update 228  https://janessa.me/esh/org.openhab.binding.amazonechocontrol-2.5.6-fix-leaks-fix-connection.jar
openhab> bundle:start 228
openhab> bundle:list 228
START LEVEL 100 , List Threshold: 50
 ID β”‚ State  β”‚ Lvl β”‚ Version            β”‚ Name
────┼────────┼─────┼────────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────
228 β”‚ Active β”‚  80 β”‚ 2.5.6.202006152137 β”‚ openHAB Add-ons :: Bundles :: Amazon Echo Control Binding
openhab> logout

The binding version has stayed in place since then.
OOM is gone.
Older WARNings and unrelated ERRORs in the logs still occur.
Features of a beta version used by some people are not implemented.

From what people are writing, it seems that Amazon had activated stricter checking of requests on their side for a while. This made the OOM occur and helped to find the error in the binding code (a bit the hard way, so to say). Amazon seems to have withdrawn the stricter policy yesterday(?) for amazon.de(?).
This is my understanding at this point, which may be incorrect though.


Hello @Karl2,

Thank you for the help. I was able to get the revised Echo Control binding (2.5.6-fix-leaks-fix-connection) to run on my RPi with openHAB 2.4 after updating the Gson library, per your instructions. I’m currently running the new binding and the error logs are clean (no web socket errors).
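
For anyone else on openHAB 2.4: Karl2’s exact steps are in his earlier post; as a rough sketch of the Gson part only (the Maven coordinates and the install-from-console approach are my own assumption, and 2.8.5 is the version mentioned later in this thread), it comes down to getting a newer Gson bundle into the runtime from the Karaf console:

sudo openhab-cli console
openhab> bundle:list | grep -i gson
openhab> bundle:install -s mvn:com.google.code.gson/gson/2.8.5
openhab> bundle:list | grep -i gson

The first bundle:list shows what is currently installed; the second should list 2.8.5 as Active.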

Echo Control functions well, but the memory leak persists. My RPi has lost over 200 MB since the last reboot about three hours ago. It’s hovering around 100 MB free at the moment. I’m watching to see how long it takes to run out of memory; I suspect that will occur soon.

Edit 1: After running for four hours, the declining free memory jumped back up to 250 MB (it was down to 100 MB). No action from me; it did it on its own. So things are looking more hopeful. Will update this post in a few hours.

  • Thomas

Up 22 hours, free memory is 85 MB on openHABian 2.5 (Pi 3 B+); I already had Gson 2.8.5 because I also use the MyQ beta binding.

@richaardvark, Karl2 posted some tips on installing the new Echo Control binding on openHAB 2.4. See this post:

I’ve got it working, but I’m currently monitoring for memory leaks. Fingers crossed that J-N-K’s patched Echo Control binding has solved the OOM issue on my installation.

  • Thomas

I updated the binding and rebooted, and will be monitoring the activity. So far it looks good!

Kudos to everyone who contributed and got it working!

works well, thanks!

-ugu5ma

I installed the new jar file a couple of days ago and everything was fine until a couple of minutes ago; now I’m getting OOM errors again and everything has stopped working.

2020-06-18 08:57:08.204 [WARN ] [org.eclipse.jetty.server.HttpChannel] - /rest/items-filtered
javax.servlet.ServletException: javax.servlet.ServletException: org.glassfish.jersey.server.ContainerException: java.lang.OutOfMemoryError: Java heap space
	at org.ops4j.pax.web.service.jetty.internal.JettyServerHandlerCollection.handle(JettyServerHandlerCollection.java:88) ~[?:?]
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) ~[bundleFile:9.4.20.v20190813]
	at org.eclipse.jetty.server.Server.handle(Server.java:494) ~[bundleFile:9.4.20.v20190813]
	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:374) [bundleFile:9.4.20.v20190813]
	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:268) [bundleFile:9.4.20.v20190813]
	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311) [bundleFile:9.4.20.v20190813]
	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) [bundleFile:9.4.20.v20190813]
	at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117) [bundleFile:9.4.20.v20190813]
	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336) [bundleFile:9.4.20.v20190813]
	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313) [bundleFile:9.4.20.v20190813]
	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171) [bundleFile:9.4.20.v20190813]
	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129) [bundleFile:9.4.20.v20190813]
	at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:367) [bundleFile:9.4.20.v20190813]
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:782) [bundleFile:9.4.20.v20190813]
	at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:918) [bundleFile:9.4.20.v20190813]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_222]
Caused by: javax.servlet.ServletException: org.glassfish.jersey.server.ContainerException: java.lang.OutOfMemoryError: Java heap space
	at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:489) ~[?:?]
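
If it happens again, I’ll try to capture a heap dump so we can see what is actually filling the heap. A sketch for a Debian-style install (assuming the service reads EXTRA_JAVA_OPTS from /etc/default/openhab2 and that /var/lib/openhab2 is writable by the openhab user):

# add to /etc/default/openhab2, then restart openHAB (sudo systemctl restart openhab2)
EXTRA_JAVA_OPTS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/openhab2/heapdump.hprof"

The next OutOfMemoryError should then leave a heapdump.hprof that can be opened in a tool such as Eclipse MAT.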

Glad it’s working for you now.
Cheers
Karl.

It seems that in some installations the updated bundle is later replaced with the release bundle. This is expected if you clean the cache, and it may also happen if you install/uninstall other add-ons.

The solution could be to uninstall amazonechocontrol (from PaperUI or addons.cfg, wherever you originally installed it) and then, after the uninstall has finished (make sure the bundle is no longer listed on the Karaf console), drop the jar above into the addons directory.
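
Roughly (a sketch only: the paths assume a Debian-style install and the URL is the one posted earlier in this thread):

# after the uninstall via PaperUI / addons.cfg, confirm the bundle is really gone
# (the grep below should return nothing):
sudo openhab-cli console
openhab> bundle:list | grep -i amazon
openhab> logout

# then drop the fixed jar into the addons directory
sudo wget -P /usr/share/openhab2/addons https://janessa.me/esh/org.openhab.binding.amazonechocontrol-2.5.6-fix-leaks-fix-connection.jar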

I expect 2.5.6 in the near future (probably this weekend), so this "reverting of the fixed version" issue will finally be solved then.

Regarding the "beta": since @michi pointed me to the PR, we can try to get that merged, but it's impossible to do that before 2.5.6.


Can someone please point out where the current "new" jar is? I can’t find it in this topic due to it having so many posts.

Big thanks!

Thanks, it was that.
Sorry, I should have checked the bundle version before posting but I hadn’t restarted or cleared the cache.

Is anyone using the latest Docker image? How can I fix it in my running Docker container?
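
My current plan, in case it helps someone else (a sketch only: the container name openhab, the host addons path, and the /openhab/addons layout of the official image are assumptions on my side):

# if the addons directory is bind-mounted on the host (typical docker-compose setup),
# drop the fixed jar from this thread into that directory:
wget -P /path/to/your/openhab/addons https://janessa.me/esh/org.openhab.binding.amazonechocontrol-2.5.6-fix-leaks-fix-connection.jar

# otherwise copy it straight into the running container and restart it:
docker cp org.openhab.binding.amazonechocontrol-2.5.6-fix-leaks-fix-connection.jar openhab:/openhab/addons/
docker restart openhab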