java.lang.OutOfMemoryError: unable to create new native thread

Your log snippets are symptoms. They just show the exception thrown when something tries to create a thread and there are no more available. They don't show what is using the threads up in the first place.

Is everything up to date? Disable all your bindings and then enable them one by one. When the thread count spikes, you have likely found the root cause/binding.

Yes, exactly. I was after the tree view and the thread names. When I had almost 2k “dead” threads, I could spot from the listing that there were simply too many.
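For reference, this is roughly how to get that listing. The Karaf option name is from memory, so check the command's help, and the OS-level commands assume a Linux host with a single openHAB Java process:

    # In the openHAB (Karaf) console:
    shell:threads --tree            # tree view with thread names
    shell:threads | grep "Mi IO"    # filter for a suspect binding's threads

    # From the OS: number of threads of the openHAB Java process
    ps -o nlwp= -p "$(pgrep -f openhab | head -n 1)"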

See my topic about OOM.

I had very similar errors to this a few weeks back when I moved apartments. The reason seems to be that one or more bindings misbehaved while some devices/bridges/vacuum cleaner etc. were disconnected. openHAB ate all the memory and eventually threw these errors. After reconnecting my devices, or disabling them in Paper UI, everything has been running fine for about 2 weeks now.

I.e. my best tip, if you have not already tried it, is to disable all devices that are included in your configuration/setup but might not respond or be connected.


Thanks guys, thanks to your help, especially @gitMiguel's latest post, it seems I have identified the leaking binding. I need some more time to confirm it 100%, because even though it stopped growing every couple of seconds like before, maybe I just hit a coincidental quiet period. Still, it seems to be the Xiaomi Mi IO binding, which started opening connections endlessly (more than 850 in the last 12 hours). I'm not marking it as solved yet until I can confirm, but I think this was the problem.


Hey,

did you mention it to the binding creator? I am observing the same behaviour and don't want to hassle him twice.


Create an issue on the GitHub page and link to this thread in the issue, then post a link to the issue here in this thread. It is helpful to know how many people get issues like this, and if they are not easily reproducible, to look at what is common between affected people's setups.

It is also worth trying older versions of a binding; if you find that version x does not have the issue, it helps narrow down the cause quicker.


@Simsal I haven’t mentioned it to the creator, as there is a thread (in which the creator is involved) which mentions that in the recent release you need to configure some device ID or some other parameter, otherwise the binding will misbehave (I tried to find this thread to quote it here but for some reason can’t locate it easily). I don’t have that device anymore to test the binding in depth, so I haven’t reported anything yet, as it may well be the above-mentioned problem. I suggest you raise it with the binding creator. I will keep digging for the thread with the problem description and post it here if I find anything.

@Simsal this may happen if you have a text config and/or connectivity issues.
If you have defined your thing with a text config, please take a look at the README and fully define the thing according to the example there, and ensure your IP and token are correctly populated. If you continue to have the issue, please send a debug log (covering a longer period) so we can see what may be triggering the behaviour.
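For example, a fully defined thing in a .things file looks roughly like this (a sketch only; parameter names can differ between binding versions, so follow the README, and replace the host, token, deviceId and model placeholders with your own values):

    Thing miio:vacuum:myvacuum "Robot Vacuum" [ host="192.168.1.50", token="your32characterhextokenhere00000", deviceId="123456789", model="roborock.vacuum.s5" ]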

Hi Marcel,
thank you for reaching out to me. Appreciate it.

I recreated all my things this morning. Now I have only Paper UI-created things with the token from the cloud.
The items are via text config. That shouldn't be a problem?
I will monitor whether the problem keeps coming up; if it does, I will get you a debug log.

Nope… that’s just fine. It is what I use.


I posted my logs in another thread.
The Mi IO binding already consumes 1900 threads for me after 12 hours of uptime.

You can see the threads causing it there.
And it starts growing right after a restart of openHAB. I did not see this before I upgraded the binding.

So, after two days, I don’t see the problem anymore.
For me, the solution was to delete all the old miio things and add new ones with the token.


Huh, I have the same issue but I don’t use the Xiaomi binding… any ideas?

EDIT:
If you are using the Amazon Echo binding, this might be the reason; check here:


The reason seems to be that one or more bindings misbehaved while some devices/bridges/vacuum cleaner etc. were disconnected. openHAB ate all the memory and eventually threw these errors.

Same here. For me it was a vacuum I left switched off. My openHABian (Pi3B, 1GB) ran into OOM errors within 24 hours!

I wish openHAB would detect and warn about this scenario somehow, since that OOM error isn’t trivial to hunt down.

Cheers

@brevilo which version of the miio binding are you using?

There were 2 fixes for this included in version 2.5.8.
Do you already have this version of the binding and still run into the problem?

Hey Marcel. Yep, I’m on 2.5.8. I don’t have hard proof that your binding was indeed the culprit, but the vacuum is the only device/item that can get (and was) disconnected. I’ll keep an eye on things over the next few days and report back.

Cheers

If you indeed have the issue, you can run dev:dump-create in the Karaf console, which creates a zip file. Inside that zip there is a file threads.txt.

If you see hundreds of threads related to miio in there, then… Houston, we have a problem.
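Roughly like this (the zip file name is generated, so substitute the actual one; the grep pattern assumes the binding's threads carry a "Mi IO" prefix in their names):

    # In the openHAB (Karaf) console:
    dev:dump-create

    # Then on the host, count miio-related threads in the dump:
    unzip -p <generated-dump>.zip threads.txt | grep -c "Mi IO"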

Getting the same error on my Ubuntu 20.04 VIM3 with openHABian 3.

Any clue what could be wrong?

Same problem on FreeBSD 12, OpenJDK 11.0.11+9-1. shell:threads shows them all like this:

"Mi IO MessageSenderThread" Id=2143 in TIMED_WAITING
    at java.base@11.0.11/java.lang.Thread.sleep(Native Method)
    at org.openhab.binding.miio.internal.transport.MiIoAsyncCommunication$MessageSenderThread.run(MiIoAsyncCommunication.java:275)

From a quick look at the code, I wonder if the thread leak happens when isAlive() returns false.
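To illustrate what I mean (this is only a sketch of the general pattern with made-up names, not the binding's actual code): a sender thread that only exits via a stop flag or an interrupt will sit in TIMED_WAITING forever if its owner replaces it, for example on a reconnect, without ever stopping the previous instance; an isAlive() check on the new reference says nothing about the old one.

    // Hypothetical sketch of the suspected leak pattern; all names are made up.
    public class SenderLeakSketch {

        static final class MessageSenderThread extends Thread {
            private volatile boolean running = true;

            MessageSenderThread(int n) {
                super("Mi IO MessageSenderThread-" + n);
            }

            @Override
            public void run() {
                while (running) {
                    try {
                        // Parks in TIMED_WAITING, exactly what piles up in the thread dumps.
                        Thread.sleep(1000);
                        // ... would poll a queue and send messages here ...
                    } catch (InterruptedException e) {
                        return; // only other way out besides clearing the running flag
                    }
                }
            }

            void shutdown() {
                running = false;
                interrupt();
            }
        }

        private MessageSenderThread sender;
        private int counter;

        // Leaky: each (re)connect replaces the reference without stopping the old thread,
        // so every previous instance keeps sleeping forever.
        synchronized void reconnectLeaky() {
            sender = new MessageSenderThread(counter++);
            sender.start();
        }

        // Safer: stop the previous thread before creating the replacement.
        synchronized void reconnectSafe() {
            if (sender != null) {
                sender.shutdown();
            }
            sender = new MessageSenderThread(counter++);
            sender.start();
        }
    }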

Did you do the dev:dump-create action? That should give a good hint.

If the running-out-of-threads issue is related to the miio binding, you should see hundreds or thousands of waiting threads related to miio.

The expected behaviour is 2 or 3 threads per miio thing, plus a few more for the HTTP part (cloud) and 2 for discovery.

Yes, the more than a thousand waiting "Mi IO MessageSenderThread" threads are only present in shell:threads (and at the system level, e.g. top -H) but not in the threads.txt of the dump. (Though the style of output is the same between shell:threads and the dump, aren’t they supposed to show the same thing?)