openHAB filling up Memory and Swap

The VMs are isolated from each other (I use ESXi) and don’t share any physical memory.

Those users who do file issues or add to existing issues already do this. The rest don't file issues, even when asked. This wouldn't help.

And when you create an issue, it already asks for this information.

That shouldn't be a problem. The hypervisor provides isolation between VMs. The Grafana problem wouldn't cause problems unless the machine is overprovisioned and runs out of physical memory, in which case I think the hypervisor or the VM itself would report the error, not one program running in one VM.

That’s an interesting perspective. I hadn’t seen any suggestion to file an issue on Github anywhere in this thread, hence I thought it would be useful to point people toward a single issue to gather symptoms and information. Much like the HK Locator error issue that I’ve been loosely following: Issue 587. Users who suffer the issue can offer help to each other, do tests, swap notes and support each other to try and narrow down the problem. To state that this sort of communication within the community won’t help seems somewhat defeatist.

No, but I can report the inverse: I don't use Chrome, and despite running the latest OH code and quite a number of bindings, I have NOT had any memory leakage to report in months.
Maybe a direction worth looking into.

I have to correct myself: memory usage did grow significantly in the last two days.
I still don't use Chrome, however, so it's rather unlikely that your issue (and mine) is caused by that.

This had not happened for a long time and started showing up after I moved from 2.5M1 to 2.5M2; nothing else that I'm aware of has changed significantly.

So it's likely the re-integrated ESH stuff that contains a memory leak.

So I used to have a major problem with my RPi going down once a week due to Java heap memory. I increased the swap memory in the Java options, and since then it has been up for 63470 minutes. But at almost the same time my wall tablet's battery died, and I had no idea that a client running the OH app could fill up the server's memory. I was really close to switching over to Home Assistant, but since the problem is now gone, I will stick with OH until it happens again. For me, having a stable system when you rent out the flat is very, very important…
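
(For reference: on a Debian-style openHAB 2 install the Java memory options usually live in /etc/default/openhab2; the values below are only an illustration under that assumption, not a recommendation.)

    # /etc/default/openhab2 -- example heap limits for a small box such as a Pi
    EXTRA_JAVA_OPTS="-Xms192m -Xmx512m"

    # restart openHAB so the new limits take effect
    sudo systemctl restart openhab2.service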

Now that I've shut down my "permanent" clients, the OH process size has remained the same, but the resident size dropped significantly while swap usage increased by about the same amount.
So there were memory pages the OS identified as not having been accessed in a long time and paged out.
As I wrote, this started when I moved from 2.5M1 to M2 and didn't show up in M1 (my instance ran from almost day one until M2 was released), so it was introduced by the changes between these two milestones.
Which is somewhat understandable, because re-integrating ESH was a big effort all by itself. Now let's go bug hunting.

Does anyone know of a way I can persist the Java heap size so I can graph it in Grafana? I need to do something like capture the value of “Current heap size” from “shell:info” on the OH console.

Configure certificate-based login to the Karaf console: https://karaf.apache.org/manual/latest/security (create the SSH key pair, then create an entry in /var/lib/openhab2/etc/keys.properties):

`openhab=<id_rsa.pub key>,_g_:admingroup`

NOTE: The corresponding private key needs to be stored in the .ssh folder of the user that will be executing the command(s). If you will run this from openHAB it needs to be stored in ~openhab/.ssh.
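
If you still need to create the key pair, something along these lines should work; the file name is arbitrary, and the -m PEM flag is a cautious assumption since some Karaf versions only accept PEM-formatted keys:

    # Generate an RSA key pair for the Karaf console (run as the user that will execute the ssh commands)
    ssh-keygen -t rsa -m PEM -f ~/.ssh/openhab_console -N ""
    # Reference the private key with "ssh -i ~/.ssh/openhab_console ..." or via a Host entry in ~/.ssh/config.
    # The public key from ~/.ssh/openhab_console.pub goes into /var/lib/openhab2/etc/keys.properties as shown
    # above; depending on the Karaf version, only the base64 key body is expected (without the "ssh-rsa" prefix).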

# Current heap size
ssh -p 8101 openhab@localhost shell:info | grep 'Current heap' | cut -d ' ' -f 16

# Maximum heap size
ssh -p 8101 openhab@localhost shell:info | grep 'Maximum heap' | cut -d ' ' -f 16

# Committed heap size
ssh -p 8101 openhab@localhost shell:info | grep 'Committed heap' | cut -d ' ' -f 14

Run these commands from a Rule that updates openHAB Items that get persisted, or update OH Items using curl from a cron triggered script, or save the values directly into a database using your database's API.
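
As a sketch of the cron + curl variant: assuming a persisted Number Item hypothetically named JavaHeapUsed, a small script like the one below could be run from cron and push the sampled value through the REST API. The Item name, field position, and paths are assumptions to adjust to your setup.

    #!/bin/bash
    # Sample the current heap size from the Karaf console and push it into an openHAB Item.
    # Run the ssh command once interactively first so the console host key gets accepted.
    HEAP=$(ssh -p 8101 openhab@localhost shell:info | grep 'Current heap' | cut -d ' ' -f 16 | tr -d ',')
    # Update the Item state via the REST API (default HTTP port 8080; Item name is an assumption)
    curl -s -X PUT -H "Content-Type: text/plain" -d "${HEAP}" \
         http://localhost:8080/rest/items/JavaHeapUsed/state

A crontab entry such as `* * * * * /etc/openhab2/scripts/heapsize.sh` would then sample it once a minute, and the persisted Item can be graphed in Grafana like any other Number.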

FYI, since I changed to my "workaround" (avoiding the Android app as a permanent client) some months ago, I haven't had any memory problems with openHAB, even though I have extended the system with new rules/things/items. I haven't done any tests with the openHAB Android app since then; as far as I can remember, there were some updates to the Android app some time ago.
Best Regards,
Hans

Hi Gavin - I'm having the same Java memory issue and I'd like to try your solution for rebooting - could you please post a quick outline of the rules and of how you are monitoring memory usage on the Pi? It could be useful for others!

thanks

Julian

Rule to notify on low memory - it uses the standard sysinfo binding. Memory_Available_alert is a virtual numeric Item, set from the sitemap, that defines the alert threshold.

    // Alert once when available memory drops below the configured threshold
    if ((Memory_Available.state as DecimalType) < (Memory_Available_alert.state as DecimalType) && memory_alert === false) {
        var msg = "Memory Available (< " + Memory_Available_alert.state + "mb) " + Memory_Available.state + "mb remaining"
        logWarn(ruleName, msg)
        sendPushoverMessage(pushoverBuilder(msg).withUser(pi_pushgavin).withTitle("Openhab").withPriority(0).withSound("cosmic"))
        memory_alert = true // suppress repeat alerts until the flag is reset
    }

Rule to reboot the Pi using a virtual switch

rule "Reboot Pi"

when
	Item Reboot_PI changed from OFF to ON
then
	val String ruleName = "Reboot Pi"

	val String msg = "Openhab is Rebooting the PI"
	logInfo(ruleName,msg)
	sendPushoverMessage(pushoverBuilder(msg).withUser(command_pushgavin).withTitle("Openhab").withPriority(0).withSound("cosmic"))									
	Reboot_PI.postUpdate(OFF)
	Thread::sleep(5000) // sleep for 5 seconds

	executeCommandLine("sudo reboot", 100) // assumes the openhab user may run reboot via sudo without a password

end

BTW - since changing to the Pi 4, memory has not been an issue.
You could also combine these two rules to reboot the Pi automatically - but I did not like that idea.
I should also mention that there is a flag, "memory_alert", set to note that a message has been sent - it is declared at the top of the rule file and reset by a rule that runs at midnight each night. This stops you from getting lots of messages.

Out of good old-fashioned paranoia, I have my openHAB system reboot itself daily. Root has a cron job that just runs systemctl reboot.

I did it this way so I could pick an idle time in the dead of night, rather than having OH reboot when someone might be trying to use it.
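
For anyone wanting to copy that setup, the entry in root's crontab could look roughly like this (the 03:30 slot is just an example of a quiet time):

    # edit root's crontab with: sudo crontab -e
    # reboot every night at 03:30
    30 3 * * * systemctl reboot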

Guys, you shouldn't be rebooting your systems, because a properly behaving openHAB doesn't grow in size once it has reached its working-set size. If yours does, find the error. If you cannot find it, downgrade to the latest version that does not have this problem.
BTW, "reboot" does not make use of shutdown scripts, so you're seriously putting the integrity of your system at risk. This is known to e.g. break zram.


Thanks Gavin, that's perfect. I'll probably just use the first rule to start with, to keep tabs on memory usage. I've moved my in-wall tablets off the Android app to HABPanel. Apparently the app has a memory leak, which may be my issue.

Assuming you are referring to the openHAB Java process ever-growing, that wouldn't be the app (running on your phone) but the REST API on the server side.

Yes, sorry Markus, I should have been more precise! The problem is on the server - I've run a memory analyzer and I get a suspected leak:

3,973 instances of “org.eclipse.jetty.io.SocketChannelEndPoint” , loaded by org.eclipse.jetty.io occupy 181,587,432 (73.71%) bytes.
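
For anyone who wants to run the same kind of analysis: a heap dump for the memory analyzer can be taken from the running openHAB process roughly like this (assuming a full JDK so jmap is available; the PID lookup and output path are illustrative):

    # Find the Java process owned by the openhab user and write a binary heap dump for offline analysis
    PID=$(pgrep -u openhab java | head -n 1)
    sudo -u openhab jmap -dump:live,format=b,file=/tmp/openhab-heap.hprof "${PID}"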

Please file that issue on GitHub against openhab-core and provide the details there that developers need to fix it, such as the bindings you use and the full output of your memory analysis.

It would be best to first confirm that this is an issue in 2.5M2.

What version of openHAB are you running? And can you check which version of jetty-io is active?
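
If it helps, the active Jetty bundle versions can be read from the Karaf console; one quick way (filtering locally with grep) would be:

    # list the OSGi bundles and filter for Jetty to see which jetty-io version is active
    ssh -p 8101 openhab@localhost "bundle:list" | grep -i jetty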