Why is OH using that much memory?

The memory usage is always near 4% of 16 GB (15.7 GB). That seems to be a lot.

That’s the only process consuming that much memory on my system.
Is that normal?

NB: I only have the Amazon Echo Control and the Plex bindings enabled.

openHAB is written in Java. One problem is that, in my experience, developers are a bit sloppy when it comes to memory management in higher-level languages with garbage collectors (GC).

The language and APIs also very often force you (through immutable objects etc.) to create “new” objects all the time instead of recycling old ones. If the GC does not run often enough, the memory count will increase. You might want to test that with a forced garbage collection, by the way.
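
To illustrate the point, here is a minimal, generic Java sketch (not openHAB code; class and variable names are made up) showing the churn immutable objects cause compared with a recycled buffer:

public class ChurnDemo {
    public static void main(String[] args) {
        // Immutable: each += builds a brand-new String, and the previous one
        // becomes garbage that occupies the heap until the GC runs.
        String s = "";
        for (int i = 0; i < 10_000; i++) {
            s += i;
        }
        // Mutable: a StringBuilder reuses one internal buffer instead.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 10_000; i++) {
            sb.append(i);
        }
        System.out.println(s.length() + " vs " + sb.length());
    }
}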

Additionally, openHAB uses DSLs (domain-specific languages) for .things and .items files; the generated models need to stay in memory, and the external library we use (Xtend) was never intended to run on smaller, memory-constrained devices (which means its developers didn’t care much about memory management).

The embedded web server (Jetty) is meant to run on big servers with many parallel threads. That will again take a chunk of memory.

All items, things, rules, sitemaps are always kept in memory.

So there are many reasons why openHAB consumes memory. Usually the Java virtual machine will just take memory until the operating system signals memory pressure, and then the GC will collect and free unused objects.
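
If you want to watch that behavior from inside the JVM, here is a minimal illustrative Java sketch (not openHAB code; the class name is made up) that prints the heap figures the GC works against:

public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        // maxMemory() is the ceiling (roughly -Xmx), totalMemory() is what the
        // JVM has currently reserved, and freeMemory() is the unused part of that.
        System.out.println("max:   " + rt.maxMemory() / mb + " MB");
        System.out.println("total: " + rt.totalMemory() / mb + " MB");
        System.out.println("free:  " + rt.freeMemory() / mb + " MB");
    }
}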

Python and the .NET framework are also garbage-collected and will have spikes in memory usage. Applications like mosquitto and deCONZ that I see in your list are C++ applications, on the other hand, and should stay at about the same level.

Cheers, David

Wow, and here I was wondering how OH2 manages to use so LITTLE memory. Hardware is cheap!

The only thing I see in your image is that it’s consuming 612M of memory. The value you’re looking at (13.7G, actually) is virtual memory. There can be several reasons why it’s high, like what -Xmx you give it.
My advice: don’t believe everything people claim about sloppy programming or garbage collection problems. Just look at the facts and inform yourself about what you’re actually looking at.

Advanced Java programmers know the pitfalls, of course.
Most bindings are not written by professionals, however.

And it is still a garbage-collector-based language. I mean, there is a reason why the deCONZ process with over 12 hours of processor time has only 500 MB of virtual memory assigned, while Java with only 3 hours of processing time is at about 14 GB :slight_smile:

I’ve actually written some rules to store the values and reinitialize the items with them.
Do they really never lose their values or change to null?

Not during the runtime of openHAB, unless a binding, rule or the REST API tells them to.

No. The resident size is more or less normal, but the total process size is not. While it varies depending on a couple of parameters, such as the number of bindings and rules - and, as @hilbrand correctly states, there are also a number of reasons that additionally affect this, such as whether you start Java with -Xms -
it actually shouldn’t vary all that much in practice. Based on experience with ARM, and assuming your x86 system consumes 4 times that much (that’s not guaranteed to be correct, but it’s the most commonly found ratio), something on the order of 2-4 GB would be expected on your system.
I would actually suspect a memory leak in your case. You would need to restart OH and track memory consumption to find out.
It’s a totally different question, though, whether there’s anything you can or should do about it, or whether that’s a problem at all. As the resident size (the working set of memory) is low, it’s actually ‘virtual’ memory in a literal sense that does no harm (or very little, such as consuming some swap space).

Java sets the maximum heap size differently depending on how much memory you have available on your system. On my system with OpenJDK 1.8.0_191 and 8 GB of RAM, it defaults to 2 GB for the max heap. It is possibly higher on your system.

You can check the default heap size that java determines by running:

java -XX:+PrintFlagsFinal -version | grep HeapSize
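
On an 8 GB system like the one above, the relevant output line looks roughly like this (the exact number varies per machine; 2147483648 bytes is the 2 GB default mentioned above):

uintx MaxHeapSize := 2147483648 {product}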

If you want to reduce it, since openHAB probably doesn’t need that much (Java just allocates it because it’s available and we haven’t set the option), try adding the following to /etc/default/openhab2 to restrict the maximum memory usage. Note there must be no space between -Xmx and the size:

EXTRA_JAVA_OPTS="-Xmx1g"
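
After editing /etc/default/openhab2, the service needs a restart for the option to take effect; on a systemd-based install that would typically be something like:

sudo systemctl restart openhab2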

While not wrong, it’s not necessarily good advice either.
The -Xmx limit, when hit, will effectively result in OH crashing or locking up, because most of the code isn’t prepared to handle an OutOfMemoryError.
Yes, Java will try garbage collection before that, but if some code has a memory leak, Java doesn’t know that. It believes the memory is still in use, so it won’t release it.
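
For illustration, here is a deliberate toy leak (generic Java, not openHAB code; the class name is made up) that shows what hitting the ceiling looks like when run with, say, java -Xmx64m OomDemo:

import java.util.ArrayList;
import java.util.List;

public class OomDemo {
    public static void main(String[] args) {
        List<byte[]> hold = new ArrayList<>();
        // Keeping references means the GC cannot free anything, mimicking a leak;
        // once the -Xmx ceiling is reached, the JVM throws
        // java.lang.OutOfMemoryError: Java heap space.
        while (true) {
            hold.add(new byte[1024 * 1024]); // 1 MB per iteration
        }
    }
}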

FWIW, in openHABian we use -Xms250m -Xmx350m on 1 GB RPis, which effectively means increasing the -Xmx default (from 256m).

Then again, as the Java default is to use 1/4 of physical memory, something is already weird (non-default, at least) on your system, as your process size is well above 4G. Check what memory options Java is running with, restart it with defaults or even a lower -Xmx, and see what happens (a crash, or just garbage collection?) when it hits the limit.
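
If you prefer checking from inside the JVM rather than from the process command line, here is a minimal illustrative Java sketch (not part of openHAB; the class name is made up) that prints the flags the current JVM was started with:

import java.lang.management.ManagementFactory;

public class JvmArgs {
    public static void main(String[] args) {
        // getInputArguments() lists the options the JVM was launched with,
        // e.g. -Xms250m or -Xmx350m, so you can verify what is in effect.
        ManagementFactory.getRuntimeMXBean().getInputArguments()
                .forEach(System.out::println);
    }
}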

It does not seem to be a memory leak. I just restarted the openHAB process and it goes directly to 3.5% of 15.7 GB of memory… then it stays around 3.5-4.0%.

One particular case in which the OH server could use a lot of memory is if you are using lots of “Image” channels, in particular if your images are in HD resolution.

Which is in line with my prediction.
Memory leaks, if present, only show up over time as a slowly but steadily increasing process size.

@Lolodomo I tried disabling Paper UI (I would re-enable it if I needed it), but it consumes the same amount of RAM with or without it.

I’m not sure why you say it’s not good advice. It’s comments like this from sustaining members that discourage new posters from contributing to this board.

If his JVM process is defaulting to an -Xmx of 4 GB, for example, perhaps that’s the only thing wrong; shrink it down. My advice was to validate the current maximum and then lower it. If there is a memory leak, as you mention, it will become apparent because the system will eventually crash. The first step is to verify and reduce the -Xmx size.

It goes to 14 G straight from the beginning? I do not know what this means.

Will do some research on that topic before posting back.

thanks guys

I never said that disabling Paper UI could help…

It certainly wasn’t my intention to discourage anyone. Clearly I don’t have absolute wisdom or the philosopher’s stone, so please take my post as a contribution or opinion that anyone can disagree with, and keep posting your own suggestions.

That being said, I still think it is not a good idea to lower -Xmx.

No. There’s nothing wrong with that, as it is the default and expected behavior. Note it’s not actively used memory, so there’s really nothing tragic about having that ‘hang around’.
While there’s headroom (yes, OH does not need that much), OH will effectively crash when the max limit is hit, so if you lower it using -Xmx it’ll crash sooner.
How soon in turn depends on the extent of any memory leaks in the user’s setup.
But given that his process size was almost 16G (see the first post), my guess is that this could happen pretty soon.

I believe that all you get from lowering the limit is the certainty that you have a memory leak. If that’s what you want, OK, just do it. But you can just as well observe memory consumption over time instead; that’s an equally good indicator, but one that keeps OH working. And knowing there’s a leak is not really valuable information, as it does not tell you where the leak is or how you could avoid it, and most likely you can’t do anything about it anyway (as a user).
So in total, OH will crash sooner when you lower the limit, and it won’t really get you anything in return, hence my view on this advice.

How do I force a recurrent garbage collection on OH?

thanks