What to do about Java out-of-heap-memory errors

  • Platform information:
    • Hardware: CPUArchitecture/RAM/storage
      Pi 4, 2GB
    • OS: what OS is used and which version
      Hypriot OS 1.12.3
    • Java Runtime Environment: which java platform is used and what version
    • openHAB version:
      openhab_docker 2.5.9
  • Issue of the topic: please be detailed explaining your issue
    I have lately been experiencing a lot of Java out-of-heap-memory errors and wanted to discuss solutions/methods to resolve or track down the problem. In my case it first appeared after 3 weeks of runtime…now the problem is “increasing”, as it appears after 2 days of runtime…I will begin uninstalling some bindings as suggested here: Documentation issue with docker-container
    but maybe it would be nice to have a tool that lets you analyze the problem exactly, or at least a way to fight the symptoms…maybe we can figure something out in openHAB core that prevents the whole openHAB instance from crashing when this happens, as I sometimes also get database errors, which are very difficult to recover from…

Some additional info -> the error occurred faster when I had no Internet connection overnight…I’ll try uninstalling the Amazon Echo Control binding first.

You can see the heap size if you log in to the openHAB Karaf console and type shell:info.
It will grow and shrink on a normal system, but it should help you diagnose the issue quicker, with less downtime for each binding you test.
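The figures shell:info reports come straight from the JVM, and the same numbers are available in any Java process via java.lang.Runtime. A minimal standalone sketch (not openHAB code) mapping the Runtime calls to the shell:info rows:

```java
// A minimal sketch (not openHAB code): the heap figures that `shell:info`
// reports are available inside any JVM via java.lang.Runtime.
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long usedKb = (rt.totalMemory() - rt.freeMemory()) / 1024; // "Current heap size"
        long maxKb = rt.maxMemory() / 1024;                        // "Maximum heap size" (the -Xmx ceiling)
        long committedKb = rt.totalMemory() / 1024;                // "Committed heap size"
        System.out.printf("Current heap size   %,d kbytes%n", usedKb);
        System.out.printf("Maximum heap size   %,d kbytes%n", maxKb);
        System.out.printf("Committed heap size %,d kbytes%n", committedKb);
    }
}
```

Running this repeatedly shows the same grow-and-shrink pattern as shell:info; a used size that keeps climbing toward the maximum without ever dropping back is the warning sign.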


Okay, thanks for the great tip =). Currently it looks like this…only about 50 MB left…

  Karaf version               4.2.7
  Karaf home                  /openhab/runtime
  Karaf base                  /openhab/userdata
  OSGi Framework              org.eclipse.osgi-3.12.100.v20180210-1608

  Java Virtual Machine        OpenJDK Client VM version 25.265-b11
  Version                     1.8.0_265
  Vendor                      Azul Systems, Inc.
  Pid                         17
  Uptime                      39 minutes
  Process CPU time            7 minutes
  Process CPU load            0.04
  System CPU load             0.07
  Open file descriptors       287
  Max file descriptors        1,048,576
  Total compile time          55.784 seconds
  Live threads                250
  Daemon threads              108
  Peak                        288
  Total started               2758
  Current heap size           203,983 kbytes
  Maximum heap size           253,440 kbytes
  Committed heap size         253,440 kbytes
  Pending objects             0
  Garbage collector           Name = 'Copy', Collections = 210, Time = 7.315 seconds
  Garbage collector           Name = 'MarkSweepCompact', Collections = 11, Time = 6.214 seconds
  Current classes loaded      20,858
  Total classes loaded        21,340
  Total classes unloaded      482
Operating system
  Name                        Linux version 5.4.51-v7l+
  Architecture                arm
  Processors                  4

This is after removing Amazon Echo Control.

It’s getting nearer its max:

  Current heap size           223,771 kbytes

Yes, that is normal with Java: it fills up near the max, then garbage collection cleans out the junk and you see it drop back down.

Uptime of 39 minutes and 210 garbage collections = a garbage collection roughly every 11 seconds, if I am reading it correctly. Try increasing your heap to 384 MB.

My system is at 21 hours of uptime with only 52 collections, by comparison, but I’m still getting all my Things back up again after moving to openHAB 3.

I tried a more aggressive setting:

 EXTRA_JAVA_OPTS=-Xms512m -Xmx1024m -Duser.timezone=Europe/Vienna

What does this “garbage collection value” mean? Is there something wrong with a binding? Or do I have config issues?

That’s probably too much, as you only have 2 GB and you need some RAM for cache, zram and the like.
If there is a binding issue and you increase the heap size, all you have done is delay the issue: it will just take longer to run out, but it will still run out. This is why it’s handy to see what is going on; if you disable the binding that has the issue, you will see things improve without needing to wait for the heap to run out. It can also be caused by a badly written rule.
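The “a bigger heap only delays it” point is the defining shape of a memory leak. A hypothetical illustration (not taken from any real binding or rule): objects that stay reachable can never be garbage-collected, so no -Xmx value is ever enough.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration (not from any real binding): a registry that is
// only ever added to. The GC cannot reclaim objects that are still reachable,
// so raising -Xmx only delays the eventual OutOfMemoryError.
public class LeakySketch {
    private static final List<byte[]> retained = new ArrayList<>();

    public static void onEvent() {
        // Each "event" retains 1 MB forever; the static list keeps a
        // reference, so the garbage collector can never free it.
        retained.add(new byte[1024 * 1024]);
    }

    public static int retainedMb() {
        return retained.size();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            onEvent();
        }
        System.out.println("Retained " + retainedMb() + " MB and still growing");
    }
}
```

With a pattern like this, shell:info would show the current heap size ratcheting upward after every garbage collection instead of returning to a stable baseline.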


Which Xms/Xmx would you recommend? And by the way: thanks for all the help =)

It’s kind of like dishes.
Anytime anyone eats, they go and get a clean plate from the cupboard, put the dirty plate aside. Maybe they’ll eat some more in a minute and want the plate again.
From time to time the garbage collector comes around, says “hey, anybody using this dirty plate?”. If not, it washes the plate up and puts it back in the cupboard.

So there are balances to be struck.
You don’t want the garbage collector around every minute, taking up space in the kitchen.
You don’t mind a big pile of dirty plates - so long as there are some clean ones left in the cupboard. Maybe get a bigger cupboard. But then when the garbage collector does come, he’ll be there for longer.

What can go wrong - someone keeps getting a clean plate, but never tells the garbage collector they have finished with the dirty one. It’s difficult to figure out who is doing that, you only have a dirty plate to look at. Getting a bigger cupboard only puts off the problem for a while.


Okay, I see…that really cleared things up for me, thanks =) So too large a heap makes no sense, and too small a heap leads to errors, right? And the garbage-collection value does not by itself indicate errors?

@rossko57 can you also explain Xms and Xmx in that way =)? It helps me understand =)

Nope, I know nothing about *nix systems.

Xmx is the maximum amount of memory the Java Virtual Machine (JVM) will ask for from the host. If you exceed the Xmx, you will get out-of-memory exceptions.

Xms is the amount of memory that the JVM will ask for when it first starts up. If you know ahead of time that it’s going to take 200 MB of RAM to bring up and initialize the program, you can grab that amount of memory from the start, which can improve startup as there is no waiting around for the OS to allocate new memory.

Xms <= Xmx

Note that once the JVM grabs a bit of memory, it keeps it. So from outside the JVM it looks like the process only ever grows in memory. That’s why you need to use commands like those above, inside the JVM, to see how much memory is actually being used on the heap. In short, the JVM manages its own memory instead of relying on the OS to manage it.

These are JVM properties and also exist on Windows, Mac, and anywhere else the JVM runs.
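The values Xms and Xmx control are visible from inside the JVM through the standard MemoryMXBean. A small sketch showing the mapping (the labels in parentheses are the flags, not official field names):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Sketch: the standard MemoryMXBean exposes the values Xms and Xmx control.
// "init" is what -Xms requested at startup, "max" is the -Xmx ceiling, and
// "committed" is what the JVM currently holds from the OS -- which it rarely
// gives back, hence the ever-growing process size seen from outside.
public class XmsXmxDemo {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        long mb = 1024 * 1024;
        System.out.println("init (-Xms): " + heap.getInit() / mb + " MB");
        System.out.println("max  (-Xmx): " + heap.getMax() / mb + " MB");
        System.out.println("committed:   " + heap.getCommitted() / mb + " MB");
        System.out.println("used:        " + heap.getUsed() / mb + " MB");
    }
}
```

Note that getInit() and getMax() may return -1 when the JVM considers them undefined, so treat those values as informational.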


Thanks for the help :)

Is that how the Java extra opts for docker-compose would look?
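The snippet this question refers to seems not to have survived. For reference, in docker-compose EXTRA_JAVA_OPTS is normally passed as an environment variable; a sketch under that assumption (service name, image tag and heap sizes are examples, not a recommendation):

```yaml
services:
  openhab:
    image: "openhab/openhab:2.5.9"
    environment:
      EXTRA_JAVA_OPTS: "-Xms192m -Xmx384m -Duser.timezone=Europe/Vienna"
```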

<= means “is less than or equal to”. You can’t have an Xms value that is larger than the Xmx value.

Ahh okay :) thanks :-). Currently I have a bit of overkill with Xms: 500 and Xmx: 800…I’m looking for a more reasonable setting, but more than the default…

Edit: This setting seems to work fine and is more than enough :)