Docker high memory usage

Hi,

Memory usage for openHAB has always been quite high. I am wondering how it would ever run on a Raspberry Pi or similar…
Is there a way to reduce it?

I am using the latest openHAB Docker image.

Daniel

A Docker container is always going to use significantly more memory than the same program running outside the container. It works just fine memory-wise when installed on anything with as much RAM as an RPi 2 or better.

No, short of not running it in a container, there is no way to reduce the amount of memory used and still have a functional openHAB. Outside of a container, at least on one instance, it’s using 445M virtual, 217M resident.

Is this the case? Docker applications are not actually virtualized, unless run on a non-native platform (like running Linux in VirtualBox on a Mac).

The process that runs in a Docker container is isolated/restricted/jailed by the Linux kernel using cgroups; other than that, it’s just another process on the host system. The only performance penalty a Docker container might have is networking, where it uses a virtual bridge, but that’s probably only significant under high network I/O load and not an issue if using host networking. This at least used to be the case; I’m not sure if it’s changed with newer Docker releases.
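To see this for yourself (a Linux-only sketch): every process belongs to a cgroup, and a containerized process is no different, it just lives in a cgroup that Docker created for it.

```shell
# Show the cgroup membership of the current shell; a process inside a
# Docker container shows a Docker-created cgroup path here instead
cat /proc/self/cgroup
```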

I’ve never really looked into it that deeply but that is my experience.

I agree, it’s not fully virtualized, but I wonder if the OS is no longer able to share libraries between processes when they are not running in the same namespace. My ancient and half-remembered understanding is that shared libraries get loaded once and the kernel manages access to them from the different processes. Since the container is running in its own cgroup/namespace, it doesn’t have access to the libraries already shared by the host or any of the other containers; it has to load those shared libraries itself. That’s my theory, admittedly based on not much. It’s been a long time since I’ve taken an OS class, and even then we didn’t cover Linux that much.
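One rough way to probe that theory (a sketch, Linux 4.14+ only): check how much of a process’s resident memory sits in shared pages. Here `/proc/self` (the shell running the command) stands in for the openHAB PID.

```shell
# smaps_rollup sums the per-mapping memory counters for a process;
# Shared_* lines count pages also mapped by other processes, which is
# where shared libraries would show up
grep -E '^(Rss|Shared_Clean|Shared_Dirty|Private_Clean|Private_Dirty)' \
    /proc/self/smaps_rollup
```

If the theory holds, the containerized process would show more of its RSS under the Private_* lines than the same process running on the host.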

But in practice, whenever I’ve had a process running in Docker and the same process not running in Docker, the Docker version usually uses an order of magnitude more RAM (as reported by htop).

For a current example, my main OH instance running in a Docker container is using right now 5208M virtual memory, 1195M resident. My other OH instance, not running in a container, is using right now 445M virtual memory, 217M resident. I’ve never actually seen a version of OH running outside a container use more than 500M of virtual memory, which makes some sense, as the default heap size for the JVM is 256M, I believe. Roughly half of the RAM is used by the OH program itself and the other half is overhead to run the JVM. But running in the container doesn’t give OH more heap space to work with. It too should be using only the same 256M for the heap (unless overridden), and something has to account for the remaining almost 5 GB of virtual RAM being used.

Your theory about sharing loaded libraries is a good one to explain some of the memory; it’s not something I considered. I’m not sure why it would be using so much more memory in a container, however, as that’s not been my experience with other containerized apps we run. My only thought is that somehow Java sees the world differently when in a container and starts allocating memory differently? I was running my OH non-containerized on the same system, but did not see, or at least did not notice, any meaningful memory increase when switching to Docker. Something I should test, I guess, to confirm.

I would like to try to reduce memory usage. Any idea where one would set the JVM parameters for heap allocation, -Xms (initial heap size) and -Xmx (maximum heap size)?

Daniel

Reducing the heap size isn’t really going to do much for you. As I said, it’s already only 256MB. Cut it in half and you save 128MB. And reducing the amount of heap doesn’t reduce the amount of memory openHAB needs to function. It just means that when it hits the 128MB limit, it crashes with OutOfMemory exceptions.

If you really want to reduce memory use, don’t run OH in a container. Beyond that, you can pass command line arguments to the JVM using the EXTRA_JAVA_OPTS environment variable.
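For example (a sketch; the container name, image tag, and heap values are illustrative, EXTRA_JAVA_OPTS is the variable the openHAB image reads):

```shell
# Pass JVM heap flags into the container via the image's
# EXTRA_JAVA_OPTS environment variable
docker run --name openhab \
    -e EXTRA_JAVA_OPTS="-Xms192m -Xmx256m" \
    openhab/openhab:latest
```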

OK, using Java opts helps with reducing memory utilization (I set both values to 3G); however, openHAB does not seem to start up correctly: all web interfaces are working (logs look normal, …), but CPU utilization is not coming down.

I see it as @digitaldan does: why does this happen at all? Why should Java use so much more memory in a container than on a Raspberry Pi? openHAB is consuming four times more memory in my Docker than a 2G RPi even has available.

lol… I just read (JVM Memory Handling for Dockers. In this post, we will take a look at… | by Madhu Pathy | Medium) that the default heap size in the JDK is 1/4 of RAM. 8G would fit nicely with my 32G total…

There are also command line arguments you can pass to Docker to limit how much memory is available for a container.
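For instance (a sketch, the cap value is illustrative):

```shell
# Hard-cap the container's cgroup at 1 GiB of RAM; the kernel will
# OOM-kill processes in the container that push past the limit
docker run --memory=1g openhab/openhab:latest
```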

I tried that; sadly, it leads to openHAB crashing. It doesn’t seem that OH knows about the limitation, and it tries to allocate more.
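A possible explanation (my assumption, not verified against this image): a JVM that is not container-aware computes its default heap from host RAM, so with lots of host RAM and a small container cap it tries to grow past the cap and gets OOM-killed. Container-aware JDKs (8u191+ and 10+) size against the cgroup limit instead, and you can tie the heap to it explicitly:

```shell
# Size the JVM heap relative to the container's memory cap rather than
# host RAM (values illustrative; requires a container-aware JDK)
docker run --memory=1g \
    -e EXTRA_JAVA_OPTS="-XX:MaxRAMPercentage=50.0" \
    openhab/openhab:latest
```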

What’s actually the best way to assign the Docker instance more memory? Mine keeps crashing after 2-3 days, but I still have enough free memory on the host, so I would like to assign it to the container and the OH application.

From Docker’s documentation:

By default, a container has no resource constraints and can use as much of a given resource as the host’s kernel scheduler allows.

So there is nothing to do for the container. For Java, you can give it more memory by adding -Xms1024m -Xmx1024m (or however large you want to make it) to the EXTRA_JAVA_OPTS environment variable passed to the container in the docker run command.

However, giving it more memory is pretty much just going to buy you some time. It’s not going to solve the root problem, because your instance has a memory leak and will fill up all the memory at some point, no matter how much memory you give it.