OH 4.1.0: UsedHeapPercent at 99% doesn't look good

I just installed the Systeminfo binding to get some info about the system, because it looks like I have memory issues.
Values I see
SysteminfoMemoryTotal = 8286896128
SysteminfoMemoryUsedPercent = 53.7% (0.537)
SysteminfoMemoryAvailableHeap = 6839936
SysteminfoMemoryUsedHeapPercent = switching between 98% and 99% (0.98 / 0.99)

It looks like there is enough memory available on my Synology, but SysteminfoMemoryUsedHeapPercent doesn’t look good.

Is it possible to assign more heap space?

I don’t think this necessarily indicates a problem. Assuming the heap it’s talking about is the heap space allocated to Java, the JVM is going to keep consuming heap until it completely fills it up. Then, as it needs more space, it runs garbage collection to free heap so it can continue to run.
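
If you want to see what the JVM heap is actually doing rather than just a percentage Channel, jcmd (ships with the JDK) can report it directly. A minimal sketch, assuming you can get a shell where the openHAB Java process runs; replace <pid> with the PID of the java process:

# GC.heap_info prints the current heap capacity and how much is in use
jcmd <pid> GC.heap_info
# GC.run triggers a full GC; UsedHeapPercent should drop right afterwards
jcmd <pid> GC.run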

If you have no other indications of problems (e.g. high system load, swap space in use), this is not by itself an indication of a problem.

This is what I have as values for swap…
SysteminfoSwapTotal = 7121928192
SysteminfoSwapAvailable = 6449790976
SysteminfoSwapUsed = 672137216
SysteminfoSwapAvailablePercent = 0.906
SysteminfoSwapUsedPercent = 0.094

OK, if swap is being used you have too much stuff running on this machine. 10% of swap usage isn’t much, so most of the time you’d probably not notice. But there will be times when a rule in OH (or something else on this machine that wants to run) gets delayed while its memory is swapped back in from disk.
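
If you want to watch swap activity directly rather than just the totals, vmstat (a standard procps tool) shows pages swapped in (si) and out (so) per second; sustained non-zero values there are what actually cause those delays:

# report every 5 seconds, 4 samples; watch the "si" and "so" columns
vmstat 5 4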

Your system load is likely over 1, which means at any given time you have at least one process sitting idle while it waits for a bit of RAM to be retrieved from swap.

You can change the heap size on the OH command line, but on a machine this short of RAM that really means giving less heap to OH, not more. That may or may not be a problem. Better would be to move stuff off of this machine; it’s overloaded.
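
For reference, the heap is set through the Java options handed to openHAB at startup. A minimal sketch assuming the official openHAB Docker image, which passes EXTRA_JAVA_OPTS through to the JVM; the sizes here are illustrative, not recommendations:

# cap the JVM heap at 768 MB; pick numbers that fit your overall RAM budget
docker run -d --name openhab \
  -e EXTRA_JAVA_OPTS="-Xms192m -Xmx768m" \
  openhab/openhab:4.1.0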

Is this SysteminfoCpuLoad? If yes, that value is 0.319.

I don’t know if that’s the property or not. On Linux there are usually three loads presented by top or htop: a 1 minute average system load, a 5 minute average, and a 15 minute average. It’s not a measure of the CPU, so I doubt that’s what that Channel is reporting.
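
You can see all three load averages outside of OH with uptime (illustrative output; your numbers will differ):

$ uptime
 14:02:11 up 12 days,  2:41,  1 user,  load average: 1.05, 0.84, 0.66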

The processor in the Synology NAS is an Intel Celeron N3160 @ 1.60GHz with 4 cores and 4 threads.
The values I get are below; the last one was taken during an update from the MapDB viewer (which takes minutes). Are those values acceptable?

load average: 2.83, 3.40, 3.70 [IO: 2.08, 2.07, 2.05 CPU: 0.75, 1.34, 1.66]
load average: 3.21, 3.45, 3.71 [IO: 2.17, 2.10, 2.06 CPU: 1.04, 1.36, 1.66]
load average: 4.19, 4.03, 4.01 [IO: 2.16, 2.02, 2.00 CPU: 2.03, 2.00, 2.00]

With top I get

PID   USER      PR  NI    VIRT    RES  %CPU  %MEM     TIME+ S COMMAND
10374 9001      20   0 6144.0m 2.776g 5.263 35.97   1075:57 S /usr/lib/jvm/java-17-openjdk-amd64/bin/java -XX:-UsePerfData -Dopenhab.home=/ope+

Assuming it’s the same, the first line means almost three processes have been stuck waiting around for something (file IO, RAM to be swapped back in, network) over the past minute, more than three over the past five minutes, and almost four over the past 15 minutes.
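
If you want to see which processes are contributing to that load, tasks in state "D" (uninterruptible sleep) are the ones stuck waiting on disk IO or swap. A quick sketch using standard procps tools:

# keep the header plus any process whose state starts with D
ps -eo pid,stat,comm | awk 'NR==1 || $2 ~ /^D/'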

Each subsequent run shows that the load is getting worse and worse.

Any system load greater than 1 for more than a minute is not acceptable.

Because swap is being used, I’ll say it once again: you have too much running on this machine for the amount of memory available. You need to move something to another machine or increase the amount of RAM, or you will continue to see high load and poor performance.

In today’s world, unless you are doing 3D graphics rendering or AI type stuff, the CPU almost never matters. Even crappy CPUs are almost never the limiting factor.

MapDB only saves one value per Item. What are you viewing? You certainly are not generating charts from it.

The NAS has 8 GB RAM and that is the maximum.
I have only 4 Docker containers running:
Grafana
InfluxDB
Mosquitto
openHAB 4.1.0
Are you saying the NAS has too few resources?

I use this to check whether MapDB is still writing data, because I’m sure that rrd4j and MapDB stop writing a couple of hours after a restart. MapDB was reliable with OH 3.4.1. What I don’t understand is that all three storage files have the correct timestamp and look like they are being updated.
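
I check the files roughly like this (a sketch assuming the default Docker layout where userdata is mounted at /openhab/userdata; adjust the paths to your volume mapping):

# MapDB and rrd4j keep their storage here in a default install; re-run and
# compare timestamps and sizes to see whether writes are still landing
ls -l /openhab/userdata/persistence/mapdb/
ls -l /openhab/userdata/persistence/rrd4j/ | head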

Based on that load, yes. You are short on some resource, and 3-4 processes are always stuck waiting for access to it instead of doing their job.

OH is going to take a little over a gig of that RAM. InfluxDB and Grafana are not light users of RAM either, though I don’t know what their typical usage would be. And don’t forget that in addition to those four Docker containers, all the other stuff Synology runs is taking up RAM as well.
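
Rather than guessing, docker stats gives a per-container snapshot of memory and CPU (standard Docker CLI):

# one-shot snapshot; drop --no-stream for a live view
docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}\t{{.CPUPerc}}"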

But I’m only guessing it’s a RAM problem because that’s usually the problem, and your sysinfo is reporting swap in use. Synology does weird stuff, though, so there might be some other limited resource that is the real bottleneck all the processes are waiting on. But at the end of the day, you either need to reduce the demands on that limited resource (i.e. move processes to some other machine) or increase the amount of it (which usually means adding hardware).

Maybe writing is fine but OH is stuck waiting around for bits of its RAM to be fetched from swap when you do a read operation. The least used parts of RAM are what get swapped out, and OH is constantly writing but only rarely reads.
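
One way to confirm that from a shell: /proc reports how much of a given process is currently swapped out. Using the java PID from the top output above (it will change across restarts):

# VmSwap is the amount of this process's memory sitting in swap right now
grep VmSwap /proc/10374/status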

I will go for new hardware. I know that a Raspberry Pi is the preferred solution, but I have some questions.

  1. Is a Raspberry Pi 5 already fully supported?
  2. If I go for 8 GB RAM, can I run OH in Docker or not?
  3. Can I boot from a HatDrive! Top (NVMe 2230/2242 Gen 3) with a Silicon Power PCIe M.2 NVMe 512 GB Gen3x4 internal SSD (R/W up to 2,200/1,600 MB/s)?

Should I consider other hardware?

OH runs just fine on it. I don’t know if openHABian is fully released for it though. There are a number of pretty drastic changes in bookworm that broke a lot of parts of openHABian.

Depends on what else you run on the machine too. Docker does indeed run on an RPi. But running anything in a container requires more RAM than running the same software installed directly on the machine.

I don’t know. That’s a question for an RPi forum. openHABian won’t support that configuration but you don’t have to use openHABian.

Only OH4

Who can I ask about this?

Look at the docs and the current release of openHABian. I think there is a testing thread somewhere on the forum too. But if the docs do not include bookworm, it’s not yet supported. And since an RPi 5 only runs bookworm …