Moving OH3 from Synology Docker to Raspberry Pi

Hi all
since installing OH3 as a Docker container, I have to restart the container every few days because Java runs out of memory. By then the OH3 system consumes more than 6 GB of RAM. I cannot find out which Java version is installed inside the container.
The Docker image is openhab/openhab:latest.

The Docker container runs on a DS920+ with 20 GB RAM.
Now I have set up a new Raspberry Pi and want to migrate my settings to it.
But inside the Docker container I cannot find openhab-cli.
Is it a valid approach to simply copy all data from

  • addons
  • conf
  • userdata

Or, better question: which folders/data should I not copy to the new system?

Hope somebody can give me some advice.

BR Uwe

Update:
Java is OpenJDK 11, installed under /opt/lib/jvm/default-jvm

That has nothing to do with the fact that OH is running in a Docker container. Moving to the RPi probably won’t fix this particular problem. There are lots of other good reasons to make this move, I just don’t want you to think that doing this is going to fix this particular problem.

It’s not there. But there is /openhab/runtime/bin/backup, which is the script that openhab-cli calls anyway. But…

Yes. That’s all that backup script does anyway. You probably want to skip userdata/cache and userdata/tmp but yes, all the configs are in those folders.

Hi Rich
thanks for the path to the backup.
Meanwhile I found a thread describing a problem very similar to mine. It sounds like the issue is with rrd4j persistence.
I will try to group my items and give each group a different saving interval.
Half of the items need to be saved at most once a day.
But I will proceed with the little Pi (I have to order an SSD) and will implement a cron job to copy the backups to the NAS.
I will report back once I have moved to the Pi.
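For the cron job, a possible crontab entry could look like this — /var/lib/openhab/backups is the usual backup location on apt-based installs, and /mnt/nas is an assumed mount point for the NAS share, so adjust both:

```
# m  h  dom mon dow  command
30 3  *   *   *   rsync -a /var/lib/openhab/backups/ /mnt/nas/openhab-backups/
```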

Cannot proceed today, first I have to buy an SSD. No space left on the SD card :frowning:

See / hear you tomorrow

Hi Rich

I have set up a new openHABian and extended the filesystem.
But:
How can I extend overlay0?
There are only 721 MB available, and I need at least 1.1 GB for the existing persistence files.

openhabian@openhabian:~ $ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root        29G  5.2G   23G  19% /
devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           778M  1.7M  777M   1% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
/dev/mmcblk0p1  253M   50M  203M  20% /boot
/dev/zram0      721M  706M     0 100% /opt/zram/zram0
overlay0        721M  706M     0 100% /var/lib/openhab/persistence
/dev/zram1      974M   18M  890M   2% /opt/zram/zram1
overlay1        974M   18M  890M   2% /var/log
tmpfs           389M     0  389M   0% /run/user/1000

My Unix know-how is too old to help me here — my last deep knowledge dates back to SunOS 4.1 in the nineties.
The persistence folder is mapped to overlay0, but I have no experience with overlayfs.
Question: how can I increase the volume for overlay0?

BR Uwe
Btw. SSD is ordered

Update: overlay0 filled up because of the restore.

Overlay appears to come from ZRAM. ZRAM lives, as the name implies, in RAM. So look at the openHABian docs for how to give ZRAM more space. Note, when you move to an SSD you don’t need ZRAM any more. Its purpose is to put stuff that is written to a lot into RAM instead of on disk to minimize wear and tear on the SD card. You don’t need that on an SSD.
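On openHABian the ZRAM sizes live in /etc/ztab (the config file of the zram-config tool). A rough sketch of the relevant line — the sizes shown are illustrative, not the defaults, and your openHABian version may use a slightly different path or algorithm:

```
# /etc/ztab (excerpt)
# type  alg      mem_limit  disk_size  target_dir                     bind_dir
dir     lzo-rle  350M       1200M      /var/lib/openhab/persistence   /persistence.bind
```

After editing, stop openHAB and restart the zram service (or reboot) so the new sizes take effect.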

Hi Rich
thanks for your reply. Meanwhile I have manually copied only the really necessary persistence files to the Pi.
And it works, with less memory consumption.
What have I done:
I created 3 groups and added these groups to the persistence file with different strategies.
Thanks for your hint about the openHABian documentation. I will dive in.
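Such a grouped setup in the persistence file might look roughly like this (the group names gFast and gSlow are made up — use your own). One caveat worth checking in the docs: rrd4j databases have a fixed step, so items persisted only rarely can end up with gaps in their charts:

```
// persistence/rrd4j.persist — sketch with hypothetical group names
Strategies {
    everyMinute : "0 * * * * ?"
    everyDay    : "0 0 0 * * ?"
    default = everyChange
}

Items {
    gFast* : strategy = everyMinute, restoreOnStartup
    gSlow* : strategy = everyDay, restoreOnStartup
}
```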

Please note it’s neither recommended nor supported to use SSDs with openHABian.
I’m sure you have read that in the docs :wink:

I know this is an older thread, but I have the same Java issue in my Docker container running on a 918+. I’m also thinking of migrating back to a Pi, but maybe this will solve my problem. rrd4j persistence is installed out of the box — how can I optimise it to get rid of the issues?

I solved my problem another way.
I installed Docker containers with Grafana and InfluxDB on my DS920+.
Since then, OH3 has been running on a Pi without any issues.

Maybe this is also a way for you?

BR

No need to complicate the setup.
You can simply NFS-mount the persistence directories from your NAS to a Pi. Or attach an SSD and use it just for those dirs.
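An NFS mount for just the persistence directory could be a single /etc/fstab line — the NAS address, export path, and mount point below are made up, so adjust them to your share:

```
# /etc/fstab (excerpt)
192.168.1.20:/volume1/openhab  /var/lib/openhab/persistence  nfs  defaults,noatime,_netdev  0  0
```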

If it is some kind of memory leak upgrading openHAB may help. Memory leaks won’t be fixed by running openHAB on a different platform. The whole idea behind using a Java Virtual Machine is that you can run the same software without changing it on any platform. :slight_smile:

I’m on 3.3, so maybe I can wait until 3.4 is released and for now restart the container every 5 days…

Nah. Firstly, a memory leak now is something completely different from what this thread is about, so please stay on topic and, if needed, open your own thread.
Second, waiting for 3.4 does not make sense. Go ahead and identify the binding that is causing this. Search the forum for help on that — there are multiple threads.
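One way to narrow down a leaking binding is to stop bundles one at a time from the Karaf console and watch whether memory growth stops (the bundle ID below is a placeholder):

```
ssh -p 8101 openhab@localhost        # openHAB console, default password "habopen"

openhab> bundle:list | grep -i binding   # list your bindings and their bundle IDs
openhab> bundle:stop <id>                # stop one suspect binding at a time
openhab> shell:info                      # shows current JVM heap usage
```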