[SOLVED] Write Persistence data to NFS share


Following up on @mstormi’s suggestion in another topic (Is there a possibility that openHAB cloud service is compromised?) to open a new topic on storing persistence data on NFS shares in order to limit writes to the SD card, I’d like to ask for your help with the following:

  • I’ve got an NFS share for backup purposes, which the openhab user is able to access.
  • I’ve got influxDB and rrd4j running.
  • I’d like to write persistence to the share.

My InfluxDB config looks like this:


# The name of the database user, e.g. openhab.
# Defaults to: openhab

# The password of the database user.

# The name of the database, e.g. openhab.
# Defaults to: openhab


Do I just need to change the URL to the network drive?
I don’t want to lose data though.

My rrd4j config is empty.

Thanks in advance for your help,

Haven’t used InfluxDB, but url is certainly not the parameter to change. You need to relocate the data inside the InfluxDB server itself, so search this forum or check out the InfluxDB docs.
For rrd4j, you can simply move the directory (/var/lib/openhab2/persistence or just the rrd4j dir therein) to NFS and create a link there to point to its new location.

I’ve also moved the logs, as you suggested in the other post.

Are those (logging and persistence) the write-intensive services of openHAB? I.e., would moving them off to a network share increase the SD card’s lifetime?


I thought I’d document the steps here, in case any newbie like me would like to move off logs and persistence and still use the openHABian-enabled frontail viewer:

1. Change the directory where logfiles are stored:

sudo nano /var/lib/openhab2/etc/org.ops4j.pax.logging.cfg

Change the below lines in the file:

log4j2.appender.out.fileName = /mnt/YOURDIRECTORY/openhab.log
log4j2.appender.out.filePattern = /mnt/YOURDIRECTORY/openhab.log.%i
log4j2.appender.event.fileName = /mnt/YOURDIRECTORY/events.log
log4j2.appender.event.filePattern = /mnt/YOURDIRECTORY/events.log.%i

2. Change the directory in the frontail service:

sudo nano /etc/systemd/system/frontail.service

Change the directory to the new log location:

/mnt/YOURDIRECTORY/openhab.log  /mnt/YOURDIRECTORY/events.log

3. Change the directory in openhab.json:

sudo nano /usr/lib/node_modules/frontail/preset/openhab.json

Apply the same path changes here:

    "/mnt/YOURDIRECTORY/openhab.log": "text-align: right; font-size: 0.8em; border-top: 2px solid #F8F8F8;",
    "/mnt/YOURDIRECTORY/events.log": "text-align: right; font-size: 0.8em; border-top: 2px solid #F8F8F8;",

4. Move persistence, create a link, set the correct permissions + user/group:

mv /var/lib/openhab2/persistence/rrd4j /mnt/YOURDIRECTORY
ln -s /mnt/YOURDIRECTORY/rrd4j /var/lib/openhab2/persistence/rrd4j
sudo chown -R openhab:openhab /mnt/YOURDIRECTORY/rrd4j 
sudo chmod 777 /mnt/YOURDIRECTORY/rrd4j 
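The move-and-symlink pattern from step 4 can be rehearsed in a throwaway directory first, to see that the old path keeps resolving. The paths below are temporary stand-ins created with mktemp, not the real openHAB directories:

```shell
#!/bin/sh
# Sketch: rehearse the move-and-symlink migration with throwaway
# directories standing in for /var/lib/openhab2/persistence (src)
# and the NFS mount (dst).
set -e
src=$(mktemp -d)
dst=$(mktemp -d)

mkdir "$src/rrd4j"
echo "data" > "$src/rrd4j/item.rrd"

mv "$src/rrd4j" "$dst/"           # move the data to the "NFS share"
ln -s "$dst/rrd4j" "$src/rrd4j"   # leave a symlink at the old path

# The old path still resolves, now backed by the new location:
cat "$src/rrd4j/item.rrd"         # prints: data
```

Once this looks right, the same mv/ln -s sequence applies to the real directories (with openHAB stopped, to be safe).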

To avoid losing data, copy the InfluxDB contents to the NFS share and set the correct ownership:

sudo cp -R /var/lib/influxdb/ /mnt/YOURDIRECTORY/
sudo chown -R influxdb:influxdb /mnt/YOURDIRECTORY/influxdb/

Edit influxdb.conf:

sudo nano /etc/influxdb/influxdb.conf

and apply the below changes:

  [meta]
  dir = "/mnt/YOURDIRECTORY/influxdb/meta"

  [data]
  dir = "/mnt/YOURDIRECTORY/influxdb/data"
  wal-dir = "/mnt/YOURDIRECTORY/influxdb/wal"

Restart InfluxDB:

sudo service influxdb restart



Yes, see also this post.

Hi @KurtS!

I’ve been trying to do what you describe, but obviously I’m more of a newbie than you.

I was successful following your steps up to number 4 and dealing with the rrd4j persistence. After issuing the commands I get the following in my log:

[ERROR] [sistence.rrd4j.internal.RRD4jService] - Could not create rrd4j database file ‘/var/lib/openhab2/persistence/rrd4j/SM_PV.rrd’: /var/lib/openhab2/persistence/rrd4j/SM_PV.rrd (access denied)

Obviously there is some kind of permission issue.
The link refers to a mount point on my NAS, which is mounted by the following line in fstab:

/mnt/synology nfs rw,async,hard,intr,noexec 0 0

I have also tried with this mount line:

/mnt/synology nfs defaults 0 0
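For reference, a complete NFS entry in /etc/fstab needs the remote source (server:export) as its first field; the lines quoted above appear to have lost it. A full entry would look something like this, with a hypothetical server name and export path:

```
# /etc/fstab -- server name and export path are hypothetical placeholders
nas.local:/volume1/openhab   /mnt/synology   nfs   defaults   0   0
```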

And aside from what is mentioned above I’ve tried to create users and groups on the NAS (openhab/openhabian) but with no luck.

This line in “/etc/exports” seems to be important in configuring the shared folder on the NAS:


I’ve tried issuing a touch command to the remote system as the openhab user:

sudo -u openhab touch /var/lib/openhab2/persistence/rrd4j/test.test

The created file gets the following permissions:

-rw-r--r-- 1 nobody 4294967294 0 Nov 1 07:34 test.test

Any help would be greatly appreciated!

(-3 hours sleep thanks to this :woozy_face:)


Didn’t want to be the root cause of your lack of sleep :wink:

Yes, you have the wrong permissions on your NAS, and consequently openHAB is not able to write to your mount path:

-rw-r--r-- 1 nobody

should look like:

-rw-rw-r-- 1 openhab openhab

Did you issue

sudo chown -R openhab:openhab /mnt/YOURDIRECTORY/rrd4j 
sudo chmod 777 /mnt/YOURDIRECTORY/rrd4j 

my fstab looks like:

DS716:/volume1/openHAB /mnt/DS716 nfs defaults,noauto 0 0


I added the noauto option, but now the mount was not accomplished:

/mnt/synology nfs defaults,noauto 0 0

The chown command generated the following output:

sudo chown -R openhab:openhab /mnt/synology/openhab-persistence//rrd4j

chown: changing ownership of ‘/mnt/synology/openhab-persistence/rrd4j’: Operation not permitted

Impressively quick response! Much appreciated!

Try no_root_squash; your current setting maps root on the client to nobody on the NAS.
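On the NAS side, root squashing is controlled per export in /etc/exports. A sketch of an export line with no_root_squash, using a hypothetical export path and client subnet (on a Synology, the DSM shared-folder NFS permissions UI writes this file for you):

```
# /etc/exports on the NAS -- path and subnet are hypothetical
/volume1/openhab  192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
```

After editing the file by hand, re-export with exportfs -ra on the NAS.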

Either way, as of today (this thread is 2 years old), I would go for a different solution now that ZRAM has become available in openHABian.
I also used to run my OH box with the write-intensive stuff mounted off my NAS, but there are drawbacks to this, such as making availability depend on more external systems (NAS, network). These dependencies can be particularly nasty, e.g. on power outages.


The no_root_squash adjustment gave me the permission to use chown and chmod. However, now the user openhab is unable to write to the folder /mnt/synology/openhab-log/ where my openhab.log and events.log reside, so openHAB will not start!

I tried a sudo chmod 777 on the openhab-log folder, but when I try the touch command as the openhab user I get permission denied. no_root_squash seems to conflict with the user mapping in some way.
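An alternative worth trying instead of no_root_squash is to align numeric IDs: look up the openhab user’s uid/gid on the client (id openhab) and squash all NFS access to those IDs on the NAS side. The uid/gid values and export path below are hypothetical; substitute your own:

```
# /etc/exports on the NAS -- squash every client user to openhab's
# numeric ids (hypothetical values; run `id openhab` on the client)
/volume1/openhab  192.168.1.0/24(rw,sync,no_subtree_check,all_squash,anonuid=110,anongid=114)
```

With all_squash plus anonuid/anongid, every file written over NFS belongs to those IDs, so the client-side openhab user always matches the on-disk owner.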

Regarding ZRAM, that sounds like the right way to go. However, at the moment I do not use a UPS/battery backup to power my Raspberry, but I’m considering getting some kind of UPS solution. Relying on RAM only requires a UPS, I guess.

No, but previous OH runs without that setting probably corrupted your rights setup.
Try openhab-cli reset-ownership. Delete the cache, and reinstall if necessary.

No, ZRAM does not. But it’s nevertheless a good idea.

After a kernel panic incident, I will now try to add some robustness to my setup.
I have ordered a couple of PiJuice HATs (UPS HATs) for my Raspberries, and I aim to activate the ZRAM feature.

You mentioned in this thread that there are some drawbacks in using external systems, which I can understand. My thought was to use my Synology NAS with a RAID disk configuration to securely handle my persistence data.

I have read some of your earlier posts on this topic in various threads, and I would really like to have your recommendation on how to set up a robust system.
If ZRAM is used to handle persistence data (in combination with the Amanda backup solution and a UPS HAT), would that be enough in your opinion, or would NAS storage have a place in a robust setup?

