A cron job regularly creates a backup of my openHAB config via /usr/share/openhab/runtime/bin/backup.
I have mounted a directory (/backup) on my Pi to a NAS via NFS.
Now I want the script to create the zip file in /backup instead of in $OPENHAB_BACKUPS.
However, if I change the environment variable with OPENHAB_BACKUPS="/backup", the script still uses the original path.
I also changed /etc/environment, but this does not work either.
Other possible solutions would be:
change the script: not an option, because the changes get overwritten when the script is replaced, e.g. during an openHABian update.
looking into the script, I could set the variable in /etc/profile.d/openhab.sh or /etc/default/openhab: this is also not an option.
I could provide a backup name including the path, like backup /backup/newbackup.zip, but then I lose the timestamp in the file name.
Does anybody know what I need to do to change the directory so that the change survives future openHABian updates?
openhab-cli backup in turn only calls the script /usr/share/openhab/runtime/bin/backup,
which means your one-liner does not work either. For some reason the script does not use the environment variable OPENHAB_BACKUPS.
Excellent! I will use this from now on and have updated my cron job accordingly.
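For reference, the cron entry now looks roughly like this (a sketch only; the exact date format is my own choice, and note that % has to be escaped as \% inside crontab):

```shell
# /etc/crontab-style sketch: nightly backup at 03:00 with a timestamped name.
# Date format and target directory are assumptions; adjust to taste.
0 3 * * * root /usr/share/openhab/runtime/bin/backup "/backup/openhab-backup-$(date +\%Y\%m\%d-\%H\%M\%S).zip"
```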
The backup script itself contains these lines of code:
## Set path variables
if [ -r /etc/profile.d/openhab.sh ]; then
    . /etc/profile.d/openhab.sh
elif [ -r /etc/default/openhab ]; then
    . /etc/default/openhab
fi
/etc/profile.d/openhab.sh also refers to /etc/default/openhab.
/etc/default/openhab itself contains these lines of code:
So if you always want to override the OPENHAB_BACKUPS location, you can remove the leading # in front of the environment variable and change its value in this file.
That should work too, and for all users who would like to create a backup.
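For illustration, the relevant part of /etc/default/openhab would then look roughly like this (a sketch; the target directory is of course your choice):

```shell
# /etc/default/openhab (excerpt, sketch): remove the leading # in front of the
# variable and set the directory the backup script should write to.
OPENHAB_BACKUPS=/backup
```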
FWIW, I’ve written a small script that automatically backs up to a configured directory (and also mounts the directory if it is an NFS share). To avoid the directory filling up, the script keeps only the last 10 backup files.
You might change the directories, or skip the mounting if you perform the backup to a local directory.
#!/bin/bash
# Daily openHAB backup to an NFS share, keeping only the 10 newest archives
echo "Doing autorun script..."
today=$(date +"%Y-%m-%d")
# Check if the mountpoint already exists
if [[ $(findmnt -M "/mnt/openHAB_Backup/") ]]; then
    echo "Backup mounted"
else
    echo "Backup not mounted, mounting backup"
    mount -t nfs 192.168.2.40:/volume1/Backups/openHAB /mnt/openHAB_Backup/
fi
# After that, check whether mounting was successful; only then perform the backup
if [[ $(findmnt -M "/mnt/openHAB_Backup/") ]]; then
    openhab-cli backup /mnt/openHAB_Backup/openhab3_backup_${today}.zip
    # If there are more than 10 files, keep only the 10 newest ones
    ls -1t -dq /mnt/openHAB_Backup/*openhab3_backup* | tail -n +11 | xargs rm -f
else
    echo "mounting failed. Backup skipped..."
fi
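The retention step at the end can be tried in isolation. A minimal sketch (assuming the same naming scheme, on dummy files in a temp directory):

```shell
# Demonstrate the retention pipeline on 15 fake backup files,
# of which only the 10 newest should survive.
dir=$(mktemp -d)
for i in $(seq -w 1 15); do
    touch "$dir/openhab3_backup_2024-01-$i.zip"
    sleep 0.1   # ensure distinct modification times for ls -t
done
# Same pipeline as in the script; -r stops xargs from running rm with no arguments
ls -1t "$dir"/*openhab3_backup* | tail -n +11 | xargs -r rm -f
ls "$dir" | wc -l    # → 10
```

ls -t sorts newest first, so tail -n +11 selects everything from the 11th entry on, i.e. the oldest files, which are then deleted.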
I let this script run daily at 5am and the result looks like:
Good finding. But there remains the risk that the file gets overwritten by an openHAB update, or am I wrong?
Anyway, the solution you provided seems to work.
The regular solution would be to set root’s environment, but on openHABian there is no root password.
That’s just me liking to have control.
I’m only mounting if, for some reason (e.g. the NAS rebooted, or there were network issues in the past), the mountpoint no longer exists.
Unfortunately, this does not work.
I can find no way to set the variable for root.
I even managed to get the root password, log in, and set the variable, but sudo backup still does not work 🤷🏻‍♂️
I am sorry for confusing you; I was in the car when replying.
works.
sudo ./backup
does not work, as it is not possible for me to set the environment variable for a user with root privileges.
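For what it’s worth, sudo resets most of the environment by default (env_reset in sudoers), which would explain this. A small sketch of the effect, using env -i as a stand-in for sudo’s cleaned environment:

```shell
# An exported variable does not survive a cleaned environment:
export OPENHAB_BACKUPS=/backup
env -i sh -c 'echo "backups=${OPENHAB_BACKUPS:-unset}"'                  # → backups=unset
# Passing the assignment inline does survive:
env -i OPENHAB_BACKUPS=/backup sh -c 'echo "backups=$OPENHAB_BACKUPS"'   # → backups=/backup
```

So something along the lines of sudo OPENHAB_BACKUPS=/backup /usr/share/openhab/runtime/bin/backup might behave differently from a plain sudo backup, though I have not verified that here.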
Thanks again for the time you took to make this clearer to me.