Backing up OpenHAB to Nextcloud

Inspired by this thread from @Kim_Andersen and my own need to document these things, this is one (out of infinitely many possible) solution for regular OpenHAB backups to remote storage.

Preliminaries

These instructions assume that you have:

  • An OpenHAB 3 installation;
  • on a Linux system (I use OpenSUSE);
  • with a Bash shell; and
  • access to a Nextcloud installation (or if you’re not bothered about cleaning up old backups, any other WebDAV service will do).

Other prerequisites are:

  • curl
  • xmllint
  • grep
  • tail
  • sed

Of those, xmllint is the only one not likely to be installed on your system already. On OpenSUSE, the incantation is

zypper install libxml2-tools

sudo

For some reason, openhab-cli backup wants to be run as root, so the user running this script should have sudo powers to invoke the above command.
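
For unattended runs from cron, the sudo call must also work without a password prompt. One way to arrange that is a sudoers drop-in along these lines (a sketch only: myuser is a placeholder for the account running the script, and /usr/bin/openhab-cli is the usual location on packaged installs, check with command -v openhab-cli):

# Contents of /etc/sudoers.d/openhab-backup -- edit with: sudo visudo -f /etc/sudoers.d/openhab-backup
# "myuser" is a placeholder for the account that runs the backup script.
myuser ALL=(root) NOPASSWD: /usr/bin/openhab-cli backup *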

Nextcloud / WebDAV credentials and remote directory

You will need to provide a set of credentials to the script so that it can upload the backups and clean up old files. You will also need to create a directory where OpenHAB backups (and nothing else) will be stored. Note that anything else you store in this directory will likely be deleted by the cleanup routine.
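
For reference, the remote directory can be created up front with a plain WebDAV MKCOL request, roughly like this (MYUSERNAME, the application password and EXAMPLE.NET are the same placeholders used in the script below; MKCOL does not create parent directories, so create OpenHAB before Backups):

curl -u MYUSERNAME:APP-PASSWORD -X MKCOL "https://EXAMPLE.NET/remote.php/dav/files/MYUSERNAME/OpenHAB"
curl -u MYUSERNAME:APP-PASSWORD -X MKCOL "https://EXAMPLE.NET/remote.php/dav/files/MYUSERNAME/OpenHAB/Backups"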

Workflow description

The backup is done via a Bash script, which may be called from a crontab at regular intervals, unattended.

The script:

  1. Calls openhab-cli backup to create a backup file named after the current date and time.
  2. Uploads the resulting file using curl’s WebDAV capabilities.
  3. Uses curl again, together with a few common command-line tools, to query the server for existing backups and delete the oldest ones to conserve space.

Bash script

This is the Bash script that hopefully does the magic. Or rather, it is a template; you will need to:

  • Replace MYUSERNAME with your actual user name on the remote Nextcloud / WebDAV server.
  • Replace OpenHAB/Backups with the actual path to a directory where your backups will be stored in the Nextcloud / WebDAV server.
#!/bin/bash

###
### VARIABLES
###

# The backups will be saved locally (i.e., in the OpenHAB server) in this directory.
# An attempt will be made to create this directory if it doesn't already exist.
BACKUPDIR=$HOME/backups

# The base name of the backup file, e.g. 2023-04-27T07:07 (date and time down to the minute).
FNAME=$(date -Isec |cut -c 1-16)

# The full local path of the backup file
FPATH=$BACKUPDIR/$FNAME.zip

# The Nextcloud / WebDAV login credentials. Recommended to create an application password.
# NOTE: MYUSERNAME is a stand-in for your actual username. Do a search and replace.
DAVLOGIN=MYUSERNAME:9522ba3b-d40f-4a16-b03e-5038f199fc5a

# Your Nextcloud / WebDAV hostname
DAVHOST=https://EXAMPLE.NET

# The WebDAV root of your server
DAVROOT=$DAVHOST/remote.php/dav

# The full WebDAV URL to the directory where your backups will be saved.
# This directory must exist on the server and be write-accessible prior to
# running this script.
DAVURL=$DAVROOT/files/MYUSERNAME/OpenHAB/Backups

# A Nextcloud WebDAV search expression to clean up old files.
# NOTE: the string `/files/MYUSERNAME/OpenHAB/Backups` is included verbatim in
# this expression because I was too lazy to use a variable. Do a search and replace
# with the actual path used in your installation – unless 1) the actual path used in
# your installation is `OpenHAB/Backups` and 2) you have already done a global search
# and replace for MYUSERNAME.
DAVSEARCHEXP='<?xml version="1.0" encoding="UTF-8"?><d:searchrequest xmlns:d="DAV:" xmlns:oc="http://owncloud.org/ns"><d:basicsearch><d:select><d:prop><oc:fileid/><oc:size/></d:prop></d:select><d:from><d:scope><d:href>/files/MYUSERNAME/OpenHAB/Backups</d:href><d:depth>0</d:depth></d:scope></d:from><d:where><d:like><d:prop><d:getcontenttype/></d:prop><d:literal>application/zip</d:literal></d:like></d:where><d:orderby><d:order><d:prop><d:getlastmodified/></d:prop><d:descending/></d:order></d:orderby></d:basicsearch></d:searchrequest>'

###
### CODE
###

# Create the local backup directory if it doesn't exist.
# NOTE: The script will fail if "$BACKUPDIR" exists but
# is not a directory or not writable by the user running
# the script.
[ ! -e "$BACKUPDIR" ] && {
	mkdir -p "$BACKUPDIR"
}

# Delete old backups from local filesystem
rm -f "$BACKUPDIR"/*.zip 2>/dev/null

# Create the backup (use --full to also include the cache and tmp folders, or omit it for a plain backup)
sudo openhab-cli backup --full "$FPATH" >/dev/null 2>&1


# Upload to remote
curl -T "$FPATH" -u "$DAVLOGIN" "$DAVURL/$FNAME.zip"

# Delete all but the most recent two backups from remote (Nextcloud specific).
# To keep N backups instead of two, replace `+3` in the `tail` command with N+1.
curl -s -H "Content-Type: text/xml" -X SEARCH  -u $DAVLOGIN "$DAVROOT" --data "$DAVSEARCHEXP"  |xmllint -format - |grep "<d:href>" |tail -n +3 |sed -r 's|\s*</?d:href>\s*||g' |while read RPATH; do
        curl -s -X DELETE -u $DAVLOGIN "$DAVHOST$RPATH"
done
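
Before wiring the script into cron, it is probably worth saving it somewhere sensible (the crontab example below assumes $HOME/bin/backup.sh), making it executable and doing one manual run to check that the upload works:

chmod +x $HOME/bin/backup.sh
$HOME/bin/backup.sh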

Crontab

A line such as this will do the job:

# Run on Mondays and Thursdays
7 7 * * 1,4 $HOME/bin/backup.sh >/dev/null

Conclusion

Hope this helps. It may work for you, it may not.


I moved this to a more appropriate category.

openHAB usually runs as the user openhab and the permissions on some of the files backed up mean you’d either have to be user openhab or root to read them.

Note, there is a tiny chance of a corrupted/incomplete backup if you happen to be modifying OH at the same time as the backup is taken. For example, if the backup occurs when OH is writing out to the JSONDB, you might have zero length JSONDB files in your backups.

Given the cron you are using I would imagine you are unlikely to be messing with OH but I wanted to make sure this potential is documented here.

Thanks for posting!


Very good point and something I had not explicitly considered. Part of the idea behind keeping multiple backups is to guard against possible occasional corruption, but this is definitely something to be aware of should a restore fail for no other obvious reason.

Technically, it should be possible to pause OpenHAB for the duration of the backup, but that might have undesirable side effects. Perhaps scheduling the backups at a time when nobody is likely to be messing with the system is a viable compromise?

There really isn’t such a thing as a “pause”. But your script could stop OH, take the backup, and then start it back up again. Though a restart of OH can be disruptive and could hide other problems (e.g. a memory leak).

Yes, though readers of this post would only know to do that if they know it’s a potential problem. :wink:

I believe OH only writes out to the JSONDB on shutdown and when you save something in MainUI (or call a REST API endpoint that changes something). First it will take a backup of whatever file(s) it’s going to need to write to under $OH_USERDATA/jsondb/backup. Then it writes the new files. So the odds are only one of those two files will be corrupted (either the backup or the active one). But you still don’t want to deal with either being corrupted in a backup.

Yup that’s what I meant by ‘pause’. More specifically, the way I would go about this is (roughly sketched after the list):

  • Check its state at script start-up (service openhab status)
  • If running, stop it and set a Bash trap that calls service openhab start
  • Do the backup
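
Roughly like this (untested, and assuming openHAB is installed as a service named openhab):

if sudo service openhab status >/dev/null 2>&1; then
        sudo service openhab stop
        # Restart openHAB when the script exits, even if the backup fails
        trap 'sudo service openhab start' EXIT
fi
sudo openhab-cli backup --full "$FPATH" >/dev/null 2>&1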

Good to know about the saving mechanism of JSONDB files, thanks! :+1:

I use the command below to backup Openhab.

/usr/bin/openhab-cli backup --full $FILELOCATION/openhab-$(date +%A).zip

The date +%A gives you the weekday name (Thursday, etc.), which becomes part of the file name.

This way I have a rolling 7-day set of backups, and each file is automatically overwritten after 7 days.

-rw-r--r-- 1 root root 2097833739 Apr 21 23:19 openhab-Friday.zip
-rw-r--r-- 1 root root 2092368808 Apr 24 23:25 openhab-Monday.zip
-rw-r--r-- 1 root root 2090818807 Apr 22 23:20 openhab-Saturday.zip
-rw-r--r-- 1 root root 2089401854 Apr 23 23:20 openhab-Sunday.zip
-rw-r--r-- 1 root root 2104055481 Apr 27 05:21 openhab-Thursday.zip
-rw-r--r-- 1 root root 2097530641 Apr 25 23:24 openhab-Tuesday.zip
-rw-r--r-- 1 root root 2099308468 Apr 26 23:24 openhab-Wednesday.zip
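
If that command is driven from cron (e.g. root’s crontab), keep in mind that % is special in crontab lines and must be escaped. A hypothetical daily entry, with the backup directory spelled out as a placeholder path, might look like:

0 3 * * * /usr/bin/openhab-cli backup --full /path/to/backups/openhab-$(date +\%A).zip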


And technically, if $FILELOCATION is on a remote mount such as NFS, SSHFS, WebDAVFS, etc., you have a nice and easy off-site backup.
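
For instance (a sketch only: sshfs must be installed, and the host, user and paths below are made-up placeholders), an SSHFS mount set up before the backup runs would do:

mkdir -p /mnt/offsite-backups
sshfs backupuser@backuphost.example.net:/srv/openhab-backups /mnt/offsite-backups
FILELOCATION=/mnt/offsite-backups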

Off-site being important as, in my experience, the most common failure mode for the Raspberry Pi devices so often used to host OpenHAB is SD card failure. Of course, an external hard drive plugged into the Pi is also an option.