Automatic Backup openHAB2 via GIT to BitBucket (FREE)

All,

I typed up a script that backs up my openHAB2 server to Bitbucket daily at 12AM (midnight). Feel free to modify/use it for GitHub or whatever git server you use. I went with Bitbucket over GitHub because Bitbucket offers unlimited private repositories for free, whereas GitHub wants $7/month for that. Please note that Bitbucket has a 1GB soft cutoff and a 2GB hard cutoff per repository, but I’m backing up more than I need to and still using less than 400MB for my entire backup.

What’s nice about using git for backups is that you get a very clear overview of which files changed. You also have a complete history dating back to when you started. I can’t recommend some sort of remote backup enough. While backing up to a separate device from your openHAB server is sufficient in most cases, I lean toward cloud-based storage for things like this, as it’s become very reliable nowadays.
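To illustrate the kind of overview git gives you, here are a few review commands. The demo below builds a throwaway repo with two commits; day to day you'd run the `git log`/`git diff` lines inside your actual backup repo:

```shell
# Throwaway demo repo; substitute your real backup repo for day-to-day use
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "demo@example.com"
git config user.name "demo"

echo "Switch Light ON" > demo.items
git add demo.items && git commit -qm "First Push"
echo "Switch Light OFF" > demo.items
git add demo.items && git commit -qm "generated files"

# One line per backup run, with per-file change stats
git log --stat --oneline

# Exact line-by-line changes introduced by the most recent backup
git diff HEAD~1 HEAD
```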

One more note. This is a little unorthodox: normally you update the code in a git repository and then push it down to your server, but I’ve been too lazy to do the whole git workflow (make a branch, change the code, merge the branch into master, pull it down on my server) just to change a line or two. With that said, it works perfectly for my use at the moment. However, I will soon be moving to rsyncing to an NFS share (on my Synology NAS) daily, which is then itself backed up daily. I will probably start using my git repository as a code source rather than a backup, since I have a few other projects that will require git repositories soon.

Ok, so here is my quick write-up:

***I’m running this on a Raspberry Pi 3 running openHAB 2.1 stable installed via the repository installation path.

My directory layout (These locations will be created in the instructions below):
/data/scripts/openhab2backup - backup script
/data/logs/ - backup script log location
/data/openhab2backup/ - local temporary backup location

***Take note of the ()'s below i.e. (YOUR BITBUCKET ACCOUNT NAME)…

Create a Bitbucket account and a private repository called “openhab2backup”.

Create key on raspberry pi:

mkdir -p /root/.ssh
chmod 0700 /root/.ssh
ssh-keygen -t rsa -C "(YOUR E-MAIL ADDRESS)"

Copy the contents of /root/.ssh/id_rsa.pub (starts with “ssh-rsa”) into the ssh key section for main user (your main account) of Bitbucket (not a deploy key as that’s read only).
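To grab the key for pasting, just print the .pub file. The snippet below generates a throwaway key pair in a temp folder so it doesn't touch your real /root/.ssh; on the Pi you would simply `cat /root/.ssh/id_rsa.pub`:

```shell
# Demo only: generate a throwaway key pair in a temp folder
dir=$(mktemp -d)
ssh-keygen -t rsa -N "" -C "you@example.com" -f "$dir/id_rsa" -q

# This is the half that goes into Bitbucket; the private key (id_rsa)
# never leaves the machine.
cat "$dir/id_rsa.pub"
```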

Test connection now from raspberry pi:

ssh -T (YOUR BITBUCKET ACCOUNT NAME)@bitbucket.org

You should see something like:

root@rpi01:~/.ssh# ssh -T (YOUR BITBUCKET ACCOUNT NAME)@bitbucket.org
logged in as (YOUR BITBUCKET ACCOUNT NAME).
You can use git or hg to connect to Bitbucket. Shell access is disabled.
root@rpi01:~/.ssh#

Set default git config:

#I just made up the user.name and user.email. It's not really important.
git config --global user.name "openhab2backup"
git config --global user.email "openhab2backup@auto.com"
git config --global push.default simple

Create Backup Location:

cd /data
git clone git@bitbucket.org:(YOUR BITBUCKET ACCOUNT NAME)/openhab2backup.git

#This will create the git directory - /data/openhab2backup

Make temp local backup and perform first directory sync manually:

mkdir -p /data/openhab2backup/configuration
mkdir -p /data/openhab2backup/userdata
mkdir -p /data/openhab2backup/runtime
rsync -r /etc/openhab2 /data/openhab2backup/configuration
rsync -r /var/lib/openhab2 /data/openhab2backup/userdata
rsync -r /usr/share/openhab2 /data/openhab2backup/runtime
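One caveat with plain `rsync -r`: files deleted from the live folders linger in the backup copy forever. If you want the backup to mirror the source exactly, `-a --delete` does that (my suggestion, not part of the original write-up; the demo uses temp folders standing in for the real paths):

```shell
# Temp folders standing in for e.g. /etc/openhab2 and the backup location
src=$(mktemp -d); dst=$(mktemp -d)
echo "rule A" > "$src/old.rules"

# -a = archive mode (permissions, timestamps); --delete removes files
# from the destination that no longer exist in the source
rsync -a --delete "$src/" "$dst/"

rm "$src/old.rules"
echo "rule B" > "$src/new.rules"
rsync -a --delete "$src/" "$dst/"

ls "$dst"
```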

Add untracked directories:

cd /data/openhab2backup
#this will show tracked and untracked files:
git status
git add configuration
git add userdata
git add runtime

Git Commit:

git commit -m "First Push"

Git Push (Can take a while the first time):

git push

Check Bitbucket now to see if code was pushed up.

Create a file named “openhab2backup”, paste in the contents below, and place it in /data/scripts/:

#!/bin/bash
#This script will automatically back up the necessary openHAB2 configuration files to a secondary location on this server and then push to Bitbucket. Do not rely on this local secondary backup location as your only backup, as it still resides on the local system.

confdir="/etc/openhab2"
userdatadir="/var/lib/openhab2"
runtimedir="/usr/share/openhab2"
logfolder="/data/logs"
logfilename="oh2backup_$(date +%Y%m%d_%H%M%S)"
oh2backupresult=$logfolder/$logfilename
backuploc="/data/openhab2backup"

#Make the log folder if it doesn't already exist and create the file that will contain the backup results.
mkdir -p $logfolder
touch $oh2backupresult
echo "---------------------------Starting--------------------------" >> $oh2backupresult

#Sync Directories:
echo "-----------------Syncing openHab2 Directories----------------" >> $oh2backupresult
/usr/bin/rsync -r $confdir $backuploc/configuration
echo "$confdir --> $backuploc/configuration" >> $oh2backupresult
/usr/bin/rsync -r $userdatadir $backuploc/userdata
echo "$userdatadir --> $backuploc/userdata" >> $oh2backupresult
/usr/bin/rsync -r $runtimedir $backuploc/runtime
echo "$runtimedir --> $backuploc/runtime" >> $oh2backupresult

#Add Untracked Files:
echo "--------------------Adding Untracked Files-------------------" >> $oh2backupresult
cd $backuploc
/usr/bin/git add configuration
/usr/bin/git add userdata
/usr/bin/git add runtime

#Commit:
/usr/bin/git commit -m "generated files on $(date +'%Y-%m-%d %H:%M:%S')" >> $oh2backupresult

#Push:
echo "---------------------Pushing to Bitbucket--------------------" >> $oh2backupresult
/usr/bin/git push >> $oh2backupresult
echo "-----------------------Backup Complete-----------------------" >> $oh2backupresult

Make the Script Executable:

chmod 0755 /data/scripts/openhab2backup

Create Cron:

crontab -e

Paste the following at the bottom and save (minute 0, hour 0, i.e. daily at midnight):

00 00 * * * /data/scripts/openhab2backup

Regarding the logs, a new log file is created with every run. Feel free to change this behavior to one file if you please.
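If you want the one-file behavior, swapping the timestamped name for a fixed one is the only change needed, since everything in the script appends. A sketch using the script's variable names (a temp folder here instead of /data/logs, just for the demo):

```shell
# Fixed log name: every run appends to the same file instead of creating a new one
logfolder=$(mktemp -d)          # the real script would keep /data/logs
logfilename="oh2backup.log"     # was: oh2backup_$(date +%Y%m%d_%H%M%S)
oh2backupresult=$logfolder/$logfilename

mkdir -p "$logfolder"
touch "$oh2backupresult"
echo "--- run at $(date +'%Y-%m-%d %H:%M:%S') ---" >> "$oh2backupresult"
echo "--- run at $(date +'%Y-%m-%d %H:%M:%S') ---" >> "$oh2backupresult"
```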

Here is my latest run in a log file for your reference:

---------------------------Starting--------------------------
-----------------Syncing openHab2 Directories----------------
/etc/openhab2 --> /data/openhab2backup/configuration
/var/lib/openhab2 --> /data/openhab2backup/userdata
/usr/share/openhab2 --> /data/openhab2backup/runtime
--------------------Adding Untracked Files-------------------
[master 8e6f932] generated files on 2017-08-31 10:38:57
 1 file changed, 1 insertion(+), 1 deletion(-)
---------------------Pushing to Bitbucket--------------------
-----------------------Backup Complete-----------------------

Thanks for sharing your efforts.
Fwiw, openHABian comes with the Amanda backup system to include the capability to backup to AWS S3 (you can get 5GB for free there).

That’s good to know. I was thinking about going that route for installation, but I went with the repository path. I will look into the documentation in openHABian for Amanda. Thanks.

The repo path is great because of the fantastic version control. Thank you!

Please be aware that you could possibly upload private and/or secret information like passwords, ssh keys or API keys to the cloud. E.g. Someone else’s computer.

That’s indeed an important issue that should be mentioned. One way around this is to exclude some files through a .gitignore file. I’m hosting my configuration on GitHub, here’s my .gitignore with some exceptions.
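For anyone wondering what such a .gitignore could look like with this thread's backup layout, here's a minimal sketch (the patterns are examples, not a complete list; audit your own conf/services folder for anything sensitive):

```shell
# Demo folder; in real life this file sits in /data/openhab2backup/.gitignore
cd "$(mktemp -d)"
cat > .gitignore <<'EOF'
# Anything holding credentials or API keys (example patterns)
configuration/openhab2/services/*.cfg
# Stray key material, just in case
id_rsa*
*.pem
# Volatile folders that churn on every run
userdata/openhab2/cache/
userdata/openhab2/tmp/
EOF
cat .gitignore
```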

Of course that only lets you exclude complete files, not parts or lines, so its usefulness is limited.

Also I have to ask, @lysol is there any reason you include the runtime folder? Many developers and countless hours made sure this folder doesn’t need to be part of the backup. I’d remove it from your example or make it a commented option for whoever explicitly wants to.

Another detail: You are working in a folder called /data/.... This path is not part of the official FHS and while this might be nitpicky, I’m sure you’ll find a better spot. See: https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard


Thanks for sharing @lysol. Is there any reason you don’t just create the git repo inside the openHAB configuration folders directly? I do this locally (using a local GitLab server), which eliminates the need for the backup script to sync across to your backup location, and also allows me to make changes to my config on another machine, push the changes to git, and then pull them down to my openHAB server. I find this workflow very useful, and I even have a button set up in openHAB that triggers a git pull so I can remotely update my config without having to ssh in.


That’s how I do it as well. I guess the reason is because he’s combining three folders into one repository.

Good point :slight_smile:

That’s a good point. Another option could be to use Ansible Vault to encrypt the files and/or directories via the script before pushing up.

My goodness, I need to reread and reread again the posts I make from my phone. :grimacing: It’s a wonder anyone can understand anything I write sometimes.

Another alternative to .gitignore is to use one of the git extensions and utilities that let you encrypt certain files. I was too lazy to try and figure those out too but others use them. I think one is called git-encrypt.

Personally, I would be certain to .gitignore or encrypt anything that has a password, username, location, MAC address, or anything else that can be used against you. For me, that includes almost everything in the conf/services folder and a good number of .things and .items files as well.

Make sure you are getting the ~openhab/.java (/var/lib/openhab2/.java I think for an apt installed OH) folder if you are using the Nest binding. Your login token ends up being stored there and if you miss it you will need to get a new pin and reauthenticate to the Nest servers on a restoration.

Just to add to the discussion. I run in Docker so combining the conf and userdata folders into one is not that hard. So I too have the one git repo that covers both, but right now I have my .gitignore set to ignore everything except what is in userdata/etc (I should probably add jsondb to be configuration controlled at some point). I would definitely exclude the userdata/cache and userdata/tmp folders from git though, particularly given how touchy OH can be when those folders are not just right during an upgrade. You will likely have to clear them out anyway as part of your restore procedure so why not save the space and avoid having them part of your repo in the first place?

Also, you could use symbolic links and link your /data/openhab2backup/configuration folder directly to /etc/openhab2 and do the same for /var/lib/openhab2 and drop the rsyncs entirely. Then your backup would just be running the git commands and restoring would just be a git pull or git checkout away without the need to copy files back to the “live” folders. That is how I used to manage it before I started running in Docker.
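One detail worth spelling out about the symlink approach: git stores a symlink as a link rather than following it, so the link has to point the other way around, i.e. the real files live inside the repo and the live path (e.g. /etc/openhab2) becomes a symlink into it. A sketch with temp paths standing in for the real ones:

```shell
# Temp stand-ins: $repo for /data/openhab2backup, $live for /etc/openhab2
repo=$(mktemp -d)
live="$(mktemp -d)/openhab2"

# Real files live in the repo...
mkdir -p "$repo/configuration"
echo "Switch Test" > "$repo/configuration/test.items"

# ...and the live path is just a symlink into it, so committing the repo
# captures the live config with no rsync step at all.
ln -s "$repo/configuration" "$live"

cat "$live/test.items"
```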

Finally, to :+1: Thom’s hint, /opt/openhab2backup or similar would be a more appropriate path. It’s still probably not strictly right to put it there, but I see this sort of thing put into /opt all the time.

I’m only including the runtime folder because I could see using it to revert versions. With that said, I have no real use for it. I threw it in for no real reason other than that some people have backed it up along with openHAB.

To answer your guess, yes I wanted to have everything in a single repository.

Hi @rlkoshak, really awesome tips for Docker backup. I have some questions about the process:

  1. Would you please explain how to set up the .gitignore file? Do I just create the file and write down the folders I want to exclude? I’ve created a .gitignore in the repo root, and it looks like this:
userdata/cache
userdata/tmp

However, it doesn’t work; the cache and tmp folders still get pushed to Bitbucket with lots of files. Should I place the .gitignore somewhere that makes the push automatically ignore these folders?

  2. How do I symlink the Docker openHAB folders? Could you please provide the commands (so we don’t need to create and rsync all the files)? Will it automatically pick up changes when something in the Docker openHAB folder changes?

  3. Could you please explain which folders I should back up (using Docker)? I don’t think I have a ~openhab/.java folder.
    Inside the userdata folder, which folders exactly should I keep? Like this:
    [screenshot of the userdata folder]

So sorry for the noob questions; I am pretty new to this Bitbucket stuff. Thanks,
Shane

  1. Just create the file and put it in the top folder of your git repo.

My .gitignore file is as follows:

userdata/*
!userdata/etc
!userdata/jsondb

That basically means ignore everything in userdata but go ahead and include userdata/etc and userdata/jsondb
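A subtle point behind this: the first pattern must be `userdata/*`, not `userdata/`. Once git ignores a whole directory it never looks inside it, so the `!` re-includes would be dead. You can check which rule applies to any path with `git check-ignore` (demo in a throwaway repo):

```shell
cd "$(mktemp -d)"
git init -q .
printf 'userdata/*\n!userdata/etc\n!userdata/jsondb\n' > .gitignore

# -v prints the .gitignore line responsible for the decision
git check-ignore -v userdata/cache
git check-ignore -v userdata/etc || echo "userdata/etc is NOT ignored"
```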

Just create the file and put it in the root of your checked out git repo.

  2. I don’t understand the question. It doesn’t make sense in a Docker context. Please see https://docs.docker.com/engine/admin/volumes/volumes/ and https://hub.docker.com/r/openhab/openhab/

These are not bitbucket questions. They indicate a lack of understanding on how Docker works. Read the links above and http://docs.openhab.org/installation/docker.html. Everything is described in detail in these links. If you have specific questions about Docker after reading those you would be better served by posting a new thread.

Hi @rlkoshak, thanks man, it’s all working now.
I forgot to run “git rm -r --cached .” after I modified the .gitignore file; that’s why the .gitignore wasn’t working.
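For anyone hitting the same thing: .gitignore only affects untracked files, so anything already committed keeps being tracked until you untrack it explicitly. The full sequence looks roughly like this (throwaway repo for the demo):

```shell
cd "$(mktemp -d)"
git init -q .
git config user.email "demo@example.com"; git config user.name "demo"

mkdir -p userdata/cache
echo junk > userdata/cache/blob
git add -A && git commit -qm "cache committed by accident"

# Adding the ignore rule alone is not enough...
printf 'userdata/cache/\n' > .gitignore

# ...the path also has to leave the index (--cached keeps it on disk)
git rm -r -q --cached userdata/cache
git add .gitignore
git commit -qm "stop tracking userdata/cache"

git ls-files | grep cache || echo "cache no longer tracked"
```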

That basically means ignore everything in userdata but go ahead and include userdata/etc and userdata/jsondb

This answers my questions very well, thanks. Please ignore question 2.

I just need to know which folders I should back up and which I should ignore. For your information, are the /etc and /jsondb folders the only ones that should always be backed up? Also, may I ask why you ignore the /userdata/config, /kar and /persistence folders? I read somewhere that the /persistence folder includes the item states (I have mapdb and rrd4j installed, which store every item’s last state for when the Pi recovers from losing power, but I am not exactly sure of that; I do have a mapdb folder inside /persistence). Also, what exactly does the /userdata/kar folder do in openHAB? I can only see a folder named “openhab-addons-2.1.0” inside /kar, but I have no idea what it is for.

again so sorry for the noob questions ^.^

Shane

Those are the ones I include on GitHub. But there is a distinction between a full backup and just configuration-controlling the things I have modified. If you are using HABPanel, there are almost certainly other folders to include. If you are not using addons.cfg then you probably need to back up your installed bindings and such as well.

There is a backup and restore script that now comes with the snapshot. You could look at that script and see what folders it backs up.

Because I don’t use git for backup. I use it to configuration control only those files I’ve edited. My intent is to use what is stored in git to reconstruct OH, not to back it up.

Also, the only persistence I use that gets stored there is MapDB which is only used for restoreOnStartup. I don’t really care if I lose this data.

I think that is where the offline bindings kar file gets stored. I use online (i.e. when I install a binding it gets downloaded on demand).

@lysol Thank you for the alternative method you mentioned! I tried this, but since I cannot create directories under /root on my Pi, I created the SSH key in /home/username/.ssh.
I also made my data folder at /home/username/data and git cloned into it.
I successfully pushed to Bitbucket manually and have the “first push” note on Bitbucket.

But I am unable to run the script; even when logged in as root, it gives me permission denied. Can you help me figure out what is wrong? Do I have to chown it? If yes, what permission combination should I use?

here is my openhab2backup (renamed to ohbackup for simplicity):

#This script will automatically backup the necessary openhab2 configuration files to a secondary location on this server and then push to Bitbucket. Do not rely on this local secondary backup location for a backup as it still resides on the local system.

confdir="/etc/openhab2"
userdatadir="/var/lib/openhab2"
runtimedir="/usr/share/openhab2"
logfolder="/home/openhabian/data/logs"
logfilename="oh2backup_$(date +%Y%m%d_%H%M%S)"
oh2backupresult=$logfolder/$logfilename
backuploc="/home/openhabian/data/openhab2backup"

#Make the log folder if it doesn't already exist and create the file that will contain the backup results.
mkdir -p $logfolder
touch $oh2backupresult
echo "---------------------------Starting--------------------------" >> $oh2backupresult

#Sync Directories:
echo "-----------------Syncing openHab2 Directories----------------" >> $oh2backupresult
/usr/bin/rsync -r $confdir $backuploc/configuration
echo "$confdir --> $backuploc/configuration" >> $oh2backupresult
/usr/bin/rsync -r $userdatadir $backuploc/userdata
echo "$userdatadir --> $backuploc/userdata" >> $oh2backupresult
/usr/bin/rsync -r $runtimedir $backuploc/runtime
echo "$runtimedir --> $backuploc/runtime" >> $oh2backupresult

#Add Untracked Files:
echo "--------------------Adding Untracked Files-------------------" >> $oh2backupresult
cd /home/openhabian/data/ohbackup
/usr/bin/git add configuration
/usr/bin/git add userdata
/usr/bin/git add runtime

#Commit:
/usr/bin/git commit -m "generated files on `date +'%Y-%m-%d %H:%M:%S'`" >> $oh2backupresult

#Push:
echo "---------------------Pushing to Bitbucket--------------------" >> $oh2backupresult
/usr/bin/git push >> $oh2backupresult
echo "-----------------------Backup Complete-----------------------" >> $oh2backupresult

Also, how do I change the log behavior? Is it possible to not log at all?
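On the no-log-at-all question: since every line of the script appends to `$oh2backupresult`, one way is simply pointing that variable at /dev/null (a sketch, reusing the script's variable name):

```shell
# Everything the script appends with >> now goes nowhere
oh2backupresult=/dev/null

# These mirror the script's echo lines; /dev/null swallows them and never grows
echo "---------------------------Starting--------------------------" >> $oh2backupresult
echo "-----------------------Backup Complete-----------------------" >> $oh2backupresult
```

The script's `mkdir -p` and `touch` lines are harmless against /dev/null, so the rest of it can stay untouched.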

@rlkoshak can you guide me on how to use symbolic links and how to implement them here? All I can find is how to create symbolic links to files, not folders. I also don’t understand what should be modified from lysol’s guide to make it work. I don’t think I use Docker; I use the openHABian image running on a Pi.

I won’t ask about git-encrypt yet; I think I will ask people on another forum about that so I don’t hijack this thread.

@mstormi Off topic here: Hello, I am having a problem setting up AWS for the Amanda backup. Can you point me to a tutorial for making it work with AWS? I have only found the Amanda + NAS guide. My S3 bucket already has tape1 to tape15 generated by the Amanda installation, but no backup is uploaded to AWS.