Unattended restore of backup (without asking for confirmation)?


I have a secondary server that’s a pure offline backup; every time I switch the server on, it automatically syncs with the master server.

For openHAB I’m currently stopping the openhab service, rsyncing the /var/lib/openhab/persistence/ directory, and exporting the full configuration on the master server with openhab-cli. I want to restore the config on the backup server without being asked whether I’m sure that this will delete the previous configuration.

Is there a switch so that this part can run in a script without interaction?

Any objections to my backup strategy for openHAB? The backup server is a pretty old and loud machine that’s really only good as a “tape-archive”-like backup, but in case the primary server fails I want an easy way of restoring everything, and with a live copy I can check the integrity of the backup every now and then.

If the program doesn’t offer an option to skip prompting, maybe try this: How can I respond to prompts in a Linux Bash script automatically? - Stack Overflow

PS: Try openhab-cli restore --noninteractive

The --noninteractive option does not seem to have an effect, but echoing a "y" into the CLI did do the trick (echo "y" | openhab-cli …), thank you for pointing it out.
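For reference, here is the same trick in general form. The `confirm_demo` function below is just a stand-in for any command that asks a y/N question on stdin (such as `openhab-cli restore <file>.zip`):

```shell
#!/bin/sh
# Stand-in for any command that asks a y/N question on stdin,
# e.g. "openhab-cli restore <file>.zip".
confirm_demo() {
  printf 'This will delete the previous configuration. Continue? [y/N] '
  read -r answer
  [ "$answer" = "y" ] && echo "restoring..."
}

# Feed exactly one "y" into the command's stdin:
printf 'y\n' | confirm_demo

# Or answer "y" to however many prompts the command raises:
yes | confirm_demo
```

`printf 'y\n'` answers a single prompt; `yes` keeps answering until the command stops reading, which is handy when you don't know how many confirmations will come.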

Well, it’s you who has to live with it, so who are we to object?
Apart from the fact that you have to shut down OH, have you ever tested that it works?
Quite a bit of stuff may start failing when your IP changes.

Also, you should be aware that a reasonable disaster-preparation strategy is much more than an application-level config export. Think hardware, OS, and the 3rd-party software you need.

The IMHO best automated and cheapest solution is to use openHABian on a Raspberry Pi.
It has a storage-card mirroring feature, too.
BTW, that is based on rsync, too. No need to shut down OH in order to sync.


If that’s your approach, why not use rsync for everything? I don’t see that openhab-cli does much for you in this case, and there is a lot to be said for reducing the variety of ways you do things. There really isn’t anything special about openhab-cli backup: it just zips up /etc/openhab and parts of /var/lib/openhab.
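For illustration, a pure-rsync variant of the pull could look roughly like the sketch below. The hostname `master-server`, the Debian-package paths, and the exclude list are assumptions; adjust them for your install:

```shell
#!/bin/sh
# Sketch: sync the complete openHAB state from the master with rsync alone.
# "master-server" and the Debian-package paths are assumptions.
set -eu

sync_from_master() {
  master="$1"

  systemctl stop openhab.service

  # --delete keeps the local copy exact; cache/ and tmp/ are rebuilt by
  # openHAB on startup, so there is no point in copying them.
  rsync -a --delete "${master}:/etc/openhab/" /etc/openhab/
  rsync -a --delete --exclude 'cache/' --exclude 'tmp/' \
        "${master}:/var/lib/openhab/" /var/lib/openhab/

  systemctl start openhab.service
}

# Example: sync_from_master master-server
```

One procedure then covers configs, jsondb, and persistence alike, instead of mixing rsync and openhab-cli.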

One thing you don’t mention though is whether/how you keep the running versions of all the software up to date on the backup server too.

[quote=“mstormi, post:4, topic:154089, full:true”]
Apart from the fact that you have to shutdown OH, have you ever tested it works ?
Quite some stuff may start failing when your IP changes.[/quote]
Everything works fine, except the bindings whose hardware is physically not connected.
The servers run on DHCP and name resolution is used to get the current IP address. There will be no static IP addresses for any server in the future, so a changing IP address shouldn’t be an issue in the first place (IPv6 is not meant to be used with static addresses).

[quote=“mstormi, post:4, topic:154089, full:true”]
Also, you should be aware that a reasonable disaster preparation strategy is much more than application level config export. [/quote]
This is my disaster strategy; it’s cheap and easy to implement. When something goes horribly wrong, I plug in the second server and everything should be up and running in a matter of (insert time for unplugging and plugging in everything that’s needed).

That’s strange; in a lot of places shutting down OH is strongly recommended when syncing the rrd4j database, as a lot of folks had issues with data loss when OH was running during the sync.
And openhab-cli restore does not work while OH is running.

Even other server applications require a shutdown for any sort of restore, so I don’t see a problem here. The server is an offline copy, so uptime is not an issue.

Here is my current script:

#!/bin/sh
set -eu
configfile="/root/OpenHabBackupConfig$(date +"%Y-%m-%d")"
# Stop the local (backup) instance, then pull a fresh backup from the master.
systemctl stop openhab.service
ssh master-server "openhab-cli backup --full ${configfile}"
rsync -a master-server:/var/lib/openhab/persistence/ /var/lib/openhab/persistence/
scp master-server:"${configfile}.zip" /root/
# Pipe "y" to answer the confirmation prompt automatically.
echo "y" | openhab-cli restore "${configfile}.zip"
systemctl start openhab.service

Every piece of backup advice for OH recommends handling the database separately, because it’s up to the user what kind of database is used and, according to the docs, openhab-cli backup does not take care of that.

On the other hand, we do not know whether the backup/restore of OH will change in the future; it might work like that right now, but it could change with every release. Using the predefined way that’s included in OH is the only “correct” way. Take, for example, a change in the way certs are handled, or new directories being added. It would be cumbersome and error-prone to track all those changes in a personal backup script when openhab-cli backup is already taking care of this.

Btw: because it’s a simple dedicated server (yes, something like this still exists today), I cannot use Terraform or CloudFormation or any other repository-based configuration. That would be the best way.
I’m currently updating both servers independently, so the software version is not exactly the same, but that gives me the chance to review certain changes.

Zip doesn’t handle files changing out from under it while it’s creating the archive. Technically neither does rsync, but if a file changes while rsync is transferring it, the whole rsync doesn’t get cancelled (IIRC). And because it’s done on a file-by-file basis, you are much less likely to encounter that problem than with zip, where if any file changed the whole archive is messed up.

The big problem with rrd4j is that it’s almost constantly being written to, so the likelihood that a write is happening while creating a backup is relatively high.

On the restore side, OH doesn’t read from most of the managed configs (i.e. the stuff in /var/lib/openhab) except during OH boot. Furthermore, managed configs are often modified by the upgradeTool to adjust them for breaking changes during an upgrade. Given both of these, yes, stopping OH prior to a restore is required.

That’s a fairly bold statement. It is in fact the case that openhab-cli does back up the embedded databases: SQLite, rrd4j, and MapDB. All of the files associated with these databases are in /var/lib/openhab. It’s only external databases like PostgreSQL, MongoDB, InfluxDB, etc. that are stored outside of OH and therefore openhab-cli cannot handle.

Can you point to the place in the docs that says openhab-cli doesn’t handle the embedded databases? If it’s not clear that it’s talking about external databases that needs to be adjusted to be made clear.

And yet in practice it hasn’t changed since OH 2.0 (that’s eight years ago for those counting).

Then why did you say this?

Any objections to my backup strategy for openHAB?

If you just want to start an argument I’m sure there are plenty of other places on the Internet more than willing to give you one. If you are just looking for praise for your “clearly superior” backup strategy I’m sure you can find other places on the Internet willing to cater to your ego. Heck, you probably could have just removed the request for feedback from this thread and likely would have received some likes and “atta boys!”

But if you come here and ask for opinions, it’s kind of rude to reject them out of hand and claim you have “the only correct” one. Especially when you are talking with users in this thread who have decades of combined experience with openHAB.

Not really. If you rsync all of /etc/openhab and all of /var/lib/openhab you get everything. When you spin up your cold backup server it will pick up right where it left off. Note this is how (and the only way that it could) work in the Docker deployment too.

But what do I know, I’ve only been using OH for a decade.

[quote=“rlkoshak, post:8, topic:154089, full:true”]
If you just want to start an argument I’m sure there are plenty of other places on the Internet more than willing to give you one. [/quote]

Only your question brought that up; while answering it I thought about why I did it that way, and figured out that, without giving it a second thought, it was already the best way (if there is no other included backup script). Never implement something again that is already implemented in a maintained software package. That’s one of THE most basic rules in IT. So yes, by definition it’s correct.

But I was not aware that openhab-cli already takes care of the embedded databases.
Your answer here:

is the first hit on Google when looking into the topic, and when someone writes “I think it takes care of that” it is not very reassuring, so it reads more like advice to do the database backup separately.

So there is nothing rude here. Actually, we figured a few things out here:

  • openhab-cli will back up the embedded databases
  • restore can be automated
  • we should shut down openHAB during that procedure
  • in theory one could back up everything with rsync, but that’s a no-go in multiple regards (not IT-secure, not failsafe, manual implementation of file-locking mechanisms is error-prone)
  • there might be some issues with IP address changes (even though that should not happen when everybody is adhering to the RFCs)

I’m not familiar with openhab-cli. Is it part of the openhab package for Linux? Is it a part of openhabian?

Afaik, it is not part of the main openhab zip distribution, nor of Docker.

I really like the simplicity of this, and this way of using openhab-cli should continue to be supported and perhaps also be the recommended way of backup/restore especially for those unfamiliar with openhab.

@rlkoshak other than the fact that this doesn’t backup external persistence data (MySQL, influx etc) are there any issues with this?

Edit: ideally the master server should be shut down prior to taking the backup. But it would be nice if it can be made so this doesn’t have to be done. Some form of disk flush on the core would be good.

How often do you do that? Maybe a more regular backup on the live server is also a good idea, i.e. just create a backup file stored on the master server; then the backup server can copy from there when it’s turned on, without disturbing the master server.
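That could look roughly like the sketch below: a nightly cron job on the master writes a dated archive while openHAB keeps running, and the backup server just picks up the newest one when it boots. The /var/backups/openhab location and the 14-day retention are assumptions:

```shell
#!/bin/sh
set -eu

BACKUP_DIR="/var/backups/openhab"   # assumed location on the master

# On the master, e.g. run nightly from cron:
make_nightly_backup() {
  mkdir -p "$BACKUP_DIR"
  openhab-cli backup --full "${BACKUP_DIR}/config-$(date +%Y-%m-%d)"
  # Drop archives older than two weeks.
  find "$BACKUP_DIR" -name 'config-*.zip' -mtime +14 -delete
}

# On the backup server: the date stamp sorts lexicographically,
# so the newest archive is simply the last one.
latest_backup() {
  ls "$BACKUP_DIR"/config-*.zip | sort | tail -n 1
}
```

The backup server would then scp the result of `latest_backup` and restore it, without ever touching the master interactively.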

According to the docs, it should be part of the main package:

But it looks like it’s not part of the Docker image:

It’s strange though; I would have expected this to be a core component. Maybe it will be added in the future?

Both servers run on RAID 1, which for me is the easiest “backup” of all data. There is another rsync process running that backs up the home partition (which is way more important to me than OH) to a geographically different location, but I do not want to clutter that backup with OH. That’s the reason for the offline server, which is also the tertiary backup for my home data.

One question that needs verification:
Do we need to shut down the primary server to back up the data? openhab-cli does not complain when the server is still running. Is there a risk of data loss? Should openhab-cli take care of the shutdown and restart of the server? (That would be the “expected” way, as other tools like mysqldump also work on a live server and take care of data integrity internally.)

I’ve done some digging. Here’s what I found

  • openhab-cli is part of the Linux package, not part of the main distribution zip file, nor part of Docker / openHABian. Perhaps openHABian installs the Linux package, and it therefore becomes available on openHABian-installed systems.

  • openhab-cli is basically a front end for various other scripts, including backup and restore. These last two scripts are part of the main distribution and are therefore available on all types of openHAB installations, including manual and Docker.
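So on installs without openhab-cli, the underlying scripts can apparently be invoked directly; something like the following sketch, where the /opt/openhab install location is an assumption:

```shell
#!/bin/sh
# Call the backup script shipped in the main distribution directly.
# /opt/openhab is an assumed install location; adjust to yours.
OPENHAB_HOME="${OPENHAB_HOME:-/opt/openhab}"

if [ -x "$OPENHAB_HOME/runtime/bin/backup" ]; then
  "$OPENHAB_HOME/runtime/bin/backup" --full /tmp/openhab-backup.zip
fi
```

The matching `runtime/bin/restore` script would be used the same way on the receiving side.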


No, you don’t need to shut down the primary server to back up the data.

  • The files in the conf folder are static - only changed by you, manually - so they are 100% safe to back up.
  • The files inside userdata/jsondb/ are written to when changes are made through the Main UI, i.e. when you’re actively modifying your openHAB config via the UI (adding things, editing items, etc.). As above, these are safe to back up while running. Metadata may be actively updated if you have rules doing so.
  • userdata/logs is indeed actively written to, so there may be a possibility of corruption, but the data in there isn’t that important.
  • userdata/persistence is written to by things like mapdb or rrd4j, so there’s a potential for issues there. If you are really concerned, you could stop the corresponding persistence bundles prior to taking the backup, though this may cause problems with rules relying on them. If you want to avoid this, use a client/server-based persistence service, e.g. InfluxDB, MySQL, PostgreSQL; they have a solid and separate backup/restore process.

It was intended to mean “open the zip file and check if you are not sure.” If you are going to take every instance of me writing “I think” to mean the complete opposite of what I said, you’re going to be very frustrated with every one of my posts.

When I say “I think” it means “I’m pretty certain what I’m about to say is 100% correct but I’m not in a position to physically verify it right now so you should go look at the docs or try it out for yourself.” But that takes too long to type and I figured saying “I think” was clear enough that what is said needs verification.

If I meant “don’t trust anything I’m about to say” I wouldn’t have responded at all.

There is nothing wrong with the approach.

My only objection was that while using rsync for the rrd4j database files is apparently fine, it is somehow deemed horrible to contemplate using it for all of the OH configs. My only suggestion was that openhab-cli doesn’t really add much over rsync, so doing it the same way for both would be both simpler and easier.

All the disadvantages listed above about rsync are worse with zip. And that’s all that openhab-cli backup and restore does. Zip up some files to backup and unzips them to restore.

It depends. It is probably best to do so to prevent any tiny chance of a file changing while it’s being backed up, potentially corrupting that file in the backup.

In practice, a tar, zip, or rsync while OH is running has a very low probability of the timing working out such that a file will become corrupted during that backup. The risk is low but it’s up to you whether or not to take that risk.

I’ve personally run a daily backup that just tars and gzips all the contents of my conf and userdata folders (which include mapdb and rrd4j) for about four years, and I’ve yet to experience any “file changed while reading” errors. I’m also unaware of anyone reporting on the forum, in the years since openhab-cli has been around, that a backup failed. YMMV.
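For reference, such a job can be as small as the sketch below; the folder locations and the seven-day retention are my choices, not anything canonical:

```shell
#!/bin/sh
set -eu

# Nightly tar backup of conf and userdata while openHAB keeps running.
backup_openhab() {
  conf_dir="$1"      # e.g. /etc/openhab
  userdata_dir="$2"  # e.g. /var/lib/openhab
  dest_dir="$3"      # e.g. /srv/backup/openhab

  mkdir -p "$dest_dir"
  tar czf "${dest_dir}/openhab-$(date +%Y-%m-%d).tar.gz" \
      "$conf_dir" "$userdata_dir"
  # Keep a week's worth of archives.
  find "$dest_dir" -name 'openhab-*.tar.gz' -mtime +7 -delete
}
```

Dropped into cron at a quiet hour (say 2 AM), this gives a rolling week of dated archives with no service interruption.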

A more dangerous risk of data loss would be if one is actively making configuration changes while the backup is taking place. If you change a config that has already been saved to the backup while the rest of your files are being backed up you might have an inconsistent state. But at 2AM the only writes taking place on most people’s OH is going to be the logs, persistence, and Item metadata in very rare cases (like @JimT points out) and the logs really don’t matter in terms of backup.

I think that would amount to a breaking change if it became the default behavior. I personally think the risk is low enough that stopping OH on backup shouldn’t be the only option. Maybe a flag could be added to shut down and restart OH on backup. It probably should be added to the backup.sh and backup.ps scripts too.