openHABian setup with NFS boot - what backup options?

Hi,

I’m in the process of finalizing my RPi + openHABian setup with NFS boot (for the time being, I chose the option of keeping /boot on the SD card and the system partition on my Synology NAS, NFS-mounted).
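
For context, this is roughly what such a setup boils down to (the NAS IP and export path below are made up for illustration; adjust them to your own environment):

```
# /boot/cmdline.txt on the SD card - tell the kernel to mount its
# root filesystem over NFS from the NAS
console=serial0,115200 console=tty1 root=/dev/nfs nfsroot=192.168.1.10:/volume1/rpi-root,vers=3 rw ip=dhcp rootwait

# /etc/fstab inside the NFS root - /boot still comes from the SD card
proc                            /proc  proc  defaults  0  0
/dev/mmcblk0p1                  /boot  vfat  defaults  0  2
192.168.1.10:/volume1/rpi-root  /      nfs   defaults  0  0
```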

I have remote backup and versioning activated on the NAS for my other stuff (Synology cloud + Google Drive) and was thinking I could simply include the Raspberry system partition in this backup strategy - this way, I could go back in time in hourly steps in case I needed to restore something.

The question is: is this going to work? Since the remote backups would take place while the openHABian system is active, I’m worried I might not really get something that is restorable.

Thanks,

Yes, this is what Amanda (the backup solution that comes with openHABian) does as well.

I’d opt against your setup, though. I ran like that for quite some time, too, but it kept showing that there are dependencies (on the NAS, network, power) that you don’t suffer from if you run it all in one box.
That’s why there’s now ZRAM to mitigate the risk of SD corruption.
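
To check whether ZRAM is actually active on a box, util-linux ships a small tool for it; nothing openHABian-specific assumed here:

```
# List active zram devices (compressed RAM block devices) and their usage
zramctl
# Show which devices, if any, are in use as swap
swapon --show
```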

Thanks Markus for the feedback. I have to say your message is a bit of a cold shower :slight_smile: as I was convinced I was going the right way. Thanks a lot for sharing your experience.
I’ll run some further tests to see how it goes, even if, from your message, I understand that the dependencies you are referring to only become visible and impactful over a longer time frame, so my tests might not be very conclusive.

Well, in the end it’s simple: both systems have to run for your automation to work, so there’s twice the risk of your installation failing and twice the effort (UPS, backup, …) to mitigate that risk.

Yes, I understand. Thanks for elaborating. Makes full sense.

The other consideration I’m weighing is ease of use and error-proneness for someone like me ;-)
I read the entire Amanda README once more. Thanks a lot for all the info in there; it is very useful (and a lot of it is applicable to non-Amanda-based setups, I think).

My perception is that the end-to-end to-do list of things to know and remember is still rather long for me (including the restore part), versus keeping entire “system partitions” versioned and backed up by the NAS and being able to restore any of them to any point in time (using the NAS’s built-in restore tool, fully graphical, no command line, etc.).

I have one related question (applying to both Amanda- and Synology-like backups, I think):
When backing up the system partition of an active system, if there are changes between the beginning and the end of the backup, couldn’t that lead to inconsistent backups?

Yes in theory, no in practice.
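
If you want belt and braces anyway, quiesce openHAB around the backup window so nothing is written mid-backup. A rough sketch (the service name and the 4 AM window are assumptions; adjust to your install and schedule):

```
# /etc/cron.d/backup-quiesce (hypothetical) - stop openHAB shortly
# before the backup window opens and restart it once it has closed
55 3 * * * root systemctl stop openhab2 && sync
30 4 * * * root systemctl start openhab2
```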

OK, good for me then :-)

Quick update on this topic: I have been running with the following config for 3 weeks now:

  • 3 RPis booting from their SD cards, each with a root filesystem on the Synology NAS (one hosting the Z-Wave stick in a remote location, one running openHABian, and one a Raspbian config used for some testing) - the NFS exports sketch below shows the NAS side
  • I upgraded to an RPi 4, figuring that its 1 Gbit/s LAN interface would not hurt my setup
  • I have the NAS doing remote backups (using Hyper Backup for all my personal files, including the RPi filesystems), with the possibility to go back in time on an hourly basis
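
On the NAS side, the NFS permissions are set through DSM’s shared-folder settings, but underneath it boils down to an exports entry along these lines (path and subnet are illustrative; no_root_squash matters, because the Pi’s root user must be able to own its own files):

```
# /etc/exports on the NAS - one line per exported RPi root filesystem
/volume1/rpi-root 192.168.1.0/24(rw,sync,no_root_squash,no_subtree_check)
```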

I have done various restore tests. What I found out is that running “sudo shutdown” on the RPi is not enough; I had to power it off entirely. Then restores work 100% of the time: I just restore from Hyper Backup, power up the RPi, and voilà.
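
For anyone repeating this, the sequence that worked reliably for me before a restore looks roughly like this (the openHAB service name depends on your install; mine is openhab2):

```
# Stop openHAB first so nothing is still writing to the NFS root
sudo systemctl stop openhab2
# Flush any remaining dirty pages out to the NAS
sync
# Halt and power off the board (then cut power entirely to be safe)
sudo shutdown -P now
```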

Overall, it fits my needs decently well; a bit brute force, but I like the comfort so far…

I’m now checking whether I could do a hybrid mode where I have remote backups (for disaster recovery) and local backups on the same NAS to cope with “minor disasters”, like when I screw up the configs, to see if I can shorten the restore time (some restores take > 20 minutes today; if it could be faster, it would not hurt).
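
One option I’m looking at for the local side is a simple hard-link snapshot job running on the NAS itself; a sketch of the idea (paths and schedule are hypothetical, and it assumes rsync is available on the NAS):

```
#!/bin/sh
# Hourly local snapshot of the RPi root export using rsync hard links:
# unchanged files cost no extra space, and any snapshot can simply be
# copied back over the export for a fast local restore.
SRC=/volume1/rpi-root/
DEST=/volume1/rpi-snapshots
NEW=$DEST/$(date +%Y-%m-%d_%H%M)

mkdir -p "$DEST"
rsync -aHx --delete --link-dest="$DEST/latest" "$SRC" "$NEW"
# Point "latest" at the snapshot we just made
ln -sfn "$NEW" "$DEST/latest"
```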