Amanda, restore after disaster

As a beginner, it was not that easy, but thanks to the docs and the threads in this community, I have Amanda up and running on a test rpi with openHABian. Amanda writes its data daily to my Synology NAS. My first aim is to be able to reproduce my full rpi instead of just recovering a selection of files (I recently had some issues due to a power surge). So I have activated the option to create raw partitions. I've tested the recovery and could perfectly recreate my SD card on another SD and have a copy of my rpi running again.
BUT that is easy to do if your rpi with the 'old' SD card is still up and running. You use your still-existing rpi with the existing Amanda installation to run amfetchdump and create an image file on your NAS.

What I'm investigating is what to do if your 'old' rpi and SD card are completely broken, so you only have the files on your NAS left. Isn't that also an important use case of the backup?

I see two options:

  1. Proactively, have some kind of scheduled script that runs the amfetchdump command, for instance every week, so your NAS already has the image files readily available before disaster strikes.
    amfetchdump -p openhab-dir openhab /dev/mmcblk0 <YYYYMMDDhhmmss> > /mnt/ext-storage/amanda/images/<filename>
    But then you need a way to schedule this and also a way to use the exact timestamp of your last dump as a variable. Has anyone gone down this road before? (A rough sketch of such a script is further below.)
  2. Setting up a completely new openHABian rpi from scratch, installing a new Amanda and trying to recreate your image files from there. During this Amanda install, you cannot point Amanda to your already existing Amanda backup folder on the NAS (the installation will fail since it cannot create its folders, as they already exist). So the new Amanda does not know about the old backup files. Now I am lost how to proceed. The openHABian docs have a (brief) section on how to restore if things have gone really badly, but it pushes the user forward to explore the internet. I can understand that the openHABian docs should not provide a full Amanda training, but I'm afraid even the internet info is very limited, or I am searching the wrong way. So I hope there is someone around who still wants to give some more hints. I understand from the docs that it does not make sense to simply copy all the 'old' backup and slot files over to the folders which were created by the 'new' Amanda installation. You should rather try to tell the new Amanda how the old Amanda files are structured (indexed?). The hints in the openHABian docs are:
  • a. Amadmin import:
    The best link I could find is this link.

amadmin import/export
amadmin now has "import" and "export" commands, to convert the curinfo database to/from text format, for: moving an Amanda server to a different arch, compressing the database after deleting lots of hosts, or editing one or all entries in batch form or via a script.

For a beginner, this is still very limited. It looks like you should have done an export first. But that was not scheduled on the old Amanda.

  • b. Amindex: no info to be found for this command, and when trying it in PuTTY, it turns out not to be a command either.
    bash: amindex: command not found

  • c. There should be a way to find your index files from this file: /etc/systemd/system/amandaBackupDB.service
    Below is the content of this file on my new rpi. Which conclusions can I draw from that?

[Unit]
Description=Make nightly backup of Amanda database
After=network.target network-online.target
Wants=amandaBackupDB.timer

[Service]
Type=oneshot
User=backup
Group=backup
ExecStart=/bin/bash -c 'cd /; /bin/tar czf /mnt/mounttonas/amandanieuw2/amanda-backups/amanda_data_$(date +%%Y%%m%%d%%H%%M%%S).tar.gz etc/amanda var/lib/amanda var/log/amanda; find /mnt/mounttonas/amandanieuw2 -name amanda_data_* -mtime +30 -delete' &> /dev/null
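
Coming back to option 1: a weekly script could look roughly like the sketch below. This is untested and only meant as a starting point; it assumes the config name openhab-dir, the DLE openhab /dev/mmcblk0 and the image folder from my example command above, and it guesses at the column layout of the amadmin find output, so check that output on your own system before trusting the parsing.

#!/bin/bash
# Untested sketch: fetch the most recent dump of the raw SD card DLE and
# store it as an image file on the NAS. Meant to be run as the "backup"
# user, e.g. from /etc/cron.weekly. Config, host, disk and paths are the
# examples from above - adjust to your own setup.
CONFIG=openhab-dir
HOST=openhab
DISK=/dev/mmcblk0
DEST=/mnt/ext-storage/amanda/images

# "amadmin CONFIG find HOST DISK" lists all known dumps; the data lines are
# assumed to start with "YYYY-MM-DD HH:MM:SS". Turning that into the
# YYYYMMDDhhmmss form that amfetchdump expects is the fragile part - verify
# the output format of your Amanda version first.
last=$(amadmin "$CONFIG" find "$HOST" "$DISK" \
        | grep -E '^[0-9]{4}-[0-9]{2}-[0-9]{2}' \
        | sort | tail -1 \
        | awk '{print $1 $2}' | tr -d ':-')

if [ -z "$last" ]; then
    echo "No dump found for $HOST $DISK" >&2
    exit 1
fi

amfetchdump -p "$CONFIG" "$HOST" "$DISK" "$last" > "$DEST/sdcard_$last.raw"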

Many thanks for any hints.

That's typical and standard cron job stuff. Here's an example cron job I use to back up my Calibre library.

#!/bin/bash
echo "Backing up calibre"

# Create a dated tarball of the library on the NAS-mounted backup share
file=/srv/backups/calibre/calibre-$(date +%Y-%m-%d_%H%M).tgz
cd /srv/calibre || exit 1
tar cfz "$file" .

# Gather the archive size and a table of contents for the notification email
fsize=$(ls -lh "$file" | cut -d ' ' -f 5)
toc=$(tar tfz "$file")
body=${file}'\nBackup size: '${fsize}'\n\nContents:\n'${toc}

# Mail the summary to myself
sendmail=/usr/sbin/sendmail
email=rlkoshak@gmail.com
to='To: '$email'\n'
from='From: '$email'\n'
subject='Subject: calibre Backed Up\n\n'
msg=${to}${from}${subject}${body}

echo -e "$msg" | $sendmail $email

I've dropped that script into /etc/cron.monthly/backup-calibre and it runs once a month. /srv/backups is mounted from my NAS as an NFS share.

It's not really a backup if it's on the same machine it was taken from. I don't use Amanda myself, but is there a reason you can't have it back up to your NAS in the first place instead of needing to back up and then transfer?

Beyond that I can't be of much help. I don't use Amanda and have a different backup strategy. But I wonder if the way it's intended to be set up is to use Amanda and SD card mirroring. Amanda takes the incremental backups, but in the event of a disaster where you have to start over from scratch, you'd first restore from the mirrored SD card image and then restore from Amanda to get the latest and greatest.

There are many possibilities, but no need to overcomplicate things.
First, you should be using SD mirroring so you will have a working copy of the current database and can use standard amrestore.
Second, see /etc/systemd/system/amandaBackupDB.service. There's a timer that runs this at regular intervals and copies the Amanda database to your Amanda storage area.
If in need, reinstall Amanda and copy the database back.
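
To see whether that timer is actually set up and when it will fire next, the standard systemd commands can be used (unit names taken from the service file quoted above):

# Show the timer state and the next scheduled run
systemctl status amandaBackupDB.timer
systemctl list-timers | grep -i amanda

# Show the service and timer definitions
systemctl cat amandaBackupDB.service amandaBackupDB.timer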

Thanks for the suggestion with the script! I have not created any scripts so far but will investigate that for sure.

This is a misunderstanding. My Amanda is backing up to my NAS. My NAS is mounted on the rpi and Amanda is putting its files there. But the thing is, that does not mean there is an image file readily available on the NAS just like that. You still need a few manual commands with your existing (but potentially broken) Amanda before you have such an image file to flash to an SD card. As far as I understand, without this "preparation" you'll not be able to recreate your SD card from the files which Amanda has written to the NAS.

This is confusing. With "SD mirroring", are you referring to the openHABian menu option 53, or the option in Amanda itself to create a raw backup?
FYI, I've also played around with that option 53 and got it working. It's far easier and more user-friendly than setting up Amanda. I deliberately did not mention this option in my opening post, to avoid the topic becoming an evaluation of pros and cons, Amanda vs SD mirroring. For several reasons I also wanted to fully understand and test Amanda on its own. The main reason is that if we believe the rpi is vulnerable, and the internal SD card is vulnerable, why would an external SD card in a USB card reader attached to that same rpi not be vulnerable? But again, that's an aside.

  • For a proper understanding: when you say "storage area", you mean the folder where Amanda puts the backup data, right? So the NAS in my case.

  • I checked that file (see my post above), but what exactly do I do with this information? There are a bunch of paths in there. Should I have changed something in here, on the 'old' rpi, so there would be additional files stored on the NAS?

  • All I have on my NAS is

    • a folder "Amanda-backup" which has a …tar.gz file

    • a folder "slots" with the 15 slot folders and 1 file "state". Each slot folder has 5 files, always with the same extensions.

Is it one of those files that you refer to as "the Amanda database"? Which file do I copy to where?

Vulnerable to what exactly? You have to know what you are protecting against to know what to do to mitigate it, and assess whether what you are doing is actually mitigating the problem adequately.

Are you worried about the SD card wearing out? The Zram config already does a good job of mitigating that. But even if it didn't, you are not writing to that second card that much so it's not going to wear out.

Are you worried about corruption caused by a power outage? Again, the Zram config already does a pretty good job of preventing that. But even if it wasn't, it's only a problem for the second card if you happen to be taking an image at the moment power is lost.

Are you worried about a cinderblock falling and crushing your RPi? Then once the mirror image is taken, remove the card and put it somewhere safe.

This is important because you have to determine what you are trying to mitigate with the Amanda backups too.

SD card mirroring and Amanda are not mutually exclusive. They each help mitigate different vulnerabilities.

Yes. As Rich replied, Amanda and SD mirroring aren't mutually exclusive; the official recommendation is to use both. It's even in the docs.

I mean what you entered as "storage area" when you were asked at Amanda installation time; that's a directory, in your case on your NAS.

Change to the root dir and extract the latest of the .tar archives from the amanda_backups directory.
WARNING: this is Linux-level and not a tested procedure. I assume you understand the implications, so be careful: check the contents before it overwrites something important, or better, extract in some safe location first.
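
To make that concrete, something along these lines (untested; the archive name and path are just the examples from the posts above):

# Look inside first - the paths in the archive are relative
# (etc/amanda, var/lib/amanda, var/log/amanda)
tar tzf /mnt/mounttonas/amanda-backups/amanda_data_20220214020143.tar.gz

# Either extract to a staging directory to inspect it safely...
mkdir -p /tmp/amanda-restore
tar xzf /mnt/mounttonas/amanda-backups/amanda_data_20220214020143.tar.gz -C /tmp/amanda-restore

# ...or extract directly from / so the files land back in
# /etc/amanda, /var/lib/amanda and /var/log/amanda
cd /
sudo tar xzf /mnt/mounttonas/amanda-backups/amanda_data_20220214020143.tar.gz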

PS: non-English autocorrection is a pita

From the hints above and Google, I managed to restore an image of my test openHABian instance.

In case it could ever help anyone, here is briefly what I did. At your own risk, though.

After your 'old' openHABian has been completely lost (let's say you had a flood in the location where your rpi, but not your NAS, was stored) and you are only left with the Amanda files on your NAS:

  1. Take a copy of your 'old' Amanda files and save them somewhere safe (just to be sure, in case you accidentally overwrite them from your new openHABian install).
  2. Set up a brand new openHABian Raspberry Pi. Install Amanda. During the install, you have to point Amanda to a different storage location than your old one (or the installation will fail).
  3. Fetch the old tar.gz file of your choice and copy it to the root directory (/):
    sudo cp /mnt/mounttonas/amanda-backups/amanda_data_20220214020143.tar.gz /
  4. Extract it (from /, so that the contents land back in /etc/amanda, /var/lib/amanda and /var/log/amanda):
    sudo tar -zxvf amanda_data_20220214020143.tar.gz
  5. Now, do not run any backups, or I believe you risk overwriting your old files. It's also not what you want, since this new openHABian install does not have any config.
  6. su backup
  7. amadmin openhab-dir find openhabian /dev/mmcblk0
  8. Restore the backup of your choice:
    amfetchdump -p openhab-dir openhabian /dev/mmcblk0 20220214154137 > /mnt/mounttonas/restoretest15022022
  9. Add the .raw extension and use balenaEtcher to make a new SD card. Now you will have your 'old' openHABian back, which will continue to write backups to your original storage area.
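
For reference, steps 3 to 8 could be wrapped in a small script roughly like this (untested sketch; config name, host, disk, timestamp and paths are the ones from my test setup and will be different on yours):

#!/bin/bash
# Untested sketch of steps 3-8 above - adjust everything to your own setup.
set -e

ARCHIVE=/mnt/mounttonas/amanda-backups/amanda_data_20220214020143.tar.gz
CONFIG=openhab-dir
HOST=openhabian
DISK=/dev/mmcblk0
OUT=/mnt/mounttonas/restoretest.raw

# Steps 3+4: restore the old Amanda database; the archive holds
# etc/amanda, var/lib/amanda and var/log/amanda relative to /
cd /
sudo tar xzf "$ARCHIVE"

# Step 7: list the dumps the old database knows about
sudo -u backup amadmin "$CONFIG" find "$HOST" "$DISK"

# Step 8: fetch the chosen dump as a raw image
# (the timestamp is an example - pick one from the list above)
sudo -u backup amfetchdump -p "$CONFIG" "$HOST" "$DISK" 20220214154137 > "$OUT"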

Not claiming that this is the best or most optimal way, but it is at least a way for a non-expert.


I realize this topic is super old, but I was wondering if there are any updates here?

I just installed Amanda and started testing it. The amfetchdump process creates an image that doesn't fit on a same-size SD card. I had to jump through hoops to resize and truncate the image before it would fit.

I would script it all and always have a clean image stored on my NAS if the whole process didn't require gparted to resize the image file.
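
For the truncating part at least, if the problem is only unallocated space at the end of the image (and not the root partition itself being too big for the new card), it can be scripted without gparted. A rough, untested sketch; the image name and the numbers are placeholders:

# Check the partition table; note the sector size and the "End" sector
# of the last partition
IMG=/mnt/nas/images/openhabian.raw
fdisk -l "$IMG"

# Cut the image just past the last partition (placeholder values taken
# from the fdisk output above)
END_SECTOR=15523839
SECTOR_SIZE=512
truncate -s $(( (END_SECTOR + 1) * SECTOR_SIZE )) "$IMG"

If the root partition itself is larger than the target card, the filesystem and partition still have to be shrunk first (resize2fs plus a partitioning tool), which is what gparted was doing for me.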

Has anyone found a solution where the outcome is an image file that can be put on a new card if the current system running Amanda fails?

Thanks,
Jerry

That's not the right approach.
You should be using SD mirroring so you have a secondary system with an up-to-date Amanda database.
Amanda also stores a copy of its DB via cron that you can copy off-site and copy back when in need.