As a beginner, it was not that easy, but thanks to the documentation and the threads in this community, I have Amanda up and running on a test rpi with openhabian. Amanda writes its data daily to my Synology NAS. My first aim is to be able to reproduce my full rpi instead of just recovering a selection of files (I recently had some issues due to a power surge). So, I have activated the option to create raw partitions. I've tested the recovery and could perfectly recreate my SD card on another SD card and have a copy of my rpi running again.
BUT that is only easy to do if your rpi with the "old" SD card is still up and running. You use your still-existing rpi with the existing Amanda installation to run amfetchdump and create an image file on your NAS.
What I'm investigating is what to do if your "old" rpi and SD card are completely broken. So you only have the files on your NAS left. Isn't that also an important use case of the backup?
I see two options:
Proactively, have some kind of script running which runs the amfetchdump command, for instance every week, so your NAS already has the image files readily available before disaster strikes: amfetchdump -p openhab-dir openhab /dev/mmcblk0 <YYYYMMDDhhmmss> > /mnt/ext-storage/amanda/images/<filename>
But then you need a way to schedule this, and also a way to determine the exact timestamp of your last dump and use it as a variable. Has anyone gone down this road before?
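I have not seen anyone script this, so here is a minimal sketch of what such a weekly job could look like. The config name, host, disk and NAS paths are the ones used in this thread; the parsing assumes each dump line from `amadmin <config> find` starts with a "YYYY-MM-DD hh:mm:ss" date, which can differ between Amanda versions, so treat this as an untested starting point, not a finished solution:

```shell
#!/bin/bash
# Weekly job (e.g. dropped into /etc/cron.weekly) that pre-creates a
# flashable image on the NAS. Config/host/disk/paths are assumptions
# taken from this thread; adjust to your setup.

CONFIG=openhab-dir
HOST=openhab
DISK=/dev/mmcblk0
DEST=/mnt/ext-storage/amanda/images

# latest_ts: read "amadmin <config> find" output on stdin and print the
# most recent dump timestamp as YYYYMMDDhhmmss (the form amfetchdump
# expects). Assumes dump lines begin with "YYYY-MM-DD hh:mm:ss".
latest_ts() {
  awk '$1 ~ /^[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]$/ { print $1 " " $2 }' \
    | sort | tail -n 1 | tr -d ' :-'
}

if command -v amadmin >/dev/null 2>&1; then   # only on a machine with Amanda
  TS=$(amadmin "$CONFIG" find "$HOST" "$DISK" | latest_ts)
  if [ -n "$TS" ]; then
    amfetchdump -p "$CONFIG" "$HOST" "$DISK" "$TS" > "$DEST/openhab-$TS.raw"
  else
    echo "no dumps found for $HOST $DISK" >&2
  fi
fi
```

The guard around amadmin is just so the script fails quietly on a box where Amanda is not installed.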
Setting up a completely new openhabian rpi from scratch, installing a new Amanda and trying to recreate your image files from there. During this Amanda install, you cannot point Amanda to your already existing Amanda backup folder on the NAS (the installation will fail since it cannot create its folders, as they already exist). So the new Amanda does not know about the old backup files, and now I am lost how to proceed.

The openhabian documentation has a (brief) section on how to restore if things have gone really badly, but it pushes the user on to explore the internet. I can understand that the openhabian docs should not provide a full Amanda training, but I'm afraid even the info on the internet is very limited. Or I am searching the wrong way. So I hope there is someone around who is willing to give some more hints.

I think I understand from the documentation that it does not make sense to simply copy all the "old" backup and slot files over to the folders which were created by the "new" Amanda installation. You should rather try to tell the new Amanda how the old Amanda files are structured (indexed?). The hints in the openhabian documentation are:
a. Amadmin import:
The best link I could find is this link.
amadmin import/export
amadmin now has "import" and "export" commands, to convert the curinfo database to/from text format, for: moving an Amanda server to a different arch, compressing the database after deleting lots of hosts, or editing one or all entries in batch form or via a script.
For a beginner, this is still very limited. It looks like you should have done an export first, but that was never scheduled on the old Amanda.
b. amindex: I could not find any info on this command, and when I try it in PuTTY it is not recognized either: bash: amindex: command not found
c. There should be a way to find your index files via this file: /etc/systemd/system/amandaBackupDB.service
Below is the content of this file on my new rpi. Which conclusions can I draw from that?
I've dropped that script into /etc/cron.monthly/backup-calibre and it runs once a month. /srv/backups is mounted from my NAS as an NFS share.
It's not really a backup if it's on the same machine it was taken from. I don't use Amanda myself, but is there a reason you can't have it back up to your NAS in the first place, instead of needing to back up and then transfer?
Beyond that I can't be of much help. I don't use Amanda and have a different backup strategy. But I wonder if the way it's intended to be set up is to use both Amanda and SD card mirroring: Amanda takes the incremental backups, and in the event of a disaster where you have to start over from scratch, you'd first restore from the mirrored SD card image and then restore from Amanda to get the latest and greatest.
There are many possibilities, but no need to overcomplicate things.
First, you should be using SD mirroring, so you will have a working copy of the current database and can use standard amrestore.
Second, see /etc/systemd/system/amandaBackupDB.service. There's a timer that runs this at regular intervals and copies the Amanda database to your Amanda storage area.
If in need, reinstall Amanda and copy the database back.
Thanks for the suggestion with the script! Have not created any scripts so far but will investigate that for sure.
This is a misunderstanding. My Amanda is backing up to my NAS: the NAS is mounted on the rpi and Amanda is putting its files there. But that does not mean there is an image file readily available on the NAS just like that. You still need a few manual commands with your existing (but potentially broken) Amanda before you have such an image file to flash to an SD card. As far as I understand, without this "preparation" you'll not be able to recreate your SD card from the files which Amanda has written to the NAS.
This is confusing. With "SD mirroring", are you referring to the openhabian menu option 53, or to the option in Amanda itself to create a raw backup?
FYI, I've also played around with that option 53 and got it working. It's far easier and more user-friendly than setting up Amanda. I deliberately did not mention this option in my opening post, to avoid the topic becoming an evaluation of the pros and cons of Amanda vs SD mirroring. For several reasons I also wanted to fully understand and test Amanda on its own. The main reason is that if we believe the rpi is vulnerable, and the internal SD card is vulnerable, why would an external SD card in a USB card reader attached to that same rpi not be vulnerable? But again, that's on the side.
For a proper understanding: when you say "storage area", you mean the folder where Amanda puts the backup data, right? So the NAS in my case.
I checked that file (see my post above), but what exactly should I do with this information? There are a bunch of paths in there. Should I have changed something here, on the "old" rpi, so that additional files would have been stored on the NAS?
All I have on my NAS is:
a folder "Amanda-backup" which has a …tar.gz file
a folder "slots" with the 15 slot folders and 1 file "state". Each slot folder has 5 files, always with the same extensions.
Is it one of those files which you refer to as "the Amanda database"? Which file should I copy to where?
Vulnerable to what exactly? You have to know what you are protecting against to know what to do to mitigate it, and assess whether what you are doing is actually mitigating the problem adequately.
Are you worried about the SD card wearing out? The Zram config already does a good job of mitigating that. But even if it didn't, you are not writing to that second card that much, so it's not going to wear out.
Are you worried about corruption caused by a power outage? Again, the Zram config already does a pretty good job of preventing that. But even if it wasn't, it's only a problem for the second card if you happen to be taking an image at the moment power is lost.
Are you worried about a cinderblock falling and crushing your RPi? Then once the mirror image is taken, remove the card and put it somewhere safe.
This is important because you have to determine what you are trying to mitigate with the Amanda backups too.
SD card mirroring and Amanda are not mutually exclusive. They each help mitigate different vulnerabilities.
Yes. As Rich replied, Amanda and SD mirroring aren't mutually exclusive; the official recommendation is to use both. It's even in the docs.
I mean what you entered as "storage area" when you were asked at Amanda installation time; that's a directory, in your case on your NAS.
Change to the root dir and extract the latest of the .tar archives from the amanda_backups directory.
WARNING: this is Linux-level and not a tested procedure. I assume you understand the implications, so be careful: check the contents before it overwrites something important, or better, extract in some safe location first.
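To act on that warning, one way (an untested sketch; the paths in the comments are the ones from this thread) is to unpack the archive into a throwaway staging directory first, and only copy things over once you have reviewed them:

```shell
#!/bin/bash
# safe_extract TARBALL: unpack an Amanda database archive into a fresh
# staging directory and print that directory, so its contents can be
# inspected before anything is copied over a live system.
safe_extract() {
  local tarball=$1 stage
  stage=$(mktemp -d) || return 1
  tar -xzf "$tarball" -C "$stage" || return 1
  echo "$stage"
}

# Example usage on this thread's paths (adjust to your NAS mount):
#   stage=$(safe_extract /mnt/mounttonas/amanda-backups/amanda_data_20220214020143.tar.gz)
#   ls -R "$stage"            # review first ...
#   sudo cp -a "$stage"/. /   # ... then copy into place (equivalent to
#                             # extracting in the root dir)
```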
From the hints above and google, I managed to restore an image from my test openhabian instance.
In case it could ever help anyone, here is briefly what I did. At your own risk, though.
After your "old" openhabian has been completely lost (let's say you had a flood in the location where your rpi, but not your NAS, was stored) and you are only left with the Amanda files on your NAS:
Take a copy of your "old" Amanda files and save them somewhere safe (just to be sure, in case you would accidentally overwrite them from your new openhabian install).
Set up a brand new openhabian Raspberry. Install Amanda. During the install, you have to point Amanda to a storage location other than your old one (or the installation will fail).
Fetch the old tar.gz file of your choice and put it in the root folder: sudo cp /mnt/mounttonas/amanda-backups/amanda_data_20220214020143.tar.gz /
Extract it: sudo tar -zxvf amanda_data_20220214020143.tar.gz
Now, do not run any backups, or I believe you risk overwriting your old files. It's also not what you want, since this new openhabian install does not have any config.
su backup
amadmin openhab-dir find openhabian /dev/mmcblk0
Restore the backup of your choice: amfetchdump -p openhab-dir openhabian /dev/mmcblk0 20220214154137 > /mnt/mounttonas/restoretest15022022
Add the .raw extension and use balena to write a new SD card. Now you will have your "old" openhabian back, which will continue to write backups to your original storage area.
I'm not claiming that this is the best or most optimal way, but it is at least a way for a non-expert.
I realize this topic is super old, but I was wondering if there are any updates here?
I just installed Amanda and started testing it. The amfetchdump process creates an image that doesn't fit on a same-size SD card. I had to jump through hoops to resize and truncate the image before it would fit.
I would script it all and always have a clean image stored on my NAS, if only the whole process didn't require gparted to resize the image file.
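I have not found a turnkey tool either, but the gparted steps can in principle be done non-interactively with e2fsck, resize2fs, parted and truncate. The outline below is untested and assumes the usual Raspberry Pi layout (partition 2 is the last partition and holds the ext4 root); only the size arithmetic is defined as real code, so run the commented device commands on a copy of your image:

```shell
#!/bin/bash
# new_image_bytes START_SECTOR FS_BLOCKS FS_BLOCK_SIZE
# -> the byte offset at which the image can be cut, i.e. the end of a
#    root partition that starts at START_SECTOR (512-byte sectors) and
#    whose filesystem was shrunk to FS_BLOCKS blocks of FS_BLOCK_SIZE.
new_image_bytes() {
  echo $(( $1 * 512 + $2 * $3 ))
}

# Untested outline of the full shrink (partition number and field
# positions are assumptions; check them against "fdisk -l your.img"):
#
#   LOOP=$(sudo losetup -fP --show your.img)   # map image + partitions
#   sudo e2fsck -fy "${LOOP}p2"                # check before resizing
#   sudo resize2fs -M "${LOOP}p2"              # shrink fs to minimum
#   BLOCKS=$(sudo tune2fs -l "${LOOP}p2" | awk '/^Block count:/ {print $3}')
#   BSIZE=$(sudo tune2fs -l "${LOOP}p2" | awk '/^Block size:/ {print $3}')
#   START=$(sudo fdisk -l "$LOOP" | awk '/p2/ {print $2}')
#   sudo losetup -d "$LOOP"
#   SIZE=$(new_image_bytes "$START" "$BLOCKS" "$BSIZE")
#   sudo parted -s your.img resizepart 2 "${SIZE}B"  # shrink partition 2
#   sudo truncate -s "$SIZE" your.img                # cut the image there
```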
Has anyone found a solution where the outcome is an image file that can be put on a new card if the current system running Amanda fails?
That's not the right approach.
You should be using SD mirroring so you have a secondary system with an up-to-date Amanda database.
Amanda also stores a copy of its DB via cron, which you can copy offsite and copy back when needed.