Amanda howto for openhabian and NAS

backup
openhabian
amanda

(Stefan Haupt) #101

Thanks for the confirmation. Wow, didn’t expect this! :wink: It seems the eMMC performs much better than an SD card.


(Frantisek) #102

Hi, I am using a 16 GB micro SD card, of which only 10 GB is partitioned, with about 3 GB used. The raw backup of the whole SD card takes about 1 hour and 20 minutes (RPi3, saved to a local external HDD), and the backup files altogether take about 10 GB compressed.

@mstormi Markus, there has been no reaction to my suggestion for the recovery part of the readme, which is, in my opinion, very important: when your system is down and you want to recover ASAP, the readme would be the first place to check…


(Joerg) #103

I am using a 16 GB SD card as well, but what is your configuration that the backup runs that fast? Could you post it?
I don‘t think this is about partitioning. My used space is less than 3 GB, and still the raw backup took, as posted, about a day.
I was wondering what I can do to increase the speed. Could it be a problem that I have the Pi 3B+ model?
As soon as I have a bit more time I can post my log from that cancelled backup here.


(Markus Storm) #104

I added it to the docs, see PR quoted above.


(Frantisek) #105

As I said… RPi3, saving to a local HDD connected via USB directly to the RPi, and the Amanda config file is mostly unchanged from the standard installation…

org "openHABian openhab-dir"				# Organization name for reports
mailto "myEmailAddress@gmail.com"			# Email address to receive reports
netusage 10000 Kbps					# Bandwidth limit, 10M
dumpcycle 2 weeks					# Backup cycle is 14 days
runspercycle 7						# Run 7 times every 14 days
tapecycle 15 tapes					# Dump to this number of different tapes during the cycle
runtapes 10						# number of virtual containers to use at most per backup run
tpchanger "chg-disk:/mnt/exthdd/slots"    # The tape-changer glue script
taper-parallel-write 2
autolabel "openHABian-openhab-dir-%%%" empty
changerfile "/etc/amanda/openhab-dir/storagestate"			# The tape-changer or SD- or disk slot or S3 state file
tapelist "/etc/amanda/openhab-dir/tapelist"				# The tapelist file
tapetype DIRECTORY
infofile "/var/lib/amanda/openhab-dir/curinfo"		# Database directory
logdir "/var/log/amanda/openhab-dir"			# Log directory
indexdir "/var/lib/amanda/openhab-dir/index"		# Index directory
define tapetype SD {
    comment "SD card size"
    length 3333 mbytes					# size of every SD-card virtual tape
}
define tapetype DIRECTORY {				# Define our tape behaviour
	length 3333 mbytes				# size of every virtual container (= max. usage per directory)
}
define tapetype AWS {
    comment "S3 Bucket"
    length 3333 mbytes					# actual Bucket size 5GB (Amazon default for free S3)
}

amrecover_changer "changer"				# Changer for amrecover

# don't use any holding disk for the time being
#holdingdisk hd {
#    directory "/holdingdisk/openhab-dir"
#    use 1000 Mb
#}

define dumptype global {				# The global dump definition
	maxdumps 2					# maximum number of backups run in parallel
	holdingdisk no					# Dump to temp disk (holdingdisk) before backup to tape
	index yes					# Generate index. For restoration usage
	strategy standard				# added this manually: I once forced Amanda to do a full dump and since then it makes only full dumps :-) adding this strategy didn't solve that, however
							# do automatic
}
define dumptype root-tar {				# How to dump root's directory
	global						# Include global (as above)
	program "GNUTAR"				# Program name for compress
	estimate server					# Estimate the backup size before dump
	comment "root partitions dumped with tar"
	compress none					# No compression
	index						# Index this dump
	priority low					# Priority level
}
define dumptype user-tar {				# How to dump user's directory
	root-tar					# Include root-tar (as above)
	comment "user partitions dumped with tar"
	priority medium					# Priority level
}
define dumptype comp-user-tar {				# How to dump & compress user's directory
	user-tar					# Include user-tar (as above)
	compress client fast				# Compress in client side with less CPU (fast)
}
define application-tool app_amraw {			# how to dump the SD card's raw device /dev/mmcblk0
        plugin "amraw"					# uses 'dd'
}
define dumptype amraw {
        global
        program "APPLICATION"
        application "app_amraw"
}
# vim: filetype=conf


(Stefan Haupt) #106

Just a guess out of the blue, as I am still very surprised that my integrated eMMC storage performs much better than your SD card: what speed class is your SD card? Class 10? Maybe it makes sense to switch to a faster one.


(Stefan Haupt) #107

@mstormi

I noticed (although a bit late) that the suggested cron job doesn’t run successfully because of the embedded date command. Here is my syslog; you can see the command is cut off.

Apr 15 02:00:01 localhost CRON[21712]: (root) CMD ((cd /; tar czf /mnt/schoofiserver2/amanda_data_$(date +)

It looks like I was able to solve this by changing the brackets to backquotes (or whatever that character is called :wink:) and escaping the percent character. Here is my own cronjob example:

(I used a screenshot as the forum editor malforms the command)
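For readers who cannot see the screenshot: a sketch of such a crontab entry might look like the following (the mount path `/mnt/myserver` is a placeholder, not the actual path used above). In a crontab, an unescaped `%` ends the command and starts stdin data, so every `%` in the date format string must be escaped with a backslash:

```shell
# Hypothetical /etc/crontab entry; /mnt/myserver is a placeholder path.
# Each % in the date format is escaped so cron does not truncate the command.
0 2 * * * root (cd /; tar czf /mnt/myserver/amanda_data_$(date +\%Y\%m\%d\%H\%M\%S).tar.gz /etc/amanda /var/lib/amanda) >/dev/null 2>&1
```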


(Stefan Haupt) #108

Now that this is fixed, I have a new issue :wink: Amanda was working fine for me (after struggling in the beginning), but now it seems my “tapes” are full. How do I virtually add a new tape or overwrite the oldest tape?

amcheck
$ amcheck openhab-dir
Amanda Tape Server Host Check
-----------------------------
slot 15: volume ‘openHABian-openhab-dir-014’ is still active and cannot be overwritten
slot 1: volume ‘openHABian-openhab-dir-001’ is still active and cannot be overwritten
slot 2: volume ‘openHABian-openhab-dir-001’ is still active and cannot be overwritten
slot 3: volume ‘openHABian-openhab-dir-002’ is still active and cannot be overwritten
slot 4: volume ‘openHABian-openhab-dir-003’ is still active and cannot be overwritten
slot 5: volume ‘openHABian-openhab-dir-004’ is still active and cannot be overwritten
slot 6: volume ‘openHABian-openhab-dir-005’ is still active and cannot be overwritten
slot 7: volume ‘openHABian-openhab-dir-006’ is still active and cannot be overwritten
slot 8: volume ‘openHABian-openhab-dir-007’ is still active and cannot be overwritten
slot 9: volume ‘openHABian-openhab-dir-008’ is still active and cannot be overwritten
slot 10: volume ‘openHABian-openhab-dir-009’ is still active and cannot be overwritten
slot 11: volume ‘openHABian-openhab-dir-010’ is still active and cannot be overwritten
slot 12: volume ‘openHABian-openhab-dir-011’ is still active and cannot be overwritten
slot 13: volume ‘openHABian-openhab-dir-012’ is still active and cannot be overwritten
slot 14: volume ‘openHABian-openhab-dir-013’ is still active and cannot be overwritten
volume ‘’
Taper scan algorithm did not find an acceptable volume.
(expecting a new volume)
ERROR: No acceptable volumes found
Server check took 1.356 seconds

amreport

*** A TAPE ERROR OCCURRED: [No acceptable volumes found].
No dumps are left in the holding disk.

The next 10 tapes Amanda expects to use are: 1 new tape, openHABian-openhab-dir-001, openHABian-openhab-dir-002, openHABian-openhab-dir-003, openHABian-openhab-dir-004, openHABian-openhab-dir-005, openHABian-openhab-dir-006, openHABian-openhab-dir-007, openHABian-openhab-dir-008, openHABian-openhab-dir-009.
FAILURE DUMP SUMMARY:
  smarthome /dev/mmcblk0 lev 0  FAILED [can't do degraded dump without holding disk]
  smarthome /etc/openhab2 lev 0  FAILED [can't do degraded dump without holding disk]
  smarthome /var/lib/openhab2 lev 0  FAILED [can't do degraded dump without holding disk]

(Markus Storm) #109

You’re right on escaping the percent sign but wrong on the backquote (that’s what it’s called :wink:); that isn’t guaranteed to work in cron (I wonder that it does for you). I’ve changed the PR to now match what I’ve been successfully running locally.
@ThomDietrich maybe you could give that PR a review and a go.


(Markus Storm) #110

Hmm, there are various possible reasons, but the most likely one is that you have defined too few (virtual) tapes (the tapecycle setting in amanda.conf, asked for at setup time) to cover the timeframe you want to be able to restore back to (the dumpcycle setting, 2 weeks by default).
Note that if your tape length (the tapetype DIRECTORY definition) is too small for a single dump to fit in, which will easily be the case for a level 0 dump, Amanda uses as many tapes as needed, so at some point the oldest one is still “not old enough” to reuse.
Try to manually increase the tape length (tapetype) to be larger than a single level 0 dump.
Try to reduce dumpcycle, or to increase tapecycle, or both. Note that you need to manually create directories and ‘amlabel’ them if you increase tapecycle. See the manpage for the amlabel command and /opt/openhabian/functions/backup.sh for how to use it.
As that only applies to future dumps, if Amanda still tells you all tapes are active, you could try freeing one of your tapes using the amrmtape command.
In general, have a look at this Wiki article.
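To make the “create directories and amlabel them” step concrete, here is a hedged sketch. The slot numbers, the tape directory, and the labels are assumptions derived from the config posted above (tpchanger "chg-disk:/mnt/exthdd/slots", autolabel "openHABian-openhab-dir-%%%"); check /opt/openhabian/functions/backup.sh for the exact commands openHABian itself runs.

```shell
# Sketch: grow the virtual tape set after raising tapecycle from 15 to 17.
# TAPEDIR must match the tpchanger path; labels must match the autolabel pattern.
TAPEDIR=/mnt/exthdd/slots
CONFIG=openhab-dir

for n in 16 17; do
    mkdir -p "$TAPEDIR/slot$n"                           # new slot directory
    # register the new volume (run as the Amanda backup user)
    amlabel "$CONFIG" "openHABian-openhab-dir-0$n" slot "$n"
done

# Alternatively, free an existing tape by removing its dumps from the catalog:
# amrmtape openhab-dir openHABian-openhab-dir-001
```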


(Stefan Haupt) #111

Thanks a lot, but this is a bit of an information overload for me :wink: Can you help me analyze it?

This is what’s in my config, and to me it looks good:

dumpcycle 2 weeks
tapecycle 15 tapes
length 3333 mbytes

I believe in the very beginning I tried to run amdump a few times during the day. Could it be that more than one slot was used for this, and now I’ve reached the end of the slots but 2 weeks haven’t passed yet?

EDIT:
This is what I’ve found in ‘tapelist’. Take a look at the timestamps for openHABian-openhab-dir-001 and openHABian-openhab-dir-002:

20180414010001 openHABian-openhab-dir-014 reuse BLOCKSIZE:32
20180413010001 openHABian-openhab-dir-013 reuse BLOCKSIZE:32
20180412010001 openHABian-openhab-dir-012 reuse BLOCKSIZE:32
20180411010001 openHABian-openhab-dir-011 reuse BLOCKSIZE:32
20180410010002 openHABian-openhab-dir-010 reuse BLOCKSIZE:32
20180409010002 openHABian-openhab-dir-009 reuse BLOCKSIZE:32
20180408010002 openHABian-openhab-dir-008 reuse BLOCKSIZE:32
20180407010001 openHABian-openhab-dir-007 reuse BLOCKSIZE:32
20180406010001 openHABian-openhab-dir-006 reuse BLOCKSIZE:32
20180405103557 openHABian-openhab-dir-005 reuse BLOCKSIZE:32
20180405010001 openHABian-openhab-dir-004 reuse BLOCKSIZE:32
20180404010001 openHABian-openhab-dir-003 reuse BLOCKSIZE:32
20180403153638 openHABian-openhab-dir-002 reuse BLOCKSIZE:32
20180403153638 openHABian-openhab-dir-001 reuse BLOCKSIZE:32
0 Tape-16 reuse BLOCKSIZE:32

I’ve now added another free slot. Let’s see what happens next

Stefan


(Markus Storm) #112

Yes.

You could use amrmtape to remove some of those old dumps and free those tapes.

If by that you mean you just added a line to tapelist, that’s not sufficient and thus not a good idea.
You need to create the directories and amlabel them; as I said, see what the openHABian install routine does.


(Stefan Haupt) #113

The latter is exactly what I did :wink: I will take a look at amrmtape. Thanks for your fantastic support.


(Stefan Haupt) #114

@mstormi

Me again :slight_smile: It would be great if you could point me in the right direction.

Now that my backup has been running fine for weeks, I need to take a look at the restore scenario. In my case it’s a raw disk restore (of my eMMC).

I wonder if it’s possible to restore over the same running Linux installation? I mean, I boot the device, which contains an “empty” Linux, I install Amanda, and then I run the restore command. Will this work?

Or do I need to boot the OS used to trigger the restore in a different way, like from the network, kept in RAM? No clue if this is even possible.

Stefan


(Markus Storm) #115

Well, you can do that as long as you have access to the storage files and the Amanda index database on that ‘fresh’ box (i.e. mounted from the NAS or whatever). It would even work without the index, but that’s sort of a hack then; see the ‘emergency’ section in the Amanda README.
But why would you want to do that?
You cannot overwrite the partition you’re actively running Linux from; restores always need to go to ‘inactive’ partitions. Yes, you can boot your box from the network, but that’s not easy to accomplish, if possible at all (depends on HW and OS).
The best option is to get an external eMMC card writer (if that’s why you’re asking) and restore there; that can be done from your active OH box.
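As a rough illustration of that last option: since the SD card is backed up via the amraw application (plain dd, see the config above), a restore to an externally attached card could be sketched like this. The target device name /dev/sdX is a placeholder and must be verified first; amfetchdump -p streams exactly one dump to stdout, which for an amraw entry is the raw image.

```shell
# CAUTION: /dev/sdX is a placeholder for the externally attached card writer.
# Verify the device name with lsblk before writing anything to it!
amfetchdump -p openhab-dir smarthome /dev/mmcblk0 | \
    dd of=/dev/sdX bs=4M conv=fsync
```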


(Stefan Haupt) #116

thx for the reply.

I will test the following and keep the forum informed. My OrangePi can boot from SD card or the integrated eMMC. I will try to boot from a prepared SD card (with the Amanda files in place) and then restore to the eMMC. Just waiting for delivery of the ordered hardware :wink:


(Markus Storm) #117

Should work, but be aware it takes time and means taking down your OH box.
Also try whether restoring to SD (from your box running off the eMMC) works.


(Stefan Haupt) #118

I’d rather not touch my running system :wink: I’m waiting for the backup hardware so I can run some tests without touching production.


(Stefan Haupt) #119

Hi Markus,

I’m back. Meanwhile my recovery device (just a second OrangePi) has arrived and my first full restore scenario has been tested successfully.

Anyway, I think you need to add one more important thing to openHABian’s Amanda cronjob:

For the restore to succeed, I cloned the Amanda server/client files, which are backed up every night. Unfortunately amfetchdump complained about missing log files, so I changed my cronjob to this:

0 2 * * * root (cd /; tar czf /mnt/MYSERVER/amanda_data_$(date +%Y%m%d%H%M%S).tar.gz /var/log/amanda /etc/amanda /var/lib/amanda; find /mnt/MYSERVER -name amanda_data_* -mtime +10 -delete) >/dev/null 2>&1

Btw: I described what I did to restore the dump on my blog: http://blog.haupt.xyz/index.php/Disaster-Recovery
If you want to have a look, scroll down to “Recovery device / Perform restore”.

best regards

Stefan


(Markus Storm) #120

I guess amfetchdump still worked anyway, didn’t it? It’s just the logs; those one usually does not back up (you can, but I wouldn’t make it the default).