Amanda howto for openhabian and NAS

backup
openhabian
amanda

(Stefan Haupt) #101

Thanks for the confirmation. Wow, didn’t expect this! :wink: It seems the eMMC performs much better than an SD card.


(Frantisek) #102

Hi, I am using a 16 GB micro SD card; only 10 GB of it is partitioned, with about 3 GB of used space. The raw backup of the whole SD card takes about 1 hour and 20 minutes (RPi3, saved to a local external HDD), and the backup files altogether are about 10 GB compressed.

@mstormi Markus, no reaction to my suggestion for the recovery part of the readme? In my opinion it is very important: when your system is down and you want to recover ASAP, the readme is the first place you’d check…


(Joerg) #103

I am using a 16 GB SD card as well. But what is your configuration that makes the backup run that fast? Could you post it?
I don‘t think this is about partitioning. My used space is less than 3 GB, and the raw backup still took about 1 day, as posted.
I was wondering what I can do to increase speed. Could it be a problem that I have the Pi 3B+ model?
As soon as I have a bit more time I can post the log from that cancelled backup here.


(Markus Storm) #104

I added it to the docs, see PR quoted above.


(Frantisek) #105

As I said… RPi3, saving on a local HDD connected via USB directly to the RPi, and the Amanda config file is mostly unchanged from the standard installation…

org "openHABian openhab-dir"				# Organization name for reports
mailto "myEmailAddress@gmail.com"			# Email address to receive reports
netusage 10000 Kbps					# Bandwidth limit, 10M
dumpcycle 2 weeks					# Backup cycle is 14 days
runspercycle 7						# Run 7 times every 14 days
tapecycle 15 tapes					# Dump to this number of different tapes during the cycle
runtapes 10						# number of virtual containers to use at most per backup run
tpchanger "chg-disk:/mnt/exthdd/slots"    # The tape-changer glue script
taper-parallel-write 2
autolabel "openHABian-openhab-dir-%%%" empty
changerfile "/etc/amanda/openhab-dir/storagestate"			# The tape-changer or SD- or disk slot or S3 state file
tapelist "/etc/amanda/openhab-dir/tapelist"				# The tapelist file
tapetype DIRECTORY
infofile "/var/lib/amanda/openhab-dir/curinfo"		# Database directory
logdir "/var/log/amanda/openhab-dir"			# Log directory
indexdir "/var/lib/amanda/openhab-dir/index"		# Index directory
define tapetype SD {
    comment "SD card size"
    length 3333 mbytes					# max. usage per virtual SD tape
}
define tapetype DIRECTORY {				# Define our tape behaviour
	length 3333 mbytes				# size of every virtual container (= max. usage per directory)
}
define tapetype AWS {
    comment "S3 Bucket"
    length 3333 mbytes					# actual Bucket size 5GB (Amazon default for free S3)
}

amrecover_changer "changer"				# Changer for amrecover

# don't use any holding disk for the time being
#holdingdisk hd {
#    directory "/holdingdisk/openhab-dir"
#    use 1000 Mb
#}

define dumptype global {				# The global dump definition
	maxdumps 2					# maximum number of backups run in parallel
	holdingdisk no					# Dump to temp disk (holdingdisk) before backup to tape
	index yes					# Generate index. For restoration usage
	strategy standard				# added manually: after I once forced Amanda to do a full dump, it kept making only full dumps :-) adding this strategy didn't solve that, however
							# do automatic
}
define dumptype root-tar {				# How to dump root's directory
	global						# Include global (as above)
	program "GNUTAR"				# Program name for compress
	estimate server					# Estimate the backup size before dump
	comment "root partitions dumped with tar"
	compress none					# No compression
	index						# Index this dump
	priority low					# Priority level
}
define dumptype user-tar {				# How to dump user's directory
	root-tar					# Include root-tar (as above)
	comment "user partitions dumped with tar"
	priority medium					# Priority level
}
define dumptype comp-user-tar {				# How to dump & compress user's directory
	user-tar					# Include user-tar (as above)
	compress client fast				# Compress in client side with less CPU (fast)
}
define application-tool app_amraw {			# how to dump the SD card's raw device /dev/mmcblk0
        plugin "amraw"					# uses 'dd'
}
define dumptype amraw {
        global
        program "APPLICATION"
        application "app_amraw"
}
# vim: filetype=conf
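A side note on the `strategy standard` comment above: once a full dump has been forced with `amadmin force`, the force flag can stay set for that disklist entry, which would explain the repeated level 0 dumps better than any `strategy` setting. A sketch of clearing it (host and disk names here are examples, not taken from this config; the `command -v` guard just makes the snippet degrade gracefully where Amanda isn’t installed):

```shell
# Clear the "force full dump" flag for one disklist entry.
# 'openhab-dir' is the config name from the file above;
# 'smarthome' and '/etc/openhab2' are illustrative host/disk names.
if command -v amadmin >/dev/null 2>&1; then
    amadmin openhab-dir unforce smarthome /etc/openhab2
else
    echo "amadmin not installed; skipping"
fi
```

You can inspect the current per-disk state with `amadmin openhab-dir disklist`.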


(Stefan Haupt) #106

Just a shot in the dark, as I am still very surprised that my integrated eMMC storage performs much better than your SD card: what speed class is your SD card (e.g. Class 10)? Maybe it makes sense to switch to a faster one.
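To put a number on it, the card’s raw sequential read speed can be measured with a plain dd read (a sketch; `/dev/mmcblk0` is the usual SD device on a Raspberry Pi, and reading it is non-destructive):

```shell
# Read-only sequential speed check of a block device.
# DEV is overridable so the same snippet works on any device or file.
DEV="${DEV:-/dev/mmcblk0}"
# reads 64 MiB and prints dd's throughput summary line
dd if="$DEV" of=/dev/null bs=1M count=64 2>&1 | tail -n 1
```

A healthy Class 10 card should report well above 10 MB/s sequential read; much lower numbers would explain a very slow raw backup.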


(Stefan Haupt) #107

@mstormi

I noticed (although a bit late) that the suggested cron job doesn’t run successfully because of the embedded date command. Here is my syslog; you can see the command is cut off.

Apr 15 02:00:01 localhost CRON[21712]: (root) CMD ((cd /; tar czf /mnt/schoofiserver2/amanda_data_$(date +)

Currently it looks like I was able to solve this by changing the brackets to …whatever this new character is called :wink: and escaping the percent character. Here is my own cron job example:

(I used a screenshot as the forum editor malformes the command)
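For reference: cron treats an unescaped `%` in the command field as a newline, which is exactly why the command above is truncated at `$(date +`. Each `%` must be written as `\%`. A sketch of such a crontab entry (the schedule, target path and date format here are illustrative, not the one from the PR):

```
# m h dom mon dow  command        (every % escaped as \% for cron)
0 2 * * *  cd / && tar czf /mnt/nas/amanda_data_$(date +\%Y-\%m-\%d).tar.gz /etc/amanda
```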


(Stefan Haupt) #108

Now that this is fixed, I have a new issue :wink: Amanda was working fine for me (after some struggling in the beginning). Now it seems my “tapes” are full. But how do I virtually add a new tape or overwrite the oldest tape?

amcheck
$ amcheck openhab-dir
Amanda Tape Server Host Check
-----------------------------
slot 15: volume 'openHABian-openhab-dir-014' is still active and cannot be overwritten
slot 1: volume 'openHABian-openhab-dir-001' is still active and cannot be overwritten
slot 2: volume 'openHABian-openhab-dir-001' is still active and cannot be overwritten
slot 3: volume 'openHABian-openhab-dir-002' is still active and cannot be overwritten
slot 4: volume 'openHABian-openhab-dir-003' is still active and cannot be overwritten
slot 5: volume 'openHABian-openhab-dir-004' is still active and cannot be overwritten
slot 6: volume 'openHABian-openhab-dir-005' is still active and cannot be overwritten
slot 7: volume 'openHABian-openhab-dir-006' is still active and cannot be overwritten
slot 8: volume 'openHABian-openhab-dir-007' is still active and cannot be overwritten
slot 9: volume 'openHABian-openhab-dir-008' is still active and cannot be overwritten
slot 10: volume 'openHABian-openhab-dir-009' is still active and cannot be overwritten
slot 11: volume 'openHABian-openhab-dir-010' is still active and cannot be overwritten
slot 12: volume 'openHABian-openhab-dir-011' is still active and cannot be overwritten
slot 13: volume 'openHABian-openhab-dir-012' is still active and cannot be overwritten
slot 14: volume 'openHABian-openhab-dir-013' is still active and cannot be overwritten
volume ''
Taper scan algorithm did not find an acceptable volume.
(expecting a new volume)
ERROR: No acceptable volumes found
Server check took 1.356 seconds

amreport

*** A TAPE ERROR OCCURRED: [No acceptable volumes found].
No dumps are left in the holding disk.

The next 10 tapes Amanda expects to use are: 1 new tape, openHABian-openhab-dir-001, openHABian-openhab-dir-002, openHABian-openhab-dir-003, openHABian-openhab-dir-004, openHABian-openhab-dir-005, openHABian-openhab-dir-006, openHABian-openhab-dir-007, openHABian-openhab-dir-008, openHABian-openhab-dir-009.
FAILURE DUMP SUMMARY:
  smarthome /dev/mmcblk0 lev 0  FAILED [can't do degraded dump without holding disk]
  smarthome /etc/openhab2 lev 0  FAILED [can't do degraded dump without holding disk]
  smarthome /var/lib/openhab2 lev 0  FAILED [can't do degraded dump without holding disk]

(Markus Storm) #109

You’re right on escaping the percent sign, but wrong on the backquote (that’s what it’s called :wink: ): that isn’t guaranteed to work in cron (I wonder why it does for you). I’ve changed the PR to now match what I’ve been successfully running locally.
@ThomDietrich maybe you could give that PR a review and a go.


(Markus Storm) #110

Hmm, there are various possible reasons, but the most likely one is that you have defined too few (virtual) tapes (the tapecycle setting in amanda.conf, asked for at setup time) to cover the timeframe we want to be able to restore back to (the dumpcycle setting, 2 weeks by default).
Note that if your tape length (the tapetype DIRECTORY definition) is too small for a single dump to fit in (which will easily be the case for a level 0 dump), Amanda uses as many tapes as needed, so at some point the oldest one is still “not old enough” to be reused.
Try to manually increase the tape length (tapetype) to be larger than a single level 0 dump.
Try to reduce dumpcycle, or to increase tapecycle, or both. Note that you need to manually create the directories and ‘amlabel’ them if you increase tapecycle. See the manpage for the amlabel command and /opt/openhabian/functions/backup.sh for how to use it.
As that only applies to future dumps, if Amanda still tells you all tapes are active, you could try freeing one of your tapes using the amrmtape command.
In general, have a look at this Wiki article.
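The steps above (create slot directories, then label them) can be sketched like this, using the paths and the `autolabel "openHABian-openhab-dir-%%%"` pattern from the config posted earlier; adjust SLOTBASE, the config name and the slot numbers to your setup, and run it as the Amanda backup user:

```shell
#!/bin/sh
# Sketch: add two more virtual tapes (slots 16 and 17) to the
# chg-disk changer. The command -v guard lets the snippet run
# as a dry run on machines without Amanda installed.
SLOTBASE="${SLOTBASE:-/mnt/exthdd/slots}"
CONFIG="openhab-dir"

for n in 16 17; do
    # the %%% autolabel pattern expands to a zero-padded 3-digit number
    label=$(printf 'openHABian-openhab-dir-%03d' "$n")
    mkdir -p "${SLOTBASE}/slot${n}"
    if command -v amlabel >/dev/null 2>&1; then
        # amlabel writes the label and registers the volume in the tapelist
        amlabel "$CONFIG" "$label" slot "$n"
    else
        echo "would label ${SLOTBASE}/slot${n} as ${label}"
    fi
done
```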


(Stefan Haupt) #111

Thanks a lot, but this is a bit of information overload for me :wink: Can you help me analyze it?

That’s what’s in my config, and to me it looks good:

dumpcycle 2 weeks
tapecycle 15 tapes
length 3333 mbytes

I believe at the very beginning I tried to run amdump a few times during the day. Could it be that more than one slot was used for this? And now I’ve reached the end of the slots, but 2 weeks haven’t passed yet?

EDIT:
This is what I’ve found in ‘tapelist’. Take a look at the timestamps for openHABian-openhab-dir-001 and openHABian-openhab-dir-002:

20180414010001 openHABian-openhab-dir-014 reuse BLOCKSIZE:32
20180413010001 openHABian-openhab-dir-013 reuse BLOCKSIZE:32
20180412010001 openHABian-openhab-dir-012 reuse BLOCKSIZE:32
20180411010001 openHABian-openhab-dir-011 reuse BLOCKSIZE:32
20180410010002 openHABian-openhab-dir-010 reuse BLOCKSIZE:32
20180409010002 openHABian-openhab-dir-009 reuse BLOCKSIZE:32
20180408010002 openHABian-openhab-dir-008 reuse BLOCKSIZE:32
20180407010001 openHABian-openhab-dir-007 reuse BLOCKSIZE:32
20180406010001 openHABian-openhab-dir-006 reuse BLOCKSIZE:32
20180405103557 openHABian-openhab-dir-005 reuse BLOCKSIZE:32
20180405010001 openHABian-openhab-dir-004 reuse BLOCKSIZE:32
20180404010001 openHABian-openhab-dir-003 reuse BLOCKSIZE:32
20180403153638 openHABian-openhab-dir-002 reuse BLOCKSIZE:32
20180403153638 openHABian-openhab-dir-001 reuse BLOCKSIZE:32
0 Tape-16 reuse BLOCKSIZE:32

I’ve now added another free slot. Let’s see what happens next

Stefan


(Markus Storm) #112

yes

You could use amrmtape to remove some of those old dumps/free those tapes

If by that you mean you just added a line to tapelist, that’s not sufficient and thus not a good idea.
You need to create the directories and amlabel them; as I said, see what the openHABian install routine does.
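Freeing a tape with amrmtape is a one-liner (a sketch; `openhab-dir` is the config name from this thread, the label is just an example, and the guard keeps the snippet harmless where Amanda isn’t installed):

```shell
# Sketch: forget the dumps on one virtual tape so Amanda may reuse it.
if command -v amrmtape >/dev/null 2>&1; then
    # removes the tape's dumps from Amanda's database and tapelist entry
    amrmtape openhab-dir openHABian-openhab-dir-001
else
    echo "amrmtape not installed; skipping"
fi
```

Note that this only removes Amanda’s record of the dumps; the data in the slot directory is simply overwritten on a later run.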


(Stefan Haupt) #113

The latter is exactly what I did :wink: I will take a look at amrmtape. Thanks for your fantastic support.