Amanda howto for openhabian and NAS

Quite likely so. We had a couple of people who CIFS-mounted their backup storage from a Windows server and failed because of this, too. That’s why it is explicitly mentioned in the README as a no-go.
In short, using Windows tech is a bad idea here. Change the filesystem to ext4 or any other supported one. Copying should be done across the network.

I see. I read about the CIFS issue, but as NTFS is supported in Raspbian I thought it would be ideal. Also strange is that the backup of InfluxDB seems to work like a charm…

Anyway, I will try again and use ext4 as you suggested. Thanks for now!

Oh, @mstormi, when I reformat the HDD, will I have to manually create the slots directory including the slot1–slot15 subdirectories? Anything else to think of? Should the backup user have ownership rights?

Thanks!

Manually re-create the dirs just as they are now, or re-run the Amanda installation from the openHABian menu.
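For the manual route, a minimal sketch of recreating the vtape slot structure. The path below is a demo location so the snippet runs anywhere; on the system in this thread it would be /mnt/HDD/slots, and the dirs should end up owned by the backup user (chown shown as a comment since it needs root):

```shell
# Sketch: recreate the Amanda vtape slot dirs by hand.
# Replace SLOTS with your real mount point, e.g. /mnt/HDD/slots.
SLOTS=/tmp/amanda-slots-demo
mkdir -p "$SLOTS"
for i in $(seq 1 15); do
  mkdir -p "$SLOTS/slot$i"
done
# sudo chown -R backup:backup "$SLOTS"
ls "$SLOTS" | wc -l   # should count 15 slot dirs
```

Re-running the installer from openhabian-config is still the safer option, since it also recreates the changer’s work dirs and ownership for you.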

OK, I formatted the external HDD as ext4, mounted it again, created the “slot” folder structure, and tested manually with an influx-dir backup (worked OK, same as with NTFS). And overnight the scheduled Amanda backup (partially?) failed again. The good thing is that the RPi and the HDD didn’t freeze this time :smiley:

See amreport and folder’s content:

Hostname: openHABianPi
Org     : openHABian openhab-dir
Config  : openhab-dir
Date    : March 25, 2018

These dumps were to tape openHABian-openhab-dir-003.
Not using all tapes because taper found no tape.
No dumps are left in the holding disk.

The next 10 tapes Amanda expects to use are: 10 new tapes.


FAILURE DUMP SUMMARY:
  openHABianPi /dev/mmcblk0 lev 0  FAILED [data write: Broken pipe]
  openHABianPi /dev/mmcblk0 lev 0  partial taper: Error writing device fd 6: Read-only file system
  openHABianPi /dev/mmcblk0 lev 0  FAILED [data write: Broken pipe]
  openHABianPi /dev/mmcblk0 lev 0  partial taper: taper found no tape



STATISTICS:
                          Total       Full      Incr.   Level:#
                        --------   --------   --------  --------
Estimate Time (hrs:min)     0:13
Run Time (hrs:min)          0:15
Dump Time (hrs:min)         0:00       0:00       0:00
Output Size (meg)            4.0        0.0        4.0
Original Size (meg)          4.0        0.0        4.0
Avg Compressed Size (%)    100.0        --       100.0
DLEs Dumped                    2          0          2  1:2
Avg Dump Rate (k/s)        991.5        --       991.5

Tape Time (hrs:min)         0:02       0:02       0:00
Tape Size (meg)              4.0        0.0        4.0
Tape Used (%)                0.1        0.0        0.1
DLEs Taped                     4          2          2  1:2
Parts Taped                    4          2          2  1:2
Avg Tp Write Rate (k/s)     39.7        0.0    20300.0

USAGE BY TAPE:
  Label                        Time         Size      %  DLEs Parts
  openHABian-openhab-dir-003   0:02      185628k    5.4     4     4



FAILED DUMP DETAILS:
  /-- openHABianPi /dev/mmcblk0 lev 0 FAILED [data write: Broken pipe]
  sendbackup: info BACKUP=APPLICATION
  sendbackup: info APPLICATION=amraw
  sendbackup: info RECOVER_CMD=/bin/gzip -dc |/usr/lib/amanda/application/amraw restore [./file-to-restore]+
  sendbackup: info COMPRESS_SUFFIX=.gz
  sendbackup: info end
  \--------
  /-- openHABianPi /dev/mmcblk0 lev 0 FAILED [data write: Broken pipe]
  sendbackup: info BACKUP=APPLICATION
  sendbackup: info APPLICATION=amraw
  sendbackup: info RECOVER_CMD=/bin/gzip -dc |/usr/lib/amanda/application/amraw restore [./file-to-restore]+
  sendbackup: info COMPRESS_SUFFIX=.gz
  sendbackup: info end
  \--------



NOTES:
  planner: Last full dump of openHABianPi:/etc/openhab2 on tape openHABian-openhab-dir-001 overwritten in 1 run.
  planner: Last full dump of openHABianPi:/var/lib/openhab2 on tape openHABian-openhab-dir-001 overwritten in 1 run.
  planner: WARNING: no history available for openHABianPi:/etc/openhab2; guessing that size will be 1465 KB
  taper: Slot 3 without label can be labeled
  taper: Slot 4 without label can be labeled
  taper: tape openHABian-openhab-dir-003 kb 4060 fm 3 [OK]
  taper: while labeling new volume: Error checking directory /mnt/HDD/slots/drive3/data/: No such file or directory


DUMP SUMMARY:
                                                         DUMPER STATS   TAPER STATS
HOSTNAME     DISK              L ORIG-kB  OUT-kB  COMP%  MMM:SS   KB/s MMM:SS    KB/s
-------------------------------- ---------------------- -------------- --------------
openHABianPi /dev/mmcblk0      0                     --    PARTIAL       1:41     0.0 PARTIAL
openHABianPi /etc/openhab2     1    2690    2690     --    0:03  976.4   0:00 26900.0
openHABianPi /var/lib/openhab2 1    1370    1370     --    0:01 1021.3   0:00 13700.0

(brought to you by Amanda version 3.3.9)

openhabian@openHABianPi:/mnt/HDD/slots $ ls
total 76K
lrwxrwxrwx 1 backup     backup        5 Mar 25 01:13 data -> slot4
drwxrwxrwx 3 openhabian openhabian 4.0K Mar 22 21:39 drive0
drwxrwxrwx 3 openhabian openhabian 4.0K Mar 22 21:39 drive1
drwx------ 2 backup     backup     4.0K Mar 25 01:00 drive2
drwx------ 2 backup     backup     4.0K Mar 25 01:13 drive3
drwxrwxrwx 2 openhabian openhabian 4.0K Mar 22 21:44 slot1
drwxrwxrwx 2 openhabian openhabian 4.0K Mar 22 21:33 slot10
drwxrwxrwx 2 openhabian openhabian 4.0K Mar 22 21:33 slot11
drwxrwxrwx 2 openhabian openhabian 4.0K Mar 22 21:33 slot12
drwxrwxrwx 2 openhabian openhabian 4.0K Mar 22 21:33 slot13
drwxrwxrwx 2 openhabian openhabian 4.0K Mar 22 21:33 slot14
drwxrwxrwx 2 openhabian openhabian 4.0K Mar 22 21:33 slot15
drwxrwxrwx 2 openhabian openhabian 4.0K Mar 24 18:03 slot2
drwxrwxrwx 2 openhabian openhabian 4.0K Mar 25 01:13 slot3
drwxrwxrwx 2 openhabian openhabian 4.0K Mar 22 21:33 slot4
drwxrwxrwx 2 openhabian openhabian 4.0K Mar 22 21:33 slot5
drwxrwxrwx 2 openhabian openhabian 4.0K Mar 22 21:33 slot6
drwxrwxrwx 2 openhabian openhabian 4.0K Mar 22 21:33 slot7
drwxrwxrwx 2 openhabian openhabian 4.0K Mar 22 21:33 slot8
drwxrwxrwx 2 openhabian openhabian 4.0K Mar 22 21:33 slot9

openhabian@openHABianPi:/mnt/HDD/slots/slot3 $ ls
total 114M
-rw------- 1 backup backup  32K Mar 25 01:13 00000.openHABian-openhab-dir-003
-rw------- 1 backup backup 1.4M Mar 25 01:13 00001.openHABianPi._var_lib_openhab2.1
-rw------- 1 backup backup 2.7M Mar 25 01:13 00002.openHABianPi._etc_openhab2.1
-rw------- 1 backup backup 110M Mar 25 01:14 00003.openHABianPi._dev_mmcblk0.0

@mstormi Do you have any idea what’s wrong here? Thanks!

The directory you specified to back up to does not exist or is not writable; near the beginning the report says it’s a read-only filesystem.
You should also check the various logs in /var/log/amanda/*.
You can run amcheck to verify the setup without starting a real dump.
Please don’t expect me to debug your (nonstandard) system.
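As a quick manual approximation of part of what amcheck verifies for a vtape setup, you can test whether the slot directory is writable. A minimal sketch assuming the /mnt/HDD/slots path from this thread; the fallback to a temp dir is only so the snippet runs on any machine, and the real check should be run as the backup user (e.g. via sudo -u backup):

```shell
# Is the vtape slot directory writable? /mnt/HDD/slots is the path
# used in this thread; fall back to a temp dir if it does not exist
# (demo only). Result is also written to a file for inspection.
SLOTS=/mnt/HDD/slots
[ -d "$SLOTS" ] || SLOTS=$(mktemp -d)
if touch "$SLOTS/.writetest" 2>/dev/null; then
  rm -f "$SLOTS/.writetest"
  echo "writable: $SLOTS" | tee /tmp/slots-check-result
else
  echo "NOT writable: $SLOTS" | tee /tmp/slots-check-result
fi
```

Note this only checks plain POSIX permissions; amcheck itself additionally validates the Amanda config, labels, and holding disk.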

Well @mstormi, I do not know what a standard system is according to you… I simply used Amanda exactly as recommended (RPi, openHABian, standard Amanda installation from openhabian-config, local ext4 storage mounted…).

Of course I had read that Amanda can have write issues, and to avoid them I created the folders with 777 rights (as you can see from the ls output above anyway). However, WHY could it then create the backup files?

openhabian@openHABianPi:/mnt/HDD/slots/slot3 $ ls
total 114M
-rw------- 1 backup backup  32K Mar 25 01:13 00000.openHABian-openhab-dir-003
-rw------- 1 backup backup 1.4M Mar 25 01:13 00001.openHABianPi._var_lib_openhab2.1
-rw------- 1 backup backup 2.7M Mar 25 01:13 00002.openHABianPi._etc_openhab2.1
-rw------- 1 backup backup 110M Mar 25 01:14 00003.openHABianPi._dev_mmcblk0.0

I do not expect you to do my work for me; I just thought that maybe the standard Amanda setup in openHABian might be wrong… Of course there may be some error on my side, but I truly do not see it.

Thanks for what you did so far, and please leave any further posts of mine regarding Amanda unanswered, as I clearly understood your message.

A system that was not modified in any aspect that could affect the software’s operation.
One obviously cannot create a comprehensive list of all of these aspects, but your use of NTFS was an example, and the message below wouldn’t appear on a standard system either. So something else is modified on your system: access rights to the raw device, ACLs, a write-protected SD card, or whatever, I cannot know.

This usually indicates that the target is unwritable, but it can also happen if the source is. Most UNIX tools that dump raw devices leave a marker on the device after a run, to allow ‘diff’ computation for level 0/1/… dumps.
If the user the Amanda process (here: amraw or taper) runs as does not have WRITE access to the raw device to dump (here: /dev/mmcblk0), it will fail (or at least produce a warning that Amanda might interpret as a FAIL).
On a standard system (unmodified openHABian), Amanda/amraw CAN write there, so while I don’t know why it cannot write on your system, I do know that your system is nonstandard.

Well, thanks Markus. The very strange thing is that last night (and I didn’t change anything) the full raw backup was successful.

Hostname: openHABianPi
Org     : openHABian openhab-dir
Config  : openhab-dir
Date    : March 26, 2018

These dumps were to tapes openHABian-openhab-dir-004, openHABian-openhab-dir-005, openHABian-openhab-dir-006, openHABian-openhab-dir-007.
The next 10 tapes Amanda expects to use are: 8 new tapes, openHABian-openhab-dir-001, openHABian-openhab-dir-002.


STATISTICS:
                          Total       Full      Incr.   Level:#
                        --------   --------   --------  --------
Estimate Time (hrs:min)     0:14
Run Time (hrs:min)          1:19
Dump Time (hrs:min)         1:05       1:05       0:00
Output Size (meg)        10465.3    10461.3        4.0
Original Size (meg)      14808.0    14804.0        4.0
Avg Compressed Size (%)     70.7       70.7      100.0
DLEs Dumped                    3          1          2  1:2
Avg Dump Rate (k/s)       2730.9     2732.9      924.2

Tape Time (hrs:min)         1:05       1:05       0:00
Tape Size (meg)          10465.3    10461.3        4.0
Tape Used (%)              314.0      313.9        0.1
DLEs Taped                     3          1          2  1:2
Parts Taped                    6          4          2  1:2
Avg Tp Write Rate (k/s)   2732.4     2733.5     1353.3

USAGE BY TAPE:
  Label                        Time         Size      %  DLEs Parts
  openHABian-openhab-dir-004   0:25     3412764k  100.0     3     3
  openHABian-openhab-dir-005   0:19     3412832k  100.0     0     1
  openHABian-openhab-dir-006   0:18     3412832k  100.0     0     1
  openHABian-openhab-dir-007   0:03      478040k   14.0     0     1



NOTES:
  planner: Last full dump of openHABianPi:/etc/openhab2 on tape openHABian-openhab-dir-001 overwritten in 1 run.
  planner: Last full dump of openHABianPi:/var/lib/openhab2 on tape openHABian-openhab-dir-001 overwritten in 1 run.
  taper: Slot 4 without label can be labeled
  taper: Slot 5 without label can be labeled
  taper: tape openHABian-openhab-dir-004 kb 3412764 fm 3 [OK]
  taper: Slot 6 without label can be labeled
  taper: tape openHABian-openhab-dir-005 kb 3412832 fm 1 [OK]
  taper: Slot 7 without label can be labeled
  taper: tape openHABian-openhab-dir-006 kb 3412832 fm 1 [OK]
  taper: Slot 8 without label can be labeled
  taper: tape openHABian-openhab-dir-007 kb 478040 fm 1 [OK]


DUMP SUMMARY:
                                                           DUMPER STATS   TAPER STATS
HOSTNAME     DISK              L  ORIG-kB   OUT-kB  COMP%  MMM:SS   KB/s MMM:SS   KB/s
-------------------------------- ------------------------ -------------- -------------
openHABianPi /dev/mmcblk0      0 15159296 10712408   70.7   65:20 2732.9  65:19 2733.5
openHABianPi /etc/openhab2     1     2690     2690     --    0:01 2116.3   0:01 2690.0
openHABianPi /var/lib/openhab2 1     1370     1370     --    0:03  438.7   0:02  685.0

(brought to you by Amanda version 3.3.9)

I would need to dig into the Amanda docs and try it myself to reliably answer your question, but yes, saving /var/lib/amanda/* and /etc/amanda/* using cp or tar to some off-openHABian location is probably a good idea, so you can restore your backup index (without using Amanda) if needed after a crash/reinstall.
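A minimal sketch of such a copy, assuming Amanda’s usual metadata locations (/var/lib/amanda and /etc/amanda) and a placeholder destination; the demo branch exists only so the snippet runs on a machine without Amanda installed:

```shell
# Sketch: archive Amanda's own metadata (index + config) so a restore
# is possible after a full crash, without a working Amanda install.
# DEST stands in for your NAS or other off-box location.
DEST=/tmp/amanda-meta.tar.gz
if [ -d /var/lib/amanda ] && [ -d /etc/amanda ]; then
  tar -czf "$DEST" /var/lib/amanda /etc/amanda
else
  # Demo fallback (no Amanda on this machine): archive a fake index
  mkdir -p /tmp/demo/var/lib/amanda
  echo demo > /tmp/demo/var/lib/amanda/index
  tar -czf "$DEST" -C /tmp/demo var/lib/amanda
fi
tar -tzf "$DEST"   # list archive contents to verify
```

With the archive on a separate medium, the index can simply be untarred back into place before running amrecover/amfetchdump.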

When I first took Amanda backups into use, I also tested amrecover as explained in openhabian-amanda.md. As recovery worked fine, I believed I was safe. I just learned that recovery can’t be done without the index files, which is the situation after a full crash. This can be a surprise to most users and cause cold sweat after a full crash. Luckily I had only tried to ”recover” my openHAB configs to another VM.

So what is the reason that e.g. the index files are not saved by default to the backup storage (where the slots are) rather than to /var/lib?

Historically, backup storage is on tape, and you certainly don’t want to waste time searching your whole tape (sequential access only!) when your system is down.
Also, speaking from a generic point of view, backup storage is not necessarily any ‘safer’ (i.e. less likely to lose) than /var/lib is.
If you happen to know it is in your specific case (e.g. because you use an SD card for the Amanda indices and a NAS dir as the backup destination), you are free to change the index location, but that certainly does not apply to everyone or even a majority.
Taking copies is a better strategy, and if you do that anyway, it doesn’t really matter much which medium your index is on.
Btw, you can restore without index files, too, but granted, you need to know how this works (ask G**gle), and it can be a hassle to find the right file without indices.

I meant the default in the openHABian Amanda configuration (not Amanda in general). If you lose your backup data, e.g. from the NAS, index files in /var/lib are totally useless as well.

Well, maybe openHABian should then automatically add a cron task to copy the index and other relevant data to the backup storage as well.
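A hypothetical crontab entry to that effect (the schedule, filename, and /mnt/HDD path are illustrative, taken from this thread, not anything openHABian ships):

```
# /etc/crontab fragment (sketch): copy Amanda's index and config
# to the backup storage nightly at 02:30.
30 2 * * * root tar -czf /mnt/HDD/amanda-meta-backup.tar.gz /var/lib/amanda /etc/amanda
```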

Yes, I know, and I also tested in the beginning that I can always untar those backup files manually if nothing else works. Maybe the importance of the index files could be mentioned in the otherwise really great openhabian-amanda.md.

@mstormi Markus, I am trying to follow the recovery process described in the readme to confirm it works as desired, and I have a suggestion to improve that part of the readme: the step of directing amfetchdump to the correct slot is missing, and I also found an option to select the date of the backup to be restored.

So I would propose this, of course feel free to adjust it as you will:

Restoring a partition
To restore a raw disk partition, you need to use the amfetchdump command. Unlike amdump, you have to run amfetchdump as the user backup, though. Here’s another terminal session log using amfetchdump to retrieve the backup image from your backup storage into a file called openhabianpi-image in /server/temp/:

Reminder: you have to be logged in as the backup user.

backup@pi:/server/temp$ amfetchdump -p openhab-dir pi /dev/mmcblk0 > /server/temp/openhabianpi-image
 1 volume(s) needed for restoration
  The following volumes are needed: openhab-openhab-dir-001
  Press enter when ready

Before you actually press Enter, get ready by opening another terminal window and letting Amanda know in which slot the required tape is (you can also do this before starting the amfetchdump command). You have to find the slot yourself by checking the slots’ contents. Once you have found the requested file (probably starting with 00000.), you point Amanda to the slot containing that file (e.g. slot 1) like this:

backup@pi:/server/temp$ amtape openhab-dir slot 1
slot   1: time 20170322084708 label openhab-openhab-dir-001
changed to slot 1

And finally you can go back to the first terminal window and press Enter. Amanda will automatically pick up further files if the backup consists of more than one.

amfetchdump: 4: restoring split dumpfile: date 20170322084708 host pi disk /dev/mmcblk0 part 1/UNKNOWN lev 0 comp N program APPLICATION
  927712 kb

You can also tell amfetchdump the date of the backup you want to restore by adding the date as a parameter (format e.g. 20180327), like this:

backup@pi:/server/temp$ amfetchdump -p openhab-dir pi /dev/mmcblk0 20180327 > /server/temp/openhabianpi-image

The following line shows how to restore this image file to an SD card from Linux. In this example, we have an external SD card writer with a (blank) SD card attached as /dev/sdd.

backup@pi:/server/temp$ dd bs=4M if=/server/temp/openhabianpi-image of=/dev/sdd

You could also move the temporarily recovered image file to a Windows PC with a card writer, rename the file with a .raw extension, and use Etcher or another tool to write the image to the card.

------------ END OF README -------------

I have also probably found the reason for my strange problems discussed previously. It seems the external 2.5" HDD was underpowered. This couldn’t be seen during normal HDD usage, but backing up the full SD card, as well as restoring it from the HDD, caused some input/output errors. Connecting it to an external power supply seems to solve it (at least for now :slight_smile: ).

Hello everyone,
I am trying to switch to a new Pi 3B+ with a newly installed openHABian 1.4. I’ve modified it so it runs the openHABian image fine. Now I want to try the Amanda backup for the first time.
I’ve read the readme and mounted my NAS NFS volume; it is accessible and writable…
I started the backup using amdump openhab-dir as written in the documentation…
My NAS is a Synology DS718+, which is fast enough, but the backup of the fresh openHABian has now been running for 24 hours and is still going. I only have 4 files in slot2, approx. 300 MB.

Is that normal? Can anybody tell me what I can do that this runs in a ‘normal’ time frame?

I made a first attempt that was just as slow, so I modified amanda.conf, increased the bandwidth setting, and corrected the SD card size; I can’t see any difference now.

EDIT: The connection is a Cat5e Ethernet cable to both the Pi 3 and the NAS.

How large is your SD card?
The first run, if it includes the full SD card /dev/mmcblk0 raw device to be backed up, might take half a day for a 16 GB card, depending on how busy your Pi is.
For larger cards we’ve even seen the whole system lock up; that’s why there’s the recommendation to stay at 16 GB or less. If your slot2 is the only filled one, then I guess this is what happened to you.
Network or NAS bandwidth isn’t the bottleneck.

Try removing the raw devices from the disklist file, then rerun amcheck and amdump.
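For illustration, a sketch of what that disklist edit could look like; the dumptype names here are placeholders, not necessarily what the openHABian installer writes:

```
# /etc/amanda/openhab-dir/disklist (sketch; dumptype names illustrative)
# Raw-device DLE commented out to skip the slow full SD card dump:
#openHABianPi  /dev/mmcblk0       amraw
openHABianPi  /etc/openhab2      comp-user-tar
openHABianPi  /var/lib/openhab2  comp-user-tar
```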

What do you mean? There’s no (relevant) SD card size setting if you back up to a NAS. If you are referring to the SD tapetype definition, it is not used and thus meaningless. FWIW, I just changed that in openHABian.

I am also new to Amanda and just set it up yesterday.

Btw: I’m using an OrangePi with integrated eMMC flash (so no SD card is required).

Does the output of amreport mean that the WHOLE eMMC memory has been dumped and compressed from 15 GB to about 2 GB in less than 20 minutes?

STATISTICS:
                          Total       Full      Incr.   Level:#
                        --------   --------   --------  --------
Estimate Time (hrs:min)     0:04
Run Time (hrs:min)          0:17
Dump Time (hrs:min)         0:13       0:13       0:00
Output Size (meg)         1537.1     1537.1        0.0
Original Size (meg)      15020.0    15020.0        0.0
Avg Compressed Size (%)     10.2       10.2        --
DLEs Dumped                    3          3          0
Avg Dump Rate (k/s)       2066.4     2066.4        --

Tape Time (hrs:min)         0:13       0:13       0:00
Tape Size (meg)           1537.1     1537.1        0.0
Tape Used (%)               46.1       46.1        0.0
DLEs Taped                     3          3          0
Parts Taped                    3          3          0
Avg Tp Write Rate (k/s)   2051.8     2051.8        --

USAGE BY TAPE:
  Label                        Time         Size      %  DLEs Parts
  openHABian-openhab-dir-003   0:13     1573955k   46.1     3     3

NOTES:
  planner: Last full dump of smarthome:/dev/mmcblk0 on tape openHABian-openhab-dir-002 overwritten in 1 run.
  planner: Last full dump of smarthome:/etc/openhab2 on tape openHABian-openhab-dir-001 overwritten in 1 run.
  planner: Last full dump of smarthome:/var/lib/openhab2 on tape openHABian-openhab-dir-001 overwritten in 1 run.
  planner: Full dump of smarthome:/var/lib/openhab2 promoted from 14 days ahead.
  planner: Full dump of smarthome:/etc/openhab2 promoted from 14 days ahead.
  taper: Slot 4 without label can be labeled
  taper: Slot 5 without label can be labeled
  taper: tape openHABian-openhab-dir-003 kb 1573955 fm 3 [OK]


DUMP SUMMARY:
                                                          DUMPER STATS    TAPER STATS
HOSTNAME     DISK              L  ORIG-kB  OUT-kB  COMP%  MMM:SS    KB/s MMM:SS    KB/s
-------------------------------- ----------------------- --------------- --------------
smarthome    /dev/mmcblk0      0 15267840 1461285    9.6   12:37  1930.7  12:36  1932.9
smarthome    /etc/openhab2     0      760     760     --    0:00  3318.7   0:00  7600.0
smarthome    /var/lib/openhab2 0   111910  111910     --    0:05 24385.8   0:11 10173.6

See


I don’t understand why you write the tar to /volatile/backup.
Or should that be replaced with the storage path, like /mnt/externalNFS?

Stefan

Right, that’s my storage path; I forgot to change that. Now I have.


yes