Fatal error when installing Amanda Backup (via openhabian-config)

  • Platform information:
    • Hardware: Raspberry Pi 2
    • OS: latest version of openHABian
    • Java Runtime Environment: openjdk version “1.8.0_152”
    • openHAB version: 2.4.0 Stable Release
    • Specifics: My RPi2 has an SSD drive, but for this test I used only the SD card. Eventually the backup procedure should work with the root fs on the SSD.
      The backups should be stored on my Synology NAS. I made an NFS share on the NAS and mounted it at /media/backup on the RPi. Making a backup of a folder with subfolders and files from the command prompt works.

I tried to install Amanda Backup via openhabian-config, but the install seems to break. I receive the following error:

0 upgraded, 44 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/6,643 kB of archives.
After this operation, 18.9 MB of additional disk space will be used.
Extracting templates from packages: 100%
Preconfiguring packages ...
Selecting previously unselected package liblockfile-bin.
dpkg: unrecoverable fatal error, aborting:
 files list file for package 'libalgorithm-diff-xs-perl' is missing final newline
Updating FireMotD available updates count ...
E: Sub-process /usr/bin/dpkg returned an error code (2)
2019-02-19_15:47:13_CET [openHABian] Checking for default openHABian username:password combination... OK
2019-02-19_15:47:14_CET [openHABian] We hope you got what you came for! See you again soon ;)

I did an upgrade via openhabian-config first, and that went OK, without errors. I searched for "files list file for package 'libalgorithm-diff-xs-perl' is missing final newline" but could not find a solution for my situation.
I followed the prompted configuration for the Amanda installation and chose local backup on the mount point /media/backup.
Does anyone know what went wrong here?

Before using Amanda Backup I tried raspiBackup, but that gave me other user:group errors on the Synology ((ACL_TYPE_DEFAULT): Operation not supported (95)).
When I asked about this in another thread, I was advised not to hijack that thread and to start a new one.

Strictly speaking that's not an Amanda install error but a Raspbian one. That package is broken. Amanda needs Perl, which is why a lot of dependency packages get installed along with it, this one among them.

Use dpkg -l|grep amanda to see if Amanda got installed nevertheless.
If not, try to install again from openhabian-config, or run apt-get update;apt -y install amanda-common amanda-server amanda-client on the command line to install it again.
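If reinstalling keeps failing with the same dpkg message, the error itself points at dpkg's bookkeeping: every installed package has a files list under /var/lib/dpkg/info/, and dpkg aborts when that file lacks its final newline. A possible repair, untested on my side so treat it as a guess, is to append the missing newline and retry:

# appends a final newline only if it is missing, then lets apt finish the install
sudo sed -i -e '$a\' /var/lib/dpkg/info/libalgorithm-diff-xs-perl.list
sudo apt-get -f install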

Hi @mstormi, I did this as one command:

~$ sudo apt-get update;apt -y install amanda-common amanda-server amanda-client

but then I got this error:

Reading package lists... Done
E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?

I used sudo, but perhaps I had to use sudo before the second command as well?
So I ran the two commands separately:

~$ sudo apt-get update
~$ sudo apt -y install amanda-common amanda-server amanda-client
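(I later realized why the one-liner failed: the ; ends the command that sudo runs, so the install part ran without root privileges, hence the lock file error.) Wrapping both commands in a single sudo shell would also have worked, e.g.:

sudo sh -c 'apt-get update; apt -y install amanda-common amanda-server amanda-client'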

That started the install, but it ended with this:

..
..
Need to get 0 B/6,643 kB of archives.
After this operation, 18.9 MB of additional disk space will be used.
Extracting templates from packages: 100%
Preconfiguring packages ...
Selecting previously unselected package liblockfile-bin.
dpkg: unrecoverable fatal error, aborting:
 files list file for package 'libalgorithm-diff-xs-perl' is missing final newline
Updating FireMotD available updates count ...
E: Sub-process /usr/bin/dpkg returned an error code (2)
[21:10:28] openhabian@openhabianrpi2:~$

I find this strange, because this afternoon I did an upgrade of my system and that went flawlessly.
I will try to update the Perl package separately.

EDIT:
I found the Debian package libalgorithm-diff-xs-perl_0.04-5+b1_armhf.deb and tried to install it:

~$ sudo dpkg -i libalgorithm-diff-xs-perl_0.04-5+b1_armhf.deb
dpkg: unrecoverable fatal error, aborting:
 files list file for package 'libalgorithm-diff-xs-perl' is missing final newline

So it must be something else that's bothering me…

The problem is not the folder /usr/share/doc/libalgorithm-diff-xs-perl itself; it must be a file that points to this folder… And I can't find a way to reinstall Perl…

BTW, Amanda is not installed… (dpkg -l|grep amanda)

OK, today a new attempt to use Amanda Backup…
I installed emoncms on a new SD card, which went flawlessly. (I use openHAB AND emoncms together.)
Next I installed openHABian according to the Manual Setup procedure as described in the openHAB docs.
A peculiarity, however, is that my production openHABian uses the user:group openhab:openhabian, while my new openHABian Manual Setup uses openhab:openhab.
The install of Amanda Backup (a clean new install) ended with this error:

update-perl-sax-parsers: Registering Perl SAX parser XML::LibXML::SAX::Parser with priority 50...
update-perl-sax-parsers: Registering Perl SAX parser XML::LibXML::SAX with priority 50...
update-perl-sax-parsers: Updating overall Perl SAX parser modules info file...
Replacing config file /etc/perl/XML/SAX/ParserDetails.ini with new version
Setting up libnet-smtp-ssl-perl (1.04-1) ...
Setting up libxml-simple-perl (2.22-1) ...
Setting up amanda-common (1:3.3.9-5) ...
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
Adding user `backup' to group `disk' ...
Adding user backup to group disk
Done.
Adding user `backup' to group `tape' ...
Adding user backup to group tape
Done.
Setting up amanda-client (1:3.3.9-5) ...
Setting up libmailtools-perl (2.18-1) ...
Setting up amanda-server (1:3.3.9-5) ...
Setting up liblwp-protocol-https-perl (6.06-2) ...
Setting up libwww-perl (6.15-1) ...
Setting up libxml-parser-perl (2.44-2+b1) ...
Setting up libxml-sax-expat-perl (0.40-2) ...
update-perl-sax-parsers: Registering Perl SAX parser XML::SAX::Expat with priority 50...
update-perl-sax-parsers: Updating overall Perl SAX parser modules info file...
Replacing config file /etc/perl/XML/SAX/ParserDetails.ini with new version
Processing triggers for systemd (232-25+deb9u9) ...
usermod: user 'openhabian' does not exist
/bin/chmod: missing operand after 'g+rwx'
Try '/bin/chmod --help' for more information.
2019-02-21_09:31:03_CET [openHABian] Checking for default openHABian username:password combination... OK (unknown user)
2019-02-21_09:31:03_CET [openHABian] We hope you got what you came for! See you again soon ;)

It seems that the Amanda install does not take into account that users may have installed openHABian via the officially supported Manual Setup procedure.
That is a pity. I am really willing to learn the Amanda Backup procedure, because it is the officially recommended and supported way to do backups, but it is being made very hard for me (see my earlier attempts above).
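(The difference is easy to check on each box; a quick sketch, using the two user names from above:

id openhabian    # present on an image-based install; "no such user" under the Manual Setup
id openhab       # the user the Manual Setup creates

which matches the usermod: user 'openhabian' does not exist error in the log above.)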
Next I will try raspiBackup (on this config with emoncms AND openHAB). I will document that as well, like I documented each step in this attempt.
Whether that works or not, I will then make another attempt with a clean openHABian install (without emoncms) to see if Amanda works.
But this is taking so much time, just to get a good backup solution over NFS to my Synology NAS…

OK, the continuing story of backing up a Raspberry Pi with USB SSD to a Synology NAS.
My second test uses raspiBackup. Contrary to my earlier posts, this time it works without problems.
Again, we are talking about a system based on emoncms to which I added openHABian, using the Manual Setup.
For the first test with raspiBackup I used the default settings, with no changes to the default raspiBackup.conf file. By default raspiBackup uses dd to create a backup image on the NAS.

emoncms@emoncmsrpi3:~ $ sudo /usr/local/bin/raspiBackup.sh -a ":" -o ":" /media/backup

This resulted in the following successful backup (the -o ":" and -a ":" arguments pass a shell no-op as the stop and start commands for services, if I read the raspiBackup options correctly, hence the "No services to stop/start" warnings below):

--- RBK0009I: emoncmsrpi3: raspiBackup.sh V0.6.4.2 (2184fa5) started at Thu Feb 21 11:23:24 CET 2019.
--- RBK0151I: Using backuppath /media/backup.
!!! RBK0157W: No services to stop.
--- RBK0085I: Backup of type dd started. Please be patient.
31104+1 records in
31104+1 records out
31104958464 bytes (31 GB, 29 GiB) copied, 3444.55 s, 9.0 MB/s
--- RBK0078I: Backup time: 00:57:25.
!!! RBK0156W: No services to start.
--- RBK0033I: Please wait until cleanup has finished.
--- RBK0017I: Backup finished successfully.
--- RBK0010I: emoncmsrpi3: raspiBackup.sh V0.6.4.2 (2184fa5) stopped at Thu Feb 21 12:20:53 CET 2019.
emoncms@emoncmsrpi3:~ $

And on my NAS, here are the files:

Next, a test with rsync, stopping the openhab and mosquitto services before the backup and starting them again afterwards. For this I changed three lines in raspiBackup.conf:

# type of backup: dd, tar or rsync
DEFAULT_BACKUPTYPE="rsync"

# commands to stop services before backup separated by &&
DEFAULT_STOPSERVICES="systemctl stop openhab2.service && systemctl stop mosquitto.service && service feedwriter stop && service mqtt_input stop && service emoncms-nodes-service stop"

# commands to start services after backup separated by &&
DEFAULT_STARTSERVICES="systemctl start openhab2.service && systemctl start mosquitto.service && service emonhub start && service mqtt_input start && service emoncms-nodes-service start"

This is the result, after waiting for some 40 minutes:

emoncms@emoncmsrpi3:~ $ sudo /usr/local/bin/raspiBackup.sh /media/backup
--- RBK0009I: emoncmsrpi3: raspiBackup.sh V0.6.4.2 (2184fa5) started at Thu Feb 21 12:53:55 CET 2019.
--- RBK0151I: Using backuppath /media/backup.
--- RBK0036I: Saving partition layout.
43+1 records in
43+1 records out
46005248 bytes (46 MB, 44 MiB) copied, 6.1295 s, 7.5 MB/s
--- RBK0158I: Creating native rsync backup "/media/backup/emoncmsrpi3/emoncmsrpi3-rsync-backup-20190221-125354".
--- RBK0085I: Backup of type rsync started. Please be patient.
--- RBK0078I: Backup time: 00:12:13.
--- RBK0033I: Please wait until cleanup has finished.
--- RBK0017I: Backup finished successfully.
--- RBK0010I: emoncmsrpi3: raspiBackup.sh V0.6.4.2 (2184fa5) stopped at Thu Feb 21 13:06:32 CET 2019.
emoncms@emoncmsrpi3:~ $

So this was also successful, with this result on the NAS.

Now that I know this is working, I tested the restore. First I checked the device name of my SD card:

emoncms@emoncmsrpi3:~ $ sudo fdisk -l | egrep "^Disk /|^/dev"
Disk /dev/mmcblk0: 29 GiB, 31104958464 bytes, 60751872 sectors
/dev/mmcblk0p1       8192    98045    89854 43.9M  c W95 FAT32 (LBA)
/dev/mmcblk0p2      98304 60751871 60653568 28.9G 83 Linux

So the SD card device goes by the name /dev/mmcblk0.
The restore command, for a /boot and / (root) partition on the SD card, then becomes:

emoncms@emoncmsrpi3:~ $ sudo raspiBackup.sh -0 /dev/mmcblk0 /media/backup/emoncmsrpi3/emoncmsrpi3-rsync-backup-20190221-125354/
--- RBK0009I: emoncmsrpi3: raspiBackup.sh V0.6.4.2 (2184fa5) started at Thu Feb 21 14:13:40 CET 2019.
??? RBK0149E: /dev/mmcblk0 not found.

Then I realized /dev/mmcblk0 is in use, so I tried it with a USB SD-card reader and another SD card, and that worked. I could boot the system from the new SD card.
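(With the card in a USB reader it gets a different device node, so first check which one it is, the same way as before:

sudo fdisk -l | egrep "^Disk /"
# the reader typically shows up as /dev/sda or similar; double-check before restoring

and point the restore at that device instead of /dev/mmcblk0.)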

Unfortunately this requires manual intervention at the site where the system resides. I would like a solution where no manual intervention is needed, so I am able to restore from a distance. (Think of a situation where you are away for several weeks, get a notification that something is seriously wrong, and getting remote access to my local LAN is no problem.)

OK, now that I know I can get some sort of rsync backup to my NAS working, I will give Amanda Backup another try with a fresh openHABian image, without any other systems like emoncms.
To be continued…

PS: I do have a question for the people maintaining the openHABian solution (I think @ThomDietrich or @mstormi…).
Is there a reason why the openHABian image (with the Package Repository Installation) and the openHABian Manual Setup use different user:group configurations, openhab:openhabian vs openhab:openhab?
And is it possible for an end user to change the Manual Setup to use the same user:group settings?

Not sure what you are referring to, so I can't answer. The answer is probably that it's coincidence.

Hi @deltabert,

The apt package sets the directory ownership to openhab:openhab, but openHABian overwrites this with its own settings. I have created issues #533 and #534 to track the conversation here.

Hi @Benjy,
Thank you for your response and for taking the question to the team. But I still wonder: I do not understand how this explains the difference between installing Amanda on a system built from an openHABian image and on a system built using the openHABian Manual Setup.

At first glance it seems to me that there is a difference between the install script for the openHABian image (openhab:openhabian) and the install script used by the Manual Setup (openhab:openhab)…
Why doesn't openHABian overwrite the ownership in the Manual Setup case?

A new attempt to make backups with Amanda Backup.
This is a newly installed SD card with openHABian, from a freshly downloaded image.
As soon as openHABian was ready I did the following:

openhabian@openhabianrpi228:~$ sudo mkdir /media/backup
openhabian@openhabianrpi228:~$ sudo nano /etc/fstab
Added the line:
192.168.178.243:/volume1/NetBackup /media/backup  nfs rw,acl,vers=3,proto=tcp,hard,nolock,nofail,x-systemd.automount,x-systemd.requires=network-online.target 0 0
openhabian@openhabianrpi228:~$ sudo shutdown -r now
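A quick sanity check after the reboot, that the share is really mounted:

mount | grep /media/backup
df -h /media/backup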

After the reboot I went to openhabian-config and chose 01 Update, 02 Upgrade System, 10 Apply Improvements (selected Packages), and 20 Optional Components | 23 Mosquitto.
Then a shutdown -r now. After the reboot I changed the user password with passwd.
Then I started sudo openhabian-config, 20 Backup/Restore, 51 Amanda Backup.
At the end of the script Amanda told me it would make 6666 virtual containers, which takes some time.
Then the procedure ended with:

...
..
Setting up libwww-perl (6.15-1) ...
Setting up libxml-parser-perl (2.44-2+b1) ...
Setting up libxml-sax-expat-perl (0.40-2) ...
update-perl-sax-parsers: Registering Perl SAX parser XML::SAX::Expat with priority 50...
update-perl-sax-parsers: Updating overall Perl SAX parser modules info file...
Replacing config file /etc/perl/XML/SAX/ParserDetails.ini with new version
Updating FireMotD available updates count ... 2019-02-21 16:43:59,133: FireMotD: Error: Template folderrun the install function "FireMotD -I -v".

/bin/chmod: missing operand after ‘g+rwx’
Try '/bin/chmod --help' for more information.

2019-02-21_16:54:38_CET [openHABian] Checking for default openHABian username:password combination.
2019-02-21_16:54:38_CET [openHABian] We hope you got what you came for! See you again soon ;)
[16:54:38] openhabian@openhabianrpi228:~$

I do not know why this typical MotD stuff is needed. I tried to (re)install it via openhabian-config, 10 Apply Improvements, 15 FireMotD, but this did not remove that message on the next attempt.
Eventually I found somewhere how to solve it, and ran the Amanda Backup install again.

Now I try to make a backup with Amanda.

[22:13:09] openhabian@openhabianrpi228:~$ sudo su - backup
[sudo] password for openhabian:
[22:18:10] backup@openhabianrpi228:~$ amcheck openhab-dir
"/etc/amanda/openhab-dir/amanda.conf", line 21: an integer is expected
"/etc/amanda/openhab-dir/amanda.conf", line 21: end of line is expected
"/etc/amanda/openhab-dir/amanda.conf", line 22: tapetype parameter expected
"/etc/amanda/openhab-dir/amanda.conf", line 22: end of line is expected
"/etc/amanda/openhab-dir/amanda.conf", line 23: tape type parameter expected
"/etc/amanda/openhab-dir/amanda.conf", line 27: an integer is expected
"/etc/amanda/openhab-dir/amanda.conf", line 27: end of line is expected
"/etc/amanda/openhab-dir/amanda.conf", line 28: tapetype parameter expected
"/etc/amanda/openhab-dir/amanda.conf", line 28: end of line is expected
"/etc/amanda/openhab-dir/amanda.conf", line 29: tape type parameter expected
amcheck: errors processing config file
[22:18:32] backup@openhabianrpi228:~$ amdump openhab-dir
"/etc/amanda/openhab-dir/amanda.conf", line 21: an integer is expected
"/etc/amanda/openhab-dir/amanda.conf", line 21: end of line is expected
"/etc/amanda/openhab-dir/amanda.conf", line 22: tapetype parameter expected
"/etc/amanda/openhab-dir/amanda.conf", line 22: end of line is expected
"/etc/amanda/openhab-dir/amanda.conf", line 23: tape type parameter expected
"/etc/amanda/openhab-dir/amanda.conf", line 27: an integer is expected
"/etc/amanda/openhab-dir/amanda.conf", line 27: end of line is expected
"/etc/amanda/openhab-dir/amanda.conf", line 28: tapetype parameter expected
"/etc/amanda/openhab-dir/amanda.conf", line 28: end of line is expected
"/etc/amanda/openhab-dir/amanda.conf", line 29: tape type parameter expected
amdump: errors processing config file at /usr/sbin/amdump line 79.

I am at my wits' end now. To be honest, I would like to throw the Amanda stuff out of the window…
I consider the test with Amanda failed.

Sorry to hear. But I cannot duplicate your inputs, so I need to resort to guessing.
As you say Amanda told you it would create 6666 virtual tapes (!), you must have entered weird data.
It probably failed because of that, and now amanda.conf is obviously messed up.
I suggest you:
a) delete /etc/amanda/* — but before you do, take a copy of your current /etc/amanda/openhab-dir/amanda.conf and post it here
b) delete everything in your storage area and make sure the directory is properly mounted and writable for the user "backup" (see the check sketched below)
c) re-run the Amanda installation from openhabian-config
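For b), a quick check could look like this (a sketch, assuming the share is mounted at /media/backup):

mount | grep /media/backup
# the touch/rm pair verifies that the user "backup" can actually write there
sudo -u backup touch /media/backup/.writetest && sudo -u backup rm /media/backup/.writetest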

You might have been unlucky enough to get hit by a recently discovered bug.
Wait for https://github.com/openhab/openhabian/pull/535 to be merged and update openHABian before you retry.

Hi @mstormi,
I was already busy with another try. I think I know the reason for the large number of 6666 virtual tapes. Unfortunately I had already overwritten the SD card from that test with an old OH 2.3 backup of mine to start a new test with Amanda (despite my disappointment yesterday). But I had kept screen copies of every step I made…
About the number 6666, the following. The procedure asks the question:

How much storage do you want to dedicate to your backup
in megabytes? Recommendation: 2-3 times the amount of
data to be backed up.

I use a 32 GB SD card but didn't check with df -am how much of it was used. Instead I calculated 3 x 32 GB and filled in 100000, being about 3 times the size in megabytes of a full 32 GB SD card.

OK, today I started another attempt, and this time I filled in 10000 (megabytes):

[16:30:53] backup@openHABianPi:~$ df -am
Filesystem                         1M-blocks    Used Available Use% Mounted on
/dev/root                              14468    2090     11759  16% /

For the other questions the procedure asks, I used these values:
Admin reports - No e-mail address
Create file storage area based backup - Locally attached NAS - Yes
Storage directory - /media/backup
Storage capacity - 10000
Backup raw SD card, too? - No
Storage container creation - Continue
Create Amazon S3 based Backup - No

That's it, and this results in the following messages once the configuration screens are gone:

[16:27:12] openhabian@openHABianPi:~$ sudo openhabian-config
2019-02-23_16:28:12_CET [openHABian] Checking for root privileges... OK
2019-02-23_16:28:12_CET [openHABian] Loading configuration file '/etc/openhabian.conf'... OK
2019-02-23_16:28:12_CET [openHABian] openHABian configuration tool version: [master]v1.4.1-453(1380125)
2019-02-23_16:28:12_CET [openHABian] Checking for changes in origin... OK
2019-02-23_16:28:25_CET [openHABian] Setting up the Amanda backup system ...
$ apt -y install amanda-common amanda-server amanda-client
Reading package lists... Done
Building dependency tree
Reading state information... Done
amanda-client is already the newest version (1:3.3.9-5).
amanda-common is already the newest version (1:3.3.9-5).
amanda-server is already the newest version (1:3.3.9-5).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
/bin/chmod: missing operand after ‘g+rwx’
Try '/bin/chmod --help' for more information.
2019-02-23_16:29:32_CET [openHABian] Checking for default openHABian username:password combination... OK
2019-02-23_16:29:32_CET [openHABian] We hope you got what you came for! See you again soon ;)
[16:29:33] openhabian@openHABianPi:~$

I do not know why I get this message:

/bin/chmod: missing operand after ‘g+rwx’
Try '/bin/chmod --help' for more information.

But I will wait for a message from you on when I can test again.

EDIT: The number of megabytes one has to fill in is somewhat misleading. If I look in the /slots folder now, there is still a huge number of slots there…
What should I fill in for my SD card (14468 1M-blocks used)? The procedure says 3 times the data size in megabytes… so it should be 3 x 14468 ≈ 44000… So why does my 10000 give so many slots?
Or am I miscalculating by a factor of 1000?

When I now try to test, I get this error:

[16:30:40] backup@openHABianPi:~$ amcheck openhab-dir
"/etc/amanda/openhab-dir/amanda.conf", line 21: an integer is expected
"/etc/amanda/openhab-dir/amanda.conf", line 21: end of line is expected
"/etc/amanda/openhab-dir/amanda.conf", line 22: tapetype parameter expected
"/etc/amanda/openhab-dir/amanda.conf", line 22: end of line is expected
"/etc/amanda/openhab-dir/amanda.conf", line 23: tape type parameter expected
"/etc/amanda/openhab-dir/amanda.conf", line 27: an integer is expected
"/etc/amanda/openhab-dir/amanda.conf", line 27: end of line is expected
"/etc/amanda/openhab-dir/amanda.conf", line 28: tapetype parameter expected
"/etc/amanda/openhab-dir/amanda.conf", line 28: end of line is expected
"/etc/amanda/openhab-dir/amanda.conf", line 29: tape type parameter expected
amcheck: errors processing config file

This is my Amanda config file:

[17:06:03] backup@openHABianPi:~$ cat /etc/amanda/openhab-dir/amanda.conf
org "openHABian openhab-dir"                            # Organization name for reports
mailto "%ADMIN"                                         # Email address to receive reports
netusage 90000 Kbps                                     # Bandwidth limit, 90M
dumpcycle 2 weeks                                       # Backup cycle is 14 days
runspercycle 7                                          # Run 7 times every 14 days
tapecycle 666 tapes                                     # Dump to this number of different tapes during the cycle
runtapes 10                                             # number of virtual containers to use at most per backup run
tpchanger "chg-disk:/slots"    # The tape-changer glue script
taper-parallel-write 2
autolabel "openHABian-openhab-dir-%%%" empty
tapelist "/etc/amanda/openhab-dir/tapelist"                             # The tapelist file
tapetype DIRECTORY
infofile "/var/lib/amanda/openhab-dir/curinfo"          # Database directory
logdir "/var/log/amanda/openhab-dir"                    # Log directory
indexdir "/var/lib/amanda/openhab-dir/index"            # Index directory
define tapetype SD {
    comment "SD card size"
    length 16 gbytes                                    # default SD card size (1 bucket = 1 SD d)
}
define tapetype DIRECTORY {                             # Define our tape behaviour
        length /media/backup mbytes                             # size of every virtual container (= max. usage per directory)
}
define tapetype AWS {
    comment "S3 Bucket"
    length /media/backup mbytes                                 # actual bucket size 5GB (Amazon default for free S3)
}

amrecover_changer "changer"                             # Changer for amrecover

# don't use any holding disk for the time being
#holdingdisk hd {
#    directory "/holdingdisk/openhab-dir"
#    use 1000 Mb
#}

define dumptype global {                                # The global dump definition
        maxdumps 2                                      # maximum number of backups run in parallel
        holdingdisk no                                  # Dump to temp disk (holdingdisk) before backup to tape
        index yes                                       # Generate index. For restoration usage
}
define dumptype root-tar {                              # How to dump root's directory
        global                                          # Include global (as above)
        program "GNUTAR"                                # Program name for compress
        estimate server                                 # Estimate the backup size before dump
        comment "root partitions dumped with tar"
        compress none                                   # No compression
        index                                           # Index this dump
        priority low                                    # Priority level
}
define dumptype user-tar {                              # How to dump user's directory
        root-tar                                        # Include root-tar (as above)
        comment "user partitions dumped with tar"
        priority medium                                 # Priority level
}
define dumptype comp-user-tar {                         # How to dump & compress user's directory
        user-tar                                        # Include user-tar (as above)
        compress client fast                            # Compress in client side with less CPU (fast)
}
define application-tool app_amraw {                     # how to dump the SD card's raw device /dev/mmcblk0
        plugin "amraw"                                  # uses 'dd'
}
define dumptype amraw {
        global
        program "APPLICATION"
        application "app_amraw"
}
# vim: filetype=conf

The problem seems to be that the internal routine that generates amanda.conf is called with shifted parameters; this results in 6666 containers instead of 15 containers of 6666 MB each. You can see it in the config above: the storage path /media/backup ended up where an integer (the container length) belongs.
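A minimal sketch of the arithmetic (my reconstruction, not the actual backup.sh code):

tapes=15
capacity=100000               # what was entered, in MB
size=$((capacity / tapes))    # = 6666 MB per virtual container
# with the arguments shifted by one position, that 6666 landed in the
# tape-count slot, so 6666 containers were created instead of 15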

Can you edit /opt/openhabian/functions/backup.sh and add set -x as a line below the line that reads create_backup_config() {?
Then try again. Enter an admin mail address this time and send a copy of the output.

Hi, I added the set -x:

create_backup_config() {
  set -x
  local config=$1
  local confdir=/etc/amanda/${config}
  local backupuser=$2
  etc..

as the first line under create_backup_config() { in /opt/openhabian/functions/backup.sh.

Then I started openhabian-config again.
This time I filled in my email address when asked for it.
I again filled in 10000 for the Storage Capacity question.
At the end the procedure 'hangs', and I see this result in the terminal session:

[12:44:47] openhabian@openHABianPi:~$ sudo openhabian-config
2019-02-24_12:45:01_CET [openHABian] Checking for root privileges... OK
2019-02-24_12:45:01_CET [openHABian] Loading configuration file '/etc/openhabian.conf'... OK
2019-02-24_12:45:01_CET [openHABian] openHABian configuration tool version: [master]v1.4.1-454(0f38945)
2019-02-24_12:45:01_CET [openHABian] Checking for changes in origin... OK
2019-02-24_12:45:31_CET [openHABian] Setting up the Amanda backup system ...
$ apt -y install amanda-common amanda-server amanda-client
Reading package lists... Done
Building dependency tree
Reading state information... Done
amanda-client is already the newest version (1:3.3.9-5).
amanda-common is already the newest version (1:3.3.9-5).
amanda-server is already the newest version (1:3.3.9-5).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
+ local config=openhab-dir
+ local confdir=/etc/amanda/openhab-dir
+ local backupuser=backup
+ local adminmail=btammer1@gmail.com
+ local tapes=15
+ local size=666
+ local storage=/media/backup
+ local S3site=
+ local S3bucket=
+ local S3accesskey=
+ local S3secretkey=openhab-dir0
+ TMP=/tmp/.amanda-setup.2493
+ local 'introtext=We need to prepare (to "label") your removable storage media.'
+ /bin/grep -v openhab-dir /etc/cron.d/amanda
+ mv /tmp/.amanda-setup.2493 /etc/cron.d/amanda
+ echo '0 1 * * * backup /usr/sbin/amdump openhab-dir >/dev/null 2>&1'
+ echo '0 18 * * * backup /usr/sbin/amcheck -m openhab-dir >/dev/null 2>&1'
+ '[' '' = DIRECTORY ']'
+ mkdir -p /etc/amanda/openhab-dir
+ touch /etc/amanda/openhab-dir/tapelist
++ /bin/hostname
+ local hostname=openHABianPi
+ echo 'openHABianPi backup'
+ echo 'openHABianPi root amindexd amidxtaped'
+ echo 'localhost backup'
+ echo 'localhost root amindexd amidxtaped'
+ infofile=/var/lib/amanda/openhab-dir/curinfo
+ logdir=/var/log/amanda/openhab-dir
+ indexdir=/var/lib/amanda/openhab-dir/index
+ /bin/mkdir -p /var/lib/amanda/openhab-dir/curinfo /var/log/amanda/openhab-dir /var/lib/amanda/openhab-dir/index
+ /bin/chown -R backup:backup /var/backups/.amandahosts /etc/amanda/openhab-dir /var/lib/amanda/openhab-dir/curinfo /var/log/amanda/openhab-dir /var/lib/amanda/openhab-dir/index
+ '[' openhab-dir = openhab-dir ']'
+ /bin/chown -R backup:backup /var/backups/.amandahosts /media/backup
+ /bin/chmod -R g+rwx /media/backup

And this is my amanda.conf file:

[12:51:56] openhabian@openHABianPi:~$ sudo cat /etc/amanda/openhab-dir/amanda.conf
[sudo] password for openhabian:
org "openHABian openhab-dir"                            # Organization name for reports
mailto "%ADMIN"                                         # Email address to receive reports
netusage 90000 Kbps                                     # Bandwidth limit, 90M
dumpcycle 2 weeks                                       # Backup cycle is 14 days
runspercycle 7                                          # Run 7 times every 14 days
tapecycle 15 tapes                                      # Dump to this number of different tapes during the cycle
runtapes 10                                             # number of virtual containers to use at most per backup run
tpchanger "chg-disk:/media/backup/slots"    # The tape-changer glue script
taper-parallel-write 2
autolabel "openHABian-openhab-dir-%%%" empty
tapelist "/etc/amanda/openhab-dir/tapelist"                             # The tapelist file
tapetype DIRECTORY
infofile "/var/lib/amanda/openhab-dir/curinfo"          # Database directory
logdir "/var/log/amanda/openhab-dir"                    # Log directory
indexdir "/var/lib/amanda/openhab-dir/index"            # Index directory
define tapetype SD {
    comment "SD card size"
    length 16 gbytes                                    # default SD card size (1 bucket = 1 SD d)
}
define tapetype DIRECTORY {                             # Define our tape behaviour
        length 666 mbytes                               # size of every virtual container (= max. usage per directory)
}
define tapetype AWS {
    comment "S3 Bucket"
    length 666 mbytes                                   # actual bucket size 5GB (Amazon default for free S3)
}

amrecover_changer "changer"                             # Changer for amrecover

# don't use any holding disk for the time being
#holdingdisk hd {
#    directory "/holdingdisk/openhab-dir"
#    use 1000 Mb
#}

define dumptype global {                                # The global dump definition
        maxdumps 2                                      # maximum number of backups run in parallel
        holdingdisk no                                  # Dump to temp disk (holdingdisk) before backup to tape
        index yes                                       # Generate index. For restoration usage
}
define dumptype root-tar {                              # How to dump root's directory
        global                                          # Include global (as above)
        program "GNUTAR"                                # Program name for compress
        estimate server                                 # Estimate the backup size before dump
        comment "root partitions dumped with tar"
        compress none                                   # No compression
        index                                           # Index this dump
        priority low                                    # Priority level
}
define dumptype user-tar {                              # How to dump user's directory
        root-tar                                        # Include root-tar (as above)
        comment "user partitions dumped with tar"
        priority medium                                 # Priority level
}
define dumptype comp-user-tar {                         # How to dump & compress user's directory
        user-tar                                        # Include user-tar (as above)
        compress client fast                            # Compress in client side with less CPU (fast)
}
define application-tool app_amraw {                     # how to dump the SD card's raw device /dev/mmcblk0
        plugin "amraw"                                  # uses 'dd'
}
define dumptype amraw {
        global
        program "APPLICATION"
        application "app_amraw"
}
# vim: filetype=conf
[12:53:06] openhabian@openHABianPi:~$

Any suggestions?

amanda.conf basically looks fine to me now. Run amdump for a test.
You may eventually need to clean your storage area again and re-run the install.

EDIT: update openhabian-config first (it should ask you to on start).

Yes, updating openhabian-config when it starts is a standard practice of mine, even if it does not ask me to do so… I do this every time I start openhabian-config. And when I am working on a Pi, the first time each day I also run the system update.
OK, I ran amdump, but it gives no output:

[13:44:26] backup@openHABianPi:~$ amdump openhab-dir
[13:45:41] backup@openHABianPi:~$ amreport openhab-dir
amreport: Errors processing disklist at /usr/sbin/amreport line 575.

[13:45:56] backup@openHABianPi:~$
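Since amreport complains about the disklist, I suppose the next thing to check is the disklist file itself (assuming it sits in the standard place next to amanda.conf):

cat /etc/amanda/openhab-dir/disklist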

I have had this before: the process for installing Amanda 'halts' and Ctrl-C does not get me back to the command prompt… And I can't identify the process with ps -A from another SSH session.
So I decided to do a shutdown -r now. And then the system did not boot again!
So today I burned the SD card again and started all over (updating etc.). Now I'm in the same situation, but this time I'll hold off on the shutdown; perhaps you have some ideas.
I see some tty and some ssh processes, but I am not sure which I could kill.

[14:00:23] openhabian@openHABianPi:~$ ps -A | grep tty
  490 ?        00:00:00 agetty
  492 tty1     00:00:00 agetty
[14:05:51] openhabian@openHABianPi:~$

[14:07:17] openhabian@openHABianPi:~$ ps -A | grep ssh
  636 ?        00:00:00 sshd
 1105 ?        00:00:00 sshd
 1123 ?        00:00:00 sshd
 2751 ?        00:00:00 sshd
 2762 ?        00:00:00 sshd
 4467 ?        00:00:00 sshd
 4478 ?        00:00:00 sshd
[14:08:06] openhabian@openHABianPi:~$

OK, I killed a few ssh processes, which removed the SSH session where the command was given.
Then I rebooted the Pi, and this time it booted properly. Pfffff…
But now I do not know what else I can do to get Amanda working…
It is a very difficult and laborious process.

I don't know, but if I were to guess, it is related to your NFS mount. If you want to exclude that possibility, add a USB stick, install Amanda again, and use that stick as your storage.
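For example (a sketch, assuming the stick shows up as /dev/sda1; verify with fdisk -l first, as formatting erases it):

sudo mkfs.ext4 /dev/sda1                 # WARNING: wipes the stick
sudo mount /dev/sda1 /media/backup
sudo chown backup:backup /media/backup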

OK, I will try that tomorrow.
But I think it is strange, because my test with raspiBackup and the same NFS mount worked perfectly. Still, I will give it a try. I have a 32 GB USB stick and a 128 GB USB SSD available. I'll start with the stick.

Start small and disable raw SD partition backup when asked. You can add it later to the disklist file.