Amanda failure report: need help interpreting it

Hi all,
I recently realised that Amanda was no longer making nightly backups, and hadn't been for around a year. Eek.
So I went back to the openHABian script and set it up once again.
I have a mounted remote volume I use for backups and all seemed good, but in the last couple of days I noticed that the successful-looking daily reports have stopped and I am getting failure reports instead.

Not wanting a repeat of going without backups for another year, I would like to tackle this head on; however, I am really not sure what the report is telling me.
I understand that Amanda creates files it treats as virtual tapes, and that it rotates the tapes used so that a long period of time is covered.

Here is the report; any assistance with what I need to look at would be good. I am wondering if it is unable to delete the files and thus free up the name for reuse? But I'm really just guessing right now.
I will take a look to see if anything in /var/log provides a clue.

Hostname: openhab
Org     : openHABian openhab-dir
Config  : openhab-dir
Date    : September 12, 2019

These dumps were to tapes openHABian-openhab-dir-015, openHABian-openhab-dir-016.
*** A TAPE ERROR OCCURRED: [No acceptable volumes found].
No dumps are left in the holding disk.

The next 10 tapes Amanda expects to use are: openHABian-openhab-dir-001, openHABian-openhab-dir-002, openHABian-openhab-dir-003, openHABian-openhab-dir-004, openHABian-openhab-dir-005, openHABian-openhab-dir-006, openHABian-openhab-dir-007, openHABian-openhab-dir-008, openHABian-openhab-dir-009, openHABian-openhab-dir-010.


STATISTICS:
                          Total       Full      Incr.   Level:#
                        --------   --------   --------  --------
Estimate Time (hrs:min)     0:00
Run Time (hrs:min)          0:00
Dump Time (hrs:min)         0:00       0:00       0:00
Output Size (meg)          325.1      324.9        0.1
Original Size (meg)        325.1      324.9        0.1
Avg Compressed Size (%)    100.0      100.0      100.0
DLEs Dumped                    2          1          1  1:1
Avg Dump Rate (k/s)      31070.7    31712.7      588.2

Tape Time (hrs:min)         0:00       0:00       0:00
Tape Size (meg)            325.1      324.9        0.1
Tape Used (%)              119.1      119.0        0.0
DLEs Taped                     2          1          1  1:1
Parts Taped                    3          2          1  1:1
Avg Tp Write Rate (k/s)  27509.1    27727.5     1300.0

USAGE BY TAPE:
  Label                        Time         Size      %  DLEs Parts
  openHABian-openhab-dir-015   0:00      279330k   99.9     2     2
  openHABian-openhab-dir-016   0:00       53530k   19.1     0     1

NOTES:
  planner: Last full dump of openhab:/etc/openhab2 on tape openHABian-openhab-dir-027 overwritten in 2 runs.
  planner: Last full dump of openhab:/var/lib/openhab2 on tape openHABian-openhab-dir-015 overwritten in 1 run.
  planner: Last level 1 dump of openhab:/var/lib/openhab2 on tape openHABian-openhab-dir-027 overwritten in 2 runs.
  driver: Taper error: "No acceptable volumes found"
  taper: Slot 1 with label openHABian-openhab-dir-015 is usable
  taper: Slot 2 with label openHABian-openhab-dir-016 is usable
  taper: tape openHABian-openhab-dir-015 kb 279330 fm 2 [OK]
  taper: Slot 3 is a volume in error: Error loading device header -- unlabeled volume?, autolabel disabled
  taper: Slot 4 is a volume in error: Error loading device header -- unlabeled volume?, autolabel disabled
  taper: Slot 5 is a volume in error: Error loading device header -- unlabeled volume?, autolabel disabled
  taper: Slot 6 is a volume in error: Error loading device header -- unlabeled volume?, autolabel disabled
  taper: Slot 7 is a volume in error: Error loading device header -- unlabeled volume?, autolabel disabled
  taper: Slot 8 is a volume in error: Error loading device header -- unlabeled volume?, autolabel disabled
  taper: Slot 9 is a volume in error: Error loading device header -- unlabeled volume?, autolabel disabled
  taper: Slot 10 is a volume in error: Error loading device header -- unlabeled volume?, autolabel disabled
  taper: Slot 11 is a volume in error: Error loading device header -- unlabeled volume?, autolabel disabled
  taper: Slot 12 is a volume in error: Error loading device header -- unlabeled volume?, autolabel disabled
  taper: Slot 13 is a volume in error: Error loading device header -- unlabeled volume?, autolabel disabled
  taper: Slot 14 is a volume in error: Error loading device header -- unlabeled volume?, autolabel disabled
  taper: Slot 15 is a volume in error: Error loading device header -- unlabeled volume?, autolabel disabled
  taper: tape openHABian-openhab-dir-016 kb 53530 fm 1 [OK]


DUMP SUMMARY:
                                                         DUMPER STATS    TAPER STATS
HOSTNAME     DISK              L ORIG-kB  OUT-kB  COMP%  MMM:SS    KB/s MMM:SS    KB/s
-------------------------------- ---------------------- --------------- --------------
openhab      /etc/openhab2     1     130     130     --    0:00   586.2   0:00  1300.0
openhab      /var/lib/openhab2 0  332730  332730     --    0:10 31710.2   0:12 27727.5

(brought to you by Amanda version 3.3.6)

Thanks

Paul

Further investigation indicates the issue could be that the volume has run out of space, due to a separate issue where 2 TB of data is getting stored there when it should not be. I will keep looking for the root cause, but in the meantime I have created some space and will see what happens on tonight's run.

So far none of the logs have provided meaningful help in identifying an inability to create a file, or that there was no space left…
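In case it helps anyone following along, these are the checks I'm running. The path is the mount point from my fstab, so adjust to yours:

```shell
# Free space on the backup mount (path is from my fstab; adjust to yours)
df -h /media/Data2

# Count the top-level vtape directories Amanda created there. This should be
# a small fixed number (the size of the tape rotation), so thousands of
# folders would mean something is badly wrong.
find /media/Data2 -mindepth 1 -maxdepth 1 -type d | wc -l
```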

Regards

Paul

The Amanda error message means that it has no (virtual) tape (i.e. subdirectory) available to write to while staying within its parameters. There are a number of possible reasons why this can happen; for example, if the defined total storage size is too low, Amanda would need to overwrite old backups sooner than the config allows (backups have to reach back a defined number of days or weeks, as set by parameters in amanda.conf).
Also, (v)tapes have to be "labelled" for use with Amanda.
This is done at openHABian install time. Your problem seems to be that, for whatever reason, that label got lost for some of your subdirectories in the storage area. I think it's easiest to reinstall Amanda from the menu; I hope that does the trick. If not, it gets trickier. You can use the amlabel command to label them manually (again). Also check if/why Amanda does not autolabel them; there should be an "autolabel" line in amanda.conf.
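A manual relabel would look something like this; the config name and label template are taken from your report, but check your amanda.conf and tapelist for the exact names and keywords before running anything:

```shell
# Relabel the vtape in slot 3 with the name Amanda expects (run as the backup user)
amlabel openhab-dir openHABian-openhab-dir-003 slot 3

# Or let Amanda label empty/unlabeled volumes itself with a line like this
# in amanda.conf (the %%% gets replaced with a sequence number):
#   autolabel "openHABian-openhab-dir-%%%" empty volume-error
```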
Possibly the NFS permissions are wrong. For example, you must use no_root_squash in the exports on the NFS server so that UID 0 (root) on your NFS client is not mapped to some other UID.
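A suitable export line on the server would look something like this (the export path and client subnet are examples, use your own):

```
# /etc/exports on the NFS server
/srv/backup/amanda  192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
```

After editing, run `exportfs -ra` on the server to apply it.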

Thanks for the response. I checked this morning and it is still complaining, so freeing up disk space did not solve the issue.

Also, in my fstab I have this definition for the mount location (CIFS, not NFS):

//venus/Data2 /media/Data2 cifs username=Paul,password=somesecret,iocharset=utf8,sec=ntlm,rw,file_mode=0777,dir_mode=0777 0 0

Should this be changed? If so, can you advise?
Note it was working for a couple of weeks at least before failing.

[EDIT] I have just checked the destination directory and stopped after the total number of folders reached 8500, and the size on disk is >180 GB. There is something wrong. It seems that it creates the folder recursively within itself.
I will delete the destination and rerun the Amanda setup script.

[EDIT2]
I have deleted all files and folders in the destination locations and rerun the Amanda setup option. Now when I check the number of folders, I get 786 folders and 0 bytes.

Running the check gives the following:

[14:03:11] backup@openhab:~$ amcheck openhab-dir
Amanda Tape Server Host Check
-----------------------------
Searching for label 'openHABian-openhab-dir-001': volume ''
slot 1: contains an empty volume
Will write label 'openHABian-openhab-dir-030' to new volume in slot 1.
NOTE: skipping tape-writable test
Server check took 0.889 seconds

Amanda Backup Client Hosts Check
--------------------------------
Client check: 1 host checked in 2.605 seconds.  0 problems found.

(brought to you by Amanda 3.3.6)

Thanks

Paul

Ouch, you didn't read the manual!
Installing Amanda on CIFS does not work. Change to NFS.

created by Amanda inside the storage destination dir?? What did you enter during the Amanda install?
Compare with the README. It shouldn’t be more than 15. Large numbers can happen if one of the parameters you enter evaluates to (empty), resulting in a shifted-by-one input to the setup routine.

I will take a look at setting up NFS exports off my Linux server. I found the note that you refer to in the docs, reproduced here for future readers:

NOTE: don't use CIFS (Windows sharing). If you have a NAS, use NFS instead. It does not work with CIFS because of issues with symlinks, and it doesn't make sense to use a Windows protocol to share a disk from a UNIX server (all NAS) to a UNIX client (openHABian) at all. If you don't have a NAS, DON'T use your Windows box as the storage server. Attach a USB stick to your Pi instead for storage. 

As the issue recorded is one of symlinks, I will recreate the setup once I have NFS configured correctly and see how many folders are created then.
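For my own reference, the fstab entry should end up looking something like this once the export exists (keeping my server name venus; the export path is a placeholder until I actually set it up on the server):

```
# Replacement for the CIFS line in /etc/fstab
venus:/srv/backup/amanda  /media/Data2  nfs  defaults  0  0
```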

Thanks

Paul