Adding a directory for ZRAM and Amanda

I need some help using the ZRAM feature correctly.
I added a program to control some devices and need to store some variables that update frequently.
I chose ZRAM for this variables file ‘abc’ (combined with Amanda for backup).

The file is located at /var/lib/rflink/abc

I added the following line to /etc/ztab:
dir zstd 10M 50M /var/lib/rflink /rflink.bind
and to /etc/amanda/openhab-dir/disklist
openhabian /opt/zram/rflink.bind comp-user-tar
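For anyone following along, my understanding of the ztab fields (based on the zram-config tool shipped with openHABian; semantics may differ slightly between versions) is:

```
# /etc/ztab -- "dir" entry format used by zram-config:
#   <type> <alg> <mem_limit> <disk_size> <target_dir> <bind_dir>
#
# type:       "dir" = a zram-backed directory with an overlay on top
# alg:        compression algorithm (zstd here)
# mem_limit:  cap on compressed RAM usage (10M)
# disk_size:  uncompressed size of the zram device (50M)
# target_dir: the directory to move into RAM (/var/lib/rflink)
# bind_dir:   bind-mount name under /opt/zram exposing the original flash copy
dir	zstd	10M	50M	/var/lib/rflink	/rflink.bind
```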

After starting up zramctl shows an extra disk now:
/dev/zram3 zstd 50M 4.2M 12.4K 204K 4 /opt/zram/zram3

I now see several locations of the file ‘abc’ that my program can write to:

  • /var/lib/rflink/abc : this doesn’t appear to work (/opt/zram/*/abc (zram and Amanda) never get updated)
  • /opt/zram/rflink.bind/abc : this doesn’t appear to work (/opt/zram/zram3/upper/abc doesn’t update)
  • /opt/zram/zram3/upper/abc : this doesn’t appear to work (/opt/zram/rflink.bind/abc (Amanda) doesn’t update)

Not being a Linux specialist, and since some g**gling didn’t help either, I have the following question:
What is the correct directory for my program to write to, or do I need something extra to make the zram overlay work? Is some setting needed for Amanda to sync the rflink.bind dir?

As far as I can tell, the zram log file looks OK (zram3 logging below).

ztab create dir zstd 10M 50M /var/lib/rflink /rflink.bind
Stopping services that interfere with zram device configuration
dirPerm /var/lib/rflink 755 0:0
mount: /var/lib/rflink bound on /opt/zram/rflink.bind.
mount: /opt/zram/rflink.bind propagation flags changed.
dirMountOpt: rw,noatime; dirFsType: ext4
zram3 created comp_algorithm=zstd mem_limit=10M disksize=50M
mke2fs 1.44.5 (15-Dec-2018)
fs_types for mke2fs.conf resolution: 'ext4', 'small'
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
12800 inodes, 12800 blocks
640 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=14680064
1 block group
32768 blocks per group, 32768 fragments per group
12800 inodes per group

Allocating group tables: done
Writing inode tables: done
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done

mount: /dev/zram3 mounted on /opt/zram/zram3.
mount: overlay3 mounted on /var/lib/rflink.
createZlog: no oldlog directory provided in ztab
Restarting services that interfere with zram device configuration

I can also recover the file ‘abc’ from Amanda, but it does not contain the changes that were written to the file that day.
Any help is appreciated.

This. It is the current version. This is also what you should enter in Amanda’s disklist.
Why do you overcomplicate things?
/opt/whatever are just pointers to the lower filesystem. Files in there are not current; they hold the contents from the time of ZRAM start.
ZRAM only copies everything from RAM back to the lower FS on shutdown/reboot.
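That read rule can be illustrated with a toy sketch, using plain directories rather than a real overlayfs mount (paths and file contents are made up for illustration): a read resolves to the upper (RAM) copy when it exists and falls back to the lower (flash) copy otherwise, which is why /opt/zram/rflink.bind keeps the contents from ZRAM start time while /opt/zram/zram3/upper holds the live data.

```shell
#!/bin/sh
# Toy sketch of the overlay read rule (plain directories, NOT a real
# overlayfs): reads prefer the upper layer and fall back to the lower one.
set -e
work=$(mktemp -d)
mkdir -p "$work/lower" "$work/upper"
echo "stale flash copy" > "$work/lower/abc"
echo "live RAM copy"    > "$work/upper/abc"

merged_read() {
    # Prefer the upper layer; fall back to the lower layer.
    if [ -f "$work/upper/$1" ]; then
        cat "$work/upper/$1"
    else
        cat "$work/lower/$1"
    fi
}

result=$(merged_read abc)
echo "$result"   # the upper copy wins: "live RAM copy"
rm -rf "$work"
```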

Possibly your problem is that the directory you specified for ZRAM (/var/lib/rflink) did not exist at ZRAM startup. Stop OH, stop ZRAM, and check whether it’s still there.

Ah, not really knowing how everything actually works led me to copy what I saw in the existing OH files, and doing so clearly overcomplicated things…
I changed my program’s configuration to write to /var/lib/rflink/abc again and changed Amanda’s disklist as you suggested. Tomorrow I’ll check Amanda’s nightly backup, which should now be correct.
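In case it helps anyone else, the disklist change boils down to pointing Amanda at the real path instead of the bind mount (entry format: host, directory, dumptype; the dumptype name is the one from my original post):

```
# /etc/amanda/openhab-dir/disklist
# before (backs up the stale flash copy):
#   openhabian /opt/zram/rflink.bind comp-user-tar
# after (backs up the live, overlay-merged directory):
openhabian /var/lib/rflink comp-user-tar
```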

The ‘abc’ file contains incremental codes used to operate the devices; if these are lost, the devices no longer respond.
So it is quite important to have them backed up correctly.

Thanks a lot for helping me out here.

ZRAM is working as expected now. Reading and writing /var/lib/rflink/abc gives the latest values, whereas /opt/zram/rflink.bind/abc shows the stale flash contents and /opt/zram/zram3/upper/abc shows the latest (zram) contents.
However, there is still a problem with Amanda. With /var/lib/rflink in the disklist, when I try to recover the ‘abc’ file, Amanda thinks the directory is empty. I checked Amanda’s log files and some of the admin commands, but everything looks alright. There is a data file for that disk in the storage slot, yet amrecover ‘ls’ (after setdisk /var/lib/rflink) shows an empty directory. I also checked the owner and permissions of the rflink directory (they are identical to the openhab directories).
It looks as if Amanda somehow thinks that /var/lib/rflink is not a real directory. As a test, I have now also added /opt/zram/zram3/upper to Amanda’s disklist, and that does result in Amanda backing up its files.

When first adding the directory, I followed the example of an existing OH directory that uses both ZRAM and backup, namely the persistence directory. In Amanda’s disklist this appears as /persistence.bind. So I used rflink.bind, only to find out it was not the updated version, and started this community thread.

Although I now have a usable backup via the /opt/…/upper directory, I still have some questions:

  • Why does the persistence backup not use the actual directory path instead of referring to the lower overlay directory? Is it because Amanda also has a problem with overlaid directory paths?
  • From what I can see, the persistence backup uses the (unsynchronized, lower-overlay) flash data at /opt/zram/persistence.bind and not the latest data in the upper overlay (/opt/zram/zram1/upper). Which is rather worrying…

Here is the report I got, which shows a 0K output for /var/lib/rflink and non-zero output for /opt/zram/zram3/upper.
It also shows the path for the persistence, which I checked: it points to the flash data, not the zram contents.

Hostname: openhabian
Org     : openHABian openhab-dir
Config  : openhab-dir
Date    : April 23, 2021

These dumps were to tape openHABian-openhab-dir-002.
The next 10 tapes Amanda expects to use are: openHABian-openhab-dir-003, openHABian-openhab-dir-004, openHABian-openhab-dir-005, openHABian-openhab-dir-006, openHABian-openhab-dir-007, openHABian-openhab-dir-008, openHABian-openhab-dir-009, openHABian-openhab-dir-010, openHABian-openhab-dir-011, openHABian-openhab-dir-012.


STATISTICS:
                          Total       Full      Incr.   Level:#
                        --------   --------   --------  --------
Estimate Time (hrs:min)     0:00
Run Time (hrs:min)          0:00
Dump Time (hrs:min)         0:00       0:00       0:00
Output Size (meg)            0.4        0.0        0.4
Original Size (meg)          3.7        0.0        3.7
Avg Compressed Size (%)     11.7        5.0       11.8
DLEs Dumped                    6          2          4  1:4
Avg Dump Rate (k/s)         56.0        0.4       78.4

Tape Time (hrs:min)         0:00       0:00       0:00
Tape Size (meg)              0.4        0.0        0.4
Tape Used (%)                0.0        0.0        0.0
DLEs Taped                     6          2          4  1:4
Parts Taped                    6          2          4  1:4
Avg Tp Write Rate (k/s)    741.7        5.0     1110.0


USAGE BY TAPE:
  Label                        Time         Size      %  DLEs Parts
  openHABian-openhab-dir-002   0:00         445K    0.0     6     6


NOTES:
  planner: tapecycle (14) <= runspercycle (14)
  planner: Last full dump of openhabian:/boot on tape openHABian-openhab-dir-001 overwritten in 1 run.
  planner: Last full dump of openhabian:/etc on tape openHABian-openhab-dir-001 overwritten in 1 run.
  planner: Last full dump of openhabian:/var/lib/openhab2 on tape openHABian-openhab-dir-001 overwritten in 1 run.
  planner: Last full dump of openhabian:/opt/zram/persistence.bind on tape openHABian-openhab-dir-001 overwritten in 1 run.
  planner: Last full dump of openhabian:/var/lib/rflink on tape openHABian-openhab-dir-001 overwritten in 1 run.
  planner: Adding new disk openhabian:/opt/zram/zram3/upper.
  planner: WARNING: no history available for openhabian:/opt/zram/zram3/upper; guessing that size will be 279552 KB
  planner: WARNING: no history available for openhabian:/var/lib/rflink; guessing that size will be 279552 KB
  taper: tape openHABian-openhab-dir-002 kb 449 fm 6 [OK]
  big estimate: openhabian /var/lib/rflink 0
                  est: 279584K    out 0K
  big estimate: openhabian /opt/zram/zram3/upper 0
                  est: 279584K    out 1K


DUMP SUMMARY:
                                                                  DUMPER STATS   TAPER STATS
HOSTNAME     DISK                       L ORIG-KB  OUT-KB  COMP%  MMM:SS   KB/s MMM:SS   KB/s
----------------------------------------- ---------------------- -------------- -------------
openhabian   /boot                      1      10       1   10.0    0:01    0.9   0:00   10.0
openhabian   /etc                       1     350      40   11.4    0:01   31.5   0:00  400.0
openhabian   /opt/zram/persistence.bind 1      80       3    3.8    0:01    2.6   0:00   30.0
openhabian   /opt/zram/zram3/upper      0      10       1   10.0    0:01    0.9   0:00   10.0
openhabian   /var/lib/openhab2          1    3330     400   12.0    0:02  190.4   0:00 4000.0
openhabian   /var/lib/rflink            0      10       1   10.0    0:01    0.9   0:00    0.0

(brought to you by Amanda version 3.5.1)

and the ‘ls’ output when I tried to recover:

[11:39:09] root@openhabian:/etc/amanda# amrecover openhab-dir
AMRECOVER Version 3.5.1. Contacting server on localhost ...
220 openhabian AMANDA index server (3.5.1) ready.
Setting restore date to today (2021-04-23)
200 Working date set to 2021-04-23.
200 Config set to openhab-dir.
200 Dump host set to openhabian.
Use the setdisk command to choose dump disk to recover
amrecover> setdisk /var/lib/rflink
200 Disk set to /var/lib/rflink.
amrecover> ls
2021-04-23-11-36-48 .
amrecover> setdisk /opt/zram/zram3/upper
200 Disk set to /opt/zram/zram3/upper.
amrecover> ls
2021-04-23-11-36-48 log4j2.xml
2021-04-23-11-36-48 RollingCode.txt
2021-04-23-11-36-48 .

I don’t know the details of your setup and modifications, but in fact it probably is not alright.
If you or Amanda call tar or gtar with standard parameters, it will not cross filesystems, and what you created is in fact a separate one. If so, you should see that in the Amanda reports or server logs.
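tar’s one-file-system behaviour keys off the device ID: entries whose device differs from that of the starting directory are skipped. A quick way to check for such a boundary yourself (illustrative sketch; on the openHABian box you would compare /var/lib and /var/lib/rflink, while /proc is used below only because it is reliably a separate filesystem on Linux):

```shell
#!/bin/sh
# Detect a filesystem boundary the way tar --one-file-system does:
# compare the device IDs (st_dev) of two paths via GNU stat.
same_filesystem() {
    [ "$(stat -c %d "$1")" = "$(stat -c %d "$2")" ]
}

if same_filesystem / /proc; then
    verdict="same filesystem"
else
    verdict="different filesystems (tar --one-file-system would stop here)"
fi
echo "$verdict"
```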

It also depends on when you set up Amanda; there have been changes over time, for example to the default disklist, but you only get them on a fresh install.

Hi Markus,
I am using the latest openHABian 1.6.4 from a fresh install and changed the backup drive and its parameters in openhabian.conf before the install. It is in fact a very standard openHABian / openHAB 2.5.12 system. I only added a binary in /usr/local/bin that is called from a rule to control the rflink devices, plus the extra data directory /var/lib/rflink, which I then added to the ZRAM and Amanda configuration files.

I have now added the line property "ONE-FILE-SYSTEM" "NO" to app_amgtar in amanda.conf, and Amanda now does back up the directory /var/lib/rflink, so I no longer need to use the /opt/…upper directory.
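For anyone hitting the same issue: amgtar defaults ONE-FILE-SYSTEM to YES (i.e. tar --one-file-system), which makes it stop at the overlay boundary. The resulting stanza looks roughly like this (a sketch; the exact surrounding definition in your amanda.conf may differ by openHABian/Amanda version):

```
# amanda.conf -- allow amgtar to cross filesystem boundaries
define application-tool app_amgtar {
    plugin "amgtar"
    property "ONE-FILE-SYSTEM" "NO"
}
```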

I do think that the /opt/zram/persistence.bind line in the standard OH Amanda disklist points to the flash and not the latest version. However, there is also an entry /var/lib/openhab2 that contains the persistence directory with the latest data. Hence, the /opt/zram/persistence.bind entry in Amanda’s disklist seems to be obsolete.

Thanks for your support for getting my backup fully operational now.

Argh. Why did you not mention you use OH2 ? :roll_eyes:
The disklist generation was changed quite some time ago but in OH3/main branch only.
openHABian is no longer maintained for OH2 (except critical patches but this one isn’t critical).

Yes, when reading your previous reply I realized I completely forgot to mention OH2, sorry for that.
Now that most of our daily used items here are “openhab’d”, I can start making time for moving to OH3.

OH does require some learning, but I haven’t regretted choosing it for one moment. With the wide range of add-ons, rules programming and the help of the community, I’m sure no other system could have brought me so much gain in home comfort in such a short time. Thanks!
