Mapdb: no restore at startup for item status

flashed the image you posted, tried with clonebranch=master

[16:35:58] root@openhab:/home/openhabian# zramctl
NAME       ALGORITHM DISKSIZE  DATA  COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo-rle       500M 16.4M   7.1K   88K       4 /opt/zram/zram2
/dev/zram1 lzo-rle       500M 18.6M 573.9K 1012K       4 /opt/zram/zram1
/dev/zram0 lzo-rle       600M    4K    87B   12K       4 [SWAP]
[16:36:01] root@openhab:/home/openhabian# systemctl status zram-config.service
● zram-config.service - zram-config
   Loaded: loaded (/etc/systemd/system/zram-config.service; enabled; vendor preset: enabled)
   Active: active (exited) since Sun 2020-08-30 15:41:38 CEST; 54min ago
  Process: 361 ExecStartPre=/usr/local/sbin/zramsync recover /storage/zram (code=exited, status=0/SUCCESS)
  Process: 469 ExecStart=/usr/local/sbin/zram-config start (code=exited, status=0/SUCCESS)
 Main PID: 469 (code=exited, status=0/SUCCESS)

Aug 30 15:41:38 openhab zram-config[469]: + mkdir -p /opt/zram/zram2
Aug 30 15:41:38 openhab zram-config[469]: + mount --verbose --types ext4 -o rw,noatime /dev/zram2 /opt/zram/zram2/
Aug 30 15:41:38 openhab zram-config[469]: + mkdir -p /opt/zram/zram2/upper /opt/zram/zram2/workdir /var/lib/openhab2/per
Aug 30 15:41:38 openhab zram-config[469]: + mount --verbose --types overlay -o redirect_dir=on,lowerdir=/opt/zram/persis
Aug 30 15:41:38 openhab zram-config[469]: + chown 110:115 /opt/zram/zram2/upper /opt/zram/zram2/workdir /var/lib/openhab
Aug 30 15:41:38 openhab zram-config[469]: + chmod 775 /opt/zram/zram2/upper /opt/zram/zram2/workdir /var/lib/openhab2/pe
Aug 30 15:41:38 openhab zram-config[469]: + echo 'dir                /zram2                /var/lib/openhab2/persistence
Aug 30 15:41:38 openhab zram-config[469]: + read -r line
Aug 30 15:41:38 openhab zram-config[469]: + [[ false == \t\r\u\e ]]
Aug 30 15:41:38 openhab systemd[1]: Started zram-config.

log firstboot: https://pastebin.com/bbh6tWZE

[16:36:10] root@openhab:/home/openhabian# date > /var/lib/openhab2/persistence/date
[16:40:12] root@openhab:/home/openhabian# cat /var/lib/openhab2/persistence/date
Sun 30 Aug 16:40:12 CEST 2020

reboot

[16:42:51] root@openhab:/home/openhabian# zramctl
[16:42:56] root@openhab:/home/openhabian# systemctl status zram-config.service
● zram-config.service - zram-config
   Loaded: loaded (/etc/systemd/system/zram-config.service; enabled; vendor preset: enabled)
   Active: inactive (dead)
[16:43:04] root@openhab:/home/openhabian# cat /var/lib/openhab2/persistence/date
Sun 30 Aug 16:40:12 CEST 2020

this is not correct, is it?
another try:

[16:43:21] root@openhab:/home/openhabian# date > /var/lib/openhab2/persistence/date
[16:44:02] root@openhab:/home/openhabian# cat /var/lib/openhab2/persistence/date
Sun 30 Aug 16:44:02 CEST 2020

reboot

[16:45:44] root@openhab:/home/openhabian# zramctl
[16:45:47] root@openhab:/home/openhabian# systemctl status zram-config.service
● zram-config.service - zram-config
   Loaded: loaded (/etc/systemd/system/zram-config.service; enabled; vendor preset: enabled)
   Active: inactive (dead)
[16:45:56] root@openhab:/home/openhabian# cat /var/lib/openhab2/persistence/date
Sun 30 Aug 16:44:02 CEST 2020

hmm, i see the correct file, but i don’t think it’s in zram?

should i test with clonebranch=testbuild?

ZRAM didn’t run, so no. After boot, run:

systemctl status zram-config.service zramsync.service
journalctl -xu zram-config.service
journalctl -xu zramsync.service
journalctl -xe --no-pager

If zram does not run, execute systemctl start zram-config zramsync before you (re)start openhab2.
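That advice can be wrapped in a tiny guard; a sketch (not from the thread — the decision is split into a pure function so it can be checked without systemd, and the unit names are the ones used in this thread):

```shell
# Sketch: decide whether zram must be started before (re)starting openHAB.
# On the Pi, feed decide_action the output of
#   systemctl is-active zram-config.service
decide_action() {
  if [ "$1" = "active" ]; then
    echo "restart-openhab"       # zram already up: just (re)start openHAB
  else
    echo "start-zram-first"      # start zram-config + zramsync before openHAB
  fi
}

# usage on the Pi (assumption: unit names as in this thread):
#   case $(decide_action "$(systemctl is-active zram-config.service)") in
#     start-zram-first) systemctl start zram-config zramsync ;;
#   esac
#   systemctl restart openhab2
```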

Yes

strange… after last reboot zram was running again:

[08:03:04] root@openhab:/home/openhabian# zramctl
NAME       ALGORITHM DISKSIZE  DATA  COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo-rle       500M 16.4M   7.1K   76K       4 /opt/zram/zram2
/dev/zram1 lzo-rle       500M 26.8M 573.4K  992K       4 /opt/zram/zram1
/dev/zram0 lzo-rle       600M    4K    87B   12K       4 [SWAP]

here’s the rest (probably not very useful as zram is running?):

[08:06:18] root@openhab:/home/openhabian# systemctl status zram-config.service zramsync.service
● zram-config.service - zram-config
   Loaded: loaded (/etc/systemd/system/zram-config.service; enabled; vendor preset: enabled)
   Active: active (exited) since Mon 2020-08-31 08:00:24 CEST; 6min ago
  Process: 337 ExecStartPre=/usr/local/sbin/zramsync recover /storage/zram (code=exited, status=0/SUCCESS)
  Process: 459 ExecStart=/usr/local/sbin/zram-config start (code=exited, status=0/SUCCESS)
 Main PID: 459 (code=exited, status=0/SUCCESS)

Aug 31 08:00:24 openhab zram-config[459]: + mkdir -p /opt/zram/zram2
Aug 31 08:00:24 openhab zram-config[459]: + mount --verbose --types ext4 -o rw,noatime /dev/zram2 /opt/zram/zram2/
Aug 31 08:00:24 openhab zram-config[459]: + mkdir -p /opt/zram/zram2/upper /opt/zram/zram2/workdir /var/lib/openhab2/per
Aug 31 08:00:24 openhab zram-config[459]: + mount --verbose --types overlay -o redirect_dir=on,lowerdir=/opt/zram/persis
Aug 31 08:00:24 openhab zram-config[459]: + chown 110:115 /opt/zram/zram2/upper /opt/zram/zram2/workdir /var/lib/openhab
Aug 31 08:00:24 openhab zram-config[459]: + chmod 775 /opt/zram/zram2/upper /opt/zram/zram2/workdir /var/lib/openhab2/pe
Aug 31 08:00:24 openhab zram-config[459]: + echo 'dir                /zram2                /var/lib/openhab2/persistence
Aug 31 08:00:24 openhab zram-config[459]: + read -r line
Aug 31 08:00:24 openhab zram-config[459]: + [[ false == \t\r\u\e ]]
Aug 31 08:00:24 openhab systemd[1]: Started zram-config.

● zramsync.service - zramsync
   Loaded: loaded (/etc/systemd/system/zramsync.service; enabled; vendor preset: enabled)
   Active: active (exited) since Mon 2020-08-31 08:00:24 CEST; 6min ago

Aug 31 08:00:24 openhab systemd[1]: Started zramsync.
[08:06:29] root@openhab:/home/openhabian# journalctl -xu zram-config.service
-- Logs begin at Sun 2020-08-30 16:32:54 CEST, end at Mon 2020-08-31 08:06:14 CEST. --
Aug 31 08:00:21 openhab systemd[1]: Starting zram-config...
-- Subject: A start job for unit zram-config.service has begun execution
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit zram-config.service has begun execution.
--
-- The job identifier is 63.
Aug 31 08:00:21 openhab zramsync[337]: + LOG=/storage/zram/zramsync.log
Aug 31 08:00:21 openhab zramsync[337]: + dir=/storage/zram
Aug 31 08:00:21 openhab zramsync[337]: + storeFile=/storage/zram/zram.tar
Aug 31 08:00:21 openhab zramsync[337]: + [[ 2 -ne 2 ]]
Aug 31 08:00:21 openhab zramsync[337]: + [[ -f /etc/ztab ]]
Aug 31 08:00:21 openhab zramsync[337]: + [[ -d /storage/zram ]]
Aug 31 08:00:21 openhab zramsync[337]: + [[ recover == \r\e\c\o\v\e\r ]]
Aug 31 08:00:21 openhab zramsync[337]: + mode=recover
Aug 31 08:00:21 openhab zramsync[337]: ++ date
Aug 31 08:00:21 openhab zramsync[337]: + tee -a /storage/zram/zramsync.log
Aug 31 08:00:21 openhab zramsync[337]: + echo 'zramsync recovery starting @ Mon 31 Aug 08:00:21 CEST 2020'
Aug 31 08:00:21 openhab zramsync[337]: zramsync recovery starting @ Mon 31 Aug 08:00:21 CEST 2020
Aug 31 08:00:21 openhab zramsync[337]: + inFile=/etc/ztab
Aug 31 08:00:21 openhab zramsync[337]: + read -r line
Aug 31 08:00:21 openhab zramsync[337]: + case "$line" in
Aug 31 08:00:21 openhab zramsync[337]: + continue
Aug 31 08:00:21 openhab zramsync[337]: + read -r line
Aug 31 08:00:21 openhab zramsync[337]: + case "$line" in
Aug 31 08:00:21 openhab zramsync[337]: + continue
Aug 31 08:00:21 openhab zramsync[337]: + read -r line
Aug 31 08:00:21 openhab zramsync[337]: + case "$line" in
Aug 31 08:00:21 openhab zramsync[337]: + continue
Aug 31 08:00:21 openhab zramsync[337]: + read -r line
Aug 31 08:00:21 openhab zramsync[337]: + case "$line" in
Aug 31 08:00:21 openhab zramsync[337]: + continue
Aug 31 08:00:21 openhab zramsync[337]: + read -r line
Aug 31 08:00:21 openhab zramsync[337]: + case "$line" in
Aug 31 08:00:21 openhab zramsync[337]: + continue
Aug 31 08:00:21 openhab zramsync[337]: + read -r line
Aug 31 08:00:21 openhab zramsync[337]: + case "$line" in
[08:06:34] root@openhab:/home/openhabian# journalctl -xu zramsync.service
-- Logs begin at Sun 2020-08-30 16:32:54 CEST, end at Mon 2020-08-31 08:06:14 CEST. --
Aug 31 08:00:24 openhab systemd[1]: Started zramsync.
-- Subject: A start job for unit zramsync.service has finished successfully
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit zramsync.service has finished successfully.
--
-- The job identifier is 71.

journalctl -xe --no-pager: https://pastebin.com/iNgQ1cmz

i’ll try testbuild tonight.

i’ve done several reboots since yesterday and most of the time zram was running afterwards. but not always:

here zram was running:

[11:38:21] root@openhab:/home/openhabian# zramctl
NAME       ALGORITHM DISKSIZE  DATA  COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo-rle       500M 16.4M   7.1K   84K       4 /opt/zram/zram2
/dev/zram1 lzo-rle       500M   26M 419.5K  880K       4 /opt/zram/zram1
/dev/zram0 lzo-rle       600M    4K    87B   12K       4 [SWAP]
[11:38:24] root@openhab:/home/openhabian# cat /var/lib/openhab2/persistence/date
Tue  1 Sep 11:10:32 CEST 2020
[11:38:27] root@openhab:/home/openhabian# date > /var/lib/openhab2/persistence/date
[11:38:40] root@openhab:/home/openhabian# cat /var/lib/openhab2/persistence/date
Tue  1 Sep 11:38:40 CEST 2020
[11:38:42] root@openhab:/home/openhabian# reboot

after reboot not so much:

[11:41:57] root@openhab:/home/openhabian# zramctl
[11:42:00] root@openhab:/home/openhabian# cat /var/lib/openhab2/persistence/date
Tue  1 Sep 11:10:32 CEST 2020
[11:42:49] root@openhab:/home/openhabian# systemctl status zram-config.service zramsync.service
● zram-config.service - zram-config
   Loaded: loaded (/etc/systemd/system/zram-config.service; enabled; vendor preset: enabled)
   Active: inactive (dead)

● zramsync.service - zramsync
   Loaded: loaded (/etc/systemd/system/zramsync.service; enabled; vendor preset: enabled)
   Active: inactive (dead)
[11:43:15] root@openhab:/home/openhabian# journalctl -xu zram-config.service
-- Logs begin at Tue 2020-09-01 11:06:20 CEST, end at Tue 2020-09-01 11:41:57 CEST. --
Sep 01 11:06:45 openhab systemd[1]: Starting zram-config...
-- Subject: A start job for unit zram-config.service has begun execution
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit zram-config.service has begun execution.
--
-- The job identifier is 64.
Sep 01 11:06:45 openhab zramsync[239]: + LOG=/storage/zram/zramsync.log
Sep 01 11:06:45 openhab zramsync[239]: + dir=/storage/zram
Sep 01 11:06:45 openhab zramsync[239]: + storeFile=/storage/zram/zram.tar
Sep 01 11:06:45 openhab zramsync[239]: + [[ 2 -ne 2 ]]
Sep 01 11:06:45 openhab zramsync[239]: + [[ -f /etc/ztab ]]
Sep 01 11:06:45 openhab zramsync[239]: + [[ -d /storage/zram ]]
Sep 01 11:06:45 openhab zramsync[239]: + [[ recover == \r\e\c\o\v\e\r ]]
Sep 01 11:06:45 openhab zramsync[239]: + mode=recover
Sep 01 11:06:45 openhab zramsync[239]: + tee -a /storage/zram/zramsync.log
Sep 01 11:06:45 openhab zramsync[239]: ++ date
Sep 01 11:06:45 openhab zramsync[239]: + echo 'zramsync recovery starting @ Tue  1 Sep 11:06:45 CEST 2020'
Sep 01 11:06:45 openhab zramsync[239]: zramsync recovery starting @ Tue  1 Sep 11:06:45 CEST 2020
Sep 01 11:06:45 openhab zramsync[239]: + inFile=/etc/ztab
Sep 01 11:06:45 openhab zramsync[239]: + read -r line
Sep 01 11:06:45 openhab zramsync[239]: + case "$line" in
Sep 01 11:06:45 openhab zramsync[239]: + continue
Sep 01 11:06:45 openhab zramsync[239]: + read -r line
Sep 01 11:06:45 openhab zramsync[239]: + case "$line" in
Sep 01 11:06:45 openhab zramsync[239]: + continue
Sep 01 11:06:45 openhab zramsync[239]: + read -r line
Sep 01 11:06:45 openhab zramsync[239]: + case "$line" in
Sep 01 11:06:45 openhab zramsync[239]: + continue
Sep 01 11:06:45 openhab zramsync[239]: + read -r line
Sep 01 11:06:45 openhab zramsync[239]: + case "$line" in
Sep 01 11:06:45 openhab zramsync[239]: + continue
Sep 01 11:06:45 openhab zramsync[239]: + read -r line
Sep 01 11:06:45 openhab zramsync[239]: + case "$line" in
Sep 01 11:06:45 openhab zramsync[239]: + continue
Sep 01 11:06:45 openhab zramsync[239]: + read -r line
Sep 01 11:06:45 openhab zramsync[239]: + case "$line" in
[11:43:44] root@openhab:/home/openhabian# journalctl -xu zramsync.service
-- Logs begin at Tue 2020-09-01 11:06:20 CEST, end at Tue 2020-09-01 11:41:57 CEST. --
Sep 01 11:06:49 openhab systemd[1]: Started zramsync.
-- Subject: A start job for unit zramsync.service has finished successfully
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit zramsync.service has finished successfully.
--
-- The job identifier is 65.
Sep 01 11:10:41 openhab systemd[1]: Stopping zramsync...
-- Subject: A stop job for unit zramsync.service has begun execution
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A stop job for unit zramsync.service has begun execution.
--
-- The job identifier is 859.
Sep 01 11:10:41 openhab zramsync[1399]: + LOG=/storage/zram/zramsync.log
Sep 01 11:10:41 openhab zramsync[1399]: + dir=/storage/zram
Sep 01 11:10:41 openhab zramsync[1399]: + storeFile=/storage/zram/zram.tar
Sep 01 11:10:41 openhab zramsync[1399]: + [[ 2 -ne 2 ]]
Sep 01 11:10:41 openhab zramsync[1399]: + [[ -f /etc/ztab ]]
Sep 01 11:10:41 openhab zramsync[1399]: + [[ -d /storage/zram ]]
Sep 01 11:10:41 openhab zramsync[1399]: + [[ sync == \r\e\c\o\v\e\r ]]
Sep 01 11:10:41 openhab zramsync[1399]: + mode=sync
Sep 01 11:10:41 openhab zramsync[1399]: + tee -a /storage/zram/zramsync.log
Sep 01 11:10:41 openhab zramsync[1399]: ++ date
Sep 01 11:10:41 openhab zramsync[1399]: + echo 'zramsync creating backup @ Tue  1 Sep 11:10:41 CEST 2020'
Sep 01 11:10:41 openhab zramsync[1399]: zramsync creating backup @ Tue  1 Sep 11:10:41 CEST 2020
Sep 01 11:10:41 openhab zramsync[1399]: + inFile=/tmp/zram-device-list
Sep 01 11:10:41 openhab zramsync[1399]: + rm -f /storage/zram/zram.tar
Sep 01 11:10:41 openhab zramsync[1399]: + read -r line
Sep 01 11:10:41 openhab zramsync[1399]: + case "$line" in
Sep 01 11:10:41 openhab zramsync[1399]: + set -- swap /zram0 zram-config0
Sep 01 11:10:41 openhab zramsync[1399]: + [[ swap == \d\i\r ]]
Sep 01 11:10:41 openhab zramsync[1399]: + [[ swap == \l\o\g ]]
Sep 01 11:10:41 openhab zramsync[1399]: + read -r line
Sep 01 11:10:41 openhab zramsync[1399]: + case "$line" in
Sep 01 11:10:41 openhab zramsync[1399]: + set -- log /zram1 /var/log /log.bind

journalctl -xe --no-pager: https://pastebin.com/cV8xpYG5

all the things above happened with clonebranch=master
hopefully i’ll find the time for testbuild tonight!

ZRAM not starting happens when there’s a circular dependency of services that systemd cannot resolve, such as can be seen on L1144 of your log.
The exact start order is coincidental, and it is also coincidence where systemd decides to break the cycle.
So sometimes you run into cycles and sometimes you do not.

@narf27 as a test, before you reinstall, can you try this with your current test system:
add DefaultDependencies=no to the [Unit] section of all /etc/systemd/system/srv-openhab*.mount files.
Note the star in the filename, so edit all five files; after that, run systemctl daemon-reload and reboot to see if zram starts reliably.
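For this first step, each of the five mount units would end up looking roughly like this (a sketch; everything already in the unit stays, only the one line is added):

```
[Unit]
# ...existing lines (Description=, After=, ...) stay untouched in this step...
DefaultDependencies=no

[Mount]
# ...existing What=/Where=/Type= entries unchanged...
```

DefaultDependencies=no stops systemd from adding the implicit ordering dependencies (e.g. on local-fs.target) that can close the cycle described above.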

If that does not work, remove the files completely, then reload, reboot, and check again.

If that does not work, remove those After= and WantedBy= lines, too (and no DefaultDependencies line any more), then reload, reboot, and check again.

If ZRAM startup succeeds, check what Samba exports. Create a file in /var/log/openhab2/ and see if it appears on the Windows share.

Sep 01 11:38:57 openhab systemd[1]: local-fs.target: Found ordering cycle on srv-openhab2\x2duserdata.mount/start
Sep 01 11:38:57 openhab systemd[1]: local-fs.target: Found dependency on zram-config.service/start
Sep 01 11:38:57 openhab systemd[1]: local-fs.target: Found dependency on sysinit.target/start
Sep 01 11:38:57 openhab systemd[1]: local-fs.target: Found dependency on systemd-tmpfiles-setup.service/start
Sep 01 11:38:57 openhab systemd[1]: local-fs.target: Found dependency on local-fs.target/start
Sep 01 11:38:57 openhab systemd[1]: local-fs.target: Job srv-openhab2\x2duserdata.mount/start deleted to break ordering cycle
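To scan a boot for this quickly, the cycle members can be pulled out of the journal; a small sketch (the function only extracts the unit names from lines like the ones quoted above):

```shell
# Sketch: list the units systemd reported in an ordering cycle.
# On the Pi you would feed it the journal, e.g.:
#   journalctl -b --no-pager | cycle_units
cycle_units() {
  grep -oE 'Found (ordering cycle on|dependency on) [^ ]+' | awk '{ print $NF }'
}
```

An empty result means systemd logged no cycle for that boot.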

Try again with master (but flash again); I’ve just copied the changes from testbuild into master.

ok.

[20:25:01] root@openhab:/home/openhabian# zramctl
NAME       ALGORITHM DISKSIZE  DATA  COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo-rle       500M 16.4M   7.1K   84K       4 /opt/zram/zram2
/dev/zram1 lzo-rle       500M 26.1M 429.4K  916K       4 /opt/zram/zram1
/dev/zram0 lzo-rle       600M    4K    87B   12K       4 [SWAP]
[20:25:04] root@openhab:/home/openhabian# systemctl status zram-config.service zramsync.service
● zram-config.service - zram-config
   Loaded: loaded (/etc/systemd/system/zram-config.service; enabled; vendor preset: enabled)
   Active: active (exited) since Tue 2020-09-01 20:23:53 CEST; 1min 20s ago
  Process: 241 ExecStartPre=/usr/local/sbin/zramsync recover /storage/zram (code=exited, status=0/SUCCESS)
  Process: 388 ExecStart=/usr/local/sbin/zram-config start (code=exited, status=0/SUCCESS)
 Main PID: 388 (code=exited, status=0/SUCCESS)

Sep 01 20:23:53 openhab zram-config[388]: + mkdir -p /opt/zram/zram2
Sep 01 20:23:53 openhab zram-config[388]: + mount --verbose --types ext4 -o rw,noatime /dev/zram2 /opt/zram/zram2/
Sep 01 20:23:53 openhab zram-config[388]: + mkdir -p /opt/zram/zram2/upper /opt/zram/zram2/workdir /var/lib/openhab2/per
Sep 01 20:23:53 openhab zram-config[388]: + mount --verbose --types overlay -o redirect_dir=on,lowerdir=/opt/zram/persis
Sep 01 20:23:53 openhab zram-config[388]: + chown 110:115 /opt/zram/zram2/upper /opt/zram/zram2/workdir /var/lib/openhab
Sep 01 20:23:53 openhab zram-config[388]: + chmod 775 /opt/zram/zram2/upper /opt/zram/zram2/workdir /var/lib/openhab2/pe
Sep 01 20:23:53 openhab zram-config[388]: + echo 'dir                /zram2                /var/lib/openhab2/persistence
Sep 01 20:23:53 openhab zram-config[388]: + read -r line
Sep 01 20:23:53 openhab zram-config[388]: + [[ false == \t\r\u\e ]]
Sep 01 20:23:53 openhab systemd[1]: Started zram-config.

● zramsync.service - zramsync
   Loaded: loaded (/etc/systemd/system/zramsync.service; enabled; vendor preset: enabled)
   Active: active (exited) since Tue 2020-09-01 20:23:53 CEST; 1min 20s ago

Sep 01 20:23:53 openhab systemd[1]: Started zramsync.
[20:25:16] root@openhab:/home/openhabian# date > /var/lib/openhab2/persistence/date
[20:25:30] root@openhab:/home/openhabian# cat /var/lib/openhab2/persistence/date
Tue  1 Sep 20:25:30 CEST 2020

reboot1 - ok:

[20:26:48] root@openhab:/home/openhabian# zramctl
NAME       ALGORITHM DISKSIZE  DATA  COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo-rle       500M 16.4M   7.1K   84K       4 /opt/zram/zram2
/dev/zram1 lzo-rle       500M 26.2M 453.6K  912K       4 /opt/zram/zram1
/dev/zram0 lzo-rle       600M    4K    87B   12K       4 [SWAP]
[20:26:51] root@openhab:/home/openhabian# cat /var/lib/openhab2/persistence/date
Tue  1 Sep 20:25:30 CEST 2020
[20:26:56] root@openhab:/home/openhabian# date > /var/lib/openhab2/persistence/date
[20:27:06] root@openhab:/home/openhabian# cat /var/lib/openhab2/persistence/date
Tue  1 Sep 20:27:06 CEST 2020

reboot2 - ok:

[20:28:12] root@openhab:/home/openhabian# zramctl
NAME       ALGORITHM DISKSIZE  DATA  COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo-rle       500M 16.4M   7.1K   84K       4 /opt/zram/zram2
/dev/zram1 lzo-rle       500M 26.2M 472.5K  944K       4 /opt/zram/zram1
/dev/zram0 lzo-rle       600M    4K    87B   12K       4 [SWAP]
[20:28:22] root@openhab:/home/openhabian# cat /var/lib/openhab2/persistence/date
Tue  1 Sep 20:27:06 CEST 2020
[20:28:26] root@openhab:/home/openhabian# date > /var/lib/openhab2/persistence/date
[20:28:30] root@openhab:/home/openhabian# cat /var/lib/openhab2/persistence/date
Tue  1 Sep 20:28:30 CEST 2020

reboot3 - ok:

[20:29:37] root@openhab:/home/openhabian# cat /var/lib/openhab2/persistence/date
Tue  1 Sep 20:28:30 CEST 2020
[20:29:39] root@openhab:/home/openhabian# zramctl
NAME       ALGORITHM DISKSIZE  DATA  COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo-rle       500M 16.4M   7.1K   84K       4 /opt/zram/zram2
/dev/zram1 lzo-rle       500M 26.3M 487.5K  936K       4 /opt/zram/zram1
/dev/zram0 lzo-rle       600M    4K    87B   12K       4 [SWAP]
[20:29:49] root@openhab:/home/openhabian# date > /var/lib/openhab2/persistence/date
[20:29:52] root@openhab:/home/openhabian# cat /var/lib/openhab2/persistence/date
Tue  1 Sep 20:29:52 CEST 2020

reboot4 - ok:

[20:33:19] root@openhab:/home/openhabian# zramctl
NAME       ALGORITHM DISKSIZE  DATA  COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo-rle       500M 16.4M   7.1K   84K       4 /opt/zram/zram2
/dev/zram1 lzo-rle       500M 26.3M 524.5K  980K       4 /opt/zram/zram1
/dev/zram0 lzo-rle       600M    4K    87B   12K       4 [SWAP]
[20:33:24] root@openhab:/home/openhabian# cat /var/lib/openhab2/persistence/date
Tue  1 Sep 20:29:52 CEST 2020
[20:33:27] root@openhab:/home/openhabian# date > /var/lib/openhab2/persistence/date
[20:33:37] root@openhab:/home/openhabian# cat /var/lib/openhab2/persistence/date
Tue  1 Sep 20:33:37 CEST 2020
[20:33:38] root@openhab:/home/openhabian# reboot

reboot5 - ok:

[20:35:35] root@openhab:/home/openhabian# cat /var/lib/openhab2/persistence/date
Tue  1 Sep 20:33:37 CEST 2020
[20:35:39] root@openhab:/home/openhabian# zramctl
NAME       ALGORITHM DISKSIZE  DATA  COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo-rle       500M 16.4M   7.1K   84K       4 /opt/zram/zram2
/dev/zram1 lzo-rle       500M 26.4M 532.3K  984K       4 /opt/zram/zram1
/dev/zram0 lzo-rle       600M    4K    87B   12K       4 [SWAP]
[20:35:43] root@openhab:/home/openhabian# date > /var/lib/openhab2/persistence/date
[20:35:46] root@openhab:/home/openhabian# cat /var/lib/openhab2/persistence/date
Tue  1 Sep 20:35:46 CEST 2020

reboot6 - ok:

[20:41:55] root@openhab:/home/openhabian# cat /var/lib/openhab2/persistence/date
Tue  1 Sep 20:35:46 CEST 2020
[20:41:57] root@openhab:/home/openhabian# zramctl
NAME       ALGORITHM DISKSIZE  DATA  COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo-rle       500M 16.4M   7.1K   84K       4 /opt/zram/zram2
/dev/zram1 lzo-rle       500M 26.8M 581.9K 1016K       4 /opt/zram/zram1
/dev/zram0 lzo-rle       600M    4K    87B   12K       4 [SWAP]
[20:42:02] root@openhab:/home/openhabian# date > /var/lib/openhab2/persistence/date
[20:42:16] root@openhab:/home/openhabian# cat /var/lib/openhab2/persistence/date
Tue  1 Sep 20:42:16 CEST 2020

reboot7 - ok:

[20:43:57] root@openhab:/home/openhabian# zramctl
NAME       ALGORITHM DISKSIZE  DATA  COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo-rle       500M 16.4M   6.6K   92K       4 /opt/zram/zram2
/dev/zram1 lzo-rle       500M 26.4M 549.2K 1012K       4 /opt/zram/zram1
/dev/zram0 lzo-rle       600M    4K    87B   12K       4 [SWAP]
[20:43:59] root@openhab:/home/openhabian# cat /var/lib/openhab2/persistence/date
Tue  1 Sep 20:42:16 CEST 2020
[20:44:01] root@openhab:/home/openhabian# date > /var/lib/openhab2/persistence/date
[20:44:11] root@openhab:/home/openhabian# cat /var/lib/openhab2/persistence/date
Tue  1 Sep 20:44:11 CEST 2020

reboot8 - ok:

[20:47:35] root@openhab:/home/openhabian# zramctl
NAME       ALGORITHM DISKSIZE  DATA  COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo-rle       500M 16.4M   7.1K   84K       4 /opt/zram/zram2
/dev/zram1 lzo-rle       500M 26.6M 591.9K    1M       4 /opt/zram/zram1
/dev/zram0 lzo-rle       600M    4K    87B   12K       4 [SWAP]
[20:47:37] root@openhab:/home/openhabian# cat /var/lib/openhab2/persistence/date
Tue  1 Sep 20:44:11 CEST 2020
[20:47:40] root@openhab:/home/openhabian# date > /var/lib/openhab2/persistence/date
[20:47:53] root@openhab:/home/openhabian# cat /var/lib/openhab2/persistence/date
Tue  1 Sep 20:47:53 CEST 2020
[20:47:54] root@openhab:/home/openhabian# reboot

reboot9 - ok:

[20:49:33] root@openhab:/home/openhabian# cat /var/lib/openhab2/persistence/date
Tue  1 Sep 20:47:53 CEST 2020
[20:49:46] root@openhab:/home/openhabian# zramctl
NAME       ALGORITHM DISKSIZE  DATA  COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo-rle       500M 16.4M   7.1K   84K       4 /opt/zram/zram2
/dev/zram1 lzo-rle       500M   27M 624.7K  1.1M       4 /opt/zram/zram1
/dev/zram0 lzo-rle       600M    4K    87B   12K       4 [SWAP]
[20:49:55] root@openhab:/home/openhabian# date > /var/lib/openhab2/persistence/date
[20:49:58] root@openhab:/home/openhabian# cat /var/lib/openhab2/persistence/date
Tue  1 Sep 20:49:58 CEST 2020

reboot10 - ok:

[20:53:18] root@openhab:/home/openhabian# zramctl
NAME       ALGORITHM DISKSIZE  DATA  COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo-rle       500M 16.4M   7.1K   84K       4 /opt/zram/zram2
/dev/zram1 lzo-rle       500M   27M 658.2K  1.1M       4 /opt/zram/zram1
/dev/zram0 lzo-rle       600M    4K    87B   12K       4 [SWAP]
[20:53:30] root@openhab:/home/openhabian# cat /var/lib/openhab2/persistence/date
Tue  1 Sep 20:49:58 CEST 2020

10/10 would reboot again :slight_smile:

should i do “all the other stuff” you mentioned?

[20:53:34] root@openhab:/home/openhabian# touch /var/log/openhab2/somefile

yes, i can see the file.


No. Thanks for testing.
Ok, let’s hope that this is a persistent solution. I’ve added this line to the master branch.

Check journalctl -xe for lines indicating a service dependency cycle.
Does your mapdb restore work now?

i’ll continue testing as soon as i have a working display for my workstation :face_with_symbols_over_mouth:

I’ve been following along with very little understanding!
But I’m curious if this was a more general issue or something very specific to the environment? I suppose my real question is, are openhabian users getting rubbish restore-on-startup, but don’t realise it?

It’s a general issue, hence my persistence in tracking this down.
But there’s a big coincidence factor involved in whether it applies to a specific installation.
So yes, some users might have been affected without noticing.
Unless they upgrade to the latest master version, where it’s fixed.


alright, finally here are my tests with openHAB, mapdb, and a test item:

2020-09-05 14:24:15.693 [vent.ItemStateChangedEvent] - Test changed from NULL to 2020-09-05T14:24:15.668+0200

reboot

2020-09-05 14:26:16.613 [vent.ItemStateChangedEvent] - Test changed from NULL to 2020-09-05T14:24:15.000+0200
2020-09-05 15:12:36.723 [vent.ItemStateChangedEvent] - Test changed from 2020-09-05T14:24:15.000+0200 to 2020-09-05T15:12:36.712+0200

reboot

2020-09-05 15:14:28.781 [vent.ItemStateChangedEvent] - Test changed from NULL to 2020-09-05T15:12:36.000+0200
2020-09-05 16:57:47.798 [vent.ItemStateChangedEvent] - Test changed from 2020-09-05T15:12:36.000+0200 to 2020-09-05T16:57:47.784+0200

reboot

2020-09-05 16:59:34.772 [vent.ItemStateChangedEvent] - Test changed from NULL to 2020-09-05T16:57:47.000+0200
2020-09-05 17:02:32.716 [vent.ItemStateChangedEvent] - Test changed from 2020-09-05T16:57:47.000+0200 to 2020-09-05T17:02:32.704+0200

reboot

2020-09-05 17:04:19.350 [vent.ItemStateChangedEvent] - Test changed from NULL to 2020-09-05T17:02:32.000+0200
2020-09-05 17:07:50.566 [vent.ItemStateChangedEvent] - Test changed from 2020-09-05T17:02:32.000+0200 to 2020-09-05T17:07:50.542+0200

reboot

2020-09-05 17:09:45.615 [vent.ItemStateChangedEvent] - Test changed from NULL to 2020-09-05T17:07:50.000+0200
2020-09-05 17:12:42.885 [vent.ItemStateChangedEvent] - Test changed from 2020-09-05T17:07:50.000+0200 to 2020-09-05T17:12:42.873+0200

reboot

2020-09-05 17:14:37.189 [vent.ItemStateChangedEvent] - Test changed from NULL to 2020-09-05T17:12:42.000+0200
2020-09-05 17:25:32.686 [vent.ItemStateChangedEvent] - Test changed from 2020-09-05T17:12:42.000+0200 to 2020-09-05T17:25:32.675+0200

reboot

2020-09-05 17:27:04.305 [vent.ItemStateChangedEvent] - Test changed from NULL to 2020-09-05T17:25:32.000+0200
2020-09-05 17:31:09.773 [vent.ItemStateChangedEvent] - Test changed from 2020-09-05T17:25:32.000+0200 to 2020-09-05T17:31:09.759+0200

reboot

2020-09-05 17:32:49.567 [vent.ItemStateChangedEvent] - Test changed from NULL to 2020-09-05T17:31:09.000+0200
2020-09-05 17:40:37.769 [vent.ItemStateChangedEvent] - Test changed from 2020-09-05T17:31:09.000+0200 to 2020-09-05T17:40:37.754+0200

reboot

2020-09-05 17:42:09.763 [vent.ItemStateChangedEvent] - Test changed from NULL to 2020-09-05T17:40:37.000+0200
2020-09-05 17:43:53.297 [vent.ItemStateChangedEvent] - Test changed from 2020-09-05T17:40:37.000+0200 to 2020-09-05T17:43:53.284+0200

reboot

2020-09-05 17:46:22.496 [vent.ItemStateChangedEvent] - Test changed from NULL to 2020-09-05T17:43:53.000+0200

10/10 ok.
is there anything else i should test?


I bet you’re pleased, well done sticking with it, both :slight_smile:


Try uninstalling and reinstalling ZRAM via the menu.
Compare files such as /usr/local/sbin/zram-config and /etc/systemd/system/zram* with the ones in the openhabian-zram repo (I just want to be sure they get updated) and try again (fewer attempts will do fine as well).
No fundamental change there; I’d expect it to still work, but I’d like someone to validate.

found these three files to compare, all identical to the files in your openhabian-zram repo:

[19:25:11] openhabian@openhab:~$ diff -s zram-config /usr/local/sbin/zram-config
Files zram-config and /usr/local/sbin/zram-config are identical

[19:29:14] openhabian@openhab:~$ diff -s zramsync.service /etc/systemd/system/zramsync.service
Files zramsync.service and /etc/systemd/system/zramsync.service are identical

[19:29:43] openhabian@openhab:~$ diff -s zram-config.service /etc/systemd/system/zram-config.service
Files zram-config.service and /etc/systemd/system/zram-config.service are identical

testing follows. should i test with my file in persistence or with openhab persistence, or is it all the same?

it’s the same

before i do anything else, is this ok?

[09:45:07] openhabian@openhab:~$ zramctl
NAME       ALGORITHM DISKSIZE  DATA  COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram4 lzo-rle       500M 16.4M   7.1K   84K       4 /opt/zram/zram4
/dev/zram3 lzo-rle       500M 19.8M   1.3M  1.8M       4 /opt/zram/zram3
/dev/zram0 lzo-rle       600M    4K    87B   12K       4 [SWAP]
/dev/zram2 lzo-rle       500M 16.5M  17.3K  180K       4
/dev/zram1 lzo-rle       500M   28M 986.9K  1.4M       4

i made no reboot after un-/reinstalling zram via openhabian-config.

it’s normal

2020-09-06 12:13:49.114 [vent.ItemStateChangedEvent] - Test changed from 2020-09-05T17:43:53.000+0200 to 2020-09-06T12:13:49.103+0200

reboot ok

2020-09-06 12:15:53.472 [vent.ItemStateChangedEvent] - Test changed from NULL to 2020-09-06T12:13:49.000+0200
2020-09-06 12:21:00.362 [vent.ItemStateChangedEvent] - Test changed from 2020-09-06T12:13:49.000+0200 to 2020-09-06T12:21:00.350+0200

reboot ok

2020-09-06 12:22:31.017 [vent.ItemStateChangedEvent] - Test changed from NULL to 2020-09-06T12:21:00.000+0200
2020-09-06 12:26:22.018 [vent.ItemStateChangedEvent] - Test changed from 2020-09-06T12:21:00.000+0200 to 2020-09-06T12:26:22.004+0200

reboot ok

2020-09-06 12:28:11.544 [vent.ItemStateChangedEvent] - Test changed from NULL to 2020-09-06T12:26:22.000+0200
2020-09-06 12:47:41.902 [vent.ItemStateChangedEvent] - Test changed from 2020-09-06T12:26:22.000+0200 to 2020-09-06T12:47:41.888+0200

reboot not ok!

2020-09-06 12:49:26.167 [vent.ItemStateChangedEvent] - Test changed from NULL to 2020-09-06T12:26:22.000+0200
[13:35:12] root@openhab:/home/openhabian# zramctl
[13:35:16] root@openhab:/home/openhabian# systemctl status zram-config.service zramsync.service
● zram-config.service - zram-config
   Loaded: loaded (/etc/systemd/system/zram-config.service; enabled; vendor preset: enabled)
   Active: inactive (dead)

● zramsync.service - zramsync
   Loaded: loaded (/etc/systemd/system/zramsync.service; enabled; vendor preset: enabled)
   Active: inactive (dead)

What do journalctl -xu and systemctl status say for zram-config.service, zramsync.service and unattended-upgrades.service (3×2 commands, please)?

There wasn’t a lot of time between

2020-09-06 12:47:41.902 ... Test changed ... to 2020-09-06T12:47:41.888+0200

which would have to persist, then reboot, then restore.

2020-09-06 12:49:26.167 ... [vent.ItemStateChangedEvent] - Test changed from NULL to 2020-09-06T12:26:22.000+0200

I’m just suggesting not to leap to the conclusion that the previous change was correctly persisted before the reboot. I guess the strategy is everyChange? Which should work quite quickly, of course.
I don’t know whether the zram copy of the db gets written to hard storage periodically or only on system shutdown; that might come into play.
It’s difficult to “prove” what goes on because you can’t get a history from mapdb.
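For reference, the combination being discussed would be configured in persistence/mapdb.persist roughly like this (a sketch following the standard openHAB persistence syntax; the item name Test is taken from the log above):

```
Strategies {
    default = everyChange
}
Items {
    Test : strategy = everyChange, restoreOnStartup
}
```

With restoreOnStartup, the NULL-to-timestamp change right after each boot in the logs above is exactly the restore firing.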