ZRAM problem with OH3? Filesystem suddenly read-only?

I upgraded from OH 2.5 to OH 3 a few months ago, and updated again a few weeks ago.

In my openHABian/Raspberry Pi environment, openhab.log suddenly stops logging.
The timestamp of the file stays up to date, but the content stops around midnight.
I enabled zram back in OH 2.5; now my filesystem (at least the log directory) is suddenly read-only:

Jun  5 00:23:40 openhab influxd[643]: [httpd] 192.168.5.101 - openhab [05/Jun/2021:00:23:40 +0200] "POST /write?db=openhab_db&rp=autogen&precision=n&consistency=one HTTP/1.1 " 204 0 "-" "okhttp/3.14.4" 83a8c067-c583-11eb-ae5a-dca6326ac7e1 6809
Jun  5 00:23:50 openhab influxd[643]: [httpd] 192.168.5.101 - openhab [05/Jun/2021:00:23:50 +0200] "POST /write?db=openhab_db&rp=autogen&precision=n&consistency=one HTTP/1.1 " 204 0 "-" "okhttp/3.14.4" 89930dc9-c583-11eb-ae5b-dca6326ac7e1 5999
Jun  5 00:24:00 openhab influxd[643]: [httpd] 192.168.5.101 - openhab [05/Jun/2021:00:24:00 +0200] "POST /write?db=openhab_db&rp=autogen&precision=n&consistency=one HTTP/1.1 " 204 0 "-" "okhttp/3.14.4" 8f8c8216-c583-11eb-ae5c-dca6326ac7e1 6584
Jun  5 00:24:03 openhab kernel: [565322.814735] EXT4-fs warning (device zram1): ext4_end_bio:349: I/O error 10 writing to inode 68 starting block 70408)
Jun  5 00:24:03 openhab kernel: [565322.814756] Buffer I/O error on device zram1, logical block 70408
Jun  5 00:24:03 openhab kernel: [565322.814985] Buffer I/O error on dev zram1, logical block 35991, lost async page write
Jun  5 00:24:03 openhab kernel: [565322.821481] EXT4-fs error (device zram1): ext4_check_bdev_write_error:216: comm systemd-journal: Error while async write back metadata
Jun  5 00:24:03 openhab kernel: [565322.821659] Buffer I/O error on dev zram1, logical block 0, lost sync page write
Jun  5 00:24:03 openhab kernel: [565322.821680] EXT4-fs (zram1): I/O error while writing superblock

If I try to change/write anything in this log directory, the whole directory is read-only.
Enough space is left on the device.

Any idea? I thought zram was supposed to reduce writes to the SD card.
After a restart of the Raspberry Pi it works as usual. My idea now is to disable zram again and just write to the SD card directly. Good/bad idea?
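When this happens again, one quick check (a sketch; findmnt ships with util-linux on openHABian) is whether the kernel has remounted the log filesystem read-only after those I/O errors, which is the usual ext4 reaction with errors=remount-ro:

```shell
#!/bin/sh
# Classify a mount-option string as read-only or read-write.
classify_opts() {
    case ",$1," in
        *,ro,*) echo "read-only" ;;
        *)      echo "read-write" ;;
    esac
}

# Report the state of the filesystem holding a given path, e.g. /var/log.
# findmnt resolves the containing mount and prints its options
# ("rw,relatime", "ro,noatime", ...).
mount_state() {
    opts=$(findmnt -no OPTIONS --target "$1") || { echo "unknown"; return; }
    classify_opts "$opts"
}

mount_state /var/log
```

If this prints "read-only" while `zramctl` still shows the device mounted, the problem is the device rejecting writes, not a full directory.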

Are you sure? What's the output of zramctl and df -h?

Not sure if the screenshot was taken exactly after the restart of my Raspberry Pi or before.
I installed a cronjob which every 15 minutes writes df -h, zramctl info, and the tails of openhab.log and syslog:

Hopefully at the next logfile stop (it happens roughly every 2-3 weeks) I will check and post the last content again:

[20:08:49] openhabian@openhab:~$ cat logsize.sh
#!/bin/bash
LOGFILE=/home/openhabian/loginfo.log

# loginfo
echo "################################# $(date)" >> "$LOGFILE"
echo "*** openhab.log: " >> "$LOGFILE"
tail /var/log/openhab/openhab.log >> "$LOGFILE"
echo "  " >> "$LOGFILE"
echo "*** syslog.log: " >> "$LOGFILE"
tail /var/log/syslog >> "$LOGFILE"
echo "  " >> "$LOGFILE"
echo "*** df -h: " >> "$LOGFILE"
df -h >> "$LOGFILE"
echo "*** zramctl: " >> "$LOGFILE"
/sbin/zramctl >> "$LOGFILE"
echo "  " >> "$LOGFILE"
[20:08:52] openhabian@openhab:~$

[20:10:16] openhabian@openhab:~$ crontab -l | grep logsize
*/15 * * * * /home/openhabian/logsize.sh 2>&1


...

But anyway: doesn't it look strange that my filesystem (or at least some parts on zram) is read-only?

Normally a full filesystem gives an error like "no space left on device", but it does not switch into read-only mode. Even with sudo I got the error message that I don't have permissions.
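The two failure modes can be told apart directly on the shell. A hypothetical probe (the marker filename is my invention): EROFS prints "Read-only file system", ENOSPC prints "No space left on device", and a plain permission problem prints "Permission denied":

```shell
#!/bin/sh
# Try to create (and remove) a marker file; on failure, surface the
# exact error string so EROFS, ENOSPC and EACCES can be distinguished.
probe_write() {
    dir=$1
    if err=$(touch "$dir/.write-probe" 2>&1); then
        rm -f "$dir/.write-probe"
        echo "writable"
    else
        echo "write failed: $err"
    fi
}

probe_write /var/log
```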

No screenshots please. What about zramctl?

Now, after a restart of my Raspberry Pi, everything is OK again (without deleting any logfiles or anything else).

[20:38:36] openhabian@openhab:/var/log$ zramctl
NAME       ALGORITHM DISKSIZE  DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram1 lzo-rle       500M 69,2M 13,4M 34,5M       4 /opt/zram/zram1
/dev/zram0 lzo-rle       600M    4K   87B    4K       4 [SWAP]

After my next logfile crash I should hopefully have more log info from shortly before the zram error.
Probably I should also run some filesystem checks (fsck) on my Raspberry Pi 4.

Same situation again, 5 days after the last restart: at 22:46 (11.06.2021) my logfiles suddenly stopped being written. The Raspberry Pi itself is still working (events happen, MQTT works, everything fine), but the logs stopped.

Situation right now (no restart so far):

[07:00:02] openhabian@openhab:/var/log/openhab$ ll -rt
insgesamt 145M
-rw-r--r-- 1 openhab openhab    0 Mai 29 11:22 audit.log
-rw-r--r-- 1 openhab openhab  12M Jun  5 09:07 openhab.log.4
-rw-r--r-- 1 openhab openhab 4,7M Jun  6 15:39 events.log.2
-rw-r--r-- 1 openhab openhab 2,3M Jun  6 21:29 openhab.log.5
-rw-r--r-- 1 openhab openhab 4,1M Jun  6 21:29 events.log.3
-rw-r--r-- 1 openhab openhab  17M Jun  7 21:32 events.log.4
-rw-r--r-- 1 openhab openhab  17M Jun  8 16:29 openhab.log.6
-rw-r--r-- 1 openhab openhab  17M Jun  8 22:07 events.log.5
-rw-r--r-- 1 openhab openhab  17M Jun  9 23:14 events.log.6
-rw-r--r-- 1 openhab openhab  17M Jun 10 12:11 openhab.log.7
drwxr-xr-x 1 root    root    4,0K Jun 11 00:00 ../
-rw-r--r-- 1 openhab openhab  17M Jun 11 01:50 events.log.7
drwxr-xr-x 1 openhab openhab 4,0K Jun 11 01:50 ./
-rw-r--r-- 1 openhab openhab  13M Jun 12 07:00 openhab.log
-rw-r--r-- 1 openhab openhab  14M Jun 12 07:00 events.log



[07:00:03] openhabian@openhab:/var/log/openhab$ tail -f openhab.log
2021-06-11 22:46:20.620 [INFO ] [openhab.core.model.script.pool.rules] - Bilde Zisternen Mittelwert aus 50.456 + 50.456 + 50.456 + 50.456 + 50.456 + 50.456
2021-06-11 22:46:20.623 [INFO ] [openhab.core.model.script.pool.rules] - Bilde Zisternen Mittelwert: 50.5
2021-06-11 22:46:20.628 [INFO ] [openhab.core.model.script.pool.rules] - Rule: Poolpumpe Steuern ...Pool_Filterpumpe_Switch (Type=SwitchItem, State=ON, Label=Pool Filterpumpe, Category=null)
2021-06-11 22:46:20.630 [INFO ] [openhab.core.model.script.pool.rules] - Rückspülen in Arbeit... keine weiteren Anpassungen erlaubt
2021-06-11 22:46:21.752 [INFO ] [.reconnect.PeriodicReconnectStrategy] - Try to restore connection to '192.168.5.101'. Next attempt in 60000ms
2021-06-11 22:46:21.757 [INFO ] [.transport.mqtt.MqttBrokerConnection] - Starting MQTT broker connection to '192.168.5.101' with clientid c09745c2-2729-455e-8e16-b708eb3279b1
2021-06-11 22:46:30.620 [INFO ] [openhab.core.model.script.pool.rules] - Bilde Zisternen Mittelwert aus 50.456 + 50.456 + 50.456 + 50.456 + 50.456 + 50.456
2021-06-11 22:46:30.623 [INFO ] [openhab.core.model.script.pool.rules] - Bilde Zisternen Mittelwert: 50.5
2021-06-11 22:46:30.628 [INFO ] [openhab.core.model.script.pool.rules] - Rule: Poolpumpe Steuern ...Pool_Filterpumpe_Switch (Type=SwitchItem, State=ON, Label=Pool Filterpumpe, Category=null)
2021-06-11 22:46:30.630 [INFO ] [openhab.core.model.script.pool.rules] - Rückspülen in Arbeit... keine weiteren Anpassungen erlaubt
^C



[07:00:15] openhabian@openhab:/var/log/openhab$ df -h
Dateisystem    Größe Benutzt Verf. Verw% Eingehängt auf
/dev/root        29G    9,0G   19G   33% /
devtmpfs        1,8G       0  1,8G    0% /dev
tmpfs           1,9G       0  1,9G    0% /dev/shm
tmpfs           1,9G    2,8M  1,9G    1% /run
tmpfs           5,0M    4,0K  5,0M    1% /run/lock
tmpfs           1,9G       0  1,9G    0% /sys/fs/cgroup
/dev/mmcblk0p1  253M     48M  205M   19% /boot
/dev/sda1        29G     23G  4,6G   84% /media/usbstick
/dev/zram1      469M    240M  195M   56% /opt/zram/zram1
overlay1        469M    240M  195M   56% /var/log
tmpfs           388M       0  388M    0% /run/user/1000



[07:00:16] openhabian@openhab:/var/log/openhab$ zramctl
NAME       ALGORITHM DISKSIZE   DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram1 lzo-rle       500M 314,8M   74M  150M       4 /opt/zram/zram1
/dev/zram0 lzo-rle       600M  37,1M 15,6M 22,2M       4 [SWAP]
[07:00:26] openhabian@openhab:/var/log/openhab$ date
Sa 12. Jun 07:00:29 CEST 2021
[07:00:29] openhabian@openhab:/var/log/openhab$ uptime
 07:00:42 up 5 days,  9:30,  1 user,  load average: 0,92, 1,17, 0,72
[07:00:42] openhabian@openhab:/var/log/openhab$

And once again zram error messages in /var/log/syslog (operating system = raspi + openhabian):

[07:00:02] openhabian@openhab:/var/log/openhab$ ll -rt
insgesamt 145M
-rw-r--r-- 1 openhab openhab    0 Mai 29 11:22 audit.log
-rw-r--r-- 1 openhab openhab  12M Jun  5 09:07 openhab.log.4
-rw-r--r-- 1 openhab openhab 4,7M Jun  6 15:39 events.log.2
-rw-r--r-- 1 openhab openhab 2,3M Jun  6 21:29 openhab.log.5
-rw-r--r-- 1 openhab openhab 4,1M Jun  6 21:29 events.log.3
-rw-r--r-- 1 openhab openhab  17M Jun  7 21:32 events.log.4
-rw-r--r-- 1 openhab openhab  17M Jun  8 16:29 openhab.log.6
-rw-r--r-- 1 openhab openhab  17M Jun  8 22:07 events.log.5
-rw-r--r-- 1 openhab openhab  17M Jun  9 23:14 events.log.6
-rw-r--r-- 1 openhab openhab  17M Jun 10 12:11 openhab.log.7
drwxr-xr-x 1 root    root    4,0K Jun 11 00:00 ../
-rw-r--r-- 1 openhab openhab  17M Jun 11 01:50 events.log.7
drwxr-xr-x 1 openhab openhab 4,0K Jun 11 01:50 ./
-rw-r--r-- 1 openhab openhab  13M Jun 12 07:00 openhab.log
-rw-r--r-- 1 openhab openhab  14M Jun 12 07:00 events.log
[07:00:03] openhabian@openhab:/var/log/openhab$ tail -f openhab.log
2021-06-11 22:46:20.620 [INFO ] [openhab.core.model.script.pool.rules] - Bilde Zisternen Mittelwert aus 50.456 + 50.456 + 50.456 + 50.456 + 50.456 + 50.456
2021-06-11 22:46:20.623 [INFO ] [openhab.core.model.script.pool.rules] - Bilde Zisternen Mittelwert: 50.5
2021-06-11 22:46:20.628 [INFO ] [openhab.core.model.script.pool.rules] - Rule: Poolpumpe Steuern ...Pool_Filterpumpe_Switch (Type=SwitchItem, State=ON, Label=Pool Filterpumpe, Category=null)
2021-06-11 22:46:20.630 [INFO ] [openhab.core.model.script.pool.rules] - Rückspülen in Arbeit... keine weiteren Anpassungen erlaubt
2021-06-11 22:46:21.752 [INFO ] [.reconnect.PeriodicReconnectStrategy] - Try to restore connection to '192.168.5.101'. Next attempt in 60000ms
2021-06-11 22:46:21.757 [INFO ] [.transport.mqtt.MqttBrokerConnection] - Starting MQTT broker connection to '192.168.5.101' with clientid c09745c2-2729-455e-8e16-b708eb3279b1
2021-06-11 22:46:30.620 [INFO ] [openhab.core.model.script.pool.rules] - Bilde Zisternen Mittelwert aus 50.456 + 50.456 + 50.456 + 50.456 + 50.456 + 50.456
2021-06-11 22:46:30.623 [INFO ] [openhab.core.model.script.pool.rules] - Bilde Zisternen Mittelwert: 50.5
2021-06-11 22:46:30.628 [INFO ] [openhab.core.model.script.pool.rules] - Rule: Poolpumpe Steuern ...Pool_Filterpumpe_Switch (Type=SwitchItem, State=ON, Label=Pool Filterpumpe, Category=null)
2021-06-11 22:46:30.630 [INFO ] [openhab.core.model.script.pool.rules] - Rückspülen in Arbeit... keine weiteren Anpassungen erlaubt
^C
[07:00:15] openhabian@openhab:/var/log/openhab$ df -h
Dateisystem    Größe Benutzt Verf. Verw% Eingehängt auf
/dev/root        29G    9,0G   19G   33% /
devtmpfs        1,8G       0  1,8G    0% /dev
tmpfs           1,9G       0  1,9G    0% /dev/shm
tmpfs           1,9G    2,8M  1,9G    1% /run
tmpfs           5,0M    4,0K  5,0M    1% /run/lock
tmpfs           1,9G       0  1,9G    0% /sys/fs/cgroup
/dev/mmcblk0p1  253M     48M  205M   19% /boot
/dev/sda1        29G     23G  4,6G   84% /media/usbstick
/dev/zram1      469M    240M  195M   56% /opt/zram/zram1
overlay1        469M    240M  195M   56% /var/log
tmpfs           388M       0  388M    0% /run/user/1000
[07:00:16] openhabian@openhab:/var/log/openhab$ zramctl
NAME       ALGORITHM DISKSIZE   DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram1 lzo-rle       500M 314,8M   74M  150M       4 /opt/zram/zram1
/dev/zram0 lzo-rle       600M  37,1M 15,6M 22,2M       4 [SWAP]
[07:00:26] openhabian@openhab:/var/log/openhab$ date
Sa 12. Jun 07:00:29 CEST 2021
[07:00:29] openhabian@openhab:/var/log/openhab$ uptime
 07:00:42 up 5 days,  9:30,  1 user,  load average: 0,92, 1,17, 0,72
[07:00:42] openhabian@openhab:/var/log/openhab$

Might the reason be what zramctl shows: TOTAL = MEM-LIMIT = MEM-USED, i.e. the memory limit is reached?

[07:17:10] root@openhab:/var/log/apache2# zramctl --output-all
NAME       DISKSIZE   DATA COMPR ALGORITHM STREAMS ZERO-PAGES TOTAL MEM-LIMIT MEM-USED MIGRATED MOUNTPOINT
/dev/zram1     500M 314,8M   74M lzo-rle         4       4598  150M      150M     150M       0B /opt/zram/zram1
/dev/zram0     600M  37,1M 15,6M lzo-rle         4        640 22,2M      200M    28,8M     396B [SWAP]
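That reading would explain the symptoms: once the compressed pool hits mem_limit, zram rejects further writes and ext4 sees exactly the I/O errors from the first post. A sketch that checks this via sysfs; the column positions are taken from the kernel's zram mm_stat layout (orig_data_size, compr_data_size, mem_used_total, mem_limit, ...):

```shell
#!/bin/sh
# Read a zram device's compressed-memory usage from sysfs and compare it
# against the configured limit.
zram_limit_check() {
    sysdir=$1                     # e.g. /sys/block/zram1
    # mm_stat: orig_data_size compr_data_size mem_used_total mem_limit ...
    read -r _orig _compr used limit _rest < "$sysdir/mm_stat"
    echo "used=$used limit=$limit"
    if [ "$limit" -gt 0 ] && [ "$used" -ge "$limit" ]; then
        echo "LIMIT REACHED"
    fi
}

# usage on a live system:
#   zram_limit_check /sys/block/zram1
```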

What I did now because of the ZRAM link (ZRAM status):

I increased mem_limit in /etc/ztab from 150M to 350M (Raspberry Pi 4).

I stopped the service first, then edited the file, then started it again; for safety I also rebooted the Raspberry Pi.
After that, at least the zramctl output tells me that my memory limit is now 350M instead of 150M.
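The stop/edit/restart sequence sketched as commands. The service name comes from the comment inside /etc/ztab itself; the sed pattern is my assumption about the column layout (type, alg, mem_limit, ...), so work on a copy first:

```shell
#!/bin/sh
# Raise the mem_limit of the "log" entry in a ztab-style file from
# 150M to 350M. Assumed layout: <type> <alg> <mem_limit> <disk_size> ...
bump_log_limit() {
    sed 's/^\(log[[:space:]]\{1,\}[[:alnum:]_-]\{1,\}[[:space:]]\{1,\}\)150M/\1350M/' "$1"
}

# real procedure (needs root):
#   sudo systemctl stop zram-config.service
#   bump_log_limit /etc/ztab > /tmp/ztab.new   # inspect it, then:
#   sudo cp /tmp/ztab.new /etc/ztab
#   sudo systemctl start zram-config.service
#   zramctl --output-all                       # MEM-LIMIT should show 350M
```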

I checked my /var/log/openhab directory (du -h): my openhab.log* and events.log* files total 140M (I don't know exactly which of these old files is also held in zram…).

[08:02:56] openhabian@openhab:/var/log/openhab$
[08:02:56] openhabian@openhab:/var/log/openhab$ zramctl --output-all
NAME       DISKSIZE  DATA COMPR ALGORITHM STREAMS ZERO-PAGES TOTAL MEM-LIMIT MEM-USED MIGRATED MOUNTPOINT
/dev/zram1     500M  214M 45,2M lzo-rle         4       3862 99,5M      350M    99,5M       0B /opt/zram/zram1
/dev/zram0     600M    4K   87B lzo-rle         4          0    4K      200M       4K       0B [SWAP]
[08:03:12] openhabian@openhab:/var/log/openhab$



[08:03:28] openhabian@openhab:/var/log/openhab$ df -h
Dateisystem    Größe Benutzt Verf. Verw% Eingehängt auf
/dev/root        29G    9,2G   19G   33% /
devtmpfs        1,8G       0  1,8G    0% /dev
tmpfs           1,9G       0  1,9G    0% /dev/shm
tmpfs           1,9G    1,4M  1,9G    1% /run
tmpfs           5,0M    4,0K  5,0M    1% /run/lock
tmpfs           1,9G       0  1,9G    0% /sys/fs/cgroup
/dev/mmcblk0p1  253M     48M  205M   19% /boot
/dev/sda1        29G     23G  4,6G   84% /media/usbstick
tmpfs           388M       0  388M    0% /run/user/1000
/dev/zram1      469M    199M  235M   46% /opt/zram/zram1
overlay1        469M    199M  235M   46% /var/log



[08:03:31] openhabian@openhab:/var/log/openhab$ du -h
141M    .
[08:03:35] openhabian@openhab:/var/log/openhab$ pwd
/var/log/openhab
[08:03:36] openhabian@openhab:/var/log/openhab$

I will give an update if everything is still working over the next days.
@mstormi: please tell me if I made any basic errors in my thinking :wink: thx

As you can see above, that is the content of the complete /var/log directory.

I am wondering a bit why, although the last entry in openhab.log is from 2021-06-11 22:46:30.630, the timestamp of the file is Jun 12 07:00.

In one of your previous posts you referred to the syslog entry, but the code block does not contain syslog content; it is a copy of the same directory and file listings that are also part of the same post.

Since I enlarged my zram1 mem_limit to 350M, OH3 on the Raspberry Pi has been running stably.

*** zramctl:
NAME       DISKSIZE   DATA  COMPR ALGORITHM STREAMS ZERO-PAGES TOTAL MEM-LIMIT MEM-USED MIGRATED MOUNTPOINT
/dev/zram1     500M 397,4M 106,9M lzo-rle         4        428  204M      350M     204M       2B /opt/zram/zram1
/dev/zram0     600M  70,6M  28,9M lzo-rle         4       1335 42,5M      200M      70M     294B [SWAP]

The used memory in zram1 is still growing, but with the new mem_limit it at least looks like I have more time now until zram is full:

The MEM-USED column shows the used memory; the info comes from my crontab entry every 15 minutes:

grep zram1 loginfo.log | grep lzo-rle
/dev/zram1     500M 386,6M 100,7M lzo-rle         4        439  196M      350M     196M       2B /opt/zram/zram1
/dev/zram1     500M 386,6M 100,7M lzo-rle         4        439 196,1M      350M   196,1M       2B /opt/zram/zram1
/dev/zram1     500M 386,7M 100,7M lzo-rle         4        439 196,1M      350M   196,1M       2B /opt/zram/zram1
/dev/zram1     500M 386,9M  104M lzo-rle         4        435 200,5M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 387,3M 103,8M lzo-rle         4        550 200,5M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 387,3M 103,7M lzo-rle         4        507 200,4M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 387,3M 103,5M lzo-rle         4        506 200,3M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 387,3M 103,3M lzo-rle         4        506 200,3M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 387,3M 103,2M lzo-rle         4        506 200,3M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 387,3M 103,1M lzo-rle         4        506 200,3M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 387,3M 103,1M lzo-rle         4        506 200,2M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 387,3M 103,1M lzo-rle         4        506 200,2M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 387,3M 103,2M lzo-rle         4        506 200,2M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 387,3M 103,2M lzo-rle         4        506 200,2M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 387,3M 103,2M lzo-rle         4        506 200,2M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 387,3M 103,2M lzo-rle         4        506 200,2M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 387,3M 103,2M lzo-rle         4        506 200,2M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 387,3M 103,2M lzo-rle         4        506 200,2M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 387,3M 103,3M lzo-rle         4        506 200,1M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 387,3M 103,3M lzo-rle         4        506 200,1M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 387,3M 103,3M lzo-rle         4        506 200,1M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 387,3M 103,3M lzo-rle         4        506 200,1M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 387,3M 103,3M lzo-rle         4        506 200,1M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 387,4M 103,3M lzo-rle         4        499 200,1M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 387,5M 103,3M lzo-rle         4        495 200,1M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 387,7M 103,4M lzo-rle         4        493 200,1M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 387,8M 103,4M lzo-rle         4        491  200M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M  388M 103,5M lzo-rle         4        488  200M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 388,2M 103,5M lzo-rle         4        485  200M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 388,4M 103,6M lzo-rle         4        484  200M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 388,5M 103,7M lzo-rle         4        482  200M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 388,7M 103,7M lzo-rle         4        480 200,1M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 388,9M 103,8M lzo-rle         4        477 200,1M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M  389M 103,9M lzo-rle         4        475 200,1M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 389,2M 103,9M lzo-rle         4        468 200,1M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 389,4M  104M lzo-rle         4        425 200,1M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 389,6M 104,1M lzo-rle         4        398 200,2M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M  390M 104,2M lzo-rle         4        394 200,3M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 390,3M 104,3M lzo-rle         4        391 200,4M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M 390,6M 104,4M lzo-rle         4        388 200,6M      350M   200,6M       2B /opt/zram/zram1
/dev/zram1     500M  391M 104,6M lzo-rle         4        385 200,8M      350M   200,8M       2B /opt/zram/zram1
/dev/zram1     500M 391,3M 104,7M lzo-rle         4        382  201M      350M     201M       2B /opt/zram/zram1
/dev/zram1     500M 391,6M 104,8M lzo-rle         4        379 201,2M      350M   201,2M       2B /opt/zram/zram1
/dev/zram1     500M  392M 104,9M lzo-rle         4        375 201,3M      350M   201,3M       2B /opt/zram/zram1
/dev/zram1     500M 392,3M  105M lzo-rle         4        373 201,5M      350M   201,5M       2B /opt/zram/zram1
/dev/zram1     500M 392,6M 105,1M lzo-rle         4        369 201,7M      350M   201,7M       2B /opt/zram/zram1
/dev/zram1     500M  393M 105,1M lzo-rle         4        450 201,8M      350M   201,8M       2B /opt/zram/zram1
/dev/zram1     500M 393,4M 105,3M lzo-rle         4        428  202M      350M     202M       2B /opt/zram/zram1
/dev/zram1     500M 393,9M 105,6M lzo-rle         4        428 202,2M      350M   202,2M       2B /opt/zram/zram1
/dev/zram1     500M 394,1M 105,9M lzo-rle         4        428 202,3M      350M   202,3M       2B /opt/zram/zram1
/dev/zram1     500M 394,6M 106,1M lzo-rle         4        428 202,6M      350M   202,6M       2B /opt/zram/zram1
/dev/zram1     500M  395M 106,3M lzo-rle         4        428 202,8M      350M   202,8M       2B /opt/zram/zram1
/dev/zram1     500M 395,4M 106,4M lzo-rle         4        428  203M      350M     203M       2B /opt/zram/zram1
/dev/zram1     500M 395,8M 106,5M lzo-rle         4        428 203,2M      350M   203,2M       2B /opt/zram/zram1
/dev/zram1     500M 396,2M 106,6M lzo-rle         4        428 203,4M      350M   203,4M       2B /opt/zram/zram1
/dev/zram1     500M 396,6M 106,7M lzo-rle         4        428 203,6M      350M   203,6M       2B /opt/zram/zram1
/dev/zram1     500M 396,9M 106,8M lzo-rle         4        428 203,7M      350M   203,7M       2B /opt/zram/zram1
/dev/zram1     500M 397,4M 106,9M lzo-rle         4        428  204M      350M     204M       2B /opt/zram/zram1
[13:55:44] openhabian@openhab:~$
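Extracting just the MEM-USED column from those samples makes the trend easier to follow. A small filter; the field number (10) is taken from the `zramctl --output-all` column order captured above:

```shell
#!/bin/sh
# Print the MEM-USED value of every /dev/zram1 sample in the cron log.
# Field 10 corresponds to the MEM-USED column of `zramctl --output-all`.
mem_used() {
    awk '/^\/dev\/zram1/ { print $10 }' "$@"
}

# usage: collapse repeats to see when the value actually moves
#   mem_used loginfo.log | uniq -c
```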

Yes, you're right: the filesystem is full, so no more logs can be written, but it seems updating the timestamp is still possible. I'm not a Linux expert, but maybe updating the timestamp works because it does not need more memory. Therefore the file (openhab.log) looks up to date, but when I check the content via tail, it stopped exactly at the moment where the async write error shows up in /var/log/syslog.

The /var/log ZRAM dir is 600M by default on new installations, so you must have had a very old one.
Btw, you should check your /var/lib/openhab/etc/log4j2.xml and limit the openHAB log size there.


OK, thanks for the log4j hint. I reduced the openhab.log size limit from 16MB to 10MB.
ZRAM was activated in my 2.5 installation (openHABian / Raspberry Pi 4).

You're right about disksize: it is set to 600M, but mem_limit had to be raised to 350M for /var/log, because the default value (at most 1 year ago) was 150M:

cat /etc/ztab
# Once finished, restart ZRAM using 'systemctl restart zram-config.service'.

# swap  alg     mem_limit       disk_size       swap_priority   page-cluster    swappiness
swap    lz4     200M            600M            75              0               90

# log   alg     mem_limit       disk_size       target_dir                      bind_dir                oldlog_dir
log     lzo     350M            500M            /var/log                        /log.bind

# dir   alg     mem_limit       disk_size       target_dir                      bind_dir
dir     lz4     150M            500M            /var/lib/openhab2/persistence   /persistence.bind
[20:25:17] openhabian@openhab:/etc$
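For reference, the log size change lives in the rolling-file appender of /var/lib/openhab/etc/log4j2.xml. A sketch of the relevant fragment; the element names follow the stock openHAB 3 config as I remember it, but check your own file, since they may differ between versions:

```xml
<!-- excerpt from /var/lib/openhab/etc/log4j2.xml (names approximate) -->
<RollingRandomAccessFile name="LOGFILE"
        fileName="${sys:openhab.logdir}/openhab.log"
        filePattern="${sys:openhab.logdir}/openhab.log.%i">
    <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%-5.5p] [%-36.36c] - %m%n"/>
    <Policies>
        <!-- reduced from the 16MB default -->
        <SizeBasedTriggeringPolicy size="10MB"/>
    </Policies>
    <!-- keeps openhab.log.1 ... openhab.log.7, matching the listings above -->
    <DefaultRolloverStrategy max="7"/>
</RollingRandomAccessFile>
```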