Extending ZRAM

i have zram active now and i’d like to move influxdb to ZRAM too. i know that i have to make a new entry in /etc/ztab, but i’m afraid i don’t know how the entry should look…

dir   alg     mem_limit       disk_size       target_dir      bind_dir

dir = dir?
alg = ??? (couldn’t really find info on what to use)
mem_limit/disk_size = ok: disk_size is the virtual uncompressed size, approx. 220-450% of the mem allocated, depending on algorithm and input file. but what do i need to set for mem_limit?
target_dir = /var/lib/influxdb/data :slight_smile:
bind_dir = influxdb.bind ? (will this be created by ZRAM?)

i’ve read the opening post several times but i couldn’t find anything on these options :frowning:

See https://github.com/mstormi/openhabian-zram

yes, i’ve seen this, but it seems this is way above my “level”…
wild guess:

dir     lz4     150M            500M           /var/lib/influxdb   /influxdb.bind

?

still not sure what size i should set for mem_limit …

It’ll use up to that amount of RAM - depends on how much mem is free on your box. That I don’t know.

i have a rpi4 with 4gb ram. so, i’d say plenty of free ram:

##    Memory = Free: 3.15GB (82%), Used: 0.71GB (18%), Total: 3.86GB

are the other options in my example ok?

I don’t know. I have not tried to use InfluxDB, I don’t know how much data it occupies.
It’s a trial and error thing you have to do for yourself.
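One way to get a rough starting point (just a sketch, assuming InfluxDB keeps its data under /var/lib/influxdb as mentioned above) is to check how much data it currently holds and how much RAM is free, then size disk_size above the data volume and mem_limit comfortably below the free RAM:

sudo du -sh /var/lib/influxdb    # how much data would have to fit into disk_size (uncompressed)
free -h                          # how much RAM is realistically available for mem_limit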

ok, i guess i’ll get a warning in the zram logs if the size is too small?

no, it’ll refuse writes
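To notice a too-small size before writes start failing, the fill level can be watched over time (a sketch; the mount points are the default openHABian ones that also show up in the logs below):

zramctl                                        # DATA vs DISKSIZE and compressed TOTAL per /dev/zramN
df -h /var/lib/openhab2/persistence /var/log   # filesystem fill level of the overlay mounts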

i did just that (sudo reboot) and the logs + persistence data of the last days are lost.
made another reboot and the previous logs from today also vanished.

another try with “sudo shutdown -r”, same result: all data from 2020-07-11 until “before reboot” is lost.

log:

zram-config stop 2020-07-16-11:30:43
ztab remove log /zram2 /var/log /log.bind
/zram2
Warning: Stopping rsyslog.service, but it can still be activated by:
  syslog.socket
umount: /var/log (overlay2) unmounted
overlay --lowerdir=/opt/zram/log.bind --upperdir=/opt/zram/zram2/upper
Upper directory not specified.
Try './overlay --help' for more information.
ztab remove dir /zram1 /var/lib/openhab2/persistence /persistence.bind
/zram1
umount: /var/lib/openhab2/persistence (overlay1) unmounted
overlay --lowerdir=/opt/zram/persistence.bind --upperdir=/opt/zram/zram1/upper
Upper directory not specified.
Try './overlay --help' for more information.
ztab remove swap /zram0 zram-config0
/dev/zram0 removed
removed '/usr/local/share/zram-config/zram-device-list.rev'
removed '/usr/local/share/zram-config/zram-device-list'
zram-config start 2020-07-16-11:30:56
ztab create swap lz4 200M 600M 75 0 90
insmod /lib/modules/4.19.118-v7l+/kernel/mm/zsmalloc.ko
insmod /lib/modules/4.19.118-v7l+/kernel/drivers/block/zram/zram.ko
zram0 created comp_algorithm=lz4 mem_limit=200M disksize=600M
Setting up swapspace version 1, size = 600 MiB (629141504 bytes)
LABEL=zram-config0, UUID=e759bb49-455d-498c-a3fc-2306348745e1
swapon: /dev/zram0: found signature [pagesize=4096, signature=swap]
swapon: /dev/zram0: pagesize=4096, swapsize=629145600, devsize=629145600
swapon /dev/zram0
vm.page-cluster = 0
vm.swappiness = 90
ztab create dir lz4 150M 500M /var/lib/openhab2/persistence /persistence.bind
dirPerm /var/lib/openhab2/persistence 775 110:115
mount: /var/lib/openhab2/persistence bound on /opt/zram/persistence.bind.
mount: /opt/zram/persistence.bind propagation flags changed.
dirMountOpt rw,noatime dirFsType  ext4
zram1 created comp_algorithm=lz4 mem_limit=150M disksize=500M
mke2fs 1.44.5 (15-Dec-2018)
fs_types for mke2fs.conf resolution: 'ext4', 'small'
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
128000 inodes, 128000 blocks
6400 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=132120576
4 block groups
32768 blocks per group, 32768 fragments per group
32000 inodes per group
Filesystem UUID: 7cc07125-3e9b-4bff-b2e2-18f679dfe0dd
Superblock backups stored on blocks:
        32768, 98304

Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

mount: /dev/zram1 mounted on /opt/zram/zram1.
mount: overlay1 mounted on /var/lib/openhab2/persistence.
ztab create log lzo 150M 500M /var/log /log.bind
Warning: Stopping rsyslog.service, but it can still be activated by:
  syslog.socket
dirPerm /var/log 755 0:0
mount: /var/log bound on /opt/zram/log.bind.
mount: /opt/zram/log.bind propagation flags changed.
dirMountOpt rw,noatime dirFsType  ext4
zram2 created comp_algorithm=lzo mem_limit=150M disksize=500M
mke2fs 1.44.5 (15-Dec-2018)
fs_types for mke2fs.conf resolution: 'ext4', 'small'
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
128000 inodes, 128000 blocks
6400 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=132120576
4 block groups
32768 blocks per group, 32768 fragments per group
32000 inodes per group
Filesystem UUID: bf2d6042-9f63-4e96-9e0c-b401ef546047
Superblock backups stored on blocks:
        32768, 98304

Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

mount: /dev/zram2 mounted on /opt/zram/zram2.
mount: overlay2 mounted on /var/log.
createZlog no oldlog dir in ztab
zram-config stop 2020-07-16-11:45:20
ztab remove log /zram2 /var/log /log.bind
/zram2
Warning: Stopping rsyslog.service, but it can still be activated by:
  syslog.socket
umount: /var/log (overlay2) unmounted
overlay --lowerdir=/opt/zram/log.bind --upperdir=/opt/zram/zram2/upper
Upper directory not specified.
Try './overlay --help' for more information.
ztab remove dir /zram1 /var/lib/openhab2/persistence /persistence.bind
/zram1
umount: /var/lib/openhab2/persistence (overlay1) unmounted
overlay --lowerdir=/opt/zram/persistence.bind --upperdir=/opt/zram/zram1/upper
Upper directory not specified.
Try './overlay --help' for more information.
ztab remove swap /zram0 zram-config0
/dev/zram0 removed
removed '/usr/local/share/zram-config/zram-device-list.rev'
removed '/usr/local/share/zram-config/zram-device-list'

zramctl
gives no output.

i am 100% sure that i didn’t mess with anything zram related in the last days…
please help :pray:
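When zramctl prints nothing at all, a first thing worth checking (a sketch, assuming the tool was installed as the usual zram-config systemd service) is whether that service actually came up after the reboot:

sudo systemctl status zram-config
sudo journalctl -u zram-config -b    # its messages from the current boot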

edit: is this / could this be related?

[12:06:52] openhabian@OHab2:~$ sudo systemctl restart openhab2
Warning: The unit file, source configuration file or drop-ins of openhab2.service changed on disk. Run 'systemctl daemon-reload' to reload units.

(systemctl daemon-reload doesn’t help)

Your log shows the ZRAM dirs were properly unmounted, hence synced.
But this is a tutorial thread, not a helpline. So the rest you have to find out for yourself.

too bad… i never installed influxDB and didn’t mess with zram in any way, and i’m pretty sure i’m not gonna “find out for myself”.
so i’m going to have to live with data loss on standard openhabian, or deactivate ZRAM again (and go back to ssd)?

No, it works on standard openHABian. So you must have messed something up, but I don’t have the time and willingness to debug everyone’s private system, sorry.

ok, new setup, again.
downloaded openhabian, wrote to new sd card (via balena etcher).
configured some things on openhabian-config:

  • hostname
  • password
  • installed mosquitto
  • set locales
  • installed amanda
  • reboot

[18:38:21] openhabian@ohab2:~$ sudo systemctl restart openhab2
Warning: The unit file, source configuration file or drop-ins of openhab2.service changed on disk. Run 'systemctl daemon-reload' to reload units.

i don’t know if this is ZRAM related or not, but i’m pretty sure this should not happen on a new setup, right?

Then why don’t you execute that?
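For reference, the two commands the warning is asking for are plain systemd usage, nothing openHAB-specific:

sudo systemctl daemon-reload     # re-read the changed unit files
sudo systemctl restart openhab2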

Hi, I’ve read this thread and without going into details it feels like we have similar symptoms (in my case, had similar symptoms). That is, losing data when doing a reboot. In my case it was a memory issue: my journal logging completely filled up a zram directory. Have you investigated how your zram dir fills up over time?

Also I’m curious, why would you like to include also the influxdb data in a zram dir?

I assume that you have a UPS solution powering your device, correct? Otherwise you would lose all of your persisted data residing in zram in case of a power outage.
Wouldn’t a more appropriate strategy be to exclude influxdb from the sd-card altogether and put it on a usb-stick instead (or any external device)? The usb-stick (or e.g. a network share) could then be backed up every 24h or so. This would mean that you have a low-wear system (with respect to the sd-card) while still making certain that you have secured your persisted data.

But I might have completely misinterpreted your use case.
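As an illustration of the 24h backup idea above (purely hypothetical paths, adjust to wherever the stick and the backup target are mounted), a simple nightly cron entry could look like:

# crontab -e: copy the InfluxDB data from the usb-stick to a backup location every night at 03:00
0 3 * * * rsync -a /mnt/usbstick/influxdb/ /mnt/backup/influxdb/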

if i’m honest: i had this problem with my previous setup (2 weeks old, the one with the zram problems), and in that case the error was fixed only until the next reboot, then reappeared. so (assuming it would be the same behaviour) i didn’t bother this time.

yesterday i started another new blank openhabian setup and i’ll try to understand when exactly this error begins to appear - and i’ll also give systemctl daemon-reload a try.

no, i haven’t investigated how it fills up. thanks for the advice, if the problem reappears with new setup maybe i’ll ask you how this is done :slight_smile:?

for now i won’t use influxdb and i’m planning to attach my device to a UPS. i’m aware of the data loss.
but loss of persisted data applies also to all the other services… so i’m not sure if i should rely on zram working or if i should move it to some USB device.

As a first step I made a cron job that runs every morning at 8 o’clock.
If you run the command zramctl you will get feedback on the memory use of each zram directory. If you’ve identified a problem related to running out of memory, then you’ll also have to look into where in the specific directory the memory is used.

#!/usr/bin/python3
# Append a timestamp plus the current zramctl output to a log file,
# so the fill level of each zram device can be tracked over time.
import datetime, os

now = datetime.datetime.now()

# write the timestamp into the same log file the zramctl output goes to
with open('/home/pi/zramStatus.log', 'a') as log:
    log.write(now.strftime("%Y-%m-%d %H:%M:%S") + "\n")

# append the zramctl table (device, algorithm, disksize, data, compr, total)
os.system('/sbin/zramctl >> /home/pi/zramStatus.log')
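A matching crontab entry (crontab -e) could then look like this; the script path is just an example name, kept next to the log file used above:

# run the zram status logger every morning at 08:00
0 8 * * * /usr/bin/python3 /home/pi/zramStatus.py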

In my case it was also quite clear when in time the memory had been used up. As it was the log zram directory which was affected (/opt/zram/zram2/upper/), there wasn’t any space left for the openhab log files. This also made the real-time frontail gui empty from the moment the space ran out.
Persistence data seems to be residing in the zram1 directory, however.
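To find out what exactly is eating the space inside such a zram dir, a plain du on the upper directory mentioned above should be enough (sketch):

sudo du -sh /opt/zram/zram2/upper/*    # size per subdirectory inside the log zram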

In my case it was the journal log directory which increased in size quite quickly. I’ve made adjustments to how much memory the journal log is allowed to take. But I’ve also added a cron job which reboots my pi every week.

Which ones?

That now is a bad move. It won’t help with dirs/partitions filling either as it won’t delete anything.

The systemd journal, by editing journald.conf. I’ve changed the allowed disk space use to a minimum (now: SystemMaxUse=30M, RuntimeMaxUse=30M).
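For anyone wanting to apply the same limits: the settings live in /etc/systemd/journald.conf in the [Journal] section, and journald needs a restart to pick them up (standard systemd, not openHABian-specific):

sudo nano /etc/systemd/journald.conf
#   [Journal]
#   SystemMaxUse=30M
#   RuntimeMaxUse=30M
sudo systemctl restart systemd-journald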

Sure, it won’t delete anything from disk, but the reboot will sync the zram to disk and start again with a more or less empty zram directory, and thus a relatively long period of time until the zram is almost full again. At least, it seems to work for me.

Please explain why a reboot is a bad idea. Do you mean that it’s a bad idea in general, or that it’s a bad idea if the goal is to create free zram space? Is it actually a “bad idea” or just an action which won’t resolve my issue?

My understanding is that the zram will eventually fill up. Perhaps not if the user is 100% sure that all of the content is rotated, automatically flushing out the oldest data. Please correct me if I’m mistaken.

Both. Generally speaking, reboots hide but don’t solve problems, and quite often generate new problems (services fail to restart, items become uninitialized, etc.).

No. It’ll rotate logfiles away (to non-ZRAM dirs) when a defined size is reached. That limit was just missing/too high for the journal.
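A quick way to check whether the journal really is the culprit, and to trim it on the spot, is via journalctl itself:

journalctl --disk-usage               # how much space the journal currently occupies
sudo journalctl --vacuum-size=30M     # shrink archived journal files to roughly 30M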