Possibly so. I admit it isn’t intuitive or consistent to have it under 30 again, but it clearly reads “6A”, not “38”, in both the docs and my post, doesn’t it?
Which leads us back to “people don’t read the docs”. The only reason I was prompted to look under 60 is because of this post.
The docs don’t even mention zram by name.
- (BETA) Reduce wear on the SD card by temporarily moving write-intensive actions to RAM during operation (logs, persistent data). Warning: a power failure will result in lost data. [Menu option: 6A]
Given that everything in 60 except the removal of zram is represented in the other menu options, I don’t think it’s unreasonable to expect a menu option outside of 60 for the removal of zram as well. There is one for installing it, and intuitively one would expect the removal option to be co-located with the installation option.
To be consistent with everything else that openhabian-config offers, there really should be an option under 30 to remove zram as well. If not, then perhaps most if not all of those menu options, 30 in particular, should be removed in their entirety.
I installed from the openHABian image, then used sudo openhabian-config to update to 2.5.0 M2 and installed zram.
Below the “38 | Use zram” menu option there was no further option to uninstall zram.
That’s all I find in the openhabian-config utility.
See 6A in the screenshots above…
Guess what: in Buster, ‘reboot’ and ‘halt’ are also still available and DO properly execute the shutdown scripts.
@mstormi I tried to set up zram after a manual install of openhabian-config, but it fails:
2019-09-10_11:33:02_CEST [openHABian] Loading configuration file '/etc/openhabian.conf'... OK
2019-09-10_11:33:02_CEST [openHABian] openHABian configuration tool version: [master]v1.5-519(9af5552)
2019-09-10_11:33:03_CEST [openHABian] Checking for changes in origin... OK
Username for 'https://github.com': removed
Password for 'https://removed@github.com':
remote: Repository not found.
fatal: repository 'https://github.com/mstormi/zram-config/' not found
/opt/openhabian/functions/zram.bash: line 14: cd: /tmp/openhabian.1Mhof85ivh: No such file or directory
/bin/sh: 0: Can't open ./install.sh
Failed to start zram-config.service: Unit zram-config.service not found.
And sure enough the repo https://github.com/mstormi/zram-config/
does not exist?
Am I missing something?
You’re right, it doesn’t exist at the moment.
You can use https://github.com/ecdye/zram-config instead; we’re transferring that to my account but it seems that takes more time.
@mstormi Running zram for 17d, everything fine so far. Today I checked df -h and found /dev/zram1 usage at 100%. Is this to be expected? I’m using zram for /var/log and swap only, btw…
You need to be more precise. What real-world filesystem does /dev/zram1 point to, and how full is that one?
Use zramctl to see how much RAM is really in use.
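To illustrate how to read that output: the TOTAL column of zramctl is the compressed data plus allocator overhead, i.e. the RAM actually consumed, which is usually much smaller than the uncompressed DATA column. A minimal sketch (the sample output is made up for illustration; on a live system you would pipe zramctl itself):

```shell
# Illustrative only: parse zramctl-style output to report the RAM actually
# consumed (TOTAL column) per device. Sample output is hard-coded here;
# on a live system replace the echo with `zramctl` (or `/sbin/zramctl`).
sample='NAME       ALGORITHM DISKSIZE  DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram1 lz4           150M  146M   31M   33M       4 /log.bind
/dev/zram0 lz4           750M   95M   40M   42M       4 [SWAP]'

# Column 6 (TOTAL) is compressed size plus overhead, i.e. real RAM use.
echo "$sample" | awk 'NR > 1 { print $1, "uses", $6, "of RAM for", $4, "of data" }'
```

So a device can show 100% of its (uncompressed) DISKSIZE in use while occupying only a fraction of that in RAM.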
Thanks @mstormi
This is my /etc/ztab:
# swap alg mem_limit disk_size swap_priority page-cluster swappiness
swap lz4 250M 750M 75 0 80
# log alg mem_limit disk_size target_dir bind_dir oldlog_dir
log lz4 50M 150M /var/log /log.bind /opt/zram/oldlog
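For anyone who wants to give /var/log more headroom: the log line in /etc/ztab can be enlarged. The numbers below are only an illustration, not a recommendation, and the zram-config service should be stopped (sudo systemctl stop zram-config) before editing ztab and started again afterwards:

```text
# log  alg  mem_limit  disk_size  target_dir  bind_dir   oldlog_dir
log    lz4  150M       500M       /var/log    /log.bind  /opt/zram/oldlog
```

Note that disk_size is the uncompressed capacity; thanks to compression the actual RAM used stays well below it.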
swap usage was at 95M and /var/log is:
% sudo du -sh /var/log
246M /var/log
Ah yes, should have done that. I’ve meanwhile rebooted and /dev/zram1 is back at 12%. I’ll have to check this next time…
Assuming /dev/zram0 was swap and zram1 was /var/log (validate this next time), you shouldn’t be surprised. You produced more logs than you provided space for.
I guess openHABian rotated some logs to the bind_dir directory?
So decrease the logfile size/number of kept logs in OH (and in whatever else fills /var/log).
The content was the same as in /var/log, so the size was probably too big to begin with? Most of it was OH logs (236M).
Easier said than done
I’ve added the following to /var/lib/openhab2/etc/org.ops4j.pax.logging.cfg:
log4j2.appender.out.strategy.max = 2
log4j2.appender.event.strategy.max = 2
The result was that OH logging stopped completely.
Also, I’ve never changed the logging configuration. So isn’t this a problem that will happen to all users sooner or later?
Regarding zramctl: I just noticed that it’s not even installed. I thought it was part of the util-linux package, which is already installed on my Raspbian system, but apparently it’s not. At least not on Raspbian.
Right file but wrong entry. I think the block for that already exists, so don’t add lines but change the existing ones. Can’t look it up ATM, but there are threads on this and I think it’s even in the docs.
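For anyone finding this later, a sketch of what that block looks like (based on the stock openHAB 2.x org.ops4j.pax.logging.cfg; key names may differ slightly in your version, so adjust the lines already present rather than appending new ones):

```text
# openhab.log appender: roll over at 8MB instead of 16MB, keep only 2 archives
log4j2.appender.out.policies.size.size = 8MB
log4j2.appender.out.strategy.type = DefaultRolloverStrategy
log4j2.appender.out.strategy.max = 2
```

Adding a strategy.max line without a matching strategy.type (and the rest of the existing appender block) can break the log4j2 configuration, which would explain logging stopping entirely.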
Just to those that chose a log size of 150M like you did (did you change that in ztab after installing, or was that the default? I have increased it meanwhile, but I think the PR is still pending).
Did you originally install the openHABian image? Let me know which package you need so I can add it to openHABian.
Thanks, I’ll take another look.
It was the default.
No, I just cloned the repo for zram setup.
Nvm, it’s there. Just didn’t have /sbin in my zsh path…
Zram is a great addition and definitely has its place; you just need to get a UPS in case of power outages. You could even get an inexpensive Pi UPS card, which can last about 9-11 hours or so, depending on the model. If you go with USB booting to SSD (here’s an article on how to set it up: https://www.raspberrypi.org/documentation/hardware/raspberrypi/bootmodes/msd.md), you will have a very stable platform that will survive for years, and you will only have to worry about your frequency of database writes, if you have that set up, which is not too difficult. I believe the default is to write changes immediately, but you can check that in the …/openhab2/persistence/jdbc.persist file, if you go that route. So the default setup would generally be OK.
For reference:
This thread is marked Solved, but the discussion is related to my problem. Perhaps it should be moved to a different/new thread.
After a reboot and a couple of days, the logging to openhab.log and events.log stops. The frontail frontend is empty apart from the headers showing events.log and openhab.log; there is no actual content. I will remember to take a screenshot at the next occurrence.
I’m using the zram feature, and as I haven’t seen this behavior before, I suspect that zram is out of memory. But the openHAB logging documentation does not describe how to change logging preferences like size and number of logs.
The configuration file org.ops4j.pax.logging.cfg does not mention any log4j2.appender.out.strategy.max options. Only a size option is configured; the default seems to be 16M. I have tried changing this to 8M, but logging still stops.
I think the openHAB docs should be updated to also describe these logging adjustments for size and rotation.
Luckily, only the logging related to /opt/zram/zram2/ seems to be affected. All my persistence data, such as rrd4j, is still complete; persistence resides in /opt/zram/zram1/ as I understand it.
I have logged the zram memory usage daily, please see below:
2020-04-23 08:42:45
NAME ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo 500M 95,6M 21,6M 23,1M 4 /opt/zram/zram2
/dev/zram1 lz4 600M 42,8M 9,2M 10,2M 4 /opt/zram/zram1
/dev/zram0 lz4 600M 16,6M 7,6M 8,6M 4 [SWAP]
2020-04-24 10:50:54
NAME ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo 500M 277,8M 70M 73,7M 4 /opt/zram/zram2
/dev/zram1 lz4 600M 47,1M 10,5M 11,7M 4 /opt/zram/zram1
/dev/zram0 lz4 600M 32,6M 14,5M 15,6M 4 [SWAP]
2020-04-26 08:54:31
NAME ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo 500M 453,9M 108,8M 113,6M 4 /opt/zram/zram2
/dev/zram1 lz4 600M 59,3M 13,7M 14,6M 4 /opt/zram/zram1
/dev/zram0 lz4 600M 213,1M 73,7M 77,6M 4 [SWAP]
2020-04-27 16:29:11
NAME ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo 500M 453,9M 108,6M 113,4M 4 /opt/zram/zram2
/dev/zram1 lz4 600M 60M 14M 14,9M 4 /opt/zram/zram1
/dev/zram0 lz4 600M 204,7M 71,5M 75,8M 4 [SWAP]
2020-04-28 08:00:02
NAME ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo 500M 453,9M 109,1M 114M 4 /opt/zram/zram2
/dev/zram1 lz4 600M 60M 14M 14,9M 4 /opt/zram/zram1
/dev/zram0 lz4 600M 203,9M 71,4M 75,2M 4 [SWAP]
2020-04-28 08:51:23
NAME ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo 500M 40,3M 6,6M 7,2M 4 /opt/zram/zram2
/dev/zram1 lz4 600M 22,8M 1,9M 2,4M 4 /opt/zram/zram1
/dev/zram0 lz4 600M 4K 76B 4K 4 [SWAP]
2020-04-29 08:00:01
NAME ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo 500M 82,3M 19,1M 20,3M 4 /opt/zram/zram2
/dev/zram1 lz4 600M 41,3M 8,9M 9,8M 4 /opt/zram/zram1
/dev/zram0 lz4 600M 12,9M 5,8M 6,5M 4 [SWAP]
2020-04-30 08:00:01
NAME ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo 500M 108,4M 26,2M 28M 4 /opt/zram/zram2
/dev/zram1 lz4 600M 51,7M 11,7M 12,6M 4 /opt/zram/zram1
/dev/zram0 lz4 600M 23,6M 10,6M 11,5M 4 [SWAP]
2020-05-01 08:00:01
NAME ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo 500M 366,5M 91,5M 96M 4 /opt/zram/zram2
/dev/zram1 lz4 600M 55,8M 12,7M 13,7M 4 /opt/zram/zram1
/dev/zram0 lz4 600M 71,3M 29M 30,7M 4 [SWAP]
2020-05-02 08:00:02
NAME ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo 500M 455,1M 112,2M 117,2M 4 /opt/zram/zram2
/dev/zram1 lz4 600M 57,8M 13,4M 14,2M 4 /opt/zram/zram1
/dev/zram0 lz4 600M 226M 79,4M 84,4M 4 [SWAP]
2020-05-03 08:00:01
NAME ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo 500M 455,1M 112,2M 117,2M 4 /opt/zram/zram2
/dev/zram1 lz4 600M 59,8M 14M 15M 4 /opt/zram/zram1
/dev/zram0 lz4 600M 198,5M 72M 76,4M 4 [SWAP]
2020-05-03 09:50:17
NAME ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo 500M 44,1M 6,6M 7,3M 4 /opt/zram/zram2
/dev/zram1 lz4 600M 22M 1,9M 2,4M 4 /opt/zram/zram1
/dev/zram0 lz4 600M 4K 76B 4K 4 [SWAP]
I did a reboot on the 28th of April and on the 3rd of May.
The data usage of zram2 is quite close to the available disk size, but it’s not exceeding it. Is this the problem?
Some background:
Raspbian Buster
openHAB installed through apt-get (openHAB 2.5.3-1)
openHABian installed through apt-get ([master]v1.5-552(869c2ea))
zram feature installed through openhabian-config
Is there a need to do a periodic reboot or a zram stop/start to write data to disk? Or should it work continuously if I set the logging size to something more appropriate?
Thanks!
Search the forum for pax.logging to find out how you can tune OH logging and separate that issue from ZRAM.
I don’t think so, because while the uncompressed size is close, the relevant compressed size (the TOTAL column, to be precise) is not.
No (although that should not hurt).
Are you sure logging just simply stops?
Check if logrotate is in operation (see /etc/logrotate.d/zram-config).
Also check if the rotate target dir has sufficient space to rotate there.
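A quick way to check that, assuming the oldlog_dir from the ztab shown earlier (/opt/zram/oldlog; adjust the path if yours differs):

```shell
# Report free space on the filesystem holding the rotation target.
# /tmp is used here only so the snippet runs anywhere; substitute your
# oldlog_dir, e.g. /opt/zram/oldlog.
dir=/tmp
df -Pk "$dir" | awk 'NR == 2 { printf "%d MB free on %s\n", $4 / 1024, $6 }'
```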
It’s on SD and not meant to provide space for hundreds of MB. You should reduce your log level.
Is your ZRAM install up to date? It does not get updated when openhabian-config updates.
Compare /usr/local/bin/zram-config with the latest in the repo.
If necessary, uninstall and reinstall ZRAM.