[SOLVED] Using Zram

Assuming /dev/zram0 was swap and zram1 was /var/log (validate this next time), you shouldn’t be surprised. You produced more logs than you provided space for.
I guess openHABian rotated some logs to the bind_dir directory?

So decrease logfile size/number of kept logs in OH (and in whatever fills /var/log)

Content was the same as in /var/log, so the size was probably too big to begin with? Most of it was OH logs (236M).

Easier said than done :slight_smile:

I’ve added the following to /var/lib/openhab2/etc/org.ops4j.pax.logging.cfg:

log4j2.appender.out.strategy.max = 2
log4j2.appender.event.strategy.max = 2 

The result was that OH logging stopped completely :confused:

Also, I’ve never changed the logging configuration. So isn’t this a problem that will happen to all users sooner or later?

Regarding zramctl: I just noticed that this is not even installed. I thought it was part of the util-linux package, which is already installed on my Raspbian system, but apparently it’s not. At least not on Raspbian.

Right file but wrong entry. I think the block for that already exists, so don’t add lines but change those. Can’t look it up ATM, but there are threads on this and I think it’s even in the docs.
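
From memory, the relevant part of /var/lib/openhab2/etc/org.ops4j.pax.logging.cfg looks roughly like the sketch below (OH 2.5 naming; check your own file, the exact lines may differ). The point is that the strategy.max setting only seems to take effect together with a strategy.type line; a max line on its own may even break the logging configuration, which could be why logging stopped for you:

# sketch of the openhab.log appender settings (the events.log "event" appender is analogous)
log4j2.appender.out.policies.type = Policies
log4j2.appender.out.policies.size.type = SizeBasedTriggeringPolicy
log4j2.appender.out.policies.size.size = 8MB
log4j2.appender.out.strategy.type = DefaultRolloverStrategy
log4j2.appender.out.strategy.max = 2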

Just to those that chose a logsize of 150M like you did :slight_smile:
(did you change that in ztab after installing or was that the default? I have increased it meanwhile but I think the PR is still pending).

Did you originally install the openHABian image? Let me know which package you need so I can add it to openHABian.

Thanks, I’ll take another look.

It was the default.

No, I just cloned the repo for zram setup.

Nvm, it’s there. Just didn’t have /sbin in my zsh path…
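
For anyone else hitting this: either call it with the full path (/sbin/zramctl) or add the sbin dirs to the path, e.g.:

# e.g. in ~/.zshrc
export PATH="$PATH:/usr/sbin:/sbin"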

Zram is a great addition and definitely has its place; you just need to get a UPS in case of power outages. You could even get an inexpensive Pi UPS card, which can last about 9-11 hours or so, depending on the model. If you go with USB booting to an SSD (here’s an article on how to set it up: https://www.raspberrypi.org/documentation/hardware/raspberrypi/bootmodes/msd.md), you will have a very stable platform that will survive for years, and you will only have to worry about the frequency of your database writes, if you have persistence set up, which is not too difficult. I believe the default is to write changes immediately, but you can check that in the …/openhab2/persistence/jdbc.persist file if you go that route. So the default setup would generally be OK.
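
To illustrate (only a sketch - the group name and timing here are made up, and your existing persistence config may look different): a .persist file lets you switch from writing on every change to a cron-based schedule, which cuts down on writes considerably:

// sketch of a jdbc.persist strategy section; "gPersist" is an example group, not a default
Strategies {
    // Quartz cron expression: once a minute
    everyMinute : "0 * * * * ?"
    default = everyChange
}

Items {
    // persist members of the example group once a minute and restore them on startup
    gPersist* : strategy = everyMinute, restoreOnStartup
}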


This thread is marked Solved but the discussion is related to my problem. Perhaps it should be moved to a different/new thread.

After a reboot and a couple of days, the logging to openhab.log and events.log stops. The Frontail frontend is empty apart from the headers showing events.log and openhab.log; there is no actual content. I will remember to take a screenshot at the next occurrence.

I’m using the zram feature, and as I haven’t seen this behavior before I suspect that zram is out of memory. But the openHAB logging documentation does not describe how to change the logging preferences such as size and number of logs.

The configuration file org.ops4j.pax.logging.cfg does not mention any log4j2.appender.out.strategy.max options. Only a size option is configured; the default seems to be 16M. I have tried to change this to 8M but logging still stops.
I think the openHAB docs should be updated to also describe these logging adjustments (size and rotation).

Luckily, only the logging related to /opt/zram/zram2/ seems to be affected. All my persistence data such as rrd4j is still complete, and persistence resides in /opt/zram/zram1/ as I understand it.

I have logged the zram memory usage daily, please see below:

2020-04-23 08:42:45
NAME       ALGORITHM DISKSIZE  DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo           500M 95,6M 21,6M 23,1M       4 /opt/zram/zram2
/dev/zram1 lz4           600M 42,8M  9,2M 10,2M       4 /opt/zram/zram1
/dev/zram0 lz4           600M 16,6M  7,6M  8,6M       4 [SWAP]
2020-04-24 10:50:54
NAME       ALGORITHM DISKSIZE   DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo           500M 277,8M   70M 73,7M       4 /opt/zram/zram2
/dev/zram1 lz4           600M  47,1M 10,5M 11,7M       4 /opt/zram/zram1
/dev/zram0 lz4           600M  32,6M 14,5M 15,6M       4 [SWAP]
2020-04-26 08:54:31
NAME       ALGORITHM DISKSIZE   DATA  COMPR  TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo           500M 453,9M 108,8M 113,6M       4 /opt/zram/zram2
/dev/zram1 lz4           600M  59,3M  13,7M  14,6M       4 /opt/zram/zram1
/dev/zram0 lz4           600M 213,1M  73,7M  77,6M       4 [SWAP]
2020-04-27 16:29:11
NAME       ALGORITHM DISKSIZE   DATA  COMPR  TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo           500M 453,9M 108,6M 113,4M       4 /opt/zram/zram2
/dev/zram1 lz4           600M    60M    14M  14,9M       4 /opt/zram/zram1
/dev/zram0 lz4           600M 204,7M  71,5M  75,8M       4 [SWAP]
2020-04-28 08:00:02
NAME       ALGORITHM DISKSIZE   DATA  COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo           500M 453,9M 109,1M  114M       4 /opt/zram/zram2
/dev/zram1 lz4           600M    60M    14M 14,9M       4 /opt/zram/zram1
/dev/zram0 lz4           600M 203,9M  71,4M 75,2M       4 [SWAP]
2020-04-28 08:51:23
NAME       ALGORITHM DISKSIZE  DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo           500M 40,3M  6,6M  7,2M       4 /opt/zram/zram2
/dev/zram1 lz4           600M 22,8M  1,9M  2,4M       4 /opt/zram/zram1
/dev/zram0 lz4           600M    4K   76B    4K       4 [SWAP]
2020-04-29 08:00:01
NAME       ALGORITHM DISKSIZE  DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo           500M 82,3M 19,1M 20,3M       4 /opt/zram/zram2
/dev/zram1 lz4           600M 41,3M  8,9M  9,8M       4 /opt/zram/zram1
/dev/zram0 lz4           600M 12,9M  5,8M  6,5M       4 [SWAP]
2020-04-30 08:00:01
NAME       ALGORITHM DISKSIZE   DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo           500M 108,4M 26,2M   28M       4 /opt/zram/zram2
/dev/zram1 lz4           600M  51,7M 11,7M 12,6M       4 /opt/zram/zram1
/dev/zram0 lz4           600M  23,6M 10,6M 11,5M       4 [SWAP]
2020-05-01 08:00:01
NAME       ALGORITHM DISKSIZE   DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo           500M 366,5M 91,5M   96M       4 /opt/zram/zram2
/dev/zram1 lz4           600M  55,8M 12,7M 13,7M       4 /opt/zram/zram1
/dev/zram0 lz4           600M  71,3M   29M 30,7M       4 [SWAP]
2020-05-02 08:00:02
NAME       ALGORITHM DISKSIZE   DATA  COMPR  TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo           500M 455,1M 112,2M 117,2M       4 /opt/zram/zram2
/dev/zram1 lz4           600M  57,8M  13,4M  14,2M       4 /opt/zram/zram1
/dev/zram0 lz4           600M   226M  79,4M  84,4M       4 [SWAP]
2020-05-03 08:00:01
NAME       ALGORITHM DISKSIZE   DATA  COMPR  TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo           500M 455,1M 112,2M 117,2M       4 /opt/zram/zram2
/dev/zram1 lz4           600M  59,8M    14M    15M       4 /opt/zram/zram1
/dev/zram0 lz4           600M 198,5M    72M  76,4M       4 [SWAP]
2020-05-03 09:50:17
NAME       ALGORITHM DISKSIZE  DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo           500M 44,1M  6,6M  7,3M       4 /opt/zram/zram2
/dev/zram1 lz4           600M   22M  1,9M  2,4M       4 /opt/zram/zram1
/dev/zram0 lz4           600M    4K   76B    4K       4 [SWAP]

I did a reboot on the 28th of April and on the 3rd of May.
The data usage of zram2 is quite close to the available disksize, but it’s not exceeding it. But is this the problem?
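
(For reference, the snapshots above are simply timestamped zramctl output; a cron entry along these lines would produce them - the log file path is just an example:)

# run daily at 08:00; note that % must be escaped as \% inside a crontab
0 8 * * * { date "+\%F \%T"; /sbin/zramctl; } >> /home/pi/zram-usage.log 2>&1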

Some background:
Raspbian Buster
openHAB installed through apt-get (openHAB 2.5.3-1)
openHABian installed through apt-get ([master]v1.5-552(869c2ea))
zram feature installed through openhabian-config

Is there a need to do a periodic reboot or a zram-stop/start to write data to disc? Or should it work continuously if I set the logging size to something more appropriate?

Thanks!

Search the forum for pax.logging to find out how you can tune OH logging and separate that issue from ZRAM.

I don’t think so, because while the uncompressed size is close, the relevant compressed size (the TOTAL column, to be precise) is not.

No (although that should not hurt).

Are you sure logging just simply stops?
Check if logrotate is in operation (see /etc/logrotate.d/zram-config).
Also check if the rotate target dir has sufficient space to rotate into.
It’s on the SD card and not meant to provide space for hundreds of MBs. You should reduce your loglevel.
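
A quick way to check, assuming the default paths:

cat /etc/logrotate.d/zram-config                  # the rotation rules installed by zram-config
sudo logrotate -d /etc/logrotate.d/zram-config    # dry run: shows what logrotate would do
df -h /                                           # free space left on the SD card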

Is your ZRAM install up to date? It does not get updated when openhabian-config updates.
Compare /usr/local/bin/zram-config with latest in repo
Eventually uninstall and reinstall ZRAM.

Will search for logging topics. But still, shouldn’t this information be part of the documentation? Or perhaps useful topics could be added as reference links in the “zram status” thread.

Ok, see below. It seems to be in operation.

/usr/local/share/zram-config/log/zram-config.log
{
        rotate 4
        weekly
        missingok
        notifempty
        compress
        delaycompress
        sharedscripts
}

Regarding sufficient space, I should be ok I guess. This is the summary after a reboot:

##    Memory = Free: 0.04GB (4%), Used: 0.91GB (96%), Total: 0.95GB
##      Swap = Free: 0.67GB (99%), Used: 0.00GB (1%), Total: 0.68GB
##      Root = Free: 9.32GB (70%), Used: 3.99GB (30%), Total: 13.91GB

Yes, noted. I’ve been thinking of doing this anyway. There is way too much activity in the events.log.

Hmm, how to get the version number/date… The local zram-config has a modification date of the 31st of March, and the latest update on the git repo was done three days ago, so there have been adjustments; it’s not up to date.
Will update and come back with feedback on whether or not the issue persists.

Thanks again!

Gather the information and make a PR on the official docs page on logging if you think it is.

That’s something a loglevel will not reduce.

Well, actually, changing the loglevel of smarthome.event.ItemStateChangedEvent from INFO to WARN made a huge difference. But I feel I lost a bit too much.
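
(For reference, that change can be made either live from the Karaf console or permanently in org.ops4j.pax.logging.cfg; the logger label below is just one I picked:)

# from the Karaf console (openhab-cli console):
log:set WARN smarthome.event.ItemStateChangedEvent

# or as a logger block in org.ops4j.pax.logging.cfg ("itemStateChanged" is an arbitrary label):
log4j2.logger.itemStateChanged.name = smarthome.event.ItemStateChangedEvent
log4j2.logger.itemStateChanged.level = WARN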

I will have to adjust the logging size and rotation. Will need to do some reading.
I’m fetching a lot of data from the Fronius solar inverter every 5 and 30 seconds, so that’s why there is so much activity in the event log.

Edit:
A lightbulb appeared!

The discs add up to 1.2 GB, and I’m using the Pi 3B with only 1 GB of RAM. My earlier reply showed 96% memory usage, and that was right after a reboot.

I really thought that openHABian and the zram feature expected a 1 GB Pi as the default choice of platform. Is this the source of my problems? If that’s the case, what disc sizes are recommended for the 1 GB version?


Ok, an update to the issue.

As I reduced the logging enormously by going from INFO to WARN for the ItemStateChangedEvent occurrences, I expected it to take significantly longer for the zram disc to fill up. But now it seems to be full again.

Please see below some data from the last week.

2020-05-03 08:00:01
NAME       ALGORITHM DISKSIZE   DATA  COMPR  TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo           500M 455,1M 112,2M 117,2M       4 /opt/zram/zram2
/dev/zram1 lz4           600M  59,8M    14M    15M       4 /opt/zram/zram1
/dev/zram0 lz4           600M 198,5M    72M  76,4M       4 [SWAP]
2020-05-03 09:50:17
NAME       ALGORITHM DISKSIZE  DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo           500M 44,1M  6,6M  7,3M       4 /opt/zram/zram2
/dev/zram1 lz4           600M   22M  1,9M  2,4M       4 /opt/zram/zram1
/dev/zram0 lz4           600M    4K   76B    4K       4 [SWAP]
2020-05-04 08:00:02
NAME       ALGORITHM DISKSIZE  DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo           500M 60,6M 11,9M 12,8M       4 /opt/zram/zram2
/dev/zram1 lz4           600M 35,8M  7,3M  8,1M       4 /opt/zram/zram1
/dev/zram0 lz4           600M    9M  3,6M  4,2M       4 [SWAP]
2020-05-05 08:00:01
NAME       ALGORITHM DISKSIZE  DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo           500M 77,1M 16,1M 17,2M       4 /opt/zram/zram2
/dev/zram1 lz4           600M 46,4M 10,2M   11M       4 /opt/zram/zram1
/dev/zram0 lz4           600M 16,4M  7,4M  8,2M       4 [SWAP]
2020-05-06 08:00:01
NAME       ALGORITHM DISKSIZE  DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo           500M 86,8M 19,6M 20,9M       4 /opt/zram/zram2
/dev/zram1 lz4           600M   51M 11,4M 12,4M       4 /opt/zram/zram1
/dev/zram0 lz4           600M   36M 15,5M 16,6M       4 [SWAP]
2020-05-07 08:00:01
NAME       ALGORITHM DISKSIZE   DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo           500M 262,3M 64,4M 67,9M       4 /opt/zram/zram2
/dev/zram1 lz4           600M    53M 11,9M 12,8M       4 /opt/zram/zram1
/dev/zram0 lz4           600M 126,3M 47,1M 49,7M       4 [SWAP]
2020-05-08 08:00:01
NAME       ALGORITHM DISKSIZE   DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo           500M 341,4M 80,3M 85,6M       4 /opt/zram/zram2
/dev/zram1 lz4           600M  58,5M 13,4M 14,2M       4 /opt/zram/zram1
/dev/zram0 lz4           600M 158,7M 59,2M 62,4M       4 [SWAP]
2020-05-09 08:00:02
NAME       ALGORITHM DISKSIZE   DATA  COMPR  TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo           500M 469,2M 112,7M 118,9M       4 /opt/zram/zram2
/dev/zram1 lz4           600M  60,3M  13,9M  14,8M       4 /opt/zram/zram1
/dev/zram0 lz4           600M 186,6M  69,1M    73M       4 [SWAP]
2020-05-10 08:00:01
NAME       ALGORITHM DISKSIZE   DATA  COMPR  TOTAL STREAMS MOUNTPOINT
/dev/zram2 lzo           500M 469,2M 112,6M 117,7M       4 /opt/zram/zram2
/dev/zram1 lz4           600M  64,9M  15,1M    16M       4 /opt/zram/zram1
/dev/zram0 lz4           600M 227,3M  80,4M  84,9M       4 [SWAP]

Also a disk usage output for the zram2 disc:

[10:46:30] pi@myhabpi:/opt/zram$ sudo du zram2/ -lh
16K     zram2/lost+found
4,0K    zram2/workdir/work
8,0K    zram2/workdir
32K     zram2/upper/unattended-upgrades
68K     zram2/upper/exim4
1,8M    zram2/upper/openhab2
268K    zram2/upper/amanda/amandad
8,0K    zram2/upper/amanda/log.error
24K     zram2/upper/amanda/server/myhabpi-complete
544K    zram2/upper/amanda/server/openhab-dir
608K    zram2/upper/amanda/server
88K     zram2/upper/amanda/openhab-dir/oldlog
264K    zram2/upper/amanda/openhab-dir
20K     zram2/upper/amanda/client/myhabpi-complete
160K    zram2/upper/amanda/client/openhab-dir
192K    zram2/upper/amanda/client
1,4M    zram2/upper/amanda
360M    zram2/upper/journal/181de35982dd42109cd0a94f75909b10
360M    zram2/upper/journal
36K     zram2/upper/apt
84K     zram2/upper/grafana
612K    zram2/upper/samba
458M    zram2/upper
458M    zram2/
[10:46:36] pi@myhabpi:/opt/zram$

And a screenshot showing the Frontail web interface for the period where the logging stopped.

I have not yet changed the log rotation preferences from the default ones. I have only made the INFO to WARN change as stated.
Also, I did a reinstall of the zram feature on the 3rd of May, as recommended.

As I now believe, adjusting the logging settings will not be effective in dealing with this issue. What’s filling up the zram2/upper/journal directory?

I don’t understand your post. If you want to know what’s filling up a dir, then just read what’s in there.
It doesn’t happen in the default config. So no one but you (who modified logging and God knows what else) can know what it is on your box.

Note there’s an upcoming change to ZRAM default deployments that you can apply as well.
Change /var/lib/openhab2 in /etc/ztab to /var/lib/openhab2/persistence; that’ll consume a lot less memory, so you could eventually increase the limit for /var/log.
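
Roughly like this (an illustrative excerpt only - the column values and bind dir names on your system may differ, and stop ZRAM before editing /etc/ztab):

# /etc/ztab, illustrative values only
# before:
dir   lzo   150M   500M   /var/lib/openhab2               /openhab2.bind
# after - only persistence lives in ZRAM, which needs far less memory:
dir   lzo   150M   500M   /var/lib/openhab2/persistence   /persistence.bind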

Ok, I might have been vague in my post.
The thing is, I don’t know what’s filling up the journal directory. But this is then perhaps not an openHAB or zram question.

There is only one subdirectory in the journal folder and in that one there is only one file, see below:

/journal/181de35982dd42109cd0a94f75909b10/user-1000.journal

As I mentioned it seems to me that the problem is not really related to openhab logging size but I’m not sure. From my earlier post it can be seen that most of the content resides in the journal directory and not in the openhab one.

I haven’t made that many changes to the vanilla Buster release. In addition to following the install steps presented in the openHAB docs, the only addition I can think of is the PiJuice software, which is needed for my UPS HAT.

But for now I’ll look into what’s filling up the journal directory. The file is binary, but I’ll do some reading.

That’s the systemd journal service.
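
If it keeps growing, you can check and cap it; something along these lines (the sizes are just examples):

journalctl --disk-usage                 # how much the journal currently occupies
sudo journalctl --vacuum-size=30M       # shrink it right away
# to cap it permanently, set these in /etc/systemd/journald.conf,
# then run: sudo systemctl restart systemd-journald
[Journal]
SystemMaxUse=30M
RuntimeMaxUse=30M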

Does that mean you do NOT use openHABian? Why didn’t you tell us?
That’s misleading, horribly annoying and a major pain for people willing to support you if you don’t state these most basic facts upfront.
Migrate to openHABian if you want support.
The journal issue is also fixed in there, along with many others.

I mentioned that fact in my first post from 7 days ago.

If there has been a journal fix that sounds very relevant to my problem, perhaps.

Do I need to do a migration even if I did the install through the apt-get strategy? Are there additional tweaks in the “openhabian-image” release compared to what’s achieved by adding the openhabian package to the Buster release?

Your system today obviously is different from the one the image would get you, so YES.
You need to reinstall anyway, so where’s the point in asking that [a rhetorical question, that is].
No, I’m not gonna help people work around the recommended installation procedure.

No, actually I do not need to reinstall anything.

I also have the choice of abandoning this zram feature or just trying to tolerate any strange behaviors.

My UPS solution expects a clean Buster install as a precondition for setting it up. It apparently cannot easily be installed on a “lite” openHABian image (of course it could be, if one possesses the needed insight).

Both openHAB and openHABian offer the choice of installing them as packages; perhaps this choice should be removed if the bugs aren’t dealt with on all flanks.

Perhaps an additional text could be added to the warning notice presented when choosing to install the zram feature. “This is an even more hazardous action if the openhabian install isn’t part of the openhabian image distribution.”

When in your opinion did I deviate from the recommended install procedure? Where does it say that one has to start from the openhabian image and that problems will most likely come if using the apt-get install of openhabian?

Thank you for your very friendly answers, without any emotional tendencies [irony that was].
As I said, I gave you the background to my setup in my first post. You failed to pay attention.

Sure, choice is yours. It just happens to be the fastest solution. Use NUT for your UPS.

It’s always the same story. People think they’re more clever than those who bundle distros, hence choose to ignore the recommendations, and when it doesn’t work they complain “hey, but you didn’t tell me I may not do XXX, so that is expected to work”.

No, it isn’t. That’s not how the world and how SW development work, let alone in complex all-volunteer projects like this.
Any developer can develop, test and debug against a defined environment ONLY.
That means a need to define a starting point such as an unmodified Raspbian (hence the image).
That implies that if you deviate from that (“I haven’t done that many changes to the vanilla buster release”) and something fails thereafter, while on the unchanged system it works OK, then it must have been caused by your changes. And it means that no one but you can know what it was, so don’t try inverting responsibilities.

Yes, you did, but one literally had to search for it. And you didn’t mention your changes. You didn’t mention the UPS. You use vague, somewhat incomprehensible and easily misunderstood wording. You’ve put it below an awkwardly large code window instead of at the beginning where it belongs.
Possibly because you were afraid people wouldn’t care about your post if they read about the modifications upfront (no offense; yes, I’m guessing here and I may be wrong in doing so, but that is an often encountered pattern).

Irony on which part, the one before or the one after the comma? Makes for a huge difference.

To me it reads like you’re lacking respect for the work and time others spend to enable and help you (for free, needless to say).
If you want help, you have to play by the helpers’ rules.
If you don’t want to do that, you may not expect anybody to spend any of their time.