ZRAM issues

I tried reproducing this and think it would happen with an old version of zram-config.
So replace yours with the latest, and append a -x to the first line of the script to get meaningful output the next time it is run.
Also check the log in /usr/local/share/zram-config/logs.
@Lars_R FYI, too.

What I did:

  1. Followed your link and instructions:
  • sudo apt-get install git
    
  • git clone https://github.com/StuartIanNaylor/zram-config
    
  • cd zram-config
    

Manually appended a -x in zram-config like this:

  • #! /bin/bash -x
    
  • sudo sh install.sh
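
The manual edit above can also be scripted. A minimal sketch, demonstrated here on a throwaway sample file rather than the real zram-config script (edit the real one at your own risk):

```shell
# Stand-in for the real zram-config script (assumption: it starts with
# a "#! /bin/bash" shebang, as shown in the post above).
sample=$(mktemp)
printf '#! /bin/bash\necho hello\n' > "$sample"

# Rewrite the first line so every run prints an execution trace (-x)
sed -i '1s|^#! */bin/bash.*|#! /bin/bash -x|' "$sample"

head -n1 "$sample"   # -> #! /bin/bash -x
```

With -x in place, every command the script executes is echoed, which is what produces the `+ echo …` trace lines visible in the systemd status output below.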
    
  2. Installation performed without errors

  3. Rebooted the system: sudo reboot

  4. Checked sudo systemctl status zram-config (no errors)

● zram-config.service - zram-config
Loaded: loaded (/etc/systemd/system/zram-config.service; enabled; vendor preset: enabled)
Active: active (exited) since Fri 2020-06-12 12:36:09 CEST; 2min 16s ago
Process: 352 ExecStart=/usr/local/bin/zram-config start (code=exited, status=0/SUCCESS)
Main PID: 352 (code=exited, status=0/SUCCESS)

Jun 12 12:36:09 openhab zram-config[352]: + echo 'log /zram1 /var/log /log.bind'
Jun 12 12:36:09 openhab zram-config[352]: + invoke-rc.d rsyslog restart
Jun 12 12:36:09 openhab zram-config[352]: + journalctl --flush
Jun 12 12:36:09 openhab zram-config[352]: + '[' '!' -z /opt/zram/oldlog ']'
Jun 12 12:36:09 openhab zram-config[352]: + echo 'olddir /opt/zram/oldlog'
Jun 12 12:36:09 openhab zram-config[352]: + echo 'createolddir 755 root root'
Jun 12 12:36:09 openhab zram-config[352]: + echo renamecopy
Jun 12 12:36:09 openhab zram-config[352]: + read -r line
Jun 12 12:36:09 openhab zram-config[352]: + '[' false = true ']'
Jun 12 12:36:09 openhab systemd[1]: Started zram-config.

  5. Checked systemctl status nginx.service (with errors)

nginx.service - A high performance web server and a reverse proxy server

Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)

Active: failed (Result: exit-code) since Fri 2020-06-12 12:36:25 CEST; 2min 57s ago

Docs: man:nginx(8)

Process: 627 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=1/FAILURE)

Jun 12 12:36:20 openhab systemd[1]: Starting A high performance web server and a reverse proxy server…

Jun 12 12:36:21 openhab nginx[627]: nginx: [alert] could not open error log file: open() "/var/log/nginx/error.log" failed (28: No space left on device)

Jun 12 12:36:25 openhab nginx[627]: 2020/06/12 12:36:21 [emerg] 627#627: open() "/var/log/nginx/access.log" failed (28: No space left on device)

Jun 12 12:36:25 openhab nginx[627]: nginx: configuration file /etc/nginx/nginx.conf test failed

Jun 12 12:36:25 openhab systemd[1]: nginx.service: Control process exited, code=exited, status=1/FAILURE

Jun 12 12:36:25 openhab systemd[1]: nginx.service: Failed with result 'exit-code'.

Jun 12 12:36:25 openhab systemd[1]: Failed to start A high performance web server and a reverse proxy server

  6. Checked zram-config.log (entries since start)

zram-config start 2020-06-12-12:36:08

ztab create swap lz4 250M 750M 75 0 80

insmod /lib/modules/4.19.118-v7l+/kernel/mm/zsmalloc.ko

insmod /lib/modules/4.19.118-v7l+/kernel/drivers/block/zram/zram.ko

zram0 created comp_algorithm=lz4 mem_limit=250M disksize=750M

Setting up swapspace version 1, size = 750 MiB (786427904 bytes)

LABEL=zram-config0, UUID=bde800a9-ae40-4472-8d7a-5fc3ac5de39c

swapon: /dev/zram0: found signature [pagesize=4096, signature=swap]

swapon: /dev/zram0: pagesize=4096, swapsize=786432000, devsize=786432000

swapon /dev/zram0

vm.page-cluster = 0

vm.swappiness = 80

ztab create log lz4 50M 150M /var/log /log.bind /opt/zram/oldlog

Warning: Stopping rsyslog.service, but it can still be activated by:

syslog.socket

dirPerm /var/log 755 0:0

mount: /var/log bound on /opt/zram/log.bind.

mount: /opt/zram/log.bind propagation flags changed.

dirMountOpt rw,noatime dirFsType ext4

zram1 created comp_algorithm=lz4 mem_limit=50M disksize=150M

mke2fs 1.44.5 (15-Dec-2018)

fs_types for mke2fs.conf resolution: 'ext4', 'small'

Discarding device blocks: done

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

Stride=0 blocks, Stripe width=0 blocks

38400 inodes, 38400 blocks

1920 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=39845888

2 block groups

32768 blocks per group, 32768 fragments per group

19200 inodes per group

Filesystem UUID: e24c059e-d7b7-4430-abd3-10a5954a021b

Superblock backups stored on blocks:

32768

Allocating group tables: done

Writing inode tables: done

Creating journal (4096 blocks): done

Writing superblocks and filesystem accounting information: done

mount: /dev/zram1 mounted on /opt/zram/zram1.

mount: overlay1 mounted on /var/log.

By the time nginx starts, /var/log is full. If ZRAM is off at that time, /var/log is actually on /, so you can delete other unneeded stuff from /, too.

So free up space there! You still have not done that.
sudo apt-get clean or sudo apt autoremove often helps without doing harm.
Stop ZRAM first, or delete older files from the directory named as bind_dir for /var/log in ztab.
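
A minimal sketch of such a cleanup, demonstrated in a temporary directory standing in for /var/log (adapt the path to your system, and stop ZRAM first before touching the real one; the file names are assumptions):

```shell
# Sandbox stand-in for /var/log; on a real system you would operate on
# the bind_dir from ztab while ZRAM is stopped.
logdir=$(mktemp -d)
printf 'old\n'  > "$logdir/syslog.1"       # rotated log
printf 'old\n'  > "$logdir/daemon.log.gz"  # compressed rotated log
printf 'live\n' > "$logdir/syslog"         # live log

# Delete rotated logs; truncate live ones instead of deleting them,
# so daemons holding them open keep a valid file handle.
find "$logdir" -type f \( -name '*.1' -o -name '*.gz' \) -delete
truncate -s 0 "$logdir/syslog"

ls "$logdir"   # only the (now empty) live log remains
```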

PS: you can also check journalctl -u zram-config or journalctl -u nginx

OK

  1. OK, I stopped ZRAM: sudo service zram-config stop

  2. I cleared all the logs in /var/log with
    sudo find /var/log/ -type f -exec cp /dev/null {} \;
    which I found here https://serverfault.com/a/155301

  3. Reduced the journal size via journald.conf, which was over 2 GB
    https://got-tty.org/journalctl-via-journald-conf-die-loggroesse-definieren

  4. System reboot

  5. So far, so good: everything (incl. nginx) works as expected
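
For step 3, the journal can be capped permanently in /etc/systemd/journald.conf; a fragment like the following (the values are examples, not recommendations):

```ini
# /etc/systemd/journald.conf — cap disk usage of the persistent journal
[Journal]
SystemMaxUse=50M
SystemMaxFileSize=10M
```

After editing, sudo systemctl restart systemd-journald applies the limits, and journalctl --vacuum-size=50M trims already-written journal files down to the new cap.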

Thank you very much

I have the issue that all logging seems to stop after ~3 days on my RPi4 with 4GB of memory.

dmesg gives me plenty of errors about writing journal entries (I forgot to copy them before restarting).
zramctl looked OK and was nowhere near full.
df -h also didn't show anything special.

What would be the best way to debug this?

How to ask a good question / Help Us Help You - Tutorials & Examples - openHAB Community

Sorry for not being clearer, and I hope this is not considered hijacking, because the problem was mentioned here before…
Today it happened again.
I'm trying to give as much useful information as possible, but because all logging in the system essentially stops, it is difficult. Please let me know if I missed something!

openhabian@openHAB:~ $ zramctl
NAME       ALGORITHM DISKSIZE   DATA  COMPR  TOTAL STREAMS MOUNTPOINT
/dev/zram1 lzo-rle       250M 240.6M  46.1M 120.9M       4 /opt/zram/zram1
/dev/zram0 lzo-rle       200M    12K   1.4K     8K       4 [SWAP]

openhabian@openHAB:~ $ cat /etc/ztab
# swap	alg		mem_limit	disk_size	swap_priority	page-cluster	swappiness
swap	lzo-rle		200M		600M		75		0		80

# dir	alg		mem_limit	disk_size	target_dir			bind_dir
dir	lzo-rle		150M		1000M		/var/lib/openhab2/persistence	/persistence.bind

# log	alg		mem_limit	disk_size	target_dir		bind_dir		oldlog_dir
log	lzo-rle		250M		500M		/var/log		/log.bind

openhabian@openHAB:~ $ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root        29G  6.7G   22G  25% /
devtmpfs        1.8G     0  1.8G   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           1.9G  179M  1.8G  10% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda3        30G   45M   28G   1% /storage
/dev/mmcblk0p1  253M   45K  253M   1% /boot
/dev/zram1      227M  222M     0 100% /opt/zram/zram1
overlay1        227M  222M     0 100% /var/log
tmpfs           388M     0  388M   0% /run/user/1000

events.log & openhab.log just stop mid-line and are not written to any more.
It seems like the 250 MB memory limit is used as the maximum size of the zram device, and not the 500 MB disk_size it should be, if I understood the config correctly.
Also, shouldn't old logs have been rotated out of that folder? I have a few log.1, log.2 and log.gz files there that fill up the device.
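
To make the mismatch concrete: a small helper (hypothetical, not part of zram-config) converting ztab size suffixes to bytes shows what DISKSIZE in zramctl should report versus what the buggy version reports for the log device:

```shell
# to_bytes: convert a ztab-style size (e.g. 250M, 1G) to bytes
# (hypothetical helper for illustration only)
to_bytes() {
  case "$1" in
    *M) echo $(( ${1%M} * 1024 * 1024 )) ;;
    *G) echo $(( ${1%G} * 1024 * 1024 * 1024 )) ;;
    *)  echo "$1" ;;
  esac
}

to_bytes 500M   # ztab disk_size: what zramctl DISKSIZE should show -> 524288000
to_bytes 250M   # ztab mem_limit: what the buggy version shows instead -> 262144000
```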

Read more carefully.
You still fail to give the most basic information. OS and OH versions? openHABian branch?

Right. That should not happen. Have you restarted ZRAM or rebooted? Reinstalled?

no
OH handles that

There was a bug in the creation of zram devices that was causing the mem limit to be used as both the max disk size and mem limit for all devices. If you reinstall it should be fixed and resolve your issue.

This is what openhabian-config gives me:

openHABian Configuration Tool [main]v1.6.3-1192(9b398a9)

I'm on the main branch with OH3 Milestone 1.
Not sure where I can find the exact openHABian OS version.

If I restart the issue goes away for a bit till the zram device is full again.

I just reinstalled zram (uninstall, reboot, install) via openhabian-cli.

How can I verify I have the correct version?
It still seems to limit the disk size to the mem_limit.

openhabian@openHAB:/var/log/openhab $ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root        29G  6.8G   22G  25% /
devtmpfs        1.8G     0  1.8G   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           1.9G  9.4M  1.9G   1% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda3        30G   45M   28G   1% /storage
/dev/mmcblk0p1  253M   48M  205M  19% /boot
/dev/zram1      227M  193M   16M  93% /opt/zram/zram1
overlay1        227M  193M   16M  93% /var/log
tmpfs           388M     0  388M   0% /run/user/1000

The output of zramctl should look something like this, and if it does, then it is fixed. Pay special attention to the DISKSIZE column, as it should match yours exactly if it is fixed.

NAME       ALGORITHM DISKSIZE  DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 zstd          600M 42.3M  2.6M 10.4M       4 /opt/zram/zram2
/dev/zram1 zstd          900M 46.8M  1.8M 21.9M       4 /opt/zram/zram1
/dev/zram0 lzo-rle       600M 29.7M   12M 15.4M       4 [SWAP]

It seems I still have something to fix…

openhabian@openHAB:/var/log/openhab $ zramctl --version
zramctl from util-linux 2.33.1
openhabian@openHAB:/var/log/openhab $ zramctl
NAME       ALGORITHM DISKSIZE   DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram1 lzo-rle       250M 214.4M   38M  107M       4 /opt/zram/zram1
/dev/zram0 lzo-rle       200M     4K   86B    4K       4 [SWAP]
openhabian@openHAB:/var/log/openhab $ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root        29G  6.8G   22G  25% /
devtmpfs        1.8G     0  1.8G   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           1.9G  9.4M  1.9G   1% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda3        30G   45M   28G   1% /storage
/dev/mmcblk0p1  253M   48M  205M  19% /boot
/dev/zram1      227M  199M   11M  96% /opt/zram/zram1
overlay1        227M  199M   11M  96% /var/log
tmpfs           388M     0  388M   0% /run/user/1000

But if I read ztab right, it should be 500 MB, not 250 MB:

# swap	alg		mem_limit	disk_size	swap_priority	page-cluster	swappiness
swap	lzo-rle		200M		600M		75		0		80

# dir	alg		mem_limit	disk_size	target_dir			bind_dir
dir	lzo-rle		150M		1000M		/var/lib/openhab2/persistence	/persistence.bind

# log	alg		mem_limit	disk_size	target_dir		bind_dir		oldlog_dir
log	lzo-rle		250M		500M		/var/log		/log.bind

I used openhabian-config to first uninstall and then reinstall zram. Is there a better method to ensure I have the correct version installed?

No, you are not on the latest version. Try running it again, and make sure that you reboot between uninstalling and reinstalling, as a precaution. To be extra safe, you could run sudo rm -rf /opt/zram after uninstalling and rebooting, to ensure that openhabian-config will redownload zram when installing.
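
A sketch of how to verify the rm -rf step worked before reinstalling (the check_dir helper is hypothetical; /opt/zram is the location mentioned above):

```shell
# check_dir: report whether a cached copy is still present
# (hypothetical helper for illustration)
check_dir() {
  if [ -d "$1" ]; then echo present; else echo absent; fi
}

# After 'sudo rm -rf /opt/zram' and a reboot, this should print
# "absent", which forces openhabian-config to redownload zram on install:
check_dir /opt/zram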

@ndye Thanks! I think deleting /opt/zram did the trick!
I also noticed that /etc/ztab got overwritten, but I guess that's on purpose.

Yep, glad it worked!

Configuration

##   Release = Raspbian GNU/Linux 10 (buster)
##    Kernel = Linux 5.10.52-v7+
##  Platform = Raspberry Pi 3 Model B Rev 1.2
##    Uptime = 0 day(s). 0:19:19
## CPU Usage = 3.05% avg over 4 cpu(s) (4 core(s) x 1 socket(s))
##  CPU Load = 1m: 0.18, 5m: 0.25, 15m: 0.42
##    Memory = Free: 0.04GB (5%), Used: 0.90GB (95%), Total: 0.94GB
##      Swap = Free: 2.23GB (100%), Used: 0.00GB (0%), Total: 2.24GB
##      Root = Free: 21.52GB (79%), Used: 5.66GB (21%), Total: 28.38GB
##   Updates = 0 apt updates available.
##  Sessions = 2 session(s)
## Processes = 132 running processes of 32768 maximum processes
##Openhabian = 3.1.0 - Release Build

Summary
I've had a similar situation where frontail no longer shows the logs and df -h shows /var/log at 100%. I uninstalled zram using openhabian-config, ran sudo rm -rf /opt/zram on the CLI after exiting openhabian-config, and then rebooted. Afterwards, I went back into openhabian-config and installed zram. However, the version stays the same, as does the disksize. What am I doing wrong?


openhabian@casajuarez:~ $ zramctl --version
zramctl from util-linux 2.33.1
openhabian@casajuarez:~ $ zramctl
NAME       ALGORITHM DISKSIZE  DATA  COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram2 zstd          450M 17.2M 205.3K  564K       4 /opt/zram/zram2
/dev/zram1 zstd          350M 33.9M 654.1K  1.1M       4 /opt/zram/zram1
/dev/zram0 lzo-rle       450M  3.2M  59.7K  320K       4 [SWAP]
openhabian@casajuarez:~ $ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root        29G  5.7G   22G  21% /
devtmpfs        454M     0  454M   0% /dev
tmpfs           487M     0  487M   0% /dev/shm
tmpfs           487M  7.4M  479M   2% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           487M     0  487M   0% /sys/fs/cgroup
/dev/mmcblk0p1  253M   49M  204M  20% /boot
tmpfs            98M     0   98M   0% /run/user/1000
/dev/zram1      324M   18M  281M   7% /opt/zram/zram1
overlay1        324M   18M  281M   7% /var/lib/openhab/persistence
/dev/zram2      420M  1.6M  387M   1% /opt/zram/zram2
overlay2        420M  1.6M  387M   1% /var/log

Btw, I can see the logs when I open the Karaf console and type log:tail.

zramctl is not part of the zram-config package; it ships with util-linux (as its --version output shows) and merely queries the kernel's zram driver. From what you have shared, your zram is working correctly right now. Please provide logs relating to the topic at hand if you want me to be able to help you.

I have a similar problem. :neutral_face:
The system was set up a week ago (openHABian); zram was installed during the unattended setup (no re-install):

###############################################################################
###############  ohab3  #######################################################
###############################################################################
##   Release = Raspbian GNU/Linux 10 (buster)
##    Kernel = Linux 5.10.52-v7l+
##  Platform = Raspberry Pi 4 Model B Rev 1.1
##    Uptime = 0 day(s). 16:38:6
## CPU Usage = 0% avg over 4 cpu(s) (4 core(s) x 1 socket(s))
##  CPU Load = 1m: 0.56, 5m: 0.24, 15m: 0.12
##    Memory = Free: 2.33GB (61%), Used: 1.46GB (39%), Total: 3.79GB
##      Swap = Free: 0.53GB (100%), Used: 0.00GB (0%), Total: 0.53GB
##      Root = Free: 2.90GB (43%), Used: 3.77GB (57%), Total: 7.00GB
##   Updates = 12 apt updates available.
##  Sessions = 2 session(s)
## Processes = 139 running processes of 32768 maximum processes
###############################################################################

                          _   _     _     ____   _
  ___   ___   ___   ___  | | | |   / \   | __ ) (_)  ____   ___
 / _ \ / _ \ / _ \ / _ \ | |_| |  / _ \  |  _ \ | | / _  \ / _ \
| (_) | (_) |  __/| | | ||  _  | / ___ \ | |_) )| || (_) || | | |
 \___/|  __/ \___/|_| |_||_| |_|/_/   \_\|____/ |_| \__|_||_| | |
      |_|                          3.2.0.M1 - Milestone Build

Today the daily Amanda backup mail was missing from my inbox, so I checked the frontail logs, but they were empty. I had the same problem on my old setup (2.5.x).

I'm not sure if this is the same issue as mentioned here, because my system doesn't say /var/log is at 100%:

openhabian@ohab3:~ $ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       7.1G  3.8G  3.0G  57% /
devtmpfs        1.7G     0  1.7G   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           1.9G   26M  1.9G   2% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda3       7.3G  272M  6.7G   4% /storage
/dev/mmcblk0p1  253M   49M  204M  20% /boot
/dev/zram1      324M  177M  123M  60% /opt/zram/zram1
overlay1        324M  177M  123M  60% /var/lib/openhab/persistence
/dev/zram2      324M   28M  271M  10% /opt/zram/zram2
overlay2        324M   28M  271M  10% /var/lib/influxdb
/dev/zram3      420M   11M  379M   3% /opt/zram/zram3
overlay3        420M   11M  379M   3% /var/log
tmpfs           389M     0  389M   0% /run/user/1000

I guess this is normal with zram?

openhabian@ohab3:~ $ cat /var/log/openhab/openhab.log
cat: /var/log/openhab/openhab.log: No such file or directory

zram:

openhabian@ohab3:~ $ zramctl
NAME       ALGORITHM DISKSIZE   DATA COMPR  TOTAL STREAMS MOUNTPOINT
/dev/zram3 zstd          450M  25.9M  2.1M    13M       4 /opt/zram/zram3
/dev/zram2 zstd          350M  91.4M 56.7M  65.2M       4 /opt/zram/zram2
/dev/zram1 zstd          350M 212.2M  3.5M 105.8M       4 /opt/zram/zram1
/dev/zram0 lzo-rle       450M     4K   86B     4K       4 [SWAP]
openhabian@ohab3:~ $ zramctl --version
zramctl from util-linux 2.33.1

ztab:

# swap  alg             mem_limit       disk_size       swap_priority   page-cluster    swappiness
swap    lzo-rle         200M            450M            75              0               80

# dir   alg             mem_limit       disk_size       target_dir                      bind_dir
dir     zstd            150M            350M            /var/lib/openhab/persistence    /persistence.bind
dir     zstd            150M            350M            /var/lib/influxdb               /influxdb.bind

# log   alg             mem_limit       disk_size       target_dir              bind_dir                oldlog_dir
log     zstd            200M            450M            /var/log                /log.bind

While setting up the system I had a problem with InfluxDB (solved here), but I don't know if this is relevant… journalctl -xe has many lines that are connected to InfluxDB:

-- Logs begin at Sun 2021-08-08 20:34:36 CEST, end at Mon 2021-08-09 13:20:02 CEST. --
Aug 09 12:29:30 ohab3 influxd-systemd-start.sh[723]: [httpd] 127.0.0.1 - openhab [09/Aug/2021:12:29:30 +0200] "POST /write?db=openhab&rp=autogen&precision=n&consistency=one HTTP/1.1 " 204 0 "-" "okhttp/3.14.4" ae76cca9-f8fc-11eb-8098-dca6322f7bc3 1022
...
...
... (lots of entries)
Aug 09 13:19:18 ohab3 influxd-systemd-start.sh[723]: [httpd] 127.0.0.1 - openhab [09/Aug/2021:13:19:18 +0200] "POST /write?db=openhab&rp=autogen&precision=n&consistency=one HTTP/1.1 " 204 0 "-" "okhttp/3.14.4" a32c866d-f903-11eb-8426-dca6322f7bc3 10
Aug 09 13:19:19 ohab3 influxd-systemd-start.sh[723]: [httpd] 127.0.0.1 - openhab [09/Aug/2021:13:19:19 +0200] "POST /write?db=openhab&rp=autogen&precision=n&consistency=one HTTP/1.1 " 204 0 "-" "okhttp/3.14.4" a4030c77-f903-11eb-8427-dca6322f7bc3 12
Aug 09 13:19:32 ohab3 influxd-systemd-start.sh[723]: [httpd] 127.0.0.1 - openhab [09/Aug/2021:13:19:32 +0200] "POST /write?db=openhab&rp=autogen&precision=n&consistency=one HTTP/1.1 " 204 0 "-" "okhttp/3.14.4" aba7d64f-f903-11eb-8428-dca6322f7bc3 11
Aug 09 13:19:33 ohab3 influxd-systemd-start.sh[723]: [httpd] 127.0.0.1 - openhab [09/Aug/2021:13:19:33 +0200] "POST /write?db=openhab&rp=autogen&precision=n&consistency=one HTTP/1.1 " 204 0 "-" "okhttp/3.14.4" ac22bffa-f903-11eb-8429-dca6322f7bc3 10
Aug 09 13:19:34 ohab3 influxd-systemd-start.sh[723]: [httpd] 127.0.0.1 - openhab [09/Aug/2021:13:19:34 +0200] "POST /write?db=openhab&rp=autogen&precision=n&consistency=one HTTP/1.1 " 204 0 "-" "okhttp/3.14.4" ad17c095-f903-11eb-842a-dca6322f7bc3 11
Aug 09 13:19:38 ohab3 influxd-systemd-start.sh[723]: [httpd] 127.0.0.1 - openhab [09/Aug/2021:13:19:38 +0200] "POST /write?db=openhab&rp=autogen&precision=n&consistency=one HTTP/1.1 " 204 0 "-" "okhttp/3.14.4" af02cf8f-f903-11eb-842b-dca6322f7bc3 14
Aug 09 13:19:39 ohab3 influxd-systemd-start.sh[723]: [httpd] 127.0.0.1 - openhab [09/Aug/2021:13:19:39 +0200] "POST /write?db=openhab&rp=autogen&precision=n&consistency=one HTTP/1.1 " 204 0 "-" "okhttp/3.14.4" af9c438b-f903-11eb-842c-dca6322f7bc3 96
Aug 09 13:19:41 ohab3 influxd-systemd-start.sh[723]: [httpd] 127.0.0.1 - openhab [09/Aug/2021:13:19:41 +0200] "POST /write?db=openhab&rp=autogen&precision=n&consistency=one HTTP/1.1 " 204 0 "-" "okhttp/3.14.4" b0ceaa2c-f903-11eb-842d-dca6322f7bc3 25
Aug 09 13:19:43 ohab3 influxd-systemd-start.sh[723]: [httpd] 127.0.0.1 - openhab [09/Aug/2021:13:19:43 +0200] "POST /write?db=openhab&rp=autogen&precision=n&consistency=one HTTP/1.1 " 204 0 "-" "okhttp/3.14.4" b210bd4e-f903-11eb-842e-dca6322f7bc3 19
Aug 09 13:19:44 ohab3 influxd-systemd-start.sh[723]: [httpd] 127.0.0.1 - openhab [09/Aug/2021:13:19:44 +0200] "POST /write?db=openhab&rp=autogen&precision=n&consistency=one HTTP/1.1 " 204 0 "-" "okhttp/3.14.4" b2aa51bb-f903-11eb-842f-dca6322f7bc3 10
Aug 09 13:19:44 ohab3 influxd-systemd-start.sh[723]: [httpd] 127.0.0.1 - openhab [09/Aug/2021:13:19:44 +0200] "POST /write?db=openhab&rp=autogen&precision=n&consistency=one HTTP/1.1 " 204 0 "-" "okhttp/3.14.4" b2f75cda-f903-11eb-8430-dca6322f7bc3 19
Aug 09 13:19:47 ohab3 influxd-systemd-start.sh[723]: [httpd] 127.0.0.1 - openhab [09/Aug/2021:13:19:47 +0200] "POST /write?db=openhab&rp=autogen&precision=n&consistency=one HTTP/1.1 " 204 0 "-" "okhttp/3.14.4" b494f275-f903-11eb-8431-dca6322f7bc3 15
Aug 09 13:19:50 ohab3 influxd-systemd-start.sh[723]: [httpd] 127.0.0.1 - openhab [09/Aug/2021:13:19:50 +0200] "POST /write?db=openhab&rp=autogen&precision=n&consistency=one HTTP/1.1 " 204 0 "-" "okhttp/3.14.4" b67eaa39-f903-11eb-8432-dca6322f7bc3 19
Aug 09 13:19:57 ohab3 influxd-systemd-start.sh[723]: [httpd] 127.0.0.1 - openhab [09/Aug/2021:13:19:57 +0200] "POST /write?db=openhab&rp=autogen&precision=n&consistency=one HTTP/1.1 " 204 0 "-" "okhttp/3.14.4" ba429a5d-f903-11eb-8433-dca6322f7bc3 28
Aug 09 13:19:57 ohab3 influxd-systemd-start.sh[723]: [httpd] 127.0.0.1 - openhab [09/Aug/2021:13:19:57 +0200] "POST /write?db=openhab&rp=autogen&precision=n&consistency=one HTTP/1.1 " 204 0 "-" "okhttp/3.14.4" ba52f7f4-f903-11eb-8434-dca6322f7bc3 22
Aug 09 13:19:58 ohab3 influxd-systemd-start.sh[723]: [httpd] 127.0.0.1 - openhab [09/Aug/2021:13:19:58 +0200] "POST /write?db=openhab&rp=autogen&precision=n&consistency=one HTTP/1.1 " 204 0 "-" "okhttp/3.14.4" bafbd76e-f903-11eb-8435-dca6322f7bc3 10
Aug 09 13:19:59 ohab3 influxd-systemd-start.sh[723]: [httpd] 127.0.0.1 - openhab [09/Aug/2021:13:19:59 +0200] "POST /write?db=openhab&rp=autogen&precision=n&consistency=one HTTP/1.1 " 204 0 "-" "okhttp/3.14.4" bbf0fadf-f903-11eb-8436-dca6322f7bc3 15
Aug 09 13:20:00 ohab3 influxd-systemd-start.sh[723]: [httpd] 127.0.0.1 - openhab [09/Aug/2021:13:20:00 +0200] "POST /write?db=openhab&rp=autogen&precision=n&consistency=one HTTP/1.1 " 204 0 "-" "okhttp/3.14.4" bc5c9839-f903-11eb-8437-dca6322f7bc3 11
Aug 09 13:20:01 ohab3 influxd-systemd-start.sh[723]: [httpd] 127.0.0.1 - openhab [09/Aug/2021:13:20:01 +0200] "POST /write?db=openhab&rp=autogen&precision=n&consistency=one HTTP/1.1 " 204 0 "-" "okhttp/3.14.4" bcb8e0d9-f903-11eb-8438-dca6322f7bc3 17
Aug 09 13:20:01 ohab3 influxd-systemd-start.sh[723]: [httpd] 127.0.0.1 - openhab [09/Aug/2021:13:20:01 +0200] "POST /write?db=openhab&rp=autogen&precision=n&consistency=one HTTP/1.1 " 204 0 "-" "okhttp/3.14.4" bccc562b-f903-11eb-8439-dca6322f7bc3 20
Aug 09 13:20:02 ohab3 influxd-systemd-start.sh[723]: [httpd] 127.0.0.1 - openhab [09/Aug/2021:13:20:02 +0200] "POST /write?db=openhab&rp=autogen&precision=n&consistency=one HTTP/1.1 " 204 0 "-" "okhttp/3.14.4" bd56b40f-f903-11eb-843a-dca6322f7bc3 13

journalctl -xe says there are 1129 lines; most of them look like those above…

As I also had problems with Amanda, I checked it as well:

[13:24:07] backup@ohab3:~$ amcheck openhab-dir
amcheck: critical (fatal): create debug directory "/var/log/amanda/server/": Permission denied
amcheck: create debug directory "/var/log/amanda/server/": Permission denied
[13:24:14] backup@ohab3:~$ amreport openhab-dir
amreport: critical (fatal): create debug directory "/var/log/amanda/server/": Permission denied
amreport: create debug directory "/var/log/amanda/server/": Permission denied
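
The Amanda errors look like plain directory permissions on the debug path from the message. A hedged sketch of the usual pattern, demonstrated in a sandbox directory (the real path and the backup user are taken from the error output; verify ownership on your system before changing anything):

```shell
# On the real system the fix would be something like (run as root):
#   mkdir -p /var/log/amanda/server
#   chown -R backup:backup /var/log/amanda
# Demonstrated here on a sandbox directory instead:
amanda_log=$(mktemp -d)/amanda/server
mkdir -p "$amanda_log"
chmod 770 "$amanda_log"

# the owning user can now create debug files there
touch "$amanda_log/amcheck.debug" && echo "writable"
```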

This is not my production setup, so I haven't rebooted yet, but if it's the same as on my production setup (2.5), a reboot would solve the problem for 2 or 3 days…

If my problem is not connected to this thread, please tell me so I can open a new one!

Edit:
Something's wrong with mosquitto, too (I can tail the logs via the console):

20:03:26.015 [INFO ] [del.core.internal.ModelRepositoryImpl] - Loading model 'mqtt.things'
20:03:26.109 [INFO ] [hab.event.ThingStatusInfoChangedEvent] - Thing 'mqtt:broker:mosq' changed from OFFLINE (COMMUNICATION_ERROR): io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:1883 to OFFLINE
20:03:26.113 [INFO ] [hab.event.ThingStatusInfoChangedEvent] - Thing 'mqtt:broker:mosq' changed from OFFLINE (COMMUNICATION_ERROR): io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:1883 to OFFLINE
20:03:26.117 [INFO ] [o.transport.mqtt.MqttBrokerConnection] - Starting MQTT broker connection to 'localhost' with clientid openHAB3
20:03:26.132 [INFO ] [hab.event.ThingStatusInfoChangedEvent] - Thing 'mqtt:broker:mosq' changed from OFFLINE to OFFLINE (COMMUNICATION_ERROR): Timeout
20:03:26.146 [INFO ] [hab.event.ThingStatusInfoChangedEvent] - Thing 'mqtt:broker:mosq' changed from OFFLINE (COMMUNICATION_ERROR): Timeout to OFFLINE (COMMUNICATION_ERROR): io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:1883
20:03:36.132 [INFO ] [t.reconnect.PeriodicReconnectStrategy] - Try to restore connection to 'localhost'. Next attempt in 60000ms
20:03:36.140 [INFO ] [hab.event.ThingStatusInfoChangedEvent] - Thing 'mqtt:broker:mosq' changed from OFFLINE (COMMUNICATION_ERROR): io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:1883 to OFFLINE
20:03:36.145 [INFO ] [o.transport.mqtt.MqttBrokerConnection] - Starting MQTT broker connection to 'localhost' with clientid openHAB3
20:03:36.155 [INFO ] [hab.event.ThingStatusInfoChangedEvent] - Thing 'mqtt:broker:mosq' changed from OFFLINE to OFFLINE (COMMUNICATION_ERROR): io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:1883
20:04:36.151 [INFO ] [t.reconnect.PeriodicReconnectStrategy] - Try to restore connection to 'localhost'. Next attempt in 60000ms
20:04:36.157 [INFO ] [hab.event.ThingStatusInfoChangedEvent] - Thing 'mqtt:broker:mosq' changed from OFFLINE (COMMUNICATION_ERROR): io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:1883 to OFFLINE
20:04:36.161 [INFO ] [o.transport.mqtt.MqttBrokerConnection] - Starting MQTT broker connection to 'localhost' with clientid openHAB3
20:04:36.178 [INFO ] [hab.event.ThingStatusInfoChangedEvent] - Thing 'mqtt:broker:mosq' changed from OFFLINE to OFFLINE (COMMUNICATION_ERROR): io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:1883
.
.
.
etc.

My apologies, but it appears that my events.log and openhab.log files were deleted when I uninstalled and reinstalled zram. I had done a clean install several months back and enabled zram to avoid SD corruption. Logs stopped displaying in frontail recently, so I followed the instructions in this thread, because zram1 and overlay1 (/var/log) were at 100% and there was an error message saying it could not write. I've done the reinstall, but the /var/log files are not populating, even though I can see them when I do a log:display in Karaf. I guess I'm trying to find out how to get the /var/log files to populate and how to ensure they're kept on zram. So which logs would you need?

There was a file permission bug in InfluxDB 1.8.7. Maybe you have been affected by it. Here’s the link with the problem description and the fix: