OH3.0.1.M1 - Log full

Yesterday I made some experiments without paying attention to the log. Today OH was active but the UI was unresponsive. I was not able to log in, log out, see my locations, equipment, etc. openhab.log was full of errors; the last ones were RRD4j complaining about being unable to persist data, and immediately before those were the errors from my faulty experiment. The last log entry was from 3 h ago and the log size was exactly 16 MB.

I stopped OH, deleted the logs, started OH, and now it’s OK. I’m not saying that this is a general error, but my suggestion is that OH should create log1, log2, etc. to avoid these situations, like Plex does (for example).

OH is able to do so, but the configuration needs to fit what is available on the local system.
You wrote that the log size was 16 MB. Was there still free space available on the disk/ZRAM?

How? I did not find anything in the documentation or by searching this forum.

I have more than 1 TB available.

Linked to GitHub issue #2176

That’s because it already does so.

userdata/etc/log4j2.xml

                <!-- Rolling file appender -->
                <RollingFile fileName="${sys:openhab.logdir}/openhab.log" filePattern="${sys:openhab.logdir}/openhab.log.%i" name="LOGFILE">
                        <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%-5.5p] [%-36.36c] - %m%n"/>
                        <Policies>
                                <OnStartupTriggeringPolicy/>
                                <SizeBasedTriggeringPolicy size="16 MB"/>
                        </Policies>
                </RollingFile>

It will create a new log file every time OH starts up or when the log file grows to 16 MB.
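If you want more history than that before old logs are overwritten, the same appender can be tuned with a DefaultRolloverStrategy. A minimal sketch based on the appender above; the 32 MB size and max="10" are example values I picked, not openHAB defaults:

                <!-- Rolling file appender: keep more/bigger rotated files (example values, adjust to taste) -->
                <RollingFile fileName="${sys:openhab.logdir}/openhab.log" filePattern="${sys:openhab.logdir}/openhab.log.%i" name="LOGFILE">
                        <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%-5.5p] [%-36.36c] - %m%n"/>
                        <Policies>
                                <OnStartupTriggeringPolicy/>
                                <SizeBasedTriggeringPolicy size="32 MB"/>
                        </Policies>
                        <!-- Keep up to 10 indexed files (openhab.log.1 ... openhab.log.10) before the oldest is deleted -->
                        <DefaultRolloverStrategy max="10"/>
                </RollingFile>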

Thanks. In my case it creates a new log file when OH starts, but last night openhab.log grew to 16 MB and OH simply stopped. The last messages recorded in the log were from RRD4j complaining about being unable to persist data, which is strange, as there is plenty of free space available (more than 1 TB).

Which probably means that the log grew to 16 MB and, when it tried to rotate and start a new file, it couldn’t because the file system was locked or otherwise in a state where writes were not allowed for the openHAB user.

There really isn’t anything you can change in the openHAB log settings to fix this. You need to figure out why the openhab user or openHAB process was not allowed to write to the file system.
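A quick way to check is to attempt a write as the openHAB user on the partition that actually holds the logs. A minimal sketch, assuming the service user is called openhab and using a placeholder log path you’d replace with your own:

# Assumptions: service user is "openhab", logs live under /path/to/openhab/userdata/logs
sudo -u openhab touch /path/to/openhab/userdata/logs/write-test \
  && echo "log dir is writable" || echo "log dir is NOT writable"
sudo -u openhab rm -f /path/to/openhab/userdata/logs/write-test

# Free space on the partition that really holds the logs (not just /)
df -h /path/to/openhab/userdata/logs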

Thanks, but I have no idea how to do that. The original mistake I made was to install the Zigbee add-on and use its serial capability to communicate with a Zigbee coordinator whose Zigbee process was not active. That led to a series of error messages: for each access attempt I had one entry complaining about the error, plus several lines with a Java traceback. So my suspicion is that the log filled up during this tracing, and that left OH “squeezed” and other processes were affected.

That wouldn’t happen until the log file filled up your entire 1 TB drive. The log file is not the source of the original problem. And I can’t tell you how to go about figuring it out, since I’ve never seen a problem like this that wasn’t caused by the drive running out of space, which you’ve assured us was not the case.

When something has grabbed all of some resource, it is usually other processes that complain.

Does that mean you use only one big / partition on your system, or did you change the location of the log file?

Here are the partitions:

[~] # df -h
Filesystem                Size      Used Available Use% Mounted on
none                    290.0M    270.8M     19.2M  93% /
devtmpfs                  3.8G      8.0K      3.8G   0% /dev
tmpfs                    64.0M    556.0K     63.5M   1% /tmp
tmpfs                     3.8G    144.0K      3.8G   0% /dev/shm
tmpfs                    16.0M         0     16.0M   0% /share
/dev/sdc5                 7.8M     28.0K      7.8M   0% /mnt/boot_config
tmpfs                    16.0M         0     16.0M   0% /mnt/snapshot/export
/dev/md9                493.5M    155.0M    338.5M  31% /mnt/HDA_ROOT
cgroup_root               3.8G         0      3.8G   0% /sys/fs/cgroup
/dev/mapper/cachedev1
                          2.6T    151.1G      2.5T   6% /share/CACHEDEV1_DATA
/dev/mapper/cachedev2
                          3.5T      1.4T      2.2T  39% /share/CACHEDEV2_DATA
/dev/md13               417.0M    387.9M     29.0M  93% /mnt/ext
tmpfs                    48.0M     60.0K     47.9M   0% /share/CACHEDEV1_DATA/.samba/lock/msg.lock
tmpfs                    16.0M         0     16.0M   0% /mnt/ext/opt/samba/private/msg.sock

All OH files are in /share/CACHEDEV1_DATA, which has approx. 2.5 TB available.
