Hardware: Raspberry Pi 3b+ Processor: BCM2835; Linux 4.19.50-v7+ armv7l
OS: Raspbian GNU/Linux 10 (buster); Firmware #896
Java Runtime Environment: (build 11.0.8+10-post-Raspbian-1deb10u1); Version "11.0.8" 2020-07-14
openHAB version: 2.5.9 Release Build
Dear community,
My /var/log/openhab2/openhab.log and /var/log/openhab2/events.log suddenly stopped working on 02.12.2020 at 04:50:08, as you can see in my screenshot of the logs.
Several reboots of the Pi and restarts of openHAB, with clearing the cache beforehand, didn't change the situation. The SD card doesn't seem to be failing; everything else is running just fine.
Checking the log setup in the Karaf console showed a "java.lang.NullPointerException" for every log command, for example:
openhab> log:display
Error executing command: java.lang.NullPointerException
I am an absolute beginner, can someone help me with this?
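When the console's log commands fail like this, the log files can still be inspected directly from the OS shell. A minimal sketch, assuming the default openHABian log locations (adjust the paths if your setup differs):

```shell
# Read the openHAB log files directly, bypassing the Karaf console.
# Wrapped in a function so it can be run only on a real openHAB host.
show_logs() {
  for f in /var/log/openhab2/openhab.log /var/log/openhab2/events.log; do
    if [ -f "$f" ]; then
      echo "== $f =="
      tail -n 20 "$f"   # last 20 lines of each file
    fi
  done
}
# Uncomment on the Pi itself:
# show_logs
```

`ls -lh /var/log/openhab2/` is also worth a look: the files' modification times tell you exactly when logging stopped.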
Hi Bruce, good question. I can back up my current card with pi-clone in the next few days to verify this in detail.
Currently I don't assume the card is failing, because everything else is running fine; I would expect far stranger behavior than just both logs stopping.
Is there any way I can restart the logging from scratch without setting up openHAB again from the beginning, e.g. by removing both log files?
I was able to investigate a bit further; maybe someone can help with the debug logs? I used openhab-cli start --debug to find something, and this came out after logging out:
openhab> logout
org.ops4j.pax.logging.pax-logging-api [log4j2] ERROR : Unable to write to stream /var/log/openhab2/openhab.log for appender LOGFILE Ignored FQCN: org.apache.logging.log4j.spi.AbstractLogger
org.apache.logging.log4j.core.appender.AppenderLoggingException: Error writing to RandomAccessFile /var/log/openhab2/openhab.log
at org.apache.logging.log4j.core.appender.rolling.RollingRandomAccessFileManager.writeToDestination(RollingRandomAccessFileManager.java:141)
at org.apache.logging.log4j.core.appender.OutputStreamManager.flushBuffer(OutputStreamManager.java:293)
at org.apache.logging.log4j.core.appender.rolling.RollingRandomAccessFileManager.flush(RollingRandomAccessFileManager.java:160)
at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.directEncodeEvent(AbstractOutputStreamAppender.java:199)
at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.tryAppend(AbstractOutputStreamAppender.java:190)
at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.append(AbstractOutputStreamAppender.java:181)
at org.apache.logging.log4j.core.appender.RollingRandomAccessFileAppender.append(RollingRandomAccessFileAppender.java:252)
at org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:156)
at org.apache.logging.log4j.core.config.AppenderControl.callAppender0(AppenderControl.java:129)
at org.apache.logging.log4j.core.config.AppenderControl.callAppenderPreventRecursion(AppenderControl.java:120)
at org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:84)
at org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:543)
at org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:502)
at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:485)
at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:412)
at org.apache.logging.log4j.core.config.AwaitCompletionReliabilityStrategy.log(AwaitCompletionReliabilityStrategy.java:63)
at org.apache.logging.log4j.core.Logger.logMessage(Logger.java:154)
at org.ops4j.pax.logging.log4j2.internal.PaxLoggerImpl.doLog0(PaxLoggerImpl.java:354)
at org.ops4j.pax.logging.log4j2.internal.PaxLoggerImpl.doLog(PaxLoggerImpl.java:337)
at org.ops4j.pax.logging.log4j2.internal.PaxLoggerImpl.error(PaxLoggerImpl.java:163)
at org.ops4j.pax.logging.internal.TrackingLogger.error(TrackingLogger.java:130)
at org.ops4j.pax.logging.slf4j.Slf4jLogger.error(Slf4jLogger.java:1019)
at org.apache.karaf.deployer.features.osgi.Activator$DeploymentFinishedListener.deploymentEvent(Activator.java:90)
at org.apache.karaf.features.internal.service.FeaturesServiceImpl.callListeners(FeaturesServiceImpl.java:321)
at org.apache.karaf.features.internal.service.Deployer.deploy(Deployer.java:1067)
at org.apache.karaf.features.internal.service.FeaturesServiceImpl.doProvision(FeaturesServiceImpl.java:1062)
at org.apache.karaf.features.internal.service.FeaturesServiceImpl.lambda$doProvisionInThread$13(FeaturesServiceImpl.java:998)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.io.IOException: No space left on device
at java.base/java.io.RandomAccessFile.writeBytes(Native Method)
at java.base/java.io.RandomAccessFile.write(RandomAccessFile.java:559)
at org.apache.logging.log4j.core.appender.rolling.RollingRandomAccessFileManager.writeToDestination(RollingRandomAccessFileManager.java:137)
... 30 more
It seems to be a combination of all your assumptions, meaning it might be an issue with both space and permissions.
After investigating further, I tried to reset the permissions/ownership via the Karaf console. It turns out there is no space left (why ever). My first thought was indeed a damaged SD card, but checking the filesystem with df -h showed that there is no space left for the logs:
I don't know what I did, but the logging started again. I strongly assume something is wrong with the zram1 setup.
After I rebooted the Pi, suddenly 17 MB became available again, so the openHAB log and events log have, at least for now, enough space to run:
Filesystem Size Used Avail Use% Mounted on
/dev/zram1 469M 418M 17M 97% /opt/zram/zram1
The zram1 folder marked in red in my screenshot above contains many old gzipped files. How can I remove them properly, and how can I set up log rotation for zram to prevent this issue in the future?
The /var/log/ folder also contains a lot of gzipped files.
Edit: I'm afraid the logging will stop again soon; the zram1 folder will run out of space within the next hour. These are the heaviest files. Any idea how to keep the log files from growing?
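Removing the old rotated archives can be done with find. A sketch, assuming the paths from the post (the 7-day cutoff is an example; always run the -print pass first to review what would be deleted):

```shell
# Delete rotated *.gz archives older than a given number of days.
clean_old_archives() {
  dir="$1"
  days="${2:-7}"
  find "$dir" -name '*.gz' -mtime +"$days" -print    # dry run: list candidates first
  find "$dir" -name '*.gz' -mtime +"$days" -delete   # then actually remove them
}
# Example invocations on the Pi (with sudo if needed):
# clean_old_archives /opt/zram/zram1 7
# clean_old_archives /var/log 7
```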
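To find out which files are actually eating the space, a du/sort pipeline is usually enough:

```shell
# Show the 15 largest entries under /var/log, human-readable, biggest first.
# Permission errors on unreadable files are suppressed.
du -ah /var/log 2>/dev/null | sort -rh | head -n 15
```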
a) check the log files and identify the root cause of the logged items; depending on the root cause, it might be necessary to take corrective action
b) reduce the log level from "verbose" to "less verbose"; for this it's necessary to know more about what you see in the related log files and their log level; depending on that, the log level can be reduced
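For b), the log level can be changed at runtime from the Karaf console. A sketch; org.openhab.binding.foo is a placeholder logger name, not one from this setup:

```
openhab> log:get
openhab> log:set WARN org.openhab.binding.foo
openhab> log:set DEFAULT org.openhab.binding.foo
```

log:get lists the current logger levels, log:set lowers one to WARN, and DEFAULT reverts it.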
I finally got it running again. As I mentioned before, I don't know why 17 MB suddenly became available again after a restart so that the logging could resume.
I checked the large daemon.log and syslog and saw that my InfluxDB was writing several records per second to the affected log files. I disabled the unnecessary logging in the InfluxDB config and cleared the log files with the following commands (not pretty, I know):
From then on the logging was reduced heavily and the free space increased to about 400 MB. Now everything should be fine with logrotate and no overly large log files, even if a few MB of logs were written in the last 8 hours.
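The exact commands aren't shown above; one common way to empty a busy log file without restarting the writer is truncate, sketched here (truncate keeps the inode, so the daemon that has the file open keeps writing to the same, now-empty file):

```shell
# Empty a log file in place, keeping the writer's file handle valid.
truncate_log() {
  if [ -f "$1" ]; then
    truncate -s 0 "$1"   # prepend sudo on a real system if needed
  fi
}
# Example invocations on the Pi:
# sudo truncate -s 0 /var/log/daemon.log
# sudo truncate -s 0 /var/log/syslog
```

Plain rm of an open log file would not free the space until the writing daemon is restarted, which is why truncate is the safer tool here.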
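For future-proofing, the system logs can also be capped by size via logrotate. A minimal sketch of a rule (e.g. adjusting the existing /etc/logrotate.d/rsyslog on Raspbian; the size and rotate limits are example values, not recommendations):

```
/var/log/syslog /var/log/daemon.log {
    size 50M
    rotate 4
    compress
    missingok
    notifempty
}
```

With a rule like this, each file is rotated as soon as it exceeds 50 MB, at most four compressed archives are kept, and older ones are deleted automatically.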
Thanks to everyone who pushed me in the right direction! It was a perfect start for me in the openHAB community.