openHab log and events stopped working / java.lang.NullPointerException

  • Platform information:
    • Hardware: Raspberry Pi 3b+ Processor: BCM2835; Linux 4.19.50-v7+ armv7l
    • OS: Raspbian GNU/Linux 10 (buster); Firmware #896
    • Java Runtime Environment: version 11.0.8, 2020-07-14 (build 11.0.8+10-post-Raspbian-1deb10u1)
    • openHAB version: 2.5.9 Release Build

Dear community,

My /var/log/openhab2/openhab.log and /var/log/openhab2/events.log suddenly stopped being written to on 02.12.2020 at 04:50:08 am, as you can see in my screenshot of the logs.

Several reboots of the Pi and restarts of openHAB (including clearing the cache beforehand) didn't change the situation. The SD card doesn't seem to be failing; everything else is running just fine.

Checking the log setup in the Karaf console showed me a "java.lang.NullPointerException" for every log command. For example:

openhab> log:display
Error executing command: java.lang.NullPointerException

I am an absolute beginner; can someone help me with this?

Thanks and best regards,

How do you know? Did you try with a different known good card?

Hi Bruce, good question. I can back up my current card with pi-clone in the next few days to verify this in detail.

Currently I am not assuming that the card is failing, because everything else is running just fine; I would expect far stranger behavior than just both logs stopping.

Is there any way I can restart the logging from scratch without setting up openHAB again from the beginning, for example by removing both log files?
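As a general note (a hedged sketch, not openHAB-specific advice): a log file that a running service holds open can usually be emptied in place rather than deleted, so the service keeps writing to the same file handle. Demonstrated here on a temporary file; the real openHAB paths are only shown in the comments.

```shell
# Sketch: empty a log file in place without deleting it.
# On the Pi the real files would be /var/log/openhab2/openhab.log
# and /var/log/openhab2/events.log (stop openHAB first if unsure).
LOGFILE=$(mktemp)
echo "old log content" > "$LOGFILE"

truncate -s 0 "$LOGFILE"   # same effect as the shorter:  > "$LOGFILE"
wc -c < "$LOGFILE"         # prints 0
```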

Has the filesystem run out of space?

The filesystem looks fine to me in terms of space, I guess:


I was able to investigate a bit further; maybe someone can help with the debug logs? I used openhab-cli start --debug to find something, and this came out after logging out:

openhab> logout
org.ops4j.pax.logging.pax-logging-api [log4j2] ERROR : Unable to write to stream /var/log/openhab2/openhab.log for appender LOGFILE Ignored FQCN: org.apache.logging.log4j.spi.AbstractLogger
org.apache.logging.log4j.core.appender.AppenderLoggingException: Error writing to RandomAccessFile /var/log/openhab2/openhab.log
        at org.apache.logging.log4j.core.appender.rolling.RollingRandomAccessFileManager.writeToDestination(
        at org.apache.logging.log4j.core.appender.OutputStreamManager.flushBuffer(
        at org.apache.logging.log4j.core.appender.rolling.RollingRandomAccessFileManager.flush(
        at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.directEncodeEvent(
        at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.tryAppend(
        at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.append(
        at org.apache.logging.log4j.core.appender.RollingRandomAccessFileAppender.append(
        at org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(
        at org.apache.logging.log4j.core.config.AppenderControl.callAppender0(
        at org.apache.logging.log4j.core.config.AppenderControl.callAppenderPreventRecursion(
        at org.apache.logging.log4j.core.config.AppenderControl.callAppender(
        at org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(
        at org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(
        at org.apache.logging.log4j.core.config.LoggerConfig.log(
        at org.apache.logging.log4j.core.config.LoggerConfig.log(
        at org.apache.logging.log4j.core.config.AwaitCompletionReliabilityStrategy.log(
        at org.apache.logging.log4j.core.Logger.logMessage(
        at org.ops4j.pax.logging.log4j2.internal.PaxLoggerImpl.doLog0(
        at org.ops4j.pax.logging.log4j2.internal.PaxLoggerImpl.doLog(
        at org.ops4j.pax.logging.log4j2.internal.PaxLoggerImpl.error(
        at org.ops4j.pax.logging.internal.TrackingLogger.error(
        at org.ops4j.pax.logging.slf4j.Slf4jLogger.error(
        at org.apache.karaf.deployer.features.osgi.Activator$DeploymentFinishedListener.deploymentEvent(
        at org.apache.karaf.features.internal.service.FeaturesServiceImpl.callListeners(
        at org.apache.karaf.features.internal.service.Deployer.deploy(
        at org.apache.karaf.features.internal.service.FeaturesServiceImpl.doProvision(
        at org.apache.karaf.features.internal.service.FeaturesServiceImpl.lambda$doProvisionInThread$13(
        at java.base/
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(
        at java.base/java.util.concurrent.ThreadPoolExecutor$
        at java.base/
Caused by: No space left on device
        at java.base/ Method)
        at java.base/
        at org.apache.logging.log4j.core.appender.rolling.RollingRandomAccessFileManager.writeToDestination(
        ... 30 more

Same as screenshot:

Unable to write to stream /var/log/openhab2/openhab.log
Either that path does not exist or there is a permissions issue.


It seems to be a combination of your assumptions: it might be an issue with both space and permissions.

After further investigation I tried to reset the permissions/ownership via the Karaf console. It turns out that there is no space left (for whatever reason). My first thought was indeed a damaged SD card, but checking the filesystem with df -h showed me that there is no space left for logs:


Any hints how to solve this?

It looks like you are running openHABian. I will defer to @mstormi on how best to free up space. I suspect either old logs or something zram related.

Check the content of /var/log.
Are there any rotated / old / gzipped files?
What are the biggest files?
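The checks suggested above could be scripted roughly like this (a sketch run against a throwaway demo directory so it is self-contained; on the Pi you would point LOGDIR at /var/log or the zram mount instead):

```shell
# Demo directory standing in for /var/log on the Pi.
LOGDIR=$(mktemp -d)
dd if=/dev/zero of="$LOGDIR/syslog"     bs=1M count=2 2>/dev/null
dd if=/dev/zero of="$LOGDIR/daemon.log" bs=1M count=1 2>/dev/null
gzip -c "$LOGDIR/daemon.log" > "$LOGDIR/daemon.log.1.gz"  # a "rotated" file

# Biggest files first:
du -ah "$LOGDIR" | sort -rh | head -n 5

# Old rotated/gzipped logs:
find "$LOGDIR" -name '*.gz'
```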

I don't know what I did, but the logging started again. I strongly suspect something is wrong with the zram1 setup.

After I rebooted the Pi, suddenly 17 MB became available again, so the openHAB log and events have, at least for now, enough space to run:

Filesystem      Size  Used Avail Use% Mounted on
/dev/zram1      469M  418M   17M  97% /opt/zram/zram1

In the zram1 folder marked red in my screenshot above there are many old gzipped files. How can I remove them properly, and how can I set up logrotate for zram to prevent this issue in the future?

The /var/log/ folder also has a lot of gzipped files.

Edit: I’m afraid the logging will stop again soon; the zram1 folder will run out of space within the next hour. These are the heaviest files. Any idea how to stop the log files from growing?
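On the logrotate question: a minimal rule along these lines could cap the noisy files. This is a hedged sketch only; the file name, paths, and limits are assumptions, these files are usually already covered by the distribution's rsyslog rotation rules (where tightening the existing limits may be the cleaner fix), and on openHABian the zram sync layer may need its own handling.

```
# /etc/logrotate.d/bigsyslogs  (hypothetical file name and limits)
/var/log/syslog /var/log/daemon.log /var/log/kern.log {
    rotate 4
    size 10M
    compress
    delaycompress
    missingok
    notifempty
}
```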

a) Check the log files and identify the root cause of the logged items; depending on the root cause, it might be necessary to take corrective action.
b) Reduce the log level from verbose to less verbose; for that we need to know more about what you see in the related log files and their current log levels, and based on that the log level can be reduced.
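For b), the log level of a chatty logger can be lowered from the Karaf console. A sketch only: the logger name org.influxdb is a made-up example here; use whichever logger is actually flooding your log.

```
openhab> log:get                         # list current log levels
openhab> log:set WARN org.influxdb       # hypothetical logger name
openhab> log:set DEFAULT org.influxdb    # revert to the default level later
```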

I finally got it running again. As I mentioned before, I don't know why 17 MB suddenly became available again after a restart so that the logging was able to run again.

I checked the large daemon.log and syslog and saw that my InfluxDB was writing several records per second to the affected log files. I disabled the unnecessary logging in the InfluxDB config and cleared the log files with the following commands (not pretty, I know):

[11:19:02] root@RasPiHome:/home/pi# > /opt/zram/zram1/upper/syslog
[11:19:13] root@RasPiHome:/home/pi# > /opt/zram/zram1/upper/daemon.log
[11:19:38] root@RasPiHome:/home/pi# > /opt/zram/zram1/upper/kern.log
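As an aside, the InfluxDB setting that produces this kind of flood is, I believe, the HTTP request log. A hedged sketch; the section and option names are from the InfluxDB 1.x config format, so check your own influxdb.conf:

```
# /etc/influxdb/influxdb.conf (InfluxDB 1.x)
[http]
  # Every HTTP request is logged by default; this is typically
  # the per-second noise that ends up in syslog/daemon.log.
  log-enabled = false
```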

Since then, logging has been heavily reduced and the available space went back up to about 400 MB. Now everything should be fine, with logrotate keeping the log files from growing too large; only a few MB of logs were written in the last 8 hours.

Thanks to everyone who pointed me in the right direction! It was a perfect start for me in the openHAB community.