I did a reboot and now the biggest file is mosquitto.log. How can I clean up this file?
How big are the individual files, and what is the content of the mosquitto.log file, especially the last rows?
To clean the file I would stop mosquitto and then manually remove the file.
Start mosquitto again and check if the file is created again.
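A minimal sketch of those steps, assuming mosquitto runs as a systemd service and logs to /var/log/mosquitto/mosquitto.log (both the service name and the path are assumptions; the runnable part below only touches a scratch file):

```shell
# On the real system the steps would roughly be (requires root):
#   sudo systemctl stop mosquitto
#   sudo rm /var/log/mosquitto/mosquitto.log   # or empty it in place instead
#   sudo systemctl start mosquitto
# Emptying a log in place with truncate, demonstrated here on a scratch file:
LOG=$(mktemp)
printf 'old log data\n' > "$LOG"
truncate -s 0 "$LOG"     # the file still exists, but its size is now 0
stat -c '%s' "$LOG"      # prints: 0
rm -f "$LOG"
```

Truncating in place has the advantage that the daemon can keep writing to the same open file handle, so no restart is strictly needed; deleting the file while mosquitto runs would leave it writing to a removed inode until the next restart.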
These were the last rows:
1676567644: Client ESP-01S_0 already connected, closing old connection.
1676567644: Socket error on client ESP-01S_0, disconnecting.
1676567644: New client connected from 192.168.178.159 as ESP-01S_0 (c0, k10, u'$
1676567644: New connection from 192.168.178.162 on port 1883.
1676567644: Client ESP-01S_0 already connected, closing old connection.
1676567644: Socket error on client ESP-01S_0, disconnecting.
1676567644: New client connected from 192.168.178.162 as ESP-01S_0 (c0, k10, u'$
1676567644: New connection from 192.168.178.158 on port 1883.
1676567644: mosquitto version 1.5.7 terminating
I deleted the file and a new one was created.
Thanks Wolfgang, the update works.
Hello again,
I’m having problems with my memory again. I can’t get into openHAB anymore either. I can only get onto the Raspberry directly with PuTTY.
I was actually keeping a closer eye on the memory this time and rebooted the system yesterday.
Now the disk is being filled up by the following files/folders:
Could anybody help me please?
I would check if the files are rotated, i.e. what the timestamps of the first and last entries are.
syslog and daemon.log should be rotated at least about once a week.
If there are older entries in these files, they are not being rotated.
What entries are in there? Why are there that many entries in these files? Mine are a few tens of kB up to 100 or 200 kB. So it looks like more entries are logged on your system, for a good or bad reason.
This should be checked. You can open the files with an editor, or use the more command on the command line to walk through them.
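A quick way to do that check, shown here on a scratch file (on the real system you would point head, tail and ls at /var/log/syslog and /var/log/daemon.log):

```shell
LOG=$(mktemp)
printf 'Apr  9 00:00:01 host first entry\nApr 13 08:29:46 host last entry\n' > "$LOG"
head -n 1 "$LOG"    # timestamp of the first entry
tail -n 1 "$LOG"    # timestamp of the last entry
ls -lh "$LOG"       # file size; on the real system: ls -lh /var/log/syslog /var/log/daemon.log
rm -f "$LOG"
```

If the first entry is much older than one rotation period, rotation is not happening for that file.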
Okay, in syslog I can read this in the first line:
and at 8:26 I think the problem begins:
the last lines are:
The first line of daemon.log:
and the bullshit starts at the same time:
Apr 13 08:23:11 openhabian npm[759]: #033[32mZigbee2MQTT:info #033[39m 2023-04-13 08:23:11: MQTT publish: topic 'zigbee2mqtt/F_$
Apr 13 08:24:32 openhabian npm[759]: #033[32mZigbee2MQTT:info #033[39m 2023-04-13 08:24:32: MQTT publish: topic 'zigbee2mqtt/F_$
Apr 13 08:24:34 openhabian npm[759]: #033[32mZigbee2MQTT:info #033[39m 2023-04-13 08:24:34: MQTT publish: topic 'zigbee2mqtt/F_$
Apr 13 08:25:41 openhabian systemd[1]: Starting Daily apt download activities...
Apr 13 08:25:43 openhabian systemd[1]: apt-daily.service: Succeeded.
Apr 13 08:25:43 openhabian systemd[1]: Started Daily apt download activities.
Apr 13 08:26:47 openhabian karaf[754]: Exception in thread "OH-eventexecutor-1" Exception in thread "OH-eventexecutor-2" java.l$
Apr 13 08:26:47 openhabian karaf[754]: #011at java.base/java.util.concurrent.LinkedBlockingQueue.dequeue(LinkedBlockingQueue.ja$
Apr 13 08:26:47 openhabian karaf[754]: #011at java.base/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:$
Apr 13 08:26:47 openhabian karaf[754]: #011at java.base/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java$
Apr 13 08:26:47 openhabian karaf[754]: #011at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.ja$
Apr 13 08:26:47 openhabian karaf[754]: #011at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.j$
Apr 13 08:26:47 openhabian karaf[754]: #011at java.base/java.lang.Thread.run(Thread.java:829)
and these are the last lines:
Apr 13 08:29:46 openhabian karaf[754]: #011at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.ja$
Apr 13 08:29:46 openhabian karaf[754]: #011at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.j$
Apr 13 08:29:46 openhabian karaf[754]: #011at java.base/java.lang.Thread.run(Thread.java:829)
Apr 13 08:29:46 openhabian karaf[754]: Exception in thread "OH-eventexecutor-217293" java.lang.NullPointerException
Apr 13 08:29:46 openhabian karaf[754]: #011at java.base/java.util.concurrent.LinkedBlockingQueue.dequeue(LinkedBlockingQueue.ja$
Apr 13 08:29:46 openhabian karaf[754]: #011at java.base/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:$
Apr 13 08:29:46 openhabian karaf[754]: #011at java.base/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java$
Apr 13 08:29:46 openhabian karaf[754]: #011at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.ja$
Apr 13 08:29:46 openhabian
- rotation for daemon.log should be defined in /etc/logrotate.d/rsyslog and done weekly
- it looks like it was not done on 09.04.
- I don’t have logs for karaf in syslog or in daemon.log
I would try
sudo logrotate -f /etc/logrotate.d/rsyslog
this should rotate all files configured in /etc/logrotate.d/rsyslog.
daemon.log will be moved to daemon.log.1; syslog will be moved to syslog.1.
New files should be created.
If the files are rotated, you can compress the old ones if you would like to keep them, or remove them to get some free space.
Once the space is available, I would suggest checking why these huge files are generated.
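What the forced rotation effectively does, sketched on a scratch directory (logrotate itself reads the config; the mv and gzip below just mimic its rename-then-compress behaviour — with delaycompress the compression happens one cycle later):

```shell
LOGDIR=$(mktemp -d)
printf 'lots of entries\n' > "$LOGDIR/daemon.log"
mv "$LOGDIR/daemon.log" "$LOGDIR/daemon.log.1"   # rotation: rename the current log
: > "$LOGDIR/daemon.log"                         # a new, empty log file is created
gzip "$LOGDIR/daemon.log.1"                      # compression (next cycle with delaycompress)
ls "$LOGDIR"                                     # daemon.log  daemon.log.1.gz
rm -rf "$LOGDIR"
```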
That’s in the file:
/var/log/syslog
{
rotate 7
daily
missingok
notifempty
delaycompress
compress
postrotate
/usr/lib/rsyslog/rsyslog-rotate
endscript
}
/var/log/mail.info
/var/log/mail.warn
/var/log/mail.err
/var/log/mail.log
/var/log/daemon.log
/var/log/kern.log
/var/log/auth.log
/var/log/user.log
/var/log/lpr.log
/var/log/cron.log
/var/log/debug
/var/log/messages
{
rotate 4
weekly
missingok
notifempty
compress
delaycompress
sharedscripts
postrotate
/usr/lib/rsyslog/rsyslog-rotate
endscript
}
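For reference, a hedged reading of the directives used in that config (standard logrotate semantics):

```
rotate 4        # keep 4 rotated copies, delete anything older
weekly          # rotate once per week (the syslog block above uses daily, rotate 7)
missingok       # no error if a listed log file is missing
notifempty      # skip rotation if the file is empty
compress        # gzip rotated logs
delaycompress   # compress one cycle later, so the newest .1 file stays uncompressed
sharedscripts   # run the postrotate script once for the whole group of files
postrotate      # signal rsyslog to reopen its log files after rotation
```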
When I run the command
sudo logrotate -f /etc/logrotate.d/rsyslog
I get this answer:
The content of the file is ok/correct. It defines which files to rotate, and which older files will thus be compressed.
The error messages mean that no .1 log files were found, so they cannot be compressed.
This could mean that they were never rotated, which would fit with seeing the huge files.
After running the command, do you now see files ending in .1?
Have new .log files been created?
If both answers are yes, you could either manually delete one or two of the huge .1 files, or run the rotate command again, which should then compress the .1 files if they were previously created by rotating (renaming the .log file to .log.1).
Sorry, I have to reboot the system before I can check the files.
After the reboot the system works, and the command
sudo logrotate -f /etc/logrotate.d/rsyslog
produces no errors.
I think the error with the disk space will come back in a few days.
Should we check something before the error occurs, to find out why it generates the huge files?
That’s because the .log.1 files were created during the previous logrotate call.
When you did the first run of logrotate, the files did not exist yet, which is why the error messages were raised.
That would make sense, otherwise you may end up with a full disk again.
Do you still see the same messages being logged in syslog and daemon.log as before?
Are the karaf messages also logged in log files in the directory /var/log/openhab?
The messages started this morning; I think that’s because I started logrotate manually.
In the new syslog and daemon.log files there is no entry from karaf.
And in the log files from openHAB there is also nothing today. Is there a way to look into the older files with the .gz ending?
You can do
zmore /var/log/syslog.2.gz
which is similar to more but works with compressed files,
or
zgrep karaf /var/log/syslog.2.gz | more
which in this case returns the rows containing the string karaf and pipes them to the more command.
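A self-contained example of the same idea (the file name and log lines below are made up for the demo; on the real system it would be e.g. /var/log/syslog.2.gz):

```shell
DIR=$(mktemp -d)
# create a small gzip-compressed "log" to search in
printf 'Apr 13 08:26:47 openhabian karaf[754]: Exception in thread\nApr 13 08:25:43 openhabian systemd[1]: apt-daily.service: Succeeded.\n' \
  | gzip > "$DIR/syslog.2.gz"
zgrep karaf "$DIR/syslog.2.gz"   # prints only the karaf line, without unpacking the file
rm -rf "$DIR"
```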
In openhab.log, syslog and daemon.log there are no entries from the day of the crash.
Here we are again:
The problem starts almost at the same time as last time:
Daemon.log file:
Last lines of daemon:
Syslog file:
Last lines:
What should I do now?
I searched on Google, and it is possibly a problem with Java that I have to update. But how can I do this manually?
In openhabian-config it is not possible to update my Java.
As far as I can see this is Zulu. If it had been installed via an apt package, openhabian should have been able to update it. If it was installed via a tar package, I think you need to reinstall Zulu Java to get the latest version.
Btw. the screenshots you posted do not really help to analyze the problem, as they mostly just show the row header information. The interesting part is cut off at the right end of the line.
It is better to mark the text and copy-paste it into code fences here.
I’m really a noob at this stuff, sorry. Could you explain to me how I can reinstall Zulu?