Log shows 100+ messages per second: Skip event subscriber // ->FILTER: openhab/items/

  • Platform information:
    • Hardware: Raspberry Pi 4B, 2 GB RAM
    • OS: openHABian 1.6.3 (32-bit)
    • Java Runtime Environment: Java 11
    • openHAB version: 3.0.1
  • Issue of the topic: Lots of "Skip event subscriber" log messages
  • Please post configurations (if applicable):
    • Items configuration related to the issue > Too many to count

My CPU load has been rather high lately (around 0.7 - 1.2), while CPU usage is around 5-10%, sometimes spiking to 50%. So I wanted to investigate why. After disabling and enabling bindings I couldn't pinpoint the reason; nothing made the load drop. So I set the log level to TRACE on org.openhab. When I do that, I see these kinds of messages a couple of hundred times per second. I don't think this is right, but since the log is on TRACE, maybe that's normal on openHAB 3.0; I'm not sure.
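
For anyone who wants to reproduce this: the log level can be switched from the openHAB console (openhab-cli console), roughly like this; the second command puts it back to normal afterwards:

openhab> log:set TRACE org.openhab
openhab> log:set DEFAULT org.openhab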

The strange thing is, the messages don’t make sense. For instance it says:
->FILTER: openhab/items/Ketelwarmwater/state:Weerbericht

But the items Ketelwarmwater and Weerbericht don't relate to each other in any way: not in rules, not in bindings, not in item files, not in Things, not in Groups, not even in persistence.

Does anyone know what these messages mean? Could they be the cause of the relatively high CPU load?

2021-02-11 23:46:07.854 [TRACE] [dule.handler.ItemStateTriggerHandler] - ->FILTER: openhab/items/Ketelwarmwater/state:Weerbericht

2021-02-11 23:46:07.855 [TRACE] [ue.internal.handler.HueSensorHandler] - onSensorStateChanged() was called

2021-02-11 23:46:07.855 [TRACE] [ab.core.internal.events.EventHandler] - Skip event subscriber (class org.openhab.core.automation.internal.module.handler.ItemStateTriggerHandler) because of its filter.

2021-02-11 23:46:07.856 [TRACE] [ue.internal.handler.HueSensorHandler] - onSensorStateChanged() was called

2021-02-11 23:46:07.857 [TRACE] [dule.handler.ItemStateTriggerHandler] - ->FILTER: openhab/items/Ketelwarmwater/state:Habchanger

2021-02-11 23:46:07.858 [TRACE] [ue.internal.handler.HueSensorHandler] - onSensorStateChanged() was called

2021-02-11 23:46:07.859 [TRACE] [ab.core.internal.events.EventHandler] - Skip event subscriber (class org.openhab.core.automation.internal.module.handler.ItemStateTriggerHandler) because of its filter.

2021-02-11 23:46:07.860 [TRACE] [ue.internal.handler.HueSensorHandler] - onSensorStateChanged() was called

2021-02-11 23:46:07.860 [TRACE] [dule.handler.ItemStateTriggerHandler] - ->FILTER: openhab/items/Ketelwarmwater/state:RadioPrev

2021-02-11 23:46:07.861 [TRACE] [ue.internal.handler.HueBridgeHandler] - Hue sensor '83' added.

2021-02-11 23:46:07.862 [TRACE] [ab.core.internal.events.EventHandler] - Skip event subscriber (class org.openhab.core.automation.internal.module.handler.ItemStateTriggerHandler) because of its filter.

2021-02-11 23:46:07.863 [TRACE] [ue.internal.handler.HueBridgeHandler] - Hue sensor '85' added.

2021-02-11 23:46:07.864 [TRACE] [dule.handler.ItemStateTriggerHandler] - ->FILTER: openhab/items/Ketelwarmwater/state:Woonkamer_decrease

2021-02-11 23:46:07.866 [TRACE] [ab.core.internal.events.EventHandler] - Skip event subscriber (class org.openhab.core.automation.internal.module.handler.ItemStateTriggerHandler) because of its filter.

2021-02-11 23:46:07.867 [TRACE] [dule.handler.ItemStateTriggerHandler] - ->FILTER: openhab/items/Ketelwarmwater/state:sendFindPixel3

2021-02-11 23:46:07.868 [TRACE] [ab.core.internal.events.EventHandler] - Skip event subscriber (class org.openhab.core.automation.internal.module.handler.ItemStateTriggerHandler) because of its filter.

2021-02-11 23:46:07.869 [TRACE] [dule.handler.ItemStateTriggerHandler] - ->FILTER: openhab/items/Ketelwarmwater/state:System_pod_mode

2021-02-11 23:46:07.871 [TRACE] [ab.core.internal.events.EventHandler] - Skip event subscriber (class org.openhab.core.automation.internal.module.handler.ItemStateTriggerHandler) because of its filter.

2021-02-11 23:46:07.872 [TRACE] [ab.core.internal.events.EventHandler] - Delegate event to subscriber (class org.openhab.core.internal.items.ItemUpdater).

2021-02-11 23:46:07.873 [TRACE] [dule.handler.ItemStateTriggerHandler] - ->FILTER: openhab/items/Ketelwarmwater/state:KeukenAanwezig

2021-02-11 23:46:07.874 [TRACE] [ab.core.internal.events.EventHandler] - Skip event subscriber (class org.openhab.core.automation.internal.module.handler.ItemStateTriggerHandler) because of its filter.

2021-02-11 23:46:07.876 [TRACE] [dule.handler.ItemStateTriggerHandler] - ->FILTER: openhab/items/Ketelwarmwater/state:Manual_light_mode

2021-02-11 23:46:07.877 [TRACE] [ab.core.internal.events.EventHandler] - Skip event subscriber (class org.openhab.core.automation.internal.module.handler.ItemStateTriggerHandler) because of its filter.

2021-02-11 23:46:07.878 [TRACE] [ab.core.internal.events.EventHandler] - Delegate event to subscriber (class org.openhab.core.io.monitor.internal.EventLogger).

2021-02-11 23:46:07.879 [TRACE] [dule.handler.ItemStateTriggerHandler] - ->FILTER: openhab/items/Ketelwarmwater/state:sendFindPixel2

2021-02-11 23:46:07.881 [TRACE] [ab.core.internal.events.EventHandler] - Skip event subscriber (class org.openhab.core.automation.internal.module.handler.ItemStateTriggerHandler) because of its filter.

2021-02-11 23:46:07.882 [TRACE] [dule.handler.ItemStateTriggerHandler] - ->FILTER: openhab/items/Ketelwarmwater/state:RadioNext

2021-02-11 23:46:07.883 [TRACE] [ab.core.internal.events.EventHandler] - Skip event subscriber (class org.openhab.core.automation.internal.module.handler.ItemStateTriggerHandler) because of its filter.

2021-02-11 23:46:07.884 [TRACE] [dule.handler.ItemStateTriggerHandler] - ->FILTER: openhab/items/Ketelwarmwater/state:RadioStation

2021-02-11 23:46:07.886 [TRACE] [ab.core.internal.events.EventHandler] - Skip event subscriber (class org.openhab.core.automation.internal.module.handler.ItemStateTriggerHandler) because of its filter.

2021-02-11 23:46:07.887 [TRACE] [ab.core.internal.events.EventHandler] - Delegate event to subscriber (class org.openhab.io.openhabcloud.internal.CloudService).

2021-02-11 23:46:07.888 [TRACE] [dule.handler.ItemStateTriggerHandler] - ->FILTER: openhab/items/Ketelwarmwater/state:sendSmartHomeNotify

2021-02-11 23:46:07.889 [TRACE] [ab.core.internal.events.EventHandler] - Skip event subscriber (class org.openhab.core.automation.internal.module.handler.ItemStateTriggerHandler) because of its filter.

I did some testing and removed everything from openHAB (bindings, Things, Items, persistence, etc.) and then the messages disappeared. The higher load is due to Grafana and InfluxDB. I'm still curious what those messages mean.

I don't know what it means, but first I would try to stop the binding for the Ketelwarmwater item. If that helps, re-enable it and remove the item itself. Eventually also remove the Thing it is linked to.
Maybe there's a profile attached?

FWIW, I've just set my system to TRACE log level and it, too, produces hundreds of 'filter' messages like yours per second.
I don't think this is where your CPU is going.
I'd clean the cache (rough commands below), then have a look at memory usage; maybe you have a leak?
It may also be worth a try to copy your system to another SD card and boot from that to see if it makes a difference. That's simple with openHABian menu option 53.
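
Roughly, on an openHABian install, cleaning the cache looks like this (adjust to your setup):

sudo systemctl stop openhab          # stop openHAB first
sudo openhab-cli clean-cache         # clears the cache and tmp folders
sudo systemctl start openhab         # the first start afterwards takes a while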

Thanks for checking. So it looks like normal behaviour. It happens with all the items, and they almost never relate to each other.

I run openHAB on an SSD. I noticed that ZRAM is on by default; is that a good idea with an SSD?

I'm currently adding file by file to see why my CPU load is a bit high. Maybe my system is just big and it can't calm down because there is too much going on (231 rules / 144 Things / 692 Items / I use Grafana and InfluxDB and have a lot of graphs, 15+ I guess).

Could this be the reason? Is my system becoming too big for an RPi4?

Are you running openHAB on a Pi?

No.

No.

Yes, RPi3

Firstly, the graphs (Grafana) should not be generating load unless you are viewing them. When viewing the graphs, this would cause the spikes, as both Grafana and InfluxDB would be using the CPU (InfluxDB reading the metrics and Grafana rendering them).

I would actually try to investigate this in reverse: start with everything running and use 'top' or 'htop' to see which processes are using the CPU.
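
For example (the exact options may differ slightly per distribution):

top -o %CPU                                     # interactive view, sorted by CPU usage
ps -eo pid,user,comm,%cpu --sort=-%cpu | head   # one-shot list of the biggest consumers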

I'm going to be presumptuous and guess that you will see InfluxDB having a high load when accessing graphs, since not only is it trying to serve data to Grafana but it is also still receiving data from openHAB. Hence, you should evaluate which items really need to store data in InfluxDB and change the configs to ensure only the essential ones are saved. You also didn't mention whether you are using rrd4j, which is built into openHAB 3; you could potentially look at custom-configuring that to reduce load.
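
As a rough sketch (the item and group names here are just placeholders for your own), a trimmed-down services/influxdb.persist that only saves what you actually graph could look something like this:

Strategies {
    everyMinute : "0 * * * * ?"
    default = everyChange
}

Items {
    gTemperatures*, SolarPower, SmartMeter_Power : strategy = everyChange, everyMinute
}

The gTemperatures* wildcard persists all members of that (placeholder) group, so you only have to list the groups and individual items you really want in the database.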

I tried all the things you said before posting. I should have noted that, but unfortunately I forgot.

The strange thing is, the CPU usage mostly fluctuates between 5% and 8%, but often with a load of 0.8 to 1.2. When graphs are being displayed, the usage is a bit higher and from time to time (every 10 s, because of the graph refresh) I see a 10% spike for a second from the influxdb process, the same as you describe but not that high.

I save around 100 items in InfluxDB. Less isn't really possible, because I just want to see those values in the graphs (my boiler values, room temperatures, solar panels, smart energy meter, corona figures, weather, media statistics, etc.). I'm not sure whether that is a lot of items, normal, or only a few, but maybe it's too much for a Raspberry Pi 4 to handle?

I have had rrd4j disabled since openHAB 3.0 because of the heavier load it put on the CPU, and because I couldn't get some graphs to work: for instance, some values on the left axis, some on the right with a staircase style, recalculating some values for a better viewing experience, etc.

Yesterday I removed everything except the bindings from openHAB and the load dropped to around 0.2; I think that should be normal. Then I stopped Grafana and InfluxDB and the load dropped below 0.1. When I started the services again, the load climbed back to around 0.2, which seems normal. Then I restarted openHAB and loaded file by file to see what was causing the higher load, but every file that loads adds some extra load. At the end the CPU load wobbles between 0.5 and 0.8. That's less than before, but I don't think it's optimal. I have the feeling that something is wrong, but I can't find it.

As I already responded: no.

As a rule of thumb, stop looking at the load; it is misleading for a number of reasons (out of scope to explain here).
Identify the process that is causing your load first.
Get an idea of the pattern it shows. Is it spiking every minute for 10 seconds? Is it spiking after you changed rules? You need not bother looking at the OH config if it isn't the Java process causing this.

You don't mention the period for this 'CPU load' measurement, but I'd also like to know why a CPU load under 1 would concern you. At a load of 1, one core would in effect be fully utilized; the load of 0.5 to 0.8 you are seeing indicates that the CPU has idle time (also reflected by the CPU usage between 5% and 8% you mention).
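
For context, the three load numbers are the 1-, 5- and 15-minute averages, and on a Pi 4 they are spread over four cores; a quick way to see both side by side would be something like:

nproc     # number of CPU cores, 4 on a Pi 4
uptime    # 1-, 5- and 15-minute load averages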

I would also note that on one of my boxes I see the following load averages, which are in effect similar to what you describe:

load averages:  0.37,  0.56,  0.63                                                                                
CPU:  3.0% user,  0.1% nice,  1.3% system,  0.7% interrupt, 94.9% idle

To be honest, I think you are looking for a problem that doesn’t exist, but if you want to dig further follow Markus’ advice - use ‘top’ or ‘htop’ to figure out which process is causing your load.

The processes that do the most are Java and arping. The reason I think something is wrong is that I upgraded from a Raspberry Pi 3B to a Pi 4B and the load increased. My other OH3 instance, on a Raspberry Pi 3B, mostly has a load of 0.15.

I'm looking too much at the load, but I don't know how to identify the process causing the load to be high. At the moment the load has been normal for 2 days; this was after disabling ZRAM. The difficulty in identifying the process causing this is my lack of Linux knowledge, and the process is mostly Java or arping. And the load is elevated most of the time.

When the load goes above 1.5, the rules aren't fired quickly enough and it takes too long for my lights to switch on (about 3 seconds).

But at the moment the problems are gone, maybe due to disabling ZRAM. This is normal for my system:

top - 08:18:09 up 1 day, 23:11,  1 user,  load average: 0.70, 0.73, 0.74
Tasks: 127 total,   1 running, 126 sleeping,   0 stopped,   0 zombie

I’ll keep an eye on how things are going. If it stays this low, I will close this topic.

For sure not because of this. And disabling ZRAM puts your system's availability at risk, so that was a bad move.
Stop poking around and start analysing. Use top to see how much userspace CPU the Java process is consuming when the load is rising.
Disable the Network binding for some or all hosts and try a different ping tool to see if that makes a difference. Enable debugging for this binding. Try to tune the number of retries, timeouts, etc. (see the sketch below).
You might have a network problem at times. When OH spawns many arping processes in a short time, this will increase the load. If you have a network problem at that moment, it will skyrocket because the ping processes do not terminate quickly.
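
As a sketch only (the hostname and timing values here are made up; check the Network binding documentation for the exact parameters of your version), a pingdevice Thing with explicit retry/timeout settings would look something like this in a .things file:

Thing network:pingdevice:pixel3 [ hostname="192.168.1.42", retry=1, timeout=5000, refreshInterval=60000 ]

and debug logging for the binding can be enabled from the console:

openhab> log:set DEBUG org.openhab.binding.network

Lowering retry or raising refreshInterval reduces how many ping processes are spawned in a short time.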

Markus, I know you're trying to help, and I appreciate that a lot! But I'm limited in time and take it day by day, just like you guys, I guess. I'm certain you don't mean it rudely, but it sounds rude to assume I'm poking around, because I really am not. I just don't write everything down here; that's my fault, sorry. But please keep it a little friendly, I would appreciate that.

That said, I continued my search by disabling rules, items, bindings, etc. Because arping kept showing up, I disabled the Network binding and the load is now between 0.25 and 0.5. That's good, as I have many rules firing at certain moments (cron jobs). So I have to find out what's wrong with my network configuration, but that's no problem for me :slight_smile:

The reason I disabled ZRAM was because of your answer:

I run openHAB on an SSD. I noticed that ZRAM is on by default; is that a good idea with an SSD?

No. < I read that as: This is not a good idea with an SSD, so I disabled it.

It looked like it helped at that time, but I think that had to do with the restart of openHAB, because there weren't many ping processes running at that time.

So it's smart to enable ZRAM? (It compresses data before storing it in RAM, as I understand it.)

I just lack some Linux knowledge and am having some difficulty pulling the right strings to see what's causing the higher load. But I'm learning every day and doing my best.

I interpret load as:
0 - 1 > everything finishes within the time each process (or thread or cycle) should finish.
Above 1 > every process/thread/cycle is running late; the higher the number, the later each process gets executed.

So my goal is to keep the load as low as possible; it feels more efficient. But using htop or top is pretty difficult. top doesn't show arping much, just for a very short period, so at first I didn't notice it. htop doesn't show arping at all, just strange Java openHAB command lines. A lot of them:

 1725 openhab    20   0  707M  580M 14360 S  0.0 30.3  2:41.15 /usr/bin/java -XX:-UsePerfData -Dopenhab.home=/usr/share/openhab -Dopenhab.conf=/etc/openhab -Dopenhab.runtime=/usr/share/ope
26520 openhab    20   0  707M  580M 14360 S  0.0 30.3  0:00.20 /usr/bin/java -XX:-UsePerfData -Dopenhab.home=/usr/share/openhab -Dopenhab.conf=/etc/openhab -Dopenhab.runtime=/usr/share/ope
26521 openhab    20   0  707M  580M 14360 S  0.0 30.3  0:00.16 /usr/bin/java -XX:-UsePerfData -Dopenhab.home=/usr/share/openhab -Dopenhab.conf=/etc/openhab -Dopenhab.runtime=/usr/share/ope
 1207 openhab    20   0  707M  580M 14360 S  0.0 30.3  2:41.88 /usr/bin/java -XX:-UsePerfData -Dopenhab.home=/usr/share/openhab -Dopenhab.conf=/etc/openhab -Dopenhab.runtime=/usr/share/ope
 1739 openhab    20   0  707M  580M 14360 S  0.0 30.3  0:36.90 /usr/bin/java -XX:-UsePerfData -Dopenhab.home=/usr/share/openhab -Dopenhab.conf=/etc/openhab -Dopenhab.runtime=/usr/share/ope

I can't find anywhere on this forum or Google what that means. Could you explain it?

Well, if you run everything off the SSD then no, but if part of your system is still on the SD card then yes: re-enable it and edit /etc/ztab so it only applies to the SD part(s).

Well, as I wrote, it's out of scope here to discuss the meaningfulness of "load" as an indicator, in particular when you're a Linux newbie. So please just believe me.
As I said, use 'top' to determine the Java process's CPU usage. Your 'htop' output is showing every thread on its own line; that isn't very helpful and is why I prefer 'top'. But both have sorting and filter options you can play around with.
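
For example (assuming a single java process owned by the openhab user):

top -p $(pgrep -u openhab java)      # the whole JVM as one line
top -H -p $(pgrep -u openhab java)   # per-thread view, comparable to what htop shows by default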

Alright, thanks! I run everything from the SSD, so my system's availability isn't at risk :wink:

For anyone else reading this thread: the log showing the earlier-mentioned messages after setting the log level to TRACE is not strange behaviour.

My CPU load problem lies within the Network binding. I think one or more of my network Things is acting up; I'm not sure how or why. But it isn't related to the log messages.