I am aware that I can cut down on the amount when persisting, but I would really like to cut down the number of “updates” that touch OH. I would love for the binding to work like persistence does and allow specifying:
- minimum time between updates: i.e. no more than 1 change per 5 s
- minimum absolute or % change: i.e. at least 0.1 A or 5%
That would have huge benefits:
- less data coming into OH
- less data to process for persistence
- no more of this spamming in the logs, making the logs totally useless
- way fewer executions of the rules on “changed”
So the short version is: how can we calm things down despite an overmotivated external MQTT server, assuming the MQTT server cannot be changed?
If the binding needs to cache the previous value per metric, would that be such an issue?
A single data point (timestamp + value) could solve the issue at the price of a bit of memory.
OH is going to have to do something with every message published to the topics it subscribes to. You will not be saving any CPU or memory in any way. In fact, to reduce the number of updates to the Items, you will increase the amount of CPU and memory used.
If it’s really just the Item changes you care about reducing, you can:
- in OH 4.2 you can use the round profile to round the values before the Item is updated, which will reduce the changes (example after this list)
- in OH 4.2 and earlier versions you can use the filter profile to require the change to be above a certain amount before passing it through
Note though that except for debounce, all of these reduce the number of changes the Item goes through but do not reduce the number of updates made to the Item. But by default events.log doesn’t include updates so there’s that. For sensors you usually want to trigger on changes anyway so the fact that the Item is updating at the same rate but changing less frequently should be sufficient.
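For the round profile, the link configuration in a .items file looks roughly like this. This is only a sketch, assuming the Basic Profiles add-on is installed; the Item name, channel UID, and scale value are made up for the example:

```
// Round to 1 decimal place before the Item is updated (Basic Profiles add-on).
// "mqtt:topic:broker:victron:battery-current" is a placeholder channel UID.
Number:ElectricCurrent BatteryCurrent "Battery Current"
    { channel="mqtt:topic:broker:victron:battery-current"
      [profile="basic-profiles:round", scale="1"] }
```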
> You will not be saving any CPU or memory in any way.
I had downstream rules in mind and not having to trigger some rules for a tiny tiny change of a value would be a gain. I would be happy to spend a bit of CPU/Memory to reduce the downstream load. For instance, I really don’t care about nano ampere/volt changes… none of the sensors are that precise. But to be fair, OH cannot know what is relevant and what is not. So the rounding you suggest sounds like a good option.
OK, it sounds like I have some options. I don’t think debounce can really help, but the 2 other options sound relevant.
I am btw also suspecting that the high frequency of the changes may be the cause of this issue (or of another, worse issue).
> No matter how full logs are, they are never “useless”.
Sure, using a console, I could grep away the noise. I guess that people (incl. me) mean that the web-based tail page becomes useless since the ratio of relevant values vs the spam is very low.
Even better would be to set up Loki or something of the sort; it is on the list.
I did a pass with the round profile, that was an easy win and it helps already.
Yet I am still getting several updates a second for some values that can fluctuate a lot (i.e. BatteryCurrent in my case).
It is not the end of the story but at least it was simple to put in place (after installing the required addon Basic Profiles).
The next requirement would be to limit the frequency of the updates based on time.
I initially did not think about using debounce for that, but it seems appropriate. I will give your debounce tutorial a second run (I did not get it to work properly yet).
I guess using one profile over the other will depend on the use case. I used rounding initially to limit the number of changes over time… it can work, but the tool for the job here is more the time debounce (rough sketch below).
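Conceptually, what I’m after could be written as a JS transformation used in a profile, something like the sketch below. This is just to illustrate the idea, not the actual marketplace Debounce profile (whose configuration and behaviour may differ); the cache keys, the 5000 ms window, and the assumption that the transformation can use the openhab-js cache are all mine.

```javascript
// time_debounce.js — illustrative only: pass a new value through at most
// once per WINDOW_MS; inside the window, repeat the last passed value so the
// Item keeps updating but stops changing faster than the window allows.
(function (data) {
  var WINDOW_MS = 5000;                                   // made-up 5 s window
  var now = Date.now();
  var lastTime  = cache.shared.get('BatteryCurrent_lastTime');
  var lastValue = cache.shared.get('BatteryCurrent_lastValue');

  if (lastTime === null || (now - Number(lastTime)) >= WINDOW_MS) {
    cache.shared.put('BatteryCurrent_lastTime', now);
    cache.shared.put('BatteryCurrent_lastValue', data);
    return data;                                          // window elapsed: let it through
  }
  return lastValue;                                       // still inside the window
})(input);
```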
That’s why both profiles exist. And indeed the two different debounce profiles and the rule template each individually address different use cases as well.
| Approach | Reduces | Purpose |
|---|---|---|
| round profile | changes | rounds the value before updating the Item; the Item still updates at the same rate |
| debounce count | updates | throws away count-1 updates and posts only every count-th one, regardless of the values |
| debounce time | updates | throws away every update that happens within the time period since the last posted update, regardless of the values or how many there are |
| filter JS Scripting profile | changes | requires a change above a given amount from the last posted update; if the change isn’t large enough, posts the last posted value (see the sketch below) |
| debounce rule template | updates | works with any Item, not just Items linked to a Channel, and can be used when you only want to debounce one or more specific states (e.g. debounce OFF but immediately process ON) |
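For the filter row, the transformation script behind such a profile could look roughly like the following. This is a minimal sketch only: the cache key, the 0.1 threshold, and the use of the openhab-js shared cache are assumptions for the example, not the actual marketplace implementation.

```javascript
// threshold_filter.js — illustrative only: pass the new value through when it
// differs from the last passed value by at least THRESHOLD, otherwise repeat
// the last passed value (the Item still updates, it just doesn't change).
(function (data) {
  var THRESHOLD = 0.1;                                     // made-up minimum change
  var newValue = parseFloat(data);
  var last = cache.shared.get('BatteryCurrent_lastPassed');

  if (last === null || Math.abs(newValue - parseFloat(last)) >= THRESHOLD) {
    cache.shared.put('BatteryCurrent_lastPassed', String(newValue));
    return String(newValue);                               // big enough change: pass it on
  }
  return last;                                             // too small: repeat the old value
})(input);
```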
If you ever need to do both rounding and debouncing, you’d probably need to use the round profile and the Debounce [4.0.0.0;4.9.9.9].
I’m glad you are getting to a working point. I too have added the round profile to a few of my sensor Items to lower how often a few of my rules run as well. I wasn’t seeing any performance or load problems or anything like that but it seemed an easy thing to do to reduce the load a bit just because I could.
I did not totally solve my HabPanel issues, but I see significantly fewer crashes, so I think it does help. The remaining crashes seem to occur at a specific time now, so I guess it is another issue and I will troubleshoot further.
The logs (without any kinda filtering) are also much more “human”.
So the gain/effort ratio of the change is very high.
It also made me upgrade to 4.2, which I had not done yet.
At first I was disappointed not being able to combine 2 profiles, but thinking about it, I did not really need both.
It was nice to see in the logs values like 15 W instead of 15.1298127340192345671234 W, but I can live with the long numbers in the logs for now. In the UIs, some simple formatting does the trick.
There are significant technical reasons why profiles cannot be combined or chained. I opened an issue to add something like that a good while ago and investigations were made and it was determined to not be feasible to implement.
Going back to my undergrad numerical analysis days, premature rounding is the root of all evil. Ultimately, computers which use IEEE 754 to represent floating point numbers (which is pretty much all of them, COBOL being a notable exception) are bad at math. To be more specific, there are just some numbers that cannot be represented using IEEE 754. So instead the computer represents a value that’s really close to some numbers but not exact. So the result of a calculation might correctly be 2.7, but the computer stores it as 2.7000000000000012324.
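The classic demonstration, runnable in any JavaScript console:

```javascript
// 0.1 and 0.2 have no exact IEEE 754 binary representation,
// so the stored approximations add up to something slightly off.
console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false
```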
As one does calculations, sometimes the result will be a little above and other times a little below the “real” value, so these errors often mostly cancel each other out and the computer gets pretty close to accurate answers. However, if you round the values prematurely, you take away the opportunity for these errors to cancel each other out, resulting in a gradually increasing margin of error.
Therefore best practice in computer programming is to never round a value unless and until it’s shown to a human.
All that is a really long way to say, don’t worry about the long numbers. In fact, it’s better to keep them and only round when showing them in the UI.
This is clear; however, in practical use cases, sensors are faaaar from that accurate.
I am sure this 2.7000000000000012324 W (or Volts, or Amps, or Degrees) is not really 2.7000000000000012324, and the rounded 2.7 is good enough.
That being said, OH cannot know that, and some values may be very accurate.
So the use of profiles does make sense, so we humans can tell the computer where its “crazy” precision is useful and where it makes no sense (such as in the example above).
Being able to do this rounding is actually beneficial, as it allows seeing changes only when there are real changes. For instance, if a sensor is accurate to 1%, round at 0.1% and never report smaller changes. This 0.1% is obviously an example; in some cases it should be 0.1, in others it will be 0.00000…0001. OH cannot know that, but we can.
But you don’t know if the extra digits are there because of IEEE 754 errors that have accumulated due to calculations in the sensor driver, or because of noise in the sensor reading itself. And as I mentioned, the errors and even the noise tend to cancel themselves out over time/repeated calculations.
However, if you prematurely round something that you do calculations with later, you will introduce a lot of error. Just a few calculations may end up with a 10-20% margin of error if you round too early. For example, if you want the sum of readings over the past hour, the sums computed with and without rounded values can end up pretty far apart.
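A tiny made-up illustration of that sum (the 2.74 W reading and the count are arbitrary):

```javascript
// Sum 1000 identical readings of 2.74 W, once with full precision and once
// after rounding each reading to one decimal first. The per-reading rounding
// error of 0.04 W never gets a chance to cancel out; it just accumulates.
const readings = Array(1000).fill(2.74);

const exactSum   = readings.reduce((sum, v) => sum + v, 0);   // ≈ 2740 W
const roundedSum = readings
  .map(v => Math.round(v * 10) / 10)                          // each reading becomes 2.7
  .reduce((sum, v) => sum + v, 0);                            // ≈ 2700 W

console.log(exactSum, roundedSum);                            // ~40 W apart (~1.5%)
```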
If you aren’t doing math with the readings, rounding poses no problems. But if you round it and later decide to do some math with it, you could run into problems if you forget to remove the rounding. So you either have to live with lots of decimal places in events.log, or you need to remember to remove the rounding later if you decide you need to use the sensor reading in a calculation.
A nice option would be a setting to format the Unit. So the value remains the same but the logs would show the formatted value.
For instance, 3 decimals max for Current, etc… But I see how those arbitrary choices can get in the way of some use cases. After all, someone may be working with super accurate sensors…
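For the UIs at least, the state description pattern already gives display-only rounding today: the stored state keeps full precision and only the rendered label is formatted. A minimal .items sketch; the Item name and channel UID are placeholders:

```
// Display-only formatting: the Item state keeps full precision,
// only the rendered label is limited to 3 decimals.
Number:ElectricCurrent BatteryCurrent "Battery Current [%.3f %unit%]"
    { channel="mqtt:topic:broker:victron:battery-current" }
```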