Is there a way to limit the precision of Number-Items?

If it doesn’t make sense to limit the changes recorded in the DB, is there a way to define a threshold for writing to the log? It’s hard to catch an important message in the log when it’s flooded by every 0.001 change in temperature or voltage.

I mean, it’s OK to keep records of all temperature changes in the DB, but could I see only significant (configurable) changes, say every 1.0 or 0.5 Celsius, in the log?

Not that I’m aware of. The event bus logger doesn’t have any state, so it has no idea how much the Item changed from one log statement to the next.

But there are other options:

I have turned off logging for Item events completely. I don’t see any good reason to have those in the log.

Just out of curiosity - how do you debug your rules then?
And don’t tell me you just don’t make mistakes :smiley:

Do you log the states from the rules?

I don’t make mistakes xD.

Seriously: to minimize interference, I usually set up the rule on my dev machine with the needed Items and then use the console to set the Item states. If necessary I log from the rule.

Thanks Rich! I’m planning to upgrade to the newest version, but wanted to get the weather data as soon as possible. One of the advantages of OH over Home Assistant is that I haven’t had to do any maintenance work in more than 2 years, and the system is still in a stable condition and just works :wink:

Totally agree. I just have no need for the data …

Ah ok, I didn’t know this. I’ll have a look.

Thanks for the advice!

Conversely, I’ve turned up the logging to events.log to show when my rules are running. But whenever I’m following the logs, I use filters to get rid of the statements I don’t care about at that time. Everyone has their own approach, and they are all good. I’m consistent enough in my naming that I can usually get it necked down to just the Items and rules that are relevant with a single regex. I also use multitail, which additionally lets me set up a search term that highlights the rows of interest as they occur.
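
For example, something like this is usually all it takes (the path assumes a default OH3 install, and the 'Porch|Garage' pattern is made up; substitute whatever matches your naming scheme):

# follow the event log, keeping only the Items I care about right now
tail -f /var/log/openhab/events.log | grep -E 'Porch|Garage'

# multitail can do the filtering itself and highlight matches as they scroll past
multitail -e 'Porch|Garage' /var/log/openhab/events.log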

I like this approach (for me, YMMV) because I don’t have to change anything, like adding log statements to rules or changing logging levels to DEBUG, in order to follow along with what’s happening, even as what I’m working on changes drastically from day to day.

I’m also not on an SD card, so lots of writes aren’t a problem.

You might say you’ve neglected the maintenance. But, as with a car or a house, when you neglect maintenance now, you will have a lot more work later. You are mainly just delaying the effort, not reducing it in the long run.

I use a regex filter to remove Items that update continuously, such as temperature/humidity/light sensors, as well as other Items that I don’t feel the need to see when I’m checking the log.

For example, the TP-Link Kasa binding can control the LED indicators on Kasa switches/outlets. I turn the LEDs off when I go to bed, and on again in the morning. I don’t need to see that in my log.

I’ve also set some event loggers to ERROR, so that I don’t see any non-critical updates from them.

<Logger level="ERROR" name="openhab.event.ItemCommandEvent"/>
<Logger level="ERROR" name="openhab.event.ItemStatePredictedEvent"/>
<Logger level="ERROR" name="openhab.event.GroupItemStateChangedEvent"/>
<Logger level="ERROR" name="openhab.event.ThingStatusInfoChangedEvent"/>
<Logger level="ERROR" name="openhab.event.ChannelTriggeredEvent"/>

It’s easy enough to comment these out of log4j2.xml whenever I’m debugging, since the edit takes effect within a few minutes in OH3 (without requiring a reboot). However, I usually don’t need to bother. I don’t get too complex with my rules, so all I usually care about is whether an Item’s state changes.
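
For example, wrapping one of those lines in an XML comment turns its events back on (by restoring the default log level) until I remove the comment:

<!-- temporarily re-enabled while debugging:
<Logger level="ERROR" name="openhab.event.ItemCommandEvent"/>
-->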

I suppose I could use WARN with pretty much the same results.

It’s also harder for us to help people upgrade from OH2 to OH3 at this point, since we haven’t done it ourselves in quite some time. As well, there are older discussions about upgrading that are no longer reliable due to subsequent changes.

I don’t think it’s necessary to be on the latest version all of the time, but it’s better to not fall too far behind. I usually upgrade somewhere between 3-6 months after a new version is released.

I upgrade every 3-6 days. :rofl:

You’re right. I started on OH 1.7 and since then have upgraded to every stable release up to 2.5.7. However, since I haven’t lived in my home regularly for the last two+ years, I didn’t want to do any updates without being on site. Now I’m back and looking forward to learning about all the “new” OH 3.x features :star_struck:

This is a valid point. I hope I won’t get into too much trouble. One bigger thing for me might be that my whole configuration is file-based rather than UI-based. I started reading migration threads to find out the best strategy, but as you said, they might not be reliable for a migration from 2.5.7 to 3.3.

Thanks for your advice!

Thanks for the answer. I will review the filtering options suggested.

All of your 2.5 configuration and approach are still valid for OH 3, with only a few minor breaking changes (maybe more than a few, given the time between versions). But you can configure and use OH 3 just like you always have.

But if you want to use some new stuff like MainUI, semantic models, custom widgets, the marketplace, etc. you may want to move some of that stuff to become managed instead of file based.

Be sure to review the Getting Started Tutorial. Even though you are an OH expert, Getting Started shows how to do most of the stuff you already know using MainUI, as well as introducing the new stuff. The people who have the most trouble moving to OH 3 are those who skip Getting Started.

Your advice is always appreciated! Thanks a lot Rich. I will definitely do so.

Just to clarify: you can use all this with your existing *.items files.

True, but it’s so much work and so easy to get wrong as to hardly be worth it.

Back to the original question: here’s an easy and fairly direct method to get what you want (though for several reasons it’s not the most elegant way).

Tie your binding to an Item. Make a second Item that holds the rounded value. Run a rule that updates the rounded Item whenever the bound Item changes. Trigger all your interesting database stuff off the rounded Item. This means you still have access to the raw value, but you’re only firing rules and database persistence off the reduced-precision Item.
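
A minimal sketch in Rules DSL, assuming plain Number Items named Temperature_Raw (linked to the binding’s channel) and Temperature_Rounded (not linked to anything), rounding to the nearest 0.5:

rule "Update rounded temperature"
when
    Item Temperature_Raw changed
then
    // round to the nearest 0.5
    val raw = (Temperature_Raw.state as Number).doubleValue
    Temperature_Rounded.postUpdate(Math::round(raw * 2.0) / 2.0)
    // downstream 'changed' triggers and everyChange persistence only fire
    // when the rounded value actually differs from the previous one
end

Persist and trigger on Temperature_Rounded; Temperature_Raw keeps the full-precision value available if you ever want it.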

Although Rich said it, I don’t think the point was made strongly enough: you’re using InfluxDB. This is a time-series database specifically designed for data series like we’re talking about here. You can set retention policies that downsample a measurement into averages, so it uses less space on disk over time. Be more aggressive in your policy if this is a major concern for you.
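
For instance, with the InfluxDB 1.x InfluxQL API it might look like this (the database name openhab_db, the measurement name, and the durations are all assumptions; adjust them to your setup):

-- keep raw points for 30 days; make this the default policy
CREATE RETENTION POLICY "raw_30d" ON "openhab_db" DURATION 30d REPLICATION 1 DEFAULT

-- keep downsampled points for a year
CREATE RETENTION POLICY "avg_1y" ON "openhab_db" DURATION 52w REPLICATION 1

-- continuously roll raw points up into 5-minute averages
CREATE CONTINUOUS QUERY "cq_temp_5m" ON "openhab_db"
BEGIN
  SELECT mean("value") AS "value"
  INTO "openhab_db"."avg_1y"."Temperature_Raw"
  FROM "Temperature_Raw"
  GROUP BY time(5m)
END

The raw points age out after 30 days, while the 5-minute averages stick around for a year.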

That being said, I don’t think this is a problem worth solving. Of course you’re welcome to disagree and do whatever you want, but make sure it’s worth your time before trying too hard. I suggest doing some math estimates before deciding: how many bytes does a measurement (a row in the DB) take up on disk? Given one measurement per minute, how long would it take to fill a megabyte? 100 megabytes? A gigabyte? Then determine how much data is too much for this particular Item.

Disk space is very cheap these days, and your time is worth at least something, so ask how much of it is worth spending on what you asked about in the OP versus some alternative solution. The right answer could be to not worry about it, to wipe the table once a year, or to just buy another hard drive or clear some unused files off the existing one.
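
To put some rough numbers on it (assuming ~50 bytes per point on disk, before InfluxDB’s compression, which usually does better): one point per minute is 1,440 points a day, about 70 KB/day, so roughly 2 MB a month and 25 MB a year; a single gigabyte would take decades to fill. Even one point per second only comes to about 4 MB a day, or around 1.5 GB a year.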

I’m super curious whether you’ll still feel the problem is worth solving after considering what I wrote here. Please do share your thoughts when you have a chance!

This would be a solution. Thanks for sharing your idea!

Hi Jonathan, you’re right. In the particular case I brought up, it might not be worth solving the way I intended to. I thought there might be some kind of formatting option I had missed that would just lower the precision.

I have other data points that fire a trigger every second (and yes, even this might not be worth thinking about; I’ll do the math), so it was good to learn about retention policies in InfluxDB.

So thanks everyone for the valuable input!
