Pull requests welcome!
I may need some help with that. This is brand new to me.
ContactItem

.. inheritance-diagram:: HABApp.openhab.items.ContactItem
   :parts: 1

.. autoclass:: HABApp.openhab.items.ContactItem
   :members:
   :inherited-members:
   :member-order: groupwise
I've implemented something in the dev branch already. That's exactly the place where the docs are missing, too.
I suggest you wait until I release the next version of HABApp, because the dev branch already contains many (breaking) changes. After that I'm really happy about any help to improve the docs!
Okay, will do!
Next question:
Is watch_change() (ItemNoChangeEvent) supposed to work on StringItems?
I can't get it to fire on my StringItem. ItemCommandEvent works fine. As soon as I change to a SwitchItem, both work.
Strange! Now that I'm trying again, the same way I tried it before (I think), it works fine. Not sure what happened that time. Please ignore the previous question.
import HABApp
from HABApp.core.events import ValueUpdateEvent, ValueChangeEvent
from HABApp.mqtt.items import MqttItem
from HABApp.openhab.items import StringItem
from HABApp.openhab.events import ItemStateEvent, ItemCommandEvent, ItemStateChangedEvent

import logging
import colorsys

log = logging.getLogger('HABApp.StringTest')


class StringTest(HABApp.Rule):
    def __init__(self):
        super().__init__()
        self.itemCommand = StringItem.get_item('testString')
        self.itemCommand.listen_event(self.item_command, ItemCommandEvent)
        watcherCommand = self.itemCommand.watch_change(2)
        self.itemCommand.listen_event(self.thing_no_change, watcherCommand.EVENT)

    def item_command(self, event):
        assert isinstance(event, ItemCommandEvent)
        log.info(f"command: {event}")

    def thing_no_change(self, event):
        log.info(f"nochange: {event}")


StringTest()
You can call listen_event directly from watcherCommand:

self.itemCommand.watch_change(2).listen_event(self.thing_no_change)

It's much better since it's less error prone - please do it that way.
Agreed, and gladly!
But it's from your example, just so you know.
watcher = self.thing.watch_change(60)
self.thing.listen_event(self.thing_no_change, watcher.EVENT)
Hello @Spaceman_Spiff,
I need to get HABApp event logging under control. I've left it for a few months because I was busy with other things, but now it's time. You can see why if you look at the size of the event log and the timestamps. It's literally gigabytes per day.
I'm having some other performance issues (occasionally simple functions take too long to run) which I suspect may be SSD wear. I will certainly replace the SSD, but let's fix logging first.
I'm looking for something similar to putting a RegexFilter in log4j2.xml, which does exactly what I'm looking for in openHAB.
Example:
<RegexFilter onMatch="DENY" onMismatch="ACCEPT" regex=".* (EnergyMeter|json_2Dtotal|HABApp_Ping).*"/>
Is there an equivalent way of using regular expressions in logging.yml?
If not, what's the best way of replicating that functionality?
You can add a logging.Filter in one of your rules and attach it to the corresponding logger.
An alternative would be to disable the event log altogether.
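A minimal sketch of such a filter, assuming the event log lines are emitted by the 'HABApp.EventBus' logger. The class name is made up, and the item names in the pattern are illustrative, mirroring the log4j2 RegexFilter example above:

```python
import logging
import re


class DenyMatchingEvents(logging.Filter):
    """Drop log records whose message matches a regex
    (the equivalent of log4j2's RegexFilter with onMatch="DENY")."""

    def __init__(self, pattern: str):
        super().__init__()
        self._regex = re.compile(pattern)

    def filter(self, record: logging.LogRecord) -> bool:
        # Returning False drops the record, True keeps it
        return self._regex.search(record.getMessage()) is None


# Attach it to the logger that writes events.log - within HABApp this
# would typically go at the top of one of your rule files
logging.getLogger('HABApp.EventBus').addFilter(
    DenyMatchingEvents(r'EnergyMeter|json_2Dtotal|HABApp_Ping')
)
```

Note that a filter added to a logger only applies to records logged directly on that logger, so this assumes the events really are logged via 'HABApp.EventBus'.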
logging.Filter doesn't seem to be similar; in fact, I don't understand how it's applicable at all. Every single event in events.log comes from HABApp.EventBus, so how does filtering by the logger name ('A.B', 'A.B.C', 'A.B.C.D', etc.) help?
I disabled the event log earlier today (or rather, I raised the log level to WARNING) and that at least stopped the flood, but of course now I don't have an event log at all. It would be really useful if it were possible to configure the EventBus logger with regular expressions to mark certain event names as 'debug'; that way it would be possible to exclude them from the log without disabling the log completely.
Anyway, I'll keep it off for now, because even with it disabled I'm still having some other performance issues, so they are obviously not event-log related.
Would you please help me look at the performance issue?
Functions are sometimes / often taking too long.
[2022-05-02 19:06:44,706] [ HABApp.Worker] WARNING | Execution of GarageRemotes.execute took too long: 1.40s
[2022-05-02 19:06:44,730] [ HABApp.Worker] WARNING | ncalls tottime percall cumtime percall filename:lineno(function)
[2022-05-02 19:06:44,732] [ HABApp.Worker] WARNING | 1 0.000 0.000 1.402 1.402 /config/rules/Garage/GarageRemotes.py:35(execute)
[2022-05-02 19:06:44,734] [ HABApp.Worker] WARNING | 1 0.000 0.000 1.400 1.400 /usr/local/lib/python3.8/site-packages/HABApp/rule/scheduler/habappschedulerview.py:65(soon)
[2022-05-02 19:06:44,739] [ HABApp.Worker] WARNING | 1 0.000 0.000 1.400 1.400 /usr/local/lib/python3.8/site-packages/HABApp/rule/scheduler/habappschedulerview.py:20(at)
[2022-05-02 19:06:44,762] [ HABApp.Worker] WARNING | 1 0.000 0.000 1.400 1.400 /usr/local/lib/python3.8/site-packages/eascheduler/scheduler_view.py:18(at)
[2022-05-02 19:06:44,774] [ HABApp.Worker] WARNING | 1 0.000 0.000 1.400 1.400 /usr/local/lib/python3.8/site-packages/eascheduler/jobs/job_one_time.py:11(_schedule_first_run)
[2022-05-02 19:06:44,775] [ HABApp.Worker] WARNING | 1 0.000 0.000 1.399 1.399 /usr/local/lib/python3.8/site-packages/eascheduler/jobs/job_base.py:37(_set_next_run)
[2022-05-02 19:06:44,777] [ HABApp.Worker] WARNING | 1 0.000 0.000 1.399 1.399 /usr/local/lib/python3.8/site-packages/HABApp/rule/scheduler/scheduler.py:49(add_job)
[2022-05-02 19:06:44,780] [ HABApp.Worker] WARNING | 1 0.000 0.000 1.386 1.386 /usr/local/lib/python3.8/concurrent/futures/_base.py:416(result)
[2022-05-02 19:06:44,781] [ HABApp.Worker] WARNING | 1 0.000 0.000 1.386 1.386 /usr/local/lib/python3.8/threading.py:270(wait)
[2022-05-02 19:06:44,783] [ HABApp.Worker] WARNING | 2 1.386 0.693 1.386 0.693 {method 'acquire' of '_thread.lock' objects}
[2022-05-02 19:06:44,785] [ HABApp.Worker] WARNING | 1 0.000 0.000 0.012 0.012 /usr/local/lib/python3.8/asyncio/tasks.py:911(run_coroutine_threadsafe)
[2022-05-02 19:06:44,786] [ HABApp.Worker] WARNING | 1 0.000 0.000 0.012 0.012 /usr/local/lib/python3.8/asyncio/base_events.py:762(call_soon_threadsafe)
[2022-05-02 19:06:44,788] [ HABApp.Worker] WARNING | 1 0.000 0.000 0.011 0.011 /usr/local/lib/python3.8/asyncio/base_events.py:738(_call_soon)
[2022-05-02 19:06:44,789] [ HABApp.Worker] WARNING | 1 0.000 0.000 0.011 0.011 /usr/local/lib/python3.8/asyncio/events.py:32(__init__)
Look at what's actually in this function:
def execute(self, reps):
    if reps == 0:
        self.current_value = self.oh_item.get_value()
    self.current_value += self.direction
    if self.current_value < 0:
        self.current_value = 0
    if self.current_value > 100:
        self.current_value = 100
    self.rule.run.soon(self.oh_item.oh_send_command, self.current_value)
    log.info(f'exec: {self.direction} {self.current_value}')
The only thing that could conceivably take time is the openHAB item get_value() call!
I've seen this previously, but now that I'm trying to implement a 'hold down to repeat' volume control action, all of a sudden it became more critical.
I have already set org.openhab.restauth:allowBasicAuth=true
I'm running Docker on an RPi 4 with 8 GB RAM and a 64 GB SSD. It's very, very lightly loaded: all it's running is HABApp, Mosquitto, openHAB, and the UniFi controller.
Memory stats are:
total used free shared buff/cache available
Mem: 7999496 2060940 2335128 728 3603428 6156596
Swap: 102396 3348 99048
Any idea where to look?
Unfortunately I don't have an idea regarding your performance issues. On my RPi4 I have openHAB with HABApp, a TP-Link Omada Controller and a Graylog instance running, so far without any performance problems.
total used free shared buff/cache available
Mem: 8000512 4523296 1150080 1908 2327136 3397592
Swap: 102396 0 102396
Regarding the logging, I'm using Graylog to consolidate all logs from openHAB and HABApp. That made it much easier to correlate logs between the different services, and it also allows filtering and searching for certain expressions.
Thanks for looking, @Dominik_Bernhardt
I've started looking into the issue and so far it seems like it may be a thread/event contention issue inside HABApp when many things are happening at once. @Spaceman_Spiff I've started a new thread so I don't flood this thread with this edge-case issue; would really appreciate if you would take a look or three.
For a while now I've been facing a strange openHAB/HABApp boot synchronisation issue. Depending on the boot sequence, openHAB items that are restored on startup do not have the correct value in HABApp. In openHAB the restored value is shown, while HABApp still thinks the item is None.
As an example I have shown a sequence for one particular item.
The first event is from the openHAB log. The next shows the event in HABApp, and the last is a log output from a testing rule printing the value.
As you can see, the update event reaches HABApp, but for some reason the value is not updated.
The issue does not appear all the time, but in most cases it does if I completely reboot the machine or all services. If I first start openHAB, wait until it is running, and then start HABApp, everything is fine.
System:
- All services running in containers
- Raspberry Pi 4
- OH 3.3.0.M3
- HABApp 0.31.2
Startup is really a pain. However, thanks to @J-N-K, with OH 3.3 and the new HABApp version all these issues should be gone, since he did some work on the REST API.
Since there are other major changes, maybe I should prepare a testable beta version so HABApp works fine when 3.3 launches.
Would you (and anyone else) be interested in testing that version?
Hi Sebastian,
I definitely would be interested in testing a beta/testing version.
Dominik
Not a HABApp question per se, but I figured I'd ask the experts here, and offer to test the above beta version as well, as I have lots of items restored on startup too.
I've just started using GitHub as I try to rebuild a 3D printer that requires lots of merged settings, and finally figured that I should do the same for my OH configuration.
Even though I'd set up a private repo, I'm still nervous about posting API keys, etc.
Would it be best to store that info in HABApp parameters, or as OS environment variables? At the moment they are all just in the code.
I tinkered with parameters the other day but didn't find it as easy to use as I had imagined, but that is a function of my capabilities more than the code.
Oh that's easy. We all have secret values. No matter which config file you put them in, just don't commit the files into source control. Always keep them local. It can be somewhat painful to reconcile the diff, but typically config files don't change often. It would also be easier to keep the dev env separate from the production env. This way you will not commit the secrets by mistake.
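One low-overhead way to keep secrets out of the repo entirely is the environment-variable option raised above. A minimal sketch; the variable name WEATHER_API_KEY and the helper get_secret are made up for illustration:

```python
import os
from typing import Optional


def get_secret(name: str, default: Optional[str] = None) -> str:
    """Read a secret from an OS environment variable, failing loudly if it is missing."""
    value = os.environ.get(name, default)
    if value is None:
        raise RuntimeError(f'Secret {name} is not set')
    return value


# Inside a rule this could then be used as e.g.:
# api_key = get_secret('WEATHER_API_KEY')
```

For Docker the variables would be set in the container config, so nothing secret ever needs to live next to the rules.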
I'd like to get them all into a 'secrets.yml' file that is accessible to every rule... just not sure what has the least overhead in terms of complexity, etc. Possibly centralizing parameters as well, which I haven't yet tackled. Then I can just back up that one file periodically and locally.
That's how I do it. I created a parameter file that stores my API keys/passwords. Then from the corresponding rule I just load the data out of the parameter file.
If you have trouble with parameter files you could create another thread and we'll look at it together.
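For reference, such a parameter file might look like the sketch below (the file name secrets.yml and all keys are hypothetical); a rule could then read a value with something along the lines of HABApp.Parameter('secrets', 'api_keys', 'weather'):

```yaml
# params/secrets.yml - hypothetical example, keep this file out of source control
api_keys:
  weather: 'abc123'
  unifi: 'def456'
```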