HABApp - Easy automation with openHAB

Agreed, and gladly!

But it’s from your example, just so you know. :slight_smile:

        watcher = self.thing.watch_change(60)
        self.thing.listen_event(self.thing_no_change, watcher.EVENT)

here too

Hello @Spaceman_Spiff,

I need to get HABApp event logging under control. I’ve left it for a few months because I was busy with other things but now it’s time. You can see why if you look at the size of the event log and the timestamps. It’s literally gigabytes per day.

I’m having some other performance issues (occasionally simple functions take too long to run), which I suspect may be due to SSD wear – I will certainly replace the SSD, but let’s fix logging first. :slight_smile:

I’m looking for something similar to putting a RegexFilter in log4j2.xml, which does exactly what I’m looking for in openHAB.

<RegexFilter onMatch="DENY" onMismatch="ACCEPT" regex=".* (EnergyMeter|json_2Dtotal|HABApp_Ping).*"/>

Is there an equivalent way of using regular expressions in logging.yml?

If not, what’s the best way of replicating the functionality another way?

You can add a logging.Filter in one of your rules and attach it to the corresponding logger.
An alternative would be to disable the event log altogether.
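For what it’s worth, a stdlib-only sketch of that idea: a logging.Filter that matches on the record *message* with a regex, which replicates what the log4j2 RegexFilter above does. The logger name HABApp.EventBus and the item names in the pattern are taken from this thread; where exactly to attach the filter (e.g. in which rule file) depends on your setup.

```python
import logging
import re

# Same pattern as the log4j2 RegexFilter above
DENY = re.compile(r'.*(EnergyMeter|json_2Dtotal|HABApp_Ping).*')

class EventNameFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        # Return False to drop the record, True to keep it
        return not DENY.match(record.getMessage())

event_log = logging.getLogger('HABApp.EventBus')
event_log.addFilter(EventNameFilter())

# Quick demonstration with a capturing handler
captured = []

class Capture(logging.Handler):
    def emit(self, record: logging.LogRecord) -> None:
        captured.append(record.getMessage())

event_log.addHandler(Capture())
event_log.setLevel(logging.INFO)
event_log.propagate = False

event_log.info('EnergyMeter changed to 5')  # dropped by the filter
event_log.info('LivingRoom_Light ON')       # kept
print(captured)  # ['LivingRoom_Light ON']
```

Filters added at the logger level only apply to records created through that logger, which should be fine here since every event goes through HABApp.EventBus.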

logging.Filter doesn’t seem to be similar – in fact, I don’t understand how it’s applicable at all. Every single event in events.log comes from HABApp.EventBus, so how does filtering by the logger name (‘A.B’, ‘A.B.C’, ‘A.B.C.D’, etc.) help?
I disabled the event log earlier today (or rather, I raised the log level to WARNING) and that at least stopped the flood, but of course now I don’t have an event log at all. It would be really useful if it were possible to configure the EventBus logger with regular expressions to mark certain event names as “debug”; that way it would be possible to exclude them from the log without disabling the log completely.

Anyway, I’ll keep it off for now, because even with it disabled, I’m still having some other performance issues, so they are obviously not event log related.

Would you please help me look at the performance issue? :slight_smile:

Functions are sometimes / often taking too long.

[2022-05-02 19:06:44,706] [            HABApp.Worker]  WARNING | Execution of GarageRemotes.execute took too long: 1.40s
[2022-05-02 19:06:44,730] [            HABApp.Worker]  WARNING |    ncalls  tottime  percall  cumtime  percall filename:lineno(function)
[2022-05-02 19:06:44,732] [            HABApp.Worker]  WARNING |         1    0.000    0.000    1.402    1.402 /config/rules/Garage/GarageRemotes.py:35(execute)
[2022-05-02 19:06:44,734] [            HABApp.Worker]  WARNING |         1    0.000    0.000    1.400    1.400 /usr/local/lib/python3.8/site-packages/HABApp/rule/scheduler/habappschedulerview.py:65(soon)
[2022-05-02 19:06:44,739] [            HABApp.Worker]  WARNING |         1    0.000    0.000    1.400    1.400 /usr/local/lib/python3.8/site-packages/HABApp/rule/scheduler/habappschedulerview.py:20(at)
[2022-05-02 19:06:44,762] [            HABApp.Worker]  WARNING |         1    0.000    0.000    1.400    1.400 /usr/local/lib/python3.8/site-packages/eascheduler/scheduler_view.py:18(at)
[2022-05-02 19:06:44,774] [            HABApp.Worker]  WARNING |         1    0.000    0.000    1.400    1.400 /usr/local/lib/python3.8/site-packages/eascheduler/jobs/job_one_time.py:11(_schedule_first_run)
[2022-05-02 19:06:44,775] [            HABApp.Worker]  WARNING |         1    0.000    0.000    1.399    1.399 /usr/local/lib/python3.8/site-packages/eascheduler/jobs/job_base.py:37(_set_next_run)
[2022-05-02 19:06:44,777] [            HABApp.Worker]  WARNING |         1    0.000    0.000    1.399    1.399 /usr/local/lib/python3.8/site-packages/HABApp/rule/scheduler/scheduler.py:49(add_job)
[2022-05-02 19:06:44,780] [            HABApp.Worker]  WARNING |         1    0.000    0.000    1.386    1.386 /usr/local/lib/python3.8/concurrent/futures/_base.py:416(result)
[2022-05-02 19:06:44,781] [            HABApp.Worker]  WARNING |         1    0.000    0.000    1.386    1.386 /usr/local/lib/python3.8/threading.py:270(wait)
[2022-05-02 19:06:44,783] [            HABApp.Worker]  WARNING |         2    1.386    0.693    1.386    0.693 {method 'acquire' of '_thread.lock' objects}
[2022-05-02 19:06:44,785] [            HABApp.Worker]  WARNING |         1    0.000    0.000    0.012    0.012 /usr/local/lib/python3.8/asyncio/tasks.py:911(run_coroutine_threadsafe)
[2022-05-02 19:06:44,786] [            HABApp.Worker]  WARNING |         1    0.000    0.000    0.012    0.012 /usr/local/lib/python3.8/asyncio/base_events.py:762(call_soon_threadsafe)
[2022-05-02 19:06:44,788] [            HABApp.Worker]  WARNING |         1    0.000    0.000    0.011    0.011 /usr/local/lib/python3.8/asyncio/base_events.py:738(_call_soon)
[2022-05-02 19:06:44,789] [            HABApp.Worker]  WARNING |         1    0.000    0.000    0.011    0.011 /usr/local/lib/python3.8/asyncio/events.py:32(__init__)
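For reference, the pattern at the bottom of this profile – a worker thread submitting a coroutine to an asyncio loop via run_coroutine_threadsafe() and then blocking in future.result() (internally `_thread.lock.acquire`) – can be reproduced with the stdlib alone. This is only my assumption about what HABApp does internally, based purely on the stack above:

```python
import asyncio
import threading
import time

# A worker thread hands a coroutine to the asyncio loop and then blocks
# until the loop thread actually runs it.
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

async def busy():
    time.sleep(0.2)  # simulate the event loop being busy with something else

async def quick():
    return 42

asyncio.run_coroutine_threadsafe(busy(), loop)   # keep the loop occupied
start = time.monotonic()
future = asyncio.run_coroutine_threadsafe(quick(), loop)
result = future.result()   # blocks here, like the 1.386 s in the profile
elapsed = time.monotonic() - start

print(result)            # 42
print(elapsed > 0.1)     # True: the wait came from the busy loop, not quick()
loop.call_soon_threadsafe(loop.stop)
```

If the profile is representative, the time isn’t spent in the rule function itself – the worker is waiting for the event loop to pick the job up, which would point at something else keeping that loop busy.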

Look at what’s actually in this function:

	def execute(self, reps):
		if reps == 0:
			self.current_value = self.oh_item.get_value()
		self.current_value += self.direction
		if self.current_value < 0:
			self.current_value = 0
		if self.current_value > 100:
			self.current_value = 100
		log.info(f'exec: {self.direction} {self.current_value}')

The only thing that could conceivably take time is the openHAB item get_value() call!

I’ve seen this previously, but now that I’m trying to implement a “hold down to repeat” volume control action, all of a sudden it became more critical. :slight_smile:

I have already set org.openhab.restauth:allowBasicAuth=true

I’m running Docker on a RPi 4 with 8 GB RAM and a 64 GB SSD. It’s very, very lightly loaded: all it’s running is HABApp, Mosquitto, openHAB, and the UniFi controller.

Memory stats are:

               total        used        free      shared  buff/cache   available
Mem:         7999496     2060940     2335128         728     3603428     6156596
Swap:         102396        3348       99048

Any idea where to look?

Unfortunately I don’t have any ideas regarding your performance issues. On my RPi4 I have openHAB with HABApp, the TP-Link Omada Controller, and a Graylog instance running – so far without any performance problems.

               total        used        free      shared  buff/cache   available
Mem:         8000512     4523296     1150080        1908     2327136     3397592
Swap:         102396           0      102396

Regarding logging, I’m using Graylog to consolidate all logs from openHAB and HABApp. That makes it much easier to correlate logs between the different services, and it also allows for filtering and searching for certain expressions.

Thanks for looking, @Dominik_Bernhardt

I’ve started looking into the issue and so far it seems like it may be a thread / event contention issue inside HABApp when many things are happening at once. @Spaceman_Spiff I’ve started a new thread so I don’t flood this thread with this edge-case issue, would really appreciate if you would take a look or three. :slight_smile:

For a while now I’ve been facing a strange openHAB/HABApp boot synchronisation issue. Depending on the boot sequence, openHAB items that are restored on startup do not have the correct value in HABApp. In openHAB the restored value is shown, while HABApp still thinks the item is None.
As an example I have shown a sequence for one particular item.
The first event is from the openHAB log. The next shows the event in HABApp, and the last is a log output from a testing rule printing the value.
As you can see, the update event reaches HABApp, but for some reason the value is not updated.

The issue does not appear every time, but it does in most cases if I completely reboot the machine or restart all services. If I first start openHAB, wait until it is running, and then start HABApp, everything is fine.


  • All services running in containers
  • Raspberry Pi 4
  • OH 3.3.0.M3
  • HABApp 0.31.2

Startup is really a pain. However, thanks to @J-N-K’s work on the REST API, all these issues should be gone with OH 3.3 and the new HABApp version.

Since there are other major changes, maybe I should prepare a testable beta version so HABApp works fine when 3.3 launches.
Would you (and anyone else) be interested in testing that version?

1 Like

Hi Sebastian,

I definitely would be interested in testing a beta version.


Not a HABApp question, per se, but I figured I’d ask the experts here, and offer to test the above beta version as well as I have lots of items restored on startup as well.

I’ve just started using GitHub as I try to rebuild a 3D printer that requires lots of merged settings, and I finally figured that I should do the same for my OH configuration.

Even though I’d set up a private repo, I’m still nervous about posting API keys, etc…

Would it be best to store that info in HABApp parameters, or as OS environment variables? At the moment, they are all just in the code.

I tinkered with parameters the other day but didn’t find them as easy to use as I had imagined, but that is a function of my capabilities more than the code.

Oh that’s easy. We all have secret values. No matter which config file you put them in, just don’t commit the files into source control. Always keep them local. It can be somewhat painful to reconcile the diff, but typically config files don’t change often. It would also be easier to keep dev env separate from production env. This way you will not commit the secrets by mistake.

I’d like to get them all into a ‘secrets.yml’ file that is accessible to every rule… just not sure what has the least overhead in terms of complexity, etc… possibly centralizing parameters as well…which I haven’t yet tackled. Then I can just back up that one file periodically and locally.
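In case it helps, a stdlib-only sketch of that idea (JSON instead of YAML only so the example has no extra dependency; the file name and key names are made up for illustration): keep one local secrets file out of git and fall back to environment variables.

```python
import json
import os
from pathlib import Path
from typing import Optional

# Hypothetical path next to your rules; add it to .gitignore so it never
# ends up in the repo.
SECRETS_FILE = Path('secrets.json')

def get_secret(name: str, default: Optional[str] = None) -> Optional[str]:
    """Look up a secret: environment variable first, then the local file."""
    if name in os.environ:
        return os.environ[name]
    if SECRETS_FILE.exists():
        return json.loads(SECRETS_FILE.read_text()).get(name, default)
    return default

# Usage from any rule (hypothetical key name):
# api_key = get_secret('WEATHER_API_KEY')
```

The same pattern works with a YAML parameter file; the point is just that one function is the single place every rule goes for secrets, and only that one file needs backing up.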

That’s how I do it. I created a parameter file that stores my api keys/password. Then from the corresponding rule I just load the data out of the parameter file.

If you have trouble with parameter files you could create another thread and we’ll look at it together.

Hello @Spaceman_Spiff
Here I am again, this time with a simple beginner question. :slight_smile:

See the question? :slight_smile:
It would save a roundtrip and reduce traffic.
Or, is it just not an issue because the current value of the dimmer is cached in HABApp anyway so no traffic is generated?

Sorry - no. :confused:

Okay, according to the openHAB documentation, dimmer items have an IncreaseDecrease command, which I assume increases or decreases the dimmer value in a single operation, rather than having to request current value, adjust value, and send the command back.
HABApp seems to lack this command, which means we do have to get_value, adjust, and oh_send_command. Is this an oversight? I would think this decreases performance. Or is it just not necessary?

It would be just oh_send_command('INCREASE'), and it’s an oversight.

Hi Seb! Just wanted to let you know, ever since I learned about the docker stats command and found the high CPU load issue, and cleaned out some old test scripts (whatever it was that triggered the issue), I have not ever had a timeout or performance issue! Everything has been working great ever since. I even disabled my MqttDirect hack and everything has been fine without it. Knock on wood, HABApp has been functioning perfectly for weeks now.

Thank you again for all your work, not to mention your patience! :slight_smile:

1 Like

Hi Sebastian! Quick question this time hopefully.

Can I set unique names for the different instances of rules, when using multiple instances of the same rule?

class TestRule(HABApp.Rule):
	def __init__(self):
		super().__init__()

RuleOne = TestRule()
RuleTwo = TestRule()
RuleThree = TestRule()
[2022-06-14 09:28:51,949] [             HABApp.Rules]     INFO | Added rule "TestRule" from rules/Game Room/GameroomTvCommands.py
[2022-06-14 09:28:51,950] [             HABApp.Rules]     INFO | Added rule "TestRule.2" from rules/Game Room/GameroomTvCommands.py
[2022-06-14 09:28:51,951] [             HABApp.Rules]     INFO | Added rule "TestRule.3" from rules/Game Room/GameroomTvCommands.py

They’re assigned TestRule, TestRule.2 and TestRule.3 by default.

The reason why this suddenly matters is that I’m trying to use self.get_rule() from other rules.