openHAB testing framework

Hi,

as my home automation system keeps growing, every configuration change and every update is a bit of a risk whose impact I cannot really estimate in advance. It is hard to tell how long a change will take, and even after it is done I am left unsure whether there are side effects I have not discovered yet.

As a software developer I am used to writing tests for every line of code I write, and I want to do that for my openHAB configuration too. So I am asking whether there is already something I can use, or whether this is something I could eventually contribute to the openHAB ecosystem.

I read some threads about duplicating the production system as a kind of test system, using the event bus as the connection. But even with this approach I would have to check the whole functionality of my home automation manually.

I was thinking of some kind of test framework that allows describing the behavior of the home automation system, e.g. (Gherkin style):

GIVEN sensor A’s value is 10
AND item B is OFF
WHEN sensor A’s value changes to 15
THEN item B is ON

So my questions are:

  • Is there something similar to what I described?
  • If not, is there any interest in such a framework?

It sounds like you want to test your automation rather than actual openHAB code. Is that correct?

I haven’t tried it, but you may be able to get Gherkin and Cucumber working in jRuby using scripted automation. The Jython helper libraries also have a testing module. Scripted automation allows you to do pretty much anything. There’s really no way to test the DSL, since it is largely based on magic.

I believe @bob_dickenson has set up a pretty extensive testing framework, first for his Rules DSL rules and then for his Jython rules, IIRC. Perhaps he has some advice.

I’ve tried to Google for this, but came up with nothing. What is this, and how can we use it? Is there a link I missed in my search?

You should find everything starting here…

… but let me know if you have any questions!


Exactly. I will definitely give your PR a try and will get back to you with my questions. It looks promising so far.

Oh, I’ve got all of that set up and am successfully running Jython rules, that’s no issue! I’m specifically wondering about the testing module…

If you’re using the add-on, the helper libraries are included and you can import core.testing…
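For example, something along these lines should work as a starting point. This is an untested sketch that uses plain unittest, which I believe core.testing builds on; the item names are placeholders, and the core.jsr223.scope import is the helper libraries' way of reaching the script scope, so double-check both against the library source:

    # Untested sketch: run a unittest-style check from a Jython script in openHAB.
    # "Sensor_A" and "Item_B" are placeholder item names.
    import unittest
    from core.jsr223.scope import events, items  # helper libraries' access to the script scope

    class ItemBTest(unittest.TestCase):
        def test_item_b_turns_on(self):
            events.postUpdate("Sensor_A", "15")
            # A real test should wait here (sleep or poll) until the rule has fired.
            self.assertEqual(str(items["Item_B"]), "ON")

    unittest.TextTestRunner(verbosity=2).run(
        unittest.TestLoader().loadTestsFromTestCase(ItemBTest))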

I have not spent much time with this module or the documentation for it. It would be great if you could post or submit some examples, or help fill in the documentation!


I would also be interested in something that lets you test your rules with dummy items, for example.
It is really hard to test a new/modified rule against every case, so it would be great to have some kind of sandbox mode where I can easily change item states (through a UI maybe) and run a rule against them, without changing the actual items but only emulating them (as if you were only doing postUpdate calls), and then easily leave that mode again, restoring all items to their actual states.

This is a rather complex thing, but I have rather complex rules which depend on many factors and I can’t easily test them right now. I know that I could set up a separate server with the same config/items and remove the binding/channel links, which would achieve the same, but again, that is not an easy solution.

Scripted automation can add/remove bindings, add/remove Things, add/remove Items, start discovery, send commands, send updates, etc. Anything you need to do. The testing module helps and provides some reporting, but it’s all just scripting.
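As a rough, untested sketch of what that can look like with the Jython helper libraries (the add_item/remove_item helpers live in core.items and the item name is a placeholder, so check the signatures against the library source):

    # Untested sketch: create a throwaway test Item, drive it, then clean up.
    from core.items import add_item, remove_item
    from core.jsr223.scope import events, items

    add_item("Test_Dummy_Switch", item_type="Switch")  # placeholder name
    try:
        events.postUpdate("Test_Dummy_Switch", "OFF")
        events.sendCommand("Test_Dummy_Switch", "ON")
        # Wait (sleep or poll) for any rules triggered by the command before asserting.
        assert str(items["Test_Dummy_Switch"]) == "ON"
    finally:
        remove_item("Test_Dummy_Switch")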

With HABApp it is possible to connect to an openHAB instance in read-only mode. That way you can keep running your new rules alongside the old ones and then switch over once they run properly.

This is indeed a very promising starting point. The possibilities of the scripting interface seem to be (almost) endless.
I followed the instructions and was able to successfully execute some first test cases.
I also stumbled upon a Python module named behave which provides the Gherkin syntax for Python.
Will have a deeper look into this.
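For reference, the step definitions for a scenario like the one in my first post could look roughly like the sketch below. It is untested: the @given/@when/@then decorators and the {placeholder} parsing are behave's standard API, while the events/items access via the helper libraries' scope and the item names are assumptions:

    # features/steps/items_steps.py -- untested sketch
    from behave import given, when, then
    from core.jsr223.scope import events, items  # assumes the steps run inside the Jython scripting environment

    @given(u"sensor A's value is {value}")
    def sensor_value_is(context, value):
        events.postUpdate("Sensor_A", value)

    @given(u"item B is {state}")
    def item_b_is(context, state):
        events.postUpdate("Item_B", state)

    @when(u"sensor A's value changes to {value}")
    def sensor_value_changes(context, value):
        events.postUpdate("Sensor_A", value)

    @then(u"item B is {state}")
    def item_b_should_be(context, state):
        # In a real test, wait for the rule to fire before asserting.
        assert str(items["Item_B"]) == state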


I was able to install behave into my Jython environment and call it from a rule. The strange thing is that it returns without any log output and always returns 0, which means that all tests passed. This happens even if I change the steps so that the tests should fail.
Since I am not a Python expert, I was wondering if someone can see what the problem is here.
My test configuration can be found here: Test config repository
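In case it helps with debugging, a minimal way to invoke behave programmatically and capture its output looks roughly like the sketch below. The path is a placeholder, and I believe behave.__main__.main accepts a list of CLI arguments and returns the exit code, but that is worth double-checking against the installed behave version:

    # Untested sketch: run behave from Jython and surface its output explicitly.
    import sys
    from StringIO import StringIO  # Jython/Python 2; use io.StringIO on Python 3
    from behave.__main__ import main as behave_main

    captured = StringIO()
    old_out, old_err = sys.stdout, sys.stderr
    sys.stdout = sys.stderr = captured  # collect everything behave prints
    try:
        rc = behave_main(["/path/to/features", "--no-capture"])  # placeholder path
    finally:
        sys.stdout, sys.stderr = old_out, old_err

    print("behave exit code: {}".format(rc))
    print(captured.getvalue())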