Control items with a Willow-based voice assistant

I have configured Willow to send the speech it recognizes to a given Item, via the Voice command item option of the Rule Voice Interpreter (under Settings → System Settings).

This works fine; the Item is updated whenever Willow recognizes some text.

The next step is obviously to have these updates trigger commands on various Items, depending on the actual text content.
Using a rule with an “Item value changed” trigger, I can do it already, but it is a bit brittle and not very flexible when it comes to adding new commands.
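
For illustration, a minimal sketch of what my current rule looks like in Rule DSL (the Item names are hypothetical; VoiceCommand stands for the Item Willow writes to):

    rule "Handle voice command text"
    when
        Item VoiceCommand changed
    then
        // Willow writes the recognized sentence into this Item
        val text = VoiceCommand.state.toString.toLowerCase

        // Brittle: every new command needs another exact-match branch
        if (text == "turn on the kitchen light") {
            KitchenLight.sendCommand(ON)
        } else if (text == "turn off the kitchen light") {
            KitchenLight.sendCommand(OFF)
        }
    end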

I believe there are better approaches here, using things like Natural Language Processing (NLP), but I’m a bit lost among the various pieces I could put together.

I have seen the section about Rule Voice Interpreter in my installation, but the documentation is missing.

I have seen mentions of HABot, but I couldn’t figure out if it still applies to my openHAB 3.4.3 installation and whether it would actually help with what I’m trying to do.
I mean, it seems like a full-fledged user interface and not something that would just send commands to an Item based on voice.

In the end, I’m a bit lost and would appreciate any recommendations.

You have three options to process the text on the OH side.

  1. Send the text to an Item and parse it in a rule.

  2. Use the built-in interpreter, which I don’t know as much about, but I suspect it has decent NLP capability. I plan on working with it sometime soon.

  3. Use HABot, which provides a different but still relatively basic NLP processor. I think it supports more languages and is a bit more tunable through the semantic model and the ability to apply synonyms (see the Items sketch after this list). One cool thing about HABot is that once the ability to add custom semantic tags gets completed, HABot will support and understand them too. So, for example, if you added a new tag called “lamp” and made its parent “light”, then based on my experiments HABot will understand that a “lamp” is a “light” and include it in a command like “turn off all the lights in the kitchen”.
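
For instance, a rough sketch of what the semantic model side could look like in an .items file (names are hypothetical; the synonyms metadata is where the alternative words go):

    // Location: a kitchen group carrying the "Kitchen" semantic tag
    Group  gKitchen     "Kitchen"                  ["Kitchen"]

    // A light switch with Point ("Switch") and Property ("Light") tags,
    // plus a synonym for HABot to match on
    Switch KitchenLight "Kitchen Light" (gKitchen) ["Switch", "Light"] { synonyms="lamp" }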

See Multimedia | openHAB for a description of all three.

You choose which one Willow uses via Settings → Voice → Default Human Language Interpreter. As I understand it, Willow calls the OH API endpoint without specifying an interpreter, so you can choose how the text is processed by setting the default here.
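
You can also exercise that path by hand, without Willow, by posting text to the default interpreter through the REST API; a sketch, assuming the standard /rest/voice/interpreters endpoint (host and token are placeholders):

    # Send a sentence to the default human language interpreter
    curl -X POST "http://openhab.local:8080/rest/voice/interpreters" \
         -H "Content-Type: text/plain" \
         -H "Accept-Language: en" \
         -H "Authorization: Bearer $OH_TOKEN" \
         --data "turn off the light in the kitchen"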

HABot will only be an option if you install the add-on.

OH 4 (I think 3 too but I’m not sure) also supports keyword detection and dialogs but I know nothing about those yet nor how/if they are relevant to Willow given how the integration works right now. But it will be awesome if we can figure that out.

Thanks for the feedback.

The built-in interpreter is not playing nice with Willow: no matter what I say, it always replies with an HTTP 400 error code. And yet, I see nothing in the openHAB log.

I installed HABot, and I understand that I would first need to make it work with commands that I type into its interface before it can do something useful when given text from Willow. And that’s where I’m facing a bit of a challenge.
I mean, I read various posts from you and others about how it is quite difficult to get this right, with lots of synonyms to be added.

I’ll have to think about how to tackle this, weighing the number of things I want to control by voice against the time it will cost to set up.

400 means bad request. I do know that there can be a problem if you configure the endpoint on Willow with a trailing slash. Make sure you have that configured correctly (the docs were updated like today or yesterday on that point).

Indeed, but basic stuff should work. If you have a Light-tagged Item in the kitchen, “turn off the light in the kitchen” will work. Also keep in mind that most of the posts you are finding were written before there was a MainUI, at a time when .items files were the only option. Configuring the semantic model that way is a lot of work. It’s much less so now.

But if you are getting a 400 from the built-in NLP, you’re going to get it for HABot too. So focus on fixing that first. Then try stuff using your semantic model as it exists now. Worry about synonyms and such later.

Thanks for the explanation.

One issue that I had not anticipated is that I have configured openHAB to be in English, because that makes it easier to match the documentation and communicate on this forum.
This means that HABot expects English prompts, and while I’m fine with that, all the other users will frown if I tell them to talk in English; they want French.

I’m not sure it’s possible to configure HABot to use its own language, different from the rest of the system.

I don’t know if HABot needs to be told the language. You might try and see. Once installed, you can bring it up and interact with it using text. Try some French queries and see what it does. You’ll probably need to start with English for certain key words like the semantic tags (e.g. “lights”), but you might be able to work around that with synonym metadata.
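
If that works, the synonyms metadata would be the place to attach the French words; a sketch, under the assumption that HABot consults the synonyms namespace (Item names are hypothetical):

    // French synonyms on an English-named Item, so French
    // queries have something to match against
    Switch KitchenLight "Kitchen Light" (gKitchen) ["Switch", "Light"] { synonyms="lumière, lampe" }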

I am also trying to use Willow, but I haven’t even got as far as getting the application server (or is it the client that does that?) to send a command to openHAB (or anywhere else, for that matter). So when I read this, I get hopeful:

Could you explain a bit more how you did this?

How far did you get with integrating Willow with openHAB?

I created an Item manually (i.e., not from a Thing), then went into Settings → System Settings → Rule Voice Interpreter and set that Item as the Voice command item.

Then I can write rules that trigger when the above Item changes and react to the text that has been received.

This is quite cumbersome, because if I test for exact equality with “open sleeping room blinds”, the rule might not trigger: the text as sent by Willow is not always the exact one I’m waiting for.
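
A somewhat less brittle variant, while staying rule-based, is to match on keywords rather than the whole sentence; a Rule DSL sketch with hypothetical Item names:

    rule "Voice command keyword matching"
    when
        Item VoiceCommand changed
    then
        // Normalize case so small wording differences matter less
        val text = VoiceCommand.state.toString.toLowerCase

        // Match on keywords instead of the full sentence
        if (text.contains("sleeping room") && text.contains("blind")) {
            if (text.contains("open")) {
                SleepingRoomBlinds.sendCommand(UP)
            } else if (text.contains("close")) {
                SleepingRoomBlinds.sendCommand(DOWN)
            }
        }
    end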

This is why I’d like to have an NLP approach that would give me the intent behind the text, in the form of an action and a target for instance. But I have not looked at that yet.