This works fine: the Item is updated whenever some text is recognized by Willow.
The next step is obviously to have these updates trigger commands on various items, depending on the actual text content.
Using a rule with an “item value changed” trigger, I can do it already, but it is a bit brittle and not very flexible when it comes to adding new commands.
I believe there are better approaches here, with things like Natural Language Processing (NLP), but I’m a bit lost among the various pieces I could put together.
I have seen the section about Rule Voice Interpreter in my installation, but the documentation is missing.
I have seen mentions of HABot but I couldn’t figure out if it still applies to my openHAB 3.4.3 installation and if it would actually help with my intention.
I mean, it seems like a full-fledged user interface and not something that would just send commands to an Item based on voice.
In the end, I’m a bit lost and would appreciate any recommendations.
You have three options to process the text on the OH side:
1. Send the text to an Item and parse it in a rule.
2. The built-in interpreter, which I don’t know as much about, but I suspect it has decent NLP capability. I plan on working with it sometime soon.
3. HABot, which provides a different but still relatively basic NLP processor. I think it supports more languages and is a bit more tunable through the semantic model and the ability to apply synonyms. One cool thing about HABot: once the ability to add custom semantic tags is completed, HABot will support and understand those too. So, for example, if you added a new tag called “lamp” and made its parent “light”, based on my experiments HABot will understand that a “lamp” is a “light” and include it in a command like “turn off all the lights in the kitchen”.
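For option 1, here is a minimal sketch of the kind of parsing a rule could do, written for openHAB’s JS Scripting add-on. The Item names (`Kitchen_Light`, `Bedroom_Light`, `Willow_Text`) and the phrase matching are made-up examples, not anything standard:

```javascript
// Hypothetical text parser for option 1. Maps spoken room names to
// (made-up) Item names and looks for on/off keywords in the text.
function parseCommand(text) {
  const t = text.toLowerCase();
  // Spoken room name -> hypothetical Item name.
  const rooms = { kitchen: "Kitchen_Light", bedroom: "Bedroom_Light" };
  for (const [room, item] of Object.entries(rooms)) {
    if (t.includes(room)) {
      if (t.includes("off")) return { item, command: "OFF" };
      if (t.includes("on"))  return { item, command: "ON" };
    }
  }
  return null; // no match: the rule can just ignore the text
}

// Inside a rule triggered by an update to the (hypothetical) Willow_Text
// Item, you would then do something like:
//   const cmd = parseCommand(items.getItem("Willow_Text").state.toString());
//   if (cmd) items.getItem(cmd.item).sendCommand(cmd.command);
```

This is exactly the brittle approach you mentioned: every new phrase or Item means another entry in the map, which is why the other two options exist.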
You choose which one Willow uses at Settings → Voice → Default Human Language Interpreter. As I understand it, Willow calls the OH API endpoint without specifying an interpreter, so you can choose how the text gets processed by setting the default here.
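If you want to test whichever interpreter you set as the default, without Willow in the loop, you can POST plain text to the REST API. The host, port, and token below are placeholders for your own setup:

```shell
# Send a text command to the default human language interpreter.
# Replace openhab.local:8080 and <api-token> with your own values.
curl -X POST "http://openhab.local:8080/rest/voice/interpreters" \
  -H "Content-Type: text/plain" \
  -H "Authorization: Bearer <api-token>" \
  -d "turn off the light in the kitchen"
```

That gives you a quick way to tell whether a problem is in the interpreter itself or in how Willow is calling the endpoint.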
HABot will only be an option if you install the add-on.
OH 4 (I think 3 too but I’m not sure) also supports keyword detection and dialogs but I know nothing about those yet nor how/if they are relevant to Willow given how the integration works right now. But it will be awesome if we can figure that out.
The built-in interpreter is not playing nice with Willow: no matter what I say, it always replies with an HTTP 400 error code. And yet I see nothing in the openHAB log.
I installed HABot and I understand that I would first need to make it work with commands that I type on its interface for it to do something useful when given a text from willow. And that’s where I’m facing a bit of a challenge.
I mean, I read various posts from you and others about how difficult it is to get this right, with a lot of synonyms to be added.
I’ll have to think about how I tackle this, considering the number of things that I want to control by voice and balancing it with the time it costs to set it up.
400 means bad request. I do know that there can be a problem if you configure the endpoint in Willow with a trailing slash. Make sure yours is configured correctly (the docs were updated on that point just today or yesterday).
Indeed, but basic stuff should work. If you have a Light-tagged Item in the kitchen, “turn off the light in the kitchen” will work. Also keep in mind that most of the posts you are finding were written before there was a MainUI, at a time when .items files were the only option. Configuring the semantic model that way is a lot of work; it’s much less so now.
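For reference, the old .items-file way of doing it looks roughly like this (Item and group names are made up; `["Light"]` is the semantic tag and the `synonyms` metadata adds alternate spoken names):

```
// Hypothetical .items snippet: a semantically tagged light with synonyms.
// gKitchen would be a group tagged as a Kitchen location for "in the
// kitchen" phrases to resolve.
Switch Kitchen_Ceiling "Kitchen Light" (gKitchen) ["Light"] { synonyms="ceiling lamp, main light" }
```

In MainUI you get the same result by setting the semantic class and the synonyms metadata on the Item’s page, with far less typing.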
But if you are getting a 400 from the built-in NLP, you’re going to get it for HABot too. So focus on fixing that first. Then try stuff using your semantic model as it exists now. Worry about synonyms and such later.