Generally speaking, the scope of Willow is (currently) quite narrow. I’ve done too much too early in the past, and it doesn’t work out well! Once you get past all of the fancy audio processing, wake word detection, speech recognition, etc., all we really do is:
- Speech to text
- Send text somewhere (OH in this conversation)
- OH does whatever you have configured with the transcribed text
- Display the speech-to-text transcript and the result/status from OH on the LCD, and give tone-based audio feedback for success/failure (more on this later)
I’m VERY new to OH (first install ever yesterday!) but here’s the curl equivalent of what we do from the Willow device for OH:

```
curl -u $OH_TOKEN: https://$OH_INSTANCE/rest/voice/interpreters \
  -H 'accept: application/json' \
  -H 'Accept-Language: en' \
  -H 'Content-Type: text/plain' \
  -d 'turn off upstairs fan'
```
This uses the default system configured OH Human Language Interpreter which in my case is currently set to Built-in Interpreter.
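For anyone who wants to poke at the same endpoint from code rather than curl, here is a minimal Python sketch of the equivalent request. The host and token values are placeholders, and the `-u $OH_TOKEN:` form maps to Basic auth with the API token as the username and an empty password:

```python
import base64
import urllib.request

def build_interpret_request(instance: str, token: str, text: str) -> urllib.request.Request:
    # Basic auth with the API token as username and empty password,
    # mirroring curl's "-u $OH_TOKEN:" form
    auth = base64.b64encode(f"{token}:".encode()).decode()
    return urllib.request.Request(
        url=f"https://{instance}/rest/voice/interpreters",
        data=text.encode("utf-8"),
        method="POST",
        headers={
            "Authorization": f"Basic {auth}",
            "Accept": "application/json",
            "Accept-Language": "en",
            "Content-Type": "text/plain",
        },
    )

# Placeholder instance/token, just to show the shape of the call;
# urllib.request.urlopen(req) would actually send it.
req = build_interpret_request("openhab.local", "oh.mytoken.example", "turn off upstairs fan")
```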
Where the OH console reports:
```
10:50:59.543 [INFO ] [openhab.event.ItemCommandEvent       ] - Item 'upstairs_fan_Switch' received command OFF
10:50:59.544 [INFO ] [openhab.event.ItemStatePredictedEvent] - Item 'upstairs_fan_Switch' predicted to become OFF
10:50:59.546 [INFO ] [openhab.event.ItemStateChangedEvent  ] - Item 'upstairs_fan_Switch' changed from ON to OFF
```
For the audio feedback (basically text to speech) like what you’re describing, we’re still hashing out our overall strategy, but it will be similarly straightforward. The initial approach will likely be something as simple as “if the response contains audio, play it instead of the tones or built-in audio chimes for success/failure”. We have a TTS engine in our Willow Inference Server, and there is ongoing work to do TTS on device as well. I’m torn on allowing users to plug in a variety of STT/TTS engines: we are completely committed to an “Alexa or better” experience, and engines we haven’t validated for quality and response time have the very real potential of ruining the experience for the user.
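To make the “play audio if present, otherwise fall back to tones” idea concrete, here is a hypothetical sketch. The response shape and field names (`audio`, `ok`) are assumptions for illustration, not Willow’s or OH’s actual API:

```python
def choose_feedback(response: dict) -> tuple:
    """Hypothetical decision logic: prefer server-supplied audio
    over the device's built-in success/failure tones."""
    if response.get("audio"):
        # Server returned synthesized speech; play it instead of a chime
        return ("play_audio", response["audio"])
    # No audio in the response; fall back to a local tone
    return ("play_tone", "success" if response.get("ok") else "failure")

# Example: a response carrying TTS audio wins over the tones
print(choose_feedback({"audio": "<wav-bytes>", "ok": True}))
# Example: a plain success/failure response maps to a tone
print(choose_feedback({"ok": True}))
print(choose_feedback({}))
```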
Thanks for the pointer to HAbot! This is exactly the kind of feedback I was looking for in this thread. Being so new to OH, I’m not in touch with the ecosystem and how the community is actually using it. I’ll look into it, but generally speaking we’re currently more-or-less aiming for broad compatibility as opposed to requiring any extra steps on the part of the user. The onboarding for Willow and your existing install should be as simple as “point us there and we’ll figure it out”. For now, at least.