Voice feedback spoken by the app?

Hi.

Is there a way to let the Android app speak?
My idea was to get a better response to voice commands. I have built a nice rule that understands a lot of commands to control my home via voice, but somehow something is missing.
When I interact with my home via voice, it should answer.
Sure, I could do this with the “say” command, but that just plays the voice output on the device openHAB is installed on (a Raspberry Pi 2 in my case).

Is there a way to get this voice output on the device (and ideally only on the device) that sent the voice command?
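
A minimal sketch of what I mean, assuming the usual `VoiceCommand` String item the Android app writes recognised speech to (the other item names are just examples from my setup):

```
// Sketch: match keywords from the recognised speech and answer with say(),
// which only plays audio on the openHAB host itself (the Raspberry Pi).
rule "Voice control"
when
    Item VoiceCommand received command
then
    val text = receivedCommand.toString.toLowerCase
    if (text.contains("light") && text.contains("on")) {
        sendCommand(Light_Living, ON)            // Light_Living is an example item
        say("The living room light is now on")   // spoken on the Pi, not on the phone
    }
end
```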

You could do it with Tasker…
Make a rule to pass your response to Tasker when a voice command occurs.
You could then parse it in Tasker and use the built-in Google TTS, or even a third-party TTS like IVONA.
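
A rough sketch of how such a rule could push the response text to the phone, assuming AutoRemote is used as the transport (the key is a placeholder, and the `say=:=` prefix is just whatever your Tasker profile listens for):

```
// Sketch: forward the spoken response to the phone via AutoRemote;
// a Tasker profile matching the "say" prefix hands the message to the
// local TTS engine (Google TTS, IVONA, ...).
rule "Voice response via Tasker"
when
    Item VoiceCommand received command
then
    val response = "Living room light switched on".replace(" ", "%20")   // URL-encode the text
    // <YOUR_KEY> is a placeholder for your personal AutoRemote device key
    sendHttpGetRequest("https://autoremotejoaomgcd.appspot.com/sendmessage?key=<YOUR_KEY>&message=say=:=" + response)
end
```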


Maybe look at this topic:


Thank you both.

This is a good starting point, but I still have to figure out how to make sure that only the device that sent the voice command receives the voice answer.

Somehow it would be nice if this were possible out of the box, like the voice command itself. Maybe simply via a second item, VoiceAnswer, which the app monitors for some time after it has sent a voice command. This would be very simple to implement.
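
Just to illustrate the idea, a sketch of how a rule could feed such a (for now hypothetical) VoiceAnswer item; the missing piece would be the app side that watches the item and speaks it:

```
// Sketch: after handling a voice command, post the answer text to a
// String item "VoiceAnswer" that the app could monitor and speak.
rule "Answer voice command"
when
    Item VoiceCommand received command
then
    val text = receivedCommand.toString.toLowerCase
    if (text.contains("temperature")) {
        // Temperature_Living is an example item
        postUpdate(VoiceAnswer, "It is " + Temperature_Living.state + " degrees in the living room")
    }
end
```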


@LarsK @mashborn

I have been thinking about my voice command structure for a while, as I use voice with my Android devices (phone and Moto 360), an Ubi voice-controlled computer, and now with openHAB. I have been looking for a way to centralise my voice actions, and suspect this will be best done using Google + AutoVoice and Tasker, with distribution over MQTT, REST calls and AutoRemote.

Eventually, I want to be able to say something to any of my voice entry points (phone, watch, Ubi, other future devices) and get the same result, without having to remember which system links with which voice entry portal.

This is a WIP mind map showing my current flows, though I will want to adapt it over time…

EventGhost on the HTPC handles things like Media Center, Kodi, Netflix etc. openHAB does all my environment and lights. The Ninja Block is now mainly used to accumulate sensor data from 433 MHz devices and push it to openHAB via MQTT.
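
To make the MQTT leg concrete, a small sketch of how openHAB could publish a voice response onto the broker for the other systems to pick up, assuming the openHAB MQTT action with a broker named "mosquitto" in openhab.cfg (the item and topic names are made up):

```
// Sketch: fan a voice response out over MQTT so any subscriber
// (Tasker via an MQTT client, EventGhost, the Ninja Block, ...) can react.
rule "Distribute voice response over MQTT"
when
    Item Voice_Response received update
then
    publish("mosquitto", "home/voice/response", Voice_Response.state.toString)
end
```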

Any thoughts on integration?


Cool, I’ve got nearly the same setup, except for the Ubi part.
I have not yet found the perfect place for the voice-to-action definitions.

I think we need a platform-independent web service that combines intelligent word/grammar detection and responds with an action. Like a private, Google-like semantic voice search…

I have about 10 commands defined just to lower the blinds in the living room, because I want to speak naturally and not have to learn a single special phrase.
But that would not scale to all possible voice actions.

And I think the most interesting part would be if voice actions worked not only as item commands, but also as a kind of temporary rule.

Let me explain:
It’s easy to say and to handle “Switch on light living room”.
But I want to say: “Switch off light living room at 11 pm today”.
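
A very rough sketch of how such a "temporary rule" could be faked today with a timer (the parsing is deliberately naive and the item name is an example):

```
// Sketch: if the spoken command contains a time, schedule the action
// with a timer instead of executing it immediately.
rule "Delayed voice action"
when
    Item VoiceCommand received command
then
    val text = receivedCommand.toString.toLowerCase
    if (text.contains("switch off light living room") && text.contains("at 11 pm")) {
        // schedule for 23:00 today instead of switching right away
        createTimer(now.withTimeAtStartOfDay.plusHours(23)) [ |
            sendCommand(Light_Living, OFF)
        ]
    }
end
```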

Check out Wit.ai
This is what I’m basing my voice control on. The testing I’ve done so far has shown it to be an extremely flexible and capable platform for deciphering what the end user actually wants.
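
For anyone curious, a rough sketch of how a rule could hand the raw voice text to the Wit.ai HTTP API (not necessarily how it is done in that setup; the token is a placeholder, and curl is used via executeCommandLine because, as far as I know, the plain HTTP action cannot set the Authorization header):

```
// Sketch: ask Wit.ai what the user meant; the JSON reply contains the
// detected intent and entities, which a rule could then map to item commands.
rule "Send voice text to Wit.ai"
when
    Item VoiceCommand received command
then
    val query = receivedCommand.toString.replace(" ", "%20")
    // <WIT_TOKEN> is a placeholder server access token
    val json = executeCommandLine("curl@@-s@@-H@@Authorization: Bearer <WIT_TOKEN>@@https://api.wit.ai/message?q=" + query, 5000)
    logInfo("wit", json)   // parse intent/entities from the JSON here
end
```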

This looks exactly like what I’m looking for, except that it is only available in English…
But thanks for the tip!

It actually supports 11 languages right now, though I’m not sure whether they all have speech-to-text support. I needed Estonian, so I created the necessary translation files and submitted a pull request to get Estonian included. There’s no STT support for it, but I don’t actually need it, since I’m running that on my own server.

Sign up and check the language options when creating a new app to see if the language you need is available.

@thucar Can you expand on your setup? This has been on my to-do list for a while, and I’m very interested to hear how it works and to try it out myself.