I recently purchased a full complement of Sonos Ones for a few rooms of the house and have been integrating them with my openHAB setup. All has gone smoothly except for getting Amazon Echo control to work the way I want using the amazonechocontrol binding.
Everything seems to work fine apart from the all-important item: “lastVoiceCommand”. All other items work as expected, but this one never gets updated (it should receive the text of the last voice command as an update). My plan is to use lastVoiceCommand to drive all kinds of automation, but I’m utterly stuck!
As explained above, all other items given here receive updates and can be controlled as expected – I’ve tried many others and they all work great.
I am running openHAB on Windows 10 with stable version 2.4.0 (although I have also tested this on the latest 2.5.0 stable and snapshot builds). I have also tried the latest beta version of the amazonechocontrol binding (amazonechocontrol_2.5.0.Beta_9_Preview_1). Same behaviour.
If anyone has experienced this, please let me know if there is a way to get it working.
Thanks for the tip. First thing I did was to clear out my manual files and attempt autodiscovery. Unfortunately, it only discovered the flashbriefing thing. However, I didn’t really investigate this angle for long before something dawned on me.
I restored the manual .things approach and checked the Amazon Echo Control dashboard, and realised that it shows two Alexa endpoints for each of my Sonos One speakers; one is helpfully labelled with the room (e.g. “Lounge”) and the other is harder to discern (e.g. “Alex’s 2nd Sonos One”). I was using the former. After creating dummy Things and Items for the latter, I spoke a few commands in each room and worked out which endpoints belonged to which room. Now all Alexa items are reporting correctly.
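For anyone following along, this is roughly the shape of the manual configuration that worked for me (the serial number, Thing IDs, and item names below are placeholders; check the binding dashboard for the real serial numbers of the second set of endpoints):

```
// .things file: one account bridge, one echo Thing per working endpoint
Bridge amazonechocontrol:account:account1 "Amazon Account"
{
    Thing echo lounge "Lounge Sonos One" [serialNumber="XXXXXXXXXXXXXXXX"]
}
```

```
// .items file: linked to the binding's lastVoiceCommand and textToSpeech channels
String alexa_g_LastVoiceCommand "Last Voice Command" { channel="amazonechocontrol:echo:account1:lounge:lastVoiceCommand" }
String alexa_g_TTS              "Text To Speech"     { channel="amazonechocontrol:echo:account1:lounge:textToSpeech" }
```

The key point is that these channels must be linked to the second endpoint for each speaker, not the room-labelled one.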
Problem solved! Thanks again for your help.
As a secondary point, my end goal is to intercept Alexa commands and act on them using rules. I did a quick test and I can certainly get the custom outcomes I want from non-standard commands. However, Alexa naturally still responds to the command verbally first.
In my test example, a rule triggers when lastVoiceCommand contains the words “one two three” and sends a command to the TTS item to respond with the word “four”. In practice, what happens is:
Voice command: “Alexa, one two three”
Alexa: “I don’t know that one” “four”
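The rule behind this test looks essentially like this (a minimal sketch; the item names match my setup above and may differ in yours):

```
rule "Alexa Test 1"
when
    Item alexa_g_LastVoiceCommand received update
then
    // Fire only when the transcribed command contains the trigger phrase
    if (alexa_g_LastVoiceCommand.state.toString.contains("one two three")) {
        alexa_g_TTS.sendCommand("four")
    }
end
```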
My question is whether the default response can be suppressed in some way. If so, I can see this being an extremely powerful tool.
That seems to suggest the command can be issued without a wake word. I haven’t found a way to set my devices up so they don’t need a wake word, so saying “one two three” on its own is not registered at all.
To be clear, thanks to your advice, lastVoiceCommand is now working perfectly. I now want to suppress the default response so that I can fully control what is reported back for the commands I specify in rules.
Here’s another test example, where the rule looks for the string “a b c” and returns “d” via TTS.
This is the rule:
rule "Alexa Test 2"
when
    Item alexa_g_LastVoiceCommand received update
then
    if (alexa_g_LastVoiceCommand.state.toString.contains("a b c")) {
        alexa_g_TTS.sendCommand("d")
    }
end
And in practice:
Voice command: “a. b. c.”
Alexa's response: "I think you want to play the song ABC, is that right?" "D"
As you can see, Alexa carries out the default response and then returns the outcome of the rule. My question is, can the default response be suppressed?
Hope that makes sense. And thanks again for your time.