I have a program that I use as a front end to openHAB that is fully voice controlled. I currently have to enter each command into its interface and then use a curl command to "tell" openHAB what to do. I basically update items via the REST API, so I can control openHAB any way I want by voice. This works really well.
Voice Command: Turn on the bedroom light.
Voice Response: Turning on bedroom light
Action: curl.exe --header "Content-Type: text/plain" --request POST --data "ON" http://ipaddr:port/rest/items/slBedroomlight
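For anyone scripting this outside of curl: the same REST call can be made from any language. Here's a minimal Python sketch of the POST above (the host and port are placeholders, just like in the curl line):

```python
import urllib.request

def item_command(base_url, item, value):
    """Build the POST request that sends a command string to an openHAB
    item via the REST API -- the same call the curl line above makes."""
    return urllib.request.Request(
        f"{base_url}/rest/items/{item}",
        data=value.encode("utf-8"),
        headers={"Content-Type": "text/plain"},
        method="POST",
    )

# Example: turn on the bedroom light (substitute your real host:port).
req = item_command("http://ipaddr:port", "slBedroomlight", "ON")
# urllib.request.urlopen(req)  # uncomment to actually send it
```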
So this works really well and really fast; the light is usually on before the response finishes. But now I want to ask openHAB a question, for example, "What is the thermostat set to?"
The voice program accepts a simple web command to speak a line of text back to me. To use that, I have an openHAB switch I turn "on" through the REST API. The rule for that switch generates a string to send to the voice program to speak back to me.
Voice Command: What is the thermostat set to?
Voice Response: Checking
Action: curl.exe --header "Content-Type: text/plain" --request POST --data "ON" http://ipaddr:port/rest/items/speakThermostat
The rule for speakThermostat generates a custom response and uses an HTTP GET request (the sendHttpGetRequest action) to send the text back to be spoken:
strResponse = "the thermostat is set to 76 degrees"
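The "send the text back" half has the same shape in any language. I don't know the exact URL each voice program expects (the `/speak` path below is a made-up placeholder), but the pattern is: URL-encode the response string and issue a GET, which is what the rule's HTTP GET action does:

```python
from urllib.parse import quote

def speak_url(voice_host, text):
    """Build the GET URL that asks the voice front end to speak `text`.
    The /speak?text= path is hypothetical -- substitute whatever web
    command your voice program actually accepts."""
    return f"{voice_host}/speak?text={quote(text)}"

url = speak_url("http://mirror:port", "the thermostat is set to 76 degrees")
# urllib.request.urlopen(url)  # the rule's HTTP GET equivalent
```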
OK, so this is working great. Lots of switches and setup to do, but it works. If anyone has better ideas on parsing the data on the front end vs. the back end of this, I'm open to suggestions.
Here's the problem (finally):
I want to have multiple voice controls throughout the house. They are built into "magic mirrors" in different rooms. But if I have more than one, how do I know which computer initiated the request? What if two people in different rooms ask a question at the same time? I would need some type of unique instance of a variable for each request. How does openHAB handle this? Am I missing something simple?
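One way to answer "which computer asked?" without any shared variable is to give each mirror its own set of items by encoding the room in the item name, so every request already carries its origin. A sketch of that naming scheme (the room names and the underscore convention are my own assumptions):

```python
# Each mirror knows its own room id and appends it to the item it
# toggles, so a per-room rule can route the spoken reply back to the
# mirror that asked. Room names here are hypothetical.
ROOMS = ["Bedroom", "Kitchen", "Office"]

def room_item(base_item, room):
    """Map a generic action to a per-room item, e.g.
    speakThermostat + Bedroom -> speakThermostat_Bedroom."""
    if room not in ROOMS:
        raise ValueError(f"unknown room: {room}")
    return f"{base_item}_{room}"
```

With one item (and one rule) per room, two simultaneous questions from two rooms never touch the same item, so there is nothing to collide.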
Another idea is to send a simple string to one item in openHAB, sort of like voicecommand, and then parse out the item. I could combine the machine identifier and the switch name, pass it to openHAB, take a lock, parse the string, do the commands, then unlock. What happens if another command comes in at the same time? Does openHAB cache the string until I'm done with the first one, or is it overwritten? Can I access a switch based on a passed string variable without massive case statements to get to the switch? Is this a proper use of locks? I don't know enough about the capabilities of the REST API, and it's probably pretty easy to send two data items at once.
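On the "one item plus parsing" idea: if the payload looks something like machine|command, the receiving side can split it and look the handler up in a table instead of a case ladder. A sketch, with the separator, names, and canned replies all assumed for illustration:

```python
# Hypothetical payload format: "<machine-id>|<command>", e.g. "bedroom|thermostat"
HANDLERS = {
    "thermostat": lambda: "the thermostat is set to 76 degrees",
    "bedroomlight": lambda: "the bedroom light is on",
}

def handle(payload):
    """Split the combined string and dispatch via the table above,
    with no case statement. Each call works only on the payload it
    was given and carries its own machine id, so two simultaneous
    requests cannot overwrite each other and no lock is needed."""
    machine, _, command = payload.partition("|")
    reply = HANDLERS[command]()   # KeyError here means an unknown command
    return machine, reply         # route the reply back to `machine`
```

Because the dispatch uses only the string passed into `handle()`, each request is self-contained; locking only becomes necessary if two requests share mutable state.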
Since the voice program requires a different command for each action, having openHAB parse out the action would be redundant. I already know the intent of the command when passing it to openHAB, so I don't think I want a whole "AI mother of all parsing" rule.
Anyone have any thoughts or ideas? I was looking at the Echo, as I see the actions binding is coming along really well, but I want to keep everything local and not have Amazon know everything about me, lol. I know my setup is limited by the commands I enter, but in reality, how many things do you really want to ask your house? The voice front end will soon do voice searches on Google and speak the results back to me (Siri or Cortana capabilities). This is only for specific house things.
I did some programming in the '80s and taught myself some C++ in the early '90s, but all my coding experience is pre-internet, so please excuse me if this is novice stuff. I'm pretty rusty at coding, but it's coming back to me slowly, lol. Any help would be appreciated.