Binding Request: Amazon Echo

@belovictor What were your thoughts/ideas for this?

I’m not so sure passing the text would be that simple, unless I’m completely missing something. As I noted, using HTTP within the skill does not seem to work. I am using the username and password for my.openhab.org and coded it just like the examples I linked to above. Since I couldn’t get the Twilio code sample to work either, I am at a loss as to what to try next, since I may be missing something fundamental. I was hoping someone else had better success and could point out what I have been doing wrong.

Yes, I do understand that using the power of the Echo would be nice, but I first wanted to at least get a base working. I am currently using the amazon-echo-ha-bridge and it does work very well with no need to create an Echo skill. But it is limited to commands that control lighting: on/off/dim %. So to close or open something like a garage door you have to say “Turn off (or on) the garage door”. It’s workable but…

Though I will have to say ‘ask openhab to…’ every time, I would prefer a separate skill instead of that bridge.
Well, the example you pointed to is a Lambda service (an Amazon Lambda app). So first you need to be sure your skill calls this Lambda, and after that diagnose whether it works inside. I haven’t had a deep look into how to configure the skill yet, though.
My thought was adding a little bit of context: the Echo will sit in a certain room or space of your house, so you could specify a group to enable things like ‘all lights’ and add some context to the commands you say, to operate controls in that particular area. The main trouble is that we don’t have an addressable hierarchy in openHAB, so you can’t automatically interpret things like ‘main lights in guest room on second floor’. We could agree on a certain item name or label scheme so that you could map parts of it into speakable things. I’m not sure about labels though, because I have a lot of ‘main lights’ or ‘temperature’ labels in my house while those items sit in different groups (mapped to rooms…). So quite a bit of research is needed here on how to map voice to items. I just don’t want to create a huge ‘if/then’ ruleset to process voice commands and edit it every time I add something to my house configuration…

I mentioned this before in my own thread but the way I’m tackling that problem at the moment is by having a hierarchical group structure and assigning items to multiple groups. I don’t know if it’s necessarily the best way (and that’s why I’m always keen to hear other people’s experience/solutions) but it seems to work.

So far I have a ‘location’ group (e.g. living_room, bedroom, bathroom) and a ‘thing’ group (e.g. lights, tvs).

If I say “turn off the bedroom light”, it will first look for an item called “bedroom_light” and, if found, will turn that item off.
If I say “turn off the bedroom lights” (and it can’t find an item called “bedroom_lights”), it will look for items that are members of BOTH the bedroom group and the lights group and turn them all off.
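The two-step lookup described above could be sketched roughly like this (the class, registry, and naming convention here are invented for illustration; this is not the poster’s actual code):

```java
import java.util.*;
import java.util.stream.*;

// Toy sketch of the described lookup:
//  1) try an exact item name ("bedroom_light"),
//  2) fall back to items belonging to BOTH the location group and the thing group.
public class ItemMatcher {
    // item name -> set of group names the item belongs to (toy registry)
    private final Map<String, Set<String>> items = new HashMap<>();

    public void addItem(String name, String... groups) {
        items.put(name, new HashSet<>(Arrays.asList(groups)));
    }

    /** Returns the items a command like "turn off the bedroom lights" should target. */
    public List<String> match(String location, String thing) {
        String exact = location + "_" + thing;   // e.g. "bedroom_light"
        if (items.containsKey(exact)) {
            return List.of(exact);
        }
        // No exact item: take everything that is in both groups.
        return items.entrySet().stream()
                .filter(e -> e.getValue().contains(location) && e.getValue().contains(thing))
                .map(Map.Entry::getKey)
                .sorted()
                .collect(Collectors.toList());
    }
}
```

So “bedroom light” would hit the exact item, while “bedroom lights” would fan out to every member of the intersection.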

Cool. That’s what I meant :smile:
Where is your thread?

Link is below. I’m using Google Now to capture speech to text and wit.ai to process text to intent.

I think the hierarchical structure is great. My system very much fits this mold. In fact, my sitemap is populated and organized based on the groupings, so it would work perfectly.

It is large but manageable and easily scalable.

Actually, mine does work with close/open the garage door commands… it must equate open to ON, etc. Pretty neat, but I would still like to see some further development.

Two questions:

  1. The Echo HA bridge sounds like a good starting point to have some simple integration (at least for lights). As this bridge is implemented in Java, it should be fairly easy to integrate it smoothly into openHAB as a bundle. Did anybody start such an effort? I would love to have that!

  2. For an Alexa skill, I would not want to use “openHAB” as the greeter. Just like the Echo has Alexa, openHAB should have Bob or something like that. So shall we do a poll on what would be a good name for our personal assistant…?

I’d like to be able to customize the name, but that probably wouldn’t be possible if we’re creating a permanent skill for everyone to use.

Mine is called “Gregory” (House).


The Echo HA bridge is a great starting point. I have 15 devices so far and it is VERY fast, both with X10 and Insteon devices through the Insteon PLM binding, and also with RFM69 Arduino nodes.

I know several “trekkies” have been pushing for “computer” as a trigger name on the Amazon forums :smile: That would always be neat… I think any word that is not very common (such as “Alexa”) should help prevent unintentional triggers, although as a skill perhaps this does not matter. It would be fantastic to get it set up natively with Alexa as a skill, opening up additional possibilities. Even the limited functionality through the hue emulator has already proven this a viable and “real deal” voice recognition system.

I’m currently using the echobridge successfully to arm the security system through the DSC Binding, and through recent improvements to that I am also able to enable/disable the chime feature and open the garage door through its output.

I also have a switch item set up to run a rule for changing the thermostats when leaving for the day, and lights too, of course. Switches work very fast, sometimes before she’s said “OK”.

Having the echobridge as a binding would be great; I wish I knew how to code Java well, as I would jump on it.

That’s funny @ubergeek, the very first thing I did when I got my Echo was put in a request to change the wake word from Alexa to “computer”. I am also using the echo bridge project to control about a half dozen lights, my Sonos equipment and my pool. The official API requires an application wake word (“tell openhab to…”), which is not as convenient as just saying “Alexa, turn kitchen lights on”. I now own three Echos! Good voice recognition is a game changer.


@digitaldan knows how to code Java :slight_smile: Haven’t you thought about creating a real bundle for it, so that no additional bridge needs to be set up? As I will get my Echo tomorrow, I am now very interested in it myself and will thus have every possibility for review and testing!

I think you could take each clause as a group name and constrain the set of items returned to only those belonging to all named groups. The example “(main lights) in (guest room) on (second floor)” means the items returned would have to belong to all three groups, main_lights, guest_room and second_floor, no hierarchy needed.
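As a rough illustration of that flat-group idea (all class and item names below are made up), the utterance could be split on connector words, each clause turned into a group name, and only items belonging to every named group kept:

```java
import java.util.*;
import java.util.stream.*;

// Toy sketch: "main lights in guest room on second floor" -> three group
// names -> items that are members of ALL of them. No hierarchy needed.
public class ClauseFilter {
    private static final Set<String> CONNECTORS = Set.of("in", "on", "the");

    /** "main lights in guest room on second floor" -> [main_lights, guest_room, second_floor] */
    public static List<String> clausesToGroups(String utterance) {
        List<String> groups = new ArrayList<>();
        List<String> current = new ArrayList<>();
        for (String word : utterance.toLowerCase().split("\\s+")) {
            if (CONNECTORS.contains(word)) {
                if (!current.isEmpty()) {
                    groups.add(String.join("_", current));
                    current.clear();
                }
            } else {
                current.add(word);
            }
        }
        if (!current.isEmpty()) groups.add(String.join("_", current));
        return groups;
    }

    /** Items (name -> groups it belongs to) that are members of ALL named groups. */
    public static List<String> matchAll(Map<String, Set<String>> items, List<String> groups) {
        return items.entrySet().stream()
                .filter(e -> e.getValue().containsAll(groups))
                .map(Map.Entry::getKey)
                .sorted()
                .collect(Collectors.toList());
    }
}
```

The connector list is obviously simplistic; a real skill would get the clause boundaries from the intent slots instead of string splitting.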

Gee, thanks @Kai :wink: My openHAB project list is bursting at the seams; I admit I need to set aside some time and start coding again. The echo bridge idea is interesting for more than just the Echo. In essence it’s simply a Philips hue emulator, and since many things now support talking directly to a hue device (Logitech remotes, the Echo, etc.), this could be a way to integrate other systems that do not have a convenient API.

I actually just had a look at the code of the HA bridge - it is heavily dependent on Spring (I always thought the original idea of Spring was to not have any dependencies on it within the code itself…), so hardly any of it could be reused within an openHAB bundle. It would rather be a rewrite, using JAX-RS. I think if we want to go this way, it would make sense to do it directly for openHAB 2, since items could be nicely tagged to be exposed to the Echo.
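At its core, what such a rewritten bundle would have to serve is the hue-style JSON for `GET /api/{username}/lights`. A dependency-free sketch of just the payload building (the class and method names are invented; the real hue API defines the exact field set, of which only a minimal subset is shown):

```java
import java.util.*;

// Rough sketch of the hue-style payloads a JAX-RS resource would return.
// Only "state.on", "state.bri", "type" and "name" are modeled here.
public class HuePayload {
    /** JSON body for a single emulated light. */
    public static String lightJson(String name, boolean on, int brightness) {
        return "{\"state\":{\"on\":" + on + ",\"bri\":" + brightness + "},"
             + "\"type\":\"Dimmable light\",\"name\":\"" + name + "\"}";
    }

    /** JSON body for the /lights resource: numeric ids mapped to light objects. */
    public static String lightsJson(List<String> itemNames) {
        StringBuilder sb = new StringBuilder("{");
        for (int i = 0; i < itemNames.size(); i++) {
            if (i > 0) sb.append(",");
            sb.append("\"").append(i + 1).append("\":")
              .append(lightJson(itemNames.get(i), false, 0));
        }
        return sb.append("}").toString();
    }
}
```

In a real bundle this would of course use a proper JSON library and a JAX-RS resource class, with the item names coming from tagged openHAB items rather than a plain list.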

Has anyone looked at the fauxmo project to emulate WeMo switches for the Echo? I’m wondering whether a WeMo emulator would be better, worse, or the same as the Echo HA bridge.

http://www.makermusings.com/2015/07/18/virtual-wemo-code-for-amazon-echo/

GitHub - makermusings/fauxmo: Emulated Belkin WeMo devices that work with the Amazon Echo

I just received my Echo yesterday, so I haven’t hooked it up with openHAB yet other than via an IFTTT trigger.

I have also had my Echo for two weeks now and successfully hooked it up to openHAB through the hue-echo-bridge.
I just checked the code for the WeMo emulation, but this doesn’t seem to be any better. It still supports only on/off commands (while with hue you can at least also dim), and you need an incoming socket per device that you want to emulate. The hue approach is better there, since you only have to emulate the bridge itself.

I am not sure whether I will soon (ever?) find time to do an implementation for openHAB 2, although I would love to have it. So if there is anybody else with fewer time constraints, please let me know and I’ll be happy to support where I can!

I’m using IFTTT with the Echo to make openHAB do things like “Open the Front Gate” etc… It does annoyingly say “Sending that to IFTTT” after I say my command but at least it works…

I’d like to see them open up the inbound stuff so we can use the Echo as a TTS speaker for announcing things like “You’ve left the garage door open”… Here’s hoping they enable that one day.
