```
root@maggie alexa-utterances # patch -p1 < 8599208037ada7020a3ab8c6fc979e31a2ff934c.patch
patching file README.md
patching file index.js
can't find file to patch at input line 100
Perhaps you used the wrong -p or --strip option?
The text leading up to this was:
```
Remove the test.js portion in the patch file (bottom), and it works!
I’ve created an issue for this, and uploaded a working copy of the patch. I’ve also requested the author to apply it to master/npm! Also updated the Alexa-HA documentation on how to apply the patch here.
I’ve also seen some reports of ASK developers having issues with wildcard certs or certs issued by non-trusted CAs. Ultimately you don’t really need all that - I bet you could just self-sign a cert and upload it to your custom AWS ASK Skill.
I want to hit my head against the wall!!! I googled and screwed around with this for hours! Thank you so much, that was the issue, I put back my real certs and everything works great!
Is the next step to make this a binding? Right now I only have office lights working, but I have hundreds of items I would like to configure. I would think that if it were a binding it could have access to those items and not need that part of the config.
Would really like to get rid of Apache. Is it that much work to be able to specify certs?
I see that all your keywords are single words; how would I make it work with ‘Joshua’, ‘Emma’s’, or ‘Master bedroom’?
Any way at all to get rid of ‘ask OpenHAB’?
When OpenHAB is at my service, is there any way to give more than one command without having to ask OpenHAB again?
Excellent! Glad you managed to get it all setup! To answer your questions:
Sort of. Because we can’t make the Echo ‘announce’ stuff, it would be more of an IO add-on than a binding. I have already had a side discussion with the ‘powers that be’ about publishing it as an official OpenHAB Skill, and potentially adding it as a new feature to my.OpenHAB.org. A lot of work will be needed to have a single Skill that supports all houses, create a Web UI for setting up the item->location->name mappings, address all security concerns, and make it as reusable as possible without much end user configuration…
I will experiment tonight using only NodeJS/Express and self signed certs, and let you know the outcome.
Mine are single words for simplicity, but two words should work the same. The important part is that you update the ASK LOCATION_TYPE ‘custom slot’ to perfectly match what you want… Then, maybe enable some of the console.log DEBUG lines in the index.js code so you can see exactly what Alexa ‘heard’ when the intent is hit. Note that the Alexa-HA code strips the dangling word ’ room’ from the location names (So ‘living room’ becomes ‘living’, but ‘bedroom’ remains ‘bedroom’).
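The ‘ room’ stripping described above can be sketched roughly like this (`normalizeLocation` is a hypothetical name; the actual logic lives in index.js):

```javascript
// Strip a trailing ' room' (with the leading space) from what Alexa heard,
// so 'living room' becomes 'living' but 'bedroom' stays 'bedroom'.
function normalizeLocation(heard) {
  return heard.toLowerCase().replace(/ room$/, '');
}

console.log(normalizeLocation('Living Room'));    // 'living'
console.log(normalizeLocation('bedroom'));        // 'bedroom'
console.log(normalizeLocation('master bedroom')); // 'master bedroom'
```

Note the regex anchors on a space plus ‘room’ at the end of the string, which is why ‘bedroom’ survives untouched.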
Not today; this is required for any custom Skill AFAIK… I believe the ‘big’ vendors like Philips/Samsung/Wemo/etc. must have struck a relationship with Amazon to get the deeper integration (without ‘ask …’). As this project matures I intend to request the same! The reason the Amazon Echo HA Bridge can do it is because it’s simply emulating a Hue bridge…
I am working on that. I initially had it wired in so Alexa-HA ‘reprompts’ the user after each command. For example, say ‘Alexa, open OpenHab’ and wait. After 10 seconds it will ask you again. That became awkward to use when enabled for every command, so I took it out at the last minute. Ideally I want to find a way to issue many commands all at once, like ‘Alexa tell OpenHab to turn on the office lights, office fan, office PC’, etc.
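A rough sketch of how a multi-item utterance could be split into individual commands; `parseTargets` is a hypothetical helper, not something Alexa-HA does today:

```javascript
// Split a multi-target utterance on commas and the word 'and',
// yielding one target phrase per intended command.
function parseTargets(utterance) {
  return utterance
    .split(/,|\band\b/)   // 'and' as a whole word only
    .map(s => s.trim())
    .filter(Boolean);     // drop empty fragments
}

console.log(parseTargets('the office lights, office fan and office PC'));
// [ 'the office lights', 'office fan', 'office PC' ]
```

Each resulting phrase would then go through the normal item/location matching as if it were its own command.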
And again, thanks for your donation! Part of that has already been used to pre-order the new Echo Dot so I can begin testing with multiple Echos in the house, and teach Alexa how to control everything in the home theater.
Lots more to do, but it’s certainly coming together well! Be sure to keep an eye on the development branch, where all the latest and greatest improvements are being worked on:
I thought of that too during initial development, and have a TODO for automatic discovery via ‘/rest/items’. The problem I ran into with this, which is why I ended up doing it via Alexa-HA configuration, is that the item names could be set to anything (i.e. arbitrary, varying between OpenHAB setups) in your OpenHAB items file. Alexa-HA needs to know what/where before it can issue an action, and to handle edge cases (i.e. multiple lights in a room you want to control individually), I found it to be more flexible AND reusable for everyone this way, albeit more of a pain to get set up.
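For reference, a discovery sketch over ‘/rest/items’ might look like the following; the response shape here is an assumption, and as noted above, the names it returns are whatever the user chose in their items file, which is exactly the problem:

```javascript
// Sketch: build a lookup from an OpenHAB /rest/items response.
// Field names are assumptions and may differ between OpenHAB versions.
function buildItemMap(items) {
  const map = {};
  for (const item of items) {
    map[item.name.toLowerCase()] = item;
  }
  return map;
}

// Names like these are arbitrary user choices, so Alexa-HA cannot
// reliably infer what/where from them alone:
const items = [
  { name: 'Office_Light', type: 'SwitchItem' },
  { name: 'Light_GF_2', type: 'SwitchItem' }
];
console.log(Object.keys(buildItemMap(items))); // [ 'office_light', 'light_gf_2' ]
```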
I looked at this; I think it would work if you could get descriptions, but it looks like you only get item names over REST, and I am not sure that is what we are looking for. Does anyone know if a binding has access to item descriptions?
I guess that even if you had descriptions I would run into problems, because I have several items with the same description. I think the best approach would be to add a new item field for a speech tag. You could add it just to the items you want, and then the binding could see all of them without binding-specific config.
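The speech-tag idea could look something like this; the `speechTag` field is purely hypothetical (OpenHAB’s REST API does not expose such a field today):

```javascript
// Sketch of the proposed speech-tag lookup: only items carrying a tag are
// exposed to Alexa, so no binding-specific configuration is needed.
function speechItems(items) {
  const byTag = {};
  for (const item of items) {
    if (item.speechTag) byTag[item.speechTag.toLowerCase()] = item.name;
  }
  return byTag;
}

const items = [
  { name: 'Light_GF_Office', speechTag: 'office lights' },
  { name: 'Light_GF_Office2' } // no tag: not voice-controllable
];
console.log(speechItems(items)); // { 'office lights': 'Light_GF_Office' }
```

Because the tag is chosen for speech (not reused as an item name or description), duplicates like several items sharing one description stop being an issue.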