Some understanding problems with the new girl in da house

@jeshab sorry to bother you directly but maybe you can shed light on this. I have not found any proper explanation elsewhere.
Could you please explain how the matching of voice commands to “rooms” works?
Does Alexa match the label ONLY, so that I have to label all my lights “roomX ceiling light” as written in https://github.com/openhab/openhab-alexa/blob/master/USAGE.md#item-labels ?
Is the label interpreted as a single string? What if I ask Alexa with the word order reversed, i.e. to “turn on ceiling light roomX” or even “ceiling lights in roomX”?
Is it really the case that “filler” words like in/at/… are ignored even if they are in between the two halves of the label string?
I always thought that was a sign of semantic parsing, i.e. Alexa separately recognizes “ceiling lights” as an item and “roomX” as a location or group, then first looks for all items matching “ceiling lights” and then selects from these the one that best matches group roomX.

Would it make sense to duplicate items, giving the duplicate a label with reversed word order?

Do I have to use the Alexa group function with the skill in one way or another, or is that just a useless duplication of OH group functionality?

Correct, you need to label each of the items exposed to Alexa with a unique name; otherwise you will run into duplicate issues, as you experienced when two were labelled the same. Also, keep in mind that the uniqueness isn’t just among the OH items but across any smart home devices discovered on your Alexa account via other skills or locally via a Hue bridge.

The Alexa language processing can understand different permutations and even missing words as long as it can still determine a unique device. For example, a device labelled “Kitchen Bar Lights” can be controlled by just asking “Turn on the bar” if there are no other devices on your Alexa account named with the term “bar”.
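For illustration (the item names, labels and channels below are invented), unique labels across the account could look like this:

```
// Each exposed item carries a label that is unique across the whole Alexa account,
// including devices brought in by other skills or a local Hue bridge
Switch Kitchen_Bar_Lights     "Kitchen Bar Lights"     { alexa="PowerController.powerState" }
Switch Kitchen_Ceiling_Light  "Kitchen Ceiling Light"  { alexa="PowerController.powerState" }
Switch Bedroom_Ceiling_Light  "Bedroom Ceiling Light"  { alexa="PowerController.powerState" }
```

With labels like these, “Turn on the bar” stays unambiguous as long as no other device on the account contains the term “bar”.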

There is no room concept at the device level. However, you can create Alexa groups based on rooms and take advantage of the room awareness feature. But it would be limited to certain functionalities, such as turning on/off all the lights of a given room, and not per device as you seem to want.

In the end, as @Dibbler42 mentioned, it is very hard for us to troubleshoot your issue and provide you specific guidelines if you don’t give some details such as an item definition and how you tried to interact with that given device.


Thanks Jeremy!

I think it would be worth documenting that somewhere prominent.
I guess there are many who are as confused as I am.

That’s also very important to point out.
I think my problem with phantom Hue duplicates (which resulted in two OH items/Alexa devices having identical labels) contributed a lot to the situation.

I tried Hue emulation first (and got it to work) until I realized that I cannot control thermostats that way because Hue does not know them.
There’s also HomeKit (to support thermostats).

There are probably more people who face the same confusion and decisions and hit the same pitfall when they “just” want to get Alexa to work for the first time.

I updated the troubleshooting guide based on your feedback related to uniqueness.


On a side note, can I get away without that, too? Can I have rules triggered by Alexa, determine which Echo unit the request came from, and then act based on that (i.e. select the item matching the room)?

Thanks. I’d suggest adding a hint about the HomeKit integration, too.

Please also add your explanations on language from above to the guide.

You can in combination with the Amazon Echo Control binding, as per the tutorial below. But if your intention is just to control the lights of a given room then I would recommend using the Alexa groups.
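A rough sketch of that approach, assuming the Amazon Echo Control binding’s lastVoiceCommand channel (the item names and thing UID below are made up; check the binding documentation for the exact channel):

```
// Item linked to the binding's last-voice-command channel (thing UID is illustrative):
// String Echo_Kitchen_LastVoiceCommand { channel="amazonechocontrol:echo:account1:kitchenEcho:lastVoiceCommand" }

rule "Kitchen Echo routes light commands to the kitchen"
when
    Item Echo_Kitchen_LastVoiceCommand received update
then
    // The updating item tells us which Echo heard the command,
    // so we can act on that room's light
    val cmd = Echo_Kitchen_LastVoiceCommand.state.toString.toLowerCase
    if (cmd.contains("light on")) {
        Kitchen_Ceiling_Light.sendCommand(ON)
    } else if (cmd.contains("light off")) {
        Kitchen_Ceiling_Light.sendCommand(OFF)
    }
end
```

One such item and rule per Echo would give per-room routing, at the cost of parsing the utterance yourself.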

What do you mean? Did you mean to say “Hue emulator integration”? If so, I mentioned it under the term “Philips Hue Bridge” local integration. Actually, what steps did you take to resolve the Hue emulator devices still being discovered? You mentioned a JSON db file but not what action you took.

That part is more a general understanding of how Alexa works and not particular to the skill itself. I would expect a user to grasp that concept when experimenting with voice commands, through trial and error. I think in your case, because you had duplicate issues that you couldn’t comprehend, it forced you into thinking the requests needed to be more rigid.

No, I really meant HomeKit. While I didn’t use it, others may, and I believe it may result in duplicates, too, just like Hue emulation did for me. It also uses some v2 tags that overlap with this skill’s config.

See my post #8

The v2 tag overlap is intentional, so you can use one tag to rule them all. This is why all voice assistant integrations still support HomeKit tags. So I am unsure what kind of duplicates could happen in this use case.

I am aware of that post. I was looking for more detailed steps you took to resolve the matter. Did you delete the file? Did you edit the file? If the latter, what type of edits did you make? The reason I am asking is because I think that part should be added to the troubleshooting guide as I have seen that issue on many occasions for users migrating from the Hue emulator to the skill. Ideally, a feature request should probably be opened for the Hue emulator binding so it properly cleans up its configuration when getting disabled.

Can’t try it myself because I don’t own any HomeKit devices, but what happens if both HomeKit and your skill are enabled? Will those HomeKit-tagged items appear twice?

I’ve (later) added that I deleted the file. I have not encountered any problems with that since.
Valuable hint of yours; I’ve opened https://github.com/openhab/openhab-addons/issues/8043.

FWIW, some of my phantom Hue devices keep returning. So far I could not figure out why.
I see no references in jsondb.

Lighting items including color groups now work fine. Thanks for your help so far.

Unfortunately I’m still having a problem retrieving the temperature from my thermostats. I can set the target temperature, and on doing so it also announces the current temperature (due to itemSensor), but Alexa does not find or query the item that I tagged as the TemperatureSensor, just as used in the example in USAGE.md. I can ask her whatever I want, she does not know it.
What’s wrong with this?

Group Thermotest "Testhansel" <temperature> { alexa="Endpoint.Thermostat" }
Number:Temperature Thermotest_Ist "Temperatur [%.0f °C]" <temperature> (Thermotest,Heizung) { channel="max:thermostat:KMD102xxxx:KMD302yyyy:actual_temp", alexa="TemperatureSensor.temperature" }
Number:Temperature Thermotest_Soll "Solltemperatur [%.0f °C]" <temperature> (Thermotest,Heizung) { channel="max:thermostat:KMD102xxx:KMD302yyyy:set_temp", alexa="ThermostatController.targetSetpoint" [itemSensor="Thermotest_Ist"] }

I wonder whether Endpoint health is implemented? The link below makes me believe that it needs to be for temperature retrieval to work.
If I need to add that, how?
https://developer.amazon.com/de-DE/docs/alexa/device-apis/alexa-endpointhealth.html

Why are you using an item sensor in this case? Target and current temperature shouldn’t return the same state.

Group endpoints are considered a single device on the Alexa end that includes a bunch of functionalities defined by the associated Alexa-enabled items. So your temperature item is accessible via that device.

Alexa, what’s Testhansel Temperature?

Ok, I managed to get Alexa to recognize most of my thermostats now, too, and yes, I omitted the itemSensor.
Turned out that my Echo itself had apparently cached a number of devices. That, at least, was the reason they kept reappearing although I had deleted the metadata file. Resetting the Echo did the trick.
Probably another piece of info worth adding to the docs.
One thing I’m still missing sufficient information on is thermostatMode. The skill docs say you can add [binding="bindingname"], but they explain neither what that does nor what the valid binding names are (except Nest).
Another thing I struggle with is raising/lowering my blinds. Open+Close finally works, but I had to use the generic form rather than “blind” to completely reverse the definition.
What still does not work is raising or lowering by 10%… Alexa’s understanding is erratic here.
I tried to also reverse the Raise/Lower parameters, but the same command sometimes seems to raise the blinds and sometimes it lowers them.
But I wonder why, as she always applies the command to the right blinds, so at least the OH item was properly identified.

Lastly, I’ve just added some scenes. While this works as simple switches, the skill docs say you can eventually add [category="SCENE_TRIGGER"] to switch(?) devices right away (without a rule, I guess), but there again I have not found any explanation of how that is supposed to work.

Did you actually check the documentation related to the actual capability?

I have no way to help you with this one if you don’t give some details such as an item definition and the exact utterances you use to interact with that given device.

I would refer to my first point checking the relevant capability documentation.

Ah. Sorry I missed that, thanks for pointing me there.

As written earlier, I had to replace “blinds” because they have an inverted understanding of open and close.

Rollershutter EG_Wohnen_Jalousie_links "Raffstore links [%d %%]" <rollershutter> (EG_Wohnen,Rolladen,Jalousien) { channel="zwave:device:dddxxxx:node113:blinds_control", alexa="RangeController.rangeValue" [category="INTERIOR_BLIND", friendlyNames="@Setting.Opening", supportedRange="0:100:10", unitOfMeasure="Percent", actionMappings="Close=100,Open=0,Lower=(+10),Raise=(-10)", stateMappings="Closed=1:100,Open=0"] }

The difficulty here is that I don’t speak English to my Alexa but German, so the exact phrasing is of little use to you, I believe. The keywords are different, and sometimes the syntax is, too.
When I say (a literal replacement of the German with English words)
“Alexa, drive Raffstore links up” or “Alexa, Raffstore links higher”, she sometimes moves them up, but sometimes moves them down instead. “She” always takes action, and she even always does so on the right item, but the kind of action she takes is sometimes right and sometimes wrong (erratic, as I called it).
I have to correct myself. She does take the correct direction of action; however, the step sizes are erratic, it can be a 2 or 10 or 30 percent change. Isn’t it always supposed to be 10? Or does it depend on the current value at the time she receives the next command? (Difficult when the blinds are still moving, or have finished but no new value has been returned yet.)
Is there any debug output saying what she understood from my input, i.e. category and action?

Yes, I didn’t understand at that point what a category is or means. That was not explained (only much later on).
I think it would make sense to move the chapter explaining what a category is way up, close to the beginning, plus hyperlink to it from wherever it is referred to.
Most people will not read everything up to the end and will stop when they reach the ‘reference’ part. You don’t expect any more concept information after that point.

Plus, a full-scale example including explanations would be helpful; without that, it’s rather abstract and hence not easy to understand.
So if I issue “set scene XXX”, my voice input is compared against the labels of all those items that EITHER have the SceneController capability OR are no scene but have some other capabilities plus an additional [category="SCENE_TRIGGER"]. Did I get that right?
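I.e., based on my reading of the docs, something like this (item names are made up; the exact capability names would need to be checked against USAGE.md):

```
// A "real" scene, exposed via the scene capability
Switch MovieNight_Scene "Movie Night" { alexa="SceneController.scene" }

// A plain switch surfaced as a scene via the display category override
Switch Party_Mode "Party Mode" { alexa="PowerController.powerState" [category="SCENE_TRIGGER"] }
```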

Can you provide the relevant event logs showing the command received from the Alexa skill along with the state changes? You should get a better understanding if these erratic changes are related to the skill or to your binding.

I am not sure I follow you. You seem to be comparing an interface capability with a display category, which are totally different things. The former defines a device functionality, in other words, how it should be controlled. The latter mostly defines what category the device should be displayed as in the Alexa app. Keep in mind that each interface capability has a default display category, as indicated in the documentation. This means you don’t necessarily have to specify the category on some of the capabilities.
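For example (a sketch; the item name and setup are invented), a smart plug powering a lamp could keep the power capability while overriding its display category:

```
// Capability: PowerController.powerState (how the device is controlled)
// Display category: LIGHT instead of the capability's default,
// so the Alexa app presents the plug as a light
Switch FloorLamp_Plug "Floor Lamp" { alexa="PowerController.powerState" [category="LIGHT"] }
```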

Ok, I was assuming that if I issue a “lower” resulting in command x, then another “lower” should result in a command x-10. But the log shows it’s actually y-10, with y being the most recent value the binding reported back right before it received the next command. So your skill acts correctly.

2020-07-09 23:10:48.349 [ome.event.ItemCommandEvent] - Item 'EG_Wohnen_Jalousie_links' received command 87
2020-07-09 23:10:48.391 [nt.ItemStatePredictedEvent] - EG_Wohnen_Jalousie_links predicted to become 87
2020-07-09 23:10:48.402 [vent.ItemStateChangedEvent] - EG_Wohnen_Jalousie_links changed from 97 to 87
2020-07-09 23:10:50.083 [vent.ItemStateChangedEvent] - EG_Wohnen_Jalousie_links changed from 87 to 89
2020-07-09 23:11:04.839 [ome.event.ItemCommandEvent] - Item 'EG_Wohnen_Jalousie_links' received command 79
2020-07-09 23:11:04.874 [nt.ItemStatePredictedEvent] - EG_Wohnen_Jalousie_links predicted to become 79
2020-07-09 23:11:04.901 [vent.ItemStateChangedEvent] - EG_Wohnen_Jalousie_links changed from 89 to 79
2020-07-09 23:11:06.672 [vent.ItemStateChangedEvent] - EG_Wohnen_Jalousie_links changed from 79 to 88
2020-07-09 23:11:13.589 [ome.event.ItemCommandEvent] - Item 'EG_Wohnen_Jalousie_links' received command 78
2020-07-09 23:11:13.610 [nt.ItemStatePredictedEvent] - EG_Wohnen_Jalousie_links predicted to become 78
2020-07-09 23:11:13.625 [vent.ItemStateChangedEvent] - EG_Wohnen_Jalousie_links changed from 88 to 78
2020-07-09 23:11:15.274 [vent.ItemStateChangedEvent] - EG_Wohnen_Jalousie_links changed from 78 to 81
2020-07-09 23:11:27.202 [ome.event.ItemCommandEvent] - Item 'EG_Wohnen_Jalousie_links' received command 71
2020-07-09 23:11:27.226 [nt.ItemStatePredictedEvent] - EG_Wohnen_Jalousie_links predicted to become 71
2020-07-09 23:11:27.241 [vent.ItemStateChangedEvent] - EG_Wohnen_Jalousie_links changed from 81 to 71
2020-07-09 23:11:29.160 [vent.ItemStateChangedEvent] - EG_Wohnen_Jalousie_links changed from 71 to 79
2020-07-09 23:11:41.007 [ome.event.ItemCommandEvent] - Item 'EG_Wohnen_Jalousie_links' received command 69
2020-07-09 23:11:41.019 [nt.ItemStatePredictedEvent] - EG_Wohnen_Jalousie_links predicted to become 69
2020-07-09 23:11:41.029 [vent.ItemStateChangedEvent] - EG_Wohnen_Jalousie_links changed from 79 to 69

That was my understanding after reading the Display Category explanation, where it says:
When a user asks to turn the lights ON, Alexa looks for devices in that group that have the category “LIGHT” to send the command to.

So either that statement is wrong or very easy to misunderstand, or it’s not just about the display group?

That’s exactly what I was trying to get to. Your binding is adjusting to a different state than the command received. That was what I suspected.

I should have included in my statement that the display category is used in the Alexa app, along with the Alexa-enabled group (aka room awareness) feature, for specific categories (e.g. light or temperature sensor).

Yes, but that quoted sentence will remain wrong, as it is not about display only (that would be a rather “cosmetic” problem not affecting functionality); as written, it refers to what Alexa does when interpreting user input.

I think the whole document would become even more useful if it were a little less focused on/restricted to the skill’s API only: you should spend a larger introductory paragraph with more, and less ambiguous, explanations of how Alexa works and what the concepts w.r.t. this skill are.
If people hit understanding problems, they are totally unaware of where the problem is located:
is it in Alexa voice parsing, Alexa semantic parsing, a failure of the skill to find an OH item, or item(s) found but intent incompatible with item actions?
Extended use case examples are also helpful, such as the one with my blinds… the bottom line there was my implicit assumption that it works based on the target value, while it is in fact using the current value.
I know from my own contributions that if some of the concept information is missing at the beginning, people quickly fill that gap with assumptions and guesses, and that quickly becomes misleading.
Developers like ourselves don’t understand how users could make such assumptions, and frown at them (and we don’t even name them because they are “so obvious”). But that’s only true for us; for users without enough technical background (like me with the Alexa skill), it’s like a long journey where you get lost just because you took a single wrong turn, but it was so early on that you end up in a completely foreign area.
While you added a Troubleshooting paragraph, that is a little bit like searching for the needle in the haystack… I’m missing instructions on how to get meaningful debug output.
Referring to my blinds issues, it should be something such as “Alexa speech-to-text understood the following words: ‘Raffstore links higher’”, “Alexa understood the following intent: lower the blinds called Raffstore links by the predefined step width” (which she doesn’t know, as only OH does), “Skill identified the following OH items {label} to match the Alexa intent: EG_Wohnen_Jalousie_links {Raffstore links [%d %%]}” and “Skill identified to apply the Lower(-10) action to this item with a current value of XX”.
If output similar to that were available, people would know a lot better what stage in the process they need to look at when troubleshooting. The Alexa response error “code” is a beginning, sure, but not really useful.

We are aware of this concern. The initial intent with the current skill was to provide as much customization to the user as possible by giving access to the full extent of the Alexa Smart Home API. Obviously, this made the configuration more complex over time as the API grew in complexity, preventing some of the less technical users, as you said, from easily taking advantage of it.

Rest assured that there are plans to add a layer on top of the current skill that would be more device/function oriented, limited to the most commonly used configurations. That was the intent behind the concept of metadata labels, but it never got pushed to the forefront while development has been focused on supporting all the latest features added to the API in the last year or so.

There isn’t much that can be done here in the way the skill currently operates, and if you look at other voice integrations, the observation is the same. The troubleshooting guide provides the most common errors and potential solutions as a pointer. As I mentioned before, with voice integrations a lot of it is trial and error. It is certainly not perfect, especially for non-English languages, and you also have to keep in mind that we don’t have access to how the Alexa language processing works or what it is doing before it decides to forward a request to the skill or not. And the request the skill receives doesn’t include the original utterance but a structured object based on the Alexa Smart Home API.

As far as debugging the skill itself, as I mentioned above, it is not possible to do so due to the way it currently operates, unless you decide to run your own private instance. However, part of the roadmap is to introduce an actual binding to support deferred messages. At that point, that information would easily be available at the OH server level.
