Some understanding problems with the new girl in da house

Why are you using an item sensor in this case? Target and current temperature shouldn’t return the same state.

Group endpoints are treated as a single device on the Alexa end that includes the functionalities defined by each of the associated Alexa-enabled items. So your temperature item is accessible via that device.
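For illustration, a minimal sketch of such a group endpoint (the item names here are hypothetical, and the exact metadata syntax should be double-checked against the skill docs):

```
// Hypothetical thermostat group endpoint with two Alexa-enabled members
Group  gTesthansel         "Testhansel"          { alexa="Endpoint.Thermostat" }
Number Testhansel_Target   "Target Temperature"  (gTesthansel) { alexa="ThermostatController.targetSetpoint" }
Number Testhansel_Current  "Temperature"         (gTesthansel) { alexa="TemperatureSensor.temperature" }
```

With a definition along those lines, the temperature question would be answered from the TemperatureSensor member, with no itemSensor needed.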

Alexa, what’s Testhansel Temperature?

Ok, I managed to get Alexa to recognize most of my thermostats now, too, and yes, I omitted the itemSensor.
Turned out that my Echo itself had apparently cached a number of devices. That, at least, was the reason they kept reappearing although I had deleted the metadata file. Resetting the Echo did the trick.
Probably another piece of info worth adding to the docs.
One thing I’m still missing sufficient information on is thermostatMode. The skill docs say you can add [binding="bindingname"], but they explain neither what that does nor what the valid binding names are (except for Nest).
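For what it’s worth, my working assumption (to be confirmed by the docs) is that the parameter tells the skill which binding-specific raw mode values to map Alexa’s HEAT/COOL/AUTO modes to, along these lines (item name and channel are made up):

```
// Hypothetical: let the skill translate Alexa modes using the Nest value mapping
Number Thermostat_Mode "Thermostat Mode" { channel="nest:thermostat:home:mode",
    alexa="ThermostatController.thermostatMode" [binding="nest"] }
```

If anyone knows the list of supported binding names besides Nest, that would be worth adding to the docs as well.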
Another thing I struggle with is raising/lowering my blinds. Open and Close finally work, but I had to use the generic form rather than “blind” to completely reverse the definition.
What still does not work is raising or lowering by 10%… Alexa’s understanding is erratic here.
I tried to also reverse the Raise/Lower parameters, but the same command sometimes seems to raise the blinds and sometimes lowers them.
But I wonder why, as she always acts on the right blinds, so at least the OH item was properly identified.

Lastly, I’ve just added some scenes. While these work as simple switches, the skill docs say you can eventually add [category="SCENE_TRIGGER"] to switch(?) devices right away (without a rule, I guess), but there again I have not found any explanation of how that is supposed to work.
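For what it’s worth, here is how I read the two variants (item names made up, syntax not verified):

```
// Variant 1: a dedicated scene item using the scene capability
Switch MovieScene  "Movie Scene"   { alexa="SceneController.scene" }

// Variant 2: a plain switch that Alexa treats as a scene trigger
Switch MovieScene2 "Movie Scene 2" { alexa="PowerController.powerState" [category="SCENE_TRIGGER"] }
```

Is that roughly what the [category="SCENE_TRIGGER"] remark in the docs is about?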

Did you actually check the documentation related to the actual capability?

I have no way to help you with this one if you don’t give some details, such as an item definition and the exact utterances you use to interact with that given device.

I would refer you to my first point: check the relevant capability documentation.

Ah. Sorry I missed that, thanks for pointing me there.

As I wrote earlier, I had to replace “blinds” because that type has an inverted understanding of open and close.

Rollershutter EG_Wohnen_Jalousie_links "Raffstore links [%d %%]" <rollershutter> (EG_Wohnen,Rolladen,Jalousien) {
    channel="zwave:device:dddxxxx:node113:blinds_control",
    alexa="RangeController.rangeValue" [category="INTERIOR_BLIND", friendlyNames="@Setting.Opening",
        supportedRange="0:100:10", unitOfMeasure="Percent",
        actionMappings="Close=100,Open=0,Lower=(+10),Raise=(-10)",
        stateMappings="Closed=1:100,Open=0"] }

The difficulty here is that I don’t speak English to my Alexa but German, so the exact phrasing is of little use to you, I believe. The keywords are different, and sometimes the syntax is too.
When I say (a literal replacement of the German with English words)
“Alexa, drive Raffstore links up” or “Alexa, Raffstore links higher”, she sometimes moves them up, but sometimes moves them down instead. “She” always takes action, and she even always acts on the right item, but the kind of action she takes is sometimes right and sometimes wrong - erratic, as I called it.
Have to correct myself. She does take the correct direction, but the steps are erratic: the change can be 2, 10 or 30 percent. Isn’t it always supposed to be 10? Or does it depend on the current value at the time she receives the next command? (difficult when the blinds are still moving, or have finished but no new value has been reported yet)
Is there any debug output saying what she understood from my input, i.e. the category and action?

Yes, I didn’t understand at that point what a category is or means. That was not explained (only much later on).
I think it would make sense to move the chapter explaining what a category is way up, close to the beginning, plus hyperlink to it from wherever it is referred to.
Most people will not read everything to the end; they stop when they reach the ‘reference’ part. You don’t expect any more conceptual information after that point.

Plus, a full-scale example including explanations would be helpful; without that it’s rather abstract, hence not easy to understand.
So if I issue “set scene XXX”, my voice input is compared against the labels of all those items that EITHER have the SceneController capability OR no scene capability but some other capabilities plus an additional [category="SCENE_TRIGGER"]. Did I get that right?

Can you provide the relevant event logs showing the command received from the Alexa skill along with the state changes? You should then get a better understanding of whether these erratic changes are related to the skill or to your binding.

I am not sure I follow you. You seem to be comparing an interface capability with a display category, which are totally different. The former defines a device functionality, in other words, how it should be controlled. The latter mostly defines what category it should be displayed as in the Alexa app. Keep in mind that each interface capability has a default display category, as indicated in the documentation. This means you don’t necessarily have to specify the category on some of the capabilities.
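As a quick illustration of the distinction (hypothetical item):

```
// PowerController.powerState is the capability: it defines that the item
// is controlled via on/off. Its default display category would be SWITCH;
// setting category="LIGHT" only changes what Alexa displays/treats it as.
Switch Lamp "Living Room Lamp" { alexa="PowerController.powerState" [category="LIGHT"] }
```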

Ok, I was assuming that if I issue a “lower” resulting in command x and then another “lower”, it should result in command x-10. But the log shows it’s actually y-10, with y being the most recent value the binding reported back right before it received the next command. So your skill acts correctly.

2020-07-09 23:10:48.349 [ome.event.ItemCommandEvent] - Item 'EG_Wohnen_Jalousie_links' received command 87
2020-07-09 23:10:48.391 [nt.ItemStatePredictedEvent] - EG_Wohnen_Jalousie_links predicted to become 87
2020-07-09 23:10:48.402 [vent.ItemStateChangedEvent] - EG_Wohnen_Jalousie_links changed from 97 to 87
2020-07-09 23:10:50.083 [vent.ItemStateChangedEvent] - EG_Wohnen_Jalousie_links changed from 87 to 89
2020-07-09 23:11:04.839 [ome.event.ItemCommandEvent] - Item 'EG_Wohnen_Jalousie_links' received command 79
2020-07-09 23:11:04.874 [nt.ItemStatePredictedEvent] - EG_Wohnen_Jalousie_links predicted to become 79
2020-07-09 23:11:04.901 [vent.ItemStateChangedEvent] - EG_Wohnen_Jalousie_links changed from 89 to 79
2020-07-09 23:11:06.672 [vent.ItemStateChangedEvent] - EG_Wohnen_Jalousie_links changed from 79 to 88
2020-07-09 23:11:13.589 [ome.event.ItemCommandEvent] - Item 'EG_Wohnen_Jalousie_links' received command 78
2020-07-09 23:11:13.610 [nt.ItemStatePredictedEvent] - EG_Wohnen_Jalousie_links predicted to become 78
2020-07-09 23:11:13.625 [vent.ItemStateChangedEvent] - EG_Wohnen_Jalousie_links changed from 88 to 78
2020-07-09 23:11:15.274 [vent.ItemStateChangedEvent] - EG_Wohnen_Jalousie_links changed from 78 to 81
2020-07-09 23:11:27.202 [ome.event.ItemCommandEvent] - Item 'EG_Wohnen_Jalousie_links' received command 71
2020-07-09 23:11:27.226 [nt.ItemStatePredictedEvent] - EG_Wohnen_Jalousie_links predicted to become 71
2020-07-09 23:11:27.241 [vent.ItemStateChangedEvent] - EG_Wohnen_Jalousie_links changed from 81 to 71
2020-07-09 23:11:29.160 [vent.ItemStateChangedEvent] - EG_Wohnen_Jalousie_links changed from 71 to 79
2020-07-09 23:11:41.007 [ome.event.ItemCommandEvent] - Item 'EG_Wohnen_Jalousie_links' received command 69
2020-07-09 23:11:41.019 [nt.ItemStatePredictedEvent] - EG_Wohnen_Jalousie_links predicted to become 69
2020-07-09 23:11:41.029 [vent.ItemStateChangedEvent] - EG_Wohnen_Jalousie_links changed from 79 to 69
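To make that concrete, here is a minimal Python sketch (my own illustration, not actual skill code) of the rule the log suggests: each relative command is computed from the most recently reported state, clamped to the 0-100 range:

```python
def next_command(last_reported_state: int, step: int = -10) -> int:
    """Apply a relative adjustment to the most recently reported state
    (not to the previous command's target) and clamp to 0..100."""
    return max(0, min(100, last_reported_state + step))

# States the binding reported right before each "lower" command in the log:
reported_states = [97, 89, 88, 81, 79]
commands = [next_command(s) for s in reported_states]
print(commands)  # [87, 79, 78, 71, 69], matching the received commands
```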

That was my understanding after reading the Display Category explanation, where it says:
When a user asks to turn the lights ON, Alexa looks for devices in that group that have the category “LIGHT” to send the command to.

So either that’s wrong, or it’s thoroughly open to misunderstanding, or it’s not just about the display group?

That’s exactly what I was trying to get to. Your binding is adjusting to a different state than the command received. That was what I suspected.

I should have included in my statement that the display category is also used in the Alexa app, along with the Alexa-enabled group (aka room awareness) feature, for specific categories (e.g. light or temperature sensor).

Yes, but that quoted sentence will remain wrong, as it is not about display only (that would be a rather “cosmetic” problem not affecting functionality); as it is written, it refers to what Alexa does when interpreting user input.

I think the whole document would become even more useful if it got a little less focused on/restricted to the skill’s API only: you should spend a larger introductory paragraph with more, and less ambiguous, explanations of how Alexa works and what the concepts are w.r.t. this skill.
If people hit understanding problems, they are totally unaware of where the problem is located:
is it in Alexa’s voice parsing, Alexa’s semantic parsing, a failure of the skill to find an OH item, or item(s) found but the intent incompatible with the item’s actions?
Extended use-case examples are also helpful, such as the one with my blinds… the bottom line there was my implicit assumption that it works based on the target value, when it in fact uses the current value.
I know from my own contributions that if some of the conceptual information is missing at the beginning, people quickly fill that gap with assumptions and guesses, and that quickly results in being misled.
Developers like ourselves don’t understand, and frown at, how users could make such assumptions (and we don’t even name them because they are “so obvious”). But they are obvious only to us; to users without enough technical background (like me with the Alexa skill), it’s like a long journey where you get lost just because you took a single wrong turn, but it was so close to the beginning that you end up in a completely foreign area.
While you added a Troubleshooting paragraph, using it is a little bit like searching for the needle in the haystack… I’m missing instructions on how to get meaningful debug output.
Referring to my blinds issue, it should be something such as “Alexa speech-to-text understood the following words: ‘Raffstore links higher’”, “Alexa understood the following intent: lower the blinds called ‘Raffstore links’ by the predefined step width” (which she doesn’t know, as only OH does), “Skill identified the following OH item {label} to match the Alexa intent: EG_Wohnen_Jalousie_links {Raffstore links [%d %%]}”, and “Skill decided to apply the Lower(-10) action to this item with a current value of XX”.
If output similar to that was available, people would know a lot better which stage in the process they need to look at when troubleshooting. The Alexa error “code” in the response is a beginning, sure, but not really useful.

We are aware of this concern. The initial intent with the current skill was to provide as much customization to the user as possible by giving access to the full extent of the Alexa Smart Home API. Obviously, that made the configuration more complex over time as the API grew in complexity, preventing some of the less technical users, as you said, from easily taking advantage of it.

Rest assured that there are plans to add a layer on top of the current skill that would be more device/function oriented, limited to the most commonly used configurations. That was the intent behind the concept of metadata labels, but it never got pushed to the forefront while development has been focused on supporting all the latest features added to the API over the last year or so.

There isn’t much that can be done here in the way the skill currently operates, and if you look at other voice integrations, the observation is the same. The troubleshooting guide provides the most common errors and potential solutions as a pointer. As I mentioned before, with voice integrations a lot of it is trial and error. It is certainly not perfect, especially for non-English languages, and you also have to keep in mind that we don’t have access to how the Alexa language processing works or what it does before it decides whether to forward a request to the skill. And the request the skill receives doesn’t include the original utterance, but a structured object based on the Alexa Smart Home API.

As far as debugging the skill itself, as I mentioned above, it is not possible to do so due to the way it currently operates, unless you decide to run your own private instance. However, part of the roadmap is to introduce an actual binding to support deferred messages. At that point, that information would easily be available at the OH server level.


Hi All,
I’m experiencing the same “duplication” problem with my setup (in Italian).
I have 2 items with different labels
“Luce della cucina” (Kitchen light)
“Tapparella della cucina” (Kitchen blinds)

The Alexa app shows the right “display category” for each item.
The devices work perfectly when actioned from the Alexa app.
When I ask to turn the kitchen light on, the light is correctly identified and turned on.
When I ask to turn the kitchen light off, Alexa says there is more than one item with that name.

I opened a ticket with Alexa customer support and went through an hour of tests. At the end they opened an “internal ticket” (a bug?), but the response was that the error is with the openHAB skill.

If I understood the smart home skill model correctly, the responsibility to correctly identify the device to be switched on/off lies with the Alexa service, not with the skill. The skill provides the label and the display category, which should be enough to identify the right device among the two I have, since they are of different display categories (both supported) and have different labels.

The same problem applies to two other pairs of devices in my configuration.
Changing the label fixes the issue (which shows that the device is working).

To me, this is an Alexa bug.

I’m running on OH2.5.3.

As mentioned previously, duplicate errors are issued on the Alexa side prior to forwarding the request to the skill. So I am unsure how Alexa customer support can determine that it is the skill’s fault. I assume that during the troubleshooting you did with them, they had you delete all the devices in your account and rediscover them. If the concern is with the skill’s discovery process, it is important to note that the skill uses the item labels configured on the OH side.

All in all, with the information you provided, I would say your issue lies somewhere between your Alexa-related OH item configuration, the way you are formulating your voice commands, and the devices discovered on your Alexa account.

At this point, it is hard to say more if you don’t provide additional information, such as your item definitions and the exact utterances (no need to translate) as they appear in the voice transcript history of your Alexa account.