Not on the skill side, since this is happening on the Amazon side. You can take a look at your cloud connector logs, but I am not very familiar with how to troubleshoot that side.
Other than double-checking that the client credentials match and setting up your skill account linking again in case you included a typo, there is not much else I can recommend.
Thank you for sharing details with us.
Your authorization server uses a Let's Encrypt certificate, which is causing this issue. I would suggest you use a certificate from any other provider on your authorization server to fix this issue.
Please follow the above suggestion and let me know if you have any further questions.
Due to the current global COVID situation, you may experience a delay in case resolution. As the safety and wellbeing of our team is paramount, we appreciate your patience during this time.
Best regards,
I used this certificate just fine with OH2.5 and a slightly older version of the skill, and I'd imagine myopenhab.org is using the same thing. Any thoughts? This doesn't seem right.
The Alexa authorization requirements were actually updated a while back, explicitly excluding SSL certificates signed by Let's Encrypt, but I haven't seen it being enforced with smart home skills so far. If that were the case, users of the official skill would experience the same issue, unless they decided to enforce it only for new skills.
Are you sure your certificate is still valid? At which stage of the skill linking process do you get this error?
So I did some further testing, and there seem to be some additional undocumented security requirements added for new skills which don't seem to be enforced for existing skills yet.
I initially ran into the same issue you mentioned above. However, after setting the SSL ciphers to ALL on my NGINX server for my openHAB domain, the skill was successfully linked using my Let's Encrypt certificate.
Moreover, it seems that you can revert the supported SSL ciphers to their original, stricter configuration, as Amazon appears to be doing some caching on their end. At this point, it's hard to pinpoint which exact SSL ciphers Amazon is looking for.
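For reference, the temporary NGINX change described above comes down to one directive in the relevant server block. This is only a sketch: the domain, certificate paths, and the stricter cipher list are placeholders, not the actual configuration used.

```nginx
server {
    listen 443 ssl;
    server_name openhab.example.com;  # placeholder domain

    # placeholder Let's Encrypt paths
    ssl_certificate     /etc/letsencrypt/live/openhab.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/openhab.example.com/privkey.pem;

    # Temporarily accept every cipher OpenSSL knows, so the Alexa
    # account-linking handshake cannot fail on cipher selection:
    ssl_ciphers ALL;

    # Once the skill is linked, revert to a strict list, for example:
    # ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256;
}
```

After editing, reload NGINX (`nginx -s reload`) before retrying the account linking.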
If you can't get it to work, the other solution is to go with the Amazon LWA OAuth2 provider option and add your Cloud Connector account username and password to the skill configuration file, as described in the readme file.
Yes, and we will monitor whether any action needs to be taken for the official skill. As I mentioned above, it's not clear yet if the documented restriction is enforced, as I was still able to get an LE certificate working on a newly created skill, granted it takes more fiddling now. Also, I don't think Amazon will want to apply that restriction retroactively, as it may cause some unnecessary drama for them with existing working skills listed in their catalog.
Anyway, the workaround for private skill instances is quite easy for anyone to implement if need be.
With regard to the ciphers, wouldn't it be possible to sniff the network traffic during the SSL negotiation? Client and server exchange a list of supported ones, as far as I remember.
Actually, it can be logged. I just did another test, and I can see the Amazon OAuth2 request uses the TLSv1.2/ECDHE-RSA-AES256-GCM-SHA384 SSL cipher. The skill linking was still successful using an LE certificate.
Oddly, my NGINX SSL configuration already supported that cipher prior to doing all the fiddling. So I am not sure what's going on on the Amazon side at this point, and I can't replicate the original issue anymore. It could have been a temporary thing as well.
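If you want to check this against your own server, a sketch using the stock `openssl` tools (the hostname is a placeholder): `openssl ciphers` confirms the cipher string is valid locally, and `openssl s_client` attempts to negotiate it against a live endpoint.

```shell
# Show the details of the cipher Amazon negotiated (protocol, key exchange,
# authentication, encryption); this fails if the local OpenSSL doesn't know it.
openssl ciphers -v 'ECDHE-RSA-AES256-GCM-SHA384'

# Against a live server (placeholder hostname), force that single cipher and
# check the handshake succeeds; look for "Cipher is ECDHE-RSA-AES256-GCM-SHA384"
# in the output:
# openssl s_client -connect openhab.example.com:443 \
#     -tls1_2 -cipher ECDHE-RSA-AES256-GCM-SHA384 < /dev/null
```

If the `s_client` handshake fails with that single cipher forced, the server's `ssl_ciphers` list is the first thing to check.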
Hi Jeremy, still struggling with this. Amazon says I need to remove the extra languages; where in the skill configuration is the language configuration file? i.e. so I can remove ALL languages bar English (AU).
If your issue is still a failure during the account linking process and you moved to using the Amazon LWA OAuth2 server instead, then it shows that your initial issue wasn't related to using an LE certificate as Amazon stated.
Have you asked them to provide the reasoning behind that? From my experience, it has no impact, especially since you can see the deployed skill under your account and try to activate it. Usually, it's a matter of the Lambda function being available in the AWS region related to your language. Did you set your region to us-west-2?
If you are keen to do so, the only way is to go into the Alexa Developer Console for that skill and select the language settings at the bottom of the language dropdown. Keep in mind that each time you deploy the skill with the ASK CLI tool, the languages will be re-added.
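The languages get re-added because they are listed in the skill manifest that the ASK CLI deploys. As a sketch, based on the standard ASK manifest layout (the file name `skill.json` and the field values below are placeholders, not this skill's actual content), keeping only `en-AU` under `publishingInformation.locales` would look like:

```json
{
  "manifest": {
    "publishingInformation": {
      "locales": {
        "en-AU": {
          "name": "openHAB",
          "summary": "Placeholder summary for the en-AU locale"
        }
      }
    }
  }
}
```

Any locale keys removed from that object would no longer be deployed, but they come back whenever the repository's manifest is deployed unmodified.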
At this point, if this doesnât fix your issue, I would recommend using the official skill if you canât get it working. As I mentioned before, there are major changes to the skill coming down the pipes including changes to the deployment process.
No, I haven't gotten to the LWA part yet, working on languages. Yes, I've set the region to us-west-2. I've totally rebuilt the cloud server, so that took time.
I can't get past this, using ask init from inside the /openhab-alexa directory that was cloned:
ubuntu@ip-172-31-37-187:/$ ask init
This utility will walk you through creating an ask-resources.json file to help deploy
your skill. This only covers the most common attributes and will suggest sensible
defaults using AWS Lambda as your endpoint.
This command is supposed to be running at the root of your ask-cli project, with the
Alexa skill package and AWS Lambda code downloaded already.
- Use "ask smapi export-package" to download the skill package.
- Move your Lambda code into this folder depending on how you manage the code. It can
be downloaded from AWS console, or git-cloned if you use git to control version.
This will utilize your 'default' ASK profile. Run with "--profile" to specify a
different profile.
Press ^C at any time to quit.
? Skill Id (leave empty to create one):
? Skill package path: /openhab-alexa
? Lambda code path for default region (leave empty to not deploy Lambda): /openhab-alexa/lambda
? Use AWS CloudFormation to deploy Lambda? Yes
? Lambda runtime: nodejs12.x
? Lambda handler: index.handler
Writing to /ask-resources.json:
{
"askcliResourcesVersion": "2020-03-31",
"profiles": {
"default": {
"skillMetadata": {
"src": "/openhab-alexa"
},
"code": {
"default": {
"src": "/openhab-alexa/lambda"
}
},
"skillInfrastructure": {
"type": "@ask-cli/cfn-deployer",
"userConfig": {
"runtime": "nodejs12.x",
"handler": "index.handler"
}
}
}
}
}
Writing to /.ask/ask-states.json:
{
"askcliStatesVersion": "2020-03-31",
"profiles": {
"default": {
"skillId": "",
"skillInfrastructure": {
"@ask-cli/cfn-deployer": {
"deployState": {}
}
}
}
}
}
? Does this look correct? Yes
[Error]: EACCES: permission denied, open '/ask-resources.json'
ubuntu@ip-172-31-37-187:/$
Seems it's an ASK CLI V2 issue; rolled back to V1, which seemed to go OK. Then ran ask deploy inside the git directory.
I don't know. It is most likely an issue with your environment. You can use the debug option to figure it out.
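One plausible environment issue, judging from the prompt in the pasted output (`ubuntu@ip-172-31-37-187:/$`): `ask init` writes `ask-resources.json` and the `.ask/` folder into the current working directory, and the `ubuntu` user cannot create files at the filesystem root, which would produce exactly that EACCES error. A quick sanity check, as a sketch:

```shell
# `ask init` creates ask-resources.json and .ask/ in the current directory,
# so that directory must be writable by the current user.
if [ -w . ]; then
    echo "current directory is writable"
else
    echo "not writable: cd into your project checkout first" >&2
fi
```

Running the command from inside the cloned project directory (e.g. `cd ~/openhab-alexa && ask init`, assuming that is where the clone lives) rather than from `/` should avoid the permission error.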
Anyway, as I mentioned previously, you should probably use the official skill. Deploying a private skill is technically targeted towards developers, not end users of the skill.
Sure, but that means putting up with poor performance in terms of speed when using the apps through myopenhab.org. Whilst Alexa is fast using myopenhab.org, the app speed is terrible.