Gardena: Error 429 Limit Exceeded

@Kai I would be happy to look at this in a couple of weeks' time. Could you perhaps open an issue to remind me?

Sure: [gardena] Error 429 Limit Exceeded · Issue #13137 · openhab/openhab-addons · GitHub

Thanks, @AndrewFG!

Disabling and enabling the Bridge solves this problem only for a short time. After one day, the same problem again…

Same thing here :-/ By now I get the issue at least twice a day. Pausing/disabling and re-enabling the Thing brings it back to work. As a matter of fact, it’s not really usable with openHAB for me right now, as I can’t trust that it will still work the next morning, when I have a rule to water our lawn/garden :frowning:

@AndrewFG Crossing my fingers you will find some time rather soon (before summer ends) :wink:

Yes, I also see this problem on a daily basis now. As the binding has not changed for some weeks, I suppose the problem is due to something changing in the last few days on the server. With luck I might have some time to look deeper into this tomorrow.

Hello all

Maybe this is because of this problem? They have changed the authentication:

See Problems to connect to Gardena cloud service with the Gardena binding

This is fixed.

Ok, I just thought so because I had the same ‘Limit Exceeded’ problem and then tried to set up a new application in the Husqvarna cloud, and since then I was no longer able to connect to the cloud, because a secret key is now required (see the last posts of my link). I‘m on OH 3.3 and there it is not fixed. I‘m currently on holiday and couldn‘t try the new jar file yet, but it looks like it is working with this.

FWIW, interestingly, my account has remained online since I disabled/enabled it 4 days ago.

Mine also went offline again.
I got additional messages:
2022-07-22 06:46:07.207 [WARN ] [.core.thing.binding.BaseThingHandler] - Handler GardenaAccountHandler tried updating the thing status although the handler was already disposed

@Kai @Integer @INT5749 @laursen @openschwall @usambara

Dear All,

I have been looking at this today, and I can see that there is an NPE bug when the account thing fails to initialize. @usambara has also reported a (minor) bug when the thing is disposed.

HOWEVER, these bugs are NOT the actual CAUSE of the error; they are simply fallout of what happens after the Gardena server rejects an API call with ‘Error 429 Limit Exceeded’.

Probably ‘Limit Exceeded’ means that the binding did at some time make too many (possibly bad) API calls within a 24-hour period. But the question is WHY the binding made too many calls???

The current binding code has been running on my system without any such errors since June 28th, yet now, suddenly, after 3 weeks of clean running, we start seeing these errors. Therefore I guess that something did change on the Gardena server side, although I cannot confirm it.

Furthermore, since I had already exceeded the call limit before I enabled logging on my system, I have no log of the API calls that may have exceeded the limit in the first place. I probably have to wait 24 hours before the limit count on the server is reset. So the code / test / release cycle may unfortunately take quite a few days before we can get to the bottom of the problem…

=> So if any of you guys have logs that might give me more immediate clues, can you please post them here?
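
(In case it helps: assuming the standard openHAB Karaf console, detailed logging for the binding can be enabled with the commands below, and reverted afterwards with log:set DEFAULT org.openhab.binding.gardena:)

log:set TRACE org.openhab.binding.gardena
log:tail org.openhab.binding.gardena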

Surprisingly, the binding works right away after disabling and re-enabling, so I wonder whether a limit has really been exceeded at all?

=> If I get some advice on how to capture logs, I’m happy to help.

I’d say the first step towards a fix should be to make sure that the binding continues to retry establishing the connection after seeing a 429 and going OFFLINE. Once it does so, we would learn how long it takes after a 429 to resume operation, and all of us could have a closer look at the logs and the call frequency, without ever having to manually disable/enable the Thing anymore. Wdyt?

I can confirm @Kai’s experience, but on a different scale. Service availability today is better than some days ago, but “limit exceeded” still occurs, ranging from “offline twice a day” to “online more than 24 h”. I see this on 2 openHAB instances with different Gardena accounts, at different times.
Usually, simply deactivating/activating the Bridge Thing is not good enough to get back online, even with 1 h in between. A bundle:restart org.openhab.binding.gardena plus deactivating/activating the Bridge Thing helps, and otherwise an openHAB service or computer restart does.

I think the problem may be that when it encounters some server error (probably initially some error other than 429), it does already continue to retry; but it does so too frequently, so that it finally reaches the call limit. I think perhaps the retries should be rate limited, maybe with a faster cycle initially and then an exponential back-off. But first it would help to find the original cause of the first retry…
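
Just to illustrate the idea, here is a minimal sketch of a rate-limited reconnect with exponential back-off (not the actual binding code; the names BackoffReconnectSketch, scheduleReconnect() and reconnect() are placeholders):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative sketch only: each retry is delayed, and the delay doubles after
// every failed attempt (e.g. an HTTP 429), capped at one hour.
public class BackoffReconnectSketch {

    private static final long INITIAL_DELAY_SECONDS = 30;
    private static final long MAX_DELAY_SECONDS = 3600;

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private long currentDelaySeconds = INITIAL_DELAY_SECONDS;

    public void scheduleReconnect() {
        scheduler.schedule(() -> {
            try {
                reconnect(); // re-authenticate and re-open the web socket
                currentDelaySeconds = INITIAL_DELAY_SECONDS; // success: reset the back-off
            } catch (Exception e) {
                // failure (e.g. 429 Limit Exceeded): back off before the next attempt
                currentDelaySeconds = Math.min(currentDelaySeconds * 2, MAX_DELAY_SECONDS);
                scheduleReconnect();
            }
        }, currentDelaySeconds, TimeUnit.SECONDS);
    }

    private void reconnect() throws Exception {
        // placeholder for the real token refresh + web socket restart
    }
}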

Interesting. The normal communication is via callbacks over SSL web sockets. So I wonder if these web sockets are somehow lingering open, and only eliminated on a full restart. ??
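
If that is the case, one thing to verify would be that disposing the handler really closes both the web socket session and the client, so nothing can linger until a restart. A minimal sketch, assuming the Jetty web socket client that openHAB ships (class and field names are placeholders, not the binding's actual ones):

import org.eclipse.jetty.websocket.api.Session;
import org.eclipse.jetty.websocket.api.StatusCode;
import org.eclipse.jetty.websocket.client.WebSocketClient;

// Hypothetical dispose() sketch: close the open session and stop the client.
public class WebSocketDisposeSketch {

    private final WebSocketClient webSocketClient = new WebSocketClient();
    private Session session; // set when the web socket is opened

    public void dispose() {
        try {
            if (session != null && session.isOpen()) {
                session.close(StatusCode.NORMAL, "Binding disposed");
            }
        } catch (Exception e) {
            // ignore: we are shutting down anyway
        }
        try {
            webSocketClient.stop();
        } catch (Exception e) {
            // ignore
        }
    }
}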

Just a small observation after having a look at recent PRs (for 3.3):

This was introduced in:

So perhaps that would explain why it doesn’t recover from a 429, at least not immediately?

Hi,
here is the log I got before the issue happened; maybe it is helpful.

2022-07-22 15:24:23.554 [DEBUG] [rdena.internal.GardenaSmartWebSocket] - Connected to Gardena Webservice (f1990b03c8-My Garden)
2022-07-22 15:24:23.558 [DEBUG] [rdena.internal.GardenaSmartWebSocket] - Connecting to Gardena Webservice (f1990b03c8-My Garden)
2022-07-22 15:24:23.560 [DEBUG] [covery.GardenaDeviceDiscoveryService] - Stopping Gardena discovery scan
2022-07-22 15:24:23.564 [DEBUG] [covery.GardenaDeviceDiscoveryService] - Starting Gardena discovery scan
2022-07-22 15:24:23.566 [DEBUG] [covery.GardenaDeviceDiscoveryService] - Waiting for finishing Gardena device discovery scan
2022-07-22 15:24:23.591 [DEBUG] [covery.GardenaDeviceDiscoveryService] - Finished Gardena device discovery scan on gateway 'f1990b03c8'
2022-07-22 15:24:23.593 [DEBUG] [covery.GardenaDeviceDiscoveryService] - Gardena device discovery scan finished
2022-07-22 15:24:28.563 [DEBUG] [covery.GardenaDeviceDiscoveryService] - Stopping Gardena discovery scan
2022-07-22 17:24:23.524 [DEBUG] [rdena.internal.GardenaSmartWebSocket] - Connection to Gardena Webservice was closed (f1990b03c8-My Garden): code: 1001, reason: Going away
2022-07-22 17:24:23.527 [INFO ] [ng.gardena.internal.GardenaSmartImpl] - Restarting GardenaSmart Webservice (f1990b03c8-My Garden)
2022-07-22 17:24:23.529 [DEBUG] [rdena.internal.GardenaSmartWebSocket] - Closing Gardena Webservice (f1990b03c8-My Garden)
2022-07-22 17:24:26.533 [DEBUG] [ng.gardena.internal.GardenaSmartImpl] - Gardena API token valid
2022-07-22 17:24:26.536 [DEBUG] [ng.gardena.internal.GardenaSmartImpl] - Token expiration: accessToken: 2022-07-23T15:24:10.252977+02:00[Europe/Berlin], refreshToken: 2022-08-01T15:23:21.252875+02:00[Europe/Berlin]
2022-07-22 17:24:26.798 [WARN ] [ng.gardena.internal.GardenaSmartImpl] - Restarting GardenaSmart Webservice failed (f1990b03c8-My Garden): Error 429 null, {"message":"Limit Exceeded"}, restarting binding
2022-07-22 17:24:26.806 [DEBUG] [ternal.handler.GardenaAccountHandler] - Disposing Gardena account 'f1990b03c8'
2022-07-22 17:24:26.808 [DEBUG] [covery.GardenaDeviceDiscoveryService] - Stopping Gardena discovery scan
2022-07-22 17:24:26.810 [DEBUG] [ng.gardena.internal.GardenaSmartImpl] - Disposing GardenaSmart
2022-07-22 17:24:26.811 [DEBUG] [rdena.internal.GardenaSmartWebSocket] - Closing Gardena Webservice (f1990b03c8-My Garden)
2022-07-22 17:26:29.820 [DEBUG] [ternal.handler.GardenaAccountHandler] - GardenaConfig[connectionTimeout: 10, apiKey: d1a2aeaa-6413-48d5-8900-8d581789nnn]
2022-07-22 17:26:29.823 [DEBUG] [ng.gardena.internal.GardenaSmartImpl] - Starting GardenaSmart
2022-07-22 17:26:29.853 [DEBUG] [ng.gardena.internal.GardenaSmartImpl] - Gardena API login using apiSecret, reason: no token available
2022-07-22 17:26:30.136 [DEBUG] [ng.gardena.internal.GardenaSmartImpl] - Token expiration: accessToken: 2022-07-23T17:26:19.136266+02:00[Europe/Berlin], refreshToken: 2022-08-01T17:25:30.136073+02:00[Europe/Berlin]
2022-07-22 17:26:30.139 [DEBUG] [ng.gardena.internal.GardenaSmartImpl] - Gardena API token valid
2022-07-22 17:26:30.141 [DEBUG] [ng.gardena.internal.GardenaSmartImpl] - Token expiration: accessToken: 2022-07-23T17:26:19.136266+02:00[Europe/Berlin], refreshToken: 2022-08-01T17:25:30.136073+02:00[Europe/Berlin]
2022-07-22 17:26:30.606 [DEBUG] [ng.gardena.internal.GardenaSmartImpl] - Gardena API token valid
2022-07-22 17:26:30.609 [DEBUG] [ng.gardena.internal.GardenaSmartImpl] - Token expiration: accessToken: 2022-07-23T17:26:19.136266+02:00[Europe/Berlin], refreshToken: 2022-08-01T17:25:30.136073+02:00[Europe/Berlin]
2022-07-22 17:26:30.745 [DEBUG] [ng.gardena.internal.GardenaSmartImpl] - Disposing GardenaSmart
2022-07-22 17:26:30.787 [DEBUG] [ternal.handler.GardenaAccountHandler] - Disposing Gardena account 'f1990b03c8'
2022-07-22 17:26:30.789 [DEBUG] [covery.GardenaDeviceDiscoveryService] - Stopping Gardena discovery scan
2022-07-22 17:26:30.790 [DEBUG] [ng.gardena.internal.GardenaSmartImpl] - Disposing GardenaSmart
2022-07-22 17:26:30.791 [WARN ] [ternal.handler.GardenaAccountHandler] - Error 429 null, {"message":"Limit Exceeded"}

@laursen many thanks for the insights. But the issue is actually not ‘how to recover from 429 errors’, but rather ‘how to prevent 429 errors’.

The 429 error comes when the server rejects an API call because it thinks you have made too many calls already. See the log below. And you can only recover from this error when the server decides to accept calls once more. (Apparently sometimes it decides rather soon to accept calls again, and sometimes it takes longer…)

2022-07-22 11:30:10.637 [TRACE] [ng.gardena.internal.GardenaSmartImpl] - >>> GET https://api.smart.gardena.dev/v1/locations, data: null
..
2022-07-22 11:30:10.867 [TRACE] [ng.gardena.internal.GardenaSmartImpl] - <<< status:429, {"message":"Limit Exceeded"}

So, as I said before, the question is: why do we get 429 errors in the first place, and how can we prevent them?

The binding normally makes a couple of API calls on start-up to get the locations and initialize the web socket. During normal operation no further calls are made; instead the binding relies on the server to make call-back notifications over the already established web socket. Normally the web socket should stay open as long as the token remains valid (which should be 24 hours), however in fact the server invalidates the token and drops the web socket every 2 hours, which again requires a couple of API calls to get a new token and open a new web socket. In total, therefore, the binding makes around 50 API calls per 24 hours. And we have seen over many weeks of operation that the server does not complain about that number of calls. However, since the last few days the server IS now complaining about too many API calls. Why?
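
(Roughly, and assuming about 4 calls per 2-hourly token refresh / web socket restart: 12 restarts per day × ~4 calls ≈ 48, plus the handful of start-up calls; that is where the estimate of around 50 calls per 24 hours comes from.)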

Many thanks for the log. It is more or less the same as what I have. So my question is: do you have anything in the logs during the 24 hours prior to this 429 error that would indicate some reason why the server would suddenly report such a 429 error? i.e. is there a lot of other Gardena-related traffic in your log (indeed, is there ANY traffic) prior to the part you quoted just now?