sendHttpGetRequest() and packet loss

My openHAB system sends notifications over the Internet using HTTP GET. It can happen that packets are lost, which means the entire notification is lost. In the log I see error messages like this:

[ERROR] [.smarthome.model.script.actions.HTTP] - Fatal transport error: java.util.concurrent.ExecutionException: java.io.EOFException

Is there a way to queue messages which are sent using sendHttpGetRequest(), and re-send them later?

That suggests there’s a problem with the server or connection. You’ll want to wrap your call in a try / catch block.

var retryCount = 0
var String res = null
// "url" stands for your notification endpoint
while (retryCount < 3 && res === null) {
    retryCount = retryCount + 1
    try {
        res = sendHttpGetRequest(url)
    } catch (Exception e) {
        logWarn("notify", "HTTP GET failed (attempt " + retryCount + "): " + e.getMessage)
    }
}

Or something like that :slight_smile:
(on phone, pseudo code…)

Well, the endpoint itself is alive, I control that server, and there is nothing in the log, not even a connection request. So the packets are lost on the way …

I can try that loop, but was hoping for something more queue-like. Looks like I have to write something …

Even when connecting to your own server, there are many unknowns in the network path, so it’s not uncommon to build error handling and retry code into calls like this.

Ok, accepted. That’s what I am asking. I can’t find any good solutions for error handling and retry. Your loop is an improvement, but it will block the rule until the loop finishes and runs into the final error, or until the request succeeds. For a notification, that’s not ideal.

You are right, I was not planning to sleep for a long time. I’m well aware of the limited thread resources. Also I don’t want to block the rule just for a notification.

Still, the timer seems to be only a second choice, especially if a network problem lasts longer than just a few seconds; that might create dozens or hundreds of timers.

No, the whole point is you have one timer. You try the http GET. If it fails, you reschedule the one timer to try again later until it works or you’ve tried too many times and give up.

There is only one Timer and it is only using resources when it is actively running, unlike the while loop approach which will consume the runtime thread waiting.
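The single rescheduling Timer described above can be sketched in Rules DSL roughly like this. The trigger Item, endpoint URL, retry interval, and attempt limit are all assumptions, and this relies on sendHttpGetRequest returning null when the call fails:

```
var Timer retryTimer = null
var int attempts = 0

rule "Send notification with retry"
when
    Item MyAlarm changed to ON   // hypothetical trigger
then
    attempts = 0
    // fire immediately; on failure the one timer reschedules itself
    retryTimer = createTimer(now, [ |
        val res = sendHttpGetRequest("http://example.com/notify")
        if (res === null && attempts < 5) {
            attempts = attempts + 1
            retryTimer.reschedule(now.plusSeconds(30))
        } else {
            retryTimer = null   // either it worked or we gave up
        }
    ])
end
```

Between retries the timer consumes no rule thread; it only occupies a thread for the instant the body runs.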

There is one timer - per failed request. If the openHAB system is offline, this will add up.

Since sendHttpGetRequest is an Action initiated from an openHAB Rule, if openHAB is down then there will be no requests.

If you meant that if the service that openHAB is calling is down, well that is why I said " or you’ve tried too many times and give up." That should limit the number of Timers that get created. It’s far better to have 100 or 1000 Timers sitting around waiting to run than to have just five Rules tied up with a while loop preventing any other Rules from running.

Using a Timer is a second choice if you actually have a first choice. I see no other choice here that is any better.

You could potentially try to apply Design Pattern: Gate Keeper (see the Complex Example using Queues) but that’s really just a more formalized example of the what I’m proposing here.

My fault, I meant when the Internet connection at home is down and no notifications can be sent. Obviously, if openHAB itself is down, nothing will happen.

I was thinking of having a relatively short timeout, catching the error, and, if the notification can’t be sent, writing it into a file-based queue. An openHAB cron job (maybe every 30 seconds) picks up the queue and tries to send the notifications. This is a bit more complex, but it will not use a timer for every notification, and notifications will survive an openHAB restart.

In theory I could write every notification into the queue, but then I’d have a delay for every notification, as the cron job only picks them up later.

Sounds like a better approach to me actually; write your “queue processor” so that it kicks off when the head of the queue is loaded, rather than cron. If that fails immediately, then it can set up timer based retries.

Out of curiosity, how would you hold the queue? String with pipe separated value pairs for item:command? Better way?

I can see concurrency between loading, reading, and removing entries getting in the way with the above.

Edit:
Dynamically add new items to a group?
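One way to hold the queue without the locking worries is a thread-safe Java collection, combined with the “kick off the processor when something is enqueued” idea. A rough Rules DSL sketch; the trigger Item, URL, and retry interval are assumptions, and this again relies on sendHttpGetRequest returning null on failure:

```
import java.util.concurrent.ConcurrentLinkedQueue

val ConcurrentLinkedQueue<String> notifyQueue = new ConcurrentLinkedQueue<String>()
var Timer queueTimer = null

rule "Enqueue notification"
when
    Item MyAlarm changed to ON   // hypothetical trigger
then
    notifyQueue.add("http://example.com/notify?msg=alarm")
    if (queueTimer === null) {   // kick off the processor if it is idle
        queueTimer = createTimer(now, [ |
            val url = notifyQueue.peek
            if (url === null) {
                queueTimer = null                           // queue drained
            } else if (sendHttpGetRequest(url) !== null) {
                notifyQueue.poll                            // sent, drop it
                queueTimer.reschedule(now)                  // next entry right away
            } else {
                queueTimer.reschedule(now.plusSeconds(30))  // retry later
            }
        ])
    }
end
```

This works off the queue as fast as requests succeed and only backs off while the connection is down; it is not persistent across restarts, though.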

I think I will go with executeCommandLine() and write every request into a unique temp file in a spool directory. This way I don’t have to rely on any external service like a database or in-memory store. Using Items or variables as a queue seems complicated, as I can’t hold a lock on them (without much overhead), and removing entries from the middle of the string/queue seems complicated as well.
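A rough sketch of that spool-directory idea in Rules DSL. The spool path, trigger Item, URL, and cron schedule are assumptions; note that executeCommandLine syntax differs between openHAB versions (here the older @@ argument separator, and the two-argument form that returns the command’s output):

```
rule "Spool a notification"
when
    Item MyAlarm changed to ON            // hypothetical trigger
then
    // one unique file per request; /var/spool/openhab is an assumption
    val name = now.millis + ".req"
    executeCommandLine("/bin/sh@@-c@@echo http://example.com/notify > /var/spool/openhab/" + name)
end

rule "Process the notification spool"
when
    Time cron "0/30 * * * * ?"            // every 30 seconds
then
    val listing = executeCommandLine("/bin/sh@@-c@@ls /var/spool/openhab", 5000)
    if (listing !== null && !listing.trim.isEmpty) {
        listing.split("\n").forEach [ name |
            val url = executeCommandLine("/bin/sh@@-c@@cat /var/spool/openhab/" + name, 5000)
            if (sendHttpGetRequest(url.trim) !== null) {
                // only remove the spool file once the request got through
                executeCommandLine("/bin/sh@@-c@@rm /var/spool/openhab/" + name, 5000)
            }
        ]
    }
end
```

Since each request lives in its own file until it is successfully sent, pending notifications survive an openHAB restart or power loss.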

But then again, I was hoping someone already solved this problem in a usable way …

@rossko57 I actually like your idea, and think I will implement it this way: queue everything, and then kick off the processor.

That’s what the third example in the gatekeeper DP demonstrates. The big difference here is you would only want to delay if the call failed; otherwise, work off the messages as fast as they come in. When things are working there shouldn’t be a noticeable delay.

The “Gatekeeper” example is ok, but not persistent - notifications won’t survive a power loss of the openHAB system.

Here is a tool which takes over the sending of the requests (from sendHttpGetRequest) and spools them in case they can’t be sent. Demo rules and installation instructions are included.