I have multiple openHAB instances that work together. To check whether openHAB is online on another server, I run the following code:
```shell
# Wait until the server has started
logger -t echo "waiting for OpenHAB Server..."
while ! timeout 0.2 ping -c 1 -n $SERVER_IP &> /dev/null
do
    logger -t echo "."
done
logger -t echo "Server is back online"

# Wait until the HTTPS service is running
logger -t echo "waiting for OpenHAB startup..."
until curl --insecure -s -o /dev/null --head $PROTOCOL://$SERVER_IP
do
    logger -t echo "."
    sleep 5
done
logger -t echo "OpenHAB is back online"
```
First I wait until the server is reachable, since it may not be started yet, or may be in the middle of starting or restarting. After that I make an HTTP request. While openHAB is initializing, you first receive 404 (page not found) and 500 (internal server error) responses. After that you receive 200 OK, or in my case 401 because of the insecure HTTPS request. That only means openHAB is starting, though; during this startup, sitemaps, items, rules, etc. are still being loaded.
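To tell "web service up" apart from "still initializing", the status code itself can be checked instead of just whether curl succeeds. This is only a sketch under the assumption that 200 or 401 means the UI is answering normally; the function name `is_http_up` is my own placeholder, and `SERVER_IP`/`PROTOCOL` come from the surrounding script:

```shell
#!/bin/bash
# Sketch: poll until curl reports a status code that indicates the web
# service answers normally. 404/500 during init keep us waiting; 000 means
# no connection at all.

is_http_up() {
    local status
    status=$(curl --insecure -s -o /dev/null -w '%{http_code}' --head "$1")
    case "$status" in
        200|401) return 0 ;;  # answers normally (401 = auth required)
        *)       return 1 ;;  # 000, 404, 500, ... -> still starting
    esac
}

# Usage, with SERVER_IP and PROTOCOL set:
#   until is_http_up "$PROTOCOL://$SERVER_IP"; do sleep 5; done
```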
I’m currently working with a sleep command here, but it’s imprecise and it doesn’t always work.
What I want to achieve is to run a command that executes a rule after the whole system has started. Sometimes it works, but because of the delay it tries to execute the rule too early.
If you look inside the Karaf Console and use
log:tail
you will notice that the models are being loaded. So maybe a good solution would be to get a notification once all models have been loaded.
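Lacking a proper "all models loaded" event, one workaround is to count the "Loading model" lines in the log and treat startup as finished once the count stops growing. This is only a sketch: the log path and the exact message text are assumptions based on a default openHAB 2 install, and the grace period is a guess, not a measured value:

```shell
#!/bin/bash
# Sketch: consider startup "done" once no new "Loading model" line has
# appeared in the log for a grace period. Log path, message text, and
# grace period are assumptions; adjust them to your install.
LOG="${LOG:-/var/log/openhab2/openhab.log}"
GRACE="${GRACE:-30}"

count_models() {
    # Number of model files (.items, .rules, ...) reported as loaded so far.
    local n
    n=$(grep -c "Loading model" "$LOG" 2>/dev/null)
    echo "${n:-0}"
}

wait_for_models() {
    local last=-1 now
    while true; do
        now=$(count_models)
        if [ "$now" = "$last" ]; then
            # No new model within one grace period: assume loading finished.
            return 0
        fi
        last="$now"
        sleep "$GRACE"
    done
}
```

This cannot distinguish "finished" from "stalled", but it is closer to the real signal than a fixed sleep.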
What I have also done on my openHAB instances was to follow a tutorial that shows me the uptime of my server and of openHAB. The two values differ, especially when you restart the openhab2.service.
So I have a question similar to this topic and want to know whether all config files have been loaded. Yes, I have used the word models for it, because if you have a look inside the Karaf Console, they talk about models.
In my opinion, the "system" openHAB is booted only when not just the web service has started but all these conf files have been loaded as well. Before that it is not functional. What I could already determine is that you can control items via the sitemap, e.g. turn lamps on and off, before the rules are actually loaded. Only after they have been loaded can the corresponding rules be triggered. Self-explanatory.
Of course, if you have several systems running together, it is annoying if some items or rules have not been loaded yet. It can lead to one of the systems crashing with an exception and then having to be restarted.
To be able to boot all of them cleanly, I would like a check that verifies not only that all systems are running, but that they have actually loaded all their models. I hope you understand the difference. I have not shown my whole bash script; of course I also configure the parameters for the IPs, and then I run
systemctl start openhab2.service
once the script thinks that the other openHAB server is running. Now of course you could say that with such a delayed start it should be ready after the other one has completely started. That would be nice. But the items and rules take different amounts of time to load, and there are 8 different devices in total. Because of the different performance, this check is relatively reliable, but sometimes one or the other device needs a little longer and then it doesn't work quite so easily.
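Put together, the gated start could look roughly like this. A sketch only: `wait_for_host` mirrors the ping loop from the script at the top of this post, `EXTRA_DELAY` is a made-up margin rather than a measured value, and the curl check from that script would slot in where the comment is:

```shell
#!/bin/bash
# Sketch: gate the local systemctl start on the other instance being
# reachable. EXTRA_DELAY is a crude placeholder until there is a real
# "all models loaded" signal.

wait_for_host() {
    while ! timeout 0.2 ping -c 1 -n "$1" > /dev/null 2>&1; do
        sleep 1
    done
}

start_local_openhab() {
    wait_for_host "$SERVER_IP"
    # ...here the curl loop from the script at the top would run...
    sleep "${EXTRA_DELAY:-60}"   # crude margin, not a reliable signal
    systemctl start openhab2.service
}

# Usage: SERVER_IP=192.168.1.10 start_local_openhab
```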
Of course I was already working with SSH, so maybe something like checking the process, as in the first link, could work better than my HTTPS request.
But a running process would also only mean that the service has started, not that all models have been loaded already.
Maybe people here have good ideas or even solutions.
Thanks in advance.