There was a problem in early milestone releases of OH 4, such as M2, where start level 100 was never reached if a Thing didn't come ONLINE. That was fixed in subsequent releases.
Note that snapshots and milestone releases are not really intended for long-term use. It's almost impossible to support these versions because doing so requires knowledge that isn't well documented anywhere (e.g. which bugs existed at that time). If you do not intend to keep up with the latest milestones (4.0 M2 was six months ago now), please stick with the releases.
Enabled DEBUG logging for org.openhab and restarted.
On the first run (log attached), I noticed the Twinkly binding complained a bit, since my Christmas lights are not connected.
Removing Twinkly changed nothing; still stuck at start level 70.
I read through the logs for an hour or so, but nothing caught my eye.
If somebody a bit more fluent in reading openHAB logs could take a look, I would be grateful.
Searching for StartLevelService showed that only start level 0 was ever logged as reached.
Attached file (remove .log extension) org.openhab.debug.gz.log (363.0 KB)
Info: there is a host posting to the REST API during startup (GTV7PowerConsumption).
This completes successfully, and has been like this for years, so it should not be the problem.
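For anyone skimming: that host is just sending item values over HTTP. Here is a minimal sketch of that kind of request, assuming it posts plain-text commands to the item through the standard openHAB REST endpoint (the host name, port, and value are placeholders, not taken from my actual setup):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PostPowerConsumption {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // POST a plain-text command to the item via the openHAB REST API.
        // "openhab-host" and the value "123.4" are placeholders for illustration.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://openhab-host:8080/rest/items/GTV7PowerConsumption"))
                .header("Content-Type", "text/plain")
                .POST(HttpRequest.BodyPublishers.ofString("123.4"))
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("HTTP " + response.statusCode()); // expect 2xx if accepted
    }
}
```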
During restore from InfluxDB, both Java and InfluxDB hog the CPU for 1-2 minutes (lots of historical data).
This is probably worth filing an issue, though it's a tricky problem. Start level 80 is supposed to mean "Things initialized". If a Thing never becomes ONLINE, does that mean Things are never initialized? Should a broken Thing prevent advancing to start level 100?
Maybe it can be as simple as treating a Thing as initialized so long as it changes from INITIALIZING to any other state, not just ONLINE.
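As a rough illustration of that idea (this is a hypothetical helper, not the actual StartLevelService implementation; the class and method names are made up, though the Thing and ThingStatus types are the real ones from openHAB core):

```java
import java.util.Collection;

import org.openhab.core.thing.Thing;
import org.openhab.core.thing.ThingStatus;

// Hypothetical helper: a Thing counts as "initialized" once it has left
// UNINITIALIZED/INITIALIZING, whether it ended up ONLINE, OFFLINE or UNKNOWN.
public final class ThingReadiness {

    public static boolean isInitialized(Thing thing) {
        ThingStatus status = thing.getStatus();
        return status != ThingStatus.UNINITIALIZED
                && status != ThingStatus.INITIALIZING;
    }

    // Start level 80 ("Things initialized") would then only require every
    // Thing to have settled into *some* state, not necessarily ONLINE.
    public static boolean allThingsInitialized(Collection<Thing> things) {
        return things.stream().allMatch(ThingReadiness::isInitialized);
    }
}
```

Under that rule, a Thing stuck OFFLINE (like the disconnected Twinkly lights above) would still count toward "Things initialized", so it could no longer block start levels 80 and 100.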
Yes, I guess this has been refactored lately. It didn't seem to have any practical implications, though.
The only reason I noticed was that J-N-K asked me to check it when I had problems restoring from InfluxDB in 4.0.0-M3. That issue looks OK now, with just a few items timing out. The 5-second timeout could be marginal on systems with a large database.