They can have version constraints, but not all of them do. From a quick scan through the topics (without counting), roughly a third to half of the entries have no versioning in the title, and it’s that versioning that represents those constraints.
I want to make sure everyone knows I’m not advocating for this behavior. I think the marketplace add-ons should be reinstalled, or at least an attempt should be made to reinstall them, with errors in the logs if there is a failure.
But to answer the question: that can only work if the bundle actually indicates which versions it supports. For the unconstrained bindings we can’t know which versions they support, so those cases need special handling. And there may not be a version that supports the newly installed OH version, so that too needs special handling.
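As a rough sketch of the three cases described above (compatible, incompatible, and unconstrained), assuming a simple “min;max” range format for illustration only — the class, method names, and range syntax here are hypothetical, not the actual openHAB marketplace schema:

```java
import java.util.Optional;

// Hypothetical sketch: entries with a declared range are checked against the
// newly installed core version; entries with no range return UNKNOWN and
// need the special handling mentioned above.
public class CompatibilityCheck {

    enum Result { COMPATIBLE, INCOMPATIBLE, UNKNOWN }

    static Result check(Optional<String> declaredRange, int[] coreVersion) {
        if (declaredRange.isEmpty()) {
            return Result.UNKNOWN; // unconstrained: we simply cannot know
        }
        // Assumed format: "min;max" with inclusive bounds, e.g. "5.0;5.1"
        String[] parts = declaredRange.get().split(";");
        int[] min = parse(parts[0]);
        int[] max = parse(parts[1]);
        boolean ok = compare(min, coreVersion) <= 0 && compare(coreVersion, max) <= 0;
        return ok ? Result.COMPATIBLE : Result.INCOMPATIBLE;
    }

    static int[] parse(String v) {
        String[] p = v.split("\\.");
        return new int[] { Integer.parseInt(p[0]), Integer.parseInt(p[1]) };
    }

    static int compare(int[] a, int[] b) {
        if (a[0] != b[0]) {
            return Integer.compare(a[0], b[0]);
        }
        return Integer.compare(a[1], b[1]);
    }
}
```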
I think the overall problem is solvable.
I know @Nadahar did some work to make it so one post could contain multiple builds, one for each version, but I think that never got merged. That work might be relevant here as well.
Indeed, and that is probably best discussed in a separate topic. I am not personally invested in this though, since I do not use any marketplace addons. However, it seems that the current behaviour has been a source of a lot of frustrations both expressed here, and possibly quietly tolerated by many.
Part of the problem, I think, is that not many maintainers use marketplace add-ons themselves.
There’s not a lot of visibility into the problems nor motivation to do something about it. But we make forward progress even on these types of issues over time, so I’m hopeful.
It’s not that simple in any way. Marketplace authors basically have to guess at these version constraints, and most people never update them from the initial values. Think about it: you make an add-on that works with the current version of OH, and you’re asked to know in advance when it will break in the future. This is why I’ve advocated for support for open-ended version ranges: it makes more sense to specify the upper limit once the add-on has shown itself to be broken by an OH change. So you shouldn’t put too much trust in these ranges. The easiest option for authors is to not specify a range at all, because then, at least, the add-on will be available.
Also, there is an attempt to reinstall marketplace bundles after upgrade, but it’s quite broken, so it fails most of the time, without a trace. It’s only attempted once per add-on, and the “list” of add-ons is then cleared from the JSONDB if the attempt fails.
I’ve done quite a few changes to this in a branch, but it never made it to a PR (and thus was never merged) because I couldn’t complete the work because add-on handling is broken on a more fundamental level, and I never could figure out how to solve it - nor did I get much feedback when I tried to ask for help to figure out how to handle it.
I still have this code in a branch and still hope to be able to solve it some day, but I’ve been discouraged about whether the work I’d have to put in to fix the fundamental add-on handling would be worth it, or would ever be merged, because it’s likely to become yet another “hot potato” that nobody wants to review.
edit: I see that it might not be obvious what the marketplace handling has to do with the general add-on handling. The reason is the management of versions. I must store this information somewhere, and to do that, I must know what bundles are of “what type”. For marketplace bundles (JARs) there are two sources of version information, and they might not match at all. Generally, the JAR versioning follows “Maven versioning”, where a proper release is just a version and everything else is a “snapshot”. On the marketplace, there are alpha, beta etc., but no “snapshot” concept. So the only thing that makes sense is that for marketplace add-ons, the version is the one specified on the marketplace, not the one embedded in the JAR. The rules for non-marketplace add-ons are different, so to get this right, I must know for certain what is and what isn’t a marketplace add-on. The current “tagging” isn’t sufficient to achieve this, and it only exists in the JSONDB, information from which isn’t always available in the places where this is handled.
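To make the “two sources of version information” concrete, here is a minimal, purely illustrative sketch of the rule described above (the class and method names are hypothetical, not openHAB API):

```java
// Hypothetical illustration: the Maven-style version embedded in the JAR
// ("-SNAPSHOT" etc.) doesn't map onto the marketplace's alpha/beta scheme,
// so for marketplace bundles the marketplace metadata is the source of truth.
public class VersionSource {

    static String resolveVersion(String jarVersion, String marketplaceVersion,
            boolean isMarketplaceAddon) {
        // For anything from the marketplace, trust the marketplace version;
        // for other add-ons, the JAR version applies.
        return isMarketplaceAddon ? marketplaceVersion : jarVersion;
    }
}
```

The hard part, as noted above, is reliably computing that `isMarketplaceAddon` flag in the first place.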
But you can be pretty sure that your add-on will break between point updates. 5.0 add-ons are unlikely to work in 5.1, for example. They might work, but in most cases they do not. Patch releases are very unlikely to break the add-on though. So “up to the next minor release” makes sense as an initial upper limit. Then one can raise the upper limit once the add-on has shown itself not to be broken.
So that’s where the removal comes from. But it seems to do this silently. I’d have expected at least a warning that it not only failed to install the add-on but also isn’t ever going to try again.
I disagree. Whether or not it will break depends on what changes are made to the API, and to some degree to dependencies. It can break “at any time” when these things are changed, and given that “major releases” are time-based, not feature-based, there really is no correlation, or even increased probability, tied to the version number. I just made an add-on the other day which I expect to work from 4.1.0 (I haven’t actually tested it on anything older than 4.2.3) and “far into the future”. I could have made it work with even earlier versions, but I would have had to build it for Java 11, while I chose Java 17, which dictates the lower limit. It also works perfectly fine with 5.1.
When it comes to non-bundles, like UI widgets etc., it’s even more unpredictable when or if it will break.
The “common practice” now is to set the upper limit to the next major version, but this has a bad side effect: things just “disappear” from the marketplace for no good reason when a new major version is released. I observed that lots of stuff “disappeared” with 5.0, because nobody had updated the range. The vast majority most likely works just as well with 5.x.
I think a much better strategy would be to have them open-ended, unless a known factor exists that will break them, and to set the upper end only when they are known to break. This wouldn’t have to rely on the original authors: when it is discovered that an add-on is in fact broken, moderators or others with access could modify the range. I think this would give the best user experience overall, because nobody is in a position to actually predict when they will break.
The factors that decide if something breaks or not are way too complicated to predict, which is why official add-ons are built every night. That way, changes that cause breaks are detected and can be addressed immediately. You can’t expect marketplace add-on authors to have an automated build environment that does the same, which means that there’s no “automated” way to figure out when they break.
It’s indeed silent, and it’s also bugged in that it isn’t thread-safe, so multiple installation attempts are started in parallel, which I assume is the most common reason why they fail. I have already made fixes for many of these things in my previous work.
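One common way to guard against that kind of parallel-attempt race is an atomic “in flight” set, sketched below. This is an illustrative pattern, not the actual openHAB code or the fix from my branch; the class and method names are made up:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: deduplicate concurrent install attempts so each
// add-on id is only attempted once at a time, no matter how many threads
// trigger an install.
public class InstallGuard {

    private final Set<String> inFlight = ConcurrentHashMap.newKeySet();

    /** Returns true only for the caller that won the right to install. */
    public boolean tryBegin(String addonId) {
        return inFlight.add(addonId); // atomic add: only one caller gets true
    }

    /** Must be called when the attempt finishes, successfully or not. */
    public void finish(String addonId) {
        inFlight.remove(addonId);
    }
}
```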
The JSONDB is automatically cleared of all add-ons that aren’t currently installed:
… but kept in missingAddons, which gets one install attempt further down:
if (!missingAddons.isEmpty()) {
    logger.info("Re-installing missing add-ons from remote repository: {}", missingAddons);
    scheduler.execute(() -> missingAddons.forEach(this::install));
}
That attempt logs one line that should be visible in the log.
The subsequent installation attempt doesn’t log anything, but instead creates events, which I assume should be visible in the event log:
And the number of developers posting add-ons to the marketplace who would have any idea how to do that can probably be counted on one hand. I guess we should expect all these amateur and first-time Java programmers to know how to write an OH add-on that works with four or five different JVMs and with multiple versions of the OH API.
Seems reasonable. I don’t want to argue about this so whatever.
All I’m saying is that when there is no reasonable upper bound, having it open seems a better option than just making a wild guess. If there are circumstances that make a break likely at some point, by all means, specify an upper bound.
For UI widgets I think we’ve never shipped a breaking change at all. At least I never knowingly did so, and even the upgrade to Vue 3 shouldn’t break widgets.
I partly disagree here: the next minor release will likely break things. When posting an add-on for 5.0 to the marketplace, I do expect it to fail on 5.1, as any Karaf upgrade can require a recompilation.
It doesn’t for a regular bundle that isn’t a “feature”/KAR (remember that official add-ons are, which makes them break “much more easily”). A regular bundle has no idea of Karaf at all, and is happy as long as its dependencies are met and the API remains valid. I have two add-ons on the marketplace that work from 4.1.0 until the latest, and there’s nothing special about them other than that I’ve built them with Java 17. The more dependencies you have, the higher the chance of something breaking, obviously, but many add-ons don’t need that many dependencies.
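The “happy as long as its dependencies are met” part is decided by the bundle’s OSGi manifest. A plain bundle declares version ranges on imported packages, and the framework resolves it on any core where those ranges are satisfied. The package names and ranges below are purely illustrative, not copied from any real add-on:

```
Import-Package: org.openhab.core.thing;version="[4.1,6)",
 org.openhab.core.library.types;version="[4.1,6)"
```

With open ranges like these, nothing in the bundle itself stops it from resolving on a newer core, which is why such bundles can keep working across several releases as long as the API stays compatible.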
Thanks for that information. Maybe this should be added to the docs.
I’m not sure what the right place is. Maybe here, before “Special Features”?: Main UI | openHAB
Thanks everybody for the conversation about the Marketplace Bundles.
I totally agree that it might be a problem to do an auto-upgrade and that it’s better to remove them.
What do you think of adding something like a “previously installed bundles” page that is available for some days after upgrading to another build or after uninstalling any bundle?
I’m thinking of some kind of list that links directly to the bundle (whether it’s a marketplace or an official one). If you reinstall it, it will disappear from the list.
I’ve been thinking of something like this (not one that would disappear though, but a list where the user could either choose “install” or “remove” manually). But this all stopped when I got stuck on the larger marketplace work.
FWIW, the reason I constrained the roborock binding to 5.0 only is that the code has been merged into 5.1, so for 5.1 users I prefer that they use the maintained version.
I’ve run into an issue when running behind traefik where some channels won’t link (and won’t give any visible errors in the UI, just a 400 error in the browser dev tools). Apparently it’s because of the # (%23) character used in some of the calls, and traefik rejects that as a security measure. My traefik logs showed it like this:
github.com/traefik/traefik/v3/pkg/server/router/deny.go:52 >
Rejecting request because it contains encoded character %23 in the URL path:
/rest/links/ZWave__BathroomLights_BathroomLights_scene_state_scene_001_Sensor/
homeassistant%3Adevice%3A5510905061%3Azwavejs2mqtt_5F0xda4abed1_5Fnode6%3Ascene_state_scene_001%23sensor
Not sure if it’s limited to the Home Assistant binding or not, since I just started testing.
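For anyone wondering where the %23 comes from: channel UIDs contain # (and :), and those characters must be percent-encoded when the UID is used as a path segment in a REST call, which is exactly what the proxy then rejects. A small stand-alone illustration (the class and method names are mine, not openHAB code; URLEncoder targets form encoding, which is close enough for this demonstration):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Illustration: percent-encoding a channel UID for use in a URL path
// segment turns '#' into %23 and ':' into %3A, matching the traefik log.
public class ChannelUidEncoding {

    public static String encodeSegment(String channelUid) {
        // URLEncoder is form encoding, so map its '+' (space) back to %20
        // to behave like path-segment encoding in this simple sketch.
        return URLEncoder.encode(channelUid, StandardCharsets.UTF_8).replace("+", "%20");
    }
}
```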
Edit: looks like the fix is to add this to your traefik.yml entrypoint:
http:
  encodedCharacters:
    allowEncodedHash: true
There are also these other options available; I’m not sure yet if any of them need to be turned on:
Just installed openHAB 5.1.0.RC1 on my Production system and happy to report that everything seems to be going well so far. This was a direct upgrade from 5.0.3, as I never put the production system on the prior milestones.
I have checked things/items on most bindings, and all look ok so far. The only exception I had to address was reinstalling the Smarthome/J TCP/UDP binding. (This is well covered already in previous discussions, so this step was not unexpected)
This did however take slightly longer to figure out, as the binding was not appearing in the marketplace; it turns out the setting “Include (Potentially) Incompatible Add-ons” had reverted to OFF (not sure when that happened, whether it was a result of the RC1 deployment or a while ago…).
I don’t know if I am dreaming, but everything seems ‘snappier’ (faster) in the UI:
Opening a page seems to populate quicker on my Android app (values and dynamic icons)
Navigating away from the page, and back to the page later, results in the page loading immediately, including the values and dynamic icons. I am sure that I used to have to wait for it to repopulate every time when re-entering the page.
This is especially noticeable as I have a slow Internet connection, and navigating through openHAB usually feels laggy; but today I tried a remote connection, which also involves a very poor mobile data connection, and the navigation felt much faster.
Tried a few of my JS (UI based) rules, and all seem to run OK so far.
Nice work, and thanks to all those who contributed to getting this release this far.