I’d like to split the discussion about repo structures and rankings for addons off into its own topic.
Let me re-emphasize my concerns: dropping formal code reviews altogether will deteriorate code quality and stall the growth of the openHAB code base, as addons will be scattered all over the place instead of being part of the project.
I’d therefore like to keep all “important” add-ons under a similar process as today (but hopefully with more active maintainers), while having half-finished stuff and things for small audiences “living in the wild”.
I’ll post in this topic, but I think this issue and lack of enough maintainer hours are related.
Letting the addon ecosystem be less managed is not without its downsides. There will certainly be awful, bug-ridden addons. There will probably be more overall instability for a typical user as well. It will also frustrate new users, since it will take them more time to learn and explore the openHAB landscape.
A major development bottleneck is the lack of maintainer man-hours. I certainly understand the desire to solve things by increasing maintainer man-hours, but that just doesn’t seem like a realistic goal. More like a nice dream.
With openHAB’s addon framework it is easy in theory to drop in a development/forked jar file in place of the original, but I don’t find myself doing it. I see lots of interesting forks and development efforts, but the developers seldom post prebuilt jar files for users to experiment with. With a bit of time I can build these jar files on my own, but many users cannot, and I would rather not have to.
Let’s create a development framework where I can point my openHAB install at addon repos instead of providing it with addon jar files. This would let my addons auto-update as the repos change. It would also let users easily experiment with development forks by pointing their openHAB install at whatever forked repo they desire.
It will create a competitive addon ecosystem where we don’t need long term dedicated maintainers. If a maintainer flakes out, anyone else can just fork the repo and share their work quickly with any users who are interested. Let’s not make users have to set up a development environment to achieve this.
Of course, anyone who hates this can stick with the current architecture. If you want your openHAB runtime to use the jar files you give it instead of pointing it at actively developed repos, that is your choice. Let’s make it easy for users to choose their desired balance of stability and new features. I like the idea of empowering users, even if it makes things more difficult for newbies.
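To make the repo-pointing idea above concrete: the heart of such an auto-update loop is deciding whether the version published in the repo is newer than the installed one. Here is a minimal sketch; the class and method names are made up for illustration and are not part of any existing openHAB API, and it assumes plain numeric versions (qualifiers like `-SNAPSHOT` would need extra handling):

```java
// Hypothetical update check an addon manager could run on each poll of a
// configured repo. Not an existing openHAB API; purely an illustration.
public class AddonUpdateChecker {

    /** Compares two dotted numeric version strings, e.g. "1.8.0" vs "1.9.0". */
    public static boolean isNewer(String installed, String candidate) {
        String[] a = installed.split("\\.");
        String[] b = candidate.split("\\.");
        int len = Math.max(a.length, b.length);
        for (int i = 0; i < len; i++) {
            // Missing segments count as 0, so "1.8" == "1.8.0".
            int ai = i < a.length ? Integer.parseInt(a[i]) : 0;
            int bi = i < b.length ? Integer.parseInt(b[i]) : 0;
            if (bi != ai) {
                return bi > ai;
            }
        }
        return false; // identical versions, nothing to update
    }

    public static void main(String[] args) {
        // A repo publishing 1.9.0 would trigger an update of an installed 1.8.0.
        System.out.println(isNewer("1.8.0", "1.9.0")); // true
        System.out.println(isNewer("1.9.0", "1.9.0")); // false
    }
}
```

A real manager would wrap this in a scheduled poll of each configured repo, then download and hot-swap the jar when the check returns true.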
Here are my thoughts on the topic.
First, let me say that I use OH in a “production” environment and therefore need reliable (ideally 24/7) software. I’m definitely not interested in experimenting with an unstable system, and I guess most users aren’t either. Based on this, there are some restrictions for the integration of bindings.
My main point is that no single binding should be able to crash the whole system (see the current topic about an OutOfMemoryError in the Sonos binding). So it shouldn’t be possible for anyone with no programming experience to implement a binding and put it in the “official” repository, whatever that may be. And I also don’t want to deal with dozens of repositories to find interesting bindings (first find the repository, then the binding … not good).
So, if I had some wishes about this topic:
Use a binding interface that prevents bindings from doing any harm to the rest of the system (I’ve read about socket communication between core and binding)
Keep a repository where all “official” bindings are accessible. These bindings should meet some criteria, like being actively maintained (with respect to bugs, etc.) and having passed some kind of testing (review, basic functionality, …), so that they can be considered reliable. Whether or not there are more categories like users, etc. doesn’t matter from my point of view.
Of course, anyone who has an interesting idea for a binding for a (new) gadget is free to implement and use it (after all, the interface is public). If he thinks it is stable and interesting enough for other users, it can be declared “official” (after passing the tests). But having the 15th implementation of an HTTP binding might not be so cool either.
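On the sandboxing wish above: a cheap first approximation is to never run binding code on the core’s own threads, but hand it to a dedicated executor and catch whatever it throws. The sketch below is only an illustration of that idea (the `BindingTask` interface is hypothetical, not an existing openHAB type); note that a genuine OutOfMemoryError like the Sonos case can exhaust heap shared with the core, so full protection would need process-level isolation such as the socket-based separation mentioned above.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: the core hands binding work to an executor and catches anything
// it throws, so a misbehaving binding only fails its own task.
public class BindingSandbox {

    @FunctionalInterface
    interface BindingTask { // hypothetical contract between core and binding
        void run() throws Exception;
    }

    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    /** Runs a binding task; exceptions stay inside the sandbox thread. */
    public void execute(String bindingId, BindingTask task) {
        executor.submit(() -> {
            try {
                task.run();
            } catch (Throwable t) {
                System.out.println("Binding '" + bindingId + "' failed: " + t.getMessage());
            }
        });
    }

    public void shutdown() {
        executor.shutdown();
        try {
            executor.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        BindingSandbox sandbox = new BindingSandbox();
        // A crashing binding does not take the core down with it.
        sandbox.execute("sonos", () -> { throw new IllegalStateException("boom"); });
        sandbox.execute("http", () -> System.out.println("http binding polled OK"));
        sandbox.shutdown();
        System.out.println("core still running");
    }
}
```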
I know that this doesn’t solve the maintainer problem, but have you tried a static analysis tool to reduce the review workload in that area, e.g. PMD (https://pmd.github.io/)?
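For what it’s worth, PMD can be wired into a Maven build so that every PR gets checked automatically via `mvn pmd:check`, which fails the build on rule violations. A minimal pom.xml fragment (the plugin version below is just an example):

```xml
<build>
  <plugins>
    <!-- "mvn pmd:check" fails the build when PMD finds violations -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-pmd-plugin</artifactId>
      <version>3.5</version>
    </plugin>
  </plugins>
</build>
```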
Why point to a repo? Why not just a jar? These addons may as well live in Maven Central. It would be awesome to have some sort of management app/script for addons that can list, add, delete, and change versions of installed addons.
I have been following the discussion “The Future of openHAB” with great interest.
And now, after reading these posts again, I am reinforced in my original idea (which I have had since the very first day I got into OH) that you are in deep trouble because the whole idea of “keeping control” (checking code, importing it into the main code stream, …) is just evil.
I do not know of any other open source project or system where such a rigid and “military” structure has been deployed.
Everywhere else, and with any other technology, the workflow is exactly the opposite:
develop a plugin/addon/…
test it on the framework for bugs, compatibility, …
make it available to anyone interested in testing as alpha/beta version
debug & fix loop
consolidate it into a final version 1.0
publish as on open source plugin (maven central + github or whatever, depending on the technologies involved)
optional: publish as plugin on the available plugin list on the open source open system web site
and the final user is free to test it, use it, fork it, fix it, submit pull requests, open a fix request, vote on the plugin, comment on the plugin, … all at their own risk.
This is the normal workflow that any open source developer and/or user is used to.
And this workflow just works; it is THE open source workflow. I never understood either the point of integrating the plugins/addons into the base of OH or the point of keeping control over the plugin developer’s code. Maybe I am just missing some important part of the process, but apart from the open source license, which must be the same, I (both as an OH developer and an OH user) really never understood the reason behind this overwhelming organization.
Well, the addon jar file would sit in some sort of repo, either GitHub or Maven Central or wherever. The point is that you would point your openHAB runtime at a repo that contains a jar file, not give it a static jar file. That way, as the jar file in the repo gets updated, openHAB will automatically start using the updated jar file. And, as I stated before, users wouldn’t HAVE to do this. If they prefer, they could continue using static jar files on their local system. It would just make things a lot easier for people who want to experiment with a forked/development repo or who just want to stay up to date on the addon.
OK, I’m joining this discussion a bit late, but here are my 2 cents.
In my opinion, the structure affects 2 types of audiences: developers and users.
As a user, I really don’t care where all the jars and code are stored. I just want to download the core runtime, fire it up, and then find and install add-ons directly from within the application. Sort of like you can with the Marketplace in Eclipse, but better. Given that openHAB is built around OSGi, we can use that to our advantage so that any downloaded add-ons are activated immediately without having to restart the application.
To keep quality good, there needs to be governance. Not everyone should be able to add to the marketplace, only a select few. Governance should not be a bottleneck in the process, so rather than waiting for a full review before things are added, they could be added with different ratings. For example, a new unreviewed/untested binding could be added with the rating ‘experimental’. As bindings move through the governance process, their rating could increase (e.g. beta, review in progress, certified, …).
As a binding developer, I do care about code quality and the location of the code, but above all I like to keep things simple. I remember spending way too much time building and troubleshooting build problems of bundles I never even used. If development takes a long time, which it tends to do in this ecosystem, then having all the bundles together in one repo just makes keeping current with the main branch more complicated than it needs to be. So my vote goes to a dedicated repo per bundle.
Furthermore, I believe it will improve the quality of the code, as it becomes easier for contributors to keep track of their issues and feature requests. Personally, I tried to follow defects for my bundles in openHAB, but there was just too much activity that didn’t concern me, so I stopped trying after a while.
Adding my 2 ct as well, making @dvanherbergen’s post worth 4 ct, as I share much of his opinion.
From a user’s point of view, we should have a sort of marketplace where you can choose to see two or three groups of bindings. I imagine you would have the code-checked ones, the actively maintained/tested ones, and the more experimental/unreviewed ones.
I don’t know if you have seen it, but something like how Synology devices have an add-on repository, except with the list of community repos centrally maintained.
As a developer, I would like it to be very easy to share a beta version; I would like to be able to push a beta version myself at any moment, which users who want it can pick up and use. For released versions I like the code reviews, as I think they add to the quality.
How about using an OBR repository dedicated to openHAB? This could be used to store the different binary versions of add-ons.
We could add a marketplace layer on top of that with some publishing governance built in. Then all we would need is an add-on manager bundle which allows you to browse and install from the marketplace.
@Kai, if you have no objections, I’m happy to start hacking away at a solution for this over the next few months. It will be a good way for me to get started again with openHAB development.
For the repos, one option would be to split it into 3 types:
openhab core : the minimum runtime required
openhab add-ons : certified add-ons
standalone add-ons : addons under active development with their own lifecycle. These could move to openhab add-ons when their maintainer loses interest and wants to find a good home for the addon…
I’m already dreaming of what the perfect solution from a user’s point of view would be (German: Goldrandlösung, i.e. a gold-plated solution ;-))…
As a user, I could download and install addons from a central repository. I would also get notified when an update is available for my addons; they would not be directly tied to the software lifecycle of the OH core. I could select, per addon, which level of the addon I want to run (released, beta, development, …), and change it at any time. With this, I could fall back to an earlier version of an addon when there are problems with the current one (it can’t get worse than a not-yet-fully-tested new version), or jump ahead when the maintainer has added cool new features I really want. Even as a developer, you could select the “development” level for your own addon and always have your newest version available via the official distribution channel.
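The per-addon level selection described above could boil down to a small lookup: the user’s chosen channel, not the core’s release cycle, picks which artifact gets installed. A toy sketch (the channel names, versions, and repo layout here are all invented for illustration):

```java
import java.util.EnumMap;
import java.util.Map;

// Sketch of per-addon channel selection with fallback to a more stable
// channel when nothing is published. All names and versions are made up.
public class ChannelResolver {

    enum Channel { RELEASED, BETA, DEVELOPMENT } // ordered stable -> unstable

    /** Versions a hypothetical repo publishes per channel for one addon. */
    static Map<Channel, String> published = new EnumMap<>(Channel.class);

    static {
        published.put(Channel.RELEASED, "1.8.0");
        published.put(Channel.BETA, "1.9.0.b2");
        published.put(Channel.DEVELOPMENT, "1.9.0-SNAPSHOT");
    }

    /** Falls back toward RELEASED if the wanted channel has no version. */
    static String resolve(Channel wanted) {
        for (int i = wanted.ordinal(); i >= 0; i--) {
            String v = published.get(Channel.values()[i]);
            if (v != null) {
                return v;
            }
        }
        throw new IllegalStateException("addon has no published version");
    }

    public static void main(String[] args) {
        System.out.println(resolve(Channel.DEVELOPMENT)); // 1.9.0-SNAPSHOT
        System.out.println(resolve(Channel.RELEASED));    // 1.8.0
    }
}
```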
And then, if there were some kind of mechanism that prevents addons from crashing the whole system, because they run in a kind of sandbox…
Guys, it is ok to start dreaming and I also have millions of ideas what would be cool for users and developers.
But please note that I started this thread for a specific reason: there are too few people doing the overall coordination versus too many people doing contributions (which need to be coordinated to some extent).
For me as a maintainer of the runtime, it is VERY valuable to have a workspace with all addons in it, so that refactorings can be tested to some extent and “bulk” actions can be performed (like creating a release, applying code formatting, or whatever).
From an individual contributor’s perspective, it is of course nicer and more flexible to have a single small repo - but this means that these are indeed 3rd-party “addons” and no longer part of the openHAB project itself (since nobody will track the governance, IP & license issues, etc.).
@davy, it is great to have you back on board!
If you are interested in improving the user experience here, I would suggest you join me and @maggu2810 on https://github.com/openhab/openhab2/issues/194. We have actually implemented a lot already there; add-ons can be located in any Maven Repo and the new runtime comes with a UI where users can easily install/uninstall the add-ons.
Note that this is already the status quo of openHAB 2: The add-ons are already located in 3 different source repositories (ESH, OH1 and OH2), but for the openHAB 2 distribution they are all assembled, so that the user does not have to care. With the openHAB 1.8 release we also want to remove the runtime and designer code from the openHAB 1 repository, so that only the add-ons are left in there. For openHAB 2, I also plan to split the add-ons from the runtime. For add-on developers it will therefore be pretty easy in the future to work on their code, because all they need in their workspace is their own bundle, nothing else. But again: doing all these changes is a lot of work that I am mainly doing alone - I wish more people were willing to help with such things!
So what is going to happen to the non-binding code changes that have been made since 1.7.1? In what ways will the 1.8 release differ in nature from prior releases? Your comment raises worries about wasted effort, so please clarify - and thank you.
Sorry if this was misleading: What I meant was that the 1.8 runtime will be the last one - and it will of course include all changes that have been done since 1.7.1! AFTER this release, the sources will be removed so that there isn’t any 1.9.0-SNAPSHOT runtime. Worries gone now?
Yes; thanks for clarifying. Makes perfect sense, except in the case where there would be strong arguments for a 1.8.1 maintenance release, but I suspect you’ve thought of that already. But knowing that there will be no 1.9 runtime is good!
Except that it doesn’t scale. Imagine having 600 openHAB add-ons…
I believe most of the things you mention can still be done by just importing different repos into one workspace. And some of the others, are probably things you shouldn’t spend time on. But I can understand the need for a good working environment and it makes sense to keep things simple.
Personally, I prefer an approach where a contributor would not even have to clone a repo. In my dreams, a contributor would be able to download a single esh.binding.api.jar and implement a binding any way they want. Keeping all connections between the runtime and the bindings limited to a single API spec would make it easier to maintain both the binding and the runtime.
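To illustrate how small that single-jar contract could be, here is a rough sketch. These interfaces are invented for illustration only; ESH/openHAB define their own, much richer handler APIs:

```java
// Sketch of a minimal "api jar" contract between runtime and binding.
// Both interfaces are hypothetical, not real ESH/openHAB types.
public class ApiJarSketch {

    /** Everything the runtime promises to a binding, in one interface. */
    interface BindingContext {
        void postUpdate(String itemName, String state);
    }

    /** Everything a binding must implement, in one interface. */
    interface Binding {
        void activate(BindingContext context);
        void deactivate();
    }

    /** A contributor's binding, compiled against only the API jar. */
    static class DemoBinding implements Binding {
        public void activate(BindingContext context) {
            // A real binding would talk to hardware here; this just posts a state.
            context.postUpdate("Demo_Item", "ON");
        }
        public void deactivate() {
            // release resources
        }
    }

    public static void main(String[] args) {
        Binding binding = new DemoBinding();
        // The runtime supplies the context; here a lambda stands in for it.
        binding.activate((item, state) -> System.out.println(item + " -> " + state));
        binding.deactivate();
    }
}
```

Because the binding only ever sees the two interfaces, the runtime can evolve freely behind them, which is exactly the maintenance benefit argued for above.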
But there does need to be some governance on add-ons. I don’t think it needs to be ‘hard’ governance, where bindings don’t get included in the release because they are not approved yet. I would much prefer a system of governance through ratings. Ratings like ‘official’, ‘certified’, ‘experimental’, etc. should be enough. You could have a system where, if a binding gets too many defects, its rating is lowered automatically to ‘buggy’. That may motivate developers to resolve bugs. I also believe this ‘governance’ should be more limited than today. What is really important is that the code doesn’t pose a threat to application stability and that it is reasonably documented. But does the code for the different bindings all need to use the same code formatter? For me, that is a choice that can be left up to the developer of the binding, as long as the binding is maintained outside of openHAB - and with that, it’s one less worry for the reviewer… Of course, when bindings are donated back to openHAB, a more rigid review process could still be used to make sure the IP is OK.
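The automatic-demotion idea could be as simple as a defect counter on each marketplace entry. A toy sketch, with made-up rating names and an arbitrary threshold:

```java
// Sketch of governance-through-ratings with automatic demotion.
// Rating names and the defect threshold are illustrative only.
public class AddonRating {

    enum Rating { CERTIFIED, EXPERIMENTAL, BUGGY }

    private Rating rating;
    private int openDefects;
    private static final int DEFECT_LIMIT = 5; // hypothetical threshold

    AddonRating(Rating initial) {
        this.rating = initial;
    }

    /** Reporting a defect may automatically lower the rating to BUGGY. */
    void reportDefect() {
        openDefects++;
        if (openDefects > DEFECT_LIMIT && rating != Rating.BUGGY) {
            rating = Rating.BUGGY; // visible nudge for the maintainer
        }
    }

    /** Once enough defects are fixed, a reviewer can restore the rating. */
    void resolveDefect(Rating restoredBy) {
        if (openDefects > 0) {
            openDefects--;
        }
        if (openDefects <= DEFECT_LIMIT) {
            rating = restoredBy;
        }
    }

    Rating rating() {
        return rating;
    }

    public static void main(String[] args) {
        AddonRating binding = new AddonRating(Rating.CERTIFIED);
        for (int i = 0; i < 6; i++) {
            binding.reportDefect(); // 6th defect crosses the threshold
        }
        System.out.println(binding.rating()); // BUGGY
        binding.resolveDefect(Rating.CERTIFIED);
        System.out.println(binding.rating()); // CERTIFIED
    }
}
```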
I did not contribute to the discussion because I am not a professional software developer, and I have zero experience with provisioning systems and so forth, and thus prefer to shut up. However, I do crank out a great deal of source code at various points in ESH and OH, so I might as well throw in my thoughts, albeit not very structured ones.
I would prefer a system whereby:
there is one overall “add-on” repo, community-owned by all the “usual suspects”, i.e. more than just @kai and Thomas
every author of an add-on takes on the responsibility of self-reviewing the code and resolving bugs
every author has the possibility to push commits to the repo, do tagging, and follow up on things. I know this implies a minimal level of trust, but if things break, we can always fix them afterwards
add-ons can be added to a repo immediately, so that everyone (from user to developer) at least finds their way to the add-on and can use it in a test/evaluation/production environment. I have about 10 or so bindings that are still in the pipe as pull requests. They have been running fine for over a year in my own production environment, and I fix their bugs, but no one else is using them, so they do not evolve, the code quality does not evolve, and so forth. I think stuff needs to be shared early on, even if it is not finished
I think we should stay pragmatic, and stick with a simple ranking system as suggested by @davy
We have a repo that contains ALL reviewed/approved/clean openHAB 1 bindings (openhab/openhab)
We have a repo that contains ALL reviewed/approved/clean openHAB 2 bindings (openhab/openhab2-addons)
We have an IDE setup which, in the minimal configuration, does not require any of these repos to be checked out - so you can potentially work on code that is located somewhere completely different.
Targetplatform and core bundles are all published to Maven, so it should not be too hard to create new GitHub projects for individual add-ons (@hakan’s suggestion), but I am not sure we should go this way (I just want to mention that my preparations make this possible as well).
We have the distro project, which assembles the runtime with the add-ons from these sources. It is fairly easy to add further locations, either directly to the official distro or by the user himself. The add-ons do not even have to be available as source code; they only need to be deployed to some Maven repo.
I think it would be great to have a build server setup that builds all PRs and publishes such add-ons to a Maven repo. We could then have a new “experimental” Karaf feature, which people can activate on their runtime and which would make all such add-ons directly available.
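Such an “experimental” feature could look roughly like this in Karaf’s feature descriptor format; the feature name and Maven coordinates below are made up for illustration:

```xml
<features name="openhab-experimental" xmlns="http://karaf.apache.org/xmlns/features/v1.2.0">
  <!-- Hypothetical feature bundling PR builds published to a Maven repo -->
  <feature name="openhab-experimental-addons" version="1.9.0.SNAPSHOT">
    <bundle>mvn:org.openhab.binding/org.openhab.binding.example/1.9.0-SNAPSHOT</bundle>
  </feature>
</features>
```

Activating the feature from the Karaf console would then pull the listed bundles straight from the configured Maven repo.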
I have about 10 or so bindings that are still in the pipe as pull requests.
That is indeed a problem. It’s definitely not nice if you have put in all the effort to create a binding and then it just ends up stuck in limbo for a long time.
Targetplatform and core bundles are all published to Maven, so it should not be too hard to create new GitHub projects for individual add-ons (@hakan’s suggestion), but I am not sure we should go this way
It is unavoidable that bindings will move away from the main openHAB repo if the review/integration process takes too long. We should find a solution in the “Growing the number of active maintainers” thread.
This is definitely my preferred solution. I think for the openHAB 1 add-ons we have already made good progress here, as we meanwhile have a bunch of maintainers for them. For ESH and OH2 I am still completely alone, which definitely has to change now that even more developers are moving over.
A big part of the long binding review queue is also due to the fact that my priorities were more on getting the ESH framework and APIs stable and usable than on inviting people to write new bindings for OH2. I still do not consider the APIs final, so there is still the risk that stuff gets broken. That’s why I would prefer any effort of our @maintainers and developers to be spent on making existing bindings work smoothly on OH2, before too many people actively port stuff over.