Removal of the OH 1.x Compatibility Layer

I’m familiar with this code, and how it enables compatibility of 1.x bindings under 2.x is actually pretty straightforward. 2.x bindings use the event bus, and through the compat1x layer 1.x bindings use the same event bus, so it’s clear that both binding types can share it; they’re doing so right now.
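To make the mechanism concrete, this is roughly the shape of a 1.x binding’s side of that contract. It is only a minimal sketch against the classic org.openhab.core 1.x API; the binding class, provider interface, and item name here are made up for illustration.

```java
// Minimal sketch only: a 1.x-style polling binding publishing to the event bus.
import org.openhab.core.binding.AbstractActiveBinding;
import org.openhab.core.binding.BindingProvider;
import org.openhab.core.library.types.DecimalType;

// Illustrative provider; a real one would expose the parsed item binding configs.
interface ExampleBindingProvider extends BindingProvider {
}

public class ExampleBinding extends AbstractActiveBinding<ExampleBindingProvider> {

    @Override
    protected void execute() {
        // Poll the external device, then publish the reading to the event bus;
        // 2.x bindings end up putting equivalent events on the same bus.
        eventPublisher.postUpdate("Example_Temperature", new DecimalType(readSensor()));
    }

    @Override
    protected long getRefreshInterval() {
        return 60000; // poll once per minute
    }

    @Override
    protected String getName() {
        return "Example Refresh Service";
    }

    private double readSensor() {
        return 21.5; // placeholder for real device I/O
    }
}
```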

No modifications to the 1.x bindings themselves would be needed to support different ways of managing item binding configurations, because 1.x bindings are already completely ignorant of .items files, and have been for as long as I’ve used openHAB. The way items bound to 1.x bindings are configured could be extended to use APIs in addition to .items files. I suspect this wasn’t done because openHAB was split into ESH and openHAB at the same time the 2.x binding model was introduced, so the code split probably made it, lamentably, too difficult at the time to properly support 1.x bindings. But with the re-coalescing of ESH code back into OH, this seems like an opportunity to undo that chasm: the happy continued existence of “item bindings” alongside the later “thing bindings” could be easily documented, supported in APIs and UIs, and the whole product could be greatly simplified without losing any functionality. And the religion of killing off perfectly good and stable functionality could finally be put to rest.
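For concreteness, this is roughly what such an “item binding” configuration looks like in an .items file today (the item names and binding strings below are only illustrative): the 1.x binding never parses this file itself; it is only handed the config string from the braces.

```
// Illustrative only: the 1.x binding configuration is the string inside the braces
Switch  Kitchen_Light  "Kitchen Light"      { knx="1/0/15" }
Number  Outside_Temp   "Outside [%.1f °C]"  { http="<[http://example.org/temp:60000:REGEX((.*))]" }
```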

And if all available developers would rather take a “my way or the highway” or “not invented here” attitude and lop off old bits of perfectly good functionality than strengthen the totality of the openHAB product, that is absolutely their right, just as it was my right to go do something else with my time back in 2017.

I’m just here trying to open some eyes to possibilities that people might like as a path forward, because I think openHAB still has great potential and is a massive warehouse of great integration and architecture.


I know of at least one other person who quit doing OH development because of this attitude.

It’s important that stakeholders know why developers are drawn to, or repelled from, a project. So if this person were to return with their opinions, it might be constructive for stakeholders to know them (if presented constructively of course!). Open and honest communication can only help a project.


The way I read it, @vossivossi means that even if you only use OH2 bindings, there is still the “schizophrenic configuration”: some things can only be done in the GUI, others only via text config files. Dropping the 1.x bindings will not change that.


That’s another topic and is already under discussion, so why mention it here? ^^ That will be solved in the end. The question is how to preserve / rescue the labour that went into the OH1 bindings. Maybe the OH1 maintainers can separate out the logic as cleanly as possible, so that embedding it into an OH2 binding shell is not much work.
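Very roughly, I could imagine something like the following for such a “shell”. This is purely a sketch, not a definitive recipe: ExampleProtocolClient stands for the hypothetical framework-free class holding the ported 1.x logic, and the thing config key "host" is made up.

```java
import org.eclipse.smarthome.core.library.types.DecimalType;
import org.eclipse.smarthome.core.thing.ChannelUID;
import org.eclipse.smarthome.core.thing.Thing;
import org.eclipse.smarthome.core.thing.ThingStatus;
import org.eclipse.smarthome.core.thing.binding.BaseThingHandler;
import org.eclipse.smarthome.core.types.Command;
import org.eclipse.smarthome.core.types.RefreshType;

// Hypothetical framework-free class holding the logic salvaged from the 1.x binding.
class ExampleProtocolClient {
    private final String host;

    ExampleProtocolClient(String host) {
        this.host = host;
    }

    double poll() {
        return 21.5; // placeholder for the real, ported polling code
    }
}

// OH2-style handler that only maps channels/commands onto the old logic.
public class ExampleHandler extends BaseThingHandler {

    private ExampleProtocolClient client;

    public ExampleHandler(Thing thing) {
        super(thing);
    }

    @Override
    public void initialize() {
        client = new ExampleProtocolClient((String) getConfig().get("host"));
        updateStatus(ThingStatus.ONLINE);
    }

    @Override
    public void handleCommand(ChannelUID channelUID, Command command) {
        if (command instanceof RefreshType) {
            // Reuse the old polling code and map its result onto the channel.
            updateState(channelUID, new DecimalType(client.poll()));
        }
    }
}
```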

One thing does come to mind on this topic: maybe if we stick with the move-forward approach, people will come out of the woodwork and update things.

I agree this topic is very challenging, but sometimes you find out who is really serious when you take the hard road. The hard road being what @Kai suggests: leaving 1.x behind.

Trust me, I hate saying this, but it is often what moves us forward as people. Certainly I want discussions like this one prior to making a decision. But I think we have discussed plenty; maybe it’s time to start acting. Either way, of course!


What does the parallel operation of an OH2/3 and an OH1 engine (as proposed by @David_Graeff) on the same computer mean in terms of performance? I am asking as a Raspberry Pi owner …

Thanks!

You need more memory for those two OH instances.

That’s a problem on a Pi, I assume…

It depends, I think, especially on whether you are running other services on the Pi (Node-RED, Homegear, …).
If your Pi is at its limit, would it really be a problem to put another Pi into service for this scenario, or to use a more powerful system?
(For me it’s really not, but I like servers, so I think I am too biased to give an opinion on this.)

If we have to run multiple Pis to have a fully functional smart home server, many users will be disappointed. And yes, for me, a Pi is “the” perfect hardware for a smart home server. Whatever direction OH takes, if it does not run on a single Pi, I assume many users will leave…


Not only for you but for many users.
There are not many alternatives with more RAM.
Maybe the Pine64, but I’m not sure whether it offers the kind of easy setup that’s possible with openHABian.

Back to topic.

@David_Graeff this question was meant seriously.
I know the work depends on the binding and its functionality, but what do you think would be enough for a developer to do the work?
Let’s take the CalDav binding as an example.

I thought this was very much on topic: if OH 1.x compatibility is in the future realized by running an additional OH 1.x server (as proposed by @David_Graeff above), I am simply afraid that this will be a pain on a single Pi.

I mean, it is your wish to run OH1 bindings. You don’t need to. And if you do, it’s not many bindings, is it? OH1 will not consume much RAM in that case. Maybe 50 to 100 MB.

Probably around 250 MB for a small binding.

Well, I only use the bindings I need. In my case, the CalDAV binding is the only 1.x binding (I think), but it is essential. Not having this one breaks everything. Call it a wish, but if you put it this way, using OH in general is also only a wish.

100 MB is 10% of the RAM, and looking at the utilization, this would be relevant! Noteworthy: my Pi only runs OH with a few bindings and a handful of rules, so consider others who might have a more complex system…

OK, it seems really many people like to do this. For me it isn’t really a server; when I think “server” I think of a minimum of 8 drive bays and so on. So a small server would be a NUC for me, and that’s already really small to me…

IIRC it was mentioned that the compatibility layer is in the end an OH1 core running inside OH2. So if the compat layer isn’t there in OH3, wouldn’t it consume less RAM? (Of course it depends on the bindings used and so on, but you know what I mean.)
And besides, maybe when OH3 is ready there will be a Pi 3++ or Pi 4 with a little more RAM than today. I don’t know whether it will really be a problem to also run OH1 on the Pi when the time comes. But as said, I am not really a Pi expert.

Yeah, it’s not a server in the classic sense of the term.
As Tobi said, it’s a smart home server, or better defined an openHAB server. In terms of software, not hardware.
Many users and techies are using RasPis for their smart home installations.
There’s better hardware for sure, but a RasPi is a good compromise for this purpose (openHAB).

A RasPi 4 won’t be released before 2020 :wink:

Of course the Pi is a kind of server here; it’s just not really a server to me. But by definition it is serving services, so it is a server, and it is quite powerful for its size, considering how much power servers of my definition had 20 years ago!
OK, but OH3 isn’t coming before 2020 either, is it? :slight_smile:

I believe openHABian supports the Pine64 out of the box. But if not, the manual install of openHABian will work on any Debian-based distro. I’ve used openHABian on Armbian running on a BananaPi without problems.

It depends on what else you have running on your RPi. From what has been described, the OH 1.x Compatibility Layer is almost a separate copy of OH 1.8 running inside your OH 2 Karaf container. Remove that, and perhaps it frees up enough resources to run OH 1.x as a separate application instead of as a bundle inside the one Karaf container.

To raise a more philosophical question: what is the responsibility of any software project to provide backwards compatibility? You have extreme cases like Microsoft Windows, where one can still run just about any software created for Windows in the last 20 years. Then you have phones like Android and iPhone: on a lark I tried to get my old Moto Droid (the one with the slide-out keyboard) running again, and nothing worked.

Losing backward compatibility does come at a cost in lost users for sure, and perhaps lost developers too. But maintaining backward compatibility comes at a cost too, and I’m hearing from the developers that they are no longer willing to pay that cost.

It’s not ideal, but I can’t think of a better compromise than what is proposed: running multiple instances of OH and federating them to retain support for the bindings that no one has been willing to migrate to the new architecture (for whatever reasons).
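For what it’s worth, one way federation can already be done today is the 1.x MQTT event bus binding, roughly along these lines. This is only a sketch; the broker name, URL, and topic prefixes below are placeholders, not a recommended layout.

```
# services/mqtt.cfg on both instances ("local" and the URL are placeholders)
local.url=tcp://192.168.1.10:1883

# services/mqtt-eventbus.cfg on the legacy OH 1.x instance:
# mirror its item state updates out and accept commands back in
broker=local
statePublishTopic=legacy/out/${item}/state
commandSubscribeTopic=legacy/in/${item}/command
```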

But it will have more RAM. :slight_smile: And I wouldn’t expect OH 3.0 to be released too much before 2020 either. The timing might work out nicely.

And to directly address CalDav, there has been work in David’s PaperUI-NG study to bring more CalDav capabilities into OH proper, which might be suitable for replacing this binding. The devil is in the details, of course.

This really sounds great and would resolve some problems for me! Would you give me a hint where I could find more details on that? Thanks :blush: