Three-Stage Environment with Docker on Synology

Dear all,

I want to set up a three-stage environment (dev, test, prod) for openHAB 3.2 with Docker on my Synology.

To migrate the configuration from one container to the next (e.g. dev → test), I am thinking of copying over all files located in the mounted folders (addons, config).
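Roughly, what I have in mind is something like the following sketch (all paths and container names below are just placeholders for illustration, not my actual setup):

```shell
#!/bin/sh
# Sketch of a dev -> test config copy. All paths are placeholders for
# wherever the containers mount their volumes on the Synology.

copy_stage() {  # usage: copy_stage SRC_DIR DST_DIR
  for d in addons conf; do
    # Replace the target folder wholesale so that files deleted in the
    # source stage do not linger in the target stage
    # (rsync -a --delete would achieve the same in one step).
    rm -rf "${2:?}/$d"
    cp -a "$1/$d" "$2/$d"
  done
}

# Intended use -- stop both containers first so nothing changes mid-copy:
#   docker stop openhab-dev openhab-test
#   copy_stage /volume1/docker/openhab-dev /volume1/docker/openhab-test
#   docker start openhab-test
```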

But can it be that simple? I assume not…

Does anybody have experience with such a setup and has ideas for a migration script?


Much depends on the environment you want this to work in, and what you expect to do with it. For example, if you have zwave devices they’ll only talk to one instance, rendering the others useless for many purposes.

Yes, I and others have been experimenting with this, but it is a concept borrowed from the professional IT outsourcing business that does not fit a home owner “operating” his home, for several reasons. One of them is what Rossko mentioned: many devices cannot be shared across instances.
Long story short, it isn’t worth going for.
A useful tool is the openHAB Remote binding to tie instances together, so you can develop and test in one instance. “Migration” then is as simple as copy and paste, or copying files.

PS: even more so when you run everything in a single physical box. That is harmful to overall availability and not recommended.
If availability is your goal, then this is an XY-problem type of question, and you should rather fundamentally change your hardware setup.

Would you like to involve e.g. Alexa as well? As far as I understand, this would mean you need different profiles on your smartphone/tablet to separate the dev/test/prod instances from each other.

I see that I should have explained better what I want to achieve with the setup.

First, it is not about improving availability; it is about integrity. I want dev, test, and prod to be fully separated (config- and data-wise) so that my production configuration is not impacted while I develop or play around.

I am fully aware that dev, test, and prod meet at the physical level, i.e. the devices. For example, I have Z-Wave, and of course I am aware that only one openHAB instance can talk to the controller at a time.

For that reason my intention is not to run dev, test, and prod in parallel. There would be only one instance up and running at the same time.

Main use case:

  • Implementing and testing config changes safely in a separate environment by shutting down the prod container and starting the dev container.
  • As soon as openHAB is required to work flawlessly again (e.g. development has to be interrupted for some reason), shutting down the dev container and bringing back the prod container.
  • Once configuration work is complete, moving everything to the test container and using that for some time to validate that the changes are OK and behave as expected.
  • Once the changes are validated, moving the whole configuration to the prod container and activating that container.
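The switching part of this workflow could be as simple as the sketch below (container names like “openhab-dev” are assumptions; adjust to your setup):

```shell
#!/bin/sh
# Sketch of switching the single active stage: stop every stage
# container, then start only the requested one. Container names
# ("openhab-dev" etc.) are assumptions.

switch_stage() {  # usage: switch_stage dev|test|prod
  for s in dev test prod; do
    # Stop all stages; ignore errors for containers already stopped
    docker stop "openhab-$s" >/dev/null 2>&1 || true
  done
  docker start "openhab-$1"
}

# e.g.:  switch_stage dev    # prod goes down, dev comes up
```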

The containers are already running with OH 3.2.0 on an RS1221+, even though they are not needed in parallel, since I have separated the networks.

The only question I still have:

How to migrate configuration data from one container to another?

I know. That is already considered in my system design and validated: I have had two openHAB instances (3.1 and 3.2) talking to the same controller – of course not at the same time, but in sequence.

Yes, I do use Alexa and also Siri.

I guess I do not need multiple profiles if only one of the three instances is running at a time.

Well, that’s either very simple…

… but in your intended setup, I presume it is going to be a pain, because there will be parts of the system you cannot simply copy but will have to adapt every time you want to sync forward:
re-linking items to channels and things to hardware, state changes in actuators every time you restart, etc. There is much more work, and there are more pitfalls, ahead than you seem to be anticipating. The devil is in the details here.
And yes, I have been there before, doing a very similar thing for a multi-million-dollar business telco switching system … the “migration script” ended up being several thousand lines of code, not to mention the time it took to make it work under all conditions, in particular with the ever-changing config/rules on the source system side.
Which is another of the reasons I mentioned why this three-stage concept is inappropriate for home automation. Take the advice or not, but I’d still claim it ain’t worth the effort.

On a side note, I don’t see the point of all of this when you shut down production every time you test development. It is simpler, and less impactful, to edit production right away; there are better means to ensure integrity (e.g. keeping the config in git).
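For illustration, a minimal sketch of that git-based approach, with branches standing in for the stages. It is self-contained here (it works in a scratch directory); in practice you would point CONF at the container’s mounted conf folder, whose path below is an assumption:

```shell
#!/bin/sh
# Sketch: version the conf folder with git and use branches as stages.
# CONF defaults to a scratch directory so the sketch is self-contained;
# in practice point it at the mounted conf folder, e.g.
# /volume1/docker/openhab/conf (an assumed path).
set -e
CONF="${CONF:-$(mktemp -d)}"
cd "$CONF"
git init -q -b main
git config user.email "you@example.com"   # identity needed for commits
git config user.name  "you"

echo "Switch Light_Kitchen" > demo.items  # stand-in for real config
git add -A && git commit -q -m "baseline prod config"

git switch -q -c dev                      # develop on a branch
echo "Switch Light_Garden" >> demo.items
git add -A && git commit -q -m "experiment with a new item"

git switch -q main                        # validated? roll prod forward
git merge -q dev
```

If an experiment goes wrong, `git switch main` (or `git restore`) brings production config back untouched, with no second container involved.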

@mstormi, I do not want to start a discussion on the philosophy of multi-stage environments; they are considered best practice for corporate application environments. I respect your view that this is inappropriate for a home automation user operating his home, but that was not my question. So, thanks for your view.

That being said, I am coming back to my question:

Does anybody have experience with migrating openHAB configuration from one container to another?

Yes, yes it really is that easy, unless you have some other requirements and aims not yet mentioned.

Frankly, it’s way less work to use git and branches for this most of the time, only resorting to separate containers to deal with upgrades (and often it’s not worth it even then). If it’s the same config information, running it in a separate container is kind of pointless. All it proves is “yes, the same config running on the same software and hardware runs the same way, who would have thought?”

Ultimately, most of us with a professional IT background have been down this path before, and we pretty much all decided it is simply not worth the effort: so much work for so little benefit. But some lessons have to be experienced to be learned.

Not really.

As long as it is the same config, you might be right. But as I said, I want to make config changes in the dev environment and move them over to the succeeding environments. I aim to do this for upgrades as well.

I just “migrated” openHAB 3.2 from macOS to my Synology-based container, which was very easy: copy over all files in /conf and /userdata.
I was afraid of doing that, since I had read frightening stuff about this move. It turned out that I only had to find a solution for the speedtest CLI, which was easy: I replaced it with a dedicated container.
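In shell terms, the move amounted to something like this sketch (paths are placeholders; the cache/tmp cleanup at the end follows the usual openHAB advice so the new instance rebuilds those folders for its own environment):

```shell
#!/bin/sh
# Sketch of the move as plain file copies. Paths are placeholders.
# Clearing userdata/cache and userdata/tmp after copying is the usual
# openHAB recommendation so the target instance rebuilds them itself.

migrate_oh() {  # usage: migrate_oh OLD_ROOT NEW_ROOT
  cp -a "$1/conf"     "$2/conf"
  cp -a "$1/userdata" "$2/userdata"
  rm -rf "$2/userdata/cache" "$2/userdata/tmp"
}

# e.g.:  migrate_oh /path/to/old/openhab /volume1/docker/openhab
```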

For me it was quite a simple business case: I had to replace an outdated Mac Mini running openHAB, plus an AirPort Express serving as Time Machine target. The Synology was the perfect match for that replacement: openHAB containers plus acting as Time Machine target.

Maybe I do not see the effort yet, but if migrating from one container to another is as easy as migrating from the Mac to a container, I am not afraid of it.
I am not sure the effort of getting familiar with git (I am not a developer) and keeping that maintained is any lower. I plan to have as few config changes as possible… :wink: If there is one, I copy it over to test/prod once it is stable.

Let’s see how it works out…

This topic was automatically closed 41 days after the last reply. New replies are no longer allowed.