Development System architecture

Hello, I have not found any information about development, test, and production OH3 system architecture and processes. What are the experiences and best practices in this forum, with and without Docker?

If I have two or three identical RPi4s, which files (OH, Docker, etc.) would I need to transfer from one machine to another?

br HoBaKa

If I understand correctly, what you are asking about is running three instances of openHAB: one for production, one for test, and one for development.

Personally, I think the test instance is unnecessary. Given that it’s just you doing the development, the risk of failure mitigated by a dedicated test environment is not worth the extra cost and effort required to build and maintain one. Instead, do your testing in development and mitigate any problems by taking backups before deploying to production, so you can roll back if required.

For day-to-day development of automations and such I don’t see the cost and effort of maintaining separate production and development systems being worth the risk mitigation it buys.

A big problem you will encounter is that many technologies require access to physical devices, and only one instance of OH at a time can access the physical hardware (e.g. a Zigbee coordinator). So you will either have to wait to test that stuff until you deploy to production, or you will have to time-share the hardware (run only one instance at a time). Either way, that all but destroys the whole purpose of maintaining a separate production system, because you’ll either be doing tests live on production or taking production down to do testing and development.

Therefore if you have a dependency on physical hardware I just don’t see how maintaining separate production and development environments mitigates enough risk to justify the considerable cost and effort to set up and maintain. Instead, use a well defined and tested backup and restore and be diligent about backing up and restoring. In addition, use a source control system like git that allows the creation of and easily switching between branches.
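As a rough sketch of what that can look like (the paths and the branch name below are just examples, and with Docker you would archive the mounted volumes instead of using openhab-cli):

```bash
# Take a restorable snapshot before deploying anything to production
sudo openhab-cli backup /srv/backups/openhab-$(date +%F).zip

# Keep the text-based configuration in git and develop on branches
cd /etc/openhab
git checkout -b new-lighting-rules    # hypothetical feature branch
# ...edit rules/items, test...
git commit -am "Rework lighting rules"
git checkout main                     # switch back / roll back instantly if needed
```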

If you do not have a dependency on physical hardware, then it becomes much easier. You can indeed usually run the two instances at the same time, as long as the APIs being interacted with allow more than one client at a time. Even then, you will need to build in some way to keep the development automations from interacting with the “real” physical devices; if they do, that once again defeats the purpose of maintaining a separate production system. But if you don’t actually interact with the real devices, how do you know that what you are doing actually works? And we are right back to time-sharing the two instances, which defeats the whole point of having a production system.

So again, I don’t see that the cost and effort are worthwhile, and I again recommend a good backup and restore procedure plus source control with branching support instead. To have a truly independent development environment, you would need a separate set of all your devices that exists only in that environment, so that what you do in development doesn’t impact production; that can be a considerable cost.

However, I do see a lot of value in running and maintaining separate instances of openHAB during a migration between versions, and I have actually done this myself. Here is how I did it most recently, when migrating from OH 2.5 to 3.0.

Background:

  • I run everything on VMs on a server class machine
  • All services are running in Docker containers
  • All machines and Docker containers are installed and configured using Ansible
  • My approach to home automation is just that, automation, not control; this means that disruptions in functionality on the UIs are not a problem for me.
  • I use git to source control my configs.

Process:

  1. I kept my old OH 2.5 container running in place and unchanged to start.
  2. I started up a new OH 3 container on a different VM to start transitioning to the new instance.
  3. Little by little I moved rules and other configuration from the OH 2.5 to OH 3 instance. In this case I also moved from text based configs to managed configs so ultimately I needed to recreate most things. If one sticks with text based configs, this would be copying the files over one by one and testing things out. I moved the hardware dependent stuff over last.
  4. Once everything was moved from the OH 2.5 to the OH 3 container, I shut down the 2.5 instance, used Ansible to spin up the OH 3 instance on the production VM, and took down the development instance on the other machine.
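I drive all of this with Ansible, but step 2 done by hand with plain Docker might look roughly like this (the volume paths and image tag are just examples, not my actual setup):

```bash
# Hypothetical OH 3 development instance on the second VM/Pi
docker run -d --name openhab3-dev \
  --net=host \
  -v /opt/openhab3/conf:/openhab/conf \
  -v /opt/openhab3/userdata:/openhab/userdata \
  -v /opt/openhab3/addons:/openhab/addons \
  openhab/openhab:3.0.0
```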

Alternatives:

  • If you don’t have separate VMs, Docker makes it relatively easy to do this all on one machine. However, you’ll have to expose the development instance’s ports individually and map them to alternative ports so they don’t interfere with the production instance (see the sketch after this list). Technologies that rely on network-wide broadcasts, DHCP listening, or the like might break when you can’t use --net=host, so you’ll have to wait until you get to production to test that stuff.

  • You can also do it all on one machine by time-sharing. While working on the development instance you’ll have to shut down the production instance; when done for the day, shut down the development instance and restart production. When development is finished, delete the old production folders and your development folders become production.
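Roughly, those two alternatives could look like this on a single machine (container names, ports and paths are only examples):

```bash
# Alternative 1: run both at once, with the dev instance on shifted ports
docker run -d --name openhab3-dev \
  -p 8081:8080 -p 8444:8443 \
  -v /opt/openhab3-dev/conf:/openhab/conf \
  -v /opt/openhab3-dev/userdata:/openhab/userdata \
  -v /opt/openhab3-dev/addons:/openhab/addons \
  openhab/openhab:3.0.0

# Alternative 2: time-share the hardware instead (only one instance at a time)
docker stop openhab-prod && docker start openhab3-dev   # start a dev session
docker stop openhab3-dev && docker start openhab-prod   # back to production
```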

If after all this you determine that it is worthwhile to maintain separate instances, then to move configs from one instance to the next you just copy all the volumes you’ve mounted into the container from one machine to the other. If not using containers, you can use openhab-cli backup and restore to move the configs from one machine to the next.
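As a sketch (hostnames and paths are placeholders), that transfer boils down to something like:

```bash
# Containers: stop the instance, then copy the mounted volumes across
docker stop openhab3-dev
rsync -a /opt/openhab3/ pi@production-rpi:/opt/openhab3/

# No containers: backup on one machine, restore on the other
sudo openhab-cli backup /tmp/oh3-config.zip
scp /tmp/oh3-config.zip pi@production-rpi:/tmp/
# then, on the production machine (with openHAB stopped):
sudo openhab-cli restore /tmp/oh3-config.zip
```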

There is no simple (and correct) answer to open questions with little input like this.
I think you should explain better why you’re asking this, and provide way more context.

Do you want to provide a commercial service? An SLA?
What’s the frequency and extent of changes you’re aiming at?
Are you on site?
What’s your intention in asking this?
What do you want to test? Base software (OH and bindings)? Server-side hardware? Rules? Smart devices?
One-time? Permanent?
What sort of changes do you want to apply and subsequently distribute forward?
What do you want to transfer? What’s your toolset for this?

Generally speaking, a nice ‘tool’ is the remote openHAB binding. It allows you to have, say, a Zigbee stick and its binding on one machine and the Things/Items on another (or even on all machines, whatever you call them).
This devel/test/prod thinking is … well, somewhat outdated and not really clever to apply to smart home systems.

Rich already explained very well why it is typically not possible to duplicate the hardware. Your system might seem simple today without much external hardware, but trust me, it will get complicated, and soon you will have Z-Wave, Zigbee, MQTT and so on.

Now there is an option to share the same hardware while having different environments for testing and development, via the REST API and HABApp. You do have to put your logic in Python, in HABApp.
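For illustration (the hostname and Item names are placeholders), the development machine only needs HTTP access to the production instance’s REST API, which is what HABApp uses under the hood:

```bash
# Read an Item's state from the production openHAB instance
curl -s http://oh-prod.local:8080/rest/items/LivingRoom_Temperature/state

# Send a command to an Item (assumes the default implicit user role;
# otherwise add an API token)
curl -s -X POST -H "Content-Type: text/plain" -d "ON" \
  http://oh-prod.local:8080/rest/items/LivingRoom_Light
```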

My current production machine is a Pi 3 running openHAB and HABApp (and various hardware). My development machine is a regular desktop with PyCharm and HABApp. There is a level of abstraction here as well: all my logic can be unit tested from within PyCharm, so for the majority of the time I don’t even have to run HABApp on my desktop. When I am ready to deploy the code to production, I quickly run HABApp to make sure the code loads and parses the openHAB Items correctly (it connects to the openHAB instance on the Pi via the REST API). If all is good, I shut off that HABApp instance on the desktop, transfer the code to the Pi (via GitHub push/pull), and restart the HABApp service there.
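The deploy step itself is then just a couple of commands; the repository path and service name here are assumptions about a typical setup, not necessarily mine:

```bash
# On the desktop, after the unit tests pass
git push origin main

# On the Pi (path and service name are examples)
cd /opt/habapp/rules && git pull
sudo systemctl restart habapp.service   # assuming HABApp runs as a systemd service
```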

So this is doable, but it requires you to switch to a different strategy: openHAB now just contains the Items and the bindings, and your code lives elsewhere. It is definitely not what the majority of users do.

Thanks for all the positive feedback.

As a first step, I have the following small and lean IT infrastructure:

one Raspi 4, plus an identical Raspi nearby as a spare
two USB SSDs (Intenso) on a USB switch
an Anker power adapter for the Raspi and the USB switch
one UPS from APC

manual daily backup from one SSD to the other via the Raspi SD Card Copier; later to be automated with rpi-clone
OH3-specific files copied daily to a separate NAS
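A minimal sketch of that daily copy to the NAS, assuming the OH3 Docker volumes live under /opt/openhab and the NAS is mounted at /mnt/nas (both are assumptions):

```bash
# /etc/cron.d/oh3-backup: copy the OH3 volumes to the NAS every night at 03:00
0 3 * * * root rsync -a --delete /opt/openhab/ /mnt/nas/oh3-backup/
```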

Software architecture:
OH 3.1 in Docker
Portainer in Docker
KNX Tools in Docker
httpd in Docker

around 1000 items
————————-

Advantage:
If I need to go back, I can simply move the backup SSD to the production Raspi.