Stabilize a production system by working on a test system

Tags: installation, test, testing

(Frank Römer) #1

Hello there,

Actually, openHAB is something of a hobby for me and I experiment with it a lot. But… as a result my openHAB installation is sometimes not available, or has default settings instead of the right ones, and so on; I think you know what I mean.

How do you solve that? … If you think that’s a problem…

Thinking about a test system to play with, I faced the problem that I don’t have the same data to work with as in the production system.
I came up with the following idea:

Generally, use two openHAB installations: one (production) has all bindings and items and publishes all item updates via MQTT. A second installation (dev/test) subscribes to that and has items of the same name bound to the corresponding MQTT topics.

Advantages:
- You get all data live on the test system, so you test under realistic conditions.
- You don’t push stuff to your production system that you later need to uninstall (which sometimes doesn’t seem so complete after all), so you keep your system clean.

Disadvantages:
- Even if you have a clever mechanism on production to push all item states to MQTT, you still need every item with an additional MQTT binding on the test/dev system. So that doubles your work.

My questions: What do you think about this idea? Worth trying, or stupid because… of something?
What do you think might be the best way to propagate the item states via MQTT?
I see two possibilities: rules or persistence. My favourite would be persistence, because it gives you an automatic mechanism, but I don’t have much experience there.
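
For illustration, the rules variant I have in mind would be one small mirror rule per item (just a sketch - it assumes the MQTT 1.x action add-on, and the broker alias and topic are placeholders):

rule "Mirror Livingroom_Temp to MQTT"
when
    Item Livingroom_Temp changed
then
    // publish(broker, topic, message) is provided by the MQTT action add-on
    publish("mybroker", "/openHAB/out/Livingroom_Temp/state", Livingroom_Temp.state.toString)
end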

By the way: this way you don’t need a test/dev system permanently. A VM somewhere reachable that only runs while you work on it might be enough.

What do you think?

Thanks and best regards

Frank


(Rich Koshak) #2

It is indeed a problem.

Personally, I solve it by time-sharing: only one instance, production or test, runs at any given moment.

I run OH in Docker and I maintain two sets of folders, a production set and a test set. When I want to do some massive changes I will stop the OH production container and start the OH test container. Then I’ll make my changes to the test config folders. If for some reason I don’t finish in one sitting or run into problems, then I simply stop my test container and restart the production container.

Once I’ve finished testing in my test configuration I’ll check in my changes to my git server and pull them to my production system and those changes now become part of production.
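
As a rough sketch of that rotation (the folder paths are placeholders; the container names match what I describe further down):

docker stop openhab               # pause production
docker start openhabtest          # bring up the test instance

# ... edit and test in the test config folders ...

cd /srv/openhab/test/conf         # placeholder path
git add -A && git commit -m "tested change" && git push

docker stop openhabtest
cd /srv/openhab/prod/conf && git pull    # fold the change back into production
docker start openhab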

One of the key drivers of this is that you can’t have more than one OH (or any program really) access the USB controllers (e.g. Zwave dongle) at the same time.

Though to be honest, I almost never do this and just make changes to production because most of the changes I need to make are tiny tweaks.

I don’t think it is a bad idea at all. Though for me it seems like a whole lot of extra work and I probably wouldn’t do it personally. It does solve the USB controller problem though.

Use the MQTT Eventbus configuration on your Production system. This will let your Test environment subscribe to all events on all Items in your production. You don’t have to replicate all your Items in the test environment, though I would because there can be interactions, particularly with Rules. In your Test environment, you replicate those Items you need with MQTT binding to subscribe for updates from the event bus. If you want to isolate what your test environment is doing from production then you can simply not publish the updates back on the event bus.
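
Roughly, that looks like the following (broker alias, address, and topics are just examples; see the MQTT binding and event bus docs for the full set of options).

services/mqtt.cfg on both systems:

mosquitto.url=tcp://192.168.1.10:1883

services/mqtt-eventbus.cfg on Production only:

broker=mosquitto
statePublishTopic=/openHAB/out/${item}/state
commandPublishTopic=/openHAB/out/${item}/command

And a mirrored Item on the Test side, bound with the MQTT 1.x binding:

Number Livingroom_Temp "Temperature [%.1f °C]" { mqtt="<[mosquitto:/openHAB/out/Livingroom_Temp/state:state:default]" }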


(Frank Römer) #3

Thanks a lot, that gives me some ideas to think about.

Docker seems a cool way to get that stuff running. Do you manage the packaging manually, or do you work with a Helm chart?

But that means that during development the production system is unmanaged, and data and rule decisions from that period will be lost?

Thanks and best regards

Frank


(Thomas Binder) #4

My basic problem with a test system is always: I don’t seem to be able to get a spare house including all installations handy… :wink:

So - the best thing would be to clone my house and use it as a dev-house, a test-house, an integration-house and a production-house! Unfortunately this won’t work…

Of course, if I use “virtual” items [1] that’s not a problem - but I always run into trouble if I want to test changes to, let’s say, my heating configuration. I only have one heating system - and only one RS232 connection to it… So?
The same goes for my KNX - I’d like to try out the KNX2 binding - but I only have one KNX installation and I’m sure the two instances would interfere with each other…

The idea with the MQTT event bus is cool and I use it for my test instance too. But I move the whole binding I’m about to test over to the test instance. So in my heating case I’d have to deactivate the heating integration in my prod OH2 and activate it in my test OH2 - and after changing everything I’d have to merge it back… a pain in the ass, but I don’t see another way.

[1] not proxy items, but real items that can be accessed from both installations (via API calls or whatever)


(Frank Römer) #5

mmmh, okay that will work and it gives you real items. That’s a plus.
The disadvantage is… right now, for example, I’m working on my window shutter items, connected to temperature: if the temperature is too low, a window is open, and it stays open too long, send a notice to Sonos.

So working on that is not much different from copying the whole setup.

And aren’t we then back at Rich’s idea, to shut down prod and work on a clone of prod as test?

Interesting point! Until now I have no API calls to the items in openHAB directly; I just “control” them in rules.
If, for example, I call an API on the Homematic server, that would work against Homematic, not the item in openHAB?
Maybe I don’t get the point.

Thanks and best regards

Frank


(Rich Koshak) #6

To deploy and manage the containers I use Ansible. My setup is not too complex and I like having just one system I need to use to build up and update my VMs. Not everything on the VM is docker (e.g. setting up file shares and users).

I’d like to learn Kubernetes at some point but so far it doesn’t provide any significant benefits for me to make learning it a higher priority. Especially since I can deploy and start an image with one task in an Ansible playbook.
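
To give a rough idea of the shape of such a playbook (the host group and the folder task are placeholders; the actual docker_container task is quoted further down in this thread):

- hosts: homeautomation          # placeholder inventory group
  become: true
  tasks:
    - name: Ensure the openHAB config folders exist
      file:
        path: "{{ openhab_data }}/conf"
        state: directory
    # the docker_container task for openHAB (see below) and the non-Docker
    # housekeeping like users and file shares go here as further tasks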

I guess it all depends on what your HA system is doing and why you need to run a test environment. For me, the test environment is so I can quickly and easily roll back my changes if I break something or can’t finish in one sitting. As such the test system is an exact duplicate of my production system so all Rules, communications, and persistence that the production system would do are still happening. So nothing is really missed.

But I will also mention the following:

  1. This is a home automation (HA) system, not an industrial control system or the control software of a space probe. What is the real impact if a few minutes or hours of data are lost or a rule fails to run? Is the impact large enough to go through the effort of building up a completely parallel system, knowing that this is physically impossible in some cases (e.g. Zwave, see Thomas’s reply)? For me the answer is “no” because…

  2. I purposefully build my HA to fail gracefully. If my OH is down it is no big deal, it just reverts back to its non-automated mode. So I’ll have to press the button on the remote to open the garage door. I’ll have to flip the wall switch to turn on the light. The Nest will continue to run the HVAC by its own algorithms, etc. Nothing meaningful gets lost and nothing dangerous happens.

I think the point is that rather than using MQTT Event Bus for everything or proxy Items, he configures Items to communicate with his one homematic server from both production and test at the same time.

I would also like to emphasize part of Thomas’s point. You only have one physical device. For example, you don’t have a separate rollershutter controller for production and another one for test. So either your test environment is going to be working with virtual simulated devices or it will be interacting with the production devices. To interact with the production devices would kind of defeat the purpose of having a separate test environment. But to interact with a simulated device means that you are not working with the actual binding and that greatly lessens the benefit you will get from having the test environment in the first place.

For example, let’s say you want to test out a new zwave device and some rules to go with it. You can’t do that in your test environment without taking down zwave in your production environment because only one instance of OH can communicate with the Zwave controller at a time and a device can only be paired to a single zwave controller at a time. So you either have to take down zwave on your production or you have to test the device in production.

OK, so we have set up the device in production and will test out our rules in the test environment. Now you either have to provide a simulation of the zwave device in your test environment or you have to allow your test environment to reach across to production and interact with the device itself. If you do that then your test environment isn’t really separate from production anymore.

So what benefit is the test environment really providing now? It isn’t zero benefit because you can test your rules separately with simulated devices. But is that enough to warrant the significant amount of work required to set up your test environment in the first place?

Everyone will have their own answer to this question, but for me a separate test environment would have too many compromises and holes between it and production to make the effort remotely worthwhile.


(Frank Römer) #7

Yes, you are right :blush:


(Thomas Binder) #8

Interesting points…
My API example is about importing information into openHAB from external appliances, e.g. the weather from Wunderground or my Nuki key turner. Both provide APIs I could use in both environments. But if I send commands to the Nuki from both installations I’m sure to get some side effects - and we’re talking about three commands to one device. My KNX installation has 70+ actuators and 50+ sensors sending telegrams; I’m not sure I wouldn’t cause interference if I connected via knx1 and knx2 from my two instances…

My automation is also “automation only”! Like Rich, I don’t rely on OH2 running - except for some low-level things like checking the wind and temperature and sending a block to my blinds so they go up. So if it’s hot and sunny outside and my production isn’t running - they go up… No big deal… But still…


(Frank Römer) #9

After thinking a bit more about it, I believe @rlkoshak and @binderth have been very polite with me.

My idea is a stupid one. It only works with an affordable amount of work for “read-only” devices, i.e. using MQTT one-way. I’m mainly using those and generating messages from them.
I hadn’t thought about the other direction, for actuators. That is way too much work.

But the idea of a Docker image doesn’t get out of my head.

@rlkoshak, if I got you right, you have these two Docker containers on the same system, both using the underlying services like persistence etc.? Are you using Docker volumes to keep two separate OH configurations on the same machine without them stepping on each other?

Forgive me, I think very visually.

That looks like an intense amount of configuration work, but it is by far the better answer to my question :slight_smile:

Have I got you right?


(Rich Koshak) #10

Mostly right.

I just noticed an error in the drawing. The cylinder that InfluxDB points to should be labeled “InfluxDB Database Files”.

I have one Mosquitto instance running in its own container. I have one InfluxDB running in its own container. I have one Grafana instance running in its own container. I have a git server running on another server in its own container.

So all of that remains the same.

I keep my conf folder and parts of userdata under configuration control on the git server.

There is a separate folder on my host for each container, prod and test. I don’t mess with Samba, but I see no reason why you couldn’t use it if you wanted to. To initially populate test I check out everything saved in git. If I wanted to, now is when I would modify influxdb.cfg to use a different database name and user. Prod would already have the latest and greatest checked out.
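
A minimal sketch of that influxdb.cfg tweak (host, credentials, and database name are placeholders):

url=http://127.0.0.1:8086
user=openhab_test
password=changeme
db=openhab_test_db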

I use the following Ansible task to download the openHAB Docker image and create a container named “openhab”.

- name: Start openHAB
  docker_container:
    detach: True
    devices:
      - "/dev/ttyUSB0:/dev/ttyUSB0:rwm"
    hostname: argus.koshak.net
    image: openhab/openhab:2.2.0-amd64-debian
    log_driver: syslog
    name: openhab
    network_mode: host
    pull: True
    recreate: True
    restart: True
    restart_policy: always
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - "{{ openhab_data }}/conf:/openhab/conf"
      - "{{ openhab_data }}/userdata:/openhab/userdata"
      - "{{ openhab_data }}/addons:/openhab/addons"
      - "{{ openhab_data }}/.java:/openhab/.java"

{{openhab_data}} is a variable pointing to the root folder of my production openhab config files.
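
For example, the two roots could be defined as Ansible variables somewhere like group_vars (the paths are placeholders):

openhab_data: /srv/openhab/prod
openhab_test_data: /srv/openhab/test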

Then I docker stop openhab and run the same task again with any necessary modifications (e.g. change to use 2.3.0-amd64-debian-snapshot), use {{openhab_test_data}} which points to the root of the test folder, and use the name openhabtest. This is all done in a single Ansible playbook.

That gets me set up initially. Then when I want to switch between prod and test to make some big changes I just run docker stop openhab and docker start openhabtest. I make my changes in test, test them, and when I’m done check in the changes. Stop openhabtest, git pull in the prod folder to get the changes I made in test, then start openhab again. I might have to recreate the openhab container if the change is an upgrade of OH. I use addons.cfg to manage my bindings so those get checked in and the new addons will be installed in production when it reads that config file.
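
For reference, an addons.cfg along those lines looks roughly like this (the list of add-ons is just an example):

package = minimal
binding = mqtt1,zwave,astro
persistence = influxdb,mapdb
ui = basic,paper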

The way I run there is literally no difference between test and prod except:

  • they may be running a different version of the OH container
  • they mount different volumes
  • until I check in my changes and pull them any changes I make in the test environment do not appear in prod.

Docker makes this pretty easy and except for a couple of extra tasks in my Ansible playbook I’ve not had to do any additional config beyond what I already would have done if all I had was Prod. The magic is in mounting different volumes to the two containers and coming up with a way to synchronize between those two volumes.

Oh, and never run both containers at the same time.

At the end of the day, I’m incredibly lazy and have not nearly as much time to work on my home automation as I would like. Anything that would take an intense amount of configuration work just to set up a test environment would never get done. I’d do without a test environment if it were much more work than this. And even still, as easy as this was to set up and use, I rarely use it. I mostly just make changes on production. :smiling_imp:


(Frank Römer) #11

Brilliant @rlkoshak!!

That’s a great piece of work.
And I guess it also solves my still-unsatisfying backup and restore problem along the way…

Phew, it will take some time until I get that running. :astonished:

Thank you so much!!

Frank