Example: Configuration Management of OH Configs using Git


Continuing the discussion from "Exec binding and echo, netcat, a pipe, carriage return and quotation marks":

I decided to respond in a new thread because I think others might be interested in the answer.

Here is a full description of my source control for my OH configurations.

#Git Server
I use a Docker container running git and ssh, with a folder I back up mounted as a volume for the repositories. I do this for several reasons. First, I wanted to learn how to use Docker and how it works, and this was a really low-impact, low-complexity way to get my foot in the door. I like that I can reconstitute the service instantly (my Dockerfile is under version control too), and I like that I can add some additional isolation and additional layers of security. For example, I set up ssh for the git user in that container to only allow login for users with the right cert (a different cert than my host uses), and to only allow certain hosts on my LAN access to it, which is pretty easy to do through Docker.
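A minimal sketch of such a container might look like the Dockerfile below. To be clear, this is my own illustration, not the setup from the post: the base image, user name, and paths are all invented.

```dockerfile
# Sketch: tiny git-over-ssh server (names and paths are illustrative)
FROM debian:stable-slim
RUN apt-get update \
 && apt-get install -y --no-install-recommends git openssh-server \
 && adduser --disabled-password --gecos "" git \
 && mkdir -p /var/run/sshd /home/git/.ssh
# key-only login, and only for the git user
RUN printf 'PasswordAuthentication no\nAllowUsers git\n' >> /etc/ssh/sshd_config
COPY authorized_keys /home/git/.ssh/authorized_keys
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
```

You would then run it with the repository folder mounted and the ssh port bound only to a LAN-facing address, e.g. `docker run -d -p 192.168.1.10:2222:22 -v /srv/git-repos:/home/git/repos mygit` (address and paths again invented).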

I like the security because I do have sensitive files checked in to my repos (passwords, ssh certs, locations, etc.). So indeed no, I do not use GitHub for any of this. For the one project that I have shared on GitHub I actually created two repos: one for GitHub and one for my personal deployment. For the GitHub repo I use `.gitignore` to exclude some files and simply don’t put others there in the first place.
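For the public twin of such a repo, the `.gitignore` might look something like this (the file names here are invented for illustration):

```
# keep secrets and site-specific files out of the public copy
*.pem
*.key
secrets.cfg
locations.items
```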

This is kind of a hassle, but for this particular project (sensorReporter) I have three (soon more) machines I need to deploy it to, and it is really nice to be able to clone the repo, run a script, and be up and running with passwords and certs and everything. And using git makes it really easy to keep them all updated.

But getting back to the security, I must admit I have not really looked that deeply into whether I’m buying real security or just the illusion of security. But even if there is no added value security-wise, the ability to run `docker build` and have my service completely restored a minute or two later is a huge win.

#Backup
The backup is multifaceted. First I usually have one or two clones of the repos scattered about which provides one level of backup. The other is I have Bacula keeping backups of the drive where the git repos reside. If I were really paranoid I’d probably sign up for some offsite service but haven’t gotten around to it.

#OH Configs
I have one repo to capture all of my OH config changes. These changes cover `/etc/openhab` and `/usr/share/openhab/webapps`, so I keep them both in one repo which I clone into my home directory and symlink to their proper locations.
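The clone-and-symlink step could be sketched like this. This is a dry run under a local `./demo` prefix with made-up names; in real use you would `git clone` into `$HOME` and point the links at `/etc/openhab` and `/usr/share/openhab/webapps` (with sudo).

```shell
# Dry run of the clone-and-symlink layout under ./demo (all names invented)
mkdir -p demo/oh-conf/etc-openhab demo/oh-conf/webapps   # stand-in for the cloned repo
mkdir -p demo/etc demo/usr/share/openhab                 # stand-ins for the system dirs
ln -sfn "$PWD/demo/oh-conf/etc-openhab" demo/etc/openhab
ln -sfn "$PWD/demo/oh-conf/webapps"     demo/usr/share/openhab/webapps
```

`ln -sfn` replaces an existing link atomically, so re-running after a fresh clone is safe.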


Hello @rlkoshak !
I was thinking about writing a similar topic on how to publish your openhab config. Thanks for starting that thread :slight_smile:
I am using a similar setup: Docker (with openHAB 2), OS setup done with Puppet, and my config deployed through git. The important difference is that I am using Blackbox to encrypt the important files inside the git repo, with which I can publish the rest on GitHub. I am thinking about switching the encryption to git-crypt as it seems a bit easier to use, but Blackbox is good enough for now.

Cheers
Hannes

Excellent ideas. I was unaware of these transparent encryption products and I’m certain others here will be glad to know of their existence.

I’m personally ambivalent about the encryption options, mainly because I tend to use git in a relatively lazy and undisciplined way on my own projects. I love the idea but am afraid it would open me up to added risk. I’m much too prone to make a bunch of little changes over time, then do a big `git add . && git commit -m "changes" && git push` with no recollection of what actually changed, and that raises the risk too high that I may forget to add some new file needing protection to the encryption or to `.gitignore`. So either I would need to just encrypt everything, at which point there is no point in using a public repo (except maybe for the offsite backup), or set up my own repo, which is what I currently do.

For me, the discipline required to transfer my changes from my private repo to the public repo helps me both document my changes better and avoid leaking sensitive info.

What are your experiences with running OH in Docker? Have you had any hidden gotchas or performance issues? I’ve opted to move from my monolithic server setup to a containerized deployment (hence my experiments with Docker) instead of a VM, as my machine, while way more powerful than a Raspberry Pi, is still just a laptop.

The gitignore is taken care of by Blackbox. All I do is `blackbox_register_new_file myfile.cfg` and it is encrypted. On the openHAB server I use `blackbox_postdeploy` (which you can configure as a post-hook, IIRC, or as part of a Makefile).
On openHAB 1 you can use the ConfigAdmin binding, which helped me with the pushover script.
You just define all your tokens in the cfg files and all your rules are clean of secrets :slight_smile:

items:

```
String pushover_token_barbara "Pushover Barbara [%s]" (Maintenance,PersistentCurrent) {configadmin="pushover:barbaraToken"}
String pushover_token_both    "Pushover both [%s]"    (Maintenance,PersistentCurrent) {configadmin="pushover:bothToken"}
String pushover_token_default "Pushover default [%s]" (Maintenance,PersistentCurrent) {configadmin="pushover:defaultToken"}
```

pushover.cfg:

```
barbaraToken=xxx
bothToken=xxx
```

Unfortunately this does not seem to work with openhab2 yet :frowning:

I have it running on an Intel NUC with Debian and I am quite happy with it. We are nearly done with the first Dockerfile for openHAB 2 and I have daily builds triggered on my Docker Hub repo until it is released.
Because of the way Docker works, you should see next to no virtualization overhead, which you would have with classic virtualization (roughly 5% there).
The only problem that arose was with external devices directly attached to the NUC (Z-Stick, RFXCOM, etc.). In Docker you can pass the `--device` option, but you need to reference the device itself, not a symlink. The Z-Stick requires you to take it out to include new devices, and upon reinserting it you need to restart openHAB to pass in the new device name. But I think this is OK until the problem is fixed in Docker release 1.11 :slight_smile:
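The symlink limitation can be worked around by resolving the udev alias to the real node before handing it to `--device`. A sketch, using local stand-in files because the device names are invented (the real alias would live under `/dev/serial/by-id/`), and only printing the command rather than running it:

```shell
# Docker's --device must reference the real node, not a udev symlink.
touch ttyACM0                       # pretend this is the Z-Stick's real device node
ln -sf ttyACM0 zstick-alias         # pretend this is the stable udev alias
DEV="$(readlink -f zstick-alias)"   # resolve the alias to the real node
echo "docker run --device=$DEV:$DEV openhab"   # command is printed here, not executed
```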

Sorry for the wall of text. I hope it helps.
Cheers
Hannes

Wow. @rlkoshak, thanks for starting the new thread. This is all a bit beyond me, but it has given me plenty to research (I have opened 6 new tabs since beginning this thread :slight_smile:).

I ended up just cloning the `/etc/openhab/` folder into my GitHub account with `/configurations/openhab.cfg` as the only line in `.gitignore`. Now I’m trying to figure out the best way to work with this. I cloned that new repository to a folder on my desktop and I edit it there with Atom, then (I guess) push to GitHub, then pull from GitHub on my Raspberry Pi (I haven’t actually gotten this far yet; this is my first experience with git, but ’bout time). Is there a cleaner workflow than this?
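That workflow is basically the standard one. It can be simulated end to end with a local bare repository standing in for GitHub; all paths and names below are made up for the demonstration:

```shell
# The desktop -> GitHub -> Pi round trip, with a local bare repo as "GitHub"
git init -q --bare hub.git                        # plays the role of GitHub
git clone -q "$PWD/hub.git" desktop 2>/dev/null   # the working copy you edit with Atom
cd desktop
echo 'Switch Light_GF_Living "Living Room"' > default.items
git add default.items
git -c user.name=demo -c user.email=demo@example.com commit -qm "add living room light"
git push -q origin HEAD                           # "push to GitHub"
cd ..
git clone -q "$PWD/hub.git" pi 2>/dev/null        # on the Pi; afterwards just `git pull`
```

After the first clone on the Pi, the loop is just edit, commit, push on the desktop and `git pull` on the Pi.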

Of those 6 open tabs I’ll be checking out git-encrypt first, I think…

I’m using another approach, which lets me have all my OpenHAB files on GitHub.
The approach is to use Ansible to automate the provisioning to the server.

In addition I’m using Ansible Vault to store sensitive data encrypted.

Another advantage this gives me is that I store IP addresses and URLs in the vault, so I do not have to repeat them in, for example, the items files.

I run one single command to provision/set up the whole server to be ready for OpenHAB.

If I need to install OpenHAB on a new server I’ll use the same command, and all needed software will be installed.

You can have a look at it here: https://github.com/steintore/ansible_openhab_linux

If you see things like “{{samsungac_ip}}” this means that the provisioning will replace this with the value defined in the vault.
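In Ansible terms, that substitution works roughly like this; the variable value and file names below are illustrative, not taken from the repo:

```yaml
# group_vars/all/vault.yml -- kept encrypted with `ansible-vault encrypt`
samsungac_ip: 192.168.1.50

# A template such as templates/samsung.items.j2 then references it:
#   String AC_Power "AC" { samsungac="{{ samsungac_ip }}" }
# and running `ansible-playbook site.yml --ask-vault-pass` decrypts the
# vault and renders the template with the real value at deploy time.
```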


+1 for Ansible. I use it to configure all my remote sensor/actuator Raspberry Pis. It makes setup fast and repeatable. It also makes routine auditing and maintenance easy (e.g. I have playbooks to run a Tripwire report on all the machines).

Why not use https://gitlab.com or https://bitbucket.org for a free private repo?

It may be “private,” but if it is on someone else’s servers, you don’t own it and you don’t have full control over it. While you may trust the current company, that company may go out of business, change hands, or change business plans and start to harvest and sell your “private” repo.

That is the admittedly extreme, but still accurate, paranoid position. For me personally, it took less work to set up my own Gogs Docker image than it would have taken to figure out all the encryption options to protect my passwords and other sensitive information. So I’m not paranoid, I’m just lazy.

Short question: when you improve your scripts, how does a change get into the templates folder? I assume you develop on your production machine.

Hi, I actually make the changes on my work/home computer. The code is in my GitHub repo, so I have it cloned on my laptop. Then I make the changes and either deploy directly from my laptop, or I commit to GitHub and deploy from my OpenHAB server.

So the short answer is that the changes are made directly in the templates folder.

Interesting. I have the challenge of maintaining an OH installation in my father’s house 400 km away. I am looking for a way to ease deployment. Is there a way to open ports in a protected way to provision the code? ATM I do not have a VPN between my home and my father’s house.

To provision using Ansible, all you need is SSH access to the computer you want to deploy to.
So you can forward port 22 (SSH) from the router to the OpenHAB server.
To make it more secure, do not use a password to log in, but an SSH key.

You can then choose where to deploy to in a hosts-file (https://github.com/steintore/my-openhab2-config/blob/master/hosts)

As you can see, I’ve commented out a line "bossext:999".
This is configured in my local ~/.ssh/config file, so that on my local machine I can also type “ssh bossext” and connect to my server at home from anywhere.
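The matching `~/.ssh/config` entry would look something like this; only the host alias and port come from the example above, everything else is assumed:

```
Host bossext
    HostName home.example.org    # public address of the home router (assumed)
    Port 999                     # the forwarded external port
    User openhab                 # assumed remote login user
    IdentityFile ~/.ssh/id_openhab
```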

Thanks. I will give it a try.