How do YOU access config files, and what backup strategy do you use?

Hi all,
recently I had to re-install openHAB due to an upgrade issue from 2.4 to 2.5. Everything I had set up in text files was fairly easy to restore. Items configured via PaperUI I had to set up from scratch. And for some modifications I was lucky that I had kind of documented them here in the forum, because I hadn’t backed up that file. :crazy_face:

While doing this I wondered what the best way is to access config and other files, and what the best backup strategy is. My current setup:

  • openHAB 2 is running on a headless NAS with OpenMediaVault as the OS
  • to be able to access config files via Visual Studio Code, I share the folders /etc/openhab2 and /var/lib/openhab2 via SMB. Instead of creating a share for each, I’ve gathered them into one folder and linked them back to their original locations via soft links (ln -s …) - see the sketch below this list
  • I run an rsync backup of those folders every day, excluding only tmp and cache
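
To make the soft-link trick concrete, here is a minimal sketch of roughly what that looks like - the share folder and backup target are placeholder paths, not my actual ones:

# gather both config trees under one folder and share only that folder via SMB
mkdir -p /srv/openhab-share
ln -s /etc/openhab2 /srv/openhab-share/conf
ln -s /var/lib/openhab2 /srv/openhab-share/userdata

# daily backup of the originals, skipping the regenerable cache and tmp folders
rsync -a --delete --exclude 'cache/' --exclude 'tmp/' /etc/openhab2/ /srv/backup/openhab/conf/
rsync -a --delete --exclude 'cache/' --exclude 'tmp/' /var/lib/openhab2/ /srv/backup/openhab/userdata/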

So my question to you: what do you do? Do you back up the same folders, fewer, or more? How do you work - mostly via files or via the graphical interface?

Cheers
Jan

Items are in .items files. Rules use the NGRE and are written in Python. Sitemaps, persistence, and transformations are all in their respective text configs. Things I manage entirely within PaperUI or the REST API.

For backup I use a personal git server. I check in and push $OH_CONF and $OH_USERDATA whenever I make a significant change. It’s nice to have the history; I can go back to my OH 1.7 configs if I need to. Like you, I also skip checking in the cache and tmp folders, via a .gitignore.
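
In practice that boils down to something like this - a rough sketch in which the checkout location, the repo layout with conf/ and userdata/ side by side, and the commit message are just illustrative:

cd /opt/openhab                      # working copy containing conf/ and userdata/
echo 'userdata/cache/' >> .gitignore # keep the regenerable folders out of git
echo 'userdata/tmp/'   >> .gitignore
git add -A
git commit -m "Add new zwave dimmer items and rules"
git push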

I set everything up using Ansible, the playbooks for which are also checked into git. This is also nice as I can look back at past configurations and everything is documented.

So in a pinch I can blow away a whole (virtual) machine, run the playbooks, and rebuild it from scratch. It works quite well for me.

That’s a cool approach, @rlkoshak. Can you show what your Ansible playbooks look like?

I have hundreds of lines of playbooks, and Ansible is a whole ecosystem, in many ways as big and as complex as openHAB. So I’ll show just a couple of my playbooks.

First I’ll say that I literally make zero changes to any of my servers or RPis unless it’s through Ansible. Thus every configuration change, every software install, everything at all gets captured, saved, and its history tracked in git. I never have to think back to “how did I do that? What was installed on that machine?”

Here is my openHAB playbook.

---
- name: Change openhab group to 9001
  group:
    gid: 9001
    name: openhab
    state: present
    system: yes
  become: yes

- name: Create openhab user
  user:
    comment: 'openHAB'
    createhome: no
    name: openhab
    shell: /bin/false
    state: present
    system: yes
    uid: 9001 # uid of openhab user inside the official container
    group: openhab
  become: yes

- name: Add the openhab user to the dialout group
  command: usermod -a -G dialout openhab
  become: yes

- name: Add {{ share_user }} to the openhab group
  command: usermod -a -G openhab {{ share_user }}
  become: yes

- name: Set permissions on openhab data folder so we can check out into it
  file:
    path: "{{ openhab_data }}"
    state: directory
    owner: openhab
    group: openhab
    mode: u=rwx,g=rwx,o=rx
  become: yes

- name: Checkout openhab config
  git:
    repo: "{{ openhab_conf_repo }}"
    dest: "{{ openhab_data }}"
    accept_hostkey: yes
  become: yes

- name: Change ownership of openhab config
  file:
    path: "{{ openhab_data }}"
    owner: openhab
    group: openhab
    recurse: yes
  become: yes


- name: Create expected folders if they don't already exist
  file:
    path: "{{ item }}"
    state: directory
    owner: openhab
    group: openhab
  become: yes
  become_user: openhab
  with_items:
    - "{{ openhab_data }}/conf"
    - "{{ openhab_data }}/userdata"
    - "{{ openhab_data }}/addons"
    - "{{ openhab_data }}/.java"

- name: Create database
  influxdb_database:
    hostname: "{{ influxdb_ip_address }}"
    database_name: "{{ openhab_influxdb_database_name }}"
    state: present
    username: admin
    password: "{{ influxdb_admin_password }}"

# TODO there is currently a bug which prevents us from using influx in the container
- name: Create openhab user
#  command: influx -username admin -password {{ influxdb_admin_password }} -database '{{ openhab_influxdb_database_name }}' -execute "CREATE USER {{ influx_openhab_user }} WITH PASSWORD '{{ influx_openhab_password }}'"
  command: curl -XPOST http://localhost:8086/query?db={{ openhab_influxdb_database_name }}&u=admin&p={{ influxdb_admin_password }} --data-urlencode "q=CREATE USER {{ influx_openhab_user }} WITH PASSWORD '{{ influx_openhab_password }}'"

- name: Give openhab permissions on openhab db
#  command: influx -username admin -password {{ influxdb_admin_password }} -database '{{ openhab_influxdb_database_name }}' -execute "GRANT ALL ON {{ openhab_influxdb_database_name }} TO {{ influx_openhab_user }}"
  command: curl -XPOST http://localhost:8086/query?db={{ openhab_influxdb_database_name }}&u=admin&p={{ influxdb_admin_password }} --data-urlencode "q=GRANT ALL ON {{ openhab_influxdb_database_name }} TO {{ influx_openhab_user }}"

- name: Create grafana user
#  command: influx -username admin -password {{ influxdb_admin_password }} -database '{{ openhab_influxdb_database_name }}' -execute "CREATE USER {{ influx_grafana_user }} WITH PASSWORD '{{ influx_grafana_password }}'"
  command: curl -XPOST http://localhost:8086/query?db={{ openhab_influxdb_database_name }}&u=admin&p={{ influxdb_admin_password }} --data-urlencode "q=CREATE USER {{ influx_grafana_user }} WITH PASSWORD '{{ influx_grafana_password }}'"

- name: Give grafana read permissions on openhab_db
#  command: influx -username admin -password {{ influxdb_admin_password }} -database '{{ openhab_influxdb_database_name }}' -execute "GRANT READ ON {{ openhab_influxdb_database_name }} TO {{ influx_grafana_user }}"
  command: curl -XPOST http://localhost:8086/query?db={{ openhab_influxdb_database_name }}&u=admin&p={{ influxdb_admin_password }} --data-urlencode "q=GRANT READ ON {{ openhab_influxdb_database_name }} TO {{ influx_grafana_user }}"

- name: Update openHAB docker
  docker_container:
    detach: True
    devices:
      - "/dev/ttyUSB0:/dev/ttyUSB0:rwm"
      - "/dev/ttyUSB1:/dev/ttyUSB1:rwm"
    env:
#      EXTRA_JAVA_OPTS: "-Xbootclasspath/a:/openhab/conf/automation/jython/jython-standalone-2.7.0.jar -Dpython.home=/openhab/conf/automation/jython -Dpython.path=/openhab/conf/automation/lib/python"
      CRYPTO_POLICY: unlimited
    hostname: argus.koshak.net
    image: "{{ openhab_version }}"
    log_driver: syslog
    name: openhab
    network_mode: host
    pull: True
    recreate: True
    restart: True
    restart_policy: always
    tty: yes
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - "{{ openhab_data }}/conf:/openhab/conf"
      - "{{ openhab_data }}/userdata:/openhab/userdata"
      - "{{ openhab_data }}/addons:/openhab/addons"

The values in {{ }} are variables pulled in from another file (there’s a sketch of such a vars file after this list). This playbook:

  • creates the openhab group, making sure it exists with a GID of 9001
  • creates an openhab user with a UID of 9001
  • updates the groups the openhab user and my share user belong to
  • creates and sets the permissions on the folder where the configs are stored
  • checks out the config from my personal git server into that folder (this includes both conf and userdata)
  • fixes the ownership of the checked-out files (I probably don’t need this; I could check it out as the openhab user in the first place)
  • creates the conf, userdata, and addons folders; I’ve been using this playbook for a long time and don’t need the .java folder any more - it’s a holdover from the Nest 1.x binding
  • configures an InfluxDB database so OH has a place for persistence
  • configures InfluxDB so the grafana user has read access to the openhab database
  • finally, pulls and starts openHAB in Docker.
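
For reference, here’s a sketch of what such a vars file could look like; the values below are made-up placeholders rather than my real ones (secrets would normally live in an Ansible vault):

share_user: jan
openhab_data: /opt/openhab
openhab_conf_repo: git@git.example.net:home/openhab-config.git
openhab_version: openhab/openhab:2.5.1
influxdb_ip_address: 10.10.1.5
influxdb_admin_password: changeme-admin
openhab_influxdb_database_name: openhab_db
influx_openhab_user: openhab
influx_openhab_password: changeme-oh
influx_grafana_user: grafana
influx_grafana_password: changeme-grafana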

Here’s my Mosquitto playbook.

---

- name: Create mosquitto user
  user:
    comment: 'Mosquitto'
    createhome: no
    name: mosquitto
    shell: /bin/false
    state: present
    system: yes
  become: yes

- name: Mount mosquitto from file share
  include_role:
    name: mount-cifs
  vars:
    mount_mode: '0660'
    cifs_user: "{{ share_user }}"
    cifs_pass: "{{ share_pass }}"
    cifs_domain: "{{ workgroup }}"
    mount_user: "mosquitto"
    mount_path: "{{ mosquitto_data }}"
    mount_src: "{{ mosquitto_mount }}"

- name: Create mosquitto directories
  file:
    path: "{{ item }}"
    state: directory
    mode: u=rwx,g=rwx,o=rx
  become: yes
  become_user: mosquitto
  with_items:
    - "{{ mosquitto_data }}/config"
    - "{{ mosquitto_data }}/data"
    - "{{ mosquitto_data }}/log"

- name: Copy the prepared mosquitto.conf
  copy:
    src: mosquitto.conf
    dest: "{{ mosquitto_data }}/config/mosquitto.conf"
    mode: u=rw,g=rw,o=r
  become: yes
  become_user: mosquitto

- name: Install mosquitto clients, temporarily install mosquitto
  apt:
    name: "{{ item }}"
    update_cache: no
  become: yes
  with_items:
    - mosquitto
    - mosquitto-clients
    - openssl

- name: Install pexpect
  pip:
    name: pexpect
  become: yes

- name: Generate passwd file
  expect:
    command: mosquitto_passwd -c {{ mosquitto_data }}/config/passwd {{ mosquitto_user }}
    responses:
      'Password\:' : "{{ mosquitto_passwd }}"
      'Reenter password\:' : "{{mosquitto_passwd }}"
  become: yes
  become_user: mosquitto

- name: Remove mosquitto
  apt:
    name: mosquitto
    update_cache: no
    purge: yes
    state: absent
  become: yes

- name: Recreate mosquitto user
  user:
    comment: 'Mosquitto'
    createhome: no
    name: mosquitto
    shell: /bin/false
    state: present
    system: yes
  become: yes

- name: Does the CA already exist?
  stat:
    path: "{{ mosquitto_data }}/config/ca.crt"
  register: mosquitto_ca

- name: Create mosquitto certificate authority
  expect:
    command: openssl req -new -x509 -days 1461 -extensions v3_ca -keyout {{ mosquitto_data }}/config/ca.key -out {{ mosquitto_data }}/config/ca.crt
    responses:
      'Enter PEM pass phrase\:' : "{{ mosquitto_ca_passwd }}"
      'Verifying \- Enter PEM pass phrase\:' : "{{ mosquitto_ca_passwd }}"
      'Country Name \(2 letter code\) \[AU\]\:' : "{{ mosquitto_ca_country }}"
      'State or Province Name \(full name\) \[Some\-State\]\:' : "{{ mosquitto_ca_state }}"
      'Locality Name \(eg, city\) \[\]\:' : "{{ mosquitto_ca_city }}"
      'Organization Name \(eg, company\) \[Internet Widgits Pty Ltd\]\:' : "{{ mosquitto_ca_org }}"
      'Organizational Unit Name \(eg, section\) \[\]\:' : "{{ mosquitto_ca_unit }}"
      'Common Name \(e\.g\. server FQDN or YOUR name\) \[\]\:' : "{{ mosquitto_ca_fqdn }}"
      'Email Address \[\]\:' : "{{ mosquitto_ca_email }}"
  when: mosquitto_ca.stat.islnk is not defined
  become: yes
  become_user: mosquitto

- name: Create server key and cert
  include_role:
    name: create-cert
  vars:
    key: "{{ mosquitto_data }}/config/server.key"
    csr: "{{ mosquitto_data }}/config/server.csr"
    crt: "{{ mosquitto_data }}/config/server.crt"
    cert_country: "{{ mosquitto_ca_country }}"
    cert_state: "{{ mosquitto_ca_state }}"
    cert_city: "{{ mosquitto_ca_city }}"
    cert_org: "{{ mosquitto_ca_org }}"
    cert_unit: "{{ mosquitto_ca_unit }}"
    cert_fqdn: "{{ mosquitto_ca_fqdn }}"
    cert_email: "{{ mosquitto_ca_email }}"
    ca_crt: "{{ mosquitto_data }}/config/ca.crt"
    ca_passwd: "{{ mosquitto_ca_passwd }}"
    ca_key: "{{ mosquitto_data }}/config/ca.key"
  become: yes
  become_user: mosquitto

- name: Get uid
  command: id -u mosquitto
  register: mosquitto_uid

- name: Get gid
  command: id -g mosquitto
  register: mosquitto_gid

- name: Run the eclipse-mosquitto docker container
  docker_container:
    detach: True
    exposed_ports:
      - 1883
      - 9001
      - 8883
    hostname: mosquitto.koshak.net
    image: eclipse-mosquitto
    log_driver: syslog
    name: mosquitto
    published_ports:
      - "1883:1883"
      - "9001:9001"
      - "8883:8883"
    recreate: True
    restart: True
    restart_policy: always
    state: started
    user: 999 # TODO why can't I use the variable created above?
    volumes:
      - /etc/passwd:/etc/passwd:ro
      - /etc/localtime:/etc/localtime:ro
      - /usr/share/zoneinfo:/usr/share/zoneinfo:ro
      - "{{ mosquitto_data }}/config:/mosquitto/config"
      - "{{ mosquitto_data }}/log:/mosquitto/log"
      - "{{ mosquitto_data }}/data:/mosquitto/data"

This playbook

  • creates a mosquitto user
  • mounts the mosquitto data folder from my NAS using CIFS (this is all handled by another role); I really need to change this to NFS
  • creates the mosquitto folders if they don’t exist
  • copies over a pre-created mosquitto.conf that is part of the playbook (the playbook assumes the CIFS mount starts out empty)
  • temporarily installs mosquitto on the host so I can more easily create the passwd file for Mosquitto authentication
  • creates the CA and certs for encrypted Mosquitto connections
  • finally, pulls and starts Mosquitto in Docker.

All of the playbooks above are organized into a special directory structure called roles. At the top level I then create a file that tells Ansible which roles to run on a given host. My home-auto.yml file is

---

# Prerequisites:
# - install ssh
# - set up ssh for certificate only logins
# - install python

# TODO set up root with ssh certs for git interactions

- hosts: homeauto
  roles:
    - common
    - fish
    - mount-home
    - vm
    - docker
    - msmtp
    - openzwave
    - openhab
    - grafana
    - screengrab
    - share-ohconf
    - multitail
    - glances
    - sensorreporter
    - tripwire

So if I wanted to build a new home-automation machine from scratch, I just need to add the new machine to the inventory as a homeauto host, take care of the minimal prerequisites (ssh access and Python available on the host), run ansible-playbook home-auto.yml, and wait. In the end I’ll have an identically configured machine.
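
To make that concrete, the inventory side of it is roughly this (hostname and file name are placeholders):

# inventory.yml - put the new machine into the homeauto group
all:
  children:
    homeauto:
      hosts:
        newmachine.example.net:

followed by ansible-playbook -i inventory.yml home-auto.yml on the control machine.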

Really great - I’m jealous, that could have saved me hours recently…

I use openHABian, which offers Samba shares, so it’s easy to edit in Emacs from another machine. I also installed the Bacula File Daemon on the box, and it gets backed up every night to encrypted tapes on a remote server. This means if it fails I have to reinstall openHABian, reinstall Bacula, and run a tape restore. Still, that’s not much work, and it’s the same as for all my other servers.

Probably one of the more unusual elements of this setup is that I have a daily email sent to an offsite webmail account with a GPG-encrypted Bacula database dump and my password manager file attached. The password manager is pretty solid to start with, as it’s both passphrase- and YubiKey-protected. The GPG private key is separately printed as QR codes and kept in a bank safe. The passphrase for the GPG key is described alongside the document, but to decode the passphrase you’d have to know lots of personal details (i.e. I don’t need to trust the bank staff).

I use a drive copier and copy the whole SSD after changes. My system is very stable as far as changes go.

I just zip the items and things folders, or run the backup script, if I add a few things in between drive copies.
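
For those in-between backups, that amounts to something like this (a sketch assuming an apt-based openHAB 2 install; paths may differ on other setups):

# quick archive of just the text configs
zip -r ~/oh-config-$(date +%F).zip /etc/openhab2/items /etc/openhab2/things

# or the full backup script that ships with openHAB 2
sudo openhab-cli backup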

I run openHAB via Docker, which is managed in a git repo. Configuration files (/openhab/conf) reside in a different git repo, and I have a lightweight deployment job (read: rsync) that places the files where the Docker container has them mounted.

My workflow is essentially: make changes, then commit and push. The deployment job puts them where they need to be, openHAB notices, and picks up the changes.
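
The deployment job itself is nothing fancy; conceptually it is just a pull plus an rsync into the directory the container mounts as /openhab/conf (the paths below are placeholders for my real ones):

#!/bin/sh
# run by the deployment job after every push to the config repo
cd /srv/repos/openhab-conf && git pull --ff-only
rsync -a --delete --exclude '.git/' /srv/repos/openhab-conf/ /srv/docker/openhab/conf/
# openHAB watches the mounted conf folder and reloads changed files on its own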

What’s still needed is some sort of linting/testing to vet the configs before deployment - I’m kinda missing the CI in my CI/CD…

Also, Z-Wave discovery and pairing require interacting with the GUI. But once new devices are paired and I have the node info, all further thing/item/rule configuration happens via the process described above.

Nice to see this example - that’s more or less my holy grail for my openHAB setup. It seems like I have now worked out most of the kinks in my openHAB Docker setup (read: I can now follow my own notes to repeat my installation manually without finding new problems), and I’m currently working on familiarizing myself with Ansible :slight_smile:

I just wanted to ask you for a little intro post - but it seems like you have that covered already :smiley: