openHAB 2 + ESXi 6.5 + Z-Wave.me USB dongle

Checked it just now and only /dev/ttyACM0 exists; there is no sign of a /dev/ttyUSB0.

The biggest difference I see is that you are using the Aeotec dongle whilst I’m using the Z-Wave.me dongle. Depending on the USB<->UART implementation of the two sticks… or rather the difference between them… this might explain why my stick shows up as /dev/ttyACM0 and yours as /dev/ttyUSB0.

I have a similar setup - ESXi 6.5, Ubuntu 16.04, Z-Wave.me USB stick - but with openHAB running in Docker as well. My device shows up on /dev/ttyACM0 too, so I guess that your port is probably correct. As an alternative to the socat/ser2net approach that Rich mentions, you could try usbip. This is very easy to use, and I believe it is included in a number of distros (if not, it can be readily installed with apt install etc.). It shares your device over the LAN, but on your ‘receiving’ computer the device appears as a standard USB device (again at the same /dev/ttyACM0 in my case).

One thing you may wish to try first, though, is installing a copy of the OpenZWave Control Panel (there is a Docker container available on Docker Hub) and seeing if you can access your Z-Wave stick through it. This gives semi-direct access with extensive logging, so you can tell quickly whether your device is at least accessible to your VM. If this works fine, then you know the issue is probably on the openHAB installation/configuration side.

Thank you for your help and for offering alternative solutions. Might I ask what Java version you are using?

I remember having experienced Z-Wave problems before due to a wrong Java version… I’m currently running Zulu, by the way.

It looks like the Docker image comes with:

openjdk version "1.8.0_112"
OpenJDK Runtime Environment (Zulu 8.19.0.1-linux64) (build 1.8.0_112-b16)
OpenJDK 64-Bit Server VM (Zulu 8.19.0.1-linux64) (build 25.112-b16, mixed mode)

I’m also running zulu in my docker image (ENV JAVA_URL="https://www.azul.com/downloads/zulu/zdk-8-ga-linux_x64.tar.gz").

Downgraded my Java to 1.8.0_112… strangely, Z-Wave started working… sort of… devices take forever to initialize… once initialized they work… but after some time Z-Wave seems to stop working altogether…

In the meantime I have played with ser2net/socat, which seems to be working just fine… but I still don’t want to give up on my Z-Wave.me USB dongle passthrough in ESXi 6.5 :slight_smile:
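For reference, a minimal version of that setup looks something like the following (the device path, IP address and TCP port are placeholders, not necessarily what I used):

# /etc/ser2net.conf on the machine that physically owns the stick:
# expose the dongle raw on TCP port 3333
3333:raw:0:/dev/ttyACM0:115200 8DATABITS NONE 1STOPBIT

# on the openHAB machine: create a local pty that tunnels to the remote stick
sudo socat pty,link=/dev/ttyV0,raw,user=openhab,group=dialout,mode=660 tcp:192.168.1.50:3333

openHAB is then pointed at /dev/ttyV0 (or whatever link name you choose) as its serial port.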

If you got it working with ser2net/socat I know many in this community would benefit from a tutorial. @smar, the same goes for usbip.

There are a lot of reasons beyond not being able to pass it through to a VM that make this an attractive option for many, including:

  • freedom to place the controller in an optimal position
  • easier crash recovery/failover
  • support for multiple controllers in different “zones” in your deployment from one central OH

Like I said above, I spent a couple of evenings trying to get it to work hosting my controller on an RPi and never could get it going. Then I learned about the USB driver issue (I think it was you, @smar, who helped me with that) and abandoned the IP approach. I’d still like to know how to make it work though.

I will write up a small tutorial tomorrow… no problem

Sure, I can write up a brief tutorial if there is interest in this (though it may take a week or two till I get time to rebuild it again so as to ensure I capture all the steps).

I myself have my USB devices on an RPi 3, connected via usbip to my openHAB VMs running under ESXi on a separate physical box (I keep two VMs of openHAB, so that I always have one almost up-to-date copy as backup in case I break anything when testing/upgrading/fiddling!). This makes it easy to switch my USB devices between the VMs via a bash script.
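For illustration, such a switch script can be as simple as this (the RPi address and bus ID are placeholders, and this is a sketch rather than my exact script):

#!/bin/bash
# take over the Z-Wave stick on this VM; find the real bus ID with: usbip list -r 192.168.1.20
RPI=192.168.1.20
BUSID=1-1.2

sudo modprobe vhci-hcd                    # client-side usbip kernel module
usbip port                                # show anything this VM has already imported
# (the VM currently holding the stick should release it first with: sudo usbip detach -p <port>)
sudo usbip attach -r "$RPI" -b "$BUSID"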

I like the redundancy. I will probably add a redundant Docker instance when I do my next upgrade. I don’t have the resources on my server to keep a spare full OH VM running, and I don’t have anything controlled by OH that isn’t backed up by a manual or native controller, so a few hours or days of downtime isn’t the end of the world. And since I build all my VMs using Ansible, rebuilding a VM doesn’t take much work on my part.

But I really like the idea of moving my controller to a better location.

Don’t you just keep your original docker image file? That is then your backup from which you can instantiate however many running containers you need. When I download a new docker image, I tag it with (a) the build number, and (b) the tag ‘latest’. My docker run script that I use to start a new container then stops/removes any old running container, and starts a new container based on the ‘latest’ tag image. However, old images are still available in case I need to revert back.
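A sketch of what such a run script might look like (image name, tag and paths here are illustrative only):

#!/bin/bash
BUILD=1084   # hypothetical build number
docker pull openhab/openhab:2.1.0-snapshot-amd64
docker tag openhab/openhab:2.1.0-snapshot-amd64 openhab-local:build-$BUILD
docker tag openhab/openhab:2.1.0-snapshot-amd64 openhab-local:latest

# replace the running container with one based on 'latest'
docker stop openhab && docker rm openhab
docker run -d --name openhab --net=host \
  -v /opt/openhab2/conf:/openhab/conf \
  -v /opt/openhab2/userdata:/openhab/userdata \
  -v /opt/openhab2/addons:/openhab/addons \
  openhab-local:latest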

The reason I keep another VM as well is that it has all my config/userdata folders semi-synced (config files are always kept in sync via Syncthing, whilst the userdata folder is synced by cron jobs every few hours). openHAB itself is not constantly running on this standby VM. System resources (CPU/memory) used are next to nothing according to ESXi - it is only hard disk space that I have to sacrifice (which admittedly is limited on the SSD drive!). I also keep my VMs on separate physical disks.

As you say, not really the end of the world if the system goes down, but as I am travelling frequently, it makes it easier to VPN in and switch to a working system if something goes wrong (e.g. I had a disk failure a few months back on one of my ESXi drives that was hosting the running OH instance; it took about 5 minutes to be back up and running).

WRT the controller positioning, I have certainly found it useful to be able to have all my USB devices on a separate RPi, which can be located anywhere, quickly rebuilt from a backup image etc.

PS would you mind sharing your ansible playbooks for rebuilding your VM/docker etc? I’ve just started looking into it and it does seem to be a very powerful tool.

It isn’t so much the image I’m worried about. I can download that no problem, and I do keep the second most recent one. The big thing is to have a hot backup of my conf/userdata so I can just spin up the old container with the old config in a pinch. But to make that work I’ll need two different docker run configs: one that mounts the current data and one that mounts the backup, so that I’d be just a couple of commands away from recovery.
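Roughly what I have in mind, as a sketch with hypothetical paths:

# normal operation: mount the live conf/userdata
docker run -d --name openhab --net=host \
  -v /opt/openhab2/conf:/openhab/conf \
  -v /opt/openhab2/userdata:/openhab/userdata \
  openhab/openhab:2.1.0-snapshot-amd64

# recovery: the same command pointed at the backup copies
docker run -d --name openhab --net=host \
  -v /opt/openhab2-backup/conf:/openhab/conf \
  -v /opt/openhab2-backup/userdata:/openhab/userdata \
  openhab/openhab:2.1.0-snapshot-amd64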

And it is the RAM that I’m running right up against the limits of on my server right now. I’m pretty much maxed out and need to go ahead and spend the $100 for more RAM. Another VM that would take 2 GB or more would put me over.

No problem, though it might take some time to do a thorough posting. I was lazy in a lot of my roles and have hard-coded paths and sensitive information embedded rather than following Ansible best practices. In short, what I have is a little embarrassing at the moment. I do plan to post the full playbook to GitHub and possibly Ansible Galaxy at some point.

I do everything using Ansible instead of Docker Compose. I don’t know how much you know about Ansible yet so please ask questions if you have them.

Overall, I have my openHAB configs in a locally hosted git server, and the playbook checks the config out as part of installation.

openHAB Role

meta/main.yml
Instead of Docker Compose I use meta and dependencies to install the other related apps like mosquitto, influxdb, and grafana.

---
dependencies:
  - { role: mosquitto }
  - { role: influxdb }
  - { role: grafana }

vars/main.yml

---
openhab_conf_repo: <git path to config git repo> 
openhab_data: /opt/openhab2

influxdb_ip_address: <domain name or IP address of influxdb>
openhab_influxdb_database_name: openhab_db
influx_openhab_user: openhab
influx_openhab_password: <password>
influx_grafana_user: grafana
influx_grafana_password: <password>

Obviously, replace the stuff in < > with appropriate values.

tasks/main.yml

The playbook:

  • creates a user and group to match the UID/GID in the container
  • adds the openhab user to the dialout group
  • adds my main user {{ share_user }} to the openhab group
  • fixes permissions on the main folder where all the OH files will go and checks out the config there
  • creates any missing folders
  • creates the InfluxDB user and database for OH
  • creates the Grafana user on InfluxDB and gives it permission on the OH database (actions are idempotent so if the user already exists nothing happens)
  • finally downloads and runs the official OH image from Docker Hub
---
- name: Change openhab group to 9001
  group:
    gid: 9001
    name: openhab
    state: present
    system: yes
  become: yes

- name: Create openhab user
  user:
    comment: 'openHAB'
    createhome: no
    name: openhab
    shell: /bin/false
    state: present
    system: yes
    uid: 9001 # uid of openhab user inside the official container
    group: openhab
  become: yes

- name: Add the openhab user to the dialout group
  command: usermod -a -G dialout openhab
  become: yes

- name: Add {{ share_user }} to the openhab group
  command: usermod -a -G openhab {{ share_user }}
  become: yes

- name: Set permissions on openhab data folder so we can check out into it
  file:
    path: "{{ openhab_data }}"
    state: directory
    owner: openhab
    group: openhab
    mode: u=rwx,g=rwx,o=rx
  become: yes

- name: Checkout openhab config
  git:
    repo: "{{ openhab_conf_repo }}"
    dest: "{{ openhab_data }}"
    accept_hostkey: yes
  become: yes

- name: Change ownership of openhab config
  file:
    path: "{{ openhab_data }}"
    owner: openhab
    group: openhab
    recurse: yes
  become: yes

- name: Create expected folders if they don't already exist
  file:
    path: "{{ item }}"
    state: directory
    owner: openhab
    group: openhab
  become: yes
  become_user: openhab
  with_items:
    - "{{ openhab_data }}/conf"
    - "{{ openhab_data }}/userdata"
    - "{{ openhab_data }}/addons"
    - "{{ openhab_data }}/.java"

- name: Create database
  influxdb_database:
    hostname: "{{ influxdb_ip_address }}"
    database_name: "{{ openhab_influxdb_database_name }}"
    state: present
    username: admin
    password: "{{ influxdb_admin_password }}"

# TODO there is currently a bug which prevents us from using influx in the container
- name: Create openhab user
#  command: influx -username admin -password {{ influxdb_admin_password }} -database '{{ openhab_influxdb_database_name }}' -execute "CREATE USER {{ influx_openhab_user }} WITH PASSWORD '{{ influx_openhab_password }}'"
  command: curl -XPOST http://localhost:8086/query?db={{ openhab_influxdb_database_name }}&u=admin&p={{ influxdb_admin_password }} --data-urlencode "q=CREATE USER {{ influx_openhab_user }} WITH PASSWORD '{{ influx_openhab_password }}'"

- name: Give openhab permissions on openhab db
#  command: influx -username admin -password {{ influxdb_admin_password }} -database '{{ openhab_influxdb_database_name }}' -execute "GRANT ALL ON {{ openhab_influxdb_database_name }} TO {{ influx_openhab_user }}"
  command: curl -XPOST http://localhost:8086/query?db={{ openhab_influxdb_database_name }}&u=admin&p={{ influxdb_admin_password }} --data-urlencode "q=GRANT ALL ON {{ openhab_influxdb_database_name }} TO {{ influx_openhab_user }}"

- name: Create grafana user
#  command: influx -username admin -password {{ influxdb_admin_password }} -database '{{ openhab_influxdb_database_name }}' -execute "CREATE USER {{ influx_grafana_user }} WITH PASSWORD '{{ influx_grafana_password }}'"
  command: curl -XPOST http://localhost:8086/query?db={{ openhab_influxdb_database_name }}&u=admin&p={{ influxdb_admin_password }} --data-urlencode "q=CREATE USER {{ influx_grafana_user }} WITH PASSWORD '{{ influx_grafana_password }}'"

- name: Give grafana read permissions on openhab_db
#  command: influx -username admin -password {{ influxdb_admin_password }} -database '{{ openhab_influxdb_database_name }}' -execute "GRANT READ ON {{ openhab_influxdb_database_name }} TO {{ influx_grafana_user }}"
  command: curl -XPOST http://localhost:8086/query?db={{ openhab_influxdb_database_name }}&u=admin&p={{ influxdb_admin_password }} --data-urlencode "q=GRANT READ ON {{ openhab_influxdb_database_name }} TO {{ influx_grafana_user }}"

# TODO download Jython and add to env in docker call

- name: Start openHAB
  docker_container:
    detach: True
    devices:
      - "/dev/ttyUSB0:/dev/ttyUSB0:rwm"
    hostname: argus.koshak.net
    image: openhab/openhab:2.1.0-snapshot-amd64
    log_driver: syslog
    name: openhab
    network_mode: host
    pull: True
    recreate: True
    restart: True
    restart_policy: always
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - "{{ openhab_data }}/conf:/openhab/conf"
      - "{{ openhab_data }}/userdata:/openhab/userdata"
      - "{{ openhab_data }}/addons:/openhab/addons"
      - "{{ openhab_data }}/.java:/openhab/.java"

Obviously there are a lot of areas that need improvement. I welcome comments and critiques.

Mosquitto

NOTE: I’m pretty certain there are better developed Mosquitto roles available on GitHub and Ansible Galaxy. I’ve not yet tested anything to do with certs, though I do generate them.

I put my mosquitto.conf file in files/mosquitto.conf which will get uploaded. You could use lineinfile and/or blockinfile to insert config changes to the default conf but I find doing it this way easier.
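If you did want to go the lineinfile route instead, a task of roughly this shape would do it (the setting shown is only an example):

- name: Example only - flip a single setting in the stock mosquitto.conf
  lineinfile:
    path: "{{ mosquitto_data }}/config/mosquitto.conf"
    regexp: '^#?\s*allow_anonymous'
    line: 'allow_anonymous false'
  become: yes
  become_user: mosquitto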

vars/main.yml

Where possible I try to put the data that gets mounted to the containers on my NAS. I don’t do that for openHAB because I had some permission and file event problems that I haven’t tried to solve yet (probably driven by the fact I’m using CIFS rather than NFS, darned mixed environment).

---

mosquitto_data: /mnt/mosquitto
mosquitto_mount: <cifs path to shared folder>
mosquitto_user: "{{ share_user }}"
mosquitto_passwd: "{{ share_pass }}"
mosquitto_ca_passwd: "{{ share_pass }}"
mosquitto_ca_country: <country>
mosquitto_ca_state: <state>
mosquitto_ca_city: <city>
mosquitto_ca_org: <org>
mosquitto_ca_unit: <unit>
mosquitto_ca_fqdn: "{{ ansible_hostname }}.<domain if you got it>"
mosquitto_ca_email: "{{ email_login }}"

tasks/main.yml

  • create mosquitto user
  • mount the file share where the mosquitto stuff will go
  • create the directories that get mounted to the container if they don’t exist
  • copy over the conf file
  • install mosquitto clients on the host so we can generate the passwd file, generate the passwd file, and remove the clients
  • recreate the mosquitto user because it will have been removed when we uninstalled the clients
  • create the mosquitto CA if it doesn’t exist
  • create the certs
  • download and run the eclipse-mosquitto Docker image from DockerHub
---

- name: Create mosquitto user
  user:
    comment: 'Mosquitto'
    createhome: no
    name: mosquitto
    shell: /bin/false
    state: present
    system: yes
  become: yes

- name: Mount mosquitto from file share
  include_role:
    name: mount-cifs
  vars:
    mount_mode: '0660'
    cifs_user: "{{ share_user }}"
    cifs_pass: "{{ share_pass }}"
    cifs_domain: "{{ workgroup }}"
    mount_user: "mosquitto"
    mount_path: "{{ mosquitto_data }}"
    mount_src: "{{ mosquitto_mount }}"

- name: Create mosquitto directories
  file:
    path: "{{ item }}"
    state: directory
    mode: u=rwx,g=rwx,o=rx
  become: yes
  become_user: mosquitto
  with_items:
    - "{{ mosquitto_data }}/config"
    - "{{ mosquitto_data }}/data"
    - "{{ mosquitto_data }}/log"

- name: Copy the prepared mosquitto.conf
  copy:
    src: mosquitto.conf
    dest: "{{ mosquitto_data }}/config/mosquitto.conf"
    mode: u=rw,g=rw,o=r
  become: yes
  become_user: mosquitto

- name: Install mosquitto clients, temporarily install mosquitto
  apt:
    name: "{{ item }}"
    update_cache: no
  become: yes
  with_items:
    - mosquitto
    - mosquitto-clients
    - openssl

- name: Install pexpect
  pip:
    name: pexpect
  become: yes

- name: Generate passwd file
  expect:
    command: mosquitto_passwd -c {{ mosquitto_data }}/config/passwd {{ mosquitto_user }}
    responses:
      'Password\:' : "{{ mosquitto_passwd }}"
      'Reenter password\:' : "{{mosquitto_passwd }}"
  become: yes
  become_user: mosquitto

- name: Remove mosquitto
  apt:
    name: mosquitto
    update_cache: no
    purge: yes
    state: absent
  become: yes

- name: Recreate mosquitto user
  user:
    comment: 'Mosquitto'
    createhome: no
    name: mosquitto
    shell: /bin/false
    state: present
    system: yes
  become: yes

- name: Does the CA already exist?
  stat:
    path: "{{ mosquitto_data }}/config/ca.crt"
  register: mosquitto_ca

- name: Create mosquitto certificate authority
  expect:
    command: openssl req -new -x509 -days 1461 -extensions v3_ca -keyout {{ mosquitto_data }}/config/ca.key -out {{ mosquitto_data }}/config/ca.crt
    responses:
      'Enter PEM pass phrase\:' : "{{ mosquitto_ca_passwd }}"
      'Verifying \- Enter PEM pass phrase\:' : "{{ mosquitto_ca_passwd }}"
      'Country Name \(2 letter code\) \[AU\]\:' : "{{ mosquitto_ca_country }}"
      'State or Province Name \(full name\) \[Some\-State\]\:' : "{{ mosquitto_ca_state }}"
      'Locality Name \(eg, city\) \[\]\:' : "{{ mosquitto_ca_city }}"
      'Organization Name \(eg, company\) \[Internet Widgits Pty Ltd\]\:' : "{{ mosquitto_ca_org }}"
      'Organizational Unit Name \(eg, section\) \[\]\:' : "{{ mosquitto_ca_unit }}"
      'Common Name \(e\.g\. server FQDN or YOUR name\) \[\]\:' : "{{ mosquitto_ca_fqdn }}"
      'Email Address \[\]\:' : "{{ mosquitto_ca_email }}"
  when: not mosquitto_ca.stat.exists
  become: yes
  become_user: mosquitto

- name: Create server key and cert
  include_role:
    name: create-cert
  vars:
    key: "{{ mosquitto_data }}/config/server.key"
    csr: "{{ mosquitto_data }}/config/server.csr"
    crt: "{{ mosquitto_data }}/config/server.crt"
    cert_country: "{{ mosquitto_ca_country }}"
    cert_state: "{{ mosquitto_ca_state }}"
    cert_city: "{{ mosquitto_ca_city }}"
    cert_org: "{{ mosquitto_ca_org }}"
    cert_unit: "{{ mosquitto_ca_unit }}"
    cert_fqdn: "{{ mosquitto_ca_fqdn }}"
    cert_email: "{{ mosquitto_ca_email }}"
    ca_crt: "{{ mosquitto_data }}/config/ca.crt"
    ca_passwd: "{{ mosquitto_ca_passwd }}"
    ca_key: "{{ mosquitto_data }}/config/ca.key"
  become: yes
  become_user: mosquitto

- name: Get uid
  command: id -u mosquitto
  register: mosquitto_uid

- name: Get gid
  command: id -g mosquitto
  register: mosquitto_gid


- name: Run the eclipse-mosquitto docker container
  docker_container:
    detach: True
    exposed_ports:
      - 1883
      - 9001
      - 8883
    hostname: mosquitto.koshak.net
    image: eclipse-mosquitto
    log_driver: syslog
    name: mosquitto
    published_ports:
      - "1883:1883"
      - "9001:9001"
      - "8883:8883"
    recreate: True
    restart: True
    restart_policy: always
    state: started
    user: 999 # TODO why can't I use the variable created above?
    volumes:
      - /etc/passwd:/etc/passwd:ro
      - /etc/localtime:/etc/localtime:ro
      - /usr/share/zoneinfo:/usr/share/zoneinfo:ro
      - "{{ mosquitto_data }}/config:/mosquitto/config"
      - "{{ mosquitto_data }}/log:/mosquitto/log"
      - "{{ mosquitto_data }}/data:/mosquitto/data"

InfluxDB

As with mosquitto.conf, I put influxdb.conf into files and copy it over to the host rather than editing the file in place.

vars/main.yml

---
influxdb_data: /mnt/influxdb
influxdb_mount: <cifs path to shared folder>
influxdb_admin_password: <password>

tasks/main.yml

  • Create influxdb user, add main login user to the influxdb group and mount the share.
  • Create the needed directories if they don’t exist and copy over the conf file
  • Download and start the official influxdb image from DockerHub
  • Give it a chance to come up, then install the influxdb Python library (needed by Ansible to interact with InfluxDB)
  • Create the admin user
---

- name: Create influxdb user
  user:
    comment: 'InfluxDB Server'
    createhome: no
    name: influxdb
    shell: /usr/sbin/nologin
    state: present
    system: yes
  become: true

- name: Add {{ share_user }} to the influxdb group
  command: usermod -a -G influxdb {{ share_user }}
  become: yes

- name: Mount influxdb home from file share
  include_role:
    name: mount-cifs
  vars:
    mount_mode: '0666'
    cifs_user: "{{ share_user }}"
    cifs_pass: "{{ share_pass }}"
    cifs_domain: "{{ workgroup }}"
    mount_user: "influxdb"
    mount_path: "{{ influxdb_data }}"
    mount_src: "{{ influxdb_mount }}"

- name: Create needed directories
  file:
    path: "{{ item }}"
    state: directory
  become: yes
  become_user: influxdb
  with_items:
    - "{{ influxdb_data }}/config"
    - "{{ influxdb_data }}/data"
    - "{{ influxdb_data }}/logs"

- name: Copy the influxdb.conf file
  copy:
    src: influxdb.conf
    dest: "{{ influxdb_data }}/config/influxdb.conf"
    mode: a=r
  become: yes
  become_user: influxdb

- name: Start InfluxDB
  docker_container:
    detach: True
    exposed_ports:
      - 8086
    hostname: argus.koshak.net
    image: influxdb
    log_driver: syslog
    name: influxdb
    published_ports:
      - "8086:8086"
    pull: True
    restart: True
    restart_policy: always
    user: 998:998 # TODO get this from tasks or variable
    volumes:
      - "{{ influxdb_data }}/config:/etc/influxdb"
      - "{{ influxdb_data }}/data:/var/lib/influxdb"
      - "{{ influxdb_data }}/logs:/var/log/influxdb"
      - /etc/localtime:/etc/localtime:ro
      - /etc/passwd:/etc/passwd:ro

- name: Sleep for a few to give it a chance to come up
  pause:
    seconds: 20
    prompt: "Waiting for InfluxDB to come up"

- name: Install influxdb python module
  pip:
    name: influxdb
    state: present
  become: yes

# Note there is a current bug in Docker preventing one from using exec to run influx so using curl and the rest api here
- name: Create admin user
#  command: influx -execute "CREATE USER admin WITH PASSWORD '{{ influxdb_admin_password }}' WITH ALL PRIVILEGES"
  command: curl -XPOST http://localhost:8086/query --data-urlencode "q=CREATE USER admin WITH PASSWORD '{{ influxdb_admin_password }}' WITH ALL PRIVILEGES"

Grafana

vars/main.yml

---
grafana_data: /opt/grafana

tasks/main.yml

  • create the grafana user and add the main login to the group
  • create the data directories for grafana config and logs and such
  • download and run the grafana/grafana docker image from DockerHub
---
- name: Create grafana user
  user:
    comment: 'Grafana Server'
    createhome: no
    name: grafana
    shell: /usr/sbin/nologin
    state: present
    system: yes
  become: true

- name: Add {{ share_user }} to the grafana group
  command: usermod -a -G grafana {{ share_user }}
  become: yes

- name: Create grafana directories
  file:
    path: "{{ item }}"
    state: directory
    owner: grafana
    group: grafana
    mode: a=rwx
  become: yes
  with_items:
    - "{{ grafana_data }}"

- name: Start Grafana
  docker_container:
    detach: True
    exposed_ports:
      - 3000
    hostname: grafana.koshak.net
    image: grafana/grafana
    log_driver: syslog
    name: grafana
    env:
      GF_USERS_ALLOW_SIGN_UP: "false"
      GF_AUTH_ANONYMOUS_ENABLED: "true"
    published_ports:
      - "3000:3000"
    pull: True
    restart: True
    restart_policy: always
    volumes:
      - "{{ grafana_data }}:/var/lib/grafana"
      - /etc/localtime:/etc/localtime:ro
      - /etc/passwd:/etc/passwd:ro
ESXi manages physical RAM usage dynamically and allows over-allocation (in the same way it does for virtual disks), unless you explicitly force it to reserve physical RAM for a given VM. Of course this doesn’t mean you would go out and give large amounts to each and every VM and hope to get the same performance as physical RAM, but in normal cases your VMs would not be using their full RAM allocation, and thus a standby VM (which uses only a few hundred MB of RAM, even though allocated say 2 GB) would work fine. Again, this is how my systems have been running for a couple of years now.

I’ve just started out with it and am still reading up on how best to use it, so I will no doubt have questions as I go along. Thanks for the directions on your current setup and the offer of help. Using a local git is a great idea - I started off just copying from a backup I keep on my NAS, which has never been that ideal.

Oh, I know, but I’m actively using that much RAM. I’ve really oversubscribed this machine. I’m just at the level where everything I have will run OK, but the last time I tried to add a VM, just the RAM needed to install the OS put me over. And when it did, everything went to pot.

I really do need some more RAM.

You can learn everything you need to make it work online, but I found this book very helpful. I think I picked it up as part of a Humble Bundle or got it as one of Packt’s freebies. But having read it, I would have bought it outright when I started had I known of it.

I spin up a Gogs Docker container for my git. It provides most of the features of something like GitHub in a pretty tiny package. It isn’t all roses though. Whenever I upgrade I have to reimport my SSH public key, and because I also want to ssh to the host machine it runs on, I had to set up a custom config in my .ssh folder to map ssh gogs.koshak.net to ssh -p 10022 medusa.koshak.net. But once set up it works great.
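The mapping itself is just a couple of lines in ~/.ssh/config, something like the following (the User line is an assumption; Gogs normally uses the git user):

# ~/.ssh/config
Host gogs.koshak.net
    HostName medusa.koshak.net
    Port 10022
    User git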

Again, I put a preconfigured app.ini in the files folder.

vars/main.yml

---
gogs_data: /mnt/gogs
gogs_mount: <cifs path to share>
gogs_db_name: gogs_production
gogs_db_user: git
gogs_db_passwd: <password>

tasks/main.yml

---

- name: Create a git user
  user:
    comment: 'Gogs'
    createhome: no
    name: git
    shell: /bin/bash
    state: present
    system: yes
  become: yes

- name: Mount gogs working folder
  include_role:
    name: mount-cifs
  vars:
    mount_mode: '0660'
    cifs_user: "{{ share_user }}"
    cifs_pass: "{{ share_pass }}"
    cifs_domain: "{{ workgroup }}"
    mount_user: git
    mount_path: "{{ gogs_data }}"
    mount_src: "{{ gogs_mount }}"

- name: Start gogs
  docker_container:
    detach: True
    exposed_ports:
      - 22
      - 3000
    hostname: chimera.koshak.net
    image: gogs/gogs
    log_driver: syslog
    name: gogs
    published_ports:
      - "10022:22"
      - "3000:3000"
    pull: True
    recreate: True
    restart: True
    restart_policy: always
    volumes:
      - "{{ gogs_data }}:/data"
      - /etc/passwd:/etc/passwd:ro
      - /usr/share/zoneinfo:/usr/share/zoneinfo:ro

And here is some advice I can offer as you go down the Ansible path:

  • Don’t bother with /etc/ansible/hosts; create the inventory file in your local project folder (configuration controlled with git or whatever, of course) and pass it to ansible-playbook using the -i option (see the example invocation after this list).

  • Follow the standard project structure:

# your main playbooks which will primarily just list the roles to apply to which categories of servers in your inventory
site.yml
webservers.yml
fooservers.yml
roles/
    common/
        /tasks      # the main script that gets executed
        /handlers   # tasks that get performed when certain other tasks get performed, don't worry too much about these
        /files      # helper files that are available to ansible, usually for uploading to the host
        /templates  # I haven't done anything with templates yet
        /vars       # variables used in this task
        /defaults   # default values for variables that are not overridden elsewhere (e.g. in /vars)
        /meta       # where to define other roles this role depends upon
    openhab/
        ...

Each of the above folders except files will have a main.yml file in it

And an example top-level yml file:

home-auto.yml # builds my home automation vm

---

# Prerequisites:
# - install ssh
# - set up ssh for certificate only logins
# - install python
# - all hosts added to inventory under [home-auto] 

# TODO set up root with ssh certs for git interactions

- hosts: home-auto
  roles:
    - common
    - fish
    - mount-home
    - vm
    - docker
    - ssmtp
    - openzwave
    - openhab
    - share-ohconf
    - multitail
    - zoneminder
  • ansible-vault can be used to encrypt sensitive information like passwords
  • use ansible-galaxy init to create an empty skeleton role with all of its folders
  • ansible-galaxy can also be used to install roles from their community (kinda like dockerhub but for Ansible roles)
  • The best YML/Ansible aware IDE I’ve found is PyCharm Community with the YAML and YAML/Ansible plug-ins. Because best practice is to scatter these files all over the place I found using an IDE really improves my productivity. It also is variable aware so ctrl-space completion works.
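To tie the inventory and vault bullets above together, a typical invocation looks something like this (the file names are just examples):

# encrypt a vars file that holds passwords and other secrets
ansible-vault encrypt roles/openhab/vars/secrets.yml

# run the playbook against a local inventory file, prompting for the vault password
ansible-playbook -i inventory home-auto.yml --ask-vault-pass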

Good luck!

Feel free to shoot me a PM or reply back here if you run into any trouble or have questions.

Been there, done that :slight_smile:.

Thanks for all the ansible stuff - this will save me a great deal of time getting up to speed! I have a few things that I still need to figure out in my head, and so will definitely have a look at that book. Will definitely take you up on your offer of help :slight_smile:

Posted my write up of the steps required to get ser2net and socat up and running and share USB devices over the network…

click here for the guide

Hi, did I understand you right that you used usbip for your Z-Wave USB stick and shared it with your OH running in Docker?
Could you please share a tutorial or describe how you did this successfully? I cannot get the OH Docker container to see the Z-Wave stick shared with socat/ser2net.

Hi

Yes, I did use usbip to connect my openHAB to a zwave stick on a remote device. However, I have not been doing this for some time now as I ended up relocating my openHAB box (for unrelated reasons), and so can’t remember the precise details.

I suggest you use one of the many standard guides on the net for installing and using usbip on your OS, e.g. https://developer.ridgerun.com/wiki/index.php?title=How_to_setup_and_use_USB/IP. It is a common tool included in some distros, and if not, binaries are generally readily available for install using the OS’s package installation tools.

The one thing to be careful of is that you need to ensure that you are using version 2 of usbip (and not the commonly found version 1) if your Linux is on a recent-ish kernel. Doing an apt install usbip on Ubuntu 16.04 used to install the old non-working version. I had to install the updated version by doing apt install linux-tools-<kernel-number>, where <kernel-number> is the kernel version of your Linux. I noticed that the tutorial I linked above uses linux-tools-generic rather than the one for a specific kernel. That didn’t work for me - I had to use the kernel-specific build.
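On Ubuntu, one way to pull in the kernel-matched build is:

# install the usbip tools matching the running kernel
sudo apt install linux-tools-$(uname -r)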

Once usbip is installed on both the server and the client, the various ‘modules’ loaded and the usbipd daemon started on the server, it is a straightforward process of:

  1. Exporting the usb device from the server (via the usbip bind command)

  2. Checking that the usb device is visible remotely from the client (usbip list -r <server_ip>, where <server_ip> is your usb device host server)

  3. Attaching the usb device to your client using usbip attach command

(All the above can be done in bash shell scripts, so that it is automatically run if you restart your servers; a minimal sketch follows.)
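As a rough illustration (the IP address and bus ID are placeholders; get the real bus ID from usbip list -l on the server):

# on the server (e.g. the RPi) that physically hosts the stick
sudo modprobe usbip-host
sudo usbipd -D                       # start the usbip daemon
sudo usbip bind -b 1-1.2             # export the stick by its bus ID

# on the client (the openHAB VM)
sudo modprobe vhci-hcd
usbip list -r 192.168.1.20           # confirm the stick is visible
sudo usbip attach -r 192.168.1.20 -b 1-1.2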

Once attached, the device appears as any other USB device on your Linux system, e.g. you can see it through lsusb etc. It will also appear as a standard USB device in your /dev folder, e.g. /dev/ttyACM0. Again, as your OS sees it as a standard USB device, you can also use udev to symlink it if you want a fixed path.
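For example, a udev rule of roughly this shape gives the stick a stable name (the vendor/product IDs are placeholders; check yours with lsusb):

# /etc/udev/rules.d/99-zwave.rules
SUBSYSTEM=="tty", ATTRS{idVendor}=="0658", ATTRS{idProduct}=="0200", SYMLINK+="zwave"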

Hope that helps.

Wow.
I am clearly out of my league.
Would you happen to have a step-by-step idiot’s guide for this written somewhere?
I tried the way this site did it:
https://community.home-assistant.io/t/rpi-as-z-wave-zigbee-over-ip-server-for-hass/23006 and I could not complete the client side (Ubuntu VM on ESXi 6.0) without errors.

Sorry, no I don’t have anything further written down.