Organizing my smart home with Docker

As I’ve built up my smart home, I’ve found myself using an increasing number of interdependent services. As they can cause issues with each other, I was very attracted to Docker as a way to keep everything isolated and portable. That said, I ran into some hiccups with my setup, so I thought I’d share a few of my suggestions and best practices in the hopes they help someone out there.

I am far from an expert, and these are just my experiences. I’ve learned by failing, by lots of googling, and by trial and error. This is not the only way, or even necessarily the best way, to do things; it’s just the best I’ve found so far. I’m very open to suggestions, though my primary intent with this post is to help others get through some of the stickier points.

Also, my only experience with Docker has been running on an x86 Ubuntu server. While much of this probably will apply to other systems (especially Macs and other Linux flavors), I can’t guarantee that.

I use Docker to run the following services on my server.

  • openHAB Home Automation
  • MotionEye CCTV
  • Emby media server
  • Logitech Media Server + 2 SqueezeLite instances for audio streaming
  • ShairPort Sync for audio streaming from Apple devices
  • InfluxDB
  • Grafana for beautiful graphs
  • Frontail for viewing logs
  • UniFi poller for bringing UniFi data into my Grafana graphs
  • Portainer for container management

My list of containers in Portainer

That’s a lot of software with conflicting requirements, and I think it’d be really hard to keep it all running happily using just SystemD. This is where Docker really shines!

I strongly recommend Portainer to keep track of your Docker containers. While you can theoretically do everything you need from the command line, I find it very helpful to view everything at a glance and make small changes. I would make getting that set up the first step of a successful Docker setup.
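
If it helps as a starting point, here is roughly how I would bring Portainer itself up with docker run. The image name and data path are assumptions on my part (portainer/portainer-ce, with its data kept under /opt like the rest of my containers), so adjust them for your version:

sudo docker run \
-d \
-p 9000:9000 \
--restart=always \
--name=portainer \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /opt/portainer:/data \
portainer/portainer-ce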

Next, I’d identify software that doesn’t necessarily benefit from the Docker experience. Initially, I planned a strict Docker-only stance for my IoT services, but I found that to be less than optimal. nodeRED tended (for me) to work better installed directly on the host machine, and I also (before I switched to MotionEye) found that Shinobi CCTV worked better directly on the machine. I don’t remember why, but I had trouble with my Mosquitto container and decided just to run it as a SystemD service. Finally, I used QLC+ for DMX control, and that seems to work best on its own system, so it got its own RPi3. YMMV.

Next, I start with the service’s entry on Docker Hub and look at the docker run examples. I take the example, create a text file on my desktop computer, and use that as a template. That way, if I ever need to recreate the container, I can do so using the exact same variables I used to build it. One important caveat: if you later make changes to your container through Portainer, be sure to record those changes in your notes as well.

Next, if my container requires a persisted volume (as most do), I always use a bind mount. I prefer having all of my container data in one place, so I use the /opt directory and create a subdirectory for each service, e.g., /opt/grafana. I make sure the run command reflects the proper file path:

-v /opt/grafana/data:/var/lib/grafana \

Note that for Docker’s -v flag, the host machine’s path is on the left of the colon and the container’s path is on the right. In this example, the host folder /opt/grafana/data appears inside the container as /var/lib/grafana.

Next comes what I think causes some of the biggest headaches: file permissions. If the container doesn’t have permission to view the files inside its directory, it can’t function properly. To avoid going into unnecessary detail, I’m going to link some recommended reading here:

https://www.tecmint.com/add-users-in-linux/

Basically, to get your permissions straight, you will want to do the following (a command sketch follows the list):

  • Create a new user and group for your service (e.g., grafana) with a specific user and group ID (I used 9004 for grafana)
  • Add your primary user to the group, so you have read/write permissions over the files
  • Give ownership to the new user: sudo chown -Rv grafana:grafana /opt/grafana
  • Change permissions: sudo chmod -Rv 775 /opt/grafana
  • Make Docker run the container as the proper user (add to your Docker Run): --user=9004 \
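
Here’s a rough command sketch of those steps on Ubuntu, using grafana and ID 9004 as in the example above; the account name myuser is just a placeholder for your own primary user:

# Dedicated group and user for the service, with a fixed ID and no login shell
sudo groupadd -g 9004 grafana
sudo useradd -u 9004 -g 9004 -M -s /usr/sbin/nologin grafana

# Add your own account (placeholder: myuser) to the group so you keep read/write access
sudo usermod -aG grafana myuser

# Hand the data directory to the service user and open group permissions
sudo chown -Rv grafana:grafana /opt/grafana
sudo chmod -Rv 775 /opt/grafana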

At this point, you should be ready to try firing up the container. I keep an instance of Portainer up so I can monitor the new container. The Log is often very helpful in diagnosing issues that keep containers from coming up or working properly.
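
If you prefer the command line, the same log is available with docker logs; the -f flag follows it live:

docker logs -f grafana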

Here’s my full Grafana run example (minus a few settings that aren’t necessary for demonstration purposes):

sudo docker run \
-d \
-p 3000:3000 \
-p 8081:8081 \
--restart=always \
--name=grafana \
--user=9004 \
-e GF_SECURITY_ALLOW_EMBEDDING=true \
-e GF_AUTH_ANONYMOUS_ENABLED=true \
-v /opt/grafana/data:/var/lib/grafana \
grafana/grafana

My container is now happily running and is set up to start on every reboot. If I need to make changes, I can test them in Portainer, but I make sure to keep them documented in my text file. This way, if I ever need to start over with a fresh install of my server, I can do so with a copy of my /opt folder and a collection of saved docker run commands.
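
For the /opt copy itself, something as simple as rsync does the job (the destination path here is just an example):

sudo rsync -a /opt/ /mnt/backup/opt/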

One final but useful piece of info: how to open a shell inside a running container (often loosely called SSHing in). For this example, I’ll use InfluxDB, as it’s the only container I needed to get a shell into for setup purposes. For a container named influxdb, use this command:

docker exec -it influxdb /bin/bash

You then can interact with the container directly. In the case of InfluxDB, this is the easiest way to create new users and databases.
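
For example, assuming InfluxDB 1.x, once inside the container you can start the influx CLI and create a database and user; the names and password here are only placeholders:

influx
> CREATE DATABASE openhab_db
> CREATE USER grafana WITH PASSWORD 'changeme'
> GRANT ALL ON openhab_db TO grafana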

I hope this is helpful! For many of you this is probably repeating things you already know well, but hopefully it helps someone out there.


I think it would be very useful to explain what didn’t work well for you. I don’t use nodeRED but I am using Shinobi in Docker without complaint. Though I’ve only three cameras and I’ve kind of moved on from Shinobi and am using TinyCam on an old tablet instead.

Also, Mosquitto works well for me in a container so it would be interesting to learn what problems you experienced.

One practice I tend to use is to mount a few networking and time-related files read-only into the container, to make sure the containers that need it can see other hosts by name, have the correct time zone, and so on.

  - /etc/hosts:/etc/hosts:ro
  - /etc/passwd:/etc/passwd:ro
  - /etc/localtime:/etc/localtime:ro
  - /usr/share/zoneinfo:/usr/share/zoneinfo:ro
  - /etc/timezone:/etc/timezone:ro
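
Those are compose-style volume entries; if you run containers with plain docker run instead, the equivalent is just a set of -v flags, for example:

-v /etc/localtime:/etc/localtime:ro \
-v /usr/share/zoneinfo:/usr/share/zoneinfo:ro \
-v /etc/timezone:/etc/timezone:ro \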

That can help for some containers where this information matters. For example, I can use the FQDN in openHAB to connect to Mosquitto instead of needing to use the IP address which can let me more easily move it if I need to at a later date.

Absolutely, I’d be happy to share.

nodeRED gave me the most problems. I had a lot of trouble getting the HomeKit nodes working. There are some custom nodeRED Docker containers designed to allow HomeKit to work from inside a container, but they weren’t consistently updated and were finicky to get working. I also had trouble installing other nodes inside the container. Finally, I wanted to run exec commands from nodeRED, particularly since this was impossible from my openHAB container, so it seemed easier to just install it as a service than to keep trying to get the container to work.

As far as Shinobi: I had issues using my graphics card to accelerate the motion detection, and I thought a “metal” install might solve my problem based on the advice from the Shinobi installation page. It didn’t end up solving my hardware problem, and I ended up switching away from Shinobi anyway. So maybe it was a poor choice of words to say it worked better directly on the machine; I did that because the docs recommended it.

Finally, Mosquitto: I actually don’t remember what went wrong. I do remember struggling to get authentication working via the config files but am not sure what my issue was. I think it’s entirely possible I was just tired after a fresh reinstall of my server that I just opted for the easier installation. Perhaps I should try it again with fresh eyes.

Do you make this a practice with all your containers? I can see how this would reduce some issues.

By the way, and I should have said it in my first post - thank you for all your advice and posts on Docker. I certainly wouldn’t have figured any of this out without lots of great advice from you!

For Shinobi, I’m running on VMs and I don’t have a discrete graphics card, so graphics acceleration isn’t an option for me and I never encountered those problems. I think I did experience a need to increase the number of file handles or something like that. I’d have to go back to my Ansible scripts.

Yes, I always do this unless it causes some problems. I’ve not encountered any problems doing this so far. Often, I’ll also mount /etc/passwd depending on how the container handles the runtime user. Sometimes it has to be run as root and then the container itself moves to another user (e.g. openHAB) so using --user doesn’t really work well.

The biggest problem I’ve encountered tends to be with databases. Originally I had the idea that I’d put all the data on a network mount and since I have a mixed network I chose CIFS (i.e. samba). Unfortunately with a CIFS mount everything has the same permissions and PostgreSQL and InfluxDB like to have rw for their user only for the data files. In the end I moved to storing the data on the host and have a cron job to backup the database to my NAS every night instead. I may move this to a NFS mount at some point but what I have running now seems robust enough.
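
As a sketch of what that nightly job looks like (the container name, database, and NAS path are just placeholders):

# /etc/cron.d/db-backup: dump PostgreSQL from its container every night at 02:00
0 2 * * * root docker exec postgresql pg_dump -U postgres mydb | gzip > /mnt/nas/backups/mydb-$(date +\%F).sql.gz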

Thanks! Though I’m by no means a Docker expert. So don’t take anything I say as a best practice. :wink:

And just for the record I’m running three VMs with docker containers:

argus: my home automation server

  • openHAB
  • grafana
  • InfluxDB
  • Mosquitto
  • Portainer Agent
  • Grafana Image Renderer
  • Shinobi

medusa: my media server

  • Guacamole (HTML5 VNC/RDP server, lets me access my machines through the web)
  • Guacd (part of Guacamole)
  • elasticsearch
  • Gogs
  • Next Cloud
  • Redis
  • PostgreSQL
  • Plex Media Server
  • Calibre
  • Portainer Agent

huginn: my virtual desktop

  • code-server
  • Portainer

I’m still fighting with getting elasticsearch to work with Next Cloud so it’s not really being used. Shinobi is running but I’m not doing anything with it at the moment. My NAS VM just runs OMV and doesn’t run any containers. All four VMs are running on the same ESXi server.


Hi,

I’m trying to transfer my current openhabian setup to docker.
So far openHAB runs ok …
Next,
I wanted to add frontail in my docker-compose.yml as follows:

frontail:
  image: "mthenw/frontail:latest"
  container_name: "frontail"
  restart: always
  network_mode: host
  command: --disable-usage-stats --ui-highlight --ui-highlight-preset /frontail/preset/openhab.json -t openhab -n 200 /logs/openhab.log /logs/events.log
  depends_on:
    - openhab
  volumes:
    - "${PWD}/frontail/preset.json:/frontail/preset/openhab.json:ro"
    - "${PWD}/frontail/openhab.css:/frontail/web/assets/styles/openhab.css:ro"
    - "${PWD}/volumes/openhab/userdata/logs:/logs:ro"
  ports:
    - "8001:9001"

It doesn’t work, and when I look in the Portainer log I repeatedly get this error:
standard_init_linux.go:211: exec user process caused “exec format error”

Any idea what I’m doing wrong? If it is permissions related, I don’t know which permissions the frontail container would need.

Thanks in advance
DirkB19

Looks like the docker image is not binary compatible with your architecture.
See e.g. https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/4558

Other possible reasons are mentioned here: https://stackoverflow.com/questions/58298774/standard-init-linux-go211-exec-user-process-caused-exec-format-error
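
One quick way to check whether the image matches your host’s architecture (compare the two outputs):

uname -m
docker image inspect mthenw/frontail:latest --format '{{.Architecture}}'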

Out of curiosity, why run the containers on different VMs? And why use VMs at all?

To remain a bit more on topic:
I moved from running OpenHAB with a few dependencies on an Intel NUC, plus a separate machine as a NAS on steroids, to a single machine running a bare-minimum Ubuntu Server install that just runs Docker.

The containers I use are OpenHAB, Portainer, Shinobi, Plex, Grafana, MariaDB, Duplicati (for on and offsite backups), Unifi-controller, pihole, nefit-easy (separate service for automating my thermostat) and watchtower (for automatically updating some of the images)

The OpenHAB conf folder is a Git repo, connected to a private GitHub repo, so that’s safe. And so is the folder containing the docker-compose file and some configuration folders for the different containers.

Something I still want is a decent reverse proxy, so I get out of the port mapping mess. I gave traefik a try, but couldn’t get it to work the way I wanted.

So I can mess with, restart, break, and otherwise disrupt my home automation services/machines without impacting my son’s ability to access Plex or my wife’s blood sugar readings and treatment calculations (Nightscout running on a different VM not listed above), or the ability for the automated backups to run. It takes a lot of pressure off when something big breaks at the OS level, because not everything goes offline at once. I can go a few days without home automation or without media, but not without both at the same time.

Also, there are certain well-known ports that can’t be remapped and still retain the same functionality. For example, both openHAB and Plex use the same port for network discovery. If you remap one of them, discovery for that one won’t work. Consequently, Plex and openHAB cannot both be fully functional on the same host.

I’ve had mixed luck with Shinobi. Sometimes it will run amok and consume all the file handles on the machine causing everything else (even stuff running in containers) to fail.

It’s also easier for me to manage overall by keeping them separated in VMs. I can take a snapshot before running a big upgrade, which takes seconds, unlike the hours it would take to fully back up a physical machine before a big upgrade.

I like OpenMediaVault for my NAS (another VM running on this machine not listed above) but I don’t want to do everything in OMV.

I no longer have a Windows, Mac, or Linux laptop. All we have now are Chromebooks. Having a virtual machine desktop I can log into periodically to do some administration, development, or long-running tasks (e.g. converting a video file so it works better with Roku) is what lets me do this. It’s super nice to have a machine that lasts for 12 hours on a battery charge and still be able to access more power or run long-running tasks when I need to.

So for me, I have lots of good reasons to run separate VMs. Should any of the above change, because I have everything configured in Ansible (see Ansible Revisited) if I ever do want to consolidate it’s super simple to do so.

I think most OH users use nginx, with a minority using Apache. I use HAProxy, but mostly because it’s built into pfSense, which is my firewall. Now that Let’s Encrypt allows wildcard certs and I pay for a domain name, it’s fairly easy: instead of messing with https://some.dyn.dns/openhab, which doesn’t always work for all services, I can use https://openhab.some.dns.
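
If you go the nginx route, a minimal proxy block looks roughly like this; the hostname, backend address, and certificate paths are placeholders to adapt:

server {
    listen 443 ssl;
    server_name openhab.some.dns;
    ssl_certificate     /etc/letsencrypt/live/some.dns/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/some.dns/privkey.pem;

    location / {
        proxy_pass http://192.168.1.10:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}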