Still seems like a lot of overhead and complexity just for backups when there are tons of full system backup and restore packages available. openHABian comes with scripts to install and configure Amanda, and assuming you don't have problems mounting an external file system to save the backups to, you can be up and running with automated backups in minutes. Amanda runs fantastically well headless. And you don't have to choose a hypervisor, figure out how to get it to run as a service, deal with snapshot backups, and so on.
It’s just my opinion and not a strong one.
Now, if you plan on growing this NUC to do more than just one job, then I do highly recommend going with a Type 1 hypervisor like KVM, Xen, or ESXi and dedicating separate VMs to each purpose. For example, I have everything running on a desktop server machine with separate VMs for my NAS, media services (e.g. Plex, Gogs, Calibre, Nextcloud), home automation, and a desktop VM.
Note, I'm still using Docker to install all of the services running on this VM. They are not an either/or choice; containers have a lot to recommend them even if you are already running in a virtualized environment.
About the only difference, at a high level mind you, between Docker and just installing the software using apt-get is that you have a different command to "install" and run it, and you manage the service using docker instead of systemctl.
With an apt based install you still need to configure everything with the IP address/hostname and port for the services. Docker has a lot of stuff you can take advantage of, like data volumes and private networks between running containers; those do add a lot of complexity and they are considered best practices, but you don't have to use any of that.
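For instance, day-to-day service management looks roughly like this (the container name openhab is an assumption here, matching the run command below). With an apt based install:

sudo systemctl restart openhab2
journalctl -u openhab2

With Docker:

docker restart openhab
docker logs openhab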
To install and run OH in Docker you just need to:
- create some folders on your system to store the persistent data (addons, conf, and userdata; e.g. under /opt/openhab)
- run
docker run \
--name openhab \
--net=host \
-v /etc/localtime:/etc/localtime:ro \
-v /etc/timezone:/etc/timezone:ro \
-v /opt/openhab/addons:/openhab/addons \
-v /opt/openhab/conf:/openhab/conf \
-v /opt/openhab/userdata:/openhab/userdata \
-d \
--restart=always \
openhab/openhab:2.3.0-amd64-debian
This will download and run OH 2.3 as a service. All the important files get stored in those folders you created above. And to access OH just use http://hostname:8080, same as you would with a native install.
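If you haven't created those folders yet, something like this will do (the /opt/openhab base path is just an assumption, chosen to match the git example further down):

sudo mkdir -p /opt/openhab/addons /opt/openhab/conf /opt/openhab/userdata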
Now if you want Mosquitto it’s the same deal. Run docker run with the right parameters and then point OH and all your MQTT clients to hostname:1883.
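As a minimal sketch (the image, host folders, and container name here are assumptions, and you still need to supply a mosquitto.conf in the config folder):

docker run \
--name mosquitto \
-p 1883:1883 \
-v /opt/mosquitto/config:/mosquitto/config \
-v /opt/mosquitto/data:/mosquitto/data \
-v /opt/mosquitto/log:/mosquitto/log \
-d \
--restart=always \
eclipse-mosquitto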
You will have to learn the right options to pass to the docker command based on what you want to do, which might be a small speed bump. But conflicts between library versions and the like will never happen. Worrying about dependencies, installing libraries, and setting up needed services is no longer a concern. Service backup (i.e. just backing up OH instead of the full server) is as simple as making a copy of the folders you mount into the container. I even use git to initialize my OH install, so if I ever have to move OH for some reason I'm back up and running with two commands:
git clone <my openhab2 repo> /opt/openhab
docker run ...
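And the "copy of the folders" backup mentioned above really is just a copy; something like this is enough (path and archive name are assumptions, and ideally the container is stopped first so files aren't changing mid-copy):

tar -czf openhab-backup.tar.gz /opt/openhab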
Frankly, most of the software we are running is reasonably hardware independent already, so long as you are not changing CPU architecture. You can't move x64-compiled software to an ARM system, for example. But that is true of VMs too.
So the migration from one machine to another is pretty painless. Let's assume you are moving from one NUC to another. If you have Amanda it's as simple as installing Amanda on the new NUC and restoring the backup. Even if it isn't a NUC, I believe it would be possible to migrate to almost any other machine of the same CPU architecture. Linux is really good these days at detecting and loading the right hardware drivers in such a migration, though I can't say I've done it myself; I've only heard of it being done.
My preferred approach is to script out the config of my machines using Ansible. So to migrate to a new machine I install the base OS and run my Ansible script, which checks out my configs from git and calls the docker run commands to bring my services back up. But one doesn't need Ansible for this. Most of these services take two to four commands total to get up and running again, so a bash script would be plenty; see the sketch below. I use Ansible to configure the many RPis in my system as well, so it was a natural extension to configure my VMs the same way. With this approach I can even change CPU architectures (e.g. move from my VM to an RPi) and all I'd have to change is the docker image I run.
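For illustration, a bash version of that redeploy might look roughly like this (the paths and the image variable are assumptions; the repo placeholder is the same one as above):

#!/bin/bash
# Rebuild the openHAB service on a fresh machine.
# Pick the image that matches the CPU architecture of the new host.
IMAGE=openhab/openhab:2.3.0-amd64-debian

# Restore the configs from version control.
git clone <my openhab2 repo> /opt/openhab

# Bring the service back up as before.
docker run \
--name openhab \
--net=host \
-v /etc/localtime:/etc/localtime:ro \
-v /etc/timezone:/etc/timezone:ro \
-v /opt/openhab/addons:/openhab/addons \
-v /opt/openhab/conf:/openhab/conf \
-v /opt/openhab/userdata:/openhab/userdata \
-d \
--restart=always \
"$IMAGE"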
I like this approach because it keeps my backups smaller, it keeps the full history of every config change I've made to my services under configuration control, and it lets me quickly redeploy on a fresh OS (for example, when moving from one major release of Ubuntu Server to the next) rather than being stuck on the old version or risking the often buggy upgrade process. I can also start with a clean slate very easily and with minimal effort.
Not that I'm aware of. I think it's largely because hardware independence isn't that big of a problem in this space. At least with Linux, doing the equivalent of popping the hard drive out of one machine and into another machine of the same architecture is not that big of a deal. Most of the software we run these days doesn't really care. And with capabilities like containers, hardware and OS independence is even greater.
That’s my two cents worth. There are lots of opinions on the subject.