Intel NUC vs Laptop for (single) VM running OpenHab and other automation related services

I know, this is a million-times-discussed topic, but I couldn’t find a clear answer on, or understanding of, the benefits of one platform over the other.

I researched for a few hours what THE platform for me would be, and came to the conclusion that a NUC would fit all my demands: small, powerful, silent.

I am planning on running a virtual machine on it (mainly for easy backups and hardware independence(?)) with openHAB, MQTT, Node-RED, some database, reports, scripts supporting home automation, etc. No Docker for now; I think I would give up because of the complexity there.

Everything on this machine will be dedicated to home automation. I will eventually build another server for my other projects, with more muscle and virtualization, network things, and whatnot.

Today I am running openHAB on a 12-year-old laptop without a VM, and everything works great. For a NUC build I got to a price of around 350-400 EUR/USD with a 100 GB SSD and ~8 GB RAM, which is basically the price of a laptop with similar performance (e.g. an ASUS Vivobook with an i3-8130U). But in addition, a laptop has a built-in monitor and keyboard for emergency access, and a battery that helps with soft shutdowns (my old laptop’s battery still works; even though it only holds 10 minutes, that is plenty for shutting down the system).

Since it has been over 10 years since I last purchased a laptop, would I be missing something if I go for a laptop over a NUC?
E.g. much better virtualization, or USB Z-Wave stick support, or something else I do not see or use today?

It would be stored inside a server rack; I think both are physically very small.


It really depends on what you plan to do…
For myself, I only use openHABian for my terrariums, to monitor and control them. The snakes are happy :slight_smile: :snake:
My Raspberry Pi 3B can run ~50 items with InfluxDB and Node-RED with no problems so far.

If I were planning to make more things in my flat “smart”, I would upgrade to a NUC/VM/UP².


I moved from a Raspberry Pi, and due to the databases and logs and other things that will be there, something more robust is what I would like. I prefer to have a dedicated machine for home automation (not sure exactly why; I guess in case of hardware failure I could use some laptop as an instant replacement until I fix the problem). A dedicated PC for home automation also protects it from myself :slight_smile:

It will control the whole house. Today I have around 70 “devices” (not all physical): lights, heating, shades, sensors… I run Z-Wave, IKEA, MySensors (DIY Arduino), Xiaomi, 433 MHz…

All other things being equal, I’d go for a NUC over a laptop. The lower power consumption and better space efficiency are worth it. You will need to run the laptop with the lid open or risk overheating, which makes putting it in a rack or a closet challenging.

Either choice will be way overkill to run just your home automation.

I probably wouldn’t bother with the VM and would just install everything natively. If you think Docker is added complexity, I’d use the same argument for the VM. If you will only ever run the one VM, that’s a whole lot of complexity (more than Docker, frankly) for not that much benefit.

But whichever way you go, I’d recommend using a Debian-based distro as your OS and following the manual instructions for installing openHABian. It’s not just for Raspberry Pis. It will save you a lot of pain and a lot of work.


Thank you @rlkoshak, I actually picked up a lot of ideas from your previous posts on the topic.

Now I am intrigued about the VM, as it was my “solution” for backing up against both software and hardware problems…
The plan was to set up the VM and have it run backups of the entire image; that way, if something goes wrong, I could just roll back the entire machine (I am a big fan of Clonezilla/Norton Ghost-type backups).
Also, if the hardware goes bad, I could just buy a new NUC, set up the VM as before, and “just” copy the entire machine over. Most of the software will not interact directly with hardware; I have only a few USB devices to pass into the VM. I am imagining this scenario almost as if I were on a Raspberry Pi and it died: I’d just buy another and plug in the SD card (or would I?)

My argument on VM vs. Docker (mind you, I haven’t used either very much) is that I set up the VM only once, since the hardware doesn’t change that much, and just install the software as needed (creating snapshots along the way).

Instead of having to configure containers for each service and connect them with each other, etc. (again, I could be very wrong here).

With a no-VM solution I could back up using Clonezilla (a bit tricky to do headless, although I haven’t put much time into this). But how do I stay hardware independent then (if the NUC dies and I have to temporarily move to the main server or a backup laptop, for example)? Are there better solutions for hardware independence?
Does a VM provide hardware independence in the practical “home office” world at all, or is that only applicable in data centers with a lot of similar machines?

Still seems like a lot of overhead and complexity just for backups, when there are tons of full-system backup and restore packages available. openHABian comes with scripts to install and configure Amanda, and assuming you don’t have problems mounting an external file system to save the backups to, you can be up and running with automated backups in minutes. Amanda runs fantastically well headless. And you don’t have to choose a hypervisor, figure out how to get it to run as a service, deal with snapshot backups, and so on.

It’s just my opinion and not a strong one.

Now, if you plan on growing this NUC into doing more than just one job, then I do highly recommend going with a Type 1 hypervisor like KVM, Xen, or ESXi and dedicating a separate VM to each purpose. For example, I have everything running on a desktop server machine with separate VMs for my NAS, media services (e.g. Plex, Gogs, Calibre, Nextcloud), home automation, and a desktop VM.

Note, I’m still using Docker to install all of the services running on this VM. They are not an either-or choice. Containers have a lot to recommend their use even if you are already using a virtualized environment.

About the only thing different, at a high level mind you, between Docker and just installing the software using apt-get is that you have a different command to “install” and run it, and you manage the service using docker instead of systemctl.

With an apt-based install you still need to configure everything with the IP address/hostname and port for the services. Docker has a lot of extras you can take advantage of, like data volumes and private networks between running containers; these do add a lot of complexity and are considered best practice, but you don’t have to use any of them.
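For a taste of what those two features look like in practice, here is a minimal sketch (names are just examples):

```shell
# Hypothetical sketch of the two Docker features mentioned above.

# Named data volume: survives container removal and is easy to back up.
docker volume create openhab_userdata

# User-defined network: containers attached to it can reach each other
# by container name, without publishing those ports to the rest of the LAN.
docker network create automation
```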

To install and run OH in Docker you just need to:

  • create some folders on your system to store persistent data, mainly addons, userdata, conf.
  • run
docker run \
        --name openhab \
        --net=host \
        -v /etc/localtime:/etc/localtime:ro \
        -v /etc/timezone:/etc/timezone:ro \
        -v openhab_addons:/openhab/addons \
        -v openhab_conf:/openhab/conf \
        -v openhab_userdata:/openhab/userdata \
        -d \
        --restart=always \
        openhab/openhab:2.3.0-amd64-debian

This will download and run OH 2.3 as a service. All the important files get stored in those folders you created above. And to access OH you just use http://hostname:8080, the same as you would with a native install.

Now if you want Mosquitto, it’s the same deal. Run docker run with the right parameters and then point OH and all your MQTT clients at hostname:1883.
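As a rough sketch (the volume names and mount paths here are my assumptions; check the eclipse-mosquitto Docker Hub page for the specifics), it would look something like:

```shell
# Hypothetical Mosquitto equivalent of the openHAB command above
docker run \
    --name mosquitto \
    -p 1883:1883 \
    -v mosquitto_config:/mosquitto/config \
    -v mosquitto_data:/mosquitto/data \
    -v mosquitto_log:/mosquitto/log \
    -d \
    --restart=always \
    eclipse-mosquitto
```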

You will have to learn the right options to pass to the docker command based on what you want to do, which might be a small speed bump. But conflicts between library versions and the like will never happen. Worrying about dependencies, installing libraries, and needed services is no longer a concern. Service backup (i.e. just backing up OH instead of the full server) is as simple as making a copy of the folders you mount into the container. I even use git to initialize my OH install, so if I ever have to move OH for some reason, I’m back up and running with two commands:

  • git clone <my openhab2 repo> /opt/openhab
  • docker run ...

Frankly, most of the software we are running is reasonably hardware independent already, as long as you are not changing CPU architecture. You can’t move x64-compiled software to an ARM system, for example. But that is true of VMs too.

So the migration from one machine to another is pretty painless. Let’s assume you are moving from one NUC to another. If you have Amanda, it’s as simple as installing Amanda on the new NUC and restoring the backup. Even if it isn’t a NUC, I believe it would be possible to migrate to almost any other machine of the same CPU architecture. Linux is really good these days at detecting and loading the right hardware drivers in such a migration. But I can’t say I’ve done it myself; I’ve only heard of it being done.

My preferred approach is to script out the config of my machines using Ansible. So to migrate to a new machine, I install the base OS and run my Ansible script, which checks out my configs from git and calls the docker run commands to bring my services back up. But one doesn’t need Ansible for this. Most of these services take two to four commands total to get up and running again; a bash script would be plenty. I use Ansible to configure the many RPis in my system, so it was a natural extension to configure my VMs the same way. With this approach I can even change CPU architectures (e.g. move from my VM to an RPi) and all I’d have to change is the Docker image I run.
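To illustrate, a "rebuild" bash script along these lines might look like this (the repo URL, paths, and image tag are placeholders, not my actual setup):

```shell
#!/bin/bash
# Hypothetical minimal rebuild script: restore configs from git,
# then bring the service back up in Docker.
set -e

git clone git@git.example.com:me/openhab2.git /opt/openhab

docker run -d --name openhab --net=host \
    -v /opt/openhab/conf:/openhab/conf \
    -v /opt/openhab/userdata:/openhab/userdata \
    -v /opt/openhab/addons:/openhab/addons \
    --restart=always \
    openhab/openhab:2.3.0-amd64-debian
```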

I like this approach because it keeps my backups smaller, I have the full history of all the config changes I’ve made to all of my services in configuration control, and I can quickly redeploy on a fresh OS, for example, when moving from one major release of Ubuntu server to the next, rather than needing to be stuck on the old version or risk the often buggy upgrade process. I can also start with a clean slate very easily and with minimal effort.

Not that I’m aware of. I think it’s largely because hardware independence isn’t that big of a problem in this space. At least with Linux, doing the equivalent of popping the hard drive from one machine into another machine of the same architecture is not that big of a deal. Most of the software we run these days doesn’t really care. And with capabilities like containers, hardware and OS independence is even greater.

That’s my two cents worth. There are lots of opinions on the subject.


I was secretly hoping you would elaborate on the subject @rlkoshak , as I usually do not have any questions after reading your posts :slight_smile:
I appreciate it.

The plan is to have the NUC as a dedicated system for home automation, and one HPE MicroServer that would run similar things to what you are running (NAS, media services for Kodi, web dev server, network monitoring and maintenance, etc.). Since the bigger server will be VM-based (and if I go for HPE, it is AMD-based I think), I figured I might as well put the NUC on a VM too.

However, I now have a much clearer understanding and overview of it all, and you got me interested in Docker, mostly for service backup. Right now Node-RED is basically my main automation service, and I think I would not be able to restore its backups, as it either sounds or really is quite complicated. I also like the thought of “I can quickly redeploy on a fresh OS, for example, when moving from one major release of Ubuntu server to the next, rather than needing to be stuck on the old version or risk the often buggy upgrade process”. Since the “set it up once and never again” scenario is realistically impossible, I might as well look into Ansible and set it all up automagically (right now I have a bunch of notes that I write as I install each service).

Now I have to get the budget approved by the CFO (my wife :slight_smile: ), and the only thing left to think about is whether I will install the Docker containers on the NUC with or without a single VM.

Btw, I remember you mentioned considering Prometheus/Nagios as a monitoring system; have you decided on it yet?

I recently started using Webmin for Linux systems administration, and like it a lot. I now have 5-6 Linux machines (mostly RPis: Kodi, smart mirror, cameras…) that I am configuring with it, but the number will only grow in the future, especially when the big server comes along, so the idea of some centralized system for management/monitoring/maintenance is very interesting to me.
Any suggestions on what else is needed for easier Linux systems management for a “medium-skilled” IT person (read: I could learn and understand it eventually, but lack the time and often the motivation :slight_smile: )?

To give you a taste, here is my OH Ansible role:

- name: Create openhab group using gid 9001
  group:
    gid: 9001
    name: openhab
    state: present
    system: yes
  become: yes

- name: Create openhab user
  user:
    comment: 'openHAB'
    createhome: no
    name: openhab
    shell: /bin/false
    state: present
    system: yes
    uid: 9001 # uid of openhab user inside the official container
    group: openhab
  become: yes

- name: Add the openhab user to the dialout group
  command: usermod -a -G dialout openhab
  become: yes

- name: Add {{ share_user }} to the openhab group
  command: usermod -a -G openhab {{ share_user }}
  become: yes

- name: Set permissions on openhab data folder so we can check out into it
  file:
    path: "{{ openhab_data }}"
    state: directory
    owner: openhab
    group: openhab
    mode: u=rwx,g=rwx,o=rx
  become: yes

- name: Checkout openhab config
  git:
    repo: "{{ openhab_conf_repo }}"
    dest: "{{ openhab_data }}"
    accept_hostkey: yes
  become: yes

- name: Change ownership of openhab config
  file:
    path: "{{ openhab_data }}"
    owner: openhab
    group: openhab
    recurse: yes
  become: yes

- name: Create expected folders if they don't already exist
  file:
    path: "{{ item }}"
    state: directory
    owner: openhab
    group: openhab
  become: yes
  become_user: openhab
  with_items:
    - "{{ openhab_data }}/conf"
    - "{{ openhab_data }}/userdata"
    - "{{ openhab_data }}/addons"
    - "{{ openhab_data }}/.java"

- name: Create database
  influxdb_database:
    hostname: "{{ influxdb_ip_address }}"
    database_name: "{{ openhab_influxdb_database_name }}"
    state: present
    username: admin
    password: "{{ influxdb_admin_password }}"

# TODO there is currently a bug which prevents us from using influx in the container
- name: Create openhab user
#  command: influx -username admin -password {{ influxdb_admin_password }} -database '{{ openhab_influxdb_database_name }}' -execute "CREATE USER {{ influx_openhab_user }} WITH PASSWORD '{{ influx_openhab_password }}'"
  command: curl -XPOST "http://localhost:8086/query?db={{ openhab_influxdb_database_name }}&u=admin&p={{ influxdb_admin_password }}" --data-urlencode "q=CREATE USER {{ influx_openhab_user }} WITH PASSWORD '{{ influx_openhab_password }}'"

- name: Give openhab permissions on openhab db
#  command: influx -username admin -password {{ influxdb_admin_password }} -database '{{ openhab_influxdb_database_name }}' -execute "GRANT ALL ON {{ openhab_influxdb_database_name }} TO {{ influx_openhab_user }}"
  command: curl -XPOST "http://localhost:8086/query?db={{ openhab_influxdb_database_name }}&u=admin&p={{ influxdb_admin_password }}" --data-urlencode "q=GRANT ALL ON {{ openhab_influxdb_database_name }} TO {{ influx_openhab_user }}"

- name: Create grafana user
#  command: influx -username admin -password {{ influxdb_admin_password }} -database '{{ openhab_influxdb_database_name }}' -execute "CREATE USER {{ influx_grafana_user }} WITH PASSWORD '{{ influx_grafana_password }}'"
  command: curl -XPOST "http://localhost:8086/query?db={{ openhab_influxdb_database_name }}&u=admin&p={{ influxdb_admin_password }}" --data-urlencode "q=CREATE USER {{ influx_grafana_user }} WITH PASSWORD '{{ influx_grafana_password }}'"

- name: Give grafana read permissions on openhab_db
#  command: influx -username admin -password {{ influxdb_admin_password }} -database '{{ openhab_influxdb_database_name }}' -execute "GRANT READ ON {{ openhab_influxdb_database_name }} TO {{ influx_grafana_user }}"
  command: curl -XPOST "http://localhost:8086/query?db={{ openhab_influxdb_database_name }}&u=admin&p={{ influxdb_admin_password }}" --data-urlencode "q=GRANT READ ON {{ openhab_influxdb_database_name }} TO {{ influx_grafana_user }}"

# TODO download Jython and add to env in docker call

- name: Start openHAB
  docker_container:
    detach: True
    devices:
      - "/dev/ttyUSB0:/dev/ttyUSB0:rwm"
      - "/dev/ttyUSB1:/dev/ttyUSB1:rwm"
    image: openhab/openhab:2.2.0-amd64-debian
    log_driver: syslog
    name: openhab
    network_mode: host
    pull: True
    recreate: True
    restart: True
    restart_policy: always
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - "{{ openhab_data }}/conf:/openhab/conf"
      - "{{ openhab_data }}/userdata:/openhab/userdata"
      - "{{ openhab_data }}/addons:/openhab/addons"
      - "{{ openhab_data }}/.java:/openhab/.java"

Note this is just a snippet really. It has dependencies on the mosquitto, influxdb, and grafana roles, so all of those will get installed before OH gets installed. And it is a bit more involved than is necessary because I go out of my way to make sure that there is a user on the host that matches the user inside the Docker container. That is what most of the tasks are dealing with: creating the openhab user, lining up all the uids and gids, and giving that user and my main login ({{ share_user }}) the right group memberships.

You will also see I’ve automated the creation of the databases OH needs and giving Grafana permission to read the OH database.

Variables (stuff in {{ }} ) are defined in a different file.

I really like Ansible. My entire configuration is documented and source controlled in git, and to set up a new machine (or re-set-up an old one) all I have to do is install the base OS, create my login user, set up the ssh certs so I can have passwordless login, and run my Ansible scripts. And since Ansible also runs on Mac (see Homebrew for how to install it) or on Windows (easiest is to install Ubuntu from the Windows Store and run it from there), I can work on and set up any machine on my network from almost any other machine.

And if you do it right, you can upgrade your entire infrastructure by just editing some variables (e.g. the Docker images to pull) and running the same playbooks again. Ansible, when done right, is idempotent. This means that if there is nothing to change, it doesn’t change anything. So in essence, you define what the machine should look like and Ansible will make it so.
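For instance (the variable, file, and playbook names below are made up for illustration), an infrastructure-wide upgrade could be as small as:

```shell
# Bump the image version in a vars file, e.g. group_vars/all.yml:
#   openhab_image: openhab/openhab:2.4.0-amd64-debian
# ...then re-run the same playbook. Because the tasks are idempotent,
# only the things that differ from the described state get changed.
ansible-playbook -i inventory site.yml
```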

I should admit, though, that I have most definitely not done it right. Until I get a chance to merge things, I have two sets of roles: a build set and an update set.

Decided? Yes. I’ll probably do Prometheus since I’m already using Grafana and InfluxDB. Done anything about it? No. I still have a few higher priority projects:

  • Upgrading my OMV machine (holy cow, I’m two versions behind)
  • Getting NextCloud or Alfresco up and running
  • Getting Shinobi up and running
  • Rebuilding my garage door controller on an ESP8266, since my RPi is having issues (probably an SD card going bad)
  • Reworking my backup

I really like Ansible for a lot of this, at least for the setup and maintenance part. Once a week or so I just run my update playbooks and everything gets updated, including tripwire databases and such. I can even do it with one tap on my phone thanks to OpenVPN and the JuiceSSH app.

For monitoring I don’t really do anything special. I use the Network binding in OH to monitor a few ports, but nothing beyond that right now. I like the idea of Prometheus but my needs are not strong enough yet to make it a priority.


Hey @rlkoshak, one question about deploying a new system with Ansible: I’m curious how you get back the rules/items and other custom things. Is it just a “copy” role and a restart of OH, or similar?
I guess that is part of the backup/restore/deployment process, but since it sounds very simple, I figured I might as well confirm how it works :slight_smile:

Btw, I started playing with Vagrant and Ansible with the help of the source posted here: Complete automated IoT Server Setup
And boy oh boy, am I loving it. It is the first time I have no fear of Linux or of destroying something, and I am willing to tweak and experiment. I now love Linux, as I can e.g. test permissions and firewall rules without worrying that I will break something. I just vagrant destroy and vagrant up, and everything is there exactly as I defined it before. And I can do it 10 times if I want to, and it takes ~5 min to have Linux running and configured how I want it; like a version-controlled Linux. And I can run it on any PC I have at my disposal.
A bit late to the VM+Ansible party, but it is a mind-blowing experience for me…


I check my conf folder and much of the userdata folder (etc, jsondb, zwave, persistence, etc.) into a personal git server (Gogs, actually). So I use a git clone task to check them all out. I run in Docker, so both can be checked out to the same folder and stored in the same repo.

I like configuration controlling my configs because I’m a software developer at heart and it is nice having the history of changes to go back through.

My Gogs server gets separately backed up so I don’t actually do anything special to backup my OH config on the OH server itself. I just make sure I check in and push my changes once they are made and tested.

But this is unique to my setup. YMMV.

That is how I felt when I first discovered Test Driven Development (I’m old). All of a sudden I can make changes with near abandon and instantly know when something breaks. It’s no panacea, but it felt very freeing.

I’ve not done anything with Vagrant yet (I usually deal with Type 1 hypervisors, and unless I’m wrong, Vagrant mainly works with Type 2).

Ansible isn’t only useful for VMs, either. I have half a dozen RPis of various flavors. If an SD card goes south on one, or I get a new one, all I have to do is install Raspbian, add the ssh and wpa_supplicant files to /boot, boot the machine, add my user and copy over my ssh certs (I should probably keep a copy of an image with just this done), and run my build playbook. Boom: a clone of my standard RPi config (logging and temp moved to a tmpfs, some security changes to configs, etc.) or a fully configured RPi, depending on which roles I apply.
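On a fresh Raspbian SD card, the pre-boot part of that amounts to something like the following (the mount point, hostname, and playbook names are assumptions; adjust for your system):

```shell
# Enable ssh on first boot by dropping an empty file on the boot partition
touch /media/$USER/boot/ssh

# Pre-seed the WiFi credentials; Raspbian picks this file up on first boot
cp wpa_supplicant.conf /media/$USER/boot/wpa_supplicant.conf

# After the Pi boots: copy over the ssh key, then run the build playbook
ssh-copy-id pi@raspberrypi.local
ansible-playbook -i inventory build.yml --limit raspberrypi
```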

If you are not yet, I recommend checking your playbooks in to some sort of configuration control as well. Playbooks are code too and that will give you a history of your configs just like it gives you a history of changes to your system configuration.

I’m glad you are finding it useful. I find it super useful too, to the point where I do not take system images of any of my machines. I back up my Ansible playbooks and the data they generate (e.g. databases), but not the machines themselves. Why back that up when I can just rebuild it in about the same amount of time it would take to copy a backed-up image over?

Hey @rlkoshak, thanks again for sharing the Ansible playbook. I am going slowly but surely in setting up my server (and with much more control this time). Since you are using containers a lot, I was wondering how you go about restricting access to services (with regard to the firewall, I mean)?

I installed some helper services and found out that Docker bypasses the Ubuntu firewall (I manage it with ufw), meaning that all services are available to all computers on the network. I would like to restrict some containers to only localhost/other containers, while some should be accessible to the entire network (like Mosquitto), and some only to, e.g., my workstation (like Node-RED and Portainer). I read on the internet that firewalling with Docker is not trivial to achieve, and honestly I was looking forward to managing the firewall with Ansible (for an easy overview of what is open).

Any input?

I don’t really. I should and it is on the todo list but it’s not a high priority and I haven’t really looked into it that much.

I would probably do it in my pfSense LAN firewall when/if I do implement something.

Hi @dakipro,

I use Traefik as a reverse proxy to access my services. It works with Docker, so you can configure it so that every container gets its own frontend and backend, and you can set container-specific options as labels on your containers.

By default, a service is not accessible from the host, another container, or any other device; you have to expose the ports you’d like to use to access it. So delete those port settings and put your containers on one or more Docker networks that Traefik is also a member of. I haven’t done it yet, but I think you can restrict access in Traefik:


Just to add my two cents: a while back I had a laptop with an SSD and wanted to keep openHAB from wearing it out, so I leveraged fstab to mount the various folders onto a USB drive connected to the laptop.
Using that same method it’s also possible to offload some parts to a network share (/etc/openhab2, /var/log/openhab2, etc.). For those unfamiliar with fstab: it allows you to mount shares as if they were local directories, so it is possible that the only thing to back up is the fstab file itself.
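As a sketch (the share names, credentials file, and options are placeholders; see `man fstab` and `mount.cifs` for the real options), the /etc/fstab entries could look like:

```shell
# Hypothetical /etc/fstab lines mounting openHAB dirs from a NAS (CIFS) share:
#   //nas/openhab-conf  /etc/openhab2      cifs  credentials=/etc/smb-creds,uid=openhab,gid=openhab  0  0
#   //nas/openhab-logs  /var/log/openhab2  cifs  credentials=/etc/smb-creds,uid=openhab,gid=openhab  0  0

# Apply the new entries without rebooting:
sudo mount -a
```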

Currently I have openHAB running on a Raspberry Pi 3 with the logs mounted on a NAS Windows share, and I haven’t encountered any issues.


Thanks @dominikkv, this is very interesting, as it might solve both access control and reverse proxying (I never liked nginx). I glanced over the docs and I do see the option
traefik.frontend.whiteList.sourceRange=RANGE, which sets a list of IP ranges that are allowed access. I also see people praising the very easy Let’s Encrypt integration, which is also interesting.
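Untested on my side, but based on that option in the Traefik 1.x docs, restricting a container to a single workstation might look something like this (all names and addresses here are examples, not from the posts above):

```shell
# Hypothetical: only 192.168.1.50 may reach Portainer through Traefik
docker run -d --name portainer \
    --network home_traefik \
    -l "traefik.enable=true" \
    -l "traefik.frontend.rule=Host:portainer.loc" \
    -l "traefik.frontend.whiteList.sourceRange=192.168.1.50/32" \
    -l "traefik.port=9000" \
    portainer/portainer
```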

I will look more into it, but it looks very promising for now, thanks again!


Hey @dominikkv, I installed Traefik and so far so good. I got some containers behind it, and I think I understand it a bit.
What looks most logical to me is to use PathPrefixStrip as the rule, e.g. PathPrefixStrip=/portainer/,
and assign that to a container. That way when I access my server, I go straight to Portainer.
How are you organizing services behind it? In my case, how would you set up openHAB, which I guess should be at the root (at least I think some plugins/services expect it to be at the root and on ports 80/8080)?
How do I move Traefik itself to another port/URL? Any other tips?


Hi, a question: if I have a VM, does it save me from issues like SD card failures?
As far as I know, a VM deals mostly with RAM while it’s on…

Hey @dakipro, I do not run openHAB with network_mode: host, precisely to avoid those problems. The Docker Hub page says:

Important: To be able to use UPnP for discovery the container needs to be started with --net=host .

but I am a fan of creating my Things with config files (to save the config to git), so I do not need any discovery :yum: If you put it in host mode, every port is available to the whole network, making Traefik useless. But if you really want to do it, you can set the ports openHAB listens on by declaring the environment variables OPENHAB_HTTP_PORT: "8080" and OPENHAB_HTTPS_PORT: "8443".

Furthermore, in Traefik I do not route by path but by host. To achieve this I have set the DHCP of my router to hand out my own DNS server, which introduces a top-level domain to my network and resolves those domains to Traefik. (I use Pi-hole, which has the bonus that ads get filtered out.) This way users do not have to know the IP address of my server and can simply type openhab.loc :sunglasses:

So this is my docker-compose:

version: '3'

services:
  openhab:
    image: openhab/openhab:2.4.0.M5-amd64-debian
    networks:
      - traefik
    environment:
      OPENHAB_HTTP_PORT: "8080"
      OPENHAB_HTTPS_PORT: "8443"
    labels:
      traefik.enable: "true"
      traefik.frontend.rule: "Host:openhab.loc"
      traefik.docker.network: "home_traefik"
      traefik.port: "8080"
      traefik.backend: "openhab"

  pihole:
    image: pihole/pihole:4.0.0-1_amd64
    environment:
      ServerIP: ""
      TZ: "DE"
      WEBPASSWORD: "test123"
      DNS1: ""
      DNS2: ""
    cap_add:
      - NET_ADMIN
    ports:
      - "53:53/tcp"
      - "53:53/udp"
    labels:
      traefik.enable: "true"
      traefik.frontend.rule: "HostRegexp:pihole.loc,{catchall:.*}"
      traefik.frontend.priority: "1"
      traefik.backend: "pihole"
      traefik.docker.network: "home_traefik"
      traefik.port: "80"
    networks:
      - traefik

  traefik:
    image: traefik:1.7.4-alpine
    ports:
      - "80:80"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    networks:
      - traefik
    labels:
      traefik.enable: "true"
      traefik.frontend.rule: "Host:traefik.loc"
      traefik.docker.network: "home_traefik"
      traefik.port: "8080"
      traefik.backend: "traefik"

networks:
  traefik:

Not to convince you to change, but for future readers of this forum: auto-discovered Things get saved to a text file in $OH_USERDATA/jsondb, which can be saved to git to preserve the history of changes; many of us, including me, do exactly that.

Again, I’m not saying you need to change what you do. Do what makes sense to you. But it is factually incorrect to imply that automatically discovered Things cannot be saved to git with all the benefits that doing so brings.
Again, I’m not saying you need to change what you do. Do what makes sense to you. But it is factually incorrect to imply that automatically discovered Things cannot be saved to git and achieve all the benefits from doing so.