Seeking Recommendations for NUC + Virtualization for openHAB2

I’ve been reading the many messages and threads about migrating away from my current RPis to an Intel NUC. It’s now 2019 and the suggestions I’ve read seem a bit dated, so I’m asking for your latest thoughts and recommendations.

Here’s what I’m leaning towards. Before I spend the $$, I thought it would be wise to validate everything against the many years of experience out there.

Thinking of buying a headless system (I’ll SSH into it):

  • Intel NUC 8i5 BEK (2.3 GHz Intel Core i5-8259U, 4-core/8-thread; $370)
  • 16 GB RAM (2× Crucial 8 GB DDR4-2400 SO-DIMM modules, $53 each)
  • 250 GB drive (Samsung 970 EVO NVMe M.2 internal SSD; $78)

This should certainly give me lots of room for 10+ VMs. I would like to migrate at least four of my separate RPis currently in use to the NUC:

  1. openHAB2 (about 90 Things)
  2. syslog-ng server (capturing logs from my Ubiquiti EdgeRouter)
  3. my smart-home website (nginx + MySQL database)
  4. Leviton OmniPro II Java “listener”, which captures all the pushed notifications and writes the processed events into the MySQL database
  5. weather-related RPi, also sending data to the MySQL database
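For reference, the syslog-ng piece is just a small config along these lines (the port, file path, and source name are illustrative, not my exact setup):

```
# /etc/syslog-ng/conf.d/edgerouter.conf (hypothetical path)
source s_router {
    # EdgeRouter is pointed at this host on UDP/514
    network(ip(0.0.0.0) port(514) transport("udp"));
};

destination d_router {
    file("/var/log/edgerouter.log");
};

log {
    source(s_router);
    destination(d_router);
};
```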

Looking for your recommendations and discussion regarding which virtualization software to use on this quad-core NUC. I’d then run openHABian in the openHAB2 VM, and probably Ubuntu Server as the OS for the other VMs.

I like the idea that if I mess up a VM, I can simply reload a snapshot to get it running again.

I am new to VMs and the NUC, so this should be a good learning experience for me too.

THANKS in advance for your thoughts and help!

On my virtualized J4205 I have 4 OpenVZ instances running multiple microservices. I’ve pushed it hard and, to date, it has peaked at 16 W of power use. With the stuff on your list, I’d personally opt for a Xeon E3 v5: more compute, pretty much the same (or lower) power consumption, and it’s basically intended for servers.

Totally up to you whether you want to run that many on the NUC. I still have a few services running on an Orange Pi Zero and some on RPis because, power-consumption-wise, that’s still cheaper than a much larger virtualized server.

For the virtualization, I used Proxmox for a long time. I have OpenVZ (v7) VPSes in the cloud, so I figured I’d try OpenVZ at home as well. I might go back to Proxmox though; that GUI is lovely.

Probably not with only 16 GB of RAM. I have a VM server with only 5 VMs and I’m using almost all of my 24 GB of RAM.

If you are heading for that many VMs you should plan on more RAM, or consider using containers and fewer VMs instead. I actually do this too: each of my VMs has a job, and I isolate the services from each other using containers.
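To make the “fewer VMs, more containers” idea concrete, a single “web services” VM could bundle two of the services on your list with a minimal docker-compose file. This is just a sketch; the service names, image versions, and paths are illustrative, not from anyone’s actual setup:

```yaml
# docker-compose.yml (hypothetical) for one "web services" VM
version: "3"
services:
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
    volumes:
      - ./site:/usr/share/nginx/html:ro   # static smart-home site
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: changeme       # placeholder; use a secret in practice
    volumes:
      - mysql-data:/var/lib/mysql         # persist the database across restarts
volumes:
  mysql-data:
```

A snapshot of the VM then captures both services and their data at once.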

If you want that many VMs, I’d also advise against ESXi, as the free license only allows allocating so many vCPUs at a time, and the limit is well below 10.

I know at least one user on the forum uses Xen. Were I to start over I’d probably choose Xen or KVM.
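If you go the KVM route, the snapshot-and-rollback workflow mentioned above is just a couple of virsh commands (the VM name here is made up):

```
# Take a named snapshot of a VM called "openhab2" (hypothetical name)
virsh snapshot-create-as openhab2 pre-upgrade --description "before OH update"

# List the snapshots for that VM
virsh snapshot-list openhab2

# Roll back if the upgrade goes wrong
virsh snapshot-revert openhab2 pre-upgrade
```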

I’m running on a Lenovo desktop server with upgraded RAM. At the time it was cheaper than a NUC, supported more RAM (it’s far from maxed out), and it is essentially silent, which was a key requirement for me.

VMware ESXi is the hypervisor.

All of my VMs are Ubuntu 18 Server unless they happen to be software appliances (e.g. OpenMediaVault).

All of my services run inside Docker containers. All of my VMs and RPis are built and managed using Ansible.
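As a sketch of what “built and managed using Ansible” can mean in practice, a playbook that lays down a baseline Docker install on every VM might look like this (the group name and task details are assumptions, not the actual playbook):

```yaml
# playbook.yml (hypothetical): baseline Docker install on all VMs
- hosts: vms
  become: true
  tasks:
    - name: Install docker.io from the distro repos
      apt:
        name: docker.io
        state: present
        update_cache: true

    - name: Ensure the Docker daemon is running and enabled at boot
      service:
        name: docker
        state: started
        enabled: true
```

Rebuilding a broken VM is then just re-running the playbook against a fresh install.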

Because most of my containers are pinned to specific hardware, I don’t see the advantage of taking the next step to Kubernetes or OpenShift or the like. I’m not running a data center here.

I could be wrong, but I thought it was 2 physical CPUs and a max of 8 vCPUs per VM. That should be more than enough for the VMs Dave mentioned. I haven’t used Xen or KVM in a number of years now, but it used to be that, out of the box, ESXi was easier to use for someone new to virtualisation.

All I know is I’m maxed out at 8 vCPUs total. If I try to create a new VM it tells me that I’m out of vCPUs.

There may be some other issue with your setup. I’ve just checked mine: it is currently running 12 vCPUs across 6 VMs.

As a test, I just created another Ubuntu VM with a further 4 vCPUs (taking the total on the ESXi host to 16). Again, no warnings or anything; it just accepted it.

I am running ESXi 6.7 (HP build).

If there is, I don’t know what it could be. The error I got clearly said that I didn’t have enough licences for more vCPUs. That was a long time ago, though; maybe they changed the license? I’m currently running 6.7 too, but the last time I tried was at least as far back as 6.5, maybe even 6.0. I don’t remember specifically. I’ll have to experiment when I get home.

Again, I’ve not been following it carefully, but I recall there were some licensing changes to the free version in the not-too-distant past.

Maybe worth getting a new license key?

According to this (Free VMware ESXi 6.7):

Tech Specs and Limitations

  • No commercial support (But great community support)
  • Free ESXi cannot be added to a vCenter Server
  • Some API functionality is missing
  • No physical CPU limitation
  • Number of logical CPUs per host: 480
  • Maximum vCPUs per virtual machine: 8

Other limitations, like the 32 GB memory limit or the 2-CPU-socket limit, are no longer in place.


Awesome! Thanks for posting; it’s good to know. I might go and add another vCPU to a couple of my existing VMs. I’m using about 80% of my physical CPU though, so I don’t have a whole lot of room to grow.

Maybe a naive question, but I see a lot of use of ESXi and little of VirtualBox. I realize the former is a type 1 hypervisor and the latter a type 2, but virtualizing OH and other home applications is not, I would believe, a very taxing task. VirtualBox is open source and does not seem to run into limits like those mentioned above. Just curious, and my apologies if this is off-topic.

VirtualBox is one of many choices for adding virtualisation on top of an existing OS if you want to occasionally spin up a different environment to play with. But if you want a box whose primary task is “run these VMs all the time”, then a type 1 hypervisor is needed, and there are plenty of options for that too!

If VirtualBox does what you need, great. But when I compared it in the past with Virtual PC and VMware (this was before Hyper-V, so things may well have changed), VirtualBox had the least compatibility and the largest number of headaches getting it running, whereas VMware did everything I needed. So I’m running ESXi on my server.

I’ve got 9 VMs spun up at the moment, each with at least 2 vCPUs assigned (a couple have 4, another has the full 8), and I’m still able to add more VMs.

It depends a lot on what you want out of your host OS. If all the host is going to do is run VMs, and you are OK with a headless server, a type 1 hypervisor (ESXi, Xen, or KVM; Xen and KVM are, I believe, both open source as well) is the better choice, because the host OS will use as little of the machine’s resources as possible while still providing a nice administration UI through your browser.

Obviously, they all have APIs and ancillary tools (usually very expensive) to support things like migration (i.e., moving a running VM to another physical host), dynamically creating and destroying VMs, and so on, none of which matters all that much in a home lab.

You can run VirtualBox like this on a headless server, but it becomes awkward and not as nice an experience. If you want the nice GUI, you need to run X and GNOME or some other window manager on your server, which consumes resources that would be better applied to your VMs.
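For what it’s worth, headless VirtualBox is driven entirely through the VBoxManage CLI, so it is workable, just less comfortable. A typical session looks roughly like this (the VM name is made up):

```
# Start a VM without attaching any GUI
VBoxManage startvm "openhab2" --type headless

# See what's currently running
VBoxManage list runningvms

# Ask the guest to shut down gracefully via ACPI
VBoxManage controlvm "openhab2" acpipowerbutton
```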

Familiarity also comes into play. Many of us have exposure to these types of tools professionally, and VMware is often what we have to use at work. As was once said of IBM: no one ever lost their job for buying VMware. At least in my industry.

While VirtualBox itself is open source, I believe the Extension Pack (USB passthrough, remote display, and such) is not, and it requires a license for commercial use.

VMware is just the opposite: the hypervisor is proprietary, but open-vm-tools is open source. I always found that interesting.

It’s been a while, but not all that long, since I’ve done something with VirtualBox. It is usually what I go to when I need to spin up a type 2 VM, and I’ve not run into any incompatibilities with it. VMware Workstation has a nicer interface for VM creation, but I found VirtualBox to be adequate.

I’ve only ever run Linux and FreeBSD VMs though, so if you need Windows or macOS, YMMV.

I’m running ESXi on an old Dell OptiPlex 7020 mini desktop (i5-4590, 24 GB RAM). I have 6 VMs and could still add a few more.