Have to start over hardware-wise, looking for ideas

[Skip until “start here” if you don’t care about why and only care about what]

Here I was feeling all smug after having successfully managed to swap out my pfSense server for OPNsense and, in the process, remap my DHCP leases, change my home domain name, switch from OpenVPN to Tailscale, and move from pfBlockerNG on pfSense to Pi-Hole.

Except for one stupid mistake, it all worked without errors. If I had known it would be this easy I would have moved to OPNsense a while ago.

Well, sometime last night I got a surge or power hiccup or something which caused a number of weird things to occur (e.g. Pi-Hole kept restarting for no apparent reason, basically killing my internet until I managed to get into OPNsense to reroute DNS around it). I’ve managed to recover everything (Pi-Hole just started working again without intervention, which is weird) except my home server’s USB, which is simply gone.

That’s a problem because this server is hosting my NAS, and half of my drives, in particular the backup drives, are USB devices. It’s also hosting my openHAB, and of course the Z-Wave controller/Zigbee coordinator is USB.

Thankfully I’m not running anything actively off of USB drives so all my services are still working except for openHAB. I’ll migrate my openHAB instance to a machine with working USB.

[start here]

So I’m taking this as an opportunity to rethink my setup.

Current Configuration:

Key: * = not running in Docker containers, ± = critical service

Machine      | Type                     | OS                     | Services
charybdis    | standalone Intel mini PC | FreeBSD                | OPNsense*±
esxi         | server-class desktop PC  | ESXi 6.5               | Type 1 Hypervisor*
esxi:fafnir  | Virtual Machine          | Debian Buster          | OpenMediaVault*±
esxi:argus   | Virtual Machine          | Ubuntu Lite 20.04      | openHAB, Mosquitto, Zabbix
esxi:medusa  | Virtual Machine          | Ubuntu Lite 20.04      | Calibre, Nextcloud, LibrePhotos, Plex, GitLab, PostgreSQL±, Redis, ElasticSearch
esxi:arachne | Virtual Machine          | Android                | TinyCam Pro*
muninn       | Raspberry Pi 4 8GB       | Raspberry Pi OS 64-bit | VNC (virtual desktop)*, VaultwardenRS±, Heimdall, Pi-Hole±, Tailscale Exit Node/Subnets±

Almost everything is installed and configured using Ansible, and as long as I can access my backups, moving services to different machines is not that big of a deal.

Notes:

  • Nextcloud, LibrePhotos, and VaultWarden depend on PostgreSQL. Nextcloud also depends on Redis and ElasticSearch.
  • The folders used by Plex, Calibre, and Nextcloud are NFS mounted from OMV.
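
For reference, those mounts can be pinned in /etc/fstab on the client VMs; a minimal sketch (the export path and mount point here are made up for illustration):

    # /etc/fstab on medusa: mount a share exported by OMV on fafnir
    # nofail/_netdev keep boots from hanging when the NAS is unreachable
    fafnir:/export/media  /mnt/media  nfs  defaults,nofail,_netdev  0  0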

What I Like:

  • the flexibility to adjust the resources on the VMs as needed (as long as the host has enough resources)
  • critical services (±) are split between different machines so reboots and restarts have minimal impact
  • the flexibility to move services around as needed
  • it’s quiet. Everything is mounted under my desk; I’ve no place to put a server room, so fan noise is a problem.

What I Don’t Like:

  • rebooting OMV is very disruptive and often requires a reboot of the media server too
  • I’ve done nothing on the RPi 4 to limit SD card writes largely because I’m using it as a virtual desktop. I’m nervous about running PiHole on that machine.
  • Prior to adding the RPi 4 a few months ago I was at the limits of what the ESXi server could handle in terms of CPU and RAM.

What would you do?

I’ve my own ideas, but I’m open to anything others suggest.

My hard requirements are:

  • must be quiet
  • space efficient, everything is on shelves mounted to the bottom of my desk
  • some services are CPU intensive at times (LibrePhotos auto-tagging, Nextcloud OCR, Plex video encoding)

Some approaches I’ve been pondering:

  • get a “real” NAS (e.g. a QNAP with the firmware replaced by OMV) and replace the VMs with a cluster of RPi 4s (so many cables :scream:)

  • get a “real” NAS, a NUC (or equivalent), cluster of RPi 4s (still lots of cables)

  • replace the tower server machine and rebuild what I had (I’d go KVM this time around instead of ESXi)

  • forego the VMs entirely and just run everything on bare metal, if expansion is needed I can spill over to RPis.

  • Kubernetes (or equivalent) cluster for most things, with services like openHAB and the virtual desktop on standalone machines

I am also willing to forego running all the services in Docker, and may go that route if I go the cluster-of-RPis route. But I do like how uniform the management of all the services is, and how easy it is to upgrade, back up, and migrate everything. I basically have the same backup and restore script for all these services; the only difference is the paths.
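
To illustrate what I mean by uniform management, here is a minimal sketch of the kind of per-service backup script I’m describing (the service name and paths are placeholders, not my actual setup):

    #!/usr/bin/env bash
    # Generic backup for one Docker-based service: stop it, archive its
    # bind-mounted data, start it again. Only the variables change from
    # service to service.
    set -euo pipefail
    SERVICE="vaultwarden"           # placeholder container name
    DATA_DIR="/srv/${SERVICE}"      # placeholder bind-mount path
    DEST="/mnt/backups/${SERVICE}"  # placeholder backup target (e.g. NFS mount)

    mkdir -p "${DEST}"
    docker stop "${SERVICE}"        # quiesce so the archived files are consistent
    tar -czf "${DEST}/${SERVICE}-$(date +%F).tar.gz" -C "${DATA_DIR}" .
    docker start "${SERVICE}"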

I’m also willing to replace services with alternatives if it makes sense. For example, I’m already planning to move Tailscale to the opnsense machine as soon as support for exit nodes is added on FreeBSD.

I hope this is close enough to openHAB related to belong here. If not I’ll remove it. But I know there is a broad set of experiences and approaches in use here and would love to hear some ideas. I’ll be happy to post progress replies once I decide what to do and start migrating over. It might help answer some of the similar questions that crop up from time to time.


I just did a major re-org of my network and server infrastructure, hence a few ideas to get your thought process further based on what I do at home.

  • The simple thing first: I would keep OPNsense (in your case) standalone and bare metal (I just moved my pfSense to bare metal)
  • You could run all services in a hypervisor as containers => I do this on Proxmox via LXCs, which gives me the benefit of very lean “VMs”; each service has its own LXC, which allows me to stop/restart/restore however I like without interrupting everything
  • You could also leverage something like Docker Swarm or Kubernetes to get HA for your services, but this obviously requires multiple servers (which could be RPis); I got HA running via Proxmox directly, but all HA has its limitations (i.e. OH relies for me on zigbee2mqtt, which needs a USB stick that is only on one server…)
  • If you’ve got the resources, I would create a dedicated NAS and skip OMV. I did just that a few weeks ago and moved my OMV install to bare-metal Proxmox (I know you shouldn’t use the hypervisor as a service host, but it’s only NAS duties plus media anyway…). This increased reliability and, to be honest, I did not use many OMV features beyond NFS/Samba, which can be done via the CLI
  • I also moved all media services (Jellyfin, the arrs) to my NAS directly so that there is no need for NFS shares across VMs (or at least not many, which reduces mount issues during restarts, esp. unplanned restarts)
  • Now that I use LXCs and VMs extensively, I would never go back to all bare metal for services, simply because backup and restore is so much easier, esp. with snapshots. You make a mistake, and the snapshot is restored in a matter of seconds (see the sketch below).
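
For reference, that snapshot/rollback cycle on Proxmox is a couple of commands; a minimal sketch (101 is a placeholder container ID):

    # Take a snapshot of an LXC before a risky change
    pct snapshot 101 pre-upgrade
    # ...if the change goes wrong, roll back in seconds
    pct rollback 101 pre-upgrade
    # (for full VMs the equivalents are qm snapshot / qm rollback)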

Now my background is:

  • 1 dedicated Dell R210 II server for pfSense
  • 3 additional Dell R210 IIs for Proxmox HA and all VMs (I split VM groups onto each server, i.e. everything smart-home related “lives” on Server 2, etc.)
  • 1 Lenovo SFF which lives now in a 2U case for NAS duties (Proxmox install)
  • The 3 Dells plus the Lenovo are in a cluster so that I can easily move around VMs/LXCs
  • Also, the 3 Dells have a shared storage for replication every x minutes

Looking at your requirements for space/power efficiency and, at the same time, some oomph for certain services, another idea would be to run the less power-hungry stuff on Pis and the things that need more processing on a small-form-factor device. You could still get HA this way, with the drawback that if HA kicks in, some of the processing-hungry services might be moved to less powerful devices (i.e. the Pis), but at least they would still be up.

Not sure if above helps :slight_smile:


Absolutely. I ran pfSense as a VM at first, until I decided to add a bit more RAM to it. I brought down the machine and boom, the vSphere client quit working, meaning I couldn’t finish the setup. I quickly decided to move it to its own box at that point. Luckily the VM was already configured to restart automatically on boot, so a quick power cycle got me out of that bind.

I’ve certainly considered that benefit as well, which is one of the things driving me to consider alternatives. openHAB is the only service that has a specific hardware requirement, and I try to make my home automation robust enough that it’s not the end of the world if it’s down for a second. One of the challenges here, though, is that a number of the services I run don’t have ARM64 builds, and I’m finding building my own to be quite challenging (it took me days of searching to find the right base image to get Guacamole to work on armhf, and I gave up on arm64). So I’d need some Intel/AMD machines in the mix, at which point things become more expensive. I’ll have to look into Kubernetes/OpenShift/Swarm to see if/how they handle clusters with mixed architectures.
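
From what I’ve read so far (worth verifying), Kubernetes at least handles this by labeling every node with its CPU architecture, so x86-only workloads can be pinned with a node selector. Roughly:

    # Each node carries a kubernetes.io/arch label (amd64, arm64, ...)
    kubectl get nodes -L kubernetes.io/arch

    # Pin a deployment that only has x86 images to amd64 nodes
    # ("guacamole" is just a placeholder deployment name)
    kubectl patch deployment guacamole --patch \
      '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/arch":"amd64"}}}}}'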

I used to run my own, but I’ve grown to like the services provided by OMV. I use it for Samba/NFS and as a Time Capsule to back up the Macs. The killer feature for me is monit, though, which tells me immediately when there are problems.

It’s not that I can’t set all that up on my own, but OMV made it really easy to do so and to manage it. I might look into FreeNAS and other alternatives but I’ll probably keep some sort of layer on top of it all just to save time. I’d also like to take this opportunity to go from JBOD to a real RAID config (I’ll have to get some new disks) which is one reason I’m half considering a QNAP or the like. But I definitely see advantages to having most of the storage local to the Proxmox machine, though that goes away if a high availability approach is taken.

It does, and it gives me some research paths to go down. It would be really nice to have stuff like PostgreSQL and Pi-Hole configured with HA.

If you need Intel/Atom, you could look into Lenovo Tinys; they are often really liked in the selfhosted community etc., as they are, well, tiny, but still powerful enough to run a lot of services.

Understood. On the RAID side, I run most of my servers on ZFS mirrors, which works quite well (for now). On my NAS I’ve got 4 drives: 2 in a ZFS mirror for important data, and 2 running as standalone ZFS pools joined with mergerFS, used mostly for media storage (which can be recovered easily).
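
For reference, the mirror side of that is a one-liner; a sketch (the pool name and disk IDs are placeholders):

    # Create a two-disk ZFS mirror named "tank" (use stable by-id paths)
    zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
    zpool status tank   # verify both halves of the mirror are ONLINE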

Pi-Hole is quite easy, as there is also a bash script to sync all your settings between 2 instances.
This way you only need to “maintain” one. Both Pi-Hole IPs can then be entered as DNS servers in OPNsense/pfSense.
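
I haven’t dug into that exact script, but the core of such a sync is small; a rough sketch of the idea, assuming stock Pi-Hole paths and a placeholder primary IP (the community scripts handle locking, error checking, etc.):

    # Run on the secondary: pull the blocklist DB and local DNS records
    # from the primary, then reload the resolver.
    PRIMARY=192.168.0.204    # placeholder IP of the primary Pi-Hole
    rsync -a pi@${PRIMARY}:/etc/pihole/gravity.db  /etc/pihole/gravity.db
    rsync -a pi@${PRIMARY}:/etc/pihole/custom.list /etc/pihole/custom.list
    pihole restartdns reload-lists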

PostgreSQL is not running for me in HA; instead I use a 2nd LXC as a PostgreSQL replication server.
Hence if the first one fails, the second one, which is placed on different server hardware, takes over automatically.
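
For anyone wanting to set up the same thing, a sketch of how a streaming replica is typically seeded (hostnames, users, and the Debian-style data directory are placeholders; check the PostgreSQL docs for your version):

    # On the standby LXC. pg_basebackup -R writes standby.signal plus
    # primary_conninfo, so the clone comes up as a streaming replica.
    # Promotion when the primary fails is a separate step.
    sudo systemctl stop postgresql
    sudo -u postgres rm -rf /var/lib/postgresql/13/main   # destructive: wipes old data dir
    sudo -u postgres pg_basebackup -h pg-primary -U replicator \
        -D /var/lib/postgresql/13/main -R -P
    sudo systemctl start postgresql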


A small update.

It turns out one of my external hard drives had run amok (it actually started to buzz). After unplugging it and rebooting, all my other USB devices started working again. This is good news, as it gives me more time to be deliberate in reconfiguring my system. The bad news is I’m not 100% sure all the files on that HDD are backed up on my second backup drive. I might need to resort to SpinRite to check.

I went down the ZFS rabbit hole and it seems pretty clear that ECC RAM is really needed to avoid corruption. That pushes me to server-class machines or really expensive NAS machines (TrueNAS Mini starts at $750 without disks). I just can’t justify that amount in my humble setup. And since I don’t want to mess with DIY, I’m pretty much left using a more traditional RAID 1 config. My plan is to replace all my aging HDDs with this NAS, so I need at least 6 TB of effective storage. 8 TB would be even better, but the price per drive goes up pretty fast.

I’m currently considering a Synology DS220+ with a couple of 6TB Seagate IronWolf HDDs. QNAP seems to have cheaper options, but QNAP has made a bit of a reputation as not giving two s^!ts about its users’ security, so I’ll pay a little more to a company that does a little better, even though I’m perfectly capable of running a QNAP in a secure way.

I’ve half decided on eventually getting two Intel i7 (or AMD equivalent; I don’t know AMD’s lineup as well, but I want at least four cores) mini PCs, starting with one. There are at least a couple of models from HP that support up to 32 GB of DDR4 RAM, which is what I have in my desktop server right now. I can get them fully populated with 16 GB RAM for < $450. I’d love to find some of these without having to pay the Windows tax, which I hope would bring the cost down a bit, but I’ve not done a full search yet.

I spent some time looking into Proxmox and am happy to see there are Ansible tasks for interacting with it. It’s also based on KVM, which is a direction I wanted to go anyway, so that checks two boxes. I’m liking it a little more than Kubernetes, Swarm, and OpenShift, as it seems a little more flexible. So I think I’m going to go with that as my base OS.
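
As a taste of what driving Proxmox from Ansible looks like (hedged: I’ve not used these modules in anger yet, and every value below is a placeholder), the community.general collection can create an LXC even from an ad-hoc command:

    # Requires the proxmoxer Python package on the control host
    ansible localhost -m community.general.proxmox -a \
      "api_host=pve.example api_user=root@pam api_password=secret \
       node=pve hostname=testct password=changeme \
       ostemplate='local:vztmpl/debian-11-standard_11.7-1_amd64.tar.zst'"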

So the plan is to start with the NAS and a mini PC. I’ll move my storage off of OMV to the Synology and start slowly migrating my services to Proxmox. I’ve an RPi 4 with 8GB RAM for spill-over for now, and I plan on getting another mini PC in the future; I’ll rely on Proxmox’s HA capabilities for the critical services.

Thanks for the ideas!

Sounds like a good approach.

I did run a Synology NAS for years without issues, still running the same HDDs in my NAS Server now.

On the Proxmox side, if you want to cluster later, make sure that you cluster the machines before they get any VMs etc. on them, as the joining machine needs to be blank.
Also, if you want shared storage later for HA replication (not talking Ceph), you might need to re-organize later once you’ve got more machines (there is a short but good YouTube video on it from Craft Computing).
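
For reference, the clustering itself is just a couple of commands (sketch; the IP is a placeholder):

    # On the first node:
    pvecm create my-cluster
    # On each joining node (which must not have any guests yet):
    pvecm add 192.168.0.10    # IP of the first node
    pvecm status              # verify quorum afterwards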

Also for backups, have a look at Proxmox Backup Server (PBS); I’ve been running it since its release without issues for all of my LXCs and VMs. It will also give you incremental backups, compared to the Proxmox server’s normal backup tool.
All of my backups then go into the cloud, encrypted and incremental, via Duplicati.

Edit: on mini PCs, have a look at eBay, Facebook Marketplace, etc., depending on where you are located. Companies often decommission them and they land there in bulk (i.e. my R210 IIs are from a lot of 45 R210 IIs that were sold in a Facebook server group).


Keep in mind that Synology has stopped supporting several types of USB devices. From Synology’s release notes:

USB devices (Wi-Fi dongle, Bluetooth dongle, 3G/4G dongle, USB DAC/speaker, and DTV dongle) are no longer supported. If your Synology NAS is currently connected via a wireless dongle, it will be disconnected after the update.

This includes z-wave/zigbee dongles.

That’s good to know, but I’ve never considered running openHAB on the Synology. I’ve seen nothing but problems from people trying to do so and I want nothing to do with that. I’ll have that mini PC or one of my RPis to run openHAB. The current plan is that it’ll run in a container under Proxmox on a separate machine.

So that limitation doesn’t really matter to me. I just pulled the trigger, and even decided against adding more RAM. At most I’d be plugging a USB hard drive into it (to load it up that first time as I retire all my 10+ year old HDDs), but even if that’s not supported it’s no big deal to me. I only need it to access the storage over the network. Everything else will be hosted elsewhere, even if it could run on the Synology.

I think about my services in tiers.

Tier | Services                                    | Purpose
0    | OPNsense, Pi-Hole, WiFi AP, etc.            | Networking services
1    | ESXi/Proxmox, NAS                           | Storage services
2    | PostgreSQL, Redis, ElasticSearch, Mosquitto | Services that other services depend on
3    | Everything else, including openHAB          |

The stuff in higher-level tiers depends on the services running in lower tiers. Therefore I’d like to keep tiers 0, 1, and 2/3 on separate hardware as much as I can (though Pi-Hole won’t run on FreeBSD, so I can’t run it on my firewall). I don’t mind if tiers 2 and 3 mix on the same hardware. Since I don’t want Tier 2/3 stuff running on Tier 1, that definitely eliminates running OH on the Synology.

Note that just because a service is in tier 3 doesn’t mean it’s not important. VaultwardenRS is critical to me but it’s definitely in Tier 3.

I’m super late to this party, and somewhat less experienced than you guys, but it’s good to see that some of what you’re talking about is also what I’ve done at home!

For context, my main devices (with their main services) are:

  • HP Microserver N40L - Proxmox
  • Raspberry Pi Zero - backup PiHole and backup Wireguard server
  • Minis Forum Z83 - Jellyfin, directly connected to the TV via HDMI

The N40L is an ancient bit of kit, with only a 2 core AMD Turion II processor, and 8GB of RAM. But it’s an absolute champion.

This is exactly how I have Proxmox setup too.

One-thousand times this! I am running the following services on this old machine, all in separate Ubuntu or Debian based containers, and some within a Docker container (within the LXC container) too.

  • Mosquitto
  • PiHole - ads, and also DHCP server
  • openHAB
  • zigbee2mqtt - USB CC2531 passed through
  • Wireguard - primary, setup using PiVPN
  • Unifi controller - mostly shutdown, as it’s a ridiculous resource hog
  • Fileserver (Samba)
  • Flame - instead of Heimdall, which I found very bloated. Doesn’t have the live status cleverness though.
  • Torrent stack
  • PhotoPrism
  • Syncthing
  • NGINX Proxy Manager
  • Broadlink-MQTT
  • Matrix (Synapse)
  • Vaultwarden
  • Jitsi
  • Portainer
  • Coturn

With all the above enabled (excl. Unifi), Proxmox reports ~4.5GB RAM utilised and roughly 10% idle CPU. The most taxing service is PhotoPrism, but only when it’s processing new photos. Jellyfin was moved to the Z83 because that has a QuickSync-enabled CPU for low-overhead video transcoding; Jellyfin did work on the N40L, but only for direct-play videos.

I use the RPi Zero for automatic failover of PiHole (the router allows two DNS servers to be set), and manual failover of Wireguard (as in, I have to manually select the second Wireguard server in my Wireguard client if the primary one is down).

This is something that’s been in the back of my mind, but you’ve put it so simply in text! For me OPNSense would be in an even higher tier all on its own, but that’s because

  • Internet connectivity is the only thing that is critical for this household - everything else is fun decoration!
  • I’ve never tried using OPNSense.
  • I’m still using the ISP-supplied modem/router/switch, so if both PiHoles were to bork I can just reset the DNS servers to some 3rd party, re-enable the DHCP server, and quickly get internet connectivity restored. Similarly with the WiFi: if the Unifi AP borked, I’d just switch the ISP unit’s WiFi back on.

I don’t think I’ve added too much to the conversation here, but I just wanted to get across that:

  • For most services running in Proxmox LXCs you don’t need a monster of a machine.
  • If you don’t need the status feature of Heimdall there are other less bloated options. Homer is one, I’m using Flame which looks nice and minimalist.

Similar utilization stats for me.
My CPUs are usually very low percentage-wise; only Jellyfin transcoding will saturate them.
My RAM, though, is normally at 90%+ on all of my Dells (8GB) and 50% on my Lenovo NAS (32GB); that’s because of ZFS, as it utilizes any free RAM it can get.
The services I run are listed in an older post, though it’s not fully up-to-date anymore.

Isn’t the RPi the most unreliable thing in your setup? They run hot as hell under heavy load, and those SD cards are much like a lottery: “Will I lose all my data today?” Is there any kind of second-hand market for servers in your region? Small/medium enterprises often sell them for ridiculously low prices because of taxes, amortization costs and so on, so a small server rack could cost something like $50-150 if you are lucky.
I would never move to Synology from OMV; I’m afraid that at any second some corp could decide to sell my data to anyone willing to spend 0.5 cents on it, or stop keeping the software up to date, or make you pay for every byte you store, or do anything else stupid.
Do you really have to keep the mainframe under your desk? It could be hidden in a wardrobe or somewhere else so it doesn’t bother you with its fans and HDDs. I used to keep my desktop inside something like a nightstand with only one 20cm fan; now I’m sitting on my desktop, as it’s inside my sofa.
And, oh, thank you for this post; I just finished my openHAB setup (with your help), and now I have to set up all those curious apps you mentioned.

Independent of the reliability of the SD card on the Raspberry Pi, you do want a backup instance of Pi-Hole (the machine might go down for various reasons besides a corrupted SD card). It is such a critical component that it can cause so much pain if it is down. I’ve been running 2 Pi-Hole instances for a while, and it has worked very well.

Here’s the note I took when I set up the backup instance.
Assumption: the primary Pi-hole is also the DHCP server.

On the primary Pi-hole server, create a new file:

    sudo vim /etc/dnsmasq.d/02-pihole-dhcp-backup-pihole.conf

Enter the following text:

    # Manually added Pi-hole backup server.
    # https://discourse.pi-hole.net/t/secondary-dns-server-for-dhcp/1874/4
    dhcp-option=6,192.168.0.204,192.168.0.211
    # Stop logging; the SD card will last longer.
    quiet-dhcp
    quiet-dhcp6
    quiet-ra

The first IP is the primary Pi-hole server; the second one is the backup.
Restart FTL: sudo /etc/init.d/pihole-FTL restart
Refresh the IPs on the other machines on the network so they get both DNS servers from the DHCP server.
To confirm that a machine has both DNS IPs, run cat /etc/resolv.conf

Like many others:
I have Proxmox VE, and the rest is either containers or VMs on it. I love snapshot backups and being able to revert quickly.
And of course being able to update each service on its own, plus stop/start each service.
The computer is a regular AMD Ryzen 5, but with a lot of RAM.
I also use it as a media server, so all the HDDs are in this computer (with a 10-port SATA extension card).
It’s coupled with a 10GbE network card to a switch. Yes, overkill, but I have several cameras attached, so I do not want the internal network connection to be the bottleneck to the computer.
All USB sticks (Z-Wave, Zigbee, etc.) are set up so they always get the same device name (see the sketch at the end of this post).
A small UPS protects against power hiccups.
Pretty much a very cheap (relatively) solution that has not failed me at all.
And quiet (AIO water cooling)!
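
In case it helps anyone, those stable names are usually done with a udev rule keyed on the stick’s USB vendor/product IDs; a minimal sketch (the IDs shown are one example Z-Wave stick, adjust to your hardware):

    # Find the stick's vendor:product IDs
    lsusb

    # Create a rule that always symlinks it to /dev/zwave
    sudo tee /etc/udev/rules.d/99-usb-sticks.rules <<'EOF'
    SUBSYSTEM=="tty", ATTRS{idVendor}=="0658", ATTRS{idProduct}=="0200", SYMLINK+="zwave"
    EOF
    sudo udevadm control --reload-rules && sudo udevadm trigger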

@rlkoshak , Why the switch from pfsense to opnsense?

A bit late to the party… but my 2 cents too. I have been happily running OPNsense on APU2 boards from PC Engines. I like how robust it is, and HAProxy is super handy. It’s the home network’s robustness that makes me recommend separate HW for it. These boards are low power and silent, yet powerful.

I noticed you considered OpenShift. I work with it; I wish I had it at home :slight_smile: . Just to make sure you know, it can run VMs as well nowadays. There is very little need for VMs at home any more; there are even pre-cooked containers for pretty much anything home related. And nowadays OpenShift can be single-node too.

I’ve been a happy FreeNAS user for over a decade, but I think this is my last FreeBSD. I want native containers. At this time I run all such workloads on FreeNAS in Fedora IoT, a minimal Linux made to run containers only. Personally I don’t actually need Kube-level scheduling at home, so Fedora IoT is a good lightweight way to manage home automation. Perhaps worth a check for you too? I manage all the containers there with Ansible.

What’s the fan noise situation with the Microserver? The size is about right, but the price is a little higher than I’d like. I can get a couple of micro PCs with i7 quad- or hex-core CPUs and 16 GB (upgradable to 32 GB) each for not too much more. For the cost I’m thinking that approach still might be better, but I’ve not made the decision yet and am open to alternatives. I also like the idea of having two machines so I can do some HA, or at least some redundancy for Pi-Hole and PostgreSQL. Those two seem to have the biggest impact when something goes wrong.

What does Portainer do that Proxmox doesn’t? I experimented with Portainer in my current system a bit, and it just didn’t provide much of anything that I didn’t get from Ansible for setup and Zabbix for monitoring. But things may change when I move to Proxmox.

You will notice that the only things in tier 0 are the things that are required for Internet access for me too. If any one of those three goes down, there is no internet access. Everything else is higher level. Even when I add some redundancy with Pi-Hole I’ll still treat it as tier 0, because without DNS there may as well not be internet access. Though I can quickly flip a switch to make OPNsense the primary DNS server when something goes wrong (something I’ve had to do a few times since installing it).

I’m definitely happier with OPNsense compared to pfSense. I just don’t trust pfSense as a company any more, and the way they rolled out Wireguard really turned me off. More importantly, HAProxy kept restarting every 30 minutes, causing all sorts of problems with remote access to Vaultwarden.

Heimdall was just the first one I found. I don’t find it too bloated in the ways that really matter to me. It starts up plenty fast, runs in a container on the RPi 4, and it’s pretty easy to configure through the browser. But I’m also not enamored with it so will definitely check out the other options. I use Zabbix to monitor stuff.

And the only reason I’m running it at all is when I’m on my internal network it’s a pain to remember the port numbers for all the services running on the different machines.

For me, not so much. I’ve a bunch running all over the house doing all sorts of things, from RetroPie to running a camera and the garage doors, and I’ve been using the RPi 4 as my virtual desktop (and a place to run a few spill-over services) and haven’t had any problems in years. They only need to restart when I do a dist-upgrade and the kernel changed. I usually have uptimes of around 90 days before I have to do that.

I’ve only had one failed SD card and that was on the camera RPi and that was because I was doing a lot of SD card writes. I’ve reconfigured it since then to not do that.

If one pays just a little bit of attention to writes, a decent quality SD card (meaning not a counterfeit) will last for a decade or more.

And I never have any of my machines set up where I’ll lose all my data one day. SSDs and HDDs fail too. I always have every file that matters backed up on at least two separate drives. And all my machines are built using Ansible, so at most it’s half an hour of unattended configuration time to rebuild any one of my machines.

Yes but none of these types of machines meet my two primary requirements. They must be small and they must be almost silent. Everything found on the surplus market is going to be a screaming (as in loud) rack mounted server. I’ve no place for that in this house.

At which point I’ll move off of it. I ran the numbers but I can get something that is smaller, quieter, and more power efficient which requires far less time for me to set up if I get a Synology compared to DIY/OMV. And even with OMV I’ve rules in the firewall to prevent it from talking to the internet. I do updates manually. I’ll do the same for Synology. Synology as a company will never see this machine on the Internet.

That’s my requirement. That the best location where everything I need is located without running wires to put it somewhere else. And I have tinnitus which is right at the frequency of the fans. Even muffled the sound of the fans would be maddening. I’m willing to pay more to have silence. Under my desk is the most convenient location though as that is where the cable for the internet comes in, it’s the best location for my ZWave controller and Zigbee coordinator, and it’s where my monitors, keyboard and mouse are located. When stuff really goes wrong it’s really nice to be able to plug in a monitor and keyboard.

And as shown by chris and T above, there’s no need for a hot screaming machine to run this stuff. I should be able to run everything I need all but passively cooled except in cases where it’s doing video or image processing. The rest of the time the fans aren’t even needed.

Yes, for now I have one running on one of my VMs, and OPNsense is pointed at both of them. I need to set up a synchronization script to keep their configs the same. But I was having both go down at the same time overnight, which was causing me problems. It turned out to be a file permission problem. I’ve not actually switched over to using Pi-Hole completely yet, not until I know it’ll run without crashing like that.

But in my testing thankfully when it does go down all I need to do is log into my OPNsense and change the DNS server handed out by DHCP to be itself instead of PiHole and I’m good to go. I’ll be running DHCP on OPNsense and not PiHole so it’s a little easier to recover.

A number of reasons:

  • I was unaware of the frankly childish and churlish behavior Netgate engaged in when OPNsense forked pfSense and started their own project. Had I known the sorts of things they had done I would have never chosen pfSense in the first place.

  • pfSense isn’t open source. https://github.com/rapi3/pfsense-is-closed-source. They have some of their source code publicly available but you can’t do a git clone and build because the system depends on proprietary code.

  • The way they reacted after screwing up the implementation of Wireguard into the FreeBSD kernel and were called out on it.

  • Flakiness of HAProxy on pfSense for me. It’d restart every 30 minutes or so. I haven’t had a single restart on OPNsense. The configuration UIs for HAProxy on OPNsense make a lot more sense too.

  • Lack of meaningful email alerts on pfSense. About the only thing I was ever able to get pfSense to email me about is when my LetsEncrypt cert was about to expire. OPNsense has monit so I can get alerts on just about anything I want.

  • Netgate is showing signs that few if any new features will be added to the community edition of pfSense.

So, for the most part I can sum it up as: Netgate is not a company I want to have anything to do with, and OPNsense has the features I want and works better for me overall.

Yep, I know. I work with a couple of ex-Red Hat guys and in the space I live in the old saying applies to Red Hat too: “No one ever lost their job by buying IBM.” It’s all Red Hat all the time. :wink:

One thing I like having a VM for is a virtual desktop. It’s nice to be able to spin up a long-running GUI job (e.g. a HandBrake re-encode of a video file) and just step away, disconnect, or whatever, and have it still running there in the background. It’s also nice to pick up where I left off.

I’m using my RPi4 for this job right now but for some reason it forces me to have a monitor plugged in or else it loses the resolution. So I might go back to an Ubuntu VM as my virtual desktop. But for everything else I agree. I mainly have VMs so I can do stuff like reboots without taking everything offline.

Even later to the party! First of all Rich, thank you for sharing your setup. I don’t have time to keep on top of all the latest developments, and finding out about Tailscale is a game changer for me, as I was just about to set up a VPN to deal with CGNAT on my 4G broadband.

I use a Synology and I see it has a Tailscale package, which is going to be very useful. I have run openHAB on the NAS, both via the OH package (which caused a NAS failure) and in Docker (which was not responsive enough). With an older generation of Pi, I gave up after too many memory card corruptions.

My NAS is a DS415+ (4 disks) with RAID 6, having just shifted from RAID 5 (3+1). My experience with Syno support has been extremely good.

Cheers, Martin

I believe openHABian has an option to install Tailscale too.

I’ve been pretty happy with Tailscale so far. I’m eagerly awaiting the ability for the FreeBSD version to support being an exit node so I can move the exit node to my gateway. The change was recently merged. Having set up OpenVPN and Wireguard in the past, Tailscale was super easy in comparison.
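
For reference, advertising a machine as an exit node (and LAN subnets) is just flags on tailscale up; a sketch (the subnet is a placeholder, and the node still has to be approved in the admin console):

    sudo tailscale up --advertise-exit-node --advertise-routes=192.168.0.0/24
    # then enable the exit node / routes for this machine in the admin console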

@rlkoshak I see you mention a couple of times the need to switch DNS if Pi-Hole goes down. I don’t think you need to. I’ve set DHCP so that it tells clients Pi-Hole is the primary DNS and OPNsense the secondary. So if Pi-Hole is unresponsive for any reason, the queries go to OPNsense. This way you don’t need to toggle anything. I run Pi-Hole as a container in f-iot. After maintenance or whatever, DNS queries automatically switch back to Pi-Hole.

But doesn’t that mean that if Pi-Hole decides it needs to block an address, the client will then try the next DNS server in line, which in this case won’t block it? I admit I know just enough of this stuff to get by and am no expert in how DNS works at this level.

No, you always get an answer from DNS; it might say it doesn’t know the name, or return its own address. A DNS client only goes to the secondary if the primary won’t answer at all (unreachable).
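
An easy way to convince yourself is to query each server directly (the IPs here are the placeholders from the note earlier in the thread):

    # Pi-hole answers for a blocked name (typically with 0.0.0.0), so the
    # client never falls through to the secondary for it:
    dig +short doubleclick.net @192.168.0.204   # Pi-hole
    dig +short doubleclick.net @192.168.0.1     # OPNsense/Unbound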

It works perfectly; I’ve had it like this for a couple of years already!