Have to start over hardware-wise, looking for ideas

That was the one thing I’ve spent the last little bit trying (and failing) to find in the RFCs and elsewhere. Is that universal DNS client behavior? I’d test it, but my gateway is already the upstream DNS server for Pi-hole, so I’m not sure yet how to set up the test. And even then the test would tell me how it works on Linux, but not necessarily macOS, Android, or Windows. I’m hoping to find some way to confirm this, as it’s going to be one leg (not the only thing by any means) of my parental controls, and I need to know how easy it is to get around.

Edit: Found something for Windows. It does not try again on a name error. From DNS client resolution timeouts - Windows Server | Microsoft Learn:

Any Name Error response by any of the DNS servers will cause the process to stop - client doesn’t retry with the next server if the response was negative. Client tries new servers only if the previous are unreachable.

It appears that it’s not universal behavior after all. From this Broadcom knowledge article, How does the DNS resolution work on the ProxySG?:

In SGOS 7.2.1 and SGOS 6.7.5.3 and earlier 6.7 releases, if the response from the first primary DNS server indicates a name error, the ProxySG sends a DNS request to the first alternate DNS server, if one is defined.
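To make the difference concrete, here’s a hypothetical Python sketch (not any real resolver’s code) modeling the two fallback policies side by side: the Windows-style client that stops on a name error, and the ProxySG-style client that also tries the next server on a name error:

```python
# Hypothetical model of two DNS client fallback policies (illustration only).
# "servers" is an ordered list of (name, response) pairs, where response is
# "A" (got an answer), "NXDOMAIN" (name error), or None (server unreachable).

def resolve(servers, retry_on_nxdomain):
    """Try servers in order; return (answering_server, response)."""
    for name, response in servers:
        if response is None:  # unreachable: every client tries the next server
            continue
        if response == "NXDOMAIN" and retry_on_nxdomain:
            continue          # ProxySG-style: treat a name error like a failure
        return name, response  # Windows-style stops here, even on NXDOMAIN
    return None, None

servers = [("pihole", "NXDOMAIN"), ("8.8.8.8", "A")]

# Windows-style client: the Pi-hole's name error is final, so the block holds.
print(resolve(servers, retry_on_nxdomain=False))  # ('pihole', 'NXDOMAIN')

# ProxySG-style client: falls through to the second server, bypassing the block.
print(resolve(servers, retry_on_nxdomain=True))   # ('8.8.8.8', 'A')
```

Which is exactly why the retry-on-name-error question matters for blocking: only the second style of client can “route around” a Pi-hole by asking another configured server.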

Hmmmm.


OK, in practice we have pretty much all the usual OSes (Linux, macOS, Windows, FreeBSD, Tasmota…), all of which behave OK. I haven’t experienced any problems. The worst that could happen is some ads show for a while. But of course, this is not theory, just practice, and good enough for us.

If you have any other DNS server in your config, devices can bypass pihole and use that instead.
Tested in my environment.

Not all devices do it all the time, but it happens. Hence having another DNS server in your list potentially removes the blocking aspect of Pi-hole.
That’s also why people set up firewall rules that only allow DNS queries to the Pi-hole and block all other DNS traffic.

The only way to get redundancy while keeping the blocking aspect is to run redundant Pi-holes or a similar service like AdGuard.
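For reference, those firewall rules can be as simple as allowing port 53 only to the blocker and dropping the rest. A sketch in nftables syntax for a Linux router (OPNsense uses its own firewall rules UI, but the idea is the same; the Pi-hole address here is a placeholder for your setup):

```
# nftables fragment -- placeholder: 192.168.1.2 = Pi-hole
table inet dnsguard {
  chain forward {
    type filter hook forward priority 0; policy accept;
    # allow DNS to the Pi-hole itself
    ip daddr 192.168.1.2 udp dport 53 accept
    ip daddr 192.168.1.2 tcp dport 53 accept
    # drop all other outbound DNS on plain port 53
    udp dport 53 drop
    tcp dport 53 drop
    # DNS-over-TLS; note DNS-over-HTTPS (port 443) needs other measures
    tcp dport 853 drop
  }
}
```

This only catches classic DNS; clients using DoH blend in with normal HTTPS traffic and are a separate problem.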

Quiet but noticeable in a peaceful room. A 2.5" HDD inside it is noisier when whirring around. But it lives in the loft, so I don’t care! I have no idea how noisy or not the newer generation of Microservers are - I only got this one cheap used to mess around with, but it eventually just became the home server.

Manages Docker containers via a web-based UI. Proxmox doesn’t have anything built in to handle Docker specifically. So for those services that are only distributed (or supported) as Docker containers, you have to spin up a VM or LXC, install Docker in that, and then set up your Docker container. Containers within containers! Not the most efficient, but the benefit for me is easy automated backup of the LXC through Proxmox.

I know some people spin up a single VM which is specifically for Docker containers - all the Docker containers are then installed in that single VM, and I suppose Portainer could also be installed in that VM too.

I liked the simplicity of each LXC having its own IP address, and LXCs are far less resource-intensive than a full VM (important for my hardware!), so I use a single LXC for a single Docker container.

Inside any container that houses Docker I also install the portainer-agent, which then enables that LXC to communicate with the main Portainer LXC (which is also a Docker container inside a container!). I then use Portainer to update the Docker containers as-and-when, instead of using the CLI in each LXC.
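For anyone replicating this, the agent in each Docker-hosting LXC is itself just one more container. A sketch based on Portainer’s documented standalone agent deployment (the image tag and port are the usual defaults; adjust for your versions):

```yaml
# docker-compose.yml for the Portainer agent inside each Docker-hosting LXC
services:
  portainer_agent:
    image: portainer/agent:latest
    container_name: portainer_agent
    restart: always
    ports:
      - "9001:9001"   # the main Portainer instance connects to this port
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
```

In the main Portainer UI you then add each LXC as an “Agent” environment using `<LXC-IP>:9001`.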

I went down the ZFS rabbit hole and it seems pretty clear that ECC [sic] RAM is really needed to avoid corruption.

Not exactly. Or rather, yes, it’s literally true. But any other filesystem can corrupt data without ECC RAM too. It’s not that ZFS suddenly requires ECC; it’s that ZFS makes every other failure mechanism so much less likely that bit errors in DRAM start to stick out! I have administered a few ZFS systems without ECC for 10+ years now and no issues have come up. I did just upgrade my main ZFS server to something with ECC (a Dell R530), but not because of the ECC; it just came along with the server-class hardware.

Take into account that you will need quorum, as servers will “vote” on how to start up if you choose to create a cluster (which I think is needed for HA).
The total vote count should be an odd number (3, 5, etc.). Hence 2 machines are not enough: an unplanned reboot results in a lock-down of both servers (they will not start because no quorum is reached; you can work around it at that moment with a CLI command, e.g. `pvecm expected 1`, but that’s not a perfect solution).
To get around this while still only using 2 machines, you can use a Pi and make it a QDevice.
I made this mistake once (when I still had only 2 servers) and then ran for quite a while with 2 servers + a Pi as QDevice without issues.
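The arithmetic behind that: a partition of the cluster is quorate only if it holds strictly more than half of all votes, which is why an even two-node cluster deadlocks when one node goes away. A tiny illustration (not Proxmox code):

```python
# Quorum rule: a partition is quorate only with a strict majority of all votes.
def quorate(votes_held, total_votes):
    return votes_held > total_votes / 2

# Two-node cluster: lose one node and the survivor holds 1 of 2 votes.
print(quorate(1, 2))  # False: no quorum, both nodes lock down

# Two nodes + a Pi as QDevice (third vote): survivor + QDevice hold 2 of 3.
print(quorate(2, 3))  # True: the remaining node keeps running
```

The same rule explains why 4 nodes tolerate no more failures than 3: a 2/4 split is not a majority either.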

Can confirm this. I run my 4 R210 IIs with ECC, but my Lenovo SFF in the 2U case has no ECC RAM while still using ZFS mirrors and standalone drives. No issues so far.
Also remember that Proxmox snapshots require a storage type that supports them (e.g. ZFS, LVM-thin, or qcow2 images).

My understanding is ZFS does more caching in RAM than other file systems, which makes errors in DRAM more likely to cause problems. But in any case it’s probably moot, since I’ve already purchased the Synology, which doesn’t support ZFS, so I’ll have to live with ext4 (or the like) and RAID1 for the NAS at least. I’m OK with that. I don’t need massive amounts of speed from this thing, mostly just reliability.

If Proxmox requires ZFS, then errors there probably won’t be a big deal, because I have everything well backed up, and setup and configuration are automated. A little bit of downtime will not be a big deal.

I’ve been diving into this more since writing that and have decided that a true HA setup is a layer of complexity beyond what I really need. If I run two PiHole instances, one on each machine and do PostgreSQL replication like you mentioned before, again one instance on each machine, I’ll have something rock solid for my purposes.

But if I change my mind, I have that RPi 4 I can set up as you describe.

I still haven’t completely given up on making a cluster of RPi 4s. It’d be more power for less cost overall. But figuring out which Docker images support armhf and which support arm64, and needing to custom-build some containers, is really holding me back from this option. The pain I had getting Guacamole running on armhf, and my failure to get it to run on arm64, has really turned me off the whole endeavor. Too much work I don’t want to mess with.

I upgraded my hardware about a year ago using second hand equipment from Ebay.

  • Motherboard: Supermicro X9SRi
  • Processor: Intel Xeon E5-2650v2
  • Memory: 256 GB ECC
  • Two RAID controllers (running as “dumb” controllers)
  • 8 x 4TB HDD RAIDZ2 for storage (movies etc.)
  • 4 x 1TB SSD in striped mirrors (“RAID10”) for VMs etc.

I had the drives and the RAID controllers from my last server, as well as the case, PSU etc.

Running Proxmox/ZFS with a lot of room for VMs (KVM & LXC). A separate VM for openHAB, with dongles for Z-Wave etc. attached. Located in the basement, so I’m not that worried about sound levels. I guess you could make it really quiet by using more expensive fans.

I can really recommend buying second-hand stuff. You get a lot of bang for the buck, and the gear is “server grade” (ECC, IPMI, etc.)

Seconded. But … commercial kit can come with undesirable noise, heat, and power consumption. You’ve just got to weigh your priorities.

Yes, that’s correct. ZFS uses all the RAM it can get (freeing it up if it’s needed by other applications). Hence the high RAM usage stats I mentioned beforehand.

That’s fair enough. Especially for home usage and smaller applications it might be too much anyway. On the other hand, it’s fun :slight_smile: , but I also use it more to play around with and learn. I could theoretically live without it as well.

Agreed. All my servers (5 of them), as well as the RAM, are 2nd hand, and the Dell R210 IIs at least are very quiet (with iDRAC Express or Enterprise you can control the fan speed via IPMI; if not, you can mod them with Noctua fans), but other servers are definitely too loud for living-room use (that’s why I got my server rack). The only thing I would not recommend buying 2nd hand is HDDs/SSDs, for obvious reasons.

:+1:

I’m willing to accept less powerful and less capable machines for more money to have something silent and small.

My requirements are not other people’s requirements so I too recommend second hand server class machines, if that fits your needs. They just don’t fit my needs. And believe me I’ve worked it over in my mind for a long time. There is no way I can make a rack mount type machine work for me in terms of size, power, and cooling noise.

It’s fun for me but only up to a point which is why I’m so cautious. I love learning about this sort of thing but I also love doing things like adding to my home automation or wood working or gardening. I have to balance my free time accordingly and don’t want the pressure to be high in migrating to the new hardware. I waited until the rest of the family were on a trip before moving to OPNsense just in case something went wrong.

If I can get up and running quickly and simply and coast for a bit then later on I can explore other options incrementally.

I might be changing my mind just a little bit on the hardware front, though. I need to look more into PCI passthrough on Proxmox. If I can create a virtual desktop with a real video card, that would make a lot of things I do much nicer, such as HandBrake video encoding, 3D modeling parts, and slicing for 3D prints. So I might be right back to looking at bigger boxes that can accept a discrete GPU, assuming the Intel GPUs that come on these little micro PCs can’t be passed into a VM by Proxmox.
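For what it’s worth, passthrough on Proxmox boils down to enabling the IOMMU and handing the PCI device to the VM. A sketch of the relevant config (the PCI address is a placeholder you’d look up with `lspci`, and whether an iGPU can actually be passed through is very hardware-dependent):

```
# /etc/default/grub -- enable the IOMMU (intel_iommu=on for Intel CPUs;
# on AMD it's typically on by default) then run update-grub and reboot
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

# /etc/pve/qemu-server/<vmid>.conf -- pass PCI device 01:00.0 to the VM
# (pcie=1 requires the q35 machine type)
hostpci0: 01:00.0,pcie=1
```

The device also has to sit in its own IOMMU group, which is where consumer boards often fall down.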

Or I can run X and a window manager on Proxmox itself, even though Proxmox recommends against that. I’m OK with going a little bit off standard. Options options options.

Analysis paralysis here I come. :smiley:

Sure, I don’t want a rack mount with high noise. But in my case it’s just a server grade motherboard stuck in a normal PC case. Soundlevel almost the same as my standard PC in the office.

OK, here’s a thought. I can get 3 HP 705 G3 minis, each with an AMD A10-8770E 2.8GHz and 8GB RAM, for $165 each on Woot! right now (limit 3 per customer). I could even go for more if I make my wife get an account. :wink: Unless they define “customer” as having the same address.

https://support.hp.com/us-en/document/c05265526

For < $500 I’d get 12 CPU cores that are twice as fast as an RPi 4’s, and 24 GB of DDR3 RAM to play with, upgradable to 48 GB.

It’d be nice to have DDR4, but my current machine is running DDR3, so I could upgrade a couple of these right now with hardware I already have. The Woot! listing appears to be wrong, though, and it does have DDR4; at least I can’t find that model with that processor with anything but DDR4 anywhere. I’ve not run AMD in a long while though; any problems with it and Proxmox? I’ve not encountered any in my brief looking. I also can’t tell if the Radeon R7 GPU in this thing can be passed through to a VM.

I’m thinking mo-machines at half the price is mo-better than fewer faster machines at twice the price.

I’ll think on this for a bit…but gotta decide soon. Woot! deals only last a day!
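Sanity-checking the numbers in that deal (three machines at the listed $165, with 4 cores and 8 GB apiece):

```python
# Three HP 705 G3 minis at $165 each, 4 cores and 8 GB RAM apiece (per the listing)
machines, price_each, cores_each, ram_each_gb = 3, 165, 4, 8

total_price = machines * price_each
total_cores = machines * cores_each
total_ram = machines * ram_each_gb

print(total_price, total_cores, total_ram)  # 495 12 24 -> under $500, 12 cores, 24 GB
```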

I have the luxury of having my servers in a separate room, but I need a real/full PC on my desk. I started with an FT02 from Silverstone. That is an almost absolutely silent case. Nevertheless, I wanted more (I mean less). Therefore I kicked out the housing (all housing) and went for a PC with absolutely no moving parts, except for the power button.

it’s only an i5.

Still looking for a soft-touch power button; this one says “click”…


For a while I ran off of a “computer desk”. I embedded the motherboard and such into my desk. As I’ve grown older I’ve become less punny. :smiley:

Slight update in general. I didn’t opt for those AMD machines and am still on the hunt. I didn’t change my mind so much as realize I need to wait for my next paycheck before buying a bunch of computers. I’m still bouncing between two really nice 6-core i7s or i5s with 32 GB RAM, or three or four of those really cheap four-core jobs (i5 or AMD). I’m not sure which way I want to go, though if I did get four of them, I don’t know how much benefit I’d get from running VMs at all, given I’m not going down the HA path. I could keep one as a Windows machine just to have around (occasionally I miss playing Elder Scrolls IV) and split the services across the remaining three.

I got the NAS today and so far I’m pretty happy with it. We’ll see how happy I remain once it’s done parity-checking the drives and I can create a volume and start loading it up with my files.


To be honest, if you do not want to play around with HA or clusters (though you could even form a cluster with 2 machines + a Pi just to get a common management interface), I would keep it simple.
Performance-wise, I would look at your most demanding service and go from there: does it need/benefit from a lot of RAM? Can you upgrade the i5 or AMD with more RAM, and would that be enough?
In my case CPU power was never the issue (running one i5 and multiple Xeon E3-1220 v2/1230 v2 chips, which are comparable to i5s); it came down more to RAM usage.

RAM has been my limitation too, and I’m keeping that in mind. However, occasionally CPU has come into play, though in those cases I can usually kick something off and let it run overnight, the same as I do now.

A twosome of i7s will consume less electricity than a foursome of i5s. If the total price is the same and you don’t need 4 physical machines, then choose the two i7s.


The problem is they are not the same. The 4 AMDs cost roughly half what two i7s would cost, and electricity is really cheap here, so the $450-500 cost difference would take a long time to make up. But wire management is always a problem, and having two fewer wires to deal with is an advantage.

One more update. First thanks to all on this thread. It was a good discussion and I learned a lot.

My home system has stabilized quite a bit since moving my storage to the NAS. Also, higher priority things have happened that means budget for new machines may be some months away. So I’m going to stick to my current setup with the aging ESXi server and storage moved to the Synology.

I’ve also decided to abandon PiHole in favor of AdGuardHome. Advantages for me include:

  • runs on the OPNsense box, meaning there’s only one box that needs to fail to break my network instead of one of two
  • because Pi-hole is no longer running on a separate box, I don’t need to worry about running two of them to make up for the fact that one might fail or reboot. I was always only going to have the one OPNsense box, so my overall reliability (MTBF) is unchanged
  • it supports application blocking
  • it can enforce safe search on most of the popular search engines
  • management of different policies for different clients is a little more straightforward (subjective opinion, I know)

The graphs are not as pretty, but it makes my setup overall simpler, and simple is good.

Anyway, I’ll probably not post another reply unless someone asks a question.

Thanks again!
