Any HW recommendation?

I view my RPi3 (which serves as my print server) as my hardware backup for openHAB. If the RPi4 died, I’d just have to take the SD card out and put it into my RPi3 to be back up and running in a few minutes.

I had just assumed I’ll do the same thing with an RPi5 in the future, but it occurs to me that an m.2 SSD can’t be moved to the older RPi4.

This is really it. I’ve learned just enough to get by in Linux, but it’s not really where I want to focus my time and energy. So, I’m better off sticking with openHABian on an SD card. I might feel differently if I had experienced a lot of SD failures in the past.

My 2cents:

  1. For everyone who knows hardware and OS in depth: use whatever you want, you’ll get it sorted anyway.
  2. For those who don’t: use openHABian with an SD card.

ad1:
you’re a pro, but don’t expect much support here in the forum, as there’s a bazillion of possibilities which can - and probably will - interfere with installation and integration into openHAB. openHAB is a web application - but the app depends on OS-level bindings, execs, or …

ad2:
if you don’t want to fuss over how to run things - download openHABian and use it. It takes care of all the hassle of installing, integrating and operating the web app as is, including solutions for monitoring and other helpers.
Plus: it comes ootb with backup/restore and migration tools.

which brings me to the “redundancy” problem. If you’re a pro and use solution 1, you should know how to deal with problems at the hardware, OS and perhaps a bit of the application level. You’re good. openHAB leaves you enough room for that. Install it on bare metal, in Docker, Kubernetes or Proxmox - you’ll deal with it. Then you can deploy your new openHAB, including restoring your data, in no time if there’s a problem, or even have HA solutions in place that handle it automagically.

if you don’t want that - fine. openHABian helps you with that. You can clone your SD card regularly to a spare SD and plug it into the next Pi if there’s a hardware problem (I’ve got a bunch of Pis from the very beginning - and not a single one has died on me in all these years!) or if the SD card wears out. You can proactively swap SD cards every few years - or rely on your backup strategy (also openHABian functionality).
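For anyone who wants to hand-roll the cloning step instead of using openHABian’s built-in SD mirroring, the idea is just a raw block copy with `dd`. A minimal sketch, demonstrated on image files so it can run anywhere; on a real Pi the source and target would be block devices (check yours with `lsblk` first - any device names you use are your own, the ones here are placeholders):

```shell
# Sketch of a raw SD clone with dd, using image files as stand-ins
# for the real block devices (e.g. live SD card -> spare card in a USB reader).
SRC=source.img
DST=clone.img

# Stand-in for the live SD card: 4 MiB of random data.
dd if=/dev/urandom of="$SRC" bs=1M count=4 status=none

# The actual clone step: byte-for-byte copy, flushed to disk.
dd if="$SRC" of="$DST" bs=4M conv=fsync status=none

# Verify: both images should report identical checksums.
sha256sum "$SRC" "$DST"
```

On a real system you’d run the copy from another machine, or at least with openHAB stopped, so the source filesystem isn’t changing mid-copy.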

So, my 2 cents. And most importantly my 3rd cent: make up your mind (and possibly free it!) about how you want to use a smart home in principle. My definition, for example, is: my home must not depend on my software. It works without it. openHAB (as any smart home software, btw) is only there for convenience. If for some reason or another I can’t have my openHAB for days, I’m still able to heat my house, turn lights on and off, or anything else. Of course it’s not as comfortable and I have to manually (eeeeh) press light switches and stuff - but it works. No need for “booting from an SSD because I heard SD cards only last a month” or so. :wink:

8 Likes

I don’t think the world is that black and white. Sometimes stuff is just incompatible, and sometimes, even if you are a pro, things are just not worth it, i.e. too much effort. And no, I don’t want an SSD because I think an SD card dies in one month. In the end, life is one big trade-off: how much effort vs. how many rewards.

What I personally want to avoid is the HW/SW dying on me and then losing my config & database. Next to that, stability is essential. Speed is nice to have, and low cost is good. So this is gonna be a trade-off :slight_smile:

You mean just to flash it, or is it also workable as final solution?

It works for both solutions.
After you’ve flashed the SSD on your PC, you can simply plug it into the Pi’s USB connector or into the Pi’s NVMe Base.

That wasn’t a black-and-white comment, though. If pros don’t want to put in the effort to get openHAB running in Docker/Windows/etc. (someone’s even running it on Android), they can still choose to use openHABian. It’s just that they have more “easy” options due to their expertise.

At this point, we’ve probably given you too much to think about. All you really need is a solid backup strategy, regardless of your storage type. An RPi4 is probably the easiest and least expensive “drop in and keep going” solution to replace your RPi2, but I can see the appeal of an RPi5.

1 Like

Not black & white at all. Just a description of what the openHAB ecosystem provides. :wink:
You can pick and choose yourself. Of course, it could be grey.

That’s exactly the point.

  1. Back up your config/persistence (openHABian does that, too)
    And/or
  2. Send your states to an external database and use its backup strategy. They’re back the minute you restore your config to a vanilla openHAB or a mirrored SD

That’s it. :+1:
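As an illustration of point 1: a hand-rolled version boils down to “archive the config directory with a timestamp, keep only the newest few”. openHABian (and `openhab-cli backup`) does this properly for you; the sketch below uses made-up demo paths so it runs anywhere:

```shell
# Hedged sketch of a timestamped config backup with simple pruning.
# Paths are illustrative stand-ins, not real openHAB locations.
CONF=demo-conf
BACKUPS=demo-backups
mkdir -p "$CONF" "$BACKUPS"
echo 'Switch Light_Kitchen "Kitchen"' > "$CONF/demo.items"   # dummy config file

# Archive the config dir under a date-stamped name.
tar czf "$BACKUPS/openhab-$(date +%Y%m%d-%H%M%S).tar.gz" "$CONF"

# Keep only the 7 newest archives, delete the rest.
ls -1t "$BACKUPS"/openhab-*.tar.gz | tail -n +8 | xargs -r rm --
```

Drop something like this into cron and point the backup directory at another disk or a network share, and point 2 (the external database) covers the state history the archive doesn’t.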

Unless you’re about to put a bunch of other stuff, services and software on the same physical machine, an RPi4 is more than enough. Way more.

1 Like

I have spare servers just in case.
I run proxmox and back them up across each other.

2 Likes

All, for your reference, here’s the official hardware and OS recommendation for openHAB.

There’s also an official recommendation on how to best operate your smart home server for best availability. (Read the latest update here; for technical reasons, replacing the official docs pages can take some time.)

Disclaimer: it’s just recommendations. Use whatever hardware you like in whatever way.

1 Like

Firstly, even though I am not an openHABian or RPi user, I appreciate all the work you’ve done on it, @mstormi.

On that page it says:

Running Raspberries off the internal SD card only may result in system instabilities as these memory cards can degrade quickly under openHAB’s use conditions (infamous ‘wearout’). When you choose to deploy openHABian, it’ll use the ZRAM feature to mitigate.

That might need to be rephrased a bit. It sounds to me like using an SD card is indeed a bad idea, which is contrary to the actual recommendation to use it instead of any other medium.

Perhaps it’s also worth mentioning that JavaScript is slow to load (but not to execute) on an RPi, an issue that doesn’t exist on an amd64 architecture. It would probably save people time if they knew they had the choice of using a server instead.

This has always been one of the key ideas behind openHABian, now it’s also in the docs.

Ultimately however, it’s all about maximum service availability (i.e. your home working at all times).
That’s not the result of getting the best server hardware or of “cleverly” selecting SSD over SD/ZRAM.
Service availability is only ever the result of comprehensive system design and congenial operations that fit the hardware choice; see the 2nd link in my previous post and also the rationale explained here, which I already referred to earlier.

Probably, but people take away from reading that whatever they want anyway.
Either way, those are the general docs, i.e. not part of the openHABian docs. They also apply to e.g. generic Raspberry Pi OS on an RPi 5. I know it’s not easy to understand the subtle differences and implications, but read like that, it’s not an argument against (plain SD) Raspis but in fact a strong argument in favor of openHABian (as it provides ZRAM and SD mirroring).
Plus, that writing hasn’t been updated in a long time, so yes, time for an adequate update there, too.

EDIT: There’s an update on that already.

I like to call this the “build escalators, not elevators” principle. See https://www.brainyquote.com/quotes/mitch_hedberg_401954.

4 Likes

that’s one hell of a quote. I’ll screenshot this to make a T-shirt and use it as a background in all the sprint plannings! :wink:

4 Likes

Hi Ramon,

this is my personal experience.

The first step was using an old 12 V lead battery with a step-down regulator and an old battery charger to ensure a stable power supply. This resulted in uptimes > 1 year.

Second step was using an external SSD. Now the RPi is only down when I power it down.

Upgrading the RPi to a later model seems to be a good idea, too - depending on your setup.

Good luck!

OK, so I ended up getting an RPi 5 8 GB with an Argon ONE V3 NVMe case and installed a Samsung 980 1 TB in it. I’m almost done transferring everything. I had some trouble migrating the MySQL database to MariaDB and then actually installing the MariaDB JDBC driver, but after clearing the cache that seems to work now. What’s left is to make a fresh copy of my DB with the latest entries, then move my Z-Wave and Zigbee dongles over, and hope everything is functional after that.

Thanks for all the help and suggestions. The only thing left, maybe, is to get a UPS working with auto-shutdown.

1 Like

Just adding my two cents to this thread: about a year ago my Pi 3 started going wonky. I’ve since discovered that it was a power supply issue, but at the time I was really annoyed with it and looking for a new SBC. Of course, the Pi shortage was still in full effect at that point, and I was having a hard time finding an SBC with decent specs for a reasonable price.

And then I discovered the world of used 1-litre mini PCs. I scoured eBay, and picked up a Lenovo M910Q for $120. Cheaper than many of the SBCs I’d been considering!

Knowing that this little computer was going to be massive overkill for just openHAB, I looked for other things that I could run on it, and soon discovered Proxmox and other self-hosted services like Nextcloud, Paperless-ngx, a Minecraft server for my son… And the backups! I’d struggled with figuring out Amanda and never really felt confident that it was correctly set up. I’d tried other things, like periodically duplicating the SD card manually (which worked until I got lazy and didn’t bother doing it frequently enough).

But the backups and backup pruning available via Proxmox have been incredible, and have already saved me multiple times from buggy updates (not openHAB, luckily!) and user error.

I got bit by the homelab bug.

Now, a little over a year later, I have added two additional miniPCs to play with High Availability redundancy, a separate NAS running TrueNAS, and I just picked up a cheap Topton box with a 6-port 2.5G NIC to use as a router running OPNsense.

All of that is obviously overkill for openHAB, but even if you don’t want to get into the homelab hobby, I’d highly recommend finding a used mini PC and running Proxmox or something similar. The ease of backups alone is worth it for me. And the ease of spinning up additional containers to separate MQTT, Node-RED, or anything else you might want to experiment with, without putting your core openHAB in danger, is great. I can play with all sorts of services and then quickly wipe them out without worrying about conflicting dependencies or configurations.

2 Likes

I’ve got a small Intel N100 box, but I’m not sure what I should install. I’ve read about Proxmox, but it might be overkill. I thought about Ubuntu Server and just running everything as Docker containers.
Do you run all your services in separate LXC containers? What do you do if you want to e.g. run just a Docker container?

I spin up a separate LXC container for any totally new service. Makes it easy to wipe it out if I mess it up or decide not to use it. The additional overhead for the multiple containers really isn’t much. My nodes are running well below capacity.

If I ran them all as full VMs, that would be a different story.

I also have one of my Proxmox containers running Alpine Linux, and I put all of my Docker containers in there. I don’t tend to like Docker, so I only have a few running at the moment.

If you decide to set up a Proxmox node, here’s a really great resource for “easy mode” set up of a lot of self-hosted software, including openHAB:
https://tteck.github.io/Proxmox

He has a ton of scripts that automatically set up the services. As with any script you find on the web, do your due diligence and check through the script to ensure it’s not malicious… but in my experience tteck has been nothing but helpful.

I’ve recently done a lot of looking into Proxmox and the difference between Docker and LXC containers.

To start, I am running Proxmox with two Ubuntu VMs. All my services are running in Docker on these two VMs.

Were I to do it over again I would still choose Proxmox. It is indeed overkill for most homelab setups but it also provides a lot of nice features that are not overkill.

LXC containers are kind of different from Docker containers in their intended purposes. Docker images really focus on microservices, providing just enough to run one service. LXC containers, on the other hand, are meant to be used more like a VM: they tend to have all the stuff necessary to run multiple services, SSH into them, etc.

Because there are far more options for Docker containers on Docker Hub than there are LXC containers from any source I could find, it is still very attractive to use Docker. After going down a bunch of paths, the best recommended approach appears to be to spin up an LXC container, install Docker there, and run your Docker containers inside it: containers within containers.
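For the containers-within-containers setup, the one non-obvious bit on Proxmox is that the LXC container needs the nesting feature enabled before Docker will run inside it. A sketch of the host-side commands (the container ID 200 is a placeholder, substitute your own; run on the Proxmox host):

```shell
# Proxmox host-side config fragment (container ID 200 is an assumption).
# Enable nesting, plus keyctl which some Docker setups need:
pct set 200 --features nesting=1,keyctl=1

# Restart the container so the feature takes effect:
pct stop 200
pct start 200

# Then, inside the container, install Docker the usual way for its distro.
```

Some of the tteck scripts mentioned below set this up for you, but it’s worth knowing what they’re doing under the hood.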

One thing I can say about the difference between VMs and LXC containers on Proxmox is that LXC containers make better use of memory (I need to test this out, but am pretty sure it’s correct). Each VM runs its own kernel, and the kernel improves performance by caching stuff to RAM - stuff it has loaded but that no software is actively using just yet. Normally that’s a really good thing. However, when a VM reserves that RAM for caching, it takes it away from the host, so it cannot be used by other VMs. I’ve two VMs, and roughly half of the RAM they consume from the host is used for cache.

I think an LXC container will use the RAM better, since everything runs in the same kernel: while it might use a bunch of RAM as cache, it won’t make that cache unavailable if one container suddenly needs more RAM.
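You can see this cache effect directly in the kernel’s own accounting (Linux only): in /proc/meminfo, “Cached” is page cache the kernel can reclaim on demand, which is why MemAvailable is usually much larger than MemFree. Inside a VM that cache belongs to the guest kernel and is lost to the host; in an LXC container it sits in the shared host kernel and stays reclaimable. A quick one-liner to look at it:

```shell
# Print total, free, available and cached RAM. /proc/meminfo reports
# values in kB; convert to MiB for readability.
awk '/^(MemTotal|MemFree|MemAvailable|Cached):/ \
     {printf "%-13s %6.0f MiB\n", $1, $2/1024}' /proc/meminfo
```

Comparing the output on the host versus inside a guest VM makes the “half the RAM is cache” observation above easy to verify.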

Eventually I’m going to migrate off of full VMs to LXC containers for this reason. Though eventually I’d like to retire this big eight year old tower entirely and move to a cluster of SBCs and mini-computers and I’ll probably drop Proxmox at that point.

That’s my recent experience at least.

@Chad_Hadsell, thanks for mentioning Paperless-ngx. I used to use Neat Receipts (until enshittification set in) and am currently using Nextcloud for document management, but I’ve been unhappy with NC’s full-text search and its reliance on Elasticsearch. I’m definitely going to give Paperless-ngx a try.

2 Likes