Best hardware platform

I’ve said it twice and I’ll say it again: THEN DON’T USE LINUX. No one is forcing you to use Linux or openHABian. If learning a few of the basics of Linux is too hard or cumbersome, then don’t run on Linux. OH runs quite well on Windows and Mac if you are more comfortable there.

But it is unreasonable to expect to run any program as complex as OH on ANY OS and eliminate the need for the user to have any knowledge of that OS. If you want that, then buy a commercial hub, which is more limited and gives you fewer options, but trades that for a simpler user experience.

It is not openHAB’s job nor is it openHABian’s job to “fix” Linux. It is outside the scope of what openHAB has power over. openHABian can reduce the amount of Linux you have to know to get started, but it is never going to eliminate the need to know at least something about the OS.

And so we set the boundaries pretty much in the same place that EVERY OTHER PROJECT out there does for backups. You provide the path and it is up to you to make sure that path is to a mounted file system from a USB drive or a network share.
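For example - and this is only a rough sketch, the UUID, server address and paths below are placeholders that will differ on your system - making a USB drive or a network share available as a backup target typically looks something like this:

```
# Create a mount point (path is just an example)
sudo mkdir -p /mnt/backup

# Mount a USB drive by UUID (find the UUID with: sudo blkid)
sudo mount UUID=1234-ABCD /mnt/backup

# ...or mount a network share instead (NFS shown; needs nfs-common on Debian/Ubuntu,
# CIFS/Samba works similarly with -t cifs)
sudo mount -t nfs 192.168.1.10:/export/backup /mnt/backup

# Then hand /mnt/backup to whatever backup tool you use as its target path
```

If you want the mount to come back automatically at boot, the equivalent entry goes into /etc/fstab.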

This is certainly a pet peeve of mine. It DOES solve your problem. Your problem is that Linux is too difficult for you to learn and to perform some of the basic steps necessary for offline backups. Fine, that is a legitimate problem. But switching to an OS that you are more familiar with IS a solution to your problem. It lets you run OH on a platform where you already have the knowledge and skills to do everything you need.

It is just a solution you don’t like.

You are unwilling to accept that the (intentional) flexibility in HW and SW is already a big challenge for developers, and that automating deployments is even harder. Automating AND still keeping/covering everything we want to maintain is impossible.
You are unwilling to accept that backup in general, and specifically the “hard” step of mounting a directory, is neither an openHAB nor an openHABian job. It is not caused by OH and cannot be compensated for by OH (except maybe in very specific setups, but even you agreed that’s not what we want).
You are unwilling to bring yourself up to the basic knowledge that is required for the level of usage you want.
If a couple of people out of the thousands who run openHABian (and even more who run openHAB without it; the latest count I heard was around 20k) are struggling, that means there are thousands who DID manage it, and it’s certainly not because they’re all “Linux enthusiasts”.
So it’s a problem that only applies to a minority of people, is by no means too complicated to solve, and is clearly a manageable task. And - unlike all those things that can only be done inside OH - there are many resources available outside of openHAB to help you solve this.
Still, you are unwilling to invest your time googling to find out for yourself, as everyone else does, while at the same time you keep telling me you expect me to invest more of mine.

Pretty much everything that is there (openHABian, Amanda) is there because there has already been a focus on this. The only remaining task (providing a storage dir) cannot “become less complicated” because it is just as complex as it is.
No, this was no “suggestion”. You were asking us to do your job - work that everyone else accepted as their own duty. Us, the very people who spend their spare time to make people like you a little happier. And you didn’t even offer anything in return.
And you continued telling us even after we told you we don’t want to do it because it cannot be done.
Yes, now you’ve got me angry.

I wonder if we could make a good backup solution by developing a Binding that does it?

Just brainstorming here… maybe it’s unnecessary, but why would running inside the openHAB instance matter? It would take advantage of the rules and the event bus:

  • It would monitor the event bus to determine when config changes for other bindings and therefore what config needs to be backed up.
  • It would know from the openHAB config what modules are installed and pull config from them, and during a restore it would know what modules to re-install.
  • Backups would not have to include binaries or jars as long as they contain enough bundle/module config to restore them from a binary repo… perhaps using SHAs to determine what’s already backed up.
  • It would trigger events and rules when backups occurred, which might be useful.
  • It could encrypt and back up to the cloud or Dropbox, etc.
  • Config entries could list extra file patterns to include in backups.

Thoughts?

Sigh. Seems you didn’t read the previous posts.
There’s no point in having openHAB do the backup, for various reasons, one of them being that for your smart home to work you need to back up way more than just the openHAB config.
For openHAB config-only backup/restore, there’s the built-in openhab-cli script. Trigger it whenever you’ve done a substantial amount of config changes.
For anything else there’s Amanda. It runs once a day by default.
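From memory (check openhab-cli --help on your system for the exact options), the config-only backup and restore look roughly like this:

```
# Write a zip of the openHAB config to a path of your choosing (example path only)
sudo openhab-cli backup /srv/backups/openhab-config.zip

# Later, restore that zip (stop openHAB first)
sudo openhab-cli restore /srv/backups/openhab-config.zip
```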

Oh, and please open a new thread if you want to continue discussing backup.

I did. Including the part about you getting angry. I wasn’t implying that you or the core developers of openHAB need to or should do it. I am very capable of writing code (I’ve already started binding development) and I wasn’t trying to put this on your shoulders, just looking for feedback. This is obviously a heated topic, so there is a desire on at least one side to have something simpler to set up than Amanda. I’ve done kernel driver development and been on Linux for >20 years, so I don’t even need any of this… but I do think newbies do, and I like a lower barrier to entry.

This is the “Inter-mess of Things”. I’m very disappointed in all these companies’ “must use our or our partners’ ecosystem/protocol/hub/etc.” hubris; nothing really works together, which was the main premise of IoT/HA, I thought… perhaps I’m misguided. openHAB has finally bridged the gaps in my HA and I’m happy. I would like to see it more widely available to a bigger audience… so, you’re a sustaining member, what is your opinion? The same, or would you like to keep it l33t? (And FYI, I am being genuine; I won’t judge anyone for wanting to keep things l33t.)

Didn’t there use to be a way to reply and start a new thread?

Here is a new thread. Discussion on Built in Backup for OH

FWIW, that being said, there already IS a means to get along without Linux and mounting:
you can choose Amanda to back up to AWS S3. It’s available as an openHABian option today
(although I feel that properly configuring and debugging S3 access is a lot more difficult than mounting a device).

Well, for completeness’ sake: when developing the Amanda setup, I was also trying to build a third variant to work with an external SD card writer and a set of removable SD cards.
My motivation was indeed to relieve the user of the need to understand Linux just to get the storage area mounted right.
I failed for a number of reasons, mainly because it requires a newer Amanda version to become available as an ARM binary package first.
But setting up that SD writer would probably still have been a manual or only semi-automated task that would have required another set of Linux knowledge.
With ever-changing device names depending on what you have attached to USB, boot order, processor, bus and HW combinations and many more factors (moon phase probably, too :-)), you cannot automate that without taking a large risk that you end up accessing the wrong device, and with a fair amount of bad luck you will ultimately destroy your HA server setup.
For that very same reason I decided not to try automating mounts. It’s just too dangerous.

Agreed. This is why I really didn’t address the AWS option. If the user can’t mount a USB drive or NFS share, properly setting up S3 is going to be outside the realm of the possible for them. But the AWS option is really awesome, and for those who can, I highly recommend that approach to get offsite backups. It works quite well.

My intention and suggestions were not to automate disk handling. As you state, this is not a good idea.
My intention and suggestions were to make things easier.

In Windows Disk Management the user does have to know something (i.e. which drive to activate, which drive to format, which drive to partition, how to partition it, etc.). And yes, the user probably can destroy Windows if he/she chooses the wrong drive, wrong partition, etc…
Whenever you plug an external drive into the USB port of a Windows box, Windows will prompt you for what to do with it. Even if it hasn’t been activated, Windows will tell you that it needs to be activated. This is called ‘Plug & Play’. And there is an obvious reason for this: Plug & Play = user friendly. I guess this goes for Mac OS as well.

I know Windows and Mac are GUI OSs, and Linux (openHABian) is not. But just because it’s not the same, that is not a good explanation of why it couldn’t be any better. Or at least, that’s what I fail to understand.

I know, and I agree with you and Rich, that this suggestion should have been aimed at the basic Linux structure and openHABian in the first place, not Amanda. But this is openHABian, which is in fact meant to be a hassle-free setup of openHAB, where someone wrote:

“A home automation enthusiast doesn’t have to be a Linux enthusiast!”

I agree that installation and configuration of openHAB using the hassle-free openHABian does not require the user to be a Linux enthusiast. But maintaining it (including backing up the system) is not hassle-free, in my opinion. In fact, it does indeed require quite some hassle and Linux knowledge (unless you want to struggle with a manual procedure like detaching the SD card and making an image copy of it every time).
As mentioned a few times: in my opinion, this is wrong. Backing up a system should be at least as easy as getting it up and running. If not, things will go wrong, or backups won’t be made.

My best suggestion is to develop some kind of disk management in the openHABian configuration tool. I can’t understand why it should not be possible. But I’m not a Linux enthusiast, so my suggestion may be wrong.
I do not agree with Rich or anyone else that suggesting staying away from Linux and using Windows is a valid option. It won’t make openHABian any better. And it’s the direct opposite of the quote “A home automation enthusiast doesn’t have to be a Linux enthusiast!” and of its intention of being seamless/hassle-free.

I’m not asking or demanding that you, @mstormi, or anyone else develop this.
I’m suggesting that someone who really does care about the quote “A home automation enthusiast doesn’t have to be a Linux enthusiast!” - someone who also has the knowledge to develop it - also think about the other part: how to maintain the system without being a Linux enthusiast.

If it’s not possible, if no one listens, if no one cares, if no one has the capabilities - then this is how it has to be. But at least I tried.

You might not be aware of this, but your “suggestions” effectively mean automating disk handling. Either you (= the user) read up on enough UNIX to understand which commands you need to individually, manually apply to your system (to work with your specific HW and SW environment) - you can’t and/or don’t want to do that.
Or - that’s the alternative - you need someone to write down all the rules in UNIVERSALLY applicable form (applicable to ALL environments of every openHABian user, not just yours!). Now, if someone were able to properly and comprehensively do that in written docs form, he could probably just as easily automate it as a script; there’s no big difference. The major problem lies in the complexity of the “write down the rules” task (a.k.a. a comprehensive specification and verification, in software engineering terms) plus the inherent risk of applying it everywhere. Only to a minor extent does it lie in actually implementing it (doing the programming).

As already replied in private communication:
What’s the point in raising “suggestions” if not that you wanted someone to jump in and implement them?
No, wait - since you stated you’re not a native speaker (btw, neither am I), let me rephrase this because you apparently didn’t understand, and I do NOT want you to answer this rhetorical question:

What you call “suggestions” are demands.
And the ONLY point in raising demands is that you want someone to jump in and implement them.

Someone who can’t even mount a disk is unlikely to be able to give a proper judgement on whether such a thing can be done at all, how difficult it would be, how risky, and whether it is a good idea to try at all.
Building things is not a good idea all by itself. Doing things comes at a price: the risk of damaging people’s HA servers and the amount of work to invest, to name just the most important ones.
Rich and I have been trying to tell you this in several posts, but I feel you don’t want to understand that.

Use a dictionary to look up the meaning of “enthusiast”, please. The slogan is a pretty concise description of what openHABian is and also of what it is not. You totally misunderstood that from the very beginning and you still do.

You tried?
No comment. As already replied in private communication: let’s stop here, OK? I don’t want to waste any more of my time. If you still want to comment or justify your position further, please open a new thread but leave me out of it; I will not participate there. Thank you.

Well, I did as you said:
Enthusiast - “a person who is very interested in a particular activity or subject.”

“Very interested” is a relative definition. Notice the synonyms. Perhaps words like “fanatic” and “devotee” ring a bell. They’re all still relative, but the meaning in this context gets stronger.

In your opinion, one doesn’t have to be very interested in Linux to know how to add and mount a drive and path. That’s your opinion, which I accept, but I don’t agree. Others might have another opinion, which you should be able to accept as well, without dragging this debate down to a stupid and personal level.

You keep on telling me what I meant!
I know what I meant; you failed to understand it, even though I told you several times. Manipulation and stupidity will not change my opinion or what I meant.

Which makes me wonder why you decided to quote me afterwards. I guess no one is perfect!

I was going to let this go but decided to make one more reply.

I agree with Markus. What I think you don’t realize is that what you are asking for is exactly this. The only way to make this easier is to automate the disk handling. And that is very challenging and requires way more effort than I think you realize, if it is to cover the vast range of Linux platforms that OH can run on - completely ignoring all the other platforms that are officially supported.

And we really are not asking anything more of those who want to use Linux. The user does have to know something.

OK. We are running on a headless machine (i.e. no display, keyboard, or mouse attached). Exactly how is this system supposed to prompt you for what to do with it? If you are running a Desktop Linux with a GUI then yes, indeed, when you plug in an external drive it pops up a dialog to ask what to do with it. But we don’t have that.

And this was a decision made purposefully. The windowing environment requires massive amounts of resources. Running a remote desktop server like VNC requires even more. The amount of resources required to support this GUI environment would mean that openHABian could only work on the most powerful SBCs, if that, or full-blown computers. That seems like a pretty high price to pay so the user can get a nice little popup in those rare cases where a new USB drive is plugged in.

I’m sure it can be made better. But the point we are trying to get across is that it’s not our job to fix Linux. Given there is no way to get that nice little pop-up on a headless, command-line-only server, our choices are:

  1. write something that only supports a very limited combination of file systems and devices and completely automate it
  2. turn openHABian into a software appliance
  3. continue to require the user to know a little bit about Linux to maintain the system

Option 1 will require a lot of work and leave us with a vastly reduced set of options that can be supported. To support everything would require us to fix Linux.

Option 2 will require an unbelievable amount of work, probably more work than OH itself. And all this work would probably take effort away from areas which are a higher priority IMHO, like adding the ability to add tags to Items in PaperUI and other OH-specific usability improvements.

Option 3 is what we are left with.

First you say

My intention and suggestions was not to automate disk handling.

Then you say the above. So at least at some level you agree that we are talking about automating disk handling.

It is a solution. It is just a solution you don’t like.

The problem is that making openHABian better in this way requires us to either severely limit what is supported or make fundamental changes to how Linux works. Neither is acceptable.

Just to give a little sense of the scale of the problem: on Windows there are maybe four file system types supported: CIFS (i.e. Samba), NFS, FAT16, FAT32. Mac has a similar number of supported file systems (note there is not much overlap, which is why it is so hard to use a Mac-formatted drive in a Windows machine).

Linux supports 15-20.

And that is just dealing with the file systems. Then we have fundamental problems caused by the different ways network file systems and physical devices work which often boil down to permission problems.

OK, let’s go down this path for a bit.

This problem is WAY bigger than backup. If you really want to be able to install and maintain an OH system without being a Linux enthusiast (which, frankly, in this context means no Linux knowledge at all), then we need to build a software appliance.

What is a software appliance? If you have ever used something like pfSense, DD-WRT, Tomato, OpenMediaVault, openELEC or a number of other systems out there, you have used one. A software appliance comes as a full OS image with a very robust web-based or GUI-based user interface that allows the installation and configuration of everything that is allowed on that platform.

So what would that look like for openHAB? It would mean throwing out pretty much everything that has been done in openHABian and starting from scratch. Then we would need to build a robust web application that can:

  • configure everything necessary in the OS
  • install and uninstall software
  • configure said software through a web-based UI, which means WE have to write these UIs for all the third-party applications that OH can work with (which number in the dozens)
  • secure the web-based UI

Once we have that, then we can use an approach like pfSense’s: compile and return a big XML file containing the backup of everything, or add some sort of UI that lets you find and mount a plugged-in external drive to back up to, or something like that.

And even once this is done, there will be tons and tons of features that many users of OH depend on that won’t be supported, or won’t be supported in the same ways. And the long-term effort required to keep up with all the UIs and plug-ins and such, necessary to allow us to continue to use the latest versions of all the third-party programs, is huge.

Even if everyone dropped what they were doing now and only worked on this, I think it would be a couple of years before we saw anything even remotely useful. Look at PaperUI: it has been two years (I think) and it is still missing basic features like tagging Items.

And this is just to get us up to something minimally useful. I’m not sure we would ever get to the same capabilities.

It’s an open source project. If a group of developers want to take this sort of thing on they are free to do so. It would be welcomed by the community.

Moving on from the petulant bickering…

Having hosted my OH install on various platforms, I would say that for small / starter implementations, the quick and easy solution is a Raspberry Pi / openHABian install.

Having gone a long way past that, I have now discounted Docker, since the additional learning curve was steep, especially when I tried using local-access bindings like Exec.

My answer to the question “which is the best hardware platform” is actually this:
The best hardware platform is not a hardware platform at all - it is a virtual machine.
I’d suggest spinning up a Linux VM (or even downloading a ready-baked Linux image from one of the very many free sources online) and running it using your favourite free VM provider (VirtualBox, VM Workstation, ESX, etc.).
This gives multiple benefits:

  • You can quickly and easily create a copy of your environment for a dev area
  • You can back up your full environment between upgrades by a simple file copy
  • You can up the level of resources issued to your OH by simply changing VM properties
  • You can shift the hardware platform you are using whenever you like (my current OH has run on a MacBook, a Windows Server and ESX)
  • Any resource not required by your OH install can be used for other VMs, etc.

One thing I was wondering (which would help simplify the above discussion) is why we have not got a published openHABian-style VM image - if we did, we could have alleviated the above bunfight and had a very quick set of instructions to get people going that says:

Download Image File
Load VMWorkstation
Add VM to Workstation
Bask in the awesomeness of home automation

No, it’s not that simple.
It’s not about creating an image - it’s about maintaining and supporting it (including providing help on the forum).
openHABian is targeted at non-(UNIX-)experts. Any experienced IT guy can and probably will set up his own HW, possibly a hypervisor, an OS and finally openHAB, and will know how to operate all of these.
But this is what the typical openHABian user cannot do. He’ll quickly end up frustrated and drop openHABian or even OH if there’s no one to support him at the first barrier he encounters that he is unable to overcome on his own (and we all know there are always multiple such barriers).

Any openHABian VM image would need to include a Linux distribution in order to work on various HW and hypervisors. Maintaining and supporting this is an incredible amount of work (there are even companies that fail at this).
That’s why Linux, Docker, OpenStack and all those things are not part of openHABian, except for the very specific case of the RPi image, where the effort of going those extra miles is worth it since a) no virtualization layer is needed, b) we reuse an existing Linux distribution and, most notably, c) there are so many users of this combo that there’s a large benefit in providing it.

Particularly in home automation, going the VM route means an increased need to deal with HW issues on all levels. We need ZWave sticks, USB and RS485 interfaces and all other sorts of peripherals to work with the hypervisor as if they were locally attached.

Last but not least, very few people run data-center-grade server HW 24/7 at home on which they could use VMs for the purpose of home automation, so the cost-benefit ratio of providing and supporting such an image is a pretty bad one. While we would welcome someone with the capabilities and willingness to join in and provide VM images, no one has, and if someone did, he would likely be surprised how much work it turns out to be. Note that the recent bickering was essentially due to the fact that someone who has never* implemented such a solution (*no blame intended here) was underestimating, to an extreme extent, the complexity and effort of providing that solution.
Essentially, openHABian is what it is because that’s what we as a community are capable of providing in our spare time.

I’m happy you’ve found your VM solution to be the optimum for you.
But I would object if you were to claim that VMs are the ‘best hardware platform’ for all (or many) OH users out there, for the reasons I gave here (and some more). ‘Best’ is an overall judgement. It is not based exclusively on user benefits; it is a tradeoff in multiple dimensions, and you mustn’t forget about the downsides and implications such as the effort and cost to provide and support such a solution.

PS: Being a virtualization architect/engineer in my professional life, I sort of know what I’m talking about.
PPS: And no, I don’t have a 19" rack in my basement. I still run my (pretty large) home automation on an RPi with openHABian. It’s even fun to keep optimizing it.

Markus,

I too used to be a Virtualisation Architect by trade - I now focus on Infrastructure automation instead.

I was not having a pop at you at all - I was trying to walk away from the silly argument that this thread had devolved into, and was suggesting that I feel the best ‘hardware platform’ is not a physical piece of tin at all, but rather a solution that allows for easy capacity increases, shared resourcing, and quick in-place backups / snapshots for dev / test purposes.

In addition, I was suggesting this as MY opinion of the best approach.
I do understand that there are many people who struggle to get OH up and running.
In some cases, it’s a lack of familiarity with the new toolset. Adding a Raspberry Pi, or Linux, or even a VM image into the mix further complicates the matter.
A Raspberry Pi was super for me - until I got to the point where I was running InfluxDB, Grafana, Mosquitto and OH, and attempting some on-demand ffmpeg encoding for photo notifications from my front door when motion was detected.
I moved to a VM as a test initially (hosted on my MacBook) and very quickly saw how easily I could port my existing Raspberry Pi backup to the VM (I used an Ubuntu Server 18.04 image off the web, followed the quick install in the openHABian install guide - roughly the steps sketched below - with a few manual repo adds… ran a restore, and badabing).
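From memory, the manual openHABian setup on an existing Debian/Ubuntu box boiled down to something like the following - check the current openHABian docs before copying, since the exact steps may have changed:

```
# Rough sketch from memory - see the openHABian documentation for the authoritative steps
sudo apt-get install git
sudo git clone https://github.com/openhab/openhabian.git /opt/openhabian
sudo ln -s /opt/openhabian/openhabian-setup.sh /usr/local/bin/openhabian-config
sudo openhabian-config    # then work through the menu to install openHAB and friends
```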
Right now, my vmdk is hosted on a Windows 10 NUC using VM Workstation, while I rebuild my ESX - and it is performing well enough that I don’t even know if I need to bother moving it.

FYI - so far I have hosted my OH (starting with version 1 implementations) on:
Atom based netbook
Windows 10 - Lenovo Core2 Duo laptop
Ubuntu desktop on a shared i5 desktop
Raspberry Pi 2
Raspberry Pi 3B
Docker
Ubuntu server 18.04 on dedicated laptop
Ubuntu server on VM (running on a MacBook, then ESXi and temporarily on a Windows 10 box running VM Workstation)

I’ve been around the block - someone was asking for opinions, these are them:
For smaller implementations, raspberry Pi is great.
If you need a little more grunt and have the technical know-how to run a Linux-based VM, the nominal effort increase at setup time is worth it. Even if your initial hardware is relatively low-specced and dedicated to OH, you can load a lightweight hypervisor and host the VM, making future hardware migrations etc. so much easier.
Finally, if neither of the above applies, an old laptop (running with the screen off) tends to be both cost-effective to get hold of AND to run, with the added advantage of a built-in UPS and terminal when needed.

I run a VM on an old Intel NUC - ESXi as my hypervisor and Ubuntu Server 18.04 with openHABian as my platform.

I also switched to a VM running Ubuntu. I am still using my old Windows install until it dies, but I am trying to move everything into the VM now. I agree - the ability to change hardware is the most important thing for me. As I am doing this only as a hobby, it’s nice to be able to find an old PC and run OH on it.

I have thought about this solution. I have a few old laptops lying around (Lenovo W520s with Intel i7s). My main headache is how to get from my RPi, with InfluxDB and Grafana running, to this laptop without starting all over. There are probably solutions out there, but I fear they are time-killing options.
Also, I’m using a Zigbee RPi shield, which obviously won’t be ported. But since this is just a test case for Zigbee, it’s not that important.

I’m sure that would be a welcome addition to the options available for getting started with OH. Can we expect you to create and make available a VM image and a PR to the Docs to describe how to do it?

Personally, I don’t think running OH as a server in a type 2 hypervisor is a great approach for most users, but I’m sure it is a fantastic approach for some. Given that OH runs on pretty much any OS that will also run Java, I don’t really see what the additional complexity of running OH in a VM buys you when you can run it natively on the host OS.

But the biggest issues for many/most home automation users with using a VM include:

  • They don’t have a server machine running all the time on which to host such a VM, and shelling out $100 (much less if you already have SD cards and charging cables and don’t need a case) for a complete RPi setup is a lot easier to afford than $150-$300 for a NUC.

  • They are energy conscious and want any always-on machine to use as little power as feasible.

  • Markus’s previously mentioned additional complexities and problems that arise with interfacing hardware with the OH VM.

That isn’t to say I’m against VMs. I personally run OH in a Docker container on a VM on a type 1 hypervisor.

I’m not even against someone building and offering a VM preconfigured with OH. More options == better. But I would hesitate to recommend that solution to any user who doesn’t already have the skills necessary to set one up on their own.

Shouldn’t be too hard, really, assuming you are running a Debian-based Linux on the other machines/virtual machines.

I run OH, InfluxDB, and Grafana in Docker, so pretty much every time I upgrade anything I’m migrating to a new OS.
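For what it’s worth, a rough sketch of what that migration could look like (the hostnames and paths below are made up, and the InfluxDB commands assume the 1.x "portable" backup format - adjust to your setup):

```
# On the old RPi: back up openHAB and InfluxDB (example paths/hostnames only)
sudo openhab-cli backup /tmp/oh-backup.zip
influxd backup -portable /tmp/influx-backup

# Copy both over to the new machine
scp -r /tmp/oh-backup.zip /tmp/influx-backup user@new-host:/tmp/

# On the new machine, after installing openHAB and InfluxDB the same way:
sudo openhab-cli restore /tmp/oh-backup.zip
influxd restore -portable /tmp/influx-backup

# Grafana keeps its dashboards in /var/lib/grafana by default - copying that
# directory over (with Grafana stopped) usually carries them along
```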

My migration from RPi to VM was pretty simple using the details provided by Riko - the laptops you have knocking about are quite beefy for the requirement, so you’re in the fortunate position of being able to test any of the scenarios that I presented.
I’d have been OK staying with the RPi if I hadn’t been having performance issues - I’ve definitely seen more stability on the VM (and before you ask - yes, I tried both suggested Java versions).
It may well be that the stability was due to the rebuild - I’ll never know. Good luck whichever way you go - there is no wrong solution.

Hmm, sounds really simple. And yes, it’s Debian on one of my W520 laptops.
Maybe I should try it one of these days. Thanks, Rich!