Someone has already done this, at least they did it using an RPi Compute Module as the “computer” part. I can’t find the original postings right now. I’m not sure how much it cost though.
One of the challenges is that Z-Wave is proprietary and needs to be licensed by device manufacturers, which drives up cost quite a bit.
On the other hand, building something like this is exactly why the core of OH was split out into a separate project under Eclipse, so that companies get assurances that the IP had been properly transferred and they face no downstream risk of IP violations. So what you propose is right in line with where we want to go.
This would be very hard to do in a turn-key system like you propose without giving up persistence and losing state across reboots.
Dealing with corruption caused by power failures will be challenging as well, see Matt’s posting above.
That’s an idea. Take the approach that a lot of people take (e.g. OpenSprinkler) and sell it BYOP or with the RPi. Lots of people already have RPis lying around.
One challenge will be managing interference. I do know lots of people have issues trying to enable BT and the Razberry zwave hat at the same time. I’ve had problems running the BT in scanning mode with the wifi on RPi0w and RPi3.
Another form factor that would be even more useful to me and those who don’t run on a SBC would be a stand alone device that I can access over the network with all of those radios. Then I can put it in the ideal location and host my OH in the basement, for example.
Making something like that turn-key might be a bit of work, but I bet enabling some discovery over the network and scripts to set up socat would be doable.
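For the curious, a rough sketch of what that socat setup might look like. The host name, port, device path, and baud rate are all made up for illustration:

```
# On the remote radio box: expose the Z-Wave stick's serial port over TCP
$ socat TCP-LISTEN:3333,reuseaddr FILE:/dev/ttyUSB0,raw,echo=0,b115200

# On the openHAB server: create a local pty that tunnels to the remote box
$ sudo socat PTY,link=/dev/ttyNET0,raw,echo=0 TCP:radio-puck.local:3333
```

Point the Z-Wave binding at /dev/ttyNET0 and it should behave much like a locally attached stick, network hiccups aside.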
Thanks Rich. I found the other thread with the RPi Compute3. I forgot about the RPi compute sticks and they are actually cheaper than the Octavio SoC and more powerful.
I was hoping to keep the IP/proprietary stuff in the Ember chips so there are no violations. Really I would like to just make the Embers act like wired-in USB devices, without the USB. I.e. the dongles seem to be Ember+FTDI devices, so I could just skip the FTDI chip. I’m not certain of this yet. Also, perhaps being integrated means it then violates Z-Wave licensing; I don’t know yet. How mature is the Ember coordinator? I hear it works great with zwave but has lots of issues with zigbee.
I believe there is a way to prevent corruption if the OS is prepared for this hardware. Perhaps partition it into a read-only root/OS part, a config part, and a data part. The config part can be configured for immediate sync/flush, and its once-in-a-blue-moon writes would rarely coincide with a power loss. The data part would be the only risk, mostly logs and state data. This could corrupt but should be easily recoverable, or you just wipe it if you don’t care about history. The config part could be backed up to the cloud. It also helps to pare down the Linux OS to minimal services. I’ve had these same issues on my RPi-converted treadmill, which I can’t wait to control via HAB+Alexa: “Alexa, double time!” haha
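One sketch of such a layout, as an /etc/fstab fragment. The partition numbers and mount points here are illustrative, not from any actual product:

```
/dev/mmcblk0p2  /        ext4  ro,noatime        0  1
/dev/mmcblk0p3  /config  ext4  rw,sync,noatime   0  2
/dev/mmcblk0p4  /data    ext4  rw,noatime        0  2
```

The sync option on the config partition makes its rare writes hit flash immediately, which is exactly the trade-off described above.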
I was thinking this puck would be standalone, like you say. I would still have built-in Ethernet too, but I tend to favor WiFi for most things.
I am now thinking a HAT or Compute board is the ticket. Maybe I will just wait for Sergio to come out with his board, though I am not seeing mesh radios on there.
I think this thread concentrates all too much on hardware failure (especially regarding SD cards/writing to disk). And maybe this is correct. What worries me is that this is a rather general matter on all platforms and all kinds of storage systems. And there is really only one solution: backup.
You cannot have a 100% fail-safe system, and especially not a system like openHAB running on an RPi from an SD card. So while it might be a good idea to use an external HD for your system, you still need to back up the system.
Now… I doubt many disagree with that. Well, what really makes me wonder is: where is the backup option in openHAB? Heck, we're building automation systems here, and yet for backup there doesn't seem to be any good and easy automatic solution… (and easy means it has to be easy for people to use).
I have noticed in openhabian-config there is something called Amanda backup. To be honest, if that's what it takes to do a backup, I'd rather take my chances, do some manual copying from an SSH client once in a while, or otherwise start all over… This is NOT a motivating backup solution for anyone other than people who already know everything about Linux. I'm not one of those! Until I started with openHAB several months ago, I knew nothing about Linux. Today I know some, but thinking about it, the first thing offered should have been an easy backup solution for "dummies" like me!
I wonder why there isn't an option inside PaperUI to configure and run some kind of backup system. A simple script which adds all the important files into a zip file to download. Maybe even an advanced option to do it automatically and save to a network share, the cloud, whatever. And a super advanced option running as a cron job.
My bet is this would motivate users to do more backups.
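A script like the one described above is only a handful of lines. Here is a minimal sketch; the openHAB 2 paths assume a standard apt/openHABian install, and BACKUP_DIR is whatever destination you like:

```shell
# Where to put the archive (override with BACKUP_DIR=/mnt/nas etc.)
BACKUP_DIR=${BACKUP_DIR:-/tmp/oh-backup}
STAMP=$(date +%Y%m%d-%H%M%S)
mkdir -p "$BACKUP_DIR"
# The || true keeps the script going if one of the dirs is absent
tar -czf "$BACKUP_DIR/openhab-$STAMP.tar.gz" \
    -C / etc/openhab2 var/lib/openhab2 2>/dev/null || true
echo "wrote $BACKUP_DIR/openhab-$STAMP.tar.gz"
```

Saved as, say, /usr/local/bin/oh-backup.sh (a made-up path) and given a crontab line, that is the "super advanced" automatic option.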
But backing up openHAB may only be half of what needs doing. Reading about the Amanda backup, it also mentioned something about the Z-Wave network. I admit I never thought about this until I read it, but one actually has to make a backup of the Z-Wave network as well, in case the controller fails. Well, now things get a bit complicated. And then I wonder again: I have the Z-Wave binding installed, so why isn't there an option to back up the Z-Wave network from the binding as well? This leaves me to search for and install some third-party utility to do this backup… But why? Is it a limitation of the RPi my Z-Wave controller is connected to? I doubt that, to be honest; my Windows PC can do it. But my controller is connected to my RPi, so it makes no sense to remove the controller from the RPi, place it in my PC to do the backup, and then put it back in my RPi. That does not make the overall backup procedure an easy thing to remember, and it most probably ends up not being done.
My guess is that if the first thought of developers of systems (software in general) was "what to do when things go wrong" (again, note it's not an "if" question), things like bad SD cards, hardware failures etc. would be short discussions in general. (And I would have saved myself lots of time writing this.)
I’m no expert nor a lawyer, but my understanding is that if you sell a hardware device that claims to be Z-Wave compatible, it has to be certified (i.e. go through testing) and you have to pay a license. Those are the terms required to use the Z-Wave trademark and name. I don’t think using the Ember chips will get you past that hurdle unless Ember has already paid the dues and done the testing for you, in which case that cost is already included in the cost of the Embers.
I’ve never heard of it so I can’t say.
See Matt’s posting above. If they are on the same SD card then a write to the data partition can corrupt the read only partition on a power failure because of the way the SD technology works. Using a separate partition won’t avoid this. Only using a separate physical SD card will.
And honestly, if it were me I’d be more concerned about corruption on the data partition than the OH partition. I can rebuild the OS, but if I lose my database that data is gone for good. And if you are looking for a turn-key system, you must find a way to prevent corruption of the database. You can’t just throw it out on every reboot.
Have a look at the Docker Images.
However, be careful paring down too far. One of the more powerful workarounds for integrating technologies that do not have a binding is through the Exec binding or executeCommandLine. If you pare down the OS too far you render both of these useless. This is actually a pretty big problem with the Docker containers and is the reason I have a small Python service running that I can send an MQTT command to in order to execute command-line stuff for me (I run in Docker). The commands I need don’t exist in the container. Heck, the Network binding is neutered in Docker because there is no ping or arping command available. Heaven help you if you want to use some Python script you found that implements the protocol for your alarm system or something like that.
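The side-car idea is simpler than it sounds. A rough sketch of the pattern (it assumes mosquitto-clients on the host; the topic name is made up, and executing arbitrary payloads like this is only sane on a fully trusted broker):

```
$ mosquitto_sub -h localhost -t 'home/exec' | while read -r cmd; do
>     sh -c "$cmd"
> done
```

My actual service is a Python script with a command whitelist rather than a raw shell loop, but the flow is the same: subscribe, receive a command, run it on the host, done.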
I’m intrigued. Bet it would make a great write-up.
You will see Markus and I push this very strongly all over the forum. If you have a good backup AND tested restore procedure then it doesn’t really matter when/if you have an SD card failure. And if you have a good backup and restore procedure, then that really is the solution and you don’t really need to spend any time on reducing writes and the like.
However, when an SD card wears out, you still have the problem that it does so silently. There is no SMART for SD cards to tell you when it is about to fail. Weird stuff just starts to happen. And once you finally notice it you don’t know how long it has been wearing out and you don’t really know the extent of the corruption. Consequently, you don’t know how far you have to go back to get to a good backup.
openHABian has Amanda backup and recovery built in. Since OH 2.2, OH ships with backup and restore scripts in the bin directory. A lot of us use software configuration management tools like git to back up and restore our configurations. Even more people take images of their SD cards using Win32DiskImager or Etcher or whatever.
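For reference, on an apt-based Linux install those shipped scripts live under the runtime directory and take a single argument; paths may differ on other install types, and the destination here is just an example:

```
$ sudo /usr/share/openhab2/runtime/bin/backup /srv/backup/openhab.zip
$ sudo /usr/share/openhab2/runtime/bin/restore /srv/backup/openhab.zip
```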
The JSONDB, where all the configuration done through PaperUI or the REST API is stored, gets backed up automatically.
The disk image approach is by far the simplest for non-technical users. Just shutdown the RPi, pull the SD card, make the image on your Windows machine, plug it back in and reboot.
The problem is: back up to what? How? Everyone’s system is different; they all have different hardware, services, and configurations.
Because OH runs on just about everything, and the nature and method of the backup would be different for each and every OS that OH can run on. That isn’t to say that it is impossible, but it isn’t easy and no one has taken it upon themselves to implement something. The fact that Benjy has implemented the backup and restore scripts that ship with OH to make them work across all OSes is honestly pretty awesome already.
And then you have the problem of what to back up. A simple script like you mention pretty much already exists: it’s what Benjy wrote. However, what about your Mosquitto configuration? What about the contents of your InfluxDB, where all the data you have been saving for the past year resides? There is no way that stuff can be backed up from PaperUI because all of it is third party, and it may or may not even be there. That is why openHABian uses Amanda: it backs up the whole system.
This is another thorny problem. Not everyone uses zwave and not every zwave controller supports being backed up and restored.
Because it hasn’t been implemented. OH is an opensource project. People donate their time to work on those things they want to work on. There is no way to force anyone to work on anything in particular.
Usually that third-party tool comes from the manufacturer. I can’t say for certain, but I wouldn’t be surprised if the method and API calls necessary to do a backup and restore of the Z-Wave controller vary from vendor to vendor and even potentially from model to model. I don’t think there is a standard way to do it. I know for sure the Aeotec Gen 2 doesn’t even support it. That’s one of the reasons I recently switched off of it.
So this is just one binding. Multiply that by 300+ bindings and it really starts to become an untenable problem. Perhaps someday it will be addressed, but to do so will take extensive additions to the core of Eclipse SmartHome.
So your best bet is to backup the whole damn system which means:
cloning the SD card like described above
running a tool like Amanda
If all you care about is OH, then you just need to grab /etc/openhab2 and /var/lib/openhab2, which is what Benjy’s scripts that ship with OH do.
It sounds so simple. But how simple does it sound to produce a backup and restore system that:
works on Windows, openHABian, other Linux, QNAP, Synology, and Docker
backs up not just OH stuff but also the configuration and data stored on external hardware (e.g. the Z-Wave controller)
backs up the configuration and data stored by third-party services, including one or more of five separate database servers, not counting the near-infinite number of database servers that support JPA or JDBC (JPA and JDBC are standard interfaces implemented by databases, so OH doesn’t even know what the database is when using these)
backs up other externalities like that one Python script you found on GitHub that lets you talk to your alarm system using the Exec binding
backs up the operating system itself
Oh, and that hardware or those external services that you are trying to back up may not even be running on the same machine as OH
I have been testing the Telegea Smart Hub for some time now, and while it was a bit of a learning curve to get openHABian loaded onto the RPi Compute, once I did it’s been pretty solid.
It’s funny though because I’m now comfortable with imaging the SD card in my main raspberry pi regularly as a backup and now I’m nervous as to how I’d backup the compute board. I’m sure there is a way though.
But the Smart Hub does have plenty of functionality, so do take a look at it! I was lucky enough to get a review unit to test, so I’m not sure of the final cost though.
I’ve got OH 2.2 running on Ubuntu in a virtual machine together with a couple of others (Plex server, OpenVPN…).
It’s been running great for over a year now. The hardware is a small barebone PC with an i3, I think, so it consumes minimal power. Z-Wave USB passthrough is also stable.
I had some issues installing ESXi as the network adapter was incompatible, but with a custom distro it works really well.
My path looks like this:
Single Windows 7 OS
Single Ubuntu OS
Raspberry Pi2, SD card failed
Raspberry Pi3, SD card failed
ESXi with Ubuntu 16.04 LTS as a VM with a Z-Wave Aeon Gen5 stick.
I have an off-site server with Veeam and site to site vpn tunnel, so it is backing up all VMs twice a day, in case I mess it up I just revert to the backup snapshot.
I also experimented with iSCSI and a datastore on my Synology NAS, but as I only have a 1 Gb/s switch, I decided an SSD would run the VMs much faster.
This is not said to criticize the developers. It's mainly said to put more focus on this matter. It's a general point regarding all kinds of software and computer systems, whether you're a developer or a user.
When you look at openHAB in perspective, you'll notice how much development has been done to make it easy and simple to get started. In short: download the image, copy it to the SD card, insert the SD card, and you're good to go. Simple, easy, anyone can do this.
But when it comes to backing up the system, you're facing a major challenge, just as you mentioned.
I took on the challenge with Amanda the other day and sat down and read about it (because my openHAB is acting up lately). I only got halfway through the docs before I started to think, "it will take me forever to understand this and get it to work properly". Two seconds after, I found myself searching for new features/bindings in openHAB instead. I simply gave up on Amanda.
I can't help wondering: is it me being too stupid to understand things like this, or is the task itself too challenging? Maybe it's a combination of both. I'm challenged just from not knowing anything about Linux. But does that disqualify me? I knew nothing about Linux/openHAB until a few months ago, yet I managed to get "something" to work.
In an ideal world, a backup procedure should be as simple as the procedure to get started with the software itself. Meaning, if I can get the system up and running, I should be able to back up the system just as simply.
I know it sounds easy, and I know in real life it isn't. But if we keep saying it's not easy, it will never get easier. This is why I ask the developers to put more focus on this matter, just as I ask the users (myself included) to spend more time on it. Both sides need to face this challenge.
Actually, it IS that easy.
To install openhabian, you download an image to your PC and write it to your SD-card.
To backup, you put your SD-card in your PC and make a copy of the image.
Both can be done with, for example, win32diskimager. That's how I make my full backups, plus I make regular backups of openHAB's conf folders and the openhabian user's home folder where I have all my custom scripts.
And there is a backup approach that is just that simple. You had to write an image to an SD card to get started, right? You can use the same tool to make a backup copy of that SD card. The drawback is you have to take the system down to do it.
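On Linux the same imaging can be done from any machine with an SD reader. The device name below is only an example; double-check it with lsblk before touching dd, since pointing it at the wrong device is destructive:

```
$ lsblk                                   # find the card, e.g. /dev/sdb
$ sudo dd if=/dev/sdb of=openhab-sd.img bs=4M status=progress
```

Restoring is the same command with if= and of= swapped.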
But this is a procedure which requires repeating a manual operation over and over again, unlike installing openHAB (software/OS/whatever), which is a one-time operation.
The procedure itself of making an image of the SD card is easy. But the repeated manual operation is, in my opinion, the reason why many users won't get it done in time.
So, we know at least one good and easy way to do a backup. Now turn the focus to handling the challenge of the "repeating manual operation" part of it, and making it an easy, simple, and automatic operation.
I'm not saying it's an easy challenge. But it should be the way to think when you are developing something in a computing environment, just as it should be something users focus on.
See my reply to pacive.
I do not agree that taking down the system is a drawback. The challenge is the repeating manual operation.
But it depends on how big the system is and how long the backup procedure will take, of course. A huge system with several TB of data would require far too much downtime for such a backup. A small system like openHABian (openHAB) should be possible to back up in just a few minutes.
Any other procedure is going to require you to set up a place to back up to since you must backup everything on the drive to catch everything. And that is the hard part of setting up Amanda. Once set up, Amanda takes a single command to run or can be set up to run automatically.
In short, automatic backups do not really get any simpler than Amanda, because they all will require you to configure a place to back up to.
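For example, once the openHABian Amanda setup has been completed, the recurring part can be a single crontab line. The config name openhab-dir is what openHABian's setup creates by default, if memory serves, and the run happens as the backup user:

```
# m h dom mon dow user    command
0 1 * * *         backup  /usr/sbin/amdump openhab-dir
```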
Well, Amanda is difficult for some but not impossible.
But I will argue that any backup system that:
works across operating systems
supports restore (without which there is no point in backing up)
will, like openHAB/openHABian itself, require something of the user.
It's not the requirements themselves which are the major hassle. If the RPi had a second SD card reader, it would be obvious. The RPi doesn't, so move on to the next option: USB disk, NAS, network path, etc…
When moving on to possible requirements, make sure the procedure to include these requirements is as easy and simple as installing the software (openHABian/openHAB). Meaning, I should not have to study Linux for decades to include these requirements in order to do a simple, easy, and effective backup procedure. That breaks the idea of being hassle-free.
This leads to your next sentence.
So if we can agree that setting up Amanda (or Amanda's requirements) is the hardest part, then developing Amanda should focus on making that part easier. I'm not asking for a procedure a monkey can deal with, but somewhere in between.
This leads me back to my first suggestion: PaperUI (or in fact some similar UI, like openhabian-config, or perhaps an amanda-config) for setting up and running Amanda and its requirements. In PaperUI there could be an option to make this an automatic cron job via rules or whatever, perhaps even setting up Amanda as well.
I know this sounds simple. But I fail to understand why it would be close to impossible, no matter what environment we're speaking of. Of course the user has to get the necessary hardware (backup storage/cloud service/whatever), connect it to the RPi (or whatever hardware is being used) and then configure Amanda. Anything else is in the developer's control to make possible.
Amanda (the backup procedure) would end up creating an image backup file, which the user can start over with using the same procedure as the first starting point: write the image file to an SD card, insert the SD card into the RPi, and the system is up and running from the date the image backup file was created.
(This is a short story. I know there will be more questions to answer going through the setup. But I hope you get the point.)
Now, where is the problem?
I know the above procedure will not suit every one of us; some have even less knowledge. But in principle, it's a matter of making this as hassle-free as possible, regardless of the user's knowledge.
And OH would have to support backup to ANY of those options. And most work differently and have lots of options in how they can work. Just setting up and maintaining an automated system to work with all of these potential external devices is a project almost as big as OH itself.
So let’s assume we leave that up to the user to provide a path to a folder that represents some external drive or network shared drive. Now we are back in Amanda territory that apparently you “have to study linux for decades” to figure out how to mount a USB or network file system. We’ve solved exactly nothing.
I’m honestly not sure what could be done to make the setup of backup and restore on openHABian any easier. Assuming a USB drive for backup:
Choose 50 Backup and Restore
Choose install and configure
Answer the questions asked (password for backups, etc)
Say OK to back up to Locally attached or NAS storage
Provide the path to the external drive. Apparently the hard step
Answer the rest of the questions
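For what it's worth, the "hard" step 6 boils down to mounting the drive once so that it has a path to provide. A sketch, assuming the USB drive shows up as /dev/sda1 and is formatted ext4 (check yours with lsblk first):

```
$ sudo mkdir -p /mnt/backup
$ sudo mount /dev/sda1 /mnt/backup

# to survive reboots, add a line like this to /etc/fstab:
/dev/sda1  /mnt/backup  ext4  defaults,nofail  0  2
```

/mnt/backup is then the path you hand to the openhabian-config dialog.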
If your main complaint is that it is hard to ssh in to the RPi to do the initial setup then perhaps you should be running on Windows or something more familiar. If the main complaint is step 6, well ALL backup systems will require this of you. If projects that are nearly as big as the OH project do not automate this for you what chance is there that OH would have the people with the skills and willingness to implement it?
From an OH perspective the problem is that Amanda only runs on certain Linux distros and Windows. That leaves out Mac and the NAS systems (QNAP, Synology), which means adding it to PaperUI would make at least that part of OH incompatible with those platforms. Based on the standards of contributing to the project as I understand them, that means it would not be allowed. A solution that works across them all would have to be provided.
So now the problem has become bigger and we can no longer rely on Amanda. In fact, I know of no backup system that supports all the platforms that OH is supported on so we have to figure out multiple backup and restore solutions. Then we have to somehow automate all the dozens of different ways that someone may choose to mount an external file system to backup to on each of those platforms.
Then we have the problem that some of the platforms like Windows don’t allow complete backup of the system without making very deep hooks into the OS to prevent certain things from being written to while the system is running. Now we have a huge amount of code to write that is specific to only one platform. Oh, and that code can’t be written in Java so we have to either have a subsystem that exists outside of OH for Windows backup or we need to rewrite all of OH in another programming language.
Because of this line of reasoning.
No changes will be allowed to the OH UIs that are platform specific
Your home automation system consists of more than OH
OH cannot know exactly what other services you are using in your home automation system
Therefore to backup your full home automation you must backup the full drive that OH is running off of, operating system and all (NOTE: this still won’t catch everything as many if not most OH users run other services on other hardware separate from the OH server).
Every platform works differently, so almost every platform will require its own backup system. At a minimum Linux (Amanda), Mac, and Windows (Amanda won’t back up the full drive on Windows, just selected files).
Certain platforms like Windows will require code written in some other programming language, I’ve no idea what would work for something like QNAP or Synology though perhaps those won’t matter since they should be running on RAID anyway.
Once you get to 5 we are looking at at least three different subsystems, at least one of which will be as large as OH itself in terms of the amount of code required, to solve a problem for which there is already third-party software, both FOSS and commercial.
Cool, that is mostly how Amanda on openHABian works. But what about OH on a Mac, Windows, etc? There is no SD card to copy. The drive is likely to be too large to fit on an SD card to back up. What about those who have a NAS and want to backup to that?
This solves the problem nice and easy for your specific configuration and deployment. Yours is not the only way to configure and deploy OH.
Guys, just some quick input on this.
It’s obviously out of scope for openHAB(ian) and therefore not mentioned in the openHABian Amanda Readme, but you can run Amanda on MacOS and almost any UNIX (including Synology and QNAP NAS).
And there’s even a Windows client available, too, so as long as you have a UNIX backup host to run Amanda on you can even use this to backup your Windows openHAB or even Windows based gateway or desktop PC.
@rlkoshak: any Amanda setup can back up (and restore) both directories and complete raw devices (such as SD cards or HDDs). It’s even the openHABian default to do both in parallel, and the Readme recommends using this to create an SD clone as your first action after you run your first backup.
@Kim_Andersen, I honestly don’t understand what you feel to be difficult about setting up openHABian Amanda. Rich quoted the steps. It’s as simple as it can get given the range of HW and functionality we want it to cover.
Yes you need to read a single somewhat lengthy doc but doing so does not seem to be a problem given the time you already kept spending on this thread alone. Yes it’s easier for UNIX people but there’s quite a number of Windows people who successfully made it and I’ve spent quite some effort to incorporate their findings into the Readme.
If you have questions or problems, put them up in this Amanda thread and I’ll try to give an answer.
If you feel the Readme is missing anything essential or is unclear at any point, let me know where exactly and why.
As far as I can read from others, this IS the hardest step. And I guess for most, they would say this is where things mostly go wrong.
Now, focus on that one… Try to think of some kind of easier procedure for this specific matter. It seems to me you're saying steps 1-5 and step 7 are easy, as if it then doesn't matter that step 6 is a hard one.
That doesn't make any sense, as step 6 is probably the most important part.
In openHABian (Linux), PaperUI is used to add/remove things/items/devices. Then tell me, how is a USB device different from anything else, whether it's a Z-Wave dongle, a USB storage device, etc.?
I know Linux has to recognize the USB device, of course, to be able to spot it. This is the only requirement the user should be dealing with. And it's the same on all platforms and all devices.
But if I plug a USB storage device into my RPi, I do not have the option to configure this device the way I configure all other devices in openHAB…
Once you have the USB storage device configured, you're free to use it as you please, whether it's for backup or anything else.
This is where configuration of Amanda or any other backup procedure comes in, and you have now made the hard step 6 easier. Agree? You could even have several kinds of backup procedures to choose from within openHAB (PaperUI etc.), just as you have bindings/connections to everything else.
My main complaint is not whether I find it hard to use ssh.
It's a matter of principle to me: why there is a need to use ssh at all when you're setting up a system which is supposed to be hassle-free and has its own kind of configuration.
As said, I could live with doing it inside the openhabian-config panel, though I still don't think it's the best option. But it's an acceptable alternative, for me.
Either there is a language barrier or you simply misunderstand.
Amanda is included as an option within the hassle-free setup of openHAB from inside openhabian-config, right? That leaves out every other platform for this matter. It doesn't matter that Amanda doesn't run on every other platform. It's there, in openHAB. Why bother about the other platforms then?
Amanda for other platforms either has to be supported (somehow) or left out. I can't see how setting up the requirements has anything to do with this. Macs, Windows etc. can use USB storage devices or even map to a network device. So where is the catch?
Let's twist this around to another point of view.
I assume you know Windows.
Imagine if setting up a connection to an external storage device required you to go through a DOS prompt (command prompt), even though you have Windows Explorer. My bet is, if that happened today, lots of people would think it's the wrong way to do things, and even more people would not be able to deal with it.
My mother, who is 72 years old, is able to connect an external USB storage device to her Windows box and make use of it, whether for backup or anything else.
Remember, you said it yourself: step 6, "Provide the path to the external drive", is the hard step.
Keep focusing on that one. And tell me, where is the catch in doing the same with openHAB (openHABian)?
Difficult or Unnecessary, from a user perspective?
Remember, you, as the knowledgeable one, will always believe it's easy. You may even find it easy to read the docs, due to your experience and understanding of the system itself. Not everyone has the same knowledge and experience as you (or me).
I don't mind reading stuff. I have been in this "game" for decades now. But my situation is not really the best to target on this matter, due to my experience. I started off in 1986 with DOS and moved on to OSes with graphical interfaces (Atari TOS) and Windows.
Now I find myself being sent back in time, over and over again, ssh'ing to the OS to do stuff from a command line that I have been doing a lot more easily over the last couple of decades. And I simply fail to understand why it has to be this way, especially if we can agree upon:
Providing the required path and making it available to Amanda is the hardest part.
I have a feeling this is due to "that's the way it has always been in Linux/Unix". And like you said yourself in a quote from the Amanda thread: "You can't teach everyone", and "this requires some Linux knowledge".
I understand you see this as two different tasks.
First: connecting and setting up the path of where to store the backup.
Second: the backup procedure itself (Amanda).
The first task is the one which you believe is not part of Amanda, and you won't teach people about this.
I accept your opinion, and I partly agree. In an ideal world, it should not be part of Amanda.
But in real life it is, because nowhere else in the hassle-free setup of openHAB (openHABian) is this part available (as far as I know). And it's a necessary requirement for Amanda to work. Agree?
So in fact my opinion is not really just about Amanda itself. It's about the structure of the whole environment.
If the system itself provided an easy procedure to connect an external device and set up the required path, most people would probably work through the rest of the Amanda setup without much hassle (i.e. blindfolded).
I'll rest my case on this matter. I have provided my opinion about it. Hopefully some day someone will understand this from my perspective. Until then, ssh'ing and using command lines is what it takes = not hassle-free unless you have Linux experience.
Like I said above, if mounting a file system on a Linux system is too hard, then run on a platform you are more familiar with like Windows.
openHABian does a lot to make installing and configuring OH on a Linux system easier but it does not eliminate the need for you the user to possess or be willing to learn the basics of navigating and working on Linux. That is why we also support running on Windows and Mac and more.
Perhaps someday someone will sell a turnkey black box OH or ESH based hub, but I agree with Markus, that is outside the boundary of what the OH project can or should be responsible for.
Because a USB Z-Wave dongle appears as a serial device, OH can talk directly to it (once you give the openhab user permission to do so). Opening the file /dev/ttyUSB0 means you are performing raw reads and writes directly on that device. So it is a pretty simple matter to supply the path to the device to the binding, and the binding can just start reading and writing.
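To make the “give the openhab user permission” part concrete, here is a hedged sketch. The device name /dev/ttyUSB0, the `dialout` group, and the `openhab2` service name are assumptions (they hold on Debian/Raspbian-based setups like openHABian; check `ls -l` on your actual device node):

```shell
# Check whether a user can already read/write serial devices, and print
# the fix if not. Group "dialout" and service "openhab2" are assumptions.
check_serial_access() {
  user="$1"; group="$2"
  # id -nG lists the groups the user belongs to
  if id -nG "$user" 2>/dev/null | grep -qw "$group"; then
    echo "$user already has access"
  else
    echo "run: sudo usermod -aG $group $user && sudo systemctl restart openhab2"
  fi
}

check_serial_access openhab dialout
```

The restart matters: group membership is read when the process starts, so openHAB won't see the new permission until it is restarted.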
Drives are different. Not only do you have the device location, you also have a file system on top of it. A program like OH doesn't communicate directly with the drive; it has to go through a part of the operating system called the file system. NOTE: there are over a dozen different types of file systems. In order for that to happen, one must tell the operating system what type of file system it is and where you want to mount that file system.
These are all operating system operations pretty much always outside the boundaries of what a typical program like OH would implement. Also, these are operations that require root to perform and from a security perspective you never want to run something like OH as root.
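A quick sketch of what those root-only OS operations look like, which is why OH can't just be handed a drive path the way it is handed /dev/ttyUSB0. The device /dev/sda1, the vfat type, and the mount point are placeholder assumptions (check `lsblk -f` for your real ones); this block only prints the commands rather than running them:

```shell
# Dry run: the mount/umount steps a user (not OH) performs as root.
# /dev/sda1, vfat and /mnt/backup are hypothetical; substitute your own.
dev=/dev/sda1
fstype=vfat
mountpoint=/mnt/backup

echo "sudo mkdir -p $mountpoint"                  # create the mount point
echo "sudo mount -t $fstype $dev $mountpoint"     # root-only: attach the fs
echo "sudo umount $mountpoint"                    # root-only: detach cleanly
```

Note that both the file system type and the mount point have to be supplied by someone who knows the system, and both commands need root, which openHAB deliberately does not run as.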
And this just focuses on one potential way to back up: USB thumb drives. What about those who have a NAS and want to back up to that?
But most of the time on a Linux platform all you have to do is plug in the USB drive and it will automatically be mounted and will appear under /media.
So your step 6 would be:
plug in the USB drive
configure Amanda to use /media/nameofthumbdrive for the backup, or if you are on Windows use d:\ (only you don’t know ahead of time what drive letter the thumb drive will be assigned).
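The steps above can be sketched as a quick scan for where the OS auto-mounted the drive, so you know what path to hand to Amanda. The /media/&lt;user&gt;/&lt;label&gt; layout is an assumption (it varies by distro, and openHABian's headless Raspbian base may not auto-mount at all):

```shell
# List directories under a mount root (default idea: /media) that could be
# the auto-mounted thumb drive. Layout /media/<user>/<label> is assumed.
scan_media() {
  found=0
  for d in "$1"/*/* "$1"/*; do
    if [ -d "$d" ]; then
      echo "candidate backup path: $d"
      found=1
    fi
  done
  [ "$found" -eq 0 ] && echo "nothing mounted under $1"
  echo "scan complete"
}

scan_media /media
```

Whatever path this turns up is the Linux equivalent of the d:\ you would give the backup software on Windows.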
Because that isn’t OH’s job. It is the operating system’s job.
I think the major point that I’m trying to get across and failing is that yes, OH could potentially implement something like this assuming the problem space were drastically reduced (i.e. “we only support backup to USB drives” which would be unacceptable) but:
that isn’t OH’s job
the procedure would either have to be so limited as to only be useful to a tiny fraction of OH users or would have to be as complex as doing it through the operating system in the first place
openHABian IS NOT openHAB. PaperUI is not openHABian.
openHABian is a set of scripts that automate the installation and configuration of openHAB and related services on an apt-based Linux distro.
If you want to make the Amanda configuration in openHABian easier I’m all for it. You have just about everything you need right there already:
constrained to one type of Linux distro so you can eliminate whole swaths of possible configurations
runs as root so you don’t have to worry about permission problems doing things that require root to perform
But again, openHABian IS NOT openHAB. It is a support mechanism to openHAB. You don’t have the ability to install and configure Mosquitto from PaperUI. You don’t have the ability to install and configure InfluxDB, MariaDB, MySQL, etc through PaperUI. You don’t have the ability to install and configure SAMBA through PaperUI. And those features should not be there because they are not openHAB’s job.
Now if you want to argue that openHABian should implement some sort of web-based user interface, that is something else entirely. I could see openHABian growing in that direction at some point. That would reduce the need to ssh into the RPi somewhat.
But right now, openHABian is not intended to replace the user’s need to ever access their RPi except through a web interface. It is a convenience that helps perform a common set of setup and configuration tasks for you.
What you are asking for is a turn-key system that never requires the user to get dirty on the command line. That is NEVER going to be implemented by OH itself. Something like openHABian can maybe grow into that because it has a much smaller set of parameters it needs to deal with. And that would be kind of cool. But that is beyond what I think was ever envisioned for openHABian.
No. We haven’t made anything harder. Step 6 is as easy or as hard as it ever was with or without OH.
No one ever claimed that openHABian was hassle free. No one claimed it was a turn-key system. No one ever claimed that you do not and never need to have some basic Linux skills to use openHABian to set up, configure, and maintain a system to run openHAB. You don’t have to be a “Linux enthusiast” but that doesn’t mean you can remain completely ignorant of all things Linux. You still have to know some basics.
Because you want to make this openHAB’s job through PaperUI.
openHAB != openHABian
Because you want to have this set up so the user can set up this USB storage or network mapping without any knowledge of how these operating systems work or how to do it. Mapping a network drive is simple enough for you because you know how to do it. But you want openHAB to do that for you.
Or, if you want parity then you need to expect the same of Linux users as you expect from Windows users: i.e. figure out how to map a network drive through the OS. Then tell the backup software where to save the backups.
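For the Linux side of that parity, the OS-level step usually comes down to one /etc/fstab line. A hedged sketch, assuming a Samba/CIFS share; the server name, share name, mount point, and credentials file below are all hypothetical placeholders you would substitute:

```
# /etc/fstab — mount a NAS share so backup software can just be given the path.
# //mynas/backups, /mnt/nas-backups and /etc/nas-creds are placeholder names.
//mynas/backups  /mnt/nas-backups  cifs  credentials=/etc/nas-creds,uid=openhab,iocharset=utf8  0  0
```

After a `sudo mount -a`, pointing Amanda at /mnt/nas-backups is the same kind of step as pointing Windows backup software at a mapped drive letter.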
I’m equally comfortable on most OSs including Windows, Mac, most Linux, BSDs, AIX, and Solaris.
Then don’t run on Linux. You have other choices. But at the end of the day:
dealing with mounting file systems is the job of the operating system not a program like openHAB
this could be something that openHABian could help more with, but that is outside of openHAB. openHAB != openHABian
openHAB, even with openHABian, is a LOOOOOONG way away from being something your 72-year-old mother would be able to handle, unless she were technically and computer inclined. We’d like to get there someday, but I’m skeptical it can ever happen without neutering or excluding whole swaths of supported technologies.
I would argue that OH is not yet ready for use by someone who is unable or unwilling to read the docs and examples to configure and make a system out of OH. A much more limited system like SmartThings, Wink, or Vera would be a more appropriate platform for such a user.
The reason I say this is because:
the number of technologies OH supports is overwhelming
there is a whole lot of variability in the comprehensiveness and quality of each of the 300+ APIs and technologies that are supported
major portions of OH config still require text-based configuration
OH will ALWAYS require at least a little bit of ability to think like a coder (if this then do that type thinking)
Lots of work is being done to make all of this better but we still have a very long way to go. And I’m personally not convinced that rules will ever be simple enough.
But the key points are:
openHABian is not openHAB. This is NOT an openHAB job.
it is something that openHABian can do better but it will require someone to build and contribute a web-based front end to openHABian
if you don’t like using ssh and other standard ways to interact with openHAB, there are plenty of other platforms upon which openHAB will run. Your main complaint seems to be that you don’t like how Linux does things (at least the stripped-down version of Linux openHABian sets up and configures; if you use a desktop version of Linux then it is as easy to deal with USB drives and network mounts as it is in Windows). It is not openHAB’s job to fix Linux. It could be the job of openHABian, though I think that is beyond the vision of what openHABian was intended to be.