Daily Backup Solution

There’s no definitive tutorial that I know of; most of the time that’s because S3 all by itself can be tricky to set up.
Do you know this link?
Please give it another try and let me know if there’s anything in the openHABian Amanda setup that needs fixing or documenting.

Thank you for the link. It’s not the same as what the Amanda wizard uses.
I tried the Amanda wizard and think I entered everything correctly. I get the following errors in the log:

2018-10-01_20:46:15_CEST [openHABian] Checking for changes in origin... OK
2018-10-01_20:48:12_CEST [openHABian] Setting up the Amanda backup system ...
$ apt -y install amanda-common amanda-server amanda-client
Reading package lists... Done
Building dependency tree
Reading state information... Done
amanda-client is already the newest version (1:3.3.9-5).
amanda-common is already the newest version (1:3.3.9-5).
amanda-server is already the newest version (1:3.3.9-5).
The following package was automatically installed and is no longer required:
  triggerhappy
Use 'sudo apt autoremove' to remove it.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
mkdir: cannot create directory ‘AWS/slots’: No such file or directory
/bin/chown: cannot access 'AWS/slots': No such file or directory
ln: failed to create symbolic link 'AWS/slots/drive0': No such file or directory
ln: failed to create symbolic link 'AWS/slots/drive1': No such file or directory
Reading label...
Error reading volume label: While creating new S3 bucket: The specified bucket is not valid. (InvalidBucketName) (HTTP 400)
Not writing label.
[the three lines above repeat 15 times in total, once per slot]
/bin/chown: cannot access 'AWS': No such file or directory
/bin/chmod: cannot access 'AWS': No such file or directory

The S3 console (online) tells me that my credentials have been used, but I have no idea where the files are (if there are any).

Bonus question:
The wizard has no restore option, right?

Unfortunately I am quite busy this week but I will be working on it the next week (hopefully).

Duh, you caught me red-handed. Seems the Amanda config generated for S3 backup didn’t work, but no one noticed except you.

I refactored the Amanda install routine in openHABian so it now creates a (hopefully) working Amanda config.
Please help with testing: download https://github.com/mstormi/openhabian/blob/patch-12/functions/backup.sh and put it into /opt/openhabian/functions/ to replace the backup.sh there, then try again.
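If it helps, one way to fetch and swap it in from the command line (the raw URL below is derived from the blob link above; saving a copy of the original first is just my suggestion, not a required step):

$ sudo cp /opt/openhabian/functions/backup.sh /opt/openhabian/functions/backup.sh.orig
$ sudo wget -O /opt/openhabian/functions/backup.sh https://raw.githubusercontent.com/mstormi/openhabian/patch-12/functions/backup.sh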

It doesn’t need any. It generates the config files from scratch, i.e. it overwrites the existing files in /etc/amanda/. You can also edit or save/copy those directly.
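So if you have made manual tweaks there that you want to keep, just snapshot the directory before re-running setup, e.g.:

$ sudo cp -a /etc/amanda /etc/amanda.bak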


Awesome, thank you! It seems to work. I got no errors and 15 files in my S3 bucket: “slot-1special-tapestart” etc.
I have read about Amanda, openHABian and S3 several times in the forum, and nobody has ever actually tried it? :wink:

The script seems to work now. Thank you very much.
I have to take a closer look at how the restore works, when the backup is triggered, what files are saved, etc.

One thing:
My first bucket had another location (us-something-1). Entering this location into the wizard resulted in an error. I changed it to the suggested (and my nearest) location and it worked fine.

Apparently so. And frustrating. But people don’t think about backup until they need a restore for the first time.

Whatever string you enter on setup is just put into the Amanda config, so whenever this results in an error on backup, it is related to the S3 config on either the Amanda or the AWS side.
I noticed there are some differences among S3 regions.
I, too, had a US-located bucket and created a nearer European one this time. But I had to change the authentication mechanism, as the old V2 auth version isn’t supported over here. Maybe that broke the use of US buckets, although those are supposed to support V4 as well. If someone complains I can make auth configurable, too. You can also selectively edit amanda.conf after you have run setup.
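For reference, the auth setting lives in the S3 device definition in amanda.conf. Roughly like this - a sketch from memory of the amanda-devices(7) properties, so verify the names against your Amanda version; the bucket, region and keys are placeholders:

define device my_s3 {
    tapedev "s3:my-bucket/openhab/slot"                   # bucket and prefix are placeholders
    device-property "S3_ACCESS_KEY" "..."
    device-property "S3_SECRET_KEY" "..."
    device-property "S3_BUCKET_LOCATION" "eu-central-1"   # must match the bucket's region
    device-property "STORAGE_API" "AWS4"                  # V4 signing; the old default was V2 auth
}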

One thing in return:
Attention: openHABian will update itself on startup, and that would overwrite your recent changes (until that code is accepted as an openHABian change and everybody gets it).

Well, yes and no.
Sure it’s a complex system, but the openHABian setup is supposed to hide all that complexity from the user.
Backup is fully automated, and it’s just one command to restore files right to the client system where I want them to be restored (plus another one if you also backup/restore partitions).
It also features a shell-like interactive mode for restore, so browsing actually is quite as simple as with G**gle Drive - see the README. I’d even dare to claim that - all aspects considered - it’s easier to use in total because it is all-integrated, including versioning, compression and the restore function.
OK, Google is free and S3 no longer is after 12 months, but I’d say that’s rather negligible. I had missed that in the beginning and had been wondering about some billing mails - why did Amaz*n keep charging me $0.01 a month? Now I’ve found out it’s my Amanda bucket on there. Well, that I feel is a fair price :slight_smile:
Last but not least, there’s reliability and support … with someone’s self-written solution you’ll always be on your own (no one to blame if things go wrong but yourself :wink: ). That’s OK if you are aware of it, capable of fixing things and willing to take that risk, but that applies only to IT-savvy power users like ourselves and is not the case for most home automation users.
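To give an idea of that interactive mode, a restore session looks roughly like this (amrecover and its commands are standard Amanda; the config name openhab-dir and the paths are just examples - adjust to whatever your setup created):

$ sudo amrecover openhab-dir
amrecover> setdate 2018-10-01      # pick which backup run to browse
amrecover> setdisk /etc/openhab2   # choose the backed-up directory (a disklist entry)
amrecover> ls                      # browse like a shell
amrecover> add services            # mark files or dirs for restore
amrecover> extract                 # restore into the current working directory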

OK @mstormi, I fully agree.
I had gone through the setup and also got it working, but then I switched the openHAB installation and lost the setup.
It’s on my bucket list to do it right. This backup.sh is just there to “partly” cover my ass.

I have a setup of Pis running:
openhab-pro - production instance of OH
openhab-vsc - shadow of production for connecting Visual Studio Code
openhab-tst - test instance of OH
openhab-utl - utility server, e.g. Grafana, InfluxDB, MQTT, ebusd, etc.
homematic-pro - RaspberryMatic server
homematic-tst - RaspberryMatic test instance

For all of these I need a backup solution and a restore from a defined installation baseline.
Much to do yet.

You can actually use one single Amanda installation as the host that backs up all of your servers, Pis and even PCs, in one go, to a single storage destination. While providing that isn’t in scope for openHABian, that’s really what Amanda was built for… essentially you just need to install the Raspbian client packages on those further boxes of yours, allow them to connect to the backup host (~backup/.amandahosts) and add the dirs/partitions to back up to your disklist file, as sketched below.
There are many options to eventually select from later on (a holding disk, compression, adjusted backup cycles etc.), but any openHABian Amanda install should already provide you with a properly working system to start with.
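As a rough sketch of those steps (the hostnames reuse your list above; comp-user-tar is one of Amanda’s stock example dumptypes - substitute your own):

# 1) on each extra box, install the client packages:
$ sudo apt install amanda-client

# 2) on each extra box, allow the backup host to connect - one line
#    in ~backup/.amandahosts (replace backuphost with your server's name):
backuphost backup amdump

# 3) on the backup host, list what to back up in the disklist of your config:
openhab-pro  /etc/openhab2       comp-user-tar
openhab-utl  /var/lib/influxdb   comp-user-tar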

That’s what happened to me. :wink:

The “problem” is that there are several tutorials and copyable examples for everything. Even the installation with openHABian is cheap and easy. So nowadays it is really easy to start a project with openHAB.

But there is no good tutorial for creating a backup. There is sudo openhab-cli backup, and there is Amanda. There is no tutorial for Amanda, and it is/was obviously not working correctly (no finger pointing!).

If everything works correctly I will start writing some kind of tutorial. But there are too many things I don’t understand yet: how to restore the backup, when/how the backup process is triggered, what the files in the bucket are [there were 15 files yesterday and 40 now, and the new files are significantly bigger], …

That is another “problem” I see: the S3 pricing model is not easy to understand at a glance. A lot more people would use Amanda (or anything else) if they could use something they know and whose pricing model they understand, e.g. Google Drive.


As we were chatting, the AWS bill came in.

And there I have a little issue with AWS: I gave them my credit card with an open-ended bill. You know, we are all self-confident, but what if I make a mistake and start consuming much more than that 1 ct? I have a home fibre-optic connection with such speed that I would not even notice things going wrong.
I will build my own NAS with RAID1 in the future and for sure go the Amanda way.
For the moment I am staying defensive.

The solution could be to have a non-redundant local server storing the Amanda output and to periodically sync the files to Google.

That’s what I was thinking; it scared me off (even though I am using it atm), and I am 100% sure it scares off other users too. The whole thing is not clear enough and in a way too devious for many users. And even IF you recognise that something went wrong, nowhere is it written how to stop Amanda from uploading data (without stopping openHAB).
There are just too many open questions.

I agree … I’m no Amaz*n advocate either; that Bezos guy for sure is rich enough. There’s even a rumor that he’s making a lot of money with AWS precisely because companies that order AWS services keep forgetting to clean up storage and server instances they no longer need … but hey, that’s what it’s like with almost any Internet service: you have to be careful and thorough in setting it up.
See my also rather defensive wording in the README.

Then again, the same applies to any other solution that uses S3, and in Amanda there are quite some options to safeguard you from this:
You effectively always define the maximum storage size to be used in every Amanda config by defining the number and size of the slots (“slots” in a generic sense, applying to tapes, directories and S3 buckets) to store to. Amanda will overwrite older slots, effectively deleting all old files there first.
That’s sort of a benefit of Amanda’s ancient roots, having originally been built to run on magnetic tapes (which are always of limited capacity and number).
You also get a daily mail report, and you can even limit network usage.

Of course the only certain thing in life is death, but this provides you with at least some implicit safety that you won’t get in a homegrown system by default.
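In amanda.conf terms that storage cap is simply the product of two standard parameters; an illustrative excerpt (the values here are made up):

tapecycle 15                  # number of slots/virtual tapes in rotation
define tapetype DIRECTORY {
    length 1024 mbytes        # capacity per slot
}
# worst-case storage used = tapecycle x length, i.e. 15 GB in this example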

Sure. S3 is just one storage option of openHABian/Amanda that I rather threw in as a bonus because there might be people who feel it to be the right choice for them, but of course the main deployment mode is to have some local storage (USB, NAS). That’s how it is meant to be used in the first place.
I myself am only backing up to my RAID5 NAS.
S3 adds the additional data safety benefit that off-site copies provide.
You might feel that to be worth it or not.

I’d recommend combining both, i.e. set up two Amanda configs to run in parallel:
the primary, full-blown daily backup of all your servers (including raw partitions) to the NAS
a minimalist one to (double-)backup essential data only to S3
The latter would be something like “/etc/openhab2 of your production server” only, so by definition it’s just a small amount of data that ever gets sent to S3. And it does not have to be run daily, or automated at all.
You can run it once a month only, or remove it from cron and start it manually when it’s convenient to sit and watch … and as it’ll just take a couple of seconds, that’s no irony but in fact a viable option [for all you disbelievers out there :wink: ]
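In cron terms, the combination could look like this sketch of /etc/cron.d/amanda (the config names openhab-dir and openhab-AWS are placeholders for whatever your setup created):

# full daily backup to the NAS, every night at 01:00
0 1 * * * backup /usr/sbin/amdump openhab-dir
# minimal off-site copy to S3, once a month on the 1st at 02:00
0 2 1 * * backup /usr/sbin/amdump openhab-AWS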

If you like it even a bit more on the paranoid side, you can also set up Amanda as a two-phase backup:
Create two configs, one to use a holding disk but no upstream, and the other to use the same holding disk directory with S3 as upstream.
Then write a script wrapper that checks how much data was generated by running the first config, and only ever calls the second config to “flush” the holding disk to S3 if you’re OK with that amount.
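A sketch of such a wrapper, assuming config names phase1/phase2 and a made-up 500 MB limit (amdump and amflush are the standard Amanda commands; everything else here is hypothetical):

#!/bin/sh
HOLDING=/holding                   # holding disk directory shared by both configs
LIMIT_KB=512000                    # refuse to upload more than ~500 MB

/usr/sbin/amdump phase1            # phase 1: dump to holding disk only, no upstream

USED_KB=$(du -s "$HOLDING" | cut -f1)
if [ "$USED_KB" -le "$LIMIT_KB" ]; then
    /usr/sbin/amflush -b phase2    # phase 2: flush the holding disk to S3 without prompting
else
    echo "holding disk at ${USED_KB} kB, above limit - not flushing to S3"
fi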

Please do, you’re welcome, I’ll be happy to add it to the docs.
But don’t you feel the README is comprehensive enough? Agreed, it doesn’t cover S3, but as far as the rest is concerned …

Yes, there is. Amanda and openHAB are independent of each other, so you just have to stop Amanda (kill the amdump process).
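In the quickest case that’s just (assuming pkill is available, as it is on Raspbian):

$ sudo pkill -f amdump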

I have read (!= fully understood) the README, but I think this is a good example of the “problem” I mentioned first (the second being the S3 pricing model). It is really easy to start creating a smart home solution (with Paper UI etc.). But creating a backup is (for most people) not. The README says nothing about the wizard.
And you simply can’t SEE the actual files; most people won’t be familiar with the tapes/slots/drive/level-1 concept. For most people a backup is “copy and maybe zip the files to a safe location”. Amanda seems(!) to work differently.

Killing the process (manually) shouldn’t be the best available option, even more so if the process was set up through a (good and simple) wizard. The same goes for restoring the files you backed up.

As much as I would love to use Amanda, I am considering using openhab-cli backup and copying everything manually to another location. I don’t have daily data that needs to be backed up, so maybe that will work for now.
Maybe I’ll try Amanda with a USB stick if I can find one I can use.
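Something as simple as this would do (the target path and host are made up):

$ sudo openhab-cli backup /srv/backups/openhab-$(date +%F).zip
$ rsync -av /srv/backups/ user@nas:/backups/openhab/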

I don’t think so because …

Huh? It even tells you what to enter in response to the wizard questions …
… and it’s displayed when you install Amanda from the openHABian menu.

OK, it’s become quite common nowadays; I know a lot of people have become too lazy to read and understand anything that is longer than a page on their smartphone… no pun intended, but it is all written and explained in there.

[quote=“Belgadon, post:56, topic:52374”]
Killing the process (manually) should not be the best option
[/quote]

That was in response to your

and there are of course more options (this one just being the quickest, for use in case of emergency).

If you meant to deactivate (rather than “stop”) scheduled backups, then you can do so by editing or removing /etc/cron.d/amanda.
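For example, commenting out every line keeps the file around for later while disabling all scheduled runs (one possible way):

$ sudo sed -i 's/^/#/' /etc/cron.d/amanda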

You are right. I just remembered that the README did not help me with the wizard in the first place, because the wizard wasn’t working. For S3 the only question mentioned is the size of the backup.

My bad, I should have checked it again before answering. Sorry!

:+1:

Continuing the discussion from Daily Backup Solution:

Hi,
I have tried to set up Amanda with Amazon S3.
A few hours later I see a new folder with 15 files in my bucket.
Now the system seems to work, but it’s not clear when the backup is scheduled, or whether there is a way to configure Amanda with Paper UI.

And do you have to configure “crontab” to schedule the “amdump” job every day?

No.

Yes, Amanda is started via cron. It’s already in there (see /etc/cron.d/amanda).
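To see the actual schedule on your box:

$ cat /etc/cron.d/amanda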

Read the documentation - it’s all in there.