openHABian hassle-free openHAB Setup

As I wrote before, I’m confused. :thinking:

Do you mean the screenshot I posted above: “Do not Upgrade, do not Reboot!” ?
So does that mean I should not follow the tutorial to boot from a USB mass storage device? (Because an upgrade would be necessary.)

Notice to all openHABian users: Please apply a fix asap!

I did this in May.

Then I would expect not to see that message anymore.
But I’m not using openHABian, so cannot tell…

In general, it should not make any difference whether you are using openHABian or Raspbian …

Thank you sihui.
I think I need to sleep on it.
Maybe tomorrow I'll just try, and if I fail, I'll start from scratch. :weary:

No need to, just make a backup, do a new install and put your config files back …
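A sketch of that backup-and-restore approach, assuming the default openHAB 2 apt/openHABian paths (`/etc/openhab2` for config, `/var/lib/openhab2` for userdata) — adjust if your install differs:

```shell
# Back up the config before reinstalling (paths assume a default
# openHABian/apt install of openHAB 2; adjust if yours differ).
sudo systemctl stop openhab2            # avoid copying files mid-write
sudo tar czf ~/openhab-config.tar.gz /etc/openhab2 /var/lib/openhab2

# ... do the fresh install, then restore and restart:
sudo tar xzf ~/openhab-config.tar.gz -C /
sudo systemctl restart openhab2
```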

Yes, but with a new installation, at least all Z-Wave devices need to be added again.

Then think about using the development version of the binding …

Sounds good. Thank you for that hint.

I just ran an update and upgrade via openhabian-config and got the following result:

$ apt --yes upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
You might want to run 'apt-get -f install' to correct these.
The following packages have unmet dependencies:
 libraspberrypi-doc : Depends: libraspberrypi0 (= 1.20170515-1) but 1.20170703-1 is installed
E: Unmet dependencies. Try using -f.
FAILED

Is this a major problem?
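For what it's worth, the error message itself points at the usual fix. A sketch (run on the Pi, needs network access; I can't verify it resolves this particular libraspberrypi version mismatch):

```shell
# apt suggests "-f install" to repair the broken dependency state
# (libraspberrypi-doc vs. libraspberrypi0 version mismatch above).
sudo apt-get update
sudo apt-get -f install        # let apt fix the unmet dependencies
sudo apt-get --yes upgrade     # then retry the upgrade
```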

I think you have the same problem I had two weeks ago. Please note openHAB is now on version 2.1; I think you have to do this “major upgrade”.

Thank you for your reply.

But I already updated to release openHAB 2.1.0-1.

If I log in again, there are 0 apt-get updates available, and at the moment everything seems to be running.

So I hope the error will disappear with the next updates.

I have the same problems with amanda as @boob and @HFM.

[07:35:16] backup@openHABianPi:/var/log/amanda/openhab-dir$ amcheck openhab-dir
Amanda Tape Server Host Check
-----------------------------
slot 1: Error checking directory /mnt/openhab-backup/slots/drive0/data/: No such file or directory
slot 2: Error checking directory /mnt/openhab-backup/slots/drive0/data/: No such file or directory
slot 3: Error checking directory /mnt/openhab-backup/slots/drive0/data/: No such file or directory
slot 4: Error checking directory /mnt/openhab-backup/slots/drive0/data/: No such file or directory
slot 5: Error checking directory /mnt/openhab-backup/slots/drive0/data/: No such file or directory
slot 6: Error checking directory /mnt/openhab-backup/slots/drive0/data/: No such file or directory
slot 7: Error checking directory /mnt/openhab-backup/slots/drive0/data/: No such file or directory
slot 8: Error checking directory /mnt/openhab-backup/slots/drive0/data/: No such file or directory
slot 9: Error checking directory /mnt/openhab-backup/slots/drive0/data/: No such file or directory
slot 10: Error checking directory /mnt/openhab-backup/slots/drive0/data/: No such file or directory
slot 11: Error checking directory /mnt/openhab-backup/slots/drive0/data/: No such file or directory
slot 12: Error checking directory /mnt/openhab-backup/slots/drive0/data/: No such file or directory
slot 13: Error checking directory /mnt/openhab-backup/slots/drive0/data/: No such file or directory
slot 14: Error checking directory /mnt/openhab-backup/slots/drive0/data/: No such file or directory
slot 15: Error checking directory /mnt/openhab-backup/slots/drive0/data/: No such file or directory
 volume ''
Taper scan algorithm did not find an acceptable volume.
    (expecting a new volume)
ERROR: No acceptable volumes found
NOTE: host info dir /var/lib/amanda/openhab-dir/curinfo/openHABianPi does not exist
NOTE: it will be created on the next run.
NOTE: index dir /var/lib/amanda/openhab-dir/index/openHABianPi does not exist
NOTE: it will be created on the next run.
Server check took 2.229 seconds

Amanda Backup Client Hosts Check
--------------------------------
Client check: 1 host checked in 4.535 seconds.  0 problems found.

(brought to you by Amanda 3.3.6)

My amanda.conf file (which I haven’t touched):

org "openHABian openhab-dir"                            # Organization name for reports
mailto "openhabian@openhabianpi"                                                # Email address to receive reports
netusage 10000 Kbps                                     # Bandwidth limit, 10M
dumpcycle 2 weeks                                       # Backup cycle is 14 days
runspercycle 7                                          # Run 7 times every 14 days
tapecycle 15 tapes                                      # Dump to this number of different tapes during the cycle
runtapes 1
tpchanger "chg-disk:/mnt/openhab-backup/slots"    # The tape-changer glue script
autolabel "openHABian-openhab-dir-%%%" empty
changerfile "/etc/amanda/openhab-dir/storagestate"                      # The tape-changer or SD- or disk slot or S3 state file
tapelist "/etc/amanda/openhab-dir/tapelist"                             # The tapelist file
tapetype DIRECTORY
infofile "/var/lib/amanda/openhab-dir/curinfo"          # Database directory
logdir "/var/log/amanda/openhab-dir"                    # Log directory
indexdir "/var/lib/amanda/openhab-dir/index"            # Index directory
define tapetype SD {
    comment "SD card size"
    length 65536 mbytes                                 # actual Bucket size 5GB (Amazon default for free S3)
}
define tapetype DIRECTORY {                             # Define our tape behaviour
        length 65536 mbytes                             # Every tape is 100GB in size
}
define tapetype AWS {
    comment "S3 Bucket"
    length 65536 mbytes                                 # actual Bucket size 5GB (Amazon default for free S3)
}

amrecover_changer "changer"                             # Changer for amrecover

# don't use any holding disk for the time being
#holdingdisk hd {
#    directory "/holdingdisk/openhab-dir"
#    use 1000 Mb
#}

define dumptype global {                                # The global dump definition
        maxdumps 2                                      # maximum number of backups run in parallel
        holdingdisk no                                  # Dump to temp disk (holdingdisk) before backup to tape
        index yes                                       # Generate index. For restoration usage
}
define dumptype root-tar {                              # How to dump root's directory
        global                                          # Include global (as above)
        program "GNUTAR"                                # Program name for compress
        estimate server                                 # Estimate the backup size before dump
        comment "root partitions dumped with tar"
        compress none                                   # No compression
        index                                           # Index this dump
        priority low                                    # Priority level
}
define dumptype user-tar {                              # How to dump user's directory
        root-tar                                        # Include root-tar (as above)
        comment "user partitions dumped with tar"
        priority medium                                 # Priority level
}
define dumptype comp-user-tar {                         # How to dump & compress user's directory
        user-tar                                        # Include user-tar (as above)
        compress client fast                            # Compress in client side with less CPU (fast)
}
define application-tool app_amraw {                     # how to dump the SD card's raw device /dev/mmcblk0
        plugin "amraw"                                  # uses 'dd'
}
define dumptype amraw {
        global
        program "APPLICATION"
        application "app_amraw"
}
# vim: filetype=conf

Also, running sudo ln -s . drive0 from within /mnt/openhab-backup as suggested just returns
ln: failed to create symbolic link ‘drive0’: Operation not supported

Has anyone found a solution for this?

Edit: in amanda.conf there is a reference to /etc/amanda/openhab-dir/storagestate which contains:

$STATE = {
           'drives' => {
                         '/mnt/openhab-backup/slots/drive0' => {}
                       }
         };

I’m surprised to keep seeing this ‘drive0’ all the time, as it’s definitely not part of the config I created. That’s why I assumed you must have put it into the config yourself, but since it has now happened to a number of people, I’m obviously wrong there. Sorry @boob for blaming you in the first place.

I will see to reworking the openHABian config, but first I need to know whether it would be sufficient to create the link or whether I need to introduce that ‘drive0’ part into the config. So let’s try to get your setup working first.
(On my box it’s working without it, so it’s difficult for me to simply reproduce the issue.)

Note you need to run ln -s . drive0 in the /mnt/openhab-backup/slots directory (note the …/slots), or use absolute paths: ‘ln -s /mnt/openhab-backup/slots /mnt/openhab-backup/slots/drive0’.
Although you ran it in the wrong place, it’s still strange that you cannot create that link. Does any file or directory called drive0 already exist in the directory where you tried to create the link? Get me the output of ‘ls -l’ in that dir, please.

There wasn’t any drive0 folder, so I created it and tried, but no luck:

[18:19:25] openhabian@openHABianPi:/mnt/openhab-backup/slots$ mkdir drive0
[18:19:42] openhabian@openHABianPi:/mnt/openhab-backup/slots$ ls -l
total 0
drwxrwxrwx 2 openhabian openhabian 0 Jul 25  2017 drive0
drwxrwxrwx 2 openhabian openhabian 0 Jul 23 18:39 slot1
drwxrwxrwx 2 openhabian openhabian 0 Jul 25 08:56 slot10
drwxrwxrwx 2 openhabian openhabian 0 Jul 25 08:56 slot11
drwxrwxrwx 2 openhabian openhabian 0 Jul 25 08:56 slot12
drwxrwxrwx 2 openhabian openhabian 0 Jul 25 08:56 slot13
drwxrwxrwx 2 openhabian openhabian 0 Jul 25 08:56 slot14
drwxrwxrwx 2 openhabian openhabian 0 Jul 25 08:56 slot15
drwxrwxrwx 2 openhabian openhabian 0 Jul 25 07:54 slot2
drwxrwxrwx 2 openhabian openhabian 0 Jul 25 07:54 slot3
drwxrwxrwx 2 openhabian openhabian 0 Jul 25 07:54 slot4
drwxrwxrwx 2 openhabian openhabian 0 Jul 25 07:54 slot5
drwxrwxrwx 2 openhabian openhabian 0 Jul 25 07:54 slot6
drwxrwxrwx 2 openhabian openhabian 0 Jul 25 07:54 slot7
drwxrwxrwx 2 openhabian openhabian 0 Jul 25 08:56 slot8
drwxrwxrwx 2 openhabian openhabian 0 Jul 25 08:56 slot9
[18:19:47] openhabian@openHABianPi:/mnt/openhab-backup/slots$ ln -s . drive0
ln: failed to create symbolic link ‘drive0/.’: File exists

You cannot create the dir first and then make the link.
Remove the dir (rmdir drive0) and ONLY create the link.
While you’re there, also run ln -s /mnt/openhab-backup/slots drive1 and insert a line taper-parallel-write 2 after the tpchanger line in amanda.conf.
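The order matters because ln -s refuses to overwrite an existing directory entry. A demo of the sequence in a scratch directory (the real commands would run in /mnt/openhab-backup/slots):

```shell
# Scratch-dir demo: creating the dir first makes "ln -s" fail,
# removing it first lets the symlink be created.
tmp=$(mktemp -d) && cd "$tmp"
mkdir drive0
ln -s . drive0 || echo "fails: drive0 already exists as a directory"
rmdir drive0
ln -s . drive0                 # now drive0 -> . (the slots dir itself)
readlink drive0                # prints "."
```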

Then I just get:

[18:41:21] openhabian@openHABianPi:/mnt/openhab-backup/slots$ rmdir drive0
[18:41:37] openhabian@openHABianPi:/mnt/openhab-backup/slots$ ln -s . drive0
ln: failed to create symbolic link ‘drive0’: Operation not supported

Tried with sudo also, but no difference.

Edit: the openhab-backup folder is a CIFS share. Does that prevent making symlinks?

Possibly. Try a hard link (no -s).
As you’re the third to come up with this strangeness this week: why the heck do people keep using CIFS to mount a share from a UNIX server (your NAS) on a UNIX client (your Pi)? Use NFS instead.
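If you're stuck with CIFS, one thing worth trying (an assumption on my part, not verified on openHABian): mount.cifs has an mfsymlinks option that emulates symlinks on shares that don't support them, given a reasonably recent kernel and cifs-utils. The share path and username below are placeholders:

```shell
# Check how the share is mounted, then remount with symlink emulation.
# "//windows-host/backup" and "youruser" are placeholders for your setup.
mount -t cifs                              # confirm it really is CIFS
sudo umount /mnt/openhab-backup
sudo mount -t cifs //windows-host/backup /mnt/openhab-backup \
     -o username=youruser,mfsymlinks
```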

It’s shared from my Windows machine, so not much choice for me…

[18:54:11] openhabian@openHABianPi:/mnt/openhab-backup/slots$ ln /mnt/openhab-backup/slots drive0
ln: ‘/mnt/openhab-backup/slots’: hard link not allowed for directory
[18:56:05] openhabian@openHABianPi:/mnt/openhab-backup/slots$ sudo ln -d /mnt/openhab-backup/slots drive0
ln: failed to create hard link ‘drive0’ => ‘/mnt/openhab-backup/slots’: Operation not permitted

That does not make much sense, as you won’t be running your Windoze box 24x7, always ready to take nightly Amanda backups, will you? Better put in a USB stick. Or try the AWS S3 variant.


Will try that and report back.

Have you tried ln -s /mnt/openhab-backup/slots drive0? Maybe CIFS can’t cope with the “.” special name for the current dir.