Help with restoring a partition (Amanda) [SOLVED]


Hi everyone, I'm having some major trouble.

I noticed I had a lot of updates (44) when I logged in over SSH. I do keep my system up to date, but these ones had piled up.

So I used openhabian-config to update, and now my system is pretty much entirely unresponsive: UIs starting and closing constantly, SSH refusing connections.

What should I do?

Restore from the backup I made before updating?
Start fresh?
Try to fix the current setup?

Screenshot of the SSH connection during the update:

I had some similar issues when updating from 2.4. I resolved it by downgrading back to 2.4 stable and haven't had time to troubleshoot. Hopefully I'll get a chance this weekend to try again and get it working.

Hi again @H102

I didn't upgrade OH itself, just all the installed packages.

Ah, then I would restore from the backup.

Can someone give me some pointers on restoring a partition? This is the first time I have needed to do this, and it's taken days to even get this far.

I'm receiving an error while trying to use amfetchdump:

[20:24:26] backup@openHABianPi:/media/OHBackup/slots$ amfetchdump -p openhab-dir openHABianpi /dev/mmcblk0 > /media/OHBackup/temp/openhabianpi-image
WARNING: Fetch first dump only because of -p argument
1 volume(s) needed for restoration
The following volumes are needed: openHABian-openhab-dir-009
Press enter when ready

Volume '' not found

Insert volume labeled '' in chg-disk:/media/OHBackup/slots
and press enter, or ^D to abort.

As the docs say, open another terminal and use amtape to find the slot containing the volume.
Use the show command to find the volume amfetchdump asked for, then use the slot command to select it, e.g.

[22:46:31] backup@openhabianpi:/volatile/backup/slots$ amtape openhab-dir show
amtape: scanning all 15 slots in changer:
slot   3: date 20190209010001 label openhab-dir-018
slot   4: date 20190210010002 label openhab-dir-019
slot   5: date 20190213010002 label openhab-dir-005
slot   6: date 20190211010002 label openhab-dir-006
slot   7: date 20190212010002 label openhab-dir-007
slot   8: date 20190214010001 label openhab-dir-008
slot   9: date 20190215010002 label openhab-dir-009
slot  10: date 20190216010002 label openhab-dir-010
slot  11: date 20190217010002 label openhab-dir-011
slot  12: date 20190218010002 label openhab-dir-012
slot  13: date 20190219010002 label openhab-dir-013
slot  14: date 20190220010002 label openhab-dir-014
slot  15: date 20190221010002 label openhab-dir-015
slot   1: date 20190222010002 label openhab-dir-016
slot   2: date 20190223010001 label openhab-dir-017
[22:46:42] backup@openhabianpi:/volatile/backup/slots$ amtape openhab-dir slot 3
slot   3: time 20190209010001 label openhab-dir-018
changed to slot 3

Hi @mstormi, thanks for the reply. I'm really stuck here and it's driving me crazy.

I have already tried the second-terminal approach, but trying again gives the same results:

[20:28:50] backup@openHABianPi:/home/openhabian$ amtape openhab-dir show
amtape: scanning all 15 slots in changer:
slot   9: date 20190211010002 label openHABian-openhab-dir-009
slot  10: date 20190212010002 label openHABian-openhab-dir-010
slot  11: date 20190212010002 label openHABian-openhab-dir-011
slot  12: date 20190213010002 label openHABian-openhab-dir-012
slot  13: date 20190213010002 label openHABian-openhab-dir-013
slot  14: date 20190214010002 label openHABian-openhab-dir-014
slot  15: date 20190214010002 label openHABian-openhab-dir-015
slot   1: date 20190215010002 label openHABian-openhab-dir-001
slot   2: date 20190215010002 label openHABian-openhab-dir-002
slot   3: date 20190219010002 label openHABian-openhab-dir-003
slot   4: date 20190219010002 label openHABian-openhab-dir-004
slot   5: date 20190220010002 label openHABian-openhab-dir-005
slot   6: date 20190220010002 label openHABian-openhab-dir-006
slot   7: date 20190221010002 label openHABian-openhab-dir-007
slot   8: date 20190221010002 label openHABian-openhab-dir-008

openHABian-openhab-dir-009 is in slot 9:

[21:55:56] backup@openHABianPi:/media/OHBackup/temp$ amtape openhab-dir slot 9
slot   9: time 20190211010002 label openHABian-openhab-dir-009
changed to slot 9

and then pressing Enter in the first terminal brings up the same error:

[21:55:15] backup@openHABianPi:/home/openhabian$ amfetchdump -p openhab-dir openHABianpi /dev/mmcblk0 > /media/OHBackup/temp/openhabianpi-image
WARNING: Fetch first dump only because of -p argument
1 volume(s) needed for restoration
The following volumes are needed: openHABian-openhab-dir-009
Press enter when ready

ERROR: Volume '' not found

Don't know that error, sorry.
You can try the -a option to amfetchdump.
Or run it without -p, but in that case make sure you're in a directory with enough free space.
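Since a full run without -p writes the whole image into the current directory, it's worth checking free space there first. A general shell check (not Amanda-specific; the 16 GB threshold is an assumed SD-card size, substitute your own):

```shell
# Free bytes on the filesystem backing the current directory.
avail=$(df --output=avail -B1 . | tail -n 1)
echo "bytes free: $avail"

# Optional guard: warn if fewer than ~16 GB are available
# (assumed card capacity; adjust to match your SD card).
required=$((16 * 1024 * 1024 * 1024))
if [ "$avail" -lt "$required" ]; then
    echo "not enough space for a full image" >&2
fi
```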

There are extensive log files in /var/log/amanda/; see if you find any hint there.
Read the man page (man amfetchdump), and if that still doesn't help you could look for this
in the Amanda resources on the Internet, such as

Both return the same error: volume '' not found.

Two more ideas:

  1. The error could mean that the partition you're trying to restore isn't on that volume.
    Try restoring an older version (see the man page; I think you need to append the date to the amfetchdump command). You can use amadmin openhab-dir find to list all dumps.
  2. Manually search the files in your storage directories. See the last paragraph of the Amanda README for how to decode them.
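For the manual-search route, the useful detail is that each dump file on a vtape slot typically starts with a 32 KiB Amanda header naming the host, disk, and dump date; stripping that header leaves the raw (often compressed) dump. A minimal simulation with a synthetic file, since we can't assume a real slot here (the filename and header text are made up for illustration):

```shell
# Simulate an Amanda vtape dump file: a 32 KiB text header followed by
# the raw dump data. (A real header also suggests the exact restore
# command to use, so it is worth reading before decoding anything.)
printf 'AMANDA: FILE 20190219010002 openHABianPi /dev/mmcblk0' > header.bin
truncate -s 32768 header.bin                 # pad header to 32 KiB with NULs
printf 'raw-dump-data' > payload.bin
cat header.bin payload.bin > 00001.example.dump

# Inspect the start of the header to identify host, disk, and date:
head -c 54 00001.example.dump

# Strip the 32 KiB header to recover the raw dump:
dd if=00001.example.dump of=restored.img bs=32k skip=1 2>/dev/null
cat restored.img    # -> raw-dump-data
```

The same `dd ... bs=32k skip=1` step applies to a real slot file once you have identified the right one from its header.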

Using this command seems to have done more. I'm not sure what, but it's doing something; it looks like it needed the dump date appended:

amfetchdump -p openhab-dir openHABianpi /dev/mmcblk0 20190219010002 > /media/OHBackup/temp/openhabianpi-image


It seems to have restored quite an old version of the system, which is strange, as the date on the file was only a few days ago.

I need to put some work in, but the system might be back online now.

I had the same thing happen to me last week: I updated Linux (not openHAB) manually with apt on my Ubuntu minimal Odroid C2 setup, and afterwards I could no longer use SSH and openHAB failed to start. It was quick to fix for me as I had a good backup for this exact reason; see here…

By default, Amanda should also be backing up /etc/openhab2 and /var/lib/openhab2; you might be able to restore newer versions of those and extract the files you need.

My setup seems to be running again now, and I have learned how to restore from Amanda. I'm just having some persistence errors; maybe you can help:

2019-02-28 20:01:08.625 [WARN ] [pse.smarthome.core.items.GenericItem] - failed notifying listener 'org.eclipse.smarthome.core.persistence.internal.PersistenceManagerImpl@197838b' about state update of item PLUG9_Current_MBED_EBlanket: null
java.lang.IndexOutOfBoundsException: null
	at java.nio.Buffer.checkIndex( ~[?:?]
	at java.nio.HeapByteBuffer.getLong( ~[?:?]
	at org.mapdb.DataInput2.readLong( ~[?:?]
	at org.openhab.persistence.mapdb.internal.MapDBitemSerializer.deserialize( ~[?:?]
	at org.openhab.persistence.mapdb.internal.MapDBitemSerializer.deserialize( ~[?:?]
	at org.mapdb.BTreeMap$NodeSerializer.deserialize( ~[?:?]
	at org.mapdb.BTreeMap$NodeSerializer.deserialize( ~[?:?]
	at org.mapdb.Store.deserialize( ~[?:?]
	at org.mapdb.StoreDirect.get2( ~[?:?]
	at org.mapdb.StoreWAL.get2( ~[?:?]
	at org.mapdb.StoreWAL.get( ~[?:?]
	at org.mapdb.Caches$HashTable.get( ~[?:?]
	at org.mapdb.EngineWrapper.get( ~[?:?]
	at org.mapdb.BTreeMap.put2( ~[?:?]
	at org.mapdb.BTreeMap.put( ~[?:?]
	at ~[?:?]
	at ~[?:?]
	at org.eclipse.smarthome.core.persistence.internal.PersistenceManagerImpl.handleStateEvent( ~[?:?]
	at org.eclipse.smarthome.core.persistence.internal.PersistenceManagerImpl.stateChanged( ~[?:?]
	at org.eclipse.smarthome.core.items.GenericItem$ [102:org.eclipse.smarthome.core:0.10.0.oh240]
	at java.util.concurrent.ThreadPoolExecutor.runWorker( [?:?]
	at java.util.concurrent.ThreadPoolExecutor$ [?:?]
	at [?:?]

I'm getting it for quite a few items, but only since restoring the partition.

That has nothing to do with backups or Amanda, so please move your question to a new thread.


OK, that's fine.

It did start after the restore, so I thought it was best to post it under this thread.

Thanks for the help with restoring the partition!