SD image via Amanda's backup


Hello to All,

Since I have understood that SD cards can get corrupted, I would like to create an image of my SD card based on the latest Amanda backup.

  • Platform information:
    • Hardware: Raspberry Pi 4 Model B Rev 1.2, 4GB RAM,
    • OS: Linux 5.4.51-v7l+
    • openHAB version 2.5.8-1

This is the full picture of the system’s storage:

[20:39:02] backup@openhab:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 1 28.9G 0 disk /storage/usb-drive *(USB FOR THE BACKUP)
sdb 8:16 1 28.9G 0 disk *(USB FOR THE IMAGE)
mmcblk0 179:0 0 29.7G 0 disk *(SD CARD)
├─mmcblk0p1 179:1 0 256M 0 part /boot
└─mmcblk0p2 179:2 0 29.5G 0 part /
zram0 254:0 0 600M 0 disk [SWAP]
zram1 254:1 0 500M 0 disk /opt/zram/zram1
zram2 254:2 0 500M 0 disk /opt/zram/zram2

Via openhabian-config, the Amanda server was installed and is up and running. Every day at 1:00 it refreshes the backup of the following folders (onto a 32 GB USB drive):

  • /boot
  • /dev/mmcblk0
  • /etc
  • /opt/zram/persistence.bind
  • /var/lib/openhab2

Since the SD card can get corrupted, it would be useful to create an image of it out of the latest Amanda backup.

If I understood correctly, the entire SD card is /dev/mmcblk0, so an image of it contains an exact copy of the running system. In case of SD corruption, this image can be written to a new SD card with balenaEtcher and the system can restart as if nothing happened.

Let me share some details of the specific configuration:

  • openhab is the name of the host
  • /storage/usb-b/ is the folder where to store the new image
  • openhab-image is the file name of the image

Running the following command produces an error at the end:

amfetchdump -p openhab-dir openhab /dev/mmcblk0 20200831 > /storage/usb-b/openhab-image

I get the following outcome:

[20:43:13] backup@openhab:~$ amfetchdump -p openhab-dir openhab /dev/mmcblk0 20200831 > /storage/usb-b/openhab-image
Warning: no log files found for tape openHABian-openhab-dir-013 written 2020-08-30 01:00:02
Warning: no log files found for tape openHABian-openhab-dir-012 written 2020-08-30 01:00:02
Warning: no log files found for tape openHABian-openhab-dir-011 written 2020-08-30 01:00:02
Warning: no log files found for tape openHABian-openhab-dir-010 written 2020-08-30 01:00:02
Warning: no log files found for tape openHABian-openhab-dir-001 written 2020-08-25 22:04:56
5 volume(s) needed for restoration
The following volumes are needed: openHABian-openhab-dir-014 openHABian-openhab-dir-015 openHABian-openhab-dir-016 openHABian-openhab-dir-017 openHABian-openhab-dir-018

Press enter when ready

Reading label ‘openHABian-openhab-dir-014’ filenum 5
split dumpfile: date 20200831010003 host openhab disk /dev/mmcblk0 part 1/UNKNOWN lev 0 comp N program APPLICATION
1687456 kb

Reading label ‘openHABian-openhab-dir-015’ filenum 1
split dumpfile: date 20200831010003 host openhab disk /dev/mmcblk0 part 2/UNKNOWN lev 0 comp .gz program APPLICATION
3713664 kb

Reading label ‘openHABian-openhab-dir-016’ filenum 1
split dumpfile: date 20200831010003 host openhab disk /dev/mmcblk0 part 3/UNKNOWN lev 0 comp .gz program APPLICATION
5810048 kb

Reading label ‘openHABian-openhab-dir-017’ filenum 1
split dumpfile: date 20200831010003 host openhab disk /dev/mmcblk0 part 4/UNKNOWN lev 0 comp .gz program APPLICATION
7826368 kb

Reading label ‘openHABian-openhab-dir-018’ filenum 1
split dumpfile: date 20200831010003 host openhab disk /dev/mmcblk0 part 5/UNKNOWN lev 0 comp .gz program APPLICATION
8741600 kb

filter stderr:
filter stderr: gzip: stdin: unexpected end of file
8741600 kb
Error writing to fd 7: No space left on device

I assume this error happens because the USB drive is too small.

Is there a way to set Amanda to write only the non-null information?
Or is there any means of limiting the size of the final image?
Or would it be possible to resize the main partition of /dev/mmcblk0 without losing information?

In the end, the amount of information actually stored on /dev/mmcblk0 is much less than what Amanda is backing up, so the majority of the backup is wasted.

Is there a way to extract the /dev/mmcblk0 information from the Amanda backup using a Windows-based computer, without a Linux installation?

Yes. Get a larger destination disk. If you have a NAS, just mount it.

No, as it does not know how to interpret the contents of the raw device.

Well, in principle yes … but that’s quite a hack. Some people have done it, but shrinking an ext4 filesystem is not really a supported action. If you really want to, you need to g**gle for it.
The safer way is to get an as-small-as-appropriate SD card (8 or 16 GB) and reinstall on there.

To restore, you could try writing the restored stream directly to the destination raw device, such as:

amfetchdump -p openhab-dir openhab /dev/mmcblk0 20200831 | dd bs=1M of=/dev/sdb

Untested, so beware.
If the SD cards are not equal in size, the filesystem might be corrupted, so run fsck afterwards if needed.

You might be interested in the forthcoming new auto-backup feature of openHABian as well.

EDIT: just merged that into the master branch.


Thanks for the advice. I’ll try the new auto-backup feature.

However, I am still not comfortable with these points:

  1. The size of the SD card is 32 GB, while the space actually used on it is around 10 GB.

  2. The USB drive is also 32 GB,

  3. but the USB drive is physically smaller than the SD card (even though both are nominally declared as 32 GB).

Because of this small gap, it is not possible to fit an image of the 10 GB onto the 32 GB drive! Not easy to accept.

In addition, the auto-backup feature explanation currently states: “The second card needs at least twice the size of your internal card”. Even though I understand the technical reason, I will try to find a different solution: I would like to use a backup medium of the same (nominal) size, since the main SD card is almost empty.

I did try the dd command (directly with if=/dev/mmcblk0), but it ended with the same result: not enough space.

As a next step, I launched the command:

sudo dd if=/dev/mmcblk0 | gzip > /storage/usb-b/openhabian-backup.img.gz

with this outcome: gzip: stdout: File too large
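The “File too large” error points at the destination filesystem rather than at gzip itself: FAT32, the usual factory format of USB sticks, caps single files just under 4 GiB. A possible workaround (only a sketch, reusing the paths above and untested on this setup) is to split the compressed stream into FAT32-safe chunks:

```shell
# Split the compressed image into parts below the FAT32 4 GiB per-file limit
sudo dd if=/dev/mmcblk0 bs=1M | gzip | split -b 3900m - /storage/usb-b/openhabian-backup.img.gz.part-

# To restore, concatenate the parts back into one stream and decompress on the fly
# (/dev/sdX is a placeholder for the target card's device name)
cat /storage/usb-b/openhabian-backup.img.gz.part-* | gunzip | sudo dd of=/dev/sdX bs=1M
```

The `split` suffixes (`…part-aa`, `…part-ab`, …) sort alphabetically, so the shell glob feeds them back in the right order.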

So I need to dig more into it. Searching the web, I found several interesting links that explain how to make an image even with differing memory sizes:

Why don’t you just get a 16GB card then? Just install openHABian there and use your 32GB one as the backup.

There is a tuned solution that is up and running (after several actions). I would like to make a copy in order to avoid restarting from scratch.

That’s inevitable when the destination card is smaller than the original one.
But as the “tail” of the source card is usually (mostly) NULL, it usually works out.
You can also try using the count= and bs= parameters of dd to match your destination card (okay, you will probably still get an error about a pipe being closed)…
Try with a 16GB card, fsck it afterwards and try booting on your spare RPi.
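One way to pick a matching count= (a sketch with a hypothetical card size; read the real byte count from `fdisk -l` on your destination card) is to derive it from the destination size:

```shell
# Hypothetical size in bytes of the destination card, as reported by `fdisk -l /dev/sdX`
CARD_BYTES=15931539456              # a nominal "16 GB" card
BS=$((1024 * 1024))                 # 1 MiB blocks: bs=1M is far faster than bs=512
COUNT=$((CARD_BYTES / BS))          # whole 1 MiB blocks that fit on the card
echo "use: dd bs=1M count=$COUNT"
```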

You can still enable auto-backup through openhabian-config menu 53.

What needs to be tuned about an openHABian installation? Let me know.
Asking because quite some of the stuff people tune isn’t needed or is even counterproductive; on the other hand, if it’s something reasonable, we could put it into openHABian for everyone to benefit.

The biggest problem is remembering all the tweaking actions done during the installation process. However, let me try to summarize the steps (at least those I still remember):

  1. via balenaEtcher, prepare the SD card
  2. via openhabian-config, disable the Bluetooth
  3. remove the RaspBee, restart, and reinstall the RaspBee card
  4. via openhabian-config, disable the serial port
  5. install deCONZ
  6. change the deCONZ service port to 8090
  7. mount a new USB device
  8. via openhabian-config, install Amanda
  9. via deCONZ/Phoscon, load all my sensors and actuators
  10. via openHAB, add and configure all my sensors and devices
  11. personalize the openHAB interfaces

The most time-consuming and annoying steps are 9, 10 and 11.

My comments on your nice remark about integrating those steps into a standard solution:

  • step 1) is a must, but there is nothing more to do there.

  • steps 2), 3), 4), 5) and 6) are necessary because of the RaspBee hardware; they are not a general need for all users. I am still not sure whether 4) is really necessary. I would like to exploit the integrated Bluetooth, but I am still not able to…topic for another discussion.

  • steps 7) and 8) are already offered in openhabian-config.

  • steps 9), 10) and 11) are linked to my particular solution/needs (not a general need for all users).

In conclusion, based on those 11 steps, I do not see functions that should be added to openhabian-config.

To continue on this topic, I followed this advice:

This is the command executed (limiting the number of sectors, with the sector size taken from the fdisk -l output):

amfetchdump -p openhab-dir openhab /dev/mmcblk0 20200901 | dd of=/storage/usb-b/openhabian-image bs=512 count=15000000

The outcome is still an error, after writing 4.3 GB on an 8 GB SD card:

[22:20:32] backup@openhab:~$ amfetchdump -p openhab-dir openhab /dev/mmcblk0 20200901 | dd of=/storage/usb-b/openhabian-image bs=512 count=15000000
Warning: no log files found for tape openHABian-openhab-dir-001 written 2020-08-25 22:04:56
7 volume(s) needed for restoration
The following volumes are needed: openHABian-openhab-dir-019 openHABian-openhab-dir-020 openHABian-openhab-dir-006 openHABian-openhab-dir-007 openHABian-openhab-dir-008 openHABian-openhab-dir-009 openHABian-openhab-dir-010

Press enter when ready


Reading label 'openHABian-openhab-dir-019' filenum 5
split dumpfile: date 20200901010002 host openhab disk /dev/mmcblk0 part 1/UNKNOWN lev 0 comp N program APPLICATION
1716992 kb

Reading label 'openHABian-openhab-dir-020' filenum 1
split dumpfile: date 20200901010002 host openhab disk /dev/mmcblk0 part 2/UNKNOWN lev 0 comp .gz program APPLICATION
1839136 kb    dd: error writing '/storage/usb-b/openhabian-image': File too large
8388608+0 records in
8388607+0 records out
4294967295 bytes (4.3 GB, 4.0 GiB) copied, 712.855 s, 6.0 MB/s

filter stderr:
filter stderr: gzip: stdin: unexpected end of file
1873856 kb
Error writing to fd 7: Broken pipe

I see 2 unclear behaviours:

  1. Why did it generate a “File too large” error at 4.3 GB, if the SD card size is 8 GB?
  2. Why is the Amanda backup growing? After a few days it requires 7 tapes, while it started with 4.

Any advice?

I don’t know. Is it a FAT filesystem? 4 GB is the maximum file size there.
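A quick way to check that (using the destination path from earlier in the thread):

```shell
# "vfat" in the Type column means a 4 GiB per-file limit on this mount
df -T /storage/usb-b
```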

I actually meant to pipe the amfetchdump output to dd of=/dev/sdX so it skips creating a file as an intermediate result (hence no problems with that) and moves right on to the restore stage.

That’s normal. It starts with a level 0 dump. On the next runs it adds one or more level 1 dumps while keeping the L0. Depending on how far you want to be able to go back in time, you usually need 2-3 times the storage space of what you want to back up.
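As a back-of-envelope illustration of that rule of thumb (all numbers here are assumptions, not values from this Amanda setup):

```shell
L0_GB=10      # one full (level 0) dump of the actually used data
L1_GB=1       # a typical daily incremental (level 1) dump
DAYS=14       # how far back you want to be able to restore
# Two full dumps coexist while the next cycle is being written, plus the incrementals
NEED_GB=$((L0_GB * 2 + L1_GB * DAYS))
echo "plan for roughly ${NEED_GB} GB of backup storage"
```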

Good remark!

That was my problem in getting the dd command working with an upper limit.

While I was working on my personal solution, I ran into a stability issue. I’ll come back and share my solution as soon as the new challenge is fixed and my idea is tested.