Corrupt filesystems every 2-3 months?

Because it’s the only improvement they know. Yes, SSDs are better than SD cards because they have a buffer, but they are still flash memory, and they come with further drawbacks of their own (knowledge needed to configure them, cost, proper setup to boot from them, etc.). And yes, wear levelling doesn’t always apply. If it truly did, we wouldn’t have that many people running into problems.


SSDs are a lot faster than SD cards, even when connected to the RPi.
Of course there are drawbacks (there always are, no matter what you choose). My main reason for recommending an SSD rather than an SD card is that the SSD is way faster, and the chance of a drive crash is lower than with an SD card (can’t say by how much, though).

Even though my SSD is connected through the USB port of the RPi, it’s still significantly faster than the SD card. Some weeks ago I set up a new RPi (3B+) with an SD card for some testing. My main RPi 3B is running from an SSD and is quite busy. The new RPi 3B+ got an absolutely minimal install with only one binding.
My main, quite busy system finishes booting more than a minute faster than the RPi 3B+ with its SD card and minimal install.

A fresh install of openHAB 2.4 (hassle-free openHABian) takes approximately 15 minutes using an SSD, including creating the image copy on Windows 10 using Win32DiskImager.
Using an SD card it takes what feels like forever. My guess is about 1 hour 15 minutes, including the image copy.

I personally would never fall back to SD cards again. I’d rather pay a bit more and then have more time to play some more :slightly_smiling_face:


So in openhabian-config, the option to move root to a USB device only serves better speed and doesn’t have much to do with the reliability of the system?

Mostly yes. Sure, it distributes write load across multiple media, so it helps somewhat, but it is clearly not the aspired solution to provide high reliability.


Hello, I am new here and I would really like your opinion about something.
Do you have Discord?
Mail?
Whatever you prefer, I just have a few questions.
Thank you so much for all your help on the forum !

I’m not doing 1:1 support or discussion.
This is a public forum so post your question in a new thread.
And please follow the rules.


Okay, I am sorry, I didn’t know that.

I’ve been using a powerbank for almost a year. I noticed that a couple of LEDs no longer work, but apart from that it continues to operate reliably. Nevertheless, I’ve recently found a blog entry on the RAVPower website stating that pass-through charging is not meant to be used continuously.


In any case, even a UPS will eventually require a replacement of its internal battery after some time. When my powerbank finally fails, I will consider a UPS, whose clear advantage is that it can also keep my switch up and running.

I’ve offloaded /var/log and /var/lib/persistence to a USB stick. While the log files are eventually split by logrotate, the same does not apply to the rrd4j persistence files, which by design do not grow in size and are continuously written to. Would it be worth periodically copying the persistence files in order to let wear leveling work?
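For reference, an offload like this is commonly done with bind mounts in /etc/fstab; the device name, filesystem type, and mount points below are assumptions for illustration, not a description of my actual setup:

```
# Example /etc/fstab entries -- device and paths are placeholders
/dev/sda1         /mnt/usb              ext4  defaults,noatime  0  2
/mnt/usb/log      /var/log              none  bind              0  0
/mnt/usb/persist  /var/lib/persistence  none  bind              0  0
```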

You don’t need to do this because of the way that wear leveling works. The storage is broken up into sectors. Each sector can hold part or all of more than one file. When a file that is stored in a sector changes, the entire sector is copied to a new sector with the changes applied. The old sector is then added to the pool of available free sectors.

Every time rrd4j writes to the DB, the file gets moved around the SD card anyway. There is no need to copy files around for wear leveling.

Also, the above is why you should never pull the power on an RPi if you can help it. If it happens to be writing a sector that also contains, for example, part of the kernel, the next time you start up it won’t be able to boot because part of the kernel was lost.


@mstormi @rlkoshak

Is there a way to turn off all logging, i.e. have nothing written to the SD card? I have a couple of products in my house, like the ISY99, that have an SD card. They are several (>3) years old and are doing just fine. I don’t really need logging, nor persistence. I don’t mind backing up my Pi when I make changes, but I don’t want to spend the time setting up and maintaining storage on a USB drive or a NAS and managing the symlinks, etc.


I think you can disable logging for OH in the config file. But that only addresses OH. All kinds of stuff logs too, from sshd to the kernel. It will be way less work in the long run to configure the logging to go to a ramdisk than to figure out how to stop everything from logging.
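As a sketch of what that ramdisk approach looks like (using the same mount options as the Ansible role posted further down in this thread), a single /etc/fstab entry moves /var/log into RAM; note that the logs then vanish on every reboot:

```
# /etc/fstab -- logs live in RAM; the size is a judgment call
tmpfs  /var/log  tmpfs  defaults,noatime,nosuid,mode=0755,size=100m  0  0
```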

Following on what @rlkoshak said, this article might be of interest to you. I bookmarked it earlier this week as something to try in the future. If you get it working, I’d love to hear about it.

As @rlkoshak said there still will be system logging, so reducing OH logging helps but won’t disable writes altogether.
Try commit=300 (or a higher value) in /etc/fstab for the filesystems your Pi logs to; that should greatly reduce the number of writes (but be cautious with the / filesystem, I haven’t tried it there yet).
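For illustration, the commit option goes into the mount-options column of /etc/fstab; the device and mount point here are placeholders, and be aware that up to 300 seconds of buffered data can be lost on a power failure:

```
# ext4 flushes dirty data every 300 s instead of the default 5 s
/dev/sda1  /mnt/usb  ext4  defaults,noatime,commit=300  0  2
```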

@rpwong - thank you for the link, will investigate.

My goal is to reduce the complexity of my setup. I am not a Linux expert, and I have found the above steps doable but difficult. The real problem for me is when things are not working: trying to remember how things were set up and how to troubleshoot them…

I found this in the discussion about log2ram. Looks very interesting.

This is exactly why I learned Ansible. I am proficient in Linux and yet I can’t remember everything I do to set up my RPis and VMs. Using Ansible, all the setup becomes scripted. The scripts get checked into my source control, like I do with my OH configs. I don’t have to remember what I did because it’s all captured in the YAML. I can go back and look at the history and see how my config has changed over time. And if I hit a worst-case scenario and lose everything, setting everything back up the way it was is just a matter of setting up the logins on the machines and running one command.

Here is my Ansible role for moving /var/log to a tmpfs file system (i.e. logging to RAM).

---
# tasks file for min-writes
# http://www.zdnet.com/article/raspberry-pi-extending-the-life-of-the-sd-card/
- name: Mount /tmp to tmpfs
  mount:
    path: /tmp
    src: tmpfs
    fstype: tmpfs
    opts: defaults,noatime,nosuid,size=100m
    dump: 0
    state: mounted
  become: yes

- name: Mount /var/tmp to tmpfs
  mount:
    path: /var/tmp
    src: tmpfs
    fstype: tmpfs
    opts: defaults,noatime,nosuid,size=30m
    dump: 0
    state: mounted
  become: yes

- name: Mount /var/log to tmpfs
  mount:
    path: /var/log
    src: tmpfs
    fstype: tmpfs
    opts: defaults,noatime,nosuid,mode=0755,size=100m
    dump: 0
    state: mounted
  become: yes

#- name: Mount /var/run to tmpfs
#  mount:
#    path: /var/run
#    src: tmpfs
#    fstype: tmpfs
#    opts: defaults,noatime,nosuid,mode=0755,size=2m
#    dump: 0
#    state: mounted
#  become: yes

- name: Reboot
  include_role:
    name: reboot

There are ways to collapse the above into one task but I haven’t bothered to go back and update my old playbooks yet with new things I’ve learned about Ansible.

I don’t remember why I commented out linking /var/run to a tmpfs.

The script that Russell links to does the same thing using Bash scripting, with some additions to preserve the logs periodically. I just let my logs and tmp folders disappear on a reboot unless I’m actively debugging a problem; then I’ll enable a cron job like the one in the linked script. If I wanted to write the logs to disk periodically, I’d add a task to the role above to create a cron job for that.
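A minimal sketch of what such a periodic log-preserving job could look like, assuming a tmpfs-backed /var/log; the function, paths, and cron schedule below are all hypothetical, not taken from the linked script:

```shell
# Sketch only: copy a tmpfs-backed log directory to persistent storage,
# the way a cron job might. Source and destination are passed as arguments.
persist_logs() {
    src="$1"
    dest="$2"
    mkdir -p "$dest"
    # cp -a preserves permissions and timestamps; needs only coreutils
    cp -a "$src"/. "$dest"/
}

# Hypothetical crontab entry to run it hourly:
#   0 * * * * /usr/local/bin/persist-logs.sh /var/log /var/log.persist
```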

For anyone overwhelmed with needing to maintain more than just a couple of Linux machines, I highly recommend spending the time to learn Ansible. It’s not hard to learn, and the little bit of time you spend up front will pay huge dividends in the long run. If you do it right (I don’t yet) and build it to be idempotent (i.e. no changes are made if no changes are needed), you can use the same scripts that build the system to update/upgrade as well. I have a separate set of upgrade playbooks, but that means I can upgrade apt packages, Docker images, git-cloned software that needs to be built, and pull and deploy updates of my own code, etc., on all of my machines with one command. The amount of time this has saved me far outweighs the amount of time I invested learning it.


Note the recent update on zram in my main post and this new post.
I encourage everyone to have a look at https://github.com/openhab/openhabian/pull/576 and help with testing by deploying this on your own boxes and getting me some feedback. See the last comment on GitHub for how to install. Standard disclaimer applies: use at your own risk.


Markus,

I read your post regarding “maximizing” resiliency. I do not currently have the necessary infrastructure (or means) to pursue everything you recommend.

In the interim, I have used a “better quality” (for what that’s worth) USB stick drive and moved root onto it.

I corresponded with you a few weeks ago regarding zRAM and have seen the recent posts regarding getting that fully incorporated and awaiting the PR to be merged. I plan on turning zRAM back on as soon as that’s merged.

I am just confused overall about what all these moving parts are actually doing. I understand that both moving root to USB and using zRAM are intended to minimize the write cycles on the SD card to postpone its inevitable failure. But what pieces live where? In particular, what do I need to do to back up what I need to back up, so that I can restore after a catastrophic failure?

  • I have moved root to USB
  • I will be using zRAM
  • I have a fairly simple setup. About the only thing besides openHAB and some of its add-ons is mosquitto. I was using an encrypted broker but have transitioned to using myopenhab.org instead (so no certificates to worry about).
  • I have backups of the mosquitto configuration.
  • I regularly back up /srv/openhab2-conf and /srv/openhab2-userdata (“openHABian” references)

So…

  • What remains on the SD card? Should I make image backups of it regularly, or are conf and userdata enough? Is there any other recommended approach to backing up the information on the SD card?
  • Do I need to make a backup of the root I moved to the USB stick? If so, what is the recommended means?

Many thanks in advance!

Mike

It now is merged.

Historically, “move to USB” was built long before ZRAM (and not by me), and the two features are not designed to work with each other, so the effect of combining them is unknown even to me. All I can do is recommend not mixing them and going with ZRAM only.

I would set up another system from scratch (openHABian image), enable ZRAM, and then openhab-cli restore ‘anything OH’ plus whatever you changed beyond OH (mosquitto etc.; there’s no per-application backup/restore for all of these, so you have to do that manually).
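As a sketch of that backup/restore round trip (the archive path is just an example; run each command on the appropriate machine):

```shell
# On the old system: snapshot openHAB config and userdata into one archive
sudo openhab-cli backup /tmp/oh-backup.zip

# On the freshly set up openHABian system, after enabling ZRAM:
sudo openhab-cli restore /tmp/oh-backup.zip
```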

Yes, use Amanda to back up your SD card. Read the README.