It is, and you can use it to back up the raw SD card. You just need some storage space, but not that much actually, so that can well be flash-based (as you write there just once a day, wear-out is no issue for that).
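For reference, a raw card image can be taken with dd on another Linux box with the card in a reader (the device name and backup path below are hypothetical; always verify the device with lsblk first, since dd to the wrong device destroys data):

```
# Hypothetical device and path -- check with lsblk before running!
sudo dd if=/dev/mmcblk0 of=/mnt/backup/sdcard-$(date +%F).img bs=4M status=progress
```

The same command in a daily cron job gives the once-a-day image mentioned above.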
Going through all these now! Thanks, great tips. My SD card was toast last week, so I am doing a restart and going to set it up this way.
I want to use persistence and show my sensor data with Grafana. At the moment I still have a microSD card in my Raspberry B+. So copying the root system (like in the openhabian-config tool) to an SSD will not be the fix?
Why are so many suggesting to use an SSD? You say that wear leveling will not apply because files will not get created and deleted like on servers?
Because it’s the only improvement they know. Yes, SSDs are better than SDs because they have a buffer, but they are still flash memory, too, and there are further drawbacks these come with (knowledge needed to configure them, cost, proper setup to boot, etc). And wear levelling doesn’t always apply; if it truly did, we wouldn’t have that many people with problems.
SSDs are a lot faster than SD cards, even connected to the RPi.
Of course there are drawbacks (there always are, no matter what you choose). My main reason for recommending an SSD rather than an SD card is that the SSD is way faster, and the chance of a drive crash is lower than with an SD card (can’t say how much, though).
Even though my SSD is connected using the USB port of the RPi, it’s still significantly faster than the SD card. Some weeks ago I started a new RPi (3B+) with an SD card for some testing. My main RPi 3B is running with an SSD and is quite busy. The new RPi 3B+ got an absolutely minimal install with only one binding.
My main and quite busy system finishes booting more than a minute faster than the RPi 3B+ with the SD card and minimal install.
A fresh install of openHAB 2.4 (hassle-free openHABian) takes approx. 15 minutes using an SSD, including creating the image copy on Windows 10 using Win32DiskImager.
Using an SD card it takes what feels like forever; my guess is about 1 hour 15 minutes, including the image copy.
I personally would never fall back to SD cards again. I’d rather pay some more, and then have more time to play some more.
So in openhabian-config the option to move root to usb device serves only for better speed and has not much to do with reliability of the system?
Mostly yes. Sure it’s distributing write load across multiple media so helps somewhat but it clearly is not the aspired solution to provide high reliability.
Hello, I am new here and I really want your opinion about something.
Do you have Discord?
Whatever you like, I just have a few questions.
Thank you so much for all your help on the forum !
I’m not doing 1:1 support or discussion.
This is a public forum so post your question in a new thread.
And please follow the rules.
Okay, I am sorry, I didn’t know that.
I’ve been using a powerbank for almost a year now. I noticed that a couple of LEDs no longer work, but apart from that it continues to operate reliably. Nevertheless, I’ve recently found a blog entry on the RAVPower website stating that pass-through is not meant to be used continuously.
In any case, even a UPS will eventually require a replacement of its internal battery after some time. When my powerbank finally fails, I will consider a UPS, whose clear advantage is that it can also keep my switch up and running.
I’ve offloaded /var/log and /var/lib/persistence to a USB stick. While the log files are split by logrotate after some time, the same does not apply to the rrd4j persistence files, which by design do not grow in size and are continuously rewritten. Would it be worth periodically copying the persistence files in order to let wear leveling work?
You don’t need to do this because of the way that wear leveling works. The storage is broken up into sectors, and each sector can hold part or all of more than one file. When a file that is stored in a sector changes, the entire sector is copied to a new sector with the changes applied, and the old sector is added back to the pool of available free sectors.
Every time rrd4j writes to the DB, the file gets moved around the SD card anyway. There is no need to copy files around for wear leveling.
Also, the above is why you should never pull the power on an RPi if you can help it. If it happens to be writing a sector that also contains, for example, part of the kernel, the next time you start up it won’t be able to boot because part of the kernel was lost.
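To make the mechanism concrete, here is a toy model of copy-on-write sector allocation (my own sketch, far simpler than any real flash controller): even when the workload rewrites the same logical sector over and over, rrd4j-style, the physical writes still get spread across the whole free pool.

```python
# Toy model of flash copy-on-write: the controller never rewrites a sector
# in place; it copies the data to a fresh physical sector and retires the
# old one back into the free pool. Illustrative only.

class Flash:
    def __init__(self, total_sectors):
        self.free = list(range(total_sectors))  # pool of free physical sectors
        self.map = {}     # logical sector -> current physical sector
        self.writes = {}  # physical sector -> number of times programmed

    def write(self, logical):
        new_phys = self.free.pop(0)        # take a fresh physical sector
        old_phys = self.map.get(logical)
        if old_phys is not None:
            self.free.append(old_phys)     # retire the old copy to the pool
        self.map[logical] = new_phys
        self.writes[new_phys] = self.writes.get(new_phys, 0) + 1

flash = Flash(total_sectors=8)
for _ in range(100):
    flash.write(logical=0)  # always "the same file", like a fixed-size rrd4j DB

# All 8 physical sectors got used, nearly evenly -- copying the file around
# manually would add nothing.
print(sorted(flash.writes.values()))  # → [12, 12, 12, 12, 13, 13, 13, 13]
```

This is also why the warning about pulling power holds: the sector being copied at that instant may belong to something critical like the kernel.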
Is there a way to turn off all logging, i.e. have nothing written to the SD card? I have a couple of products in my house, like the ISY99, that have an SD card. They are several (>3) years old and are doing just fine. I don’t really need logging, nor persistence. I don’t mind backing up my Pi when I make changes, but I don’t want to spend the time setting up and maintaining storage on a USB drive or a NAS and managing the symlinks etc.
I think you can disable logging for OH in the config file, but that only addresses OH. All kinds of stuff logs too, from sshd to the kernel. It will be way less work in the long run to configure the logging to go to a ramdisk than to figure out how to stop everything from logging.
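As an illustration (the size and options here are just an example), a ramdisk for /var/log can be declared with a single line in /etc/fstab:

```
# Log to RAM instead of the SD card; contents are lost on every reboot
tmpfs  /var/log  tmpfs  defaults,noatime,nosuid,mode=0755,size=100m  0  0
```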
Following on what @rlkoshak said, this article might be of interest to you. I bookmarked it earlier this week as something to try in the future. If you get it working, I’d love to hear about it.
As @rlkoshak said there still will be system logging, so reducing OH logging helps but won’t disable writes altogether.
Set commit=300 (or a higher value) in /etc/fstab for the filesystems your Pi logs to; that should greatly reduce the number of writes (but be cautious with the / filesystem, I haven’t tried it there yet).
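For illustration, a line like the following (device and mount point are hypothetical, adapt them to your setup) raises the ext4 commit interval from its 5-second default:

```
# Flush dirty data every 300 s instead of every 5 s; note that up to
# 300 s of data can be lost on a power failure
/dev/sda1  /var  ext4  defaults,noatime,commit=300  0  2
```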
@rpwong - thank you for the link, will investigate.
My goal is to reduce the complexity of my setup. I am not a Linux expert, and I have found the above steps doable but difficult. The real problem for me is when things are not working, and trying to remember how things were set up and how to troubleshoot them…
I found this in the discussion about log2ram. Looks very interesting.
This is exactly why I learned Ansible. I am proficient in Linux and yet I can’t remember everything I do to set up my RPis and VMs. Using Ansible, all the setup becomes scripted. The scripts get checked into my source control, like I do with my OH configs. I don’t have to remember what I did because it’s all captured in the YAML, and I can go back through the history and see how my config has changed over time. And if I hit a worst-case scenario and lose everything, setting everything back up the way it was is just a matter of setting up the logins on the machines and running one command.
Here is my Ansible role for moving /var/log to a tmpfs file system (i.e. logging to RAM).
```yaml
---
# tasks file for min-writes
# http://www.zdnet.com/article/raspberry-pi-extending-the-life-of-the-sd-card/

- name: Mount /tmp to tmpfs
  mount:
    path: /tmp
    src: tmpfs
    fstype: tmpfs
    opts: defaults,noatime,nosuid,size=100m
    dump: 0
    state: mounted
  become: yes

- name: Mount /var/tmp to tmpfs
  mount:
    path: /var/tmp
    src: tmpfs
    fstype: tmpfs
    opts: defaults,noatime,nosuid,size=30m
    dump: 0
    state: mounted
  become: yes

- name: Mount /var/log to tmpfs
  mount:
    path: /var/log
    src: tmpfs
    fstype: tmpfs
    opts: defaults,noatime,nosuid,mode=0755,size=100m
    dump: 0
    state: mounted
  become: yes

#- name: Mount /var/run to tmpfs
#  mount:
#    path: /var/run
#    src: tmpfs
#    fstype: tmpfs
#    opts: defaults,noatime,nosuid,mode=0755,size=2m
#    dump: 0
#    state: mounted
#  become: yes

- name: Reboot
  include_role:
    name: reboot
```
There are ways to collapse the above into one task but I haven’t bothered to go back and update my old playbooks yet with new things I’ve learned about Ansible.
I don’t remember why I commented out linking /var/run to a tmpfs.
The script that Russell links to does the same thing using Bash scripting, with some additions to preserve the logs periodically. I just let my logs and tmp folders disappear on a reboot unless I’m actively debugging a problem; then I’ll enable a cron job like the one in the script linked above. If I wanted to write the logs to disk periodically, I’d add a task to the above that creates a cron job to do that.
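As a sketch of what such a task would install (the destination path is hypothetical), the crontab entry itself could be as simple as:

```
# Copy the RAM-backed logs to the SD card every 15 minutes while debugging,
# so a reboot or crash doesn't lose them
*/15 * * * * rsync -a --delete /var/log/ /var/log.persist/
```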
For anyone overwhelmed with needing to maintain more than just a couple of Linux machines, I highly recommend spending the time to learn Ansible. It’s not hard to learn, and the little bit of time you spend up front will pay huge dividends in the long run. If you do it right (I don’t yet) and build it to be idempotent (i.e. no changes are made if no changes are needed), you can use the same scripts that build the system to update/upgrade as well. I have a separate set of upgrade playbooks, but that means I can upgrade apt packages, Docker images, git-cloned software that needs to be built, and deployments of my own code on all of my machines with one command. The amount of time this has saved me far outweighs the time I invested in learning it.