Corrupt file systems every 2-3 months?

Not quite right. Re-read my post. There are two ingredients to a long-lived SD card: a UPS and a reduction of writes.
If you already have a UPS (power outages would affect ALL connected devices at the same time), going for a software RAID might be more reliable than using a single device (even if that's an SSD).
Then again, it adds complexity, and if you have also applied the write tweaks AND have a backup in place, it's probably not worth it or required any more.
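For concreteness, one common form of the write-reduction tweaks is mounting the most frequently written paths as tmpfs (RAM-backed), so routine logging never touches the SD card. A minimal sketch, assuming a Debian-style `/etc/fstab`; the mount points and sizes here are illustrative, not taken from any particular openHABian release:

```
# RAM-backed mounts: nothing below ever touches the SD card,
# but the contents are lost on every reboot.
tmpfs   /tmp       tmpfs   defaults,noatime,size=64m   0  0
tmpfs   /var/log   tmpfs   defaults,noatime,size=64m   0  0
```

Anything you actually need to keep (logs, persistence) then has to be copied elsewhere before shutdown.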

I'm doubtful. SD cards don't implement SMART, so there is nothing there to report that a card is failing. SD cards work until they don't, and the only way you can tell they are no longer working is when weird stuff starts to happen (e.g. you try to delete a log file but the file remains, or comes back at reboot). And corruption caused by loss of power isn't a case of the SD card failing; it's a case of the SD card having no power-loss protection to prevent file system corruption.

So you will be in a situation where file system corruption can occur even on perfectly fine SD cards and a situation where the drives fail silently giving no information to the RAID controller that anything is wrong. Linux and the RAID will continue to happily write and replicate corruption across the RAID because there is nothing telling it that what you intended to write didn’t actually get written to the SD card.

This will be far more expensive, far more complex, and likely far less reliable than just a single external SSD or HDD, so I can't recommend it. I'm not certain it would address the SD card corruption and wear-out problem at all, actually.


As others have said, just get yourself an external SSD case and run the Pi from that. For my most recent install, I used Etcher to burn the openHABian image directly to the SSD. Worked great.

As with any system you want to be reliable, put it on a UPS. Make backups of the important files.

Thanks for clarifying. I wasn't aware that there was no SMART or anything whatsoever.
How about staying with an SD card then, and running dd once a "day" (at night) to another SD card of the same size, with a prior error check?
Or dd to a magnetic disk, erasing old files when new ones are deployed and disk capacity exceeds 80%?

Wasn't there a possibility to write zeros to the free space on an SD card and then compress the image?

These sound like a lot of write operations. Writes are what wear out the SD card; each memory cell can only be written a finite number of times.

Also, with flash memory it really isn't possible to write zeros to the free space; the technology just doesn't work that way. You can write zeros a million times, but you have no control over which cells those zeros get written to. Many, if not most, of the cells currently marked as unused may never get written, and the "free" cells may not be wholly free. This sort of thing simply does not work with flash memory.

Overall, honestly, this sounds way more complicated than just setting up a good backup and restore procedure and dealing with restoring to a new SD card when the one you are using goes south. You will need this anyway. And configuring the machine to use an SSD or HDD would be even simpler and, in the end, probably more reliable.

OK, thanks for clarifying. I am still thinking about magnetic disks.
I read that Amanda seems to be a pretty decent tool?

I will follow your suggestion and go with an HDD and do daily backups; now I need to get this SD card running with the least amount of writes.
I've got a 5V 6A power supply and will attach a USB HDD, do backups with Amanda, and probably one dd every couple of weeks.
As a UPS, I will look into getting a power bank.
See here where I am stuck at the moment

It is, and you can use it to back up the raw SD card. You just need some storage space, but not that much actually, so that can well be flash-based (as you write there just once a day, wear-out is no issue for that).
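As a sketch of what such a once-a-day raw backup could look like with plain dd (the device name `/dev/mmcblk0` and the destination path are assumptions; adjust them to your setup):

```shell
#!/bin/sh
# backup_sd SRC DEST: image a block device (or file) to a gzipped file.
backup_sd() {
    # conv=sync,noerror makes dd pad unreadable blocks and keep going
    # instead of aborting the whole image; gzip shrinks the mostly
    # empty card image considerably.
    dd if="$1" bs=4M conv=sync,noerror 2>/dev/null | gzip -c > "$2"
}

# Typical nightly use on a Pi (run from cron, as root):
#   backup_sd /dev/mmcblk0 "/mnt/backup/sdcard-$(date +%F).img.gz"
# Restore later with:
#   gzip -dc /mnt/backup/sdcard-2019-01-01.img.gz | dd of=/dev/mmcblk0 bs=4M
```

Note that imaging a mounted, running root file system can produce an inconsistent copy; stopping openHAB first (or imaging the card from a second machine) is safer.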

Going through all these now! Thanks, great tips. My SD card was toast last week, so I am doing a restart and am going to set it up this way.

I want to use persistence and show my sensor data with Grafana. At the moment I still have a microSD card in my Raspberry Pi B+. So copying the root system (like in the openhabian-config tool) to an SSD will not be the fix?

Why are so many suggesting to use an SSD? You say that wear leveling will not apply because files will not get created and deleted like on servers?

Because it's the only improvement they know. Yes, SSDs are better than SD cards because they have a buffer, but they are still flash memory, and they come with further drawbacks (knowledge needed to configure, cost, proper setup to boot, etc.). And yes, wear levelling doesn't always apply. If it truly did, we wouldn't have so many people having problems.


SSDs are a lot faster than SD cards, even connected to the RPi.
Of course there are drawbacks (there always are, no matter what you choose). My main reason for recommending an SSD rather than an SD card is that the SSD is way faster, and the chance of a drive crash is lower than with an SD card (can't say how much, though).

Even though my SSD is connected via the USB port of the RPi, it's still significantly faster than the SD card. Some weeks ago I started a new RPi (3B+) with an SD card for some testing. My main RPi 3B is running from an SSD and is quite busy; the new RPi 3B+ got an absolutely minimal install with only one binding.
My main and quite busy system finishes booting more than a minute faster than the RPi 3B+ with the SD card and minimal install.

A fresh install of openHAB 2.4 (hassle-free openHABian) takes approximately 15 minutes using an SSD, including creating the image copy on Windows 10 using Win32DiskImager.
Using an SD card, it takes what feels like forever. My guess is about 1 hour 15 minutes, including the image copy.

I personally would never fall back to SD cards again. I'd rather pay some more and then have more time to play some more :slightly_smiling_face:


So in openhabian-config, the option to move root to a USB device serves only for better speed and has not much to do with the reliability of the system?

Mostly yes. Sure, it distributes the write load across multiple media, so it helps somewhat, but it is clearly not the aspired solution for high reliability.


Hello, I am new here and I really want your opinion about something.
Do you have Discord?
Whatever you like; I just have a few questions.
Thank you so much for all your help on the forum!

I’m not doing 1:1 support or discussion.
This is a public forum so post your question in a new thread.
And please follow the rules.

1 Like

Okay, I am sorry, I didn't know that.

I've been using a power bank for almost a year now. I noticed that a couple of LEDs no longer work, but apart from that it continues to operate reliably. Nevertheless, I recently found a blog entry on the RAVPower website stating that pass-through is not meant to be used continuously.

In any case, even a UPS will eventually require a replacement of its internal battery after some time. When my power bank finally fails, I will consider a UPS, whose clear advantage is that it can also keep my switch up and running.

I've offloaded /var/log and /var/lib/persistence to a USB stick. While the log files are eventually rotated by logrotate, the same does not apply to the rrd4j persistence files, which by design do not grow in size and are continuously rewritten. Would it be worth periodically copying the persistence files in order to let wear leveling work?
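For anyone wanting to replicate that offload, one way to do it is with bind mounts in `/etc/fstab`; a sketch under assumptions (the UUID, stick mount point, and directory layout are placeholders for your actual setup, not taken from the post above):

```
# Mount the USB stick, then bind its directories over the
# write-heavy paths (find the real UUID with blkid).
UUID=1234-ABCD              /mnt/usbstick          ext4  defaults,noatime  0  2
/mnt/usbstick/log           /var/log               none  bind              0  0
/mnt/usbstick/persistence   /var/lib/persistence   none  bind              0  0
```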

You don't need to do this because of the way wear leveling works. The storage is broken up into sectors, and each sector can hold part or all of more than one file. When a file stored in a sector changes, the entire sector is copied to a new sector with the changes made, and the old sector is added to the pool of available free sectors.

Every time rrd4j writes to the DB, the file gets moved around the SD card anyway. There is no need to copy files around for wear leveling.

Also, the above is why you should never pull the power on an RPi if you can help it. If it happens to be writing a sector that also contains, for example, part of the kernel, the next time you start up it won’t be able to boot because part of the kernel was lost.


@mstormi @rlkoshak

Is there a way to turn off all logging, i.e. have nothing written to the SD card? I have a couple of products in my house, like the ISY99, that have an SD card. They are several (>3) years old and are doing just fine. I don't really need logging, nor persistence. I don't mind backing up my Pi when I make changes, but I don't want to spend the time setting up and maintaining storage on a USB drive or a NAS, managing the symlinks, etc.
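(Not an authoritative answer, but as a sketch: openHAB runs on Apache Karaf, whose console has a standard `log:set` command, so turning logging down or off might look like the snippet below. Whether `OFF` silences every appender in your particular install is worth verifying; the change can also be made persistent in `userdata/etc/org.ops4j.pax.logging.cfg`.)

```
# In the Karaf console (ssh -p 8101 openhab@localhost):
log:set OFF          # root logger: stop all logging
log:set WARN         # or: keep only warnings and errors
```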
