Excessive writes - what can be safely moved to a RAM Drive


As my SD Card suffered an early death I'd like to know what files in OH 2.1 can be safely moved to a RAM drive, and also what else can be run from tmpfs but needs persisting between reboots. I already run a read-only filesystem with OH and have moved all the logging to tmpfs.
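For reference, the logging is moved with tmpfs entries in /etc/fstab, roughly like this (the mount sizes and the /var/log/openhab2 path are just what I assume a stock install uses, so adjust to suit):

    # /etc/fstab - keep openHAB and system logs off the SD card
    # sizes are guesses; the openhab user still needs write access after mounting
    tmpfs   /var/log/openhab2   tmpfs   nodev,nosuid,size=32m   0   0
    tmpfs   /tmp                tmpfs   nodev,nosuid,size=64m   0   0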
To avoid compatibility problems I have left the contents of /var/lib and /etc/openhab2 on the file system and I’m looking to see whether this is causing repeated writes that have worn out my SD Card.
Looking at the file access dates it seems that /etc/openhab2 isn't being written to, and that once stable only the persistence in /var/lib/openhab2/persistence/mapdb seems to be hit all the time.
However, on reboot a lot of files get their modified dates changed:
  • all the .jar and .cfg files further down the tree under /var/lib/openhab2/kar/openhab-addons-2.1.0/org/openhab/ (most of these are for add-ons I DON'T HAVE INSTALLED)
  • a few files in /var/lib/openhab2/cache and /var/lib/tmp

The total writes, if my assumptions are correct, are 939 files and 279 MB per reboot, which would go some way to explaining my SD Card's early death.
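That figure just comes from counting files whose modification time is newer than the last boot, along these lines (GNU find and du assumed):

    # count and size files under /var/lib/openhab2 touched since the last boot
    find /var/lib/openhab2 -type f -newermt "$(uptime -s)" | wc -l
    find /var/lib/openhab2 -type f -newermt "$(uptime -s)" -print0 \
        | du -ch --files0-from=- | tail -n 1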

Assuming I'm correct, is there a way of stopping all the JARs from being recreated after every reboot?

That is because OH runs a chmod on all the files every time it reboots. It isn't recreating any JAR files.

Personally, I think you would be better off spending your effort on establishing a bulletproof backup/restore approach (which you should be doing anyway), and maybe running from an external HD, than trying to eke a few more months out of an SD card.

Since you have moved your logs to tmpfs, it is most likely the mapdb that has worn out your SD card. And since the entire purpose of persistence is to persist the data when OH restarts, you can't put that into a tmpfs.

The chmod that is causing all the files to have their permissions changed is not going to happen often enough, nor write enough data, to cause any problems.

Thanks for the insight. I've just moved persistence to tmpfs with a script to move it back on reboot. Can you think of anything else that causes writes? I'm keen not to add a conventional hard disk as it'll likely use more power than the Pi, and then mechanically fail when I'm not expecting it. I'm guessing an SSD will suffer from wearing out just the same as a flash drive?
I’ll add an automated backup to memory stick.
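Roughly, the boot/shutdown script I mentioned is just a copy in each direction (the paths are assumptions for my setup, and it only helps if the shutdown is clean):

    #!/bin/sh
    # mapdb lives on tmpfs while running; a copy is kept on the SD card
    LIVE=/var/lib/openhab2/persistence/mapdb      # on tmpfs
    SAVED=/var/lib/openhab2/persistence/mapdb.sd  # on the SD card

    case "$1" in
        start)  cp -a "$SAVED/." "$LIVE/" ;;          # restore at boot
        stop)   cp -a "$LIVE/." "$SAVED/" && sync ;;  # save at shutdown
    esac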

Which is why my first suggestion, which you should do no matter what, is to develop a foolproof backup and restore approach. ANYTHING can fail when you're not expecting it.

SSDs use a slightly different technology and have a WHOLE lot more room to deal with writes. In commercial tests they last as long as or longer than HDDs.

I don’t think that is going to buy you much and you may as well not use persistence at all if that is your approach. You can’t always control when or how your system reboots nor can you guarantee that it will close down cleanly. Therefore you run the very real risk of losing the latest version of your MapDB persistence. And since it only saves the most recent values, any backup you have will be out of date and you will need to write System started rules to figure out what the true state of your OH is. And if you have to do that, what do you even need the persistence for in the first place?

If you are only using it to get the time of the last update, then you will get incorrect values and your rules may make the wrong decision based on stale information.

Memory sticks are the same as SD cards, and they make the absolute worst medium to back up to because:

  • they wear out from writes just like SD cards
  • they do not always tell you they are wearing out until files start corrupting
  • so you may end up with a corrupt backup without even knowing it and bye bye backup.

I had no idea they were considered that good. Is there anything that you know of that can bring the life-remaining attribute from the SMART statistics into the Pi, ideally in OH? Seeing the writes running out would give advance warning of a wear-related failure. I have a couple of Intel 80 GB SSDs that are a bit small for Windows as a boot drive that I could recycle, and they have a readable wear attribute.
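On the PC I read that attribute with smartmontools, so I assume something similar would work on the Pi (the device node and attribute name are guesses for a USB-attached Intel drive, and some USB adapters also need '-d sat'):

    # list SMART attributes and pull out the wear indicator
    sudo smartctl -A /dev/sda | grep -i -e wearout -e wear_level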

That's close enough for what I'm doing; the scheduler items are the most important and they seldom change, and the rest can mainly be derived.

Maybe backing one Pi up to another on a separate SD card partition is a better way, and then periodically archiving that to a PC? So far I've just imaged the cards after every large change and never got around to doing something more regular.
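Something like a nightly rsync pull on the second Pi, then copying that snapshot onto the PC now and again (the host name and paths are placeholders):

    # crontab entries on the backup Pi: pull the live config and userdata overnight
    0 3 * * *   rsync -az --delete -e ssh openhab-pi:/etc/openhab2/ /srv/backup/etc-openhab2/
    0 5 * * *   rsync -az --delete -e ssh openhab-pi:/var/lib/openhab2/ /srv/backup/var-lib-openhab2/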

Not that I know of off hand; maybe the snmp or sysinfo binding.

But if you have a reliable backup and restore solution, is it really worth the effort? Everything fails at some point, so you need to do this anyway. Anything else you do is only maybe going to delay how often you have to deal with a dead drive. When you are managing a cluster it probably is worth it, because you could be seeing a drive die once a week or so. But for a standalone system you would be doing all this extra work, and in the end the result is the same: you have to recover from backup.

Unless you get a bad drive, I don't think you ever have to worry about an SSD failing on you, even a small 80 GB one.

That would be better, but why not back up straight to the PC? You are still using a fragile medium in your backups, one that can fail without telling you. I can't say I recommend it.