We need to work out WHY you have serious problems with it. First work out why, and then create a solution that fixes the root of the problem; my changes will not address the problem if, for example, you are writing to a persistence file 100 times a second, you have a fake SD card, you are overclocking, or the power supply and USB cable you use are not good enough. As a first step, run the inotifywait commands in my first post and work out which files are getting written.
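If you don't have the first post handy, a command along these lines is a reasonable sketch of what I mean (this is my illustration, not the exact commands from that post; it needs the inotify-tools package and root, and the excluded paths are assumptions you may want to adjust):

```shell
# Watch the whole filesystem for 60 seconds and report every write.
# -m monitor continuously, -r recurse, -e limit to write-style events.
# /proc, /sys, /dev, /run and /tmp are excluded as they live in RAM anyway.
sudo timeout 60 inotifywait -m -r \
    -e modify -e create -e delete -e move \
    --exclude '^/(proc|sys|dev|run|tmp)' /
```

Every path it prints is a file being written to your SD card; anything that shows up repeatedly is a candidate for moving to a RAM disk.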
To answer the question: the rc.local file does have some lines that should not be disturbed, as they contain if, then and fi statements. Place the code either at the top, after the last of the lines starting with a hash (#), or directly above the exit 0 line at the bottom. To make it easier to read, it is fine to press ENTER a few times to create space between what you're adding and what was there before. The hash lines are comments, and you can add your own to help you keep track of what you have changed. Make an IMG of the SD card before you start, or back up the files you change, so you can reverse it should something go wrong.
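As a sketch, an edited rc.local would look something like this (the mount line is only an illustrative example of the kind of command you might add; the tmpfs path and size are assumptions):

```shell
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.

# --- my additions start here (after the comment block at the top) ---
# Example: hold /var/log in RAM; adjust path/size to your own setup.
# mount -t tmpfs -o defaults,noatime,size=50m tmpfs /var/log
# --- my additions end here ---

exit 0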
In a terminal, type the command ‘free -m’ and it will tell you how much free memory (in MB) you have available to use as a RAM drive, aka tmpfs (temporary file system). NOTE: It is my understanding that Linux never clears out memory until it runs out; it keeps things in RAM in case they are needed again, so don’t be surprised if the amount of free memory drops the longer your setup runs. Look at the swap line: if 0 is used, then all is good. Actual free memory = free + used for buffers + used for cache.
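On recent versions of free (procps-ng), that sum is already computed for you in the “available” column, so a one-liner like the following will print it directly (assuming the modern column layout; older versions split buffers and cache into separate columns):

```shell
# Print the RAM actually available for a tmpfs: the "available" column
# of free -m, which already accounts for reclaimable buffers and cache.
free -m | awk '/^Mem:/ {printf "Available for tmpfs: ~%d MB\n", $7}'
```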
My background was doing hours of daily testing on a device similar to a Pi, long before they were invented. For years I would pull the power on this device hundreds of times per day while it was running Linux from flash storage. Not once did it ever have an issue, as the entire flash was set up as read-only. Because I have years of first-hand experience with this, I know how well flash can work, and also how badly it can fail if it is written to excessively, made worse by dirty shutdowns. I also have an RPi2 running Kodi (OSMC) that never gets powered down and is still fine, 2 years after I made changes similar to the above.
@matt1: Thank you for your detailed answer. I am using a Raspi 3 with no overclocking, nothing in its USB plugs, 16 GB Sandisk SDCard with 81% free (root), RAM 66% free, Swap 100% free, CPU load 1-2% typ.
My persistence is rrd4j, once a minute, two values.
Here is a log of accesses in about one minute.
My problem is as follows, as detailed here. Once every two or more months I can no longer access openhabian via SSH (port 22). Openhab is still running, but there is no more access to openhabian, so I cannot do a soft shutdown. Disconnecting and reconnecting the power plug does not help, and the installation soon fails completely and needs reflashing. I now use the systeminfo binding to monitor system resources in my sitemap.
For my use case, lost logs or persistence data are not as critical as failed Openhab operation. Therefore, your approach could possibly help, provided it is in fact a disk corruption issue.
Give it a go and offload the persistence to a USB stick or a NAS as a trial. Are you using a UPS at all? If you get power cutting in and out, that is very bad, as Linux may/will be writing to files during the boot process while the power is cutting in and out. When I get some spare time I will look into what files change on boot.
I’d suggest also adjusting the mount options for the SD card file system. I’ve been using the commit=360 parameter with ext4 to reduce write accesses, yet still have everything written to the card (eventually). The maximum delay in this case is 6 minutes, and the file system should stay safe and consistent no matter the power outages. I’ve been running with this setup for several months now. https://www.kernel.org/doc/Documentation/filesystems/ext4.txt
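For reference, this is set in the options column of the root entry in /etc/fstab, roughly like the fragment below (the PARTUUID is a placeholder; keep whatever device identifier your existing entry already uses):

```
# /etc/fstab — ext4 root with a 6-minute journal commit interval
PARTUUID=xxxxxxxx-02  /  ext4  defaults,noatime,commit=360  0  1
```

The noatime option is a common companion tweak that also cuts writes, by not updating file access times on every read.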
Thanks for suggesting; I will take a look when I get some time. So far my tweaks are working well, and the only way I see them causing issues is if Samba gets upgraded and the files are downgraded on a reboot, so I keep an eye on the updates for any mention of Samba, which has not happened yet. Backing up is how I handle this; it is a good idea to back up before doing updates anyway.
Logs appear to reach a certain size and then they get renamed, which moves them to the SD card, so I have never seen my RAM disk go anywhere near filling up. I could probably drop its size even lower, but I have not had the need to.
EDIT: When openhab.log reaches approximately 17 MB in size, it gets renamed and a new file is created. This breaks the symlink, so the new file then writes to the SD card again. There are no crashes or issues, it just keeps working fine, except that the SD card starts getting the writes again until I reboot. So by rebooting before the log hits 17 MB, or by turning off or reducing the log outputs, it is easy to keep the file from reaching this size.
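A small script run from cron could repair the link without a reboot; something like this sketch, where the two paths are illustrative defaults for my layout and should be overridden for yours:

```shell
#!/bin/sh
# Sketch: after log rotation replaces the symlink with a plain file, move
# the new file onto the RAM disk and restore the symlink so future writes
# go back to RAM. Paths are assumptions; override them via environment.
RAMLOG="${RAMLOG:-/var/log/ramdisk/openhab.log}"
SDLOG="${SDLOG:-/var/log/openhab2/openhab.log}"

if [ -e "$SDLOG" ] && [ ! -L "$SDLOG" ]; then
    mv "$SDLOG" "$RAMLOG"       # carry the rotated-in file to the RAM disk
    ln -s "$RAMLOG" "$SDLOG"    # re-point the log path at the RAM copy
fi
```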
I wouldn’t bother trying to eliminate the more-or-less one-time writes, which is what most daemon processes do w.r.t. their PID files… quite some risk of breaking things and no real benefit.
If you’re really seeing “lots of” writes to that dir, something’s wrong (a permanent startup loop?) and you should find out what it is instead.
Great, let me know what you find, as I have been doing this for a long time on multiple projects, not just Openhab, with great results.
Sorry, no idea; best asked on the Raspbian forum perhaps… But yes, it is safe in the sense that you cannot damage your hardware in any way, and if you have a backup of the SD card’s contents it is very easy to reverse, so go for it and try it; that is the best way to learn. Linux has for years allowed itself to be loaded into RAM: non-persistent USB loads, live CDs, the list goes on, not to mention every professional project that cannot accept flash corruption…
This link may interest you, as it mentions the lock files along with a solution. Some of what is written in that article I don’t agree with; for example, data corruption does not damage the drive and require you to throw it away… http://hallard.me/raspberry-pi-read-only/
It is hard to predict what a programmer does in a binding or other type of add-on, so the behaviour you see, or any other extra writes, could come from an add-on. Worth checking to see whether it is an issue as Markus is indicating, or whether a binding is doing it. Have fun playing.
New writes that have started in my Ubuntu-based setup since this guide was first written:
/run/samba/msg.lock/ (multiple files)
/var/lib/mosquitto/ (mosquitto.db and mosquitto.db.new)
/var/lib/ntp/ (ntp.drift.TEMP and ntp.drift)
/var/lib/openhab2/jsondb/backup/ (multiple files)
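One generic way to silence writes like these is to hold the offending directories in RAM via fstab tmpfs entries, roughly as below (sizes are guesses, and note the trade-off: anything in them, e.g. mosquitto.db with retained messages, is lost on every reboot):

```
# /etc/fstab — keep frequently written directories in RAM (sizes are guesses)
tmpfs  /var/lib/mosquitto  tmpfs  defaults,noatime,size=10m  0  0
tmpfs  /var/lib/ntp        tmpfs  defaults,noatime,size=1m   0  0
```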
Since I am no longer using Openhabian I can’t confirm or test any ‘fixes’ for them. I now use my install script for my Odroid C2, found here, to shut down the writes, which it does in a number of ways. You can open the install script in a text editor and read how I handle these files, as I test and implement a workaround to stop each of the writes.
Thanks for this post. By following it I was able to transfer my logging and persistence to an external flash drive. However, mosquitto initially was not working; after creating a “mosquitto” folder in the log folder it started working.
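For anyone hitting the same thing, the fix amounts to recreating the log directory mosquitto expects on the new drive; a sketch (LOG_ROOT is an assumption, point it at your flash drive’s log folder, and you may also need to chown it to the mosquitto user):

```shell
# Recreate the log directory mosquitto expects under the relocated log root.
# LOG_ROOT is an illustrative default; override it for your mount point.
LOG_ROOT="${LOG_ROOT:-/tmp/usb-log}"
mkdir -p "$LOG_ROOT/mosquitto"
echo "created $LOG_ROOT/mosquitto"
```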
It may not be worth all of the trouble now that SanDisk (now part of Western Digital) is making MicroSD cards designed exactly for HDD-replacement applications, including fast, high-endurance SLC flash plus a wear-levelling controller, all in one MicroSD package. About $25 per 16 GB.
There’s a lot more to HDD-class flash storage than wear levelling, e.g. SLC, among other differences in controller logic. I consider this a cheap, if marginal, improvement in availability no matter your setup.
All reasonable SD card optimisations come out of the box: ZRAM logging, delayed writing, and so on have been present in the system for around 8 years now. For the most demanding cases, the system can also be switched to a fully read-only mode (virtual/in-memory file system; Ubuntu variants), which means absolutely no (damaging) writes to the SD card. These optimisations speed the system up, since it doesn’t use the slow media so much, and also prolong the card’s life.
SD cards have improved over the past decade, but they eventually die. Even the most expensive ones.