I'm concerned about the long-term reliability of my Smart Home solution. So far my SW+HW has been quite stable, so I think I will stay with this configuration for at least a couple of years.
With that in mind, I searched the internet for possible future problems, and the first thing I found was SD card wear on the RPi. Sure, writing logs to flash with a limited number of write cycles is not a good idea.
So my question is: I have a NAS with continuous backup on my network, so it would be good if I could move everything from the SD card there, make the RPi boot from the NAS, and write logs to it. I don't want to move OH execution to the NAS, even though that is possible, because of HW bindings.
Perhaps booting from the SD card would still be an option, but then I would like to have only read-only access to it.
I would also like to make minimal changes to my configuration files, because besides OH I run a couple of other apps, and I don't want to go into each of them to correct paths or whatever.
Has anyone done something like this already, or can recommend an easy guide for it?
Apart from this: any ideas what else could compromise the long-term reliability of RPi + OH? I am also thinking of adding a USB battery pack to avoid sudden reboots during short power failures (this happens to me occasionally, when I shut off the breakers to install a new lamp and forget to shut down the servers).
I messed with the RPi early on and always looked elsewhere, simply due to the longevity concerns you mention. One alternative is the Banana Pi, which supports SATA drives: you basically put the bootstrap on the SD card, and a SATA drive holds everything else.
Alternatively, I'm sure you could mount NAS shares on your Pi and set up your services to write to them. I'm not sure how to work that into the boot process, though.
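For the "mount NAS shares and write to them" part, a minimal sketch might look like this — the server name, export path, and mount point are placeholders for your own NAS setup:

```shell
# Create a mount point for the NAS export.
sudo mkdir -p /mnt/nas-logs

# /etc/fstab entry: mount the NFS export at boot, but don't hang the boot
# if the NAS is unreachable (nofail), and mount lazily on first access
# (x-systemd.automount). noatime avoids extra metadata writes.
echo 'nas.local:/volume1/pi-logs  /mnt/nas-logs  nfs  defaults,nofail,x-systemd.automount,noatime  0  0' \
    | sudo tee -a /etc/fstab

# Mount it now without rebooting.
sudo mount /mnt/nas-logs
```

Services would then be pointed at `/mnt/nas-logs` for their log output; as noted, hooking this into the boot process itself is a separate problem.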
Indeed, RPis eat SD cards for breakfast. I have tried various setups with SD cards as well as with NAS connections (NFS & iSCSI), and they all proved rather unstable in the long run (with "long run" meaning it should be up 24x7, 365 days a year, and resistant to, i.e. not corrupted by, the occasional power drop).
The only setup I found to be relatively stable was to boot from the SD card only and run the OS from a SATA SSD connected through USB.
This doesn't look promising. I was given a guide for moving everything to an NFS NAS, and the author claims it has worked for him for years. So I need to test it.
An SD card should not be a big problem. Modern SD cards use wear leveling, which means writes are spread over the whole free space, so repeated use of the same cells is minimized.
So free space is crucial: use an SD card that is a lot larger than you need. That should extend the lifespan considerably.
Another option is to turn off swap, but I guess the Raspi does not have enough memory to allow this.
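If you do want to try it, on Raspbian the swap file is managed by the `dphys-swapfile` service, so turning it off is a few commands (a sketch, assuming a stock Raspbian install):

```shell
# Turn off the swap file immediately.
sudo dphys-swapfile swapoff

# Remove the swap file from disk to reclaim the space.
sudo dphys-swapfile uninstall

# Keep swap disabled across reboots.
sudo systemctl disable dphys-swapfile

# Verify: the "Swap:" line should now show 0.
free -m
```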
I had the same problems with my Pi 2 when I was testing a lot and let OH write huge log files. What I did, as @marcel_verpaalen said before, was move the whole system to a USB stick and use the SD card only for booting.
With this configuration it now runs very fast and stable. You can take a look at the Raspberry Pi forum thread "HOWTO: Move the filesystem to a USB stick/Drive", where the process is explained.
Hope this helps.
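The rough shape of that HOWTO can be sketched as follows — the device names (`/dev/sda2` for the stick, the `root=` kernel parameter) are examples, so check yours with `lsblk` before copying anything:

```shell
# 1. Format the USB stick's data partition (assumed here to be /dev/sda2)
#    and copy the running root filesystem onto it.
sudo mkfs.ext4 /dev/sda2
sudo mkdir -p /mnt/usbroot
sudo mount /dev/sda2 /mnt/usbroot
sudo rsync -axv / /mnt/usbroot   # -x stays on the root filesystem only

# 2. The kernel is still loaded from the SD card; point it at the USB
#    root by editing the root= parameter in /boot/cmdline.txt, e.g.:
#      root=/dev/sda2 rootfstype=ext4 rootwait
#
# 3. Also update the "/" entry in /mnt/usbroot/etc/fstab to /dev/sda2,
#    then reboot.
```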
I have moved the RPi OS/filesystem to a USB drive and it has been working without interruption (apart from a few power outages) for four months now. Whatever you want to do, I would strongly recommend looking at the BerryBoot boot loader. It makes it very simple to move the operating system to an external drive, or to the NAS if it supports iSCSI.
Here is my experience: after one corrupted SD card (after 3 months), I moved the log files and temporary folders to RAM (search for tmpfs) and put the DB on an old USB stick that is backed up every day.
The Pi is connected to a UPS.
This configuration has been running well for 3 months now (fingers crossed).
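For anyone searching for the tmpfs part, this is roughly what the `/etc/fstab` entries look like — the sizes are guesses to tune to your Pi's memory, and note that anything in tmpfs is lost on reboot, which is exactly the point here:

```shell
# Keep logs and temp files in RAM so they never touch the SD card.
echo 'tmpfs  /var/log  tmpfs  defaults,noatime,size=50m  0  0' | sudo tee -a /etc/fstab
echo 'tmpfs  /tmp      tmpfs  defaults,noatime,size=50m  0  0' | sudo tee -a /etc/fstab

# Takes effect on the next reboot.
sudo reboot
```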
I think that, in general, any flash-based storage (yes, technically this includes SSDs) will wear down under heavy writes, so a USB stick will suffer just as badly as an SD card.
Buying an SD card that does wear leveling and is far larger than you need is a good idea and will likely greatly extend its life. SSDs do exactly this: I remember years ago, when they were first coming out, some showed something crazy like a 10-to-1 ratio of reserve to reported capacity, meaning a 100 GB SSD would actually contain 1000 GB of flash and use the extra space as standby blocks when active ones died, thereby extending its life.
No system is going to be perfectly reliable forever. The question is how long you need/expect it to run, and then you build accordingly. You probably can build a Pi to run reliably using a NAS and the like, but why not just build a whitebox NAS on real hardware and have it run OH as well?
Has anyone looked at the BeagleBone Black Rev C, since it has 4 GB of eMMC on-board storage?
It only has 512 MB of RAM, though, and I would prefer 1 GB.
Beyond that, you could put together a NUC with an SSD.
I have some experience with this as well, though more with badly performing SD cards than with broken ones. Keep in mind, there are SD cards and there are SD cards.
You may want to read this, as it could potentially help with your thoughts and next steps:
High Performance Raspberry for OpenHAB Hardware Server
I've been running BerryBoot + iSCSI on a QNAP NAS for months now, and it seems to be working flawlessly, despite quite a few power problems, both in general and for the Pi itself.
I found instructions on the BerryBoot site that match my situation exactly: http://www.berryterminal.com/doku.php/storing_your_files_on_a_synology_nas_using_iscsi
The only thing I would like to change: I don't want to reinstall everything after the migration, but would rather copy my existing files to the new location. Not sure if it works that way.
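I haven't verified that this works with a BerryBoot-managed image, but in principle, once the new root filesystem on the NAS target is mounted somewhere (assumed here at `/mnt/newroot`, a placeholder), the existing SD card contents could be copied over instead of reinstalling:

```shell
# Untested sketch: copy the current root filesystem into the new
# NAS-backed root. -x stays on the root filesystem, so /proc, /sys,
# and other mounts (including /mnt/newroot itself) are skipped.
sudo rsync -axv / /mnt/newroot/
```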
I have a cron job that backs up my Pi to the NAS:
0 13 * * * /home/pi/backupPi.sh
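The script itself wasn't posted; a hypothetical `backupPi.sh` that images the whole SD card to a mounted NAS share might look like this (`/mnt/nas` and the device name `/dev/mmcblk0` are assumptions to adapt):

```shell
#!/bin/sh
# Hypothetical backupPi.sh - the original script was not posted.
# Images the entire SD card to a compressed file on a mounted NAS share.
set -e

OUT=/mnt/nas/pi-backup-$(date +%F).img.gz

# Note: imaging a live system can yield an inconsistent snapshot;
# running it at a quiet time of day (13:00 in the cron line above)
# reduces that risk.
dd if=/dev/mmcblk0 bs=4M status=progress | gzip > "$OUT"
```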
… and if you need incremental backups with rsync instead of full backups with dd, you could use:
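The actual command was cut off above; a typical incremental mirror of the root filesystem to a mounted NAS share might look like this (the destination path is a placeholder):

```shell
# -a  archive mode (permissions, timestamps, symlinks, recursion)
# -A, -X  also preserve ACLs and extended attributes
# --delete  remove files from the mirror that were deleted on the Pi
# --one-file-system  don't descend into /proc, /sys, or other mounts
sudo rsync -aAXv --delete --one-file-system / /mnt/nas/pi-root/
```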
This is one of the setups I've also tried in the past. My experience was that it does indeed work… but for me it was less stable than the alternative approach of keeping the files on my Synology via NFS.
The main difference is that when, for example, updates are done, iSCSI seems to have a higher chance of corruption than the NFS approach.
Interesting… I wonder how that works: is it the actual files within the iSCSI target that got corrupted, or the iSCSI disk itself? I need to get around to testing the snapshot and backup functionality of the QNAP iSCSI, so that I have a tested recovery plan for when things go wrong.
It's been some time since I did this, but I recall it was disk corruption, specifically when the NAS had reboots or unexpected power-downs. Since the iSCSI target is a block device from the client's point of view, the filesystem on it gets corrupted.
With NFS, which is file-based, the worst that can happen is that an open file gets closed.
There are some nice articles if you google for NFS vs iSCSI.
Observe that the benefits of asynchronous meta-data update in iSCSI come at the cost of lower reliability of data and meta-data persistence than in NFS. Due to synchronous meta-data updates in NFS, both data and meta-data updates persist across client failure. However, in iSCSI, meta-data updates as well as related data may be lost in case client fails prior to flushing the journal and data blocks to the iSCSI server.
Add a smartphone power bank as a UPS. Simple, cheap, and sufficient. It avoids the need for OH restarts every time you or someone else trips the breakers or the earth (ground) fault protection. I have stopped counting how often that happened in my house over the last year.
Moving everything off the SD card is possible, but don't focus on that: it has drawbacks too, and even done properly it's not sufficient.
What’s much more important is a plan for quick recovery. See my recent post here and also this old one.