I’m running openHAB 2.5 (milestone 4) on a Raspberry Pi 3B+ and recently enabled zram to reduce memory consumption and to reduce SD card wear-out.
I’d like to do a similar thing with influxdb, which currently writes to the SD card every time a new measurement is received. From what I understand about influxdb, it’s apparently mainly the wal folder that gets lots of writes, so I’d like to see if it could be kept in memory and flushed to disk only once every X minutes.
Has anyone tried this approach, or can anyone recommend another one?
There are even more points that need to be discussed about defaults and minimizing SD card wear-out.
At the openHAB user group in Berlin someone pointed out that the default logging is too verbose, and there was a lot of back and forth about what the right default should be.
I assume openHABian on an SD card is the most common way to get started with openHAB (real statistics would help). The goal should be a minimal write footprint to minimize wear-out, since many users will run into problems with failing SD cards due to wear-out, power outages, and failing blocks. For example: Raspberry won't boot / kernel panic!? caused either by SD card wear-out or a power outage.
Are you using openHABian?
If so, you have zram enabled already for /var/log and /var/lib/openhab2, which includes the persistence folder. Not sure where influxdb stores its data. If it's elsewhere, you can add a zram folder to /etc/ztab.
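For reference, a ztab entry for the InfluxDB data directory might look like the sketch below. The sizes are only a guess, and the exact column layout depends on your zram-config version, so check the comments in your own /etc/ztab before copying anything:

```shell
# type  alg   mem_limit  disk_size  target_dir          bind_dir
dir     lzo   150M       500M       /var/lib/influxdb   /influxdb.bind
```

Stop the influxdb service before enabling the entry and restart the zram service afterwards, so the bind mount gets set up cleanly.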
There’s no single right answer. It depends on your usage and what part you eventually need to debug.
Either way, reducing logging is not a proper way to avoid SD wear-out. ZRAM or some reliable external storage is.
Jesus NO! I did not want to start an argument about this topic right now and here again!
I know it depends on the use case.
All I wanted to point out is that there is interest in minimizing SD wear-out, nothing else.
I wasn't with you in Berlin, and I don't see what there is to argue about.
All I wanted to emphasize is that reducing logging will not be enough to avoid SD wear-out. Anyone is free to take that risk on their own box, but telling others to do so is dangerous and therefore bad advice.
I recommend against this approach. zram puts the data in RAM, as the name implies. That means all of your InfluxDB database would need to reside in RAM all the time, and even with the compression that zram provides, an RPi doesn't have that much RAM to spare. A more appropriate approach would be to move InfluxDB to some other machine not running off of an SD card, or to get an SSD or HDD to run your RPi off of. Maybe it could work if you set up a retention policy to blow away everything that gets too old. But if you keep everything forever, your database will grow to hundreds of megabytes or gigabytes, and you've only 1 GB of RAM to split between your running programs and zram.
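If you do want to cap the database size, a retention policy along these lines would do it (InfluxDB 1.x syntax; "openhab_db" is a placeholder for your actual database name, and 90 days is an arbitrary choice):

```sql
-- keep 90 days of measurements, automatically drop anything older
CREATE RETENTION POLICY "ninety_days" ON "openhab_db"
  DURATION 90d REPLICATION 1 DEFAULT
```

Run it from the influx CLI; making it DEFAULT means new writes land under this policy without any change on the openHAB side.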
The good news is InfluxDB doesn't seem to compress the data, so you should get a pretty good compression ratio (e.g. gzip took my DB from 147 MB down to 50 MB).
I'd guess that with a standard openHABian setup, another 50 MB of RAM would be available. You can also lower the /var/log zram size by that amount (just remember to adjust the max log size). But that would be sort of living on the edge, and overall Rich is right: it's just not enough RAM.
I'd upgrade to an RPi4 or put in a USB stick for /var/lib/influxdb. You would lose your persistence data, but not the whole system, if that gets corrupted (plus you can back it up).
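A sketch of the matching /etc/fstab entry for that USB-stick setup (the device name /dev/sda1 is an assumption, check lsblk on your box; noatime saves a few extra writes). Stop influxdb, copy the existing /var/lib/influxdb contents onto the stick, add the entry, then mount and restart:

```shell
# /etc/fstab -- assumed device /dev/sda1, ext4-formatted USB stick
/dev/sda1  /var/lib/influxdb  ext4  defaults,noatime  0  2
```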