This article tracks and documents the current status of the ZRAM feature in openHABian.
I’ll update it if there’s significant news.
What is ZRAM and how does it work?
The ZRAM config utility for swap, directories & logs is an OS enhancement for IoT / maker projects. It reduces wear on SD, NAND and eMMC blocks caused by write operations such as logging and persistence in openHAB. Compression keeps the precious memory footprint small, write-outs to disk are extremely infrequent, and working directories run at near-RAM speed; the compression ratio you get depends on your data and the compression algorithm you choose.
zram-config uses a table in /etc/ztab in which any combination and number of ZRAM drives can be defined. This branch uses an OverlayFS mount with ZRAM so that no syncFromDisk is needed on start. That should allow for quicker boots and larger directories, as no complete directory copy is required: everything is the lower mount of the OverlayFS.
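As an illustration, an /etc/ztab table might look like this (column layout as used by the zram-config tool; all sizes and paths here are example values, not a recommendation):

```text
# swap  alg  mem_limit  disk_size  swap_priority  page-cluster  swappiness
swap    lzo  250M       750M       75             0             80

# dir   alg  mem_limit  disk_size  target_dir          bind_dir
dir     lzo  150M       450M       /var/lib/openhab2   /openhab2.bind

# log   alg  mem_limit  disk_size  target_dir  bind_dir   oldlog_dir
log     lzo  50M        150M       /var/log    /log.bind  /opt/zram/oldlog
```

Each uncommented line creates one ZRAM device; commenting a line out removes that drive on the next start.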
In the openHABian implementation, /var/lib/openhab2/* is moved into ZRAM.
There’s a log in
On proper service shutdown (zram-config stop), zram'ed directories (OverlayFS) will be a) synced to the 'lower' filesystem (the same dirname on the / filesystem located on the default boot medium) and b) the 'upper' filesystem (the part in memory) will be lazily unmounted. Lazy unmounting means there can still be processes that have open files in such a directory. That's required to run (and keep running) system processes using dirs such as e.g.
Dos and Don'ts
Rebooting … the former recommendation not to use halt is obsolete because all current OSes seem to behave correctly. But nothing in life is 100% safe and the advice isn't wrong in principle, so I keep it in the 'known issues' section for reference.
You must not switch off your openHABian server unless you have properly shut it down. While this has been a requirement unrelated to ZRAM, essentially for as long as UNIX has existed, it's amazing how many people still do this today. Put the server on a UPS to safeguard it from power outages.
Don't use zram-config sync (even if it's available in your installed version) unless you know what you're doing, e.g. for test purposes. It is known not to work in a number of situations and may cause data loss.
Make use of a backup solution such as Amanda to have daily backups of your zram’ed directories.
Double-check your OH logging settings in /var/lib/openhab2/etc/org.ops4j.pax.logging.cfg and make sure they won't generate more logs than fit into that ZRAM directory as sized in /etc/ztab (and note that system logging also goes there!).
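To get a feeling for whether your data fits, you can compare the current size of a directory against the disk_size you granted it in /etc/ztab. A minimal sketch; it uses a scratch directory with a dummy file so it runs anywhere, but on a live box you would point DIR at /var/lib/openhab2 instead:

```shell
# Scratch dir with a 64 KB dummy file standing in for logs/persistence data.
DIR=$(mktemp -d)
dd if=/dev/zero of="$DIR/dummy.log" bs=1024 count=64 2>/dev/null
# du -sk reports used space in KB; compare that to the ZRAM drive's disk_size.
used_kb=$(du -sk "$DIR" | cut -f1)
echo "currently used: ${used_kb} KB"
rm -rf "$DIR"
```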
You shouldn't use ZRAM unless you're on an SBC (a small, "single board" computer such as a Raspberry Pi) that runs its OS off a medium based on flash memory such as an SD card or (!) USB stick. USB sticks are no better or safer than internal SD cards - another good reason for ZRAM.
Don't run ZRAM on machines that have less than 1 GB of RAM, such as an RPi Zero.
Don't run it on non-SBCs or on modified SBCs that run off "safe" media such as an SSD or HDD. Well, you can, but there's just a negligible benefit in doing so, so the cost/risk-to-benefit ratio is bad.
If (and only if) you have an SBC with more than 1 GB of RAM, such as a Raspberry Pi 4 with 2 or 4 GB or an Odroid C2 with 2 GB, you may increase the amount of RAM assigned to ZRAM in /etc/ztab, starting with
That'll help you stay clear of the 'cache' issue mentioned below. Don't worry, ZRAM will not occupy that full maximum amount unless it really needs to.
You may change the list and size of directories to put into ZRAM by changing entries in /etc/ztab. For example, if you consider it too risky to run all of /var/lib/openhab2 off ZRAM because you want, say, the jsondb not to be on there, you can put a comment sign in front of that line or replace it with 2 lines for
/cache. But be aware that on the downside this will increase the write load on your SD card and thus the likelihood of getting hit by wear-out.
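For illustration only (the exact subdirectories, sizes and bind names here are hypothetical, following the ztab dir-line layout), commenting out the full-directory line and zram'ing just two subdirectories could look like:

```text
# dir  alg  mem_limit  disk_size  target_dir              bind_dir
# dir  lzo  150M       450M       /var/lib/openhab2       /openhab2.bind
dir    lzo  100M       300M       /var/lib/openhab2/cache /oh2cache.bind
dir    lzo  50M        150M       /var/lib/openhab2/tmp   /oh2tmp.bind
```

This keeps jsondb (and everything else under /var/lib/openhab2) on disk, at the price of more SD writes.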
There are statistics available. For advanced stats, the file /sys/block/zram<id>/mm_stat has all the info you need. For the complete set of statistics, see the kernel docs.
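mm_stat is a single line of counters; per the kernel docs, its first two fields are the original and the compressed data size in bytes, so their quotient is your effective compression ratio. A sketch with a sample line so it runs anywhere; on a live system you would substitute `cat /sys/block/zram0/mm_stat` for the echo:

```shell
# Fields 1 and 2 of mm_stat are orig_data_size and compr_data_size (bytes);
# the sample values below are made up for demonstration.
sample="104857600 31457280 33554432 157286400 41943040 120 50 0"
echo "$sample" | awk '{ printf "compression ratio: %.2f\n", $1 / $2 }'
```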
You might run into misbehavior. Please let me know when you do, along with the relevant details: have you done anything lately that could have been a trigger? Edited a .items or .rules file? Added a thing or binding that might come with a problem? Changed the openHAB version (incl. milestones, snapshots)?
Yes, this is somewhat overdone, but some Unices (plural of "UNIX"), including openHABian/Raspbian jessie(?) and older, may still skip the steps to properly run down your machine's services, including ZRAM, and that can result in all changes since startup being lost.
Buster (the Raspbian release that the latest openHABian 1.5 image is built on) is known to work, but that only applies to boxes installed using this image or properly upgraded to buster, which does not happen automatically if your box was installed before the 1.5 buster-based image was available…
There are reports of older OS versions failing here, although appearances can be misleading: even a proper shutdown kills networking before eventually syncing ZRAM to disk, so you might believe it skips the shutdown scripts although it does not. You can only see that if you have a console attached.
By the way, 'shutdown -r' is the proper command to use.
If you want to be on the safe side: stop openHAB using sudo systemctl stop openhab2, wait for it to finish, and then manually stop ZRAM: sudo zram-config stop
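The whole safe-shutdown order as one sketch. The `run` indirection is only there so the sequence can be previewed without root; redefine it as `run() { "$@"; }` to actually execute the commands:

```shell
# Preview mode: print each command instead of executing it.
run() { echo "would run: $*"; }

run sudo systemctl stop openhab2   # 1. stop openHAB, wait for it to finish
run sudo zram-config stop          # 2. sync ZRAM (the OverlayFS upper) to disk
run sudo shutdown -h now           # 3. only now is it safe to power off
```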
You must use a proper procedure to shutdown/reboot your server so it syncs files to disk - otherwise they get lost. This concerns /var/lib/openhab2 and essentially applies to logs and persistence data.
Short of undiscovered bugs, the tool/script zram-config will take care of this sync when called with the stop parameter. Note there's a large number of alternative commands/options to reboot, and how they work differs within Raspbian, let alone among UNIX implementations. We neither know nor could document all of them. If you don't find "your" way of rebooting mentioned as known to work, then it possibly isn't safe.
After systemctl stop zram-config or /usr/local/bin/zram-config stop, a ZRAM device /dev/zramX might persist (X is a number). You can try removing it via zramctl -r /dev/zramX. If that fails, reboot to get rid of it. Eventually run systemctl disable openhab2 beforehand so openHAB doesn't start automatically after boot.
Running openHAB2 off zram'ed directories is what we want, as it is the whole point of this feature; but OH2 makes use of Karaf, and that comes with a 'cache'. Depending on your OH config, it generates 200+ MB of changed data that ZRAM needs to hold in RAM right from the start of OH. Unfortunately, the Karaf 'cache' size cannot be limited to a specified maximum amount of RAM, and there's no choice to cache some parts while dropping others (so the name 'cache' is somewhat misleading here).
The ZRAM-RAM size assignments are defined in /etc/ztab, and they're a tradeoff between what OH uses (usually ~500-700 MB on ARM, varying with OH config) and what Karaf needs for its cache (~200-250 MB, also depending on OH config). You'll quickly notice there's not much headroom on any SBC that typically has 1 GB of RAM, let alone when running an OH version that has a memory leak (although that may in fact not be a big problem w.r.t. media wear-out, as it will typically result in RAM pages being paged out only once).
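Back-of-the-envelope, using midpoints of the figures above (~600 MB for OH, ~225 MB for the Karaf cache; both are rough assumptions, not measurements) on a 1 GB board:

```shell
awk 'BEGIN {
  ram = 1024; oh = 600; karaf = 225   # MB; illustrative midpoints from the text
  printf "remaining headroom: %d MB\n", ram - oh - karaf
}'
```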
That means that if you start OH from an empty/cleared cache, it’ll use those 200+ MB in (Z)RAM right away, and any change in operations will be on top.
You might run into space issues if you need to clear the cache and OH regenerates it on start.
A (hopefully safe) method to work around such problems is to:
1. stop openHAB: sudo systemctl stop openhab2
2. delete the cache: sudo rm -rf /var/lib/openhab2/tmp/* /var/lib/openhab2/cache/* /opt/zram/openhab2.bind/tmp/* /opt/zram/openhab2.bind/cache/*
3. start openHAB2 and have it complete initialization of items, rules etc.: sudo systemctl start openhab2
4. shutdown openHAB2 again: sudo systemctl stop openhab2
5. stop ZRAM to make it sync to disk (notably the Karaf-generated files in /var/lib/openhab2/cache and …/tmp): sudo /usr/local/bin/zram-config stop
6. start ZRAM again: sudo /usr/local/bin/zram-config start
7. start openHAB2 again: sudo systemctl start openhab2
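The steps above bundled into one sketch script, again with a preview `run` so nothing is executed by accident; redefine it as `run() { "$@"; }` to run it for real:

```shell
run() { echo "would run: $*"; }   # preview mode: print instead of execute

run sudo systemctl stop openhab2
run sudo rm -rf /var/lib/openhab2/tmp/* /var/lib/openhab2/cache/* \
    /opt/zram/openhab2.bind/tmp/* /opt/zram/openhab2.bind/cache/*
run sudo systemctl start openhab2          # wait here until OH has initialized
run sudo systemctl stop openhab2
run sudo /usr/local/bin/zram-config stop   # sync the rebuilt cache to disk
run sudo /usr/local/bin/zram-config start
run sudo systemctl start openhab2
```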
Steps 4-7 should execute automatically if you (see above) properly reboot your machine (shutdown -r), but again, if you want to be on the safe side, do it manually.
zram-config sync is a feature to allow for "online sync" that is being worked on, but there's no estimate as to when it will be available. For now it is said to fail in a number of situations. I encourage everyone to help with testing, but be aware that you do so at your own risk of data loss. Drop me a note if you're volunteering.
You might see permission denied messages saying the system cannot write to disk in either of these directories or in files below them because they're read-only. This has not been reported by users but occasionally showed up during testing. There are a lot of potential reasons for this, and it is still under investigation which cause is the most likely/most frequent.
ZRAM never syncs to disk unless you stop it (doing so syncs and then unmounts the ZRAM directories).
Just like on disk, the amount of RAM that ZRAM has available must be larger than the amount of data to store there, but it's a little more complicated than that: the data is also compressed with a varying compression factor, so the exact amount of raw data that will fit is unknown. The amount of memory ZRAM has available is static (defined in /etc/ztab).
Now if something writes more changed data to zram'ed directories than fits in there, 'permission denied' is (most often) how the system protects itself against even more severe impacts.
You can try deleting zram'ed data, but usually you would need to (and the safer method clearly is to) shut down your system.
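When you suspect a full ZRAM device, the fill level is visible via `df` on the zram'ed mount. A parsing sketch with a sample df line so it runs anywhere; on a live system you would use the real output of `df -P /var/lib/openhab2` (the device name and mountpoint below are hypothetical):

```shell
# Sample 'df -P' output line for a made-up zram device; fields are
# device, 1K-blocks, used, available, use%, mountpoint.
sample="/dev/zram1  460800  442368  18432  96%  /opt/zram/zram1"
echo "$sample" | awk '{ print "use:", $5, "on", $6 }'
```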