/tmp to NAS

Shamefully it hasn’t worked doing that for /tmp, as some of the files that are needed won’t get created.

If this is Pi-related, some have moved to an SSD instead of the more fragile SD cards.

They said /tmp, not swap. Two totally different things.

Yes, I agree they are two different things, although I thought I would try again. I may just buy a SATA-to-USB adaptor as I have loads of spare SSDs.

Yes I did. Don’t. It creates more problems than staying local does. Don’t make yourself dependent on yet another box: BOTH have to work for your Smart Home to work.
Btw. your post makes us victims of the XY problem. So why do you want to do this?


Generally, both boxes are very stable, and I did know that would make it more dependent. Overall, RAM is an issue and I am not wanting to drop in a Pi 4 yet. I have a spare SSD and I have ordered a SATA-to-USB adaptor now. As such I will copy the SD card to the SSD, roll back the swap and tmpfs changes and leave them on normal disk. That way the wear levels won’t be an issue.
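
For reference, this is roughly the kind of clone I have in mind (an untested sketch; the device names /dev/mmcblk0 for the SD card and /dev/sda for the USB SSD are assumptions and need checking with lsblk first):

# Check which device is which before copying anything
lsblk

# Raw copy of the whole SD card onto the SSD (this wipes /dev/sda)
sudo dd if=/dev/mmcblk0 of=/dev/sda bs=4M status=progress conv=fsync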

Thanks for everyone’s input, but it seems that for a busy system on a Pi, an SSD is the only way to go.

Sorry, I thought my post was clear as to why I have /tmp on tmpfs and why I am now trying to move it.

No, and to be frank you still have not answered what the problem is you’re trying to solve.
Is it SD wearout? Or is it lack of RAM? Or still something else?
What HW are we talking about, and what applications use that much? 1 GB is enough unless you’re doing anything extraordinary.

You’re still stuck in XY IMHO. Please keep
How to ask a good question / Help Us Help You - Tutorials & Examples - openHAB Community
in mind in the future.

Definitely not at all. openHABian has ZRAM to mitigate SD wearout.
Simple, cheap and standard.
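
Once ZRAM is enabled, you can verify it is active with standard tools (just a quick check, nothing openHABian-specific assumed):

# List active zram devices with their compression algorithm and sizes
zramctl

# The zram swap device should also show up here
cat /proc/swaps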


Again, why? tmpfs is in RAM, not on the SD card.

https://www.man7.org/linux/man-pages/man5/tmpfs.5.html
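
For anyone following along, a throwaway example of what a tmpfs mount looks like (the /mnt/tmptest path and 100m size are just illustrations):

# Mount a 100 MB RAM-backed filesystem at a test location
sudo mkdir -p /mnt/tmptest
sudo mount -t tmpfs -o size=100m,nosuid,nodev tmpfs /mnt/tmptest

# It shows up as tmpfs, entirely in RAM
df -h /mnt/tmptest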

Try again. What is your ultimate goal? What results are you trying to achieve? You may not have chosen optimal solutions.

When I first installed openHABian V2.2 there were recommendations to put /tmp on tmpfs, and yes, I know that is RAM, to ease writes on the SD card. However, I am starting to run low on RAM, and due to the SD card write cycles I thought there was little else to do but go back to the SD card or NFS.

So apart from explaining that the move to tmpfs was based on a recommendation, I thought my first post was clear about what I wanted to move and why.

Now, if you’re saying that the advice to move to tmpfs is wrong and that SD card writes are no longer an issue, then I can reverse those changes out.

OK, now we have a picture of the situation you are trying to improve. Thank you.

Others are more experienced with Pi issues than I. My big experience is more UNIX / Linux related.

No problem. I have read my original post multiple times, and in my head I thought I had explained it in enough detail, but obviously not.

To expand further, the issue with /tmp on tmpfs is that updates download to /tmp. On a 1 GB RAM system there is just not enough room, so updates fail.

In terms of the writes, yes, ZRAM is a significant change; however, I am not in the know enough to say whether it was there when I deployed. Looking at the post date, I would suggest not.

So Markus, is it the case that now that ZRAM can be used there are no more wear issues?

There were no separate recommendations I know of. Btw, you confuse openHABian with openHAB; there’s no such thing as openHABian 2.2. And if you meant to say “by the time openHAB 2.2 was current”, then that’s 2+ years old, so no matter what it was I would no longer consider it a valid statement.

OS updates do not download to /tmp. There’s almost nothing in openHABian that would cause downloads (unless you sit in front of the Pi console, which you shouldn’t be doing), so I can’t follow you there. The few occasions that do are far away from filling the standard /tmp, which is 100MB.
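
If you want to see what is actually sitting in /tmp, a quick check (just standard tooling, nothing special assumed):

# Current /tmp usage and the biggest items in it
df -h /tmp
sudo du -sh /tmp/* 2>/dev/null | sort -h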

Yes, read the link. I’m just about to make it the default for new installations.


OK, I will install ZRAM and change it.

I was doing system updates via openhabian-config and it downloaded Java to /tmp. This is what made me review the tmpfs situation.

Yeah, but even Java (probably the largest of the downloads) isn’t THAT huge. Your problem probably was that you were using /tmp for other stuff that doesn’t belong there (logs, presumably).

It needed about 80MB by the time it had downloaded and wanted to decompress. With what I was running, that was just too much space.

Logs can be a real problem on Linux systems if not periodically purged. They usually live under /var/log/.
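
As a sketch of how to check and trim them (the 50M journal limit below is just an example value, not a recommendation):

# See what is eating space under /var/log
sudo du -sh /var/log/* | sort -h

# If the systemd journal is the culprit, shrink it to a chosen size
sudo journalctl --vacuum-size=50M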

Logs are always a problem but also a godsend when you need them.

So I have installed ZRAM with the default config. To save any confusion, this is where the config now stands:

df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root        15G  3.2G   11G  23% /
devtmpfs        484M     0  484M   0% /dev
tmpfs           488M     0  488M   0% /dev/shm
tmpfs           488M  6.6M  482M   2% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           488M     0  488M   0% /sys/fs/cgroup
overlay1        575M  2.7M  530M   1% /var/lib/openhab2
tmpfs           100M  248K  100M   1% /tmp
/dev/mmcblk0p1   42M   23M   19M  55% /boot
/dev/zram1      575M  2.7M  530M   1% /opt/zram/zram1
/dev/zram2      469M   18M  416M   5% /opt/zram/zram2
overlay2        469M   18M  416M   5% /var/log
tmpfs            98M     0   98M   0% /run/user/1000

/etc/fstab:

proc            /proc           proc    defaults          0       0
PARTUUID=529e9566-01  /boot           vfat    defaults          0       2
PARTUUID=529e9566-02  /               ext4    defaults,noatime  0       1
# a swapfile is not a swap partition, no line here
#   use  dphys-swapfile swap[on|off]  for that
      tmpfs           /tmp        tmpfs   nosuid,nodev,size=100m        0       0
#     tmpfs           /var/log/openhab2        tmpfs   nosuid,nodev     0       0
#     tmpfs           /var/tmp        tmpfs   nosuid,nodev              0       0

/usr/share/openhab2          /srv/openhab2-sys           none bind 0 0
/etc/openhab2                /srv/openhab2-conf          none bind 0 0
/var/lib/openhab2            /srv/openhab2-userdata      none bind 0 0
/var/log/openhab2            /srv/openhab2-logs          none bind 0 0
/usr/share/openhab2/addons   /srv/openhab2-addons        none bind 0 0

/etc/ztab:

# swap  alg     mem_limit       disk_size       swap_priority   page-cluster    swappiness
swap    lz4     200M            600M            75              0               90

# dir   alg     mem_limit       disk_size       target_dir              bind_dir
dir     lz4     200M            600M            /var/lib/openhab2       /openhab2.bind

# log   alg     mem_limit       disk_size       target_dir              bind_dir                oldlog_dir
log     lzo     150M            500M            /var/log                /log.bind

PLEASE NOTE THIS IS NOT ME SAYING THIS IS RIGHT.

It didn’t ring a bell right away, but now I know why I was irritated … just checked, the latest Raspbian/openHABian does not mount /tmp at all.
There’s not much writing to /tmp really, so there is no big reason to make that a ramfs/tmpfs/zram/whatever.

Did you actually set that up manually, or why is it 100MB on your system?
If you simply unmount it, you will have all the space of / back.
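
Roughly like this (a sketch, assuming the tmpfs line for /tmp is the one in your fstab above):

# Comment out the "tmpfs /tmp ..." line in /etc/fstab, then either reboot or:
sudo umount /tmp
# (umount will complain if something still has files open in /tmp; a reboot avoids that)

# /tmp is now a plain directory on / again
df -h /tmp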

On ZRAM, there’s an upcoming change to replace

dir     lz4     200M            600M            /var/lib/openhab2       /openhab2.bind

by

dir     lz4     150M            500M            /var/lib/openhab2/persistence   /persistence.bind

… you can edit that manually and reboot to activate it.

You can also experiment with adding a ztab line for /tmp if you want.
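
Something along these lines, following the same ztab format as above (the sizes and the /tmp.bind name are just guesses, not a tested recommendation):

# dir   alg     mem_limit       disk_size       target_dir      bind_dir
dir     lz4     50M             150M            /tmp            /tmp.bind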

The /tmp change was in the guide I followed 2 years ago. To be honest, I can’t remember which tmp had lots of writes, /var/tmp or /tmp, which is why I implemented it. I had Bluetooth scanning on and it was smashing the SD card back then, hence putting that load onto tmpfs.

The 100MB limit was something I put on today.