Moving /var/log to NAS - How?

I have read a lot about SD-card corruption here and elsewhere.

Unfortunately I did not succeed in moving /var/log to my NAS (moving my jdbc-mariadb persistence there worked well).

This is what I did (found it somewhere):
Created a user on my NAS (openhab)
Created a shared NFS folder – e.g. openhab
Created a folder (mount point): sudo mkdir -p /mnt/nfs/var

Mounted the share: sudo mount -t nfs -o soft <nas-ip>:/openhab /mnt/nfs/var
Checked whether it works with write access: touch /mnt/nfs/var/test
Enabled rpcbind during boot: sudo update-rc.d rpcbind enable
Added to fstab: <nas-ip>:/openhab /mnt/nfs/var nfs rw 0 0
Moved the logs to the mounted folder: sudo mv /var/log /mnt/nfs/var/
Created a symbolic link: sudo ln -s /mnt/nfs/var/log /var/log
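For what it's worth, a common pitfall with this kind of setup is that the system may try to mount the NFS share before the network is up. A hedged sketch of an fstab line with network-aware options; `<nas-ip>` and the share name `openhab` stand in for your own values:

```
# /etc/fstab (sketch): wait for the network, and don't hang the boot if the NAS is down
<nas-ip>:/openhab  /mnt/nfs/var  nfs  rw,_netdev,nofail,soft  0  0
```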

Unfortunately, my RPi was not accessible through SSH anymore after a reboot.
I know that moving the logs at runtime might cause problems, but I would have guessed that only messes up the log files!?
Any suggestions?

I don’t know the solution to your problem.
But consider only moving /var/log/openhab to your NAS.

That way the critical log files will still be on the Raspi (and perhaps this even solves your problem), but the constantly written openHAB log files are stored on your NAS.
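A minimal sketch of that idea as an fstab entry; the share name `openhab-logs` is an assumption, and the mount point has to exist first (`sudo mkdir -p /var/log/openhab`):

```
# /etc/fstab (sketch): put only the openHAB logs on the NAS
<nas-ip>:/openhab-logs  /var/log/openhab  nfs  rw,_netdev,nofail,soft  0  0
```

With `nofail`, the Pi still boots even if the NAS is unreachable.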

You seem to have a NAS and NFS ready. Have you considered moving the whole filesystem to the NAS? I haven’t had any issues since I did this, and the performance stays pretty much the same if your SD card IO usage isn’t at 100 % all the time. If the Raspberry should ever break, you swap the hardware and are ready to go, because your files are still on the NAS, which has much, much better data consistency than a single SD card :wink:

I wrote this blog article specifically for openHAB but of course it works with every Linux-based computer (my ambilight is also running from there). I also have InfluxDB and MapDB persistence running on that low-end NAS and performance is great.


That’s brilliant - I didn’t think about it because I thought the RPi would not be able to boot from a NAS.
(I have obviously been using Windows for too long already) :wink:
I will check this option out.

This is correct. The Pi itself can’t do it. But the Linux kernel can :slight_smile: This means you still have to stick in an SD card to put the kernel command line on it (it’s really just an instruction to boot from the NAS and where the share is on that NAS). But nothing will ever write to the SD card so it won’t get corrupted :blush:

The Raspberry Pi 3 can do it completely without an SD card. I think it uses U-Boot (the bootloader) and you can write into that somehow. But I don’t have a Raspi 3, and the SD card method works everywhere, so I just did that :smile:

Alright - Thanks.

Because I have an RPi 3, I will consider checking out that option. :slight_smile:

booting from NAS (with PXE) is a pain in the ass - I have not succeeded so far… :frowning:

I would love to help out with that, but unfortunately I have no Raspberry Pi 3. Maybe try the SD-Card way I described? At least to be sure your NFS is configured correctly and booting from it works.

Later on you could advance to PXE again.

Maybe that’s the better approach :slight_smile:

I just tried the SD-card way; however, I do not understand how to configure cmdline.txt. Which additional parts are needed, and how do they relate to the directories created on the Raspi and the NAS in your example?

In my example, I created a share on my NAS with the name “rootfs”. Within this share, I create a folder for each Raspberry I use; in the example, this was raspberrypi. Sounds like you succeeded in copying all the data over to the NFS share.

Now to the cmdline.txt:

You probably have a cmdline looking like this:

dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=/dev/mmcblk0p2 elevator=deadline

The root option has to be changed to root=/dev/nfs so that the Pi knows it has to boot from an NFS share. This is always /dev/nfs. Always.

You then have to add nfsroot=<nas-ip>:<path-on-nas> and ip=dhcp.

nfsroot= is the IP of your NAS, a colon, and the full path to the folder you copied your SD-card data into.

ip=dhcp instructs the kernel that it needs networking before it can boot.
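Putting it together, the whole cmdline.txt would then look something like this, all on one line; `<nas-ip>` is a placeholder, and the path assumes the share layout from my example (share rootfs, folder raspberrypi):

```
dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=/dev/nfs nfsroot=<nas-ip>:/rootfs/raspberrypi ip=dhcp elevator=deadline rootwait
```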

I think it was a bit misleading to copy the card data to raspberrypi but then use openhab in my production example. I will change this; thank you for your feedback.

Did this help?

Thanks for the super fast reply, will inform if and when it worked

Tried it, but after restarting the Pi I can’t get a connection to it??

I used this as my cmdline.txt:

dwc_otg.lpm_enable=0 console=tty1 root=/dev/nfs nfsroot=<nas-ip>:/mnt/HD/HD_a2/openHAB ip=dhcp rootfstype=ext4 elevator=deadline rootwait

with <nas-ip> being the IP of the NAS,
and /mnt/HD/HD_a2/openHAB being the path to the folder on the NAS (yes, all the files have been copied there successfully).
The part “rootfstype=ext4 elevator=deadline rootwait” was in the original file.

What is wrong??

I changed the cmdline.txt back to the original and rebooted; that way I can access it again via SSH.

I can’t find the error.

Sorry, I didn’t see your answer. Feel free to mention me in your upcoming answers, so I get an email.

This could be the problem: the rootfs is not an ext4 mount, but an NFS mount.

Please ensure that the NFS server on your NAS works by connecting from another Linux machine, or from your SD-card-booted Raspberry Pi that openHAB is running on right now.

Then, try the following cmdline:

dwc_otg.lpm_enable=0 console=tty1 root=/dev/nfs nfsroot=<nas-ip>:/mnt/HD/HD_a2/openHAB ip=dhcp elevator=deadline
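Since every typo here costs another blind reboot, here is a hypothetical little helper to sanity-check a cmdline before trying it; the function and its checks are my own sketch, not part of any official tooling:

```shell
#!/bin/sh
# Sketch: sanity-check a kernel cmdline string for NFS-root booting.
# Prints "ok" if it looks plausible, otherwise prints what seems wrong.
check_cmdline() {
    line="$1"
    ok=1
    case "$line" in *root=/dev/nfs*)   ;; *) echo "missing root=/dev/nfs"; ok=0 ;; esac
    case "$line" in *nfsroot=*:*)      ;; *) echo "missing nfsroot=<ip>:<path>"; ok=0 ;; esac
    case "$line" in *ip=dhcp*)         ;; *) echo "missing ip=dhcp"; ok=0 ;; esac
    # rootfstype=ext4 does not belong here: the root filesystem is NFS, not ext4
    case "$line" in *rootfstype=ext4*) echo "remove rootfstype=ext4"; ok=0 ;; esac
    if [ "$ok" -eq 1 ]; then echo "ok"; fi
}

# Example with a made-up NAS address:
check_cmdline "dwc_otg.lpm_enable=0 console=tty1 root=/dev/nfs nfsroot=192.168.1.2:/mnt/HD/HD_a2/openHAB ip=dhcp elevator=deadline"
# prints: ok
```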

You will also get an error message on the screen if it fails. Maybe you can carry the Raspberry Pi to a screen and try it with a screen connected. You can also take the SD card out and put it into another (same model) Raspberry Pi that is attached to a screen (like when you have a Pi workstation or media center somewhere in your home).

Let me know how it worked! :slight_smile:

Thanks for the answer. I haven’t found the time to continue yet; I’ll report the result in any case.

Thanks again for the feedback; however, I’ll move towards another solution.
My prime interest was to have a storage device that is more suitable for logging than the SD card.
Doing that logging on the NAS (in my case for an rrd4j database that writes every minute) would keep the NAS actively running 24/7. I think an external SSD would be a better solution for that.

You are right, the NAS would be actively writing all the time. I thought it was running anyway, as NAS are usually always-on appliances.

You can also think about using an external USB stick just for your logging purposes (as opposed to the complete rootfs). The most reliable solution would definitely be a rotating magnetic disc, i.e. a hard disk. They are much more durable than the flash memory in USB sticks or SSDs. The small 2.5" ones in USB enclosures can be powered from a single USB port and are usually so silent you cannot even hear them.
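A hedged sketch of wiring such a disk in via /etc/fstab, assuming a single ext4 partition; the UUID is a placeholder you would read from `sudo blkid`:

```
# /etc/fstab (sketch): external 2.5" disk just for the openHAB logs
UUID=<your-disk-uuid>  /var/log/openhab  ext4  defaults,noatime,nofail  0  2
```

`noatime` avoids extra writes for access-time bookkeeping, and `nofail` lets the Pi boot even when the disk is unplugged.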

I recommend adding max_usb_current=1 to your /boot/config.txt to increase the maximum USB current to 1.2 amperes. This will ensure your Raspberry Pi won’t reset when powering up the hard disk.

Good luck! :slight_smile:


That’s what I did a few weeks ago. Moved from a USB stick to an old 512 GB HDD I had lying around. Still using the 16 GB partitioning. Effective costs were 7.50 € for that cable.

I don’t need the extra dependency that comes with putting parts of the filesystem on another host. (Although a media server with plenty of space is running 24/7.)

Hi, I believe I followed your nice guide but maybe failed somewhere. Do I need to do anything with a DHCP server, or can that still be my firewall/router, where my Pi gets a static IP? I could have forgotten to make the directory a parent folder; could that be what’s messing things up?

I have actually not tried any further to move the logs to my NAS.
However, I have moved my persistence to my NAS and this works fine.

In case it might be interesting for you: