Moving /var/log to NAS - How?

Maybe that’s the better approach :slight_smile:

I just tried the SD-card way; however, I do not understand how to configure cmdline.txt. Which additional parts are needed, and how do they relate to the directories created on the Raspberry Pi and the NAS in your example?

In my example, I created a share on my NAS with the name “rootfs”. Within this share, I created a folder for each Raspberry Pi I use; in the example, this was raspberrypi. Sounds like you succeeded in copying all the data over to the NFS share.

Now to the cmdline.txt:

You probably have a cmdline looking like this:

dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=/dev/mmcblk0p2 elevator=deadline

The root option has to be changed to root=/dev/nfs so that the Pi knows it has to boot from an NFS share. This is always /dev/nfs. Always.

You then have to add nfsroot=192.168.21.1:/nfs/rootfs/raspberrypi ip=dhcp

Where nfsroot is the IP of your NAS, a colon, and the full path to the folder you copied your SD-card data into.

and ip=dhcp instructs the kernel that it needs networking before it can boot.
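Putting the pieces together, here is a minimal sketch of that edit done with sed, working on a local copy of the file (on the Pi the real file is /boot/cmdline.txt and you would edit it as root; the NAS IP and path are just the ones from my example, substitute your own):

```shell
# Start from a typical SD-card cmdline -- everything must stay on ONE line.
# (Working on a local copy here; on the Pi the file is /boot/cmdline.txt.)
printf '%s\n' 'dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=/dev/mmcblk0p2 elevator=deadline' > cmdline.txt

# 1) Swap the SD-card root device for the NFS pseudo-device.
# 2) Append nfsroot (NAS IP, colon, full path of the copied rootfs) and ip=dhcp.
sed -i \
  -e 's|root=/dev/mmcblk0p2|root=/dev/nfs|' \
  -e 's|$| nfsroot=192.168.21.1:/nfs/rootfs/raspberrypi ip=dhcp|' \
  cmdline.txt

cat cmdline.txt
```

The result is still a single line, with root=/dev/nfs and the two NFS options appended at the end.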

I think it was a bit misleading to copy the card data to raspberrypi but then use openhab in my production example. I will change this and thank you for your feedback.

Did this help?

Thanks for the super fast reply. I’ll report back once it works.

[Edit]
Tried it, but after restarting the Pi I can’t get a connection to it anymore.

I used this as my cmdline.txt:

dwc_otg.lpm_enable=0 console=tty1 root=/dev/nfs nfsroot=192.168.178.35:/mnt/HD/HD_a2/openHAB ip=dhcp rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait

with:
192.168.178.35 being the IP of the NAS
/mnt/HD/HD_a2/openHAB being the path to the folder on the NAS (Yes, all the files have been copied there successfully)
the part “rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait” was in the original file.

What is wrong??

I changed cmdline.txt back to the original and rebooted; that way I can access the Pi again via SSH.

I can’t find the error.

Sorry, I didn’t see your answer. Feel free to mention me in your upcoming answers, so I get an email.

The rootfstype=ext4 part could be the problem: the rootfs is not an ext4 mount, but an NFS mount.

Please ensure that the NFS server on your NAS works by connecting from another Linux machine, or from the SD-card-booted Raspberry Pi that openHAB is running on right now.
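For example, something along these lines from any other Linux machine (the IP and path are the ones from this thread; these commands obviously need the NAS to be reachable, so take them as a sketch):

```shell
# Ask the NAS which paths it exports.
showmount -e 192.168.178.35

# Try mounting the export read-only somewhere temporary and look inside.
sudo mkdir -p /mnt/nfstest
sudo mount -t nfs -o ro 192.168.178.35:/mnt/HD/HD_a2/openHAB /mnt/nfstest
ls /mnt/nfstest        # should show the copied rootfs: bin, etc, home, ...
sudo umount /mnt/nfstest
```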

Then, try the following cmdline:

dwc_otg.lpm_enable=0 console=tty1 root=/dev/nfs nfsroot=192.168.178.35:/mnt/HD/HD_a2/openHAB ip=dhcp elevator=deadline

You will also get an error message on the screen if it fails. Maybe you can carry the Raspberry Pi to a screen and try again with the display connected. You can also take the SD card out and put it into another (same model) Raspberry Pi that is attached to a screen (for example, if you have a Pi workstation or media center somewhere in your home).

Let me know how it worked! :slight_smile:

Thanks for the answer. I haven’t found the time to continue yet; I’ll report the result in any case.

@gersilex
Thanks again for the feedback, however I’ll move towards another solution.
My prime interest was to have a storage device that is more suitable for logging than the SD card.
Doing that logging on the NAS, in my case for an rrd4j database that logs every minute, would keep the NAS actively running 24/7. I think an external SSD would be a better solution for that.

You are right. The NAS would be actively writing all the time. I thought it was running anyway, as NAS devices are usually always-on appliances.

You can also think about using an external USB stick just for your logging purposes (as opposed to the complete rootfs). The most reliable solution would definitely be a rotating magnetic disk, i.e. a hard disk; they are much more durable than the flash memory in USB sticks or SSDs. The small ones in 2.5" USB enclosures can be powered from a single USB port and are usually so silent you cannot even hear them.

I recommend adding max_usb_current=1 to your /boot/config.txt to increase the maximum USB current to 1.2 amperes. This will ensure your Raspberry Pi won’t reset when powering up the hard disk.
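That is a one-line addition (a config fragment; on the Pi it goes into /boot/config.txt and takes effect after a reboot):

```
# /boot/config.txt
# Raise the total USB current limit from 600 mA to 1.2 A so a
# spinning-up 2.5" disk does not brown-out the Pi. (If I remember
# correctly, the Pi 3 already uses the higher limit by default.)
max_usb_current=1
```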

Good luck! :slight_smile:


That’s what I did a few weeks ago: moved from a USB stick to an old 512 GB HDD I had lying around. Still using the 16 GB partitioning. The effective cost was 7.50 € for the cable.

I don’t need the extra dependency that comes with putting parts of the filesystem on another host. (Although a media server with plenty of space is running 24/7 here.)

Hi, I believe I followed your nice guide but maybe failed somewhere. Do I need to do anything with a DHCP server, or can that still be handled by my firewall/router, where my Pi gets a static IP? I could also have forgotten to create a parent folder; could that be what’s messing things up?

I have actually not tried any further to move the logs to my NAS.
However, I have moved my persistence to my NAS and this works fine.

In case it might be interesting for you:

I’m really trying to make this work because I have several thin clients running squeezelite around the house, and I have always had the idea of booting them from the network (they are running Lubuntu 16 now). The clients only have a 1 GB disk in them and currently boot via USB, using the internal disk for the swap file. One little question: since my boot folder is empty, do I need to mount the boot partition on my Pi before I run rsync?

I’d better answer myself, since I found what was causing my problem: on boot, everything looked good except rootpath=, which was empty. After some googling I found that adding ,tcp,v3 to the nfsroot option should work, and it did. My Saturday couldn’t start in a better way.
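In case it helps the next reader: the options are appended directly to the export path, comma-separated and without spaces. With the NAS IP and path from earlier in this thread, the relevant part of cmdline.txt would look like:

```
root=/dev/nfs nfsroot=192.168.178.35:/mnt/HD/HD_a2/openHAB,tcp,v3 ip=dhcp
```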

I also tried to follow @gersilex 's guide, but without success.
Everything went fine (created the NFS share, mounted it, rsynced the image etc.) but the Pi (3) won’t boot.

Like @mackemot, I also found some articles mentioning the tcp and v3 options, but they didn’t help in my case.

The problem is there is absolutely no debug info whatsoever provided, so I have no idea what to change. Also, the nfsrootdebug option does absolutely nothing.

The only thing I did differently compared to the guide is that I used the openHABian image instead of Raspbian Lite. But it’s supposed to be based on the latter, so I didn’t think this could be the problem. I might try with Raspbian at some point though.

I finally managed to get it to boot by changing v3 to v4 in the options! I thought that v4 was the default and you sometimes had to explicitly set it to v3 in case v4 didn’t work. But I hadn’t imagined I would have to explicitly tell it to use version 4!

In any case, I still don’t have a usable system though, because file ownership/permissions seem to be all mangled up. I can’t even run sudo because the user doesn’t have access to the sudoers file. I did the whole process again, this time using Raspbian instead of openHABian, but with the same result.

I could try to fix permissions manually by mounting the nfs partition to another machine, but I’d like to understand why I end up with a broken system in the first place. I’m afraid that trying to troubleshoot ownerships and permissions manually will not lead to a system that’s 100% OK.

If you used rsync to copy the files, be sure to use the --numeric-ids option, so that rsync knows the two systems may not have the same user and group names and preserves the numeric IDs instead (0 for root, 1000 for the pi user, and so on).

I’ve copy-pasted the command from your guide, which already includes --numeric-ids. So that’s not the cause. In any case I tried it again more carefully, did not work.

This must be somehow related to permissions, but I don’t know exactly how. I’ve seen other users say they had to chmod 755 the root folder (/) in the Raspberry, but this didn’t work for me. It was already 755 to begin with.

Maybe it’s because I’m using openmediavault on the NAS and it has a complicated way of handling permissions. The directory that contains the root partition is exported in /export/rootfs-openhabpi and that directory’s permissions are drwxr-xr-x and owned by root:root. I believe this is correct.

However, the files are physically located in the following directory:
/srv/dev-disk-by-label-300NAS25/nfs/rootfs/openhabpi
This has the same permissions as above, but its parent directory is owned by root:users and has drwx--S--- permissions, as does the one above it. Could this be the problem? But I don’t understand how the Pi could “see” anything above its own root directory.

A couple of weird things I’ve noticed, in case they bring any ideas to someone:
a) I disabled root password on the Pi and logged in as root. But when trying to run “sudo bash” as root, I got the same error. So even the root user gets the “sudo: unable to stat /etc/sudoers: Permission denied” error!
b) When logged in as “pi”, I can open and read the /etc/sudoers file, even though its ownership is set to root:root and its permissions to r--r-----. How is this possible? “pi” is not in the root group as far as I can see.

OK, after countless hours, I finally managed to solve this.

The culprit was indeed openmediavault, which, as I said, uses a complicated permissions scheme based on ACLs. So even though the filesystem permissions looked correct when “ls”-ing, the ACLs had their own opinion.

Although I tried adding the no_acl option to the NFS export, that didn’t help. Neither did nfsrootdebug, nor did the system boot without explicitly specifying v4. In general, everything related to what I was trying to do seemed to work against me by not working.

Unfortunately, it’s not possible to disable ACLs in openmediavault, so I had to remove them manually from the container folder using “setfacl -bR .” in a terminal and then rsync the root partition again. I hope openmediavault doesn’t decide to re-apply the ACLs at some point, since, as I said, they can’t be disabled.
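Concretely, this is roughly what I ran on the NAS (the path is specific to my openmediavault setup, so take it as a sketch and adjust to yours):

```shell
# On the NAS, in the directory that physically holds the Pi's rootfs.
cd /srv/dev-disk-by-label-300NAS25/nfs/rootfs/openhabpi

# -b removes all ACL entries (including default ACLs), -R recurses;
# the classic owner/group/other permission bits are left untouched.
setfacl -bR .

# getfacl should now list only the plain user/group/other entries.
getfacl . | head
```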

Thanks very much for your help @gersilex, not just in debugging but also in writing this very useful guide in the first place.


This was an interesting journey. Thanks for reporting back in such great detail. I’m sure that people will find this useful!