I wanted to ask for some hints about what could be causing my issue. First, a bit about the installation, as it's somewhat complex.
I have a Synology NAS that exports the openHAB data (configuration, logs and userdata) via NFS.
I have a separate Intel NUC box running Ubuntu, where I run openHAB and a few other services in Docker containers.
I mount the NFS export from the NAS on the NUC and expose it to the Docker container via symbolic links, as described in the documentation.
Everything runs smoothly, with one exception: when I edit the files directly through a CIFS share on the NAS in VSCode, the runtime does not recognize on save that the files have changed, and I have to SSH into the NUC and do a `touch "configuration file"` for the runtime to "see" the change.
Do you have any clues whether this could be fixed with some NFS flags, or is it something else?
Thanks in advance,
First of all, it's important to get the terminology correct. NFS is a completely separate protocol from CIFS. If you have options that apply to NFS, they won't work for CIFS, because the two are not the same.
As for your question, openHAB uses file system events to know when to reload the files. I'm not positive, but reasonably sure, that those events are not generated on the machine where the files are merely mounted, for either NFS or CIFS.
If you want this to work, you must edit the files on the NUC, not the NAS.
Yeah. I agree with your terminology correction.
I was wondering whether NFS has any flag that would result in these file system events being transferred across the network, i.e. the NAS sending the events via the NFS protocol to the mounting systems (in this case the NUC).
Anyway… what I did was start a Samba server on the NUC and share the /conf folders there, and now I'm accessing them directly, as you proposed. This works well…
My only concern is that this may mess up the file permissions on the NAS, but I'll monitor the behaviour.
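In case it helps anyone, the share on the NUC looks roughly like this (a sketch, not my exact config; the share name, path and account are examples). Forcing all writes to a single account is one way to keep the ownership on the underlying NFS mount consistent:

```ini
# /etc/samba/smb.conf fragment (example values)
[openhab-conf]
    # the NFS mount point on the NUC that backs the container's conf
    path = /srv/openhab/conf
    read only = no
    # write all files as one fixed account so the ownership on the
    # underlying NFS mount stays consistent
    force user = openhab
    force group = openhab
```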