Migration to Rancher 2.5/RKE

I’m working through a migration from Rancher 1 to Rancher 2. It turns out there are some major changes to how docker containers are deployed in the Rancher/Kubernetes environment, and very little of my original Rancher 1 setup carried over directly. Having said that, I got it working and want to share the details.

My new OpenHAB stack is set up on a Ryzen 9 (yes, overkill, but boy is it fast) with Ubuntu 20.04 as the base OS. On top of Ubuntu I’m running Kubernetes in a single cluster managed by Rancher v2.4.5. The cluster was deployed with RKE v1.1.3. As with my Rancher 1 setup, I use persistent storage on my FreeNAS file server (same network, different box). The FreeNAS box is also freshly upgraded to v11.3-U3.2.
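For anyone reproducing this, a single-node RKE cluster file is small. Here’s a minimal sketch, not my actual cluster.yml: the node address, SSH user, and key path are placeholders, and RKE will choose its default Kubernetes version when none is specified.

```yaml
# Minimal single-node RKE cluster.yml sketch (placeholder address, user, key path)
nodes:
  - address: 192.168.1.50               # the Ubuntu 20.04 host
    user: ubuntu                        # SSH user that can run docker
    role: [controlplane, worker, etcd]  # everything on one box
    ssh_key_path: ~/.ssh/id_rsa

services:
  etcd:
    snapshot: true                      # periodic etcd snapshots
    creation: 6h
    retention: 24h
```

Running `rke up` against a file like this builds the cluster; Rancher is then installed on top of it (or pointed at it) in the usual way.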

Permissions

By far the biggest issue I’ve had is setting up persistent storage for the entire stack. When I migrated to Rancher 1, I hacked persistence together until it worked, but never went back over the effort to make sure it made sense, was repeatable, and wasn’t prone to simply breaking. The most significant persistence issue relates to how the OpenHAB docker image is built; the image has some hurdles when the container initializes. This time, I focused on leaving the docker image as-is and using the Rancher 2 interface to fix everything. Although I don’t understand all the exact details, the Rancher 2 volume mounts obviously use the Rancher 2 UID/GID, which isn’t consistent with Rancher 1. Unlike Rancher 1, I wasn’t able to fix this by setting all the users the same or by manually applying chown on the server side; I had to actually fix the permissions rather than forcing users/groups.

So, here’s how I solved it. [* I know I have some internal network security issues with how I set this up, but those are within my risk tolerance.] On the FreeNAS box, when I set up the NFS service, I set Mapall User to root (Sharing, Unix Shares (NFS), right-click the share for details, click Advanced Mode, set Mapall User to root, save). This allowed the OpenHAB container to run its scripts as intended.
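Behind that UI setting, FreeNAS generates a FreeBSD exports entry. Roughly what it looks like, with a placeholder dataset path and network; the important part is -mapall, which maps every incoming user (not just root) to the named account:

```
/mnt/tank/openhab -alldirs -mapall=root -network 192.168.1.0/24
```

Mapall is deliberately heavy-handed compared to Maproot (which only remaps root), so every client on that network effectively acts as root on the share. That is exactly the security trade-off noted above.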

From what I understand, you can have everything set up correctly in both the docker container and on the NFS share, but there is a permissions safety check: when the container executes something as root (or as a user inside the container), those permissions are enforced inside the container under strict rules. Once the NFS share is mounted, though, the container no longer has control over permissions; it passes the requests through to the FreeNAS box. By default, the NFS server remaps those requests to nobody (which has no power to do anything) unless the requester is the actual owner of the item being affected.

As a result, anything placed on the mounted volumes by anyone other than openhab cannot be changed by openhab. This includes the startup scripts and anything added to the persistent storage from outside the docker container (by another user). By remapping the incoming root user from the docker container to the root user of the NAS for those attached volumes, the behavior is identical to having the directory physically inside the docker container.
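For context on how the share is attached on the Kubernetes side, here is a minimal sketch of a statically provisioned NFS PersistentVolume and its matching claim. The server address, dataset path, namespace, and size are placeholders, not my real values.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: openhab-conf
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.10              # FreeNAS box (placeholder address)
    path: /mnt/tank/openhab/conf      # dataset/subdirectory on the NAS (placeholder)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: openhab-conf
  namespace: openhab
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""                # bind to the static PV above, not a StorageClass
  volumeName: openhab-conf
  resources:
    requests:
      storage: 5Gi
```

The claim is then mounted into the OpenHAB deployment like any other volume.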

Accessing Persistence Outside the OpenHAB Docker Container

One major change I had to make is how I access the NAS from my Windows 10 computer. Historically, I would expose the OpenHAB persistence with an SMB share for the Windows 10 machine; that didn’t/wouldn’t/couldn’t work here. I ended up enabling the NFS client on Windows 10, setting the anonymous uid and gid to 0 (root), and mounting the NAS as NFS rather than SMB. Although this works, there is an occasional “locking” issue whereby the Windows 10 box will not allow file changes (rw) because another process has locked the file or directory. When I SSH into FreeNAS and check, the files are available. I’ll need to figure this out.
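For reference, this is roughly the Windows side of that setup, assuming the Services for NFS client feature is already enabled. The server name, export path, and drive letter are placeholders, and the anonymous uid/gid registry values only take effect after the NFS client service (or the machine) is restarted.

```
:: Anonymous uid/gid mapping lives in the registry (DWORD values):
::   HKLM\SOFTWARE\Microsoft\ClientForNFS\CurrentVersion\Default
::   AnonymousUid = 0, AnonymousGid = 0
:: Then mount the export with anonymous credentials:
mount -o anon \\freenas\mnt\tank\openhab Z:
```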

Now it works perfectly – everything is accessible (hence the security concern) and there are no permissions or access restrictions.

The Latest Stack

  1. Rancher/RKE
    a) OpenHAB – Regular dockerized OpenHAB
    b) Node-Red – used for more complex rules and automation
    c) InfluxDB – long term persistence of just about everything
    d) Grafana – not used to its fullest extent yet, but I will use it outside of the standard UI.
    e) Frontail – as a note, I pulled Frontail out of the OpenHAB container and run it separately so that, within the same node, I can map multiple logs into a single Frontail instance (see the sketch after this list). I have all the OpenHAB logs and the Node-RED logs going there, and it would be easy to add any other container’s logs if needed.
    f) Chronograf – interface for InfluxDB
  2. FreeNAS
    a) Persistent Store – there is only a single entry point for the NFS, but I set the volumes up as separate mounts (probably not needed or correct, but it was incremental with each new container). Because the entry point is managed by the node, the shares are available to all the containers in the node, which made mapping Frontail incredibly easy.
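As an example of the Frontail arrangement from 1(e), the deployment ends up looking roughly like this. The image, claim names, and log paths are placeholders; the point is simply that Frontail takes multiple files on its command line and each log directory is mounted from the shared NFS-backed storage.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontail
  namespace: openhab
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontail
  template:
    metadata:
      labels:
        app: frontail
    spec:
      containers:
        - name: frontail
          image: mthenw/frontail:latest      # placeholder image; any Frontail build works
          args:                              # tail several logs in one Frontail UI
            - /logs/openhab/openhab.log
            - /logs/openhab/events.log
            - /logs/nodered/nodered.log      # placeholder path for the Node-RED log
          ports:
            - containerPort: 9001            # Frontail's default web port
          volumeMounts:
            - name: openhab-logs
              mountPath: /logs/openhab
            - name: nodered-logs
              mountPath: /logs/nodered
      volumes:
        - name: openhab-logs
          persistentVolumeClaim:
            claimName: openhab-logs          # placeholder claim names
        - name: nodered-logs
          persistentVolumeClaim:
            claimName: nodered-logs
```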

The site won’t let me upload a YAML file, but if anyone would like them just let me know.

I guess I can just as easily link to a GitHub page…

Rancher2/RKE Deployment YAMLs

I moved this to the Tutorials and Solutions category where it will be easier to find. Thanks for posting!