OH3 / Docker / Nginx

Hi all,

So I managed to get my Pi upgraded to 2.5.12 and I thought it would be sensible to do something leftfield and set up OH3.1 in a Docker container on my Synology.

Now, it’s been a while since I’ve been on OH and even longer since I have used docker but it seems the right time to get my head back in the game this way.

I’ve installed OH3 in Docker, but used this guide as I couldn’t get even the most basic of commands working… to be honest, I fell at the first step:

sudo useradd -r -s /sbin/nologin openhab

I PuTTY SSH into the Synology as the admin user and run the command; it asks me for a password, and trying the admin password and any other password I can think of returns

sudo: useradd: command not found

I’m sure I am missing something absolutely basic - any pointers?

I followed the other guide and it worked perfectly…

So, I’m in OH3.1. It looks a lot more user-friendly and seems to have come on in leaps and bounds; kudos to everyone involved. However, I don’t have an openhab user… does that matter long term? Everything else seems to work just fine; Things, Items etc. tick over.

Next step was to install nginx. I installed it from the registry and it installed fine; I can browse to myip:port and it shows the welcome to nginx screen, so I know it’s installed OK.

However, there are no folders in the docker folder and although I can cd into /etc/nginx/sites-enabled/ there is no default file to edit…

Again, probably a real head slap moment, but any pointers please?
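
For reference, in case it is the same issue: the stock nginx Docker image doesn’t use sites-enabled at all, so an /etc/nginx/sites-enabled/ on the NAS is probably DSM’s own web server rather than the container. The container’s default site config lives at /etc/nginx/conf.d/default.conf, which you can peek at with something like the following (assuming the container is simply named nginx):

docker exec -it nginx ls /etc/nginx/conf.d
docker exec -it nginx cat /etc/nginx/conf.d/default.conf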

Maybe RUN useradd -r -s /sbin/nologin openhab ?

It’s not clear which command it can’t find, the sudo or the useradd. This will be specific to the Synology. You might look up adduser, which is an alternative way to create users. One is lower level than the other but I always forget which one. I’m not sure which one is standard.
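
If adduser is available on the DSM (it may well not be), a rough equivalent of that useradd line would be something like the sketch below; the flags follow the BusyBox variant and I haven’t verified them on Synology:

# -S creates a system account, -s sets the (no-)login shell
sudo adduser -S -s /sbin/nologin openhab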

It’s a good security policy to keep services running as an unprivileged user. That way if the program runs amok or someone attacks it the amount of damage it can do is limited.

By default openHAB in the container will be running with UID 9000. If you don’t tell it to use some other UID by passing it in the environment variable, all the files it needs access to will be owned by 9000:9000. So all the files in the mounted volumes will have that ownership. It looks odd and can in some cases be a little hard to deal with when working with the volumes outside of the container. So the recommendation is to create an openhab user on the host and tell the container to use that user’s UID when it runs.
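
As a quick illustration, assuming the volumes end up under /volume1/docker/openhab as later in this thread, listing with numeric IDs shows that ownership:

# -n prints numeric UID/GID; with the defaults you would see 9000 9000 on these files
ls -ln /volume1/docker/openhab/userdata
# the fix is to pass the host user's IDs when creating the container, e.g.
# -e USER_ID=$(id -u openhab) -e GROUP_ID=$(id -g openhab)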

Use Synology’s GUI to create the openhab user and group and assign them the appropriate rights. After creating them SSH into the Syno and enter “id openhab” which will display the ID numbers for the user and the group. You will need the id numbers when updating the environment variables. Retrieving the id numbers should be the only thing that you need to do via SSH.
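
For illustration only, the output looks something like this (the numbers here are made up; yours will differ):

id openhab
# uid=1028(openhab) gid=100(users) groups=100(users),65536(openhab)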

Create a share named Docker if it does not already exist.

After installing the Docker package in Package Center open it up, click on “Registry” in the left hand menu, then search for openhab. Download the openhab/openhab image (it will probably be the first image listed), selecting the “latest” version.

Click on “Image” in the left hand menu. Once the download is complete (you should receive a Syno notification) click on the openhab/openhab image and press “Launch”.

Complete the entries in the “Create Container” pop-up, especially the “Advanced Settings”, and then openHAB will start. The “Volumes” tab will allow you to create the openhab folder and the three subfolders (addons, conf & userdata).
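
For reference, the volume mappings end up looking roughly like this (share and folder names as per the steps above):

File/Folder               Mount path
docker/openhab/addons     /openhab/addons
docker/openhab/conf       /openhab/conf
docker/openhab/userdata   /openhab/userdata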

In my case I change the EXTRA_JAVA_OPTS, GROUP_ID, HTTP_PORT, HTTPS_PORT, and USER_ID environment variables. (I need to change the ports due to conflicts with other software.)
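
By way of example only (xxxx and yyyy are the numbers from “id openhab”, the timezone is whatever yours is, and the ports only need changing if the defaults clash with something else):

EXTRA_JAVA_OPTS   -Duser.timezone=Europe/London
GROUP_ID          yyyy
HTTP_PORT         8081
HTTPS_PORT        8444
USER_ID           xxxx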

Hi,

That’s amazing, thanks. I didn’t want to use the GUI as I assumed there was some special setting that I’d miss that way. I’m far more comfortable in the Syno DSM, so that’s a relief…

I’ve got very similar settings to you; I haven’t changed the ports as I’ve got no conflicts. I set the crypto policy to unlimited as per the Docker install instructions.

However, there always is a however: when I go to myip:8080 I get a connection refused error.

I’m guessing I’ve missed something in the user rights to allow access?

I have an openhab group which has r/w access to the docker folder but nothing else, no quota, allow for all applications, no speed limit.

I have an openhab user which has a fixed password, is a member of the openhab group and just follows the group permissions etc., i.e. nothing extra user-specific…

What have I missed?

EDIT - FIXED

No idea what happened, but I cleared everything down, deleted folders etc. and then started afresh… working fine now!

Might have been too eager or just set up slightly wonky from all the tries…

I have set up a task in Synology Task Scheduler, set to run once, so that if I need to fire up a new container I can do it easily (I just need to change the name to something different)… it might help someone else here:

docker run \
    --name openhab \
    --net=host \
    -v /volume1/docker/openhab/addons:/openhab/addons \
    -v /volume1/docker/openhab/conf:/openhab/conf \
    -v /volume1/docker/openhab/userdata:/openhab/userdata \
    -d \
    -e USER_ID=xxxx \
    -e GROUP_ID=yyyy \
    -e CRYPTO_POLICY=unlimited \
    --restart=always \
    openhab/openhab:latest

(where xxxx is your openhab user id and yyyy is your openhab group id)
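
Once the task has run, a couple of standard Docker commands (from SSH, with sudo if needed) will confirm the container is up and show the startup log:

docker ps --filter name=openhab
docker logs -f openhab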

On to NGINX… does anyone have any pointers on the process to set this up in Docker, please?
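
In case it helps anyone else landing here, a rough sketch of one way to do it follows; every name, path, port and IP below is a placeholder rather than a tested recipe. Because this nginx container uses the default bridge network, the proxy_pass target has to be the NAS’s LAN IP (openHAB is on the host network), not localhost:

# write a minimal reverse-proxy config on the NAS
mkdir -p /volume1/docker/nginx
cat > /volume1/docker/nginx/openhab.conf << 'EOF'
server {
    listen 80;
    location / {
        proxy_pass http://192.168.1.10:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF

# run the stock nginx image with that file mounted as its default site config
docker run -d --name nginx-openhab \
    -p 7080:80 \
    -v /volume1/docker/nginx/openhab.conf:/etc/nginx/conf.d/default.conf:ro \
    --restart=always \
    nginx:latest

# then browse to http://<nas-ip>:7080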

Makes total sense. I’ve deleted the Docker container that I created, so I will try again with a proper openhab user at the helm :slight_smile: