Update to OH 5 docker image on Synology NAS: /entrypoint: line 119: exec: gosu: not found

Hi all. I have been trying to upgrade the OH installation on my Synology NAS from OH 4.3.0 to OH 5.0.1. As usual, I have downloaded the new docker image. I have also installed Java 21.

Yet, when I start the image, I get this message:

/entrypoint: line 119: exec: gosu: not found 

I read some related posts and that made me think that this problem should be fixed by now!? Is there a workaround that you can suggest?

Indeed, this should be fixed. Are you certain you downloaded an image released since June?

A work-around would be to custom build a docker image that includes gosu or make the changes as described in that PR.
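As a rough sketch of the custom-build route (assuming the Debian-based image, where gosu is available as a distro package; the tag and image name here are examples, not taken from the PR):

```shell
# Sketch: build a local image that layers gosu on top of the official one.
# The base tag and the local image name are examples.
cat > Dockerfile <<'EOF'
FROM openhab/openhab:5.0.1
USER root
RUN apt-get update && \
    apt-get install -y --no-install-recommends gosu && \
    rm -rf /var/lib/apt/lists/*
EOF
sudo docker build -t openhab-with-gosu:5.0.1 .
```

You would then point your container at openhab-with-gosu:5.0.1 instead of the official image.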

If you are using the Synology Container Manager, it is possible to “duplicate” the container and manually edit the entrypoint to su-exec openhab tini -s ./start.sh
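If you create the container from the command line instead of the Container Manager UI, the same fix can be applied with --entrypoint; a minimal sketch (volume paths, container name, and tag are examples):

```shell
# Sketch: override the entrypoint so the container starts openHAB via
# su-exec instead of the missing gosu. Paths, name, and tag are examples.
sudo docker create \
  --name openhab \
  --net=host \
  --entrypoint su-exec \
  -v /volume1/docker/openhab/conf:/openhab/conf \
  -v /volume1/docker/openhab/userdata:/openhab/userdata \
  -v /volume1/docker/openhab/addons:/openhab/addons \
  openhab/openhab:5.0.1 \
  openhab tini -s ./start.sh
```

Note that this bypasses the image's own /entrypoint script entirely, which is exactly what the manual edit in Container Manager does as well.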

I also use a Synology NAS, and I haven’t encountered this issue yet. I’ve been using the official Docker container for years and have only mounted conf, addons, and userdata, and adjusted a few environment variables.


I also run openHAB on Synology (DSM 7.2) without this issue. However, I did not install Java 21 via the Package Center; the Docker container seems to have taken care of it automatically. In fact, there are no Java packages installed in my Package Center.

Also, when I upgraded from 4.x to 5.0 I just used the “upgrade available” link on the Image tab of Container Manager. That link downloaded the new image and then restarted the container. The upgrade went smoothly.

That’s the whole point of containers. They ship with everything they need to run in a self contained bundle (didn’t want to use the word “container” to describe it), and they do not depend on any software installed on the host (i.e. DSM). If a container were to install Java or anything else on DSM, something is very seriously wrong.

Hmm this is odd.

First, about Java: I had downloaded an initial version of the 5.0.1 image some weeks ago and already had problems with it then. I then read here that an installation of Java 21 is required and followed this instruction (Step 1) to install it. But you are right: if I remove that package again, I get the same error message as before, so Java is not the issue. I had also assumed that Java would be part of the image, and apparently that is the case.

But I still do not understand why the image is not working for me. To make sure, I deleted the image, also ran a “sudo docker image prune” on the command line, and then re-downloaded the 5.0.1 image. I even tried the “5.1.0-snapshot” image version, which I had never downloaded before. Same result: I still get the gosu error. So are these images still broken?
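To rule out a stale local copy, it may help to check when the image you actually have was built; a sketch (assuming the 5.0.1 tag):

```shell
# Sketch: force a fresh pull, then show the build date and digest of
# the local image, to confirm it is a recent (post-fix) build.
sudo docker pull openhab/openhab:5.0.1
sudo docker image inspect openhab/openhab:5.0.1 \
  --format 'Created: {{.Created}}  Digest: {{index .RepoDigests 0}}'
```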

Luckily, @uk59821’s workaround worked for me: I duplicated the container and then manually patched the entrypoint. I have the same folder mapping as you do.

@JimH Your solution also sounds quite convenient. Are you using the “latest” image then? I had read somewhere that this was not recommended for some reason, but I can no longer recall why.

(Sorry, I am really not a Docker expert at all.)

Yes, I’m using the openhab/openhab:latest image.

I can only think of two things that I do differently than typical Container Manager procedures:

  • I used the DSM UI in order to create an openhab user and openhab group. That way I can restrict the folders and apps that the openhab user has access to.
  • I use a bash script instead of creating the container in Container Manager. (I needed to do that in order to pass the z-wave dongle to openHAB.)

The bash script that I use to create the container follows. The container it creates has the Execution command = su-exec openhab tini -s ./start.sh.

#!/bin/bash 
# Make sure that all line end characters are LF only, not CRLF.
# Line continuation is a backslash but the backslash must be the 
# last character before the end-of-line.  (No spaces between 
# the backslash and the LF). 
sudo docker create  \
--name OpenHab  \
--net=host  \
--restart=unless-stopped  \
--memory=4G  \
--device=/dev/ttyACM0 \
-v "/volume1/docker/OpenHab/userdata:/openhab/userdata"  \
-v "/volume1/docker/OpenHab/conf:/openhab/conf"  \
-v "/volume1/docker/OpenHab/addons:/openhab/addons"  \
-v "/volume1/docker/OpenHab/speedtest:/speedtest"  \
-e "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"  \
-e "CRYPTO_POLICY=unlimited"  \
-e "EXTRA_JAVA_OPTS=-Duser.timezone=America/Chicago -Dgnu.io.rxtx.SerialPorts=/dev/ttyACM0"  \
-e "GROUP_ID=65536"  \
-e "KARAF_EXEC=exec"  \
-e "LC_ALL=en_US.UTF-8"  \
-e "LANG=en_US.UTF-8"  \
-e "LANGUAGE=en_US.UTF-8"  \
-e "OPENHAB_BACKUPS=/openhab/userdata/backup"  \
-e "OPENHAB_CONF=/openhab/conf"  \
-e "OPENHAB_HOME=/openhab"  \
-e "OPENHAB_HTTP_PORT=8080"  \
-e "OPENHAB_HTTPS_PORT=8443"  \
-e "OPENHAB_LOGDIR=/openhab/userdata/logs"  \
-e "OPENHAB_USERDATA=/openhab/userdata"  \
-e "USER_ID=1027"  \
-e "JAVA_HOME=/usr/lib/jvm/default-jvm"  \
openhab/openhab:latest 

Hope this helps.

I’m running the latest Docker image and not seeing any problem. Are you running the Debian or Alpine image?

You don’t really need to go to that extreme. If you mount a folder as a volume to /etc/cont-init.d, any script in that folder will be executed prior to the entrypoint.
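As a sketch of that approach (the host folder path is an example; this assumes the Debian-based image, so gosu can be installed from apt):

```shell
#!/bin/bash
# Sketch: save as e.g. /volume1/docker/openhab/cont-init.d/10-gosu.sh
# and mount the folder with:
#   -v /volume1/docker/openhab/cont-init.d:/etc/cont-init.d
# Installs gosu at container start if the image is missing it.
if ! command -v gosu >/dev/null 2>&1; then
    apt-get update && apt-get install -y --no-install-recommends gosu
fi
```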

That is a good question. The Synology Container Manager does not show this. Is there an easy way to find out? My Synology device is a DS723+.

Had the same issue when upgrading from some 4.3.x version. Seems like the entrypoint of the official image changed?

What is the full name of the image you are using? If "alpine" is in the name, you are using Alpine. The default is Debian.
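If the tag alone doesn't tell you, you can also ask the image directly; a sketch (overriding the entrypoint so only the file is printed):

```shell
# Sketch: print the image's /etc/os-release to tell Debian from Alpine.
sudo docker run --rm --entrypoint cat openhab/openhab:latest /etc/os-release
```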

I use the default.

openhab/openhab:latest

I am sorry, maybe it’s a deficiency of Synology’s Container Manager app, but it does not show anything about Alpine or Debian…

I guess it will be the default then, Debian.

I had the same issue and resolved it by exporting the container settings, editing the entry point, and creating a new container. Pro tip: delete the old container before creating the new one to avoid port conflicts.

Yes thanks, that actually seems to work!