Docker install on Synology: how to create user with no home and no shell

I want to migrate my RPi installation to my Synology NAS and use the docker image.

So the first thing to do is obviously to read the documentation, but this is the question I have.

The docker installation instructions mention the creation of an openhab user configured to be a system user with no home and no shell.

sudo useradd -r -s /sbin/nologin openhab

and to add your regular user to that openhab group.

But Synology user management is done through the GUI, so can anyone tell me how to do these steps on a Synology?

Thanks

My old Synology supports SSH. I would guess yours does too :slight_smile: That is how I would try it.

Sure, connecting to the Synology is no problem, but user management works completely differently on a Synology: it involves the synouser and synogroup commands. So you can’t just SSH into the Synology and execute the commands from the documentation.

Ditto!
I’m also trying to figure out what to put for the user aspects.

Hi @rswennen
Did you solve the issue? I’m also trying to get openHAB running on a Synology NAS.

Hi.

I am just doing the same migration process from Pi3 to DS916+

First I want to say thanks to Rich @rlkoshak and the other guys improving the docker container and its documentation. I think the docker install page is a really good reference. Great work! I’m really falling in love with docker :heart_eyes:

What is missing is (of course) the Syno-specific stuff regarding user management. So after some trial and error I figured out the following steps (tried them several times with the oh2.3 and oh2.4 containers) to get the container up and running on DSM 6.2.

1) Create directories

As mentioned in the docs, you need to create the directories that should be mounted in the container later on. I have set up a share on my DSM with a directory for each docker container I’m using, containing the different mounts as subdirectories, e.g. conf (see also the screenshot below; ignore the .vscode and .git folders for now).

2) Set up permissions for the share

I have defined a separate docker user on DSM with write access to this docker share and no other DSM permissions (the user is called docker_rw on my system). Basically you can also use your “normal” DSM user, as long as this user has write access to the docker share. But I’m a fan of dedicated users for these things.

Now you have to find out the user ID of your docker user. To do so, SSH into your DS. There may be different ways to find out the ID of a Linux user; I always use

cat /etc/passwd

The output contains one line for each Linux user on the system, like this:

docker_rw:x:1030:100:<Some-Description>:/var/services/homes/docker_rw:/sbin/nologin

where 1030 is the user ID.
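If you don’t want to scan the whole file, the UID can also be extracted directly. A small sketch (docker_rw is the example user from above; substitute your own):

```shell
# Ask for the UID directly (only works if the user exists on this system):
#   id -u docker_rw              # prints e.g. 1030

# Or cut field 3 (the UID) out of the passwd entry; demonstrated here on
# the sample line from above:
line='docker_rw:x:1030:100:Some-Description:/var/services/homes/docker_rw:/sbin/nologin'
echo "$line" | cut -d: -f3       # prints 1030
```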

3) Deploy the container using docker compose

I prefer docker compose for repeatable deployment. So let’s create the required docker-compose.yml file in our main openhab directory on the docker share (see also screenshot below) with the following content:

version: '3'

services:
  openhab_test: #name of the service
    container_name: openhab_test
    image: "openhab/openhab:2.4.0-amd64-debian"
    restart: always
    network_mode: host
    volumes:
      - "/etc/localtime:/etc/localtime:ro"
      - "/etc/TZ:/etc/timezone:ro"
      - "./addons:/openhab/addons"
      - "./conf:/openhab/conf"
      - "./userdata:/openhab/userdata"
    environment:
      USER_ID: "1030"
      OPENHAB_HTTP_PORT: "18080"
      OPENHAB_HTTPS_PORT: "18443"
      EXTRA_JAVA_OPTS: "-Duser.timezone=Europe/Berlin"
      LC_ALL: "de_DE.UTF-8"
      LANG: "de_DE.UTF-8"
      LANGUAGE: "de_DE.UTF-8"
    devices: 
      - "/dev/ttyACM0:/dev/ttyACM0" # for Zwave stick

I think most of the statements are more or less self-explanatory. This is the example for my test installation, so I have changed the ports. This is not necessary if you want to use the standard ports. You can set all environment variables as described on the openHAB docker install page.

The important step here is to set the user ID we found out in step 2 as an environment variable and NOT via the user flag. This took me some time to find out, as the openHAB dockerhub reference says you have to do both.

Doing both didn’t work for me and caused the container to fail with permission issues. After removing the user statement it just worked. Actually I’m not sure if this is a Synology-specific issue or just an issue in the docs.

After the docker-compose.yml file is saved go back to your Synology SSH terminal and let the magic happen:

cd /volumeX/<name_of_docker_share>/<openhab_folder>/ #example
cd /volume3/docker/openhab_test/ #real command on my system
sudo docker-compose up -d

That’s it :ok_hand: (hopefully :rofl:)

Now the container should be up and running and also visible in the docker UI on DSM. You can now stop/start/modify/delete the container from the UI as any other container created via UI.

Hint: Of course you don’t have to use docker-compose. It should work the same way with docker run ...
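For completeness, a hedged sketch of what that docker run command could look like for the compose file above (all the values, UID 1030, the custom ports, the /volume3 path, are this post’s examples; adapt them to your system):

```shell
# Rough docker run equivalent of the compose file above (sketch only).
docker run -d --name openhab_test \
  --restart=always \
  --net=host \
  -v /etc/localtime:/etc/localtime:ro \
  -v /etc/TZ:/etc/timezone:ro \
  -v /volume3/docker/openhab_test/addons:/openhab/addons \
  -v /volume3/docker/openhab_test/conf:/openhab/conf \
  -v /volume3/docker/openhab_test/userdata:/openhab/userdata \
  -e USER_ID=1030 \
  -e OPENHAB_HTTP_PORT=18080 \
  -e OPENHAB_HTTPS_PORT=18443 \
  -e EXTRA_JAVA_OPTS="-Duser.timezone=Europe/Berlin" \
  --device /dev/ttyACM0:/dev/ttyACM0 \
  openhab/openhab:2.4.0-amd64-debian
```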

So, enough for now. I hope this write-up helps some of you guys to have fun with docker and openhab on your Synology.

And to the experts out there: I’m just learning docker so if you find some mistakes or room for improvement please let me know.

Cheers
Sebastian


Hi @hannibal29

Thanks a lot for your guideline.

Unfortunately I missed your post, so I tried it on my own. My experience is that steps 1 and 2 worked for me as well. For step 3 I used the docker UI on DSM. In addition to the default settings I had to define the volumes passed to the container and set the USER_ID environment variable. Setting the GROUP_ID as described on the docker install page failed, because the passed group ID was already in use in the container.

Your approach of using docker compose seems to be more flexible than the docker UI. So I will give it a try.

Best regards
Raphael

I can confirm this. Setting the group ID also broke my container.

Thanks for your addition. Of course using the DSM UI is also possible. This is up to personal preference.

Thanks a lot for your detailed description. It helped me a lot to get OH working with docker on Synology.

This brings me to further questions regarding this setup. Did you manage to get Visual Studio Code working with syntax error handling? I changed the IP and port in the extension settings. Now the BasicUI from VS Code works, but the syntax check, for rules for example, is not working. The default Language Server port is 5007, but I do not know if it is necessary to change that. (I use the properties from your guideline, port 18080 for HTTP.)

Any help would be appreciated.

I’ve not tried it with VSC yet. I can give you feedback after checking, but this may take until the weekend.

Fantastic post, I used it to set up my docker image.
The only issue is getting my Aeotec Z-Stick Gen5 to work without a chmod on every boot.

I tried following the instructions here but with no success. If I manually create the symlink it works, because my chmod on the symlink is applied to the ttyACM0 as well, but my usb-zwave.rules file doesn’t appear to work. I also can’t figure out how to debug it.

Any ideas?

Thanks.
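For debugging the rules file, the standard Linux udevadm workflow might help (untested on Synology; DSM ships a trimmed udev, so some of these tools may be missing):

```shell
# Show the attributes udev sees for the device; the idVendor/idProduct
# values in the rule must match a single parent block from this output:
udevadm info --attribute-walk --name=/dev/ttyACM0

# Dry-run the rule processing for the device and see which rules fire:
udevadm test "$(udevadm info --query=path --name=/dev/ttyACM0)"

# After editing the .rules file, reload and re-trigger the events:
udevadm control --reload-rules && udevadm trigger
```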

Syntax completion and error checking work for me. These are my VSC workspace settings for the modified ports:

    {
        "openhab.host": "<host IP>",
        "openhab.port": 18080,
        "openhab.karafCommand": "ssh openhab@<host IP> -p 8102",
        "openhab.lspPort": 5008
    }

Mh, for me this was the solution. To double-check, here is exactly what I did for my Z-Stick Gen5:

  1. Created /lib/udev/rules.d/55-usb-zwave.rules with the following content:
SUBSYSTEM=="tty", ATTRS{idVendor}=="0658", ATTRS{idProduct}=="0200", SYMLINK+="usbZwave", MODE="0666"
  2. Updated docker-compose.yml:
[...]
EXTRA_JAVA_OPTS: "[...] -Dgnu.io.rxtx.SerialPorts=/dev/usbZwave"
[...]
devices: 
      - "/dev/usbZwave:/dev/usbZwave"

Thanks.
I must be doing something stupid; again, this works, but only if I create the symbolic link myself and chmod it to 666.

I did look the other day and it appears my Synology is trying, and failing, to load the USB modem rules on my Z-Stick. There are no log items for the 55-usb-zwave.rules file I created.

Maybe my issue was creating the group for the symlink.

homeautomation:x:11:root,openhab

Not sure how I was supposed to do this, and can’t remember now. Maybe I got it wrong? Can someone help so I can check if it is correct?

Thanks

Steve

Thanks for the info, but it doesn’t work for me. Still an error in the language server:

[Error - 18:09:02] Server initialization failed.
Message: Internal error.
Code: -32603
java.util.concurrent.CompletionException: java.lang.IllegalArgumentException: URI has an authority component
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at org.eclipse.xtext.ide.server.concurrent.WriteRequest.run(WriteRequest.java:44)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalArgumentException: URI has an authority component
at sun.nio.fs.UnixUriUtils.fromUri(UnixUriUtils.java:53)
at sun.nio.fs.UnixFileSystemProvider.getPath(UnixFileSystemProvider.java:98)
at java.nio.file.Paths.get(Paths.java:138)
at org.eclipse.smarthome.model.lsp.internal.MappingUriExtensions.toPathAsInXtext212(MappingUriExtensions.java:209)
at org.eclipse.smarthome.model.lsp.internal.MappingUriExtensions.mapToClientPath(MappingUriExtensions.java:119)
at org.eclipse.smarthome.model.lsp.internal.MappingUriExtensions.toUriString(MappingUriExtensions.java:110)
at org.eclipse.xtext.ide.server.LanguageServerImpl.lambda$publishDiagnostics$26(LanguageServerImpl.java:447)
at org.eclipse.xtext.xbase.lib.ObjectExtensions.operator_doubleArrow(ObjectExtensions.java:139)
at org.eclipse.xtext.ide.server.LanguageServerImpl.publishDiagnostics(LanguageServerImpl.java:457)
at org.eclipse.xtext.ide.server.LanguageServerImpl.lambda$null$9(LanguageServerImpl.java:293)
at org.eclipse.xtext.ide.server.ProjectManager.lambda$null$3(ProjectManager.java:135)
at org.eclipse.xtext.build.IncrementalBuilder$InternalStatefulIncrementalBuilder.validate(IncrementalBuilder.java:267)
at org.eclipse.xtext.build.IncrementalBuilder$InternalStatefulIncrementalBuilder.lambda$launch$6(IncrementalBuilder.java:244)
at org.eclipse.xtext.build.ClusteringStorageAwareResourceLoader.lambda$executeClustered$1(ClusteringStorageAwareResourceLoader.java:77)
at org.eclipse.xtext.xbase.lib.internal.FunctionDelegate.apply(FunctionDelegate.java:42)
at com.google.common.collect.Lists$TransformingRandomAccessList$1.transform(Lists.java:617)
at com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48)
at java.util.AbstractCollection.toArray(AbstractCollection.java:141)
at java.util.ArrayList.addAll(ArrayList.java:581)
at com.google.common.collect.Iterables.addAll(Iterables.java:352)
at org.eclipse.xtext.build.ClusteringStorageAwareResourceLoader.executeClustered(ClusteringStorageAwareResourceLoader.java:80)
at org.eclipse.xtext.build.BuildContext.executeClustered(BuildContext.java:55)
at org.eclipse.xtext.build.IncrementalBuilder$InternalStatefulIncrementalBuilder.launch(IncrementalBuilder.java:251)
at org.eclipse.xtext.build.IncrementalBuilder.build(IncrementalBuilder.java:399)
at org.eclipse.xtext.build.IncrementalBuilder.build(IncrementalBuilder.java:384)
at org.eclipse.xtext.ide.server.ProjectManager.doBuild(ProjectManager.java:115)
at org.eclipse.xtext.ide.server.ProjectManager.doInitialBuild(ProjectManager.java:107)
at org.eclipse.xtext.ide.server.BuildManager.doInitialBuild(BuildManager.java:148)
at org.eclipse.xtext.ide.server.WorkspaceManager.refreshWorkspaceConfig(WorkspaceManager.java:148)
at org.eclipse.xtext.ide.server.WorkspaceManager.initialize(WorkspaceManager.java:117)
at org.eclipse.xtext.ide.server.LanguageServerImpl.lambda$initialize$10(LanguageServerImpl.java:295)
at org.eclipse.xtext.ide.server.concurrent.WriteRequest.run(WriteRequest.java:38)
… 5 more

Any idea where I can find the issue?

Hi @hannibal29

Today I tried to get my EnOcean USB310 stick to work on my Synology/docker setup. Based on your solution for your Zwave stick I did the following:

  1. I created /lib/udev/rules.d/55-usb-enocean.rules with the following content:
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", SYMLINK+="usbEnocean", MODE="0666"

I found the USB vendor and product ID with lsusb:

|__usb1          1d6b:0002:0404 09  2.00  480MBit/s 0mA 1IF  (Linux 4.4.59+ xhci-hcd xHCI Host Controller 0000:00:15.0) hub
  |__1-2         0403:6001:0600 00  2.00   12MBit/s 90mA 1IF  (FTDI FT232R USB UART AL05792Y)

I would expect that the Synology will now create a /dev/usbEnocean device. Unfortunately there is no such entry in /dev.

Do I need some additional configuration on the Synology itself to allow attaching external USB devices?

Best regards
Raphael

Hi

The instructions here solved the problem. Now I see the /dev/ttyUSB0 device.

I modified my compose file as follows:

[...]
EXTRA_JAVA_OPTS: "-Dgnu.io.rxtx.SerialPorts=/dev/ttyUSB0"
[...]
devices: 
      - "/dev/ttyUSB0:/dev/ttyUSB0"

Unfortunately OH is not able to find the port “ttyUSB0”. I’ve got the following log entry:

2019-02-17 18:49:29.867 [WARN ] [erial.internal.SerialPortManagerImpl] - No SerialPortProvider found for: /dev/ttyUSB0

The documentation for an OH Linux installation here talks about “Privileges for Common Peripherals” and about adding the user to the “dialout” and “tty” groups. At the moment I do not know if I need this step, and how to do it within my docker installation.

Best regards
Raphael
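Regarding the dialout/tty question: what matters inside the container is the numeric group ID of the device’s group. A sketch of how it could be looked up and handed to the container (the group name and GID 20 are typical Debian values, not verified on Synology):

```shell
# The GID is field 3 of the /etc/group entry; demonstrated on a sample
# Debian-style line (on the real host you would run: getent group dialout):
line='dialout:x:20:openhab'
echo "$line" | cut -d: -f3    # prints 20

# The GID could then be granted to the container process, e.g.
#   docker run --group-add 20 ... openhab/openhab:2.4.0-amd64-debian
# (docker compose has an equivalent group_add key)
```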

What are the privileges you see for /dev/ttyUSB0 inside the container?

The privileges I see within the container are:

crw-rw-rw- 1 root root   5, 0 Feb 17 22:04 tty                                                                                                                    
crw------- 1 root root 188, 0 Feb 17 22:04 ttyUSB0 

Thanks for your hint. With the privileges changed to 666 for ttyUSB0, the EnOcean binding is able to see the EnOcean stick.
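For reference, a sketch of the permission fix itself, demonstrated on a scratch file (on the NAS the target is /dev/ttyUSB0; note the change is lost on reboot or replug, which is what the udev MODE="0666" rules earlier in this thread avoid):

```shell
# Demonstrated on a scratch file; on the NAS the target is /dev/ttyUSB0.
# 666 = read/write for everyone; acceptable on a home NAS, but a dedicated
# group (see the dialout discussion above) would be cleaner.
touch /tmp/fake_ttyUSB0
chmod 666 /tmp/fake_ttyUSB0
stat -c '%a' /tmp/fake_ttyUSB0   # prints 666
```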