Running openHAB 2 in Docker

That is the correct repository.

That is probably an error. When OH moved from 1.x to 2.0 they changed from Eclipse to Karaf. Eclipse’s management port is 5555 and Karaf’s is 8101.

Like I said before, treat EXPOSE as documentation, and unfortunately in this case it's wrong. Functionally it still works, though. When you use --net=host it doesn't matter whether a port was defined in EXPOSE or not; the container shares the host's network stack and grabs that port. When using -p, it likewise doesn't matter whether the port is in EXPOSE; that container port will be mapped to the indicated host port.
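To illustrate the two approaches, here is a sketch assuming the stock openhab/openhab image and a container named openhab; 8101 is Karaf's console port:

```shell
# Option 1: host networking -- the container shares the host's network
# stack, so every listening port (including Karaf's 8101) is reachable
# regardless of what the Dockerfile EXPOSEs.
docker run --name openhab --net=host openhab/openhab

# Option 2: explicit port mapping -- publish the web UI and Karaf's
# console port on the host, again independent of EXPOSE.
docker run --name openhab -p 8080:8080 -p 8101:8101 openhab/openhab
```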

SSH is implemented by the Karaf console. The ssh you are doing is into the Karaf console, not the container itself. Therefore there is no sshd. Furthermore, when you ssh to the Karaf console, you are not working at a typical terminal but instead a custom environment specifically for monitoring and managing Karaf.

I am getting closer. Just not quite there.

When I ssh in there is some problem with the negotiation, which has me stumped.

ssh -vvv openhab@localhost -p 8101
OpenSSH_7.3p1 Ubuntu-1, OpenSSL 1.0.2g 1 Mar 2016
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: resolving "localhost" port 8101
debug2: ssh_connect_direct: needpriv 0
debug1: Connecting to localhost [::1] port 8101.
debug1: Connection established.
debug1: identity file /home/craigh/.ssh/id_rsa type 1
debug1: key_load_public: No such file or directory
debug1: identity file /home/craigh/.ssh/id_rsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/craigh/.ssh/id_dsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/craigh/.ssh/id_dsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/craigh/.ssh/id_ecdsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/craigh/.ssh/id_ecdsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/craigh/.ssh/id_ed25519 type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/craigh/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_7.3p1 Ubuntu-1
ssh_exchange_identification: Connection closed by remote host

Anyone have any ideas?

thanks in advance.

Hi Craig,

On the official openhab2 docker readme it states that you can do this via exec. It works perfectly for me.

docker exec -it openhab /openhab/runtime/karaf/bin/client

awesome. that works for me too. I just missed those lines in the README.

appreciate the help.

This is working great for me, except that openHAB is keeping time in UTC. I suspect this is because my Fedora 24 system has no /etc/timezone. I'd love to find a fix for this.

It sounds like you have already identified the solution. Either populate /etc/timezone with your timezone, or mount a volume into the Docker container to replace the built-in /etc/timezone with a valid one.
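One way to do the volume approach, assuming the host has a valid /etc/timezone (and optionally /etc/localtime) to share:

```shell
# Mount the host's timezone files read-only into the container so the
# JVM inside picks up the local zone instead of UTC.
docker run --name openhab \
  -v /etc/timezone:/etc/timezone:ro \
  -v /etc/localtime:/etc/localtime:ro \
  openhab/openhab
```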

See the following for the info about timezone on Fedora.

Leaving an update in case someone else runs into this.

Since Fedora doesn't use /etc/timezone, I was a little stumped about what to provide, but I see that other distros use a text file containing the name of the timezone. In my case, I created /etc/timezone with the content 'America/New_York' and it worked.
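Re-creating that on Fedora is a one-liner; the zone name America/New_York is just this example, substitute your own IANA zone name:

```shell
# Fedora doesn't ship /etc/timezone; create it with the IANA zone name.
echo 'America/New_York' | sudo tee /etc/timezone
```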

I’m seeing in your directions a new function to call in the Docker RUN command:
" --user= "

I'm not seeing relevant code in the Dockerfiles to correspond to this, similar to what was explained here by @sir_luddite. And what would that option look like in a Compose file? I'm trying to understand how to use it so I can get over the hump of having a persistent, stored setup. I just bonked my fresh setup trying to do so: I missed grabbing something before killing the container and screwed myself. Hoping to resolve this so I don't have this problem anymore.

--user is a standard docker argument that causes the entrypoint in the container to be run as the indicated user. You can look at the Docker documentation for a full explanation. I've no experience with docker-compose, so I don't know how to specify the user with that tool.
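For reference, a minimal sketch of the flag; the UID/GID 1000:1000 here is only an illustration, use whatever matches the owner of your mounted data:

```shell
# Run the container's entrypoint as UID 1000 / GID 1000 instead of the
# image's default user, so file ownership matches the host user.
docker run --user=1000:1000 --name openhab openhab/openhab
```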

I'm not sure how this user option will address your problem, though. It sounds like you are not mounting the right volumes into the container so that all the needed data gets persisted.
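The usual fix is to mount openHAB's writable directories from the host; a sketch, where /opt/openhab is an illustrative host path:

```shell
# Persist configuration, userdata and addons outside the container so
# they survive "docker rm".
docker run --name openhab \
  -v /opt/openhab/conf:/openhab/conf \
  -v /opt/openhab/userdata:/openhab/userdata \
  -v /opt/openhab/addons:/openhab/addons \
  openhab/openhab
```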

Thanks for the detail, I wasn’t realizing that would essentially run the container in that user context. Interesting option I hadn’t realized before. That will solve my issue, as it will allow me to specify the user to match the backend storage system. I run a docker host with SMB/CIFS storage attached. Due to the usual permissions debacle I fight the uphill battle of ironing out permissions issues on the backend. If I can line up the user running the app to match the user that I use for the shared location, then I can have a seamless experience that doesn’t have the hiccups most run into.

I’ve looked at attempting to run the data that I need to persist on the local host as well, but that approach brought me some issues recently and forced a re-configuration of my whole setup from a mistake made with the volume. :frowning: So I’m trying to approach it again and see how I can handle it best. This User option may work easier though, I just need to do some validation.

I’m following the instructions here but running into permission issues:

The user ID passed to "--user" is actually 999, and then start_debug.sh says "java.io.FileNotFoundException: /openhab/runtime/system/org/ops4j/pax/web/pax-web-api/4.3.0/pax-web-api-4.3.0.jar (Permission denied)". Attaching to the container, I can see that inside the container the system creates a user whose user ID is 9001, and all the directories under /openhab/runtime are only writable by that user.

Any idea how to deal with this?

That is not the behavior I'm seeing, but I've not updated in a while, so I can't say if something has changed.

For me, when I don't pass in the user argument, OH ends up running as UID 1000. I have no idea where 9001 is coming from.

New issue I'm running into. Anyone else using Docker to run multiple other apps, specifically ones like Plex that may be utilizing a mapping to port 1900? This is causing me issues with Hue Emulation. If I run the Plex container first, it beats OH2 to locking down the port for its own use. If I start OH2 first, Plex fails to start. So I find myself having to down Plex and up OH2 so I can get the Echo to update the list of devices through the Hue Emulation bridge, then down OH2 and up Plex again.

It’s an annoying process more than anything as a workaround. I found this issue: https://github.com/openhab/openhab2-addons/issues/698 which is relevant and outlines the same problem I’ve seen. The difference is this is relevant to two plugins I believe conflicting, whereas I have 2 different applications on a single host conflicting. Anyone finding a similar issue? My OpenHAB.log is being flooded with the error relevant to the port being in use.

I’ve not seen the same issue but can you use port mapping to move OH or Plex to another port? Or are these well known ports applications are expecting to look for?

I run Plex in Docker too and have no idea what port 1900 is for and what would happen if you moved it to some other port or blocked it.

If you are running OH with --net=host you have fewer opportunities to move the Hue emulation port in this way.

@rlkoshak - It's port 1900, which I believe is the standard UPnP (SSDP) discovery port for a lot of things, unfortunately. In this case, it's more important to me to have my Plex discoverable than my Hue bridge exposed, but I also want to get HomeKit running, and I suspect this is causing me woes there as well.

I don't like using host mode unless I have to. So for now I've got Plex mapped to port 1900 already, which means mapping OH to it as well won't work. I'm going to try testing whether a different port number will work, but I believe these apps rely on the receiving port being 1900. Just as if you were sending mail, you'd expect it on 25; telling the app otherwise doesn't seem as easy. I'm no UPnP guru, though, so we shall see.

@hclxing, @shawnmix,

I myself just ran into these issues with my install. Clearly there was a very recent change to the image: the default user it runs as is now openhab, which has an ID of 9001 inside the container. Unfortunately most OH users do not have a user 9001 on the host, and because default file permissions tend not to grant world read, the openhab user inside the container does not have permission to read/write the files.

See the following:

The tl;dr: create an openhab user on your host with ID 9001, or give world read/write on the openhab directories and files that get mounted into the container.
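The first option sketched as commands; the path /opt/openhab is illustrative, point chown at wherever your mounted volumes actually live:

```shell
# Create a matching host user with UID 9001 and hand it the mounted dirs.
sudo useradd -r -u 9001 -s /sbin/nologin openhab
sudo chown -R 9001:9001 /opt/openhab
```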

So what I’m doing is telling docker to make a macvlan on the same subnet as the docker host. In this case, my IP range is 192.168.1.x

docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=enp1s0f1 \
  -o macvlan_mode=bridge \
  macvlan0

In my docker create statement, I add these lines.

--net=macvlan0 \
--ip=192.168.1.7 \
-h openhab2 \

My docker host is on 192.168.1.4 and now my OH2 install runs on 192.168.1.7. One thing to note is that my OH2 install can’t speak to the host over IP. My plex server is also setup the same way on a different IP, and OH2 and plex can talk to each other.

@obbers - I just discovered this newly supported network type. I've seen macvlan and ipvlan, though I'm still sorting out which will work better in my environment. In my case, I'm running my Docker host on ESXi, and I believe I read somewhere that macvlan will not work properly in a virtualized environment, so it may be ipvlan I need to work with.

Either way, it's good to hear this has been successful for you. I'm wondering how well the system works if Plex is on the regular bridge network and OH is on the macvlan/ipvlan network. The only reason I don't like the idea of the vlan network types is that each app will then have its own IP. Currently I like having one IP with various ports for the services (plex, plexrequests, openhab, unifi-controller, plexpy, etc.). Eventually I'll get to using nginx to proxy everything anyway, so then I won't be exposing the other apps at all.

One question: have you seen documentation for using these drivers with Docker Compose? I can't seem to find any, and that's my method of spinning up containers currently. I usually try to mimic the levels I'd expect in the YAML file, but I'd rather have official documentation. :smiley: And have you found that you MUST assign the IP, or does the container just auto-assign the next IP in the range? A disappointing find in one of my tests is that the container won't use DHCP from the assigned gateway. I'd prefer that, so that all my IP reservations can be done through my pfSense box.
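For what it's worth, Compose files do let you declare a macvlan network under the top-level networks key; here is an untested sketch mirroring the docker network create command above (service name, image and addresses are illustrative, and your Compose version may need different syntax):

```yaml
version: '2'
services:
  openhab:
    image: openhab/openhab
    networks:
      macvlan0:
        ipv4_address: 192.168.1.7

networks:
  macvlan0:
    driver: macvlan
    driver_opts:
      parent: enp1s0f1
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1
```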

Does this affect the ability to use "discovery" with openHAB as mentioned in the documentation?

I would like to use a dedicated IP for the openhab container, too.
But I do not want to lose any feature.

see: Question regarding discovery with docker container

It keeps the discovery features. The macvlan setup I use is the equivalent of --net=host, except you're also giving the container its own static IP address.