Hi, fellow openhabians
I’m in a difficult state of mind regarding my openHAB setup and experience.
I’ve been running openHAB for several years, and I simply love it! Started with a simple setup on an RPi2 and upgraded to an RPi3 onwards. After some time I felt it was about time to move my setup over to my Ubuntu server PC. That was when I was introduced to Docker containers.
Finally I decided to run my setup in a Docker environment, and found it to be a safe, quick, and great way to run my OH2 instance. BUT… I have been struggling with some challenges ever since.
To the questions:
- I would like to run several rules that trigger Python scripts on the host computer.
How can I interact with host resources like Python etc.?
- I also have the ipcamera binding installed, but can’t get access to the host’s ffmpeg. Any solution on this matter would be truly appreciated.
I have certainly spent my hours on this forum looking for a solution, so I would be very happy to get some concrete proposals or, better yet, solutions to my cases 😉
Any reason not to run them in a separate container? Depending on how complex you wish to get, there are Python libraries that expose a REST API.
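To make that suggestion concrete, here is a small sketch of how openHAB could trigger a script living in a separate container over HTTP. The container name `script-runner`, the port, and the `/run/<name>` endpoint are all hypothetical — whatever REST layer you put in that container defines the real API. A `DRY_RUN` switch is included so you can see the request without a server running:

```shell
#!/bin/sh
# Sketch: ask a hypothetical "script-runner" container to run a Python script.
# The URL scheme is an assumption, not a real openHAB or library API.

run_script() {
    # $1 = script name exposed by the runner container
    url="http://script-runner:5000/run/$1"
    if [ "${DRY_RUN:-0}" = "1" ]; then
        # print the request instead of sending it (handy for testing)
        echo "POST $url"
    else
        curl -fsS -X POST "$url"
    fi
}

DRY_RUN=1
run_script water_heater
# prints: POST http://script-runner:5000/run/water_heater
```

From an openHAB rule you would invoke something like this via the Exec binding or `executeCommandLine`, with `DRY_RUN` unset so the request is actually sent.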
Disclaimer: I’ve never run docker and know very little about it
That said, I’m pretty sure Rich will be along shortly to chime in, as ‘docker’ is in your thread title, but… I’m pretty sure he usually states that running external scripts is one of the harder things to do in Docker. I believe the reason has to do with Docker not being a complete running OS. It only has access to the things you preinstall in the container, not things that exist in the host OS. Perhaps virtualizing would be a better fit.
Speaking of thread titles, maybe changing this one to ‘IP camera and python scripts in docker’ or something similar would be better than ‘yet, another docker topic’.
You treat a Docker container like another host on the network. You should be able to ssh to the host fine, especially when the openhab user has the same UID on both systems.
You can’t, at least not directly. That’s the whole point of containers. They are isolated from the host as much as possible.
You will need to create some sort of server that OH can communicate with over the network. For example, https://github.com/rkoshak/sensorReporter which uses MQTT.
Again, you can’t. You need to access the video stream as if it were hosted on some other computer, which, from the container’s perspective, it is. In fact, when you don’t use --net=host the host machine is on a completely different networking subnet from the container.
It has more to do with the fact that the whole purpose of containers is:
- to provide isolation from everything else running outside the container, including other containers and the host machine
- only include the bare minimum set of libraries and programs to run the service; Python 3 is not required to run OH so Python 3 is not present.
But indeed, if you need more stuff inside the container than it comes with, you need to build a new Docker image that installs and includes that stuff, or use the container init scripts Kevin linked to in order to install it. But overall, Bruce’s suggestion of running that stuff in another container and exposing it over the network is the more typical containerized approach.
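For the ffmpeg question, that “build a new image” route might look like the sketch below. The base image tag is an assumption — match it to the openHAB version you actually run:

```dockerfile
# Sketch only: bake ffmpeg into a custom openHAB image so the ipcamera
# binding can find it inside the container.
FROM openhab/openhab:3.2.0
RUN apt-get update && \
    apt-get install --no-install-recommends -y ffmpeg && \
    rm -rf /var/lib/apt/lists/*
```

Build it with `docker build -t openhab-ffmpeg .` and point your run command or compose file at the new tag instead of the stock image.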
Typically a Docker container is not running sshd, so usually you can’t ssh into a running container. I’ve seen lots of articles claiming that doing so is in fact an anti-pattern. The more accepted way to interact with a running container is to use docker exec -it <container name or ID> <command>. If you want a shell into the container for some reason, you would use a shell as the command, e.g. docker exec -it openhab /bin/bash.
There are other aspects of a container that make thinking about it as just another host on the network not the best way to think about it.
For anyone who stumbles on this, please look at Docker Hub
I’m migrating my OH3 over to a Docker-based setup on my Synology, and I needed to be able to run Python scripts natively through exec.
I used this to create a small shell script that installs Python at the start of the container. It’s not the most amazing thing, but it works. Basically, create the cont-init.d folder and make sure it is mapped to /etc/cont-init.d inside the container. I then created a python-install.sh file and put:
apt-get install --no-install-recommends -y python
Yes, that causes my container to take longer to start, but it all works, so until I have a better answer, this is my answer.
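A slightly more defensive version of that init script might look like the sketch below. It is an assumption-laden variant, not the poster’s exact file: it adds the `apt-get update` and cache cleanup steps, skips the install when Python is already there, and uses the `python3` package name where the post used `python`:

```shell
#!/bin/sh
# Sketch of a python-install.sh for /etc/cont-init.d, assuming the openHAB
# image runs scripts in that folder at container start-up as described above.
set -e
if command -v python3 >/dev/null 2>&1; then
    # nothing to do on restarts where the install already happened
    echo "python3 already present, skipping install"
else
    apt-get update
    apt-get install --no-install-recommends -y python3
    rm -rf /var/lib/apt/lists/*   # drop the apt cache to keep the container small
fi
```

The idempotence check also means restarts stay fast once the package is in place.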
If the start-up times are a problem, you can create a Dockerfile and build a custom image based on the OH image. The Dockerfile would be pretty simple:
# pick the openHAB base tag that matches your install
FROM openhab/openhab:3.2.0
RUN apt update && \
    apt install -y python && \
    rm -rf /var/lib/apt/lists/*
Building the image on the Synology is an exercise left for the reader. But this will install Python into the image so it will be there already when the container is started.
After playing with it a bit, it’s only slow on the first start after clearing the container. Otherwise, since Python is installed into the container, it only added about 2 seconds to the start time, which is not noticeable.