I’m creating multiple openHAB containers (X, Y, Z) from the same openHAB image. The problem is that when I run the following command, it always drops into the console of the first container that started an openHAB instance, whereas I would expect it to open that individual container’s console.
sudo docker exec -it X /openhab/runtime/bin/client
The following is the command I used to create the instances (replacing X with Y and Z, and using different HTTP and HTTPS ports):
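The original run command is not shown, but based on the thread a sketch would look something like the following. This is an assumption, not the poster’s exact command; `OPENHAB_HTTP_PORT` and `OPENHAB_HTTPS_PORT` are the environment variables the official openHAB image documents for moving the HTTP/HTTPS listeners:

```shell
# Hypothetical reconstruction (the exact original command is not quoted in the thread).
# Repeat for Y and Z with different OPENHAB_HTTP_PORT / OPENHAB_HTTPS_PORT values.
sudo docker run -d --name X \
  --net=host \
  -e OPENHAB_HTTP_PORT=8080 \
  -e OPENHAB_HTTPS_PORT=8443 \
  openhab/openhab
```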
Getting to the console uses ssh, even when going through openhab-cli or that shell script. You are running these containers with --net=host, which means the first container grabs the network ports (8080, 8443, 8101, and others) and the rest of the containers hit conflicts and bind errors. Only some of those ports can be changed through environment variables, and the ssh port is one of those that cannot.
The ssh port used to access the Karaf console is 8101, so you’ll need to stop using --net=host and instead map the ports individually, so that each container uses a different set of host ports for everything, not just HTTP/HTTPS. Then use ssh -p 8101 openhab@localhost (substituting each container’s mapped host port) to access that container’s console from the host.
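For example, each container could publish the Karaf console port 8101 on a distinct host port. The host-side port numbers below are arbitrary choices, and the `...` stands for whatever other options (volumes, HTTP port mappings, etc.) each container needs:

```shell
# Publish container port 8101 on a different host port per container
sudo docker run -d --name X -p 8101:8101 ... openhab/openhab
sudo docker run -d --name Y -p 8102:8101 ... openhab/openhab
sudo docker run -d --name Z -p 8103:8101 ... openhab/openhab

# Then, from the host, each console is reachable on its own port:
ssh -p 8101 openhab@localhost   # console of X
ssh -p 8102 openhab@localhost   # console of Y
ssh -p 8103 openhab@localhost   # console of Z
```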
Alternatively, do not publish port 8101 from any container; then the docker exec command you wrote above should work as-is, since it runs from inside the container, where the port is still available.
But in the end, you cannot run multiple instances of OH on the same machine with full functionality while using --net=host.
Why not? The services running inside the containers can still reach out over the network to communicate. All --net=host does is skip the port forwarding and mapping and leave everything at the defaults. In other words, it changes how other services connect to OH, not the other way around.
But consequently that means you can’t run multiple copies of the openHAB container on the same host unless you remap all the ports of every container to unused host ports. And the set of ports OH listens on varies, since some bindings open their own ports.
So your choices are:

- use port mapping (i.e. the -p option) to expose and map only the ports you want reachable from the network for incoming connections, leaving the rest available only inside the container
- continue to use the --net=host option and run the containers on different hardware/virtual machines
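A sketch of the first option for three containers follows. The host-side port numbers are arbitrary choices; anything not published (8101 included) stays reachable only from inside each container, e.g. via docker exec:

```shell
# Each container keeps the default internal ports; only the host side differs.
sudo docker run -d --name X -p 8080:8080 -p 8443:8443 openhab/openhab
sudo docker run -d --name Y -p 8081:8080 -p 8444:8443 openhab/openhab
sudo docker run -d --name Z -p 8082:8080 -p 8445:8443 openhab/openhab
```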
@rlkoshak Thank you for your guidance. Yes, I finally got it to work without --net=host. For those facing a similar issue: removing --net=host may break the containers’ access to services running on the host. You need to explicitly change iptables to allow traffic from docker0 to the host with the following command:
sudo iptables -A INPUT -i docker0 -j ACCEPT
and don’t forget to include `--add-host=host.docker.internal:host-gateway` (the host-gateway special value requires Docker 20.10 or later), so that you can reach the host as host.docker.internal instead of using the docker0 interface IP address.
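Putting the two pieces together, a run command incorporating the host-gateway mapping might look like this (container name and published ports are placeholders, not from the thread):

```shell
# Requires Docker 20.10+ for the host-gateway special value
sudo docker run -d --name X \
  --add-host=host.docker.internal:host-gateway \
  -p 8080:8080 -p 8443:8443 \
  openhab/openhab

# From inside the container the host is then reachable by name, e.g.:
sudo docker exec -it X ping -c 1 host.docker.internal
```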