Found multiple local interfaces - ignoring - again OH3M5

Having difficulty getting OH3M1 → OH3M5 upgrade running.

[WARN ] [org.openhab.core.net.NetUtil ] - Found multiple local interfaces - ignoring 10.0.28.79

Googling this issue produces lots of stuff over many years.

The official OH3M5 container is running in Swarm mode. OH3M1 runs fine, so this is a new thing (for me). I've tried everything that others have reported as working; nothing works for me. OH3M5 always ignores the correct interface :frowning:

Any help most welcome - rolling back to OH3M1 in the meantime. Might try the intermediate milestones to see if any of them work for me.

Thanks in advance for any help.
PhillB

P.S. Using Portainer 2.5.1 with 10 pi nodes

I am not sure that is a supported mode for OH.

First of all, it’s a warning, not an error. It may or may not be something that matters to you.

If it does in fact matter, in MainUI you can pin OH to the primary address you want it to use under Network Settings.

I don’t know that it’s not supported, but it’s certainly not something tested. Given that OH is a home automation hub, I’d guess that 90%+ of all OH users have a hardware dependency on their OH server, meaning they have to pin OH to a specific machine anyway, eliminating most of the benefits of running it in a swarm.

As far as OH is concerned, though, whether it’s running in a swarm or not should be hidden from it. But I don’t know whether something in the networking setup might be causing problems here.

Thank you so much for the replies.

With regard to the warning, indeed it may have nothing to do with my issue.

OH has been running fine on whichever node it’s been assigned to (mostly the RPi4B 8GB nodes, of which there are 3). It has a private network associated so that it never touches the real network. All contact with the outside world is via various proxies, something HA cannot do, and hence the reason for picking OH. If “you” do some bad thing, then data cannot leak to OH “home” as it does in HA.

The point of bringing that up is that I don’t have control over the address range assigned to the various interfaces. At least I haven’t configured addresses (if it can even be done).

Further digging reveals the interface is active but DNS can no longer resolve anything on the private LAN, including the machine’s own address. Furthermore, the other machines on the private LAN, all of which are working OK, cannot see the OH machine in DNS. It’s as if the container has failed to start up correctly.

However, I can connect to the OH container’s console and wander around inside until it gets restarted, which gives me about 5 minutes.

Eventually Portainer/Swarm declares the startup a failure, closes the container down, and schedules it to start on another node. This just repeats over and over, forever.

As for the upgrade itself, OH reports that everything is OK and that it upgraded from OH3M1 to OH3M5 correctly. The only thing wrong appears to be the network issue.

I’ve put some stuff on a web page here describing my antics with OH and Tasmota gadgets: ElectricBrain | OpenHAB/Tasmota

And thanks again for the help.

Sorry, I forgot to mention: I cannot get to MainUI to pin an address. That might be doable via the console (Karaf), although it says it’s already running and won’t start.

Cheers.

In addition to running in a swarm configuration, this too is an unusual configuration for openHAB, because openHAB advertises itself on the network so it can be discovered, and it depends on network broadcasts for automatic discovery of a number of APIs and technologies. So most people run OH with net=host to allow the full range of networking capabilities.
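For reference, a minimal sketch of that usual net=host arrangement in compose form (the image tag and volume paths here are illustrative placeholders, not a recommendation for this Swarm setup):

```yaml
# Illustrative only: the common single-host deployment described above.
services:
  openhab:
    image: "openhab/openhab:3.1.0.M5-debian"
    network_mode: host            # broadcasts/mDNS reach openHAB directly
    volumes:
      - ./conf:/openhab/conf
      - ./userdata:/openhab/userdata
      - ./addons:/openhab/addons
```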

All things considered, this is likely going to be a networking problem that is very specific to your particular setup. The number of users here who run OH in Docker is relatively small. The number who run OH in any sort of swarm/kubernetes/openshift type environment can probably be counted on one’s fingers.

The DHCP/DNS problems are completely outside of openHAB and even outside the openHAB image. That’s all managed by Docker and the arguments you pass to the container when you start it. So I recommend fixing that problem first, as nothing else you mess with will work until you do. And the solution to that is likely going to be found outside of this forum.
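For example, the resolver and hostname the container sees are whatever Docker hands it, set per-service rather than inside the image; a sketch (the address and domain are placeholders):

```yaml
services:
  openhab:
    hostname: openhab             # name the container sees for itself
    dns:
      - 10.0.28.1                 # placeholder: your private-LAN DNS server
    dns_search:
      - lan.example               # placeholder search domain
```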

There should be logs, particularly the logs captured by Docker (the stdout/stderr from the container starting up), which might have some useful information. You haven’t mentioned anything beyond the one warning which, now that you’ve provided some details, is at worst a symptom of other problems and not the root cause. As far as I’ve followed, and I tend to read at least the subject of all the issues, there have been no changes to anything related to networking in the OH container in quite some time.

That’s not going to fix anything except to make that warning message in openhab.log go away.

I don’t understand this at all.

Either you are using the openHAB Cloud connector or you are not.

If you are using the openHAB Cloud Connector and myopenhab.org, then there is nothing your proxies can do to prevent data from leaking to myopenhab.org. It’s the openHAB server itself that initiates the connection to myopenhab.org. And unless you have a really sophisticated firewall that inspects each message exchanged to block only those messages that contain data you don’t want leaked, it’s going to be all or nothing: either myopenhab.org will have authenticated access to your entire REST API, or no access whatsoever.

If you are not using the Cloud Connector add-on, OH doesn’t communicate with anything on the Internet except to download add-ons (various add-ons do, but I’m just talking about openHAB core). And if you manually download the addons.kar file and drop it into the folder mounted as the addons folder in the container, it won’t even connect to the Internet for that.
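A sketch of that offline route, assuming the standard layout where a host folder is bind-mounted to /openhab/addons (the paths and .kar filename here are illustrative stand-ins, not a real download):

```shell
# Stand-in for the host folder that is bind-mounted to /openhab/addons.
mkdir -p /tmp/openhab/addons

# Stand-in for the manually downloaded archive; the real file comes from
# the openHAB distribution matching your exact version.
touch /tmp/openhab-addons-3.1.0.M5.kar

# Drop it into the mounted folder; openHAB scans this folder at startup,
# so no Internet access is needed to install add-ons.
cp /tmp/openhab-addons-3.1.0.M5.kar /tmp/openhab/addons/
```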

So I don’t see the point here. If you don’t want to leak data to the openHAB Cloud, don’t use it. Problem solved. No need for proxies or fancy network isolation or anything like that. Just don’t install the add-on. If you do want to connect to the openHAB Cloud for remote access, then these intermediate proxies are just going to break that, assuming that OH isn’t already bypassing them entirely.

As far as I can tell you are going to great lengths to solve a problem that doesn’t need to be solved.

Thanks Rich,

Agreed, this is an unusual configuration.

By way of explanation, and at the risk of becoming overly philosophical, the paranoia essentially stems from the discovery that one of the WiFi access points here was (necessarily) sniffing the MAC addresses of all devices on the LAN/WiFi. It then set about sending that information, along with dates, times, etc., to Amazon IoT for analysis. The manufacturer, Netgear, discloses all this in their privacy statement. i.e. Netgear knows where my phone has been and when. OpenWRT and friends are the answer to this issue, but this expensive piece of hardware isn’t supported…

Having a system inside that can listen to such things and report them remotely is a (possibly personal) problem for me. Obviously you trust your software. However, here Docker is using the “container” aspect of containers to restrict access to such things. And pretty successfully so far, really.

In this case the container will never be allowed to operate with mode=host, where it has unfettered access to the network. In my view it is simply not possible to ensure network security and privacy in this scenario otherwise. This is a differentiator between HA and OH: HA appears to go to great lengths to ensure it can spy on its users, either by accident or design.

Thanks for the advice regarding the network issue being out of scope. It helps a lot knowing that nothing has been done to specifically change how the container operates. It sounds like this is a Docker issue.

Thanks again for your help. And please keep up the excellent work. OH is the best and I love it.

If I find the cause I’ll post the solution here too.

Further probing has found that the OH3M5 container has an added Docker “health check” which does not exist in the OH3M1 container. It seems the container is not considered running until it passes a healthcheck.

Healthcheck

    Interval  300000000000
    Retries   3
    Test      CMD-SHELL
              curl -f http://localhost:${OPENHAB_HTTP_PORT}/ || exit 1
    Timeout   5000000000

Hostname  b228ca881247
Image     openhab/openhab:3.1.0.M5-debian.arm64@sha256:25ad656a186011fcbeb05f4e01f8b50849ed27ea3a5231d19e63292f2c61600e
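Those long Interval and Timeout values are durations in nanoseconds (Docker reports healthcheck durations that way), which a quick shell sanity check confirms:

```shell
# Docker inspect reports healthcheck durations in nanoseconds.
interval_ns=300000000000
timeout_ns=5000000000
echo "interval: $((interval_ns / 1000000000)) s"   # interval: 300 s
echo "timeout: $((timeout_ns / 1000000000)) s"     # timeout: 5 s
```

So the check runs every 5 minutes with a 5-second timeout and 3 retries, which matches the roughly 5 minutes of console access described above.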

Because the system here requires the use of proxies, this health check is being proxied. Chicken and egg.

The solution should be something like making curl bypass the proxy for the healthcheck.
It seems the healthcheck may need to be rewritten to something like this, using curl’s --noproxy option (relying on the no_proxy environment variable didn’t work here):

curl --noproxy localhost -f http://localhost:${OPENHAB_HTTP_PORT}/ || exit 1
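If disabling the check entirely feels too blunt, the same idea can be expressed by overriding the healthcheck in the stack definition; a sketch mirroring the image’s built-in test with --noproxy added:

```yaml
services:
  openhab:
    healthcheck:
      # $$ keeps the variable for the container shell instead of
      # compose-time interpolation on the host.
      test: ["CMD-SHELL", "curl --noproxy localhost -f http://localhost:$${OPENHAB_HTTP_PORT}/ || exit 1"]
      interval: 5m      # matches the image's 300000000000 ns
      timeout: 5s
      retries: 3
```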

Add this to your compose.yml or Docker Swarm stack definition:

healthcheck:
  disable: true

The container instantly started. SOLVED.