Occasional "Offline" in BasicUI when proxying via nginx / connection timed out

I am running OH 2.4 official and set up an nginx proxy to access my internal installation from the outside, according to the wiki.
Everything runs well apart from an occasional “offline” in BasicUI, with the nginx error log saying:

upstream timed out (110: Connection timed out) while reading upstream, client: xx.xx.xx.xx, server: openhab.xx.xx, request: "GET /rest/sitemaps/events/0846c4f0-ce6c-4606-b278-786aa03c2cee?sitemap=yyy&pageid=yyy HTTP/1.1"

I found related posts talking about the scheme (http/https) but this did not solve it.

Further analysis showed that the problem affects only the event-stream XHR backchannel from OH via nginx to the browser.
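To narrow it down, the stream can be watched directly and through the proxy (illustrative commands only; host, port, sitemap name and subscription id are placeholders — `-N` tells curl not to buffer its output, so events print as they arrive):

    # direct to openHAB: events should appear immediately
    curl -N 'http://openhab-host:8080/rest/sitemaps/events/<subscription-id>?sitemap=yyy&pageid=yyy'

    # through nginx: with default proxy buffering, events arrive late or not at all
    curl -N 'https://openhab.example.com/rest/sitemaps/events/<subscription-id>?sitemap=yyy&pageid=yyy'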

Nginx does some default buffering/caching, and that does not play well with the XHR streams. A real fix would probably be in the Runtime/Jetty: set HTTP headers only for the XHR backchannel, like this

Cache-Control: no-cache;
X-Accel-Buffering: no;

as mentioned here.
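For illustration, such a fix could look roughly like the following servlet filter. This is only a hedged sketch, not actual openHAB code — the path `/rest/sitemaps/events`, the class name and the use of plain `javax.servlet` are assumptions based on the request shown in the log above:

    import java.io.IOException;
    import javax.servlet.*;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Sketch: add the two headers only for the SSE/XHR backchannel,
    // so nginx (and other proxies) stop buffering just that endpoint.
    public class NoBufferingFilter implements Filter {

        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpServletResponse response = (HttpServletResponse) res;
            // Assumed path of the event-stream backchannel (see the log line above)
            if (request.getRequestURI().startsWith("/rest/sitemaps/events")) {
                response.setHeader("Cache-Control", "no-cache");
                response.setHeader("X-Accel-Buffering", "no"); // tells nginx: do not buffer this response
            }
            chain.doFilter(req, res);
        }
    }

`X-Accel-Buffering: no` is an nginx-specific response header that disables proxy buffering per response, so other proxies simply ignore it.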

As a workaround (that indeed works for me), really disable any kind of buffering and set a large timeout while proxying to OH:

    location / {
            proxy_pass                      http://your-openhab-server-and-port;
            proxy_buffering                 off;
            proxy_request_buffering         off;
            proxy_http_version              1.1;

            proxy_set_header                Host $http_host;
            proxy_set_header                X-Real-IP $remote_addr;
            proxy_set_header                X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header                X-Forwarded-Host $server_name;
            proxy_set_header                X-Forwarded-Proto $scheme;  # or just "https" to force it

            client_body_buffer_size         0;
            client_max_body_size            0;
            proxy_max_temp_file_size        0;
            proxy_read_timeout              18000;
            proxy_send_timeout              18000;

            gzip                            off;
    }

If anybody else is experiencing these “offline” messages, please give my workaround a try and post your results here (so the wiki for proxying with nginx could be updated).

As this disables buffering/caching for everything else as well (sitemaps, icons and so on), it would be great to give the extra HTTP headers a try (they do not hurt other proxies, or setups with no proxy at all).
I am not a Java-guy, so I can’t do it myself, but I could test a nightly/special build.
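An untested refinement of the workaround (a sketch only; the path is assumed from the log line above): keep buffering on for the regular UI and apply the no-buffering directives just to the event-stream location. Since nginx picks the longest matching prefix, this block can sit next to the general `location /`:

    location /rest/sitemaps/events/ {
            proxy_pass                      http://your-openhab-server-and-port;
            proxy_buffering                 off;
            proxy_http_version              1.1;
            proxy_read_timeout              18000;
            proxy_send_timeout              18000;
            gzip                            off;
    }

That way the rest of the REST API (e.g. what PaperUI uses) keeps the default buffering behaviour.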



A very late reply to your excellent post - unfortunately this doesn’t work for me… I still get offline messages and errors in the nginx log.


You're right, the errors were gone after adding the following settings to my location block:

        client_body_buffer_size         0;
        client_max_body_size            0;
        proxy_max_temp_file_size        0;
        proxy_read_timeout              18000;
        proxy_send_timeout              18000;

        gzip                            off;

But afterwards I had problems changing settings in PaperUI, which is why I had to remove it again.

Have you got another idea, or have you already improved your configuration?

Thank you!


Yes, unfortunately it is better but not completely OK with this configuration.
When switching sitemaps in the iOS app, OH also throws some errors:

2019-11-12 22:45:38.178 [ERROR] [ersey.server.ServerRuntime$Responder] - An I/O error has occurred while writing a response message entity to the container output stream.
org.glassfish.jersey.server.internal.process.MappableException: org.eclipse.jetty.io.EofException
at org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundWriteTo(MappableExceptionWrapperInterceptor.java:92) ~[?:?]
at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162) ~[171:org.glassfish.jersey.core.jersey-common:2.22.2]

But I have no idea how to prevent that :frowning:

This seems like MAYBE it is working. I really hate voodoo-magic fixes, but you can't argue with something working.

Thanks. I’ll subscribe to this thread and update later.