Using NGINX Reverse Proxy (Authentication and HTTPS)

I would stay away from nested locations in NGINX. The matching logic is somewhat complex and it doesn’t really add anything to a simple configuration like this.

By the way, one of the issues with your initial configuration was the mismatch between redirecting to port 443 (via an HTTP 301 response) and listening on port 8443 for HTTPS connections.

You can do it this way, but it requires configuring frontail somewhat so that all assets load from the right path. Also, the location / block needs to be the last one, since it catches all requests.

Note that it's not the port itself that doesn't support SSL; any port can use it. It's the application listening on the port that needs to implement SSL, and frontail doesn't. openHAB does, on port 8443, but it's easier to let nginx handle it and then proxy all the other services. This can be done by configuring different server blocks in nginx with different ports, each one proxying one service, or by using one server block with different location blocks for the different services. The second alternative is a bit trickier to implement because you need to configure the services as well.
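For the first alternative, a rough sketch could look like this (untested; the domain, the certificate paths and the extra 9444 port are placeholders, and frontail is assumed to be on its default port 9001):

# nginx terminates SSL on two different ports and proxies each one
# to a different local backend
server {
    listen      443 ssl;                    # openHAB
    server_name example.duckdns.org;        # placeholder domain
    ssl_certificate     /etc/letsencrypt/live/example.duckdns.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.duckdns.org/privkey.pem;

    location / {
        proxy_pass            http://localhost:8080;
        proxy_set_header Host $http_host;
    }
}

server {
    listen      9444 ssl;                   # frontail log viewer (placeholder port)
    server_name example.duckdns.org;
    ssl_certificate     /etc/letsencrypt/live/example.duckdns.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.duckdns.org/privkey.pem;

    location / {
        proxy_pass            http://localhost:9001;
        proxy_set_header Host $http_host;
    }
}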

Edit: as @noppes123 wrote, don't put a location block inside another; it's very hard to predict where a request will end up. For prefix locations, nginx generally picks the one with the longest matching prefix, but I haven't tried nesting location blocks, so I don't know how those are handled.

Dear all,
How can I deny access to sitemap=admin with nginx? I would like to allow only one IP.

worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;


    sendfile        on;

    keepalive_timeout  65;

    server {
        listen                          80;
        server_name                     X.X.X.X;

        location /basicui/app?sitemap=admin {
            deny all;
        }

        location / {
            proxy_pass                              http://localhost:8080;
            proxy_set_header Host                   $http_host;
            proxy_set_header X-Real-IP              $remote_addr;
            proxy_set_header X-Forwarded-For        $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto      $scheme;
            satisfy                                 any;
            allow                                   X.X.X.X;
            deny                                    all;
            auth_basic                              "Username and Password Required";
            auth_basic_user_file                    /usr/local/etc/nginx/.htpasswd;
        }

    }
}

I tried this:

        location /basicui/app?sitemap=admin {
            allow myIPx.x.x.x;
            deny all;
        }

but it doesn't work.

I’m no NGINX expert, but something like this just might work:

location /basicui/app?sitemap=admin {
    set $myip all;
    if ($arg_sitemap = admin) {
        set $myip myIPx.x.x.x;
    }
    allow $myip;
    deny all;
    ...
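
Or, since allow and deny don't accept variables and the query string is never part of location matching, something built on geo and map might work instead (untested; 192.168.1.10 is just a stand-in for the one allowed IP):

# in the http block: flag requests coming from anything other than
# the allowed IP (placeholder 192.168.1.10)
geo $not_allowed_ip {
    default        1;
    192.168.1.10   0;
}

# block only when the admin sitemap is requested by a non-allowed IP
map "$arg_sitemap:$not_allowed_ip" $block_admin {
    default      0;
    "admin:1"    1;
}

# in the server block: match the path only (the query string is ignored
# by location matching) and check the flag inside
location /basicui/app {
    if ($block_admin) {
        return 403;
    }
    proxy_pass            http://localhost:8080;
    proxy_set_header Host $http_host;
}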

@YannF It will not work, because the second location allows access from all IPs, regardless of the first configuration you made.
I also think there should be a way to allow access from only one IP (read this whole big topic from the beginning).

I was trying to access the openHAB log viewer via HTTPS, but then I came up with the idea of making an exception for that port and allowing access via HTTP; it is a temporary solution for me.
Why do you want to block all IPs except one?

Hi Radu10,
It's because I want x.x.x.x to be able to connect without a password.
This configuration denies every IP that connects without a password, except one.
I'll try to do this today or on Monday.
Thanks for all your replies.

Problem:
If I configure nginx to load a sitemap directly via http://server/mysitemap, it doesn't work: my CSS doesn't load, there are a lot of bugs in the web page, and I cannot control the automation.
I don't understand how the panel is being loaded.

Hello all, I'd kindly ask for some guidance in getting my nginx up and running.

I realise this might be really basic for you guys, considering how advanced your posts feel so far… it isn't for me.

Following


and the first post in this thread and a number of other guides regarding certbot and let’s encrypt:

I breezed through the steps with no issues up until managing to get my certs with Let's Encrypt + certbot, where I hit the first hurdle.
After quite some struggling with my ISP-supplied router not applying NAT and firewall changes (massive loss of time :cry: ) I moved forward.

As long as I have listen 80; without SSL and without the 301 redirect from 80 to https + 443, my nginx config works: I have at least achieved authentication. This has been my failsafe fallback for 10 days now :confounded:

As soon as I switch to listen 443;, go to any browser on any device (with cleared cache, cookies, session etc., having done sudo nginx -t + sudo service nginx restart) and call my RPi's internal IP via https://192.168.1.32:443, I get a never-ending blank loading page. If I just type in the IP (meaning http + 80) I get a “page not available” error page.

I've tried swapping “mydomain_or_myip” back in instead of either of the two actual values; however, the only way I can get through this step is by setting proxy_set_header Host $proxy_host and keeping the “mydomain_or_myip” server_name. This way I can authenticate from the LAN by calling the IP with https + :443, and the browser asks me to accept the certificate and authenticate.

So I then followed up by adding the 301 redirect server block.

If I have proxy_set_header Host $proxy_host;
I get a “page unreachable https://mydomain_or_myip” error.
So the 301 redirect from http + 80 to https + 443 happened, but somehow it doesn't get into openHAB's location.
Out of curiosity, if I swap $http_host back in there is no difference.
So back I go to $proxy_host, but with no valid result yet.

One last thing I thought of doing was to swap my actual domain name into the 301 server_name.
This is a funny one!
Basically that means the call would go from
my PC > http > RPi > (asks to accept certs) https > dyndns provider > my router (blocks it, but I could open port 443 too…) > my RPi
The funny thing is, I have the feeling this one would work :joy:, but I figure the whole point of making that 301 was to do some kind of local translation from http to https; otherwise, if I have to open both 80 and 443 on my super-responsive ISP-provided router, I might as well have opened 443 in the first place, without the 301 redirect and without opening port 80.

So I've tried changing the 301 server_name to localhost: doesn't work, I think it tries to resolve it on the PC I'm browsing on :joy: :rofl:.
So I've tried swapping in the actual IP of the RPi (192.168.1.32) as the 301 server_name, and obviously in the LAN it works; but as soon as I try to connect from outside, my tablet or phone starts looking for 192.168.1.32 in whatever network I happen to be on (this could get even funnier if there actually is something at that address on that network).

Please, please, please help; I'm so lost I'm almost delirious…

This is my nginx file:

server {
    listen                          80;
    server_name                     mydomain_or_myip;
    return 301                      https://$server_name$request_uri;
  }

server {
    listen                                    443 ssl;
    server_name                               mydomain_or_myip;

    ssl_certificate                           /etc/letsencrypt/live/brandolin1.homepc.it/fullchain.pem;
    ssl_certificate_key                       /etc/letsencrypt/live/brandolin1.homepc.it/privkey.pem;
    add_header                                Strict-Transport-Security "max-age=31536000";

    location / {
        proxy_pass                            http://localhost:8080/;
        proxy_buffering                       off;
        proxy_set_header Host                 $proxy_host;
        proxy_set_header X-Real-IP            $remote_addr;
        proxy_set_header X-Forwarded-For      $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto    $scheme;
        satisfy                               any;
        allow                                 127.0.0.1;
        deny                                  all;
        auth_basic                            "Username and Password Required";
        auth_basic_user_file                  /etc/nginx/.htpasswd;
    }
    location /.well-known/acme-challenge/ {
        root                                  /var/www/mydomain;
    }
}

Ok, let’s see…

First of all, if you want to access OpenHAB via https from outside your network, you need to open port 443 in your router.

Second: the $server_name variable in nginx holds the value you specify in server_name, so if it is your IP it will redirect to that, and if it's your domain it will redirect to that, regardless of what you type in the browser.

Third: if you have a certificate for your domain and try to access your server by IP, your browser will likely show a certificate warning because the name doesn't match.

What I would have done is specify two servers in nginx, one using the IP address over port 80 (no need for SSL on a secured network); just make sure it isn't reachable from the outside.

For the second server you can use the config above, with your domain as server_name.
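
A rough sketch of that layout (untested; 192.168.1.32 and example.dyndns.org are placeholders for your LAN IP and domain):

# LAN-only server: plain HTTP, addressed by the Pi's LAN IP and not
# port-forwarded on the router, so it isn't reachable from outside
server {
    listen      80;
    server_name 192.168.1.32;           # placeholder LAN IP

    location / {
        proxy_pass            http://localhost:8080;
        proxy_set_header Host $http_host;
    }
}

# ...and next to it, the 443 ssl server block from the config above,
# with server_name set to the actual dyndns domain instead of
# "mydomain_or_myip"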

Hey pacive, thank you so much for your reply!

That mirrors my tests… hmm…
So then, if I understand correctly: if I were not including the 301 block, I wouldn't even need to define the actual server name, since there is no redirection, and I could leave it as a generic “myip_or_mydomain”, right?

+

But port 80 needs to be reachable and listening for Let's Encrypt…

Soo… the guide (aside from the user-authentication part) is a workaround for the fact that we NEED to keep 80 open for certbot + Let's Encrypt renewal!
Total epiphany! I completely missed that.
Now I see the logic.

Correct me if I misunderstood, then:

The 301 redirect back to the dyndns address over https + SSL
IS DESIRED
from a “connection from outside” perspective, and allows the certificate-renewal calls to be split out,
WHILE
it doesn't make any sense from inside the LAN, as it redirects what would otherwise have been a local call through the internet: generating pointless internet traffic, making that session dependent on internet response-time fluctuations, and leaving it dependent on the internet connection staying alive during use.

I might be starting to get some footholds here.
Sorry if I'm saying obvious things for most of you, but as a non-networking-expert openHAB user I need to understand the intended and unintended use cases of the guide in order to get it working. Even something as simple as when to swap “myip” for the actual value, and when not to because it is nginx's way of saying “localhost”, isn't obvious to me.


Edit:
Thanks to pacive's reply I got remote access working.
The few key pieces of information needed to make the guide work for me were:

  • in the 301 server block, server_name is my actual dyndns address
  • the second server's server_name needed to stay “mydomain_or_myip”. If I swap it for my dyndns address I end up in endless loading, whereas if I swap it for my local IP I get nowhere from outside my LAN
  • proxy_set_header Host: accessing from outside my LAN doesn't seem to make a difference whether it is $proxy_host or $http_host, but prior to adding the 301 block, from inside the LAN, only $proxy_host was working, so I kept that.

That's not the only thing server_name does; it also tells nginx which requests to respond to.

When you type an address in your browser, a couple of things happen: first it does a DNS lookup to see which IP corresponds to the address. Then it connects to that IP address, on port 80 for HTTP requests or port 443 for HTTPS requests (you can manually connect to another port, but these are the defaults). It also sends the address you typed in a header (Host).

Nginx then listens for requests on the ports specified in the listen directives. When it receives a request, it first tries to match the value of the request's Host header against a server_name, and that server block is used to handle the request. If none matches, it uses a default one. That way you can have multiple servers listening on the same port but serving different addresses. You can also use an IP address as server_name (for example your LAN IP) if you want to access the server by IP.

If you have literally written “mydomain_or_myip” as server_name, it probably only works because it is the first server block and nginx uses it as the default for any request it cannot find a matching server_name for. You should write either your dyndns address or your IP address.
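
To illustrate with a sketch (the names below are placeholders, not your actual values): two server blocks can share the same port, and nginx picks one by comparing the request's Host header against server_name, falling back to the default_server when nothing matches:

# requests whose Host header is example.dyndns.org land here
server {
    listen      80;
    server_name example.dyndns.org;          # placeholder public name
    return 200 "matched by server_name";
}

# any other Host header (for example the LAN IP, or a raw IP scan)
# lands here because of default_server
server {
    listen      80 default_server;
    server_name 192.168.1.32;                # placeholder LAN IP
    return 200 "default server";
}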

Edit:

Certbot can actually handle the redirect to https. I redirect all http traffic to https, but certbot still works! You need to have both port 80 and port 443 open in your router though.

This depends on your router. Some routers can act as a local DNS server, so the address can be resolved to a local address. I don't think most ISP-supplied routers do, however.

Hi,

I got nginx to work as a proxy for just openHAB by following this guide. However, I also want to proxy my Grafana site running on the same machine. I saw that @rlkoshak was trying to do something similar, without any luck, but I get different errors.
Remotely
When I try to go to mydomain.net/openhab I'm prompted for credentials and then redirected to mydomain.net/start/index, which just shows a “404 Not Found” error message. When I try to go to mydomain.net/grafana I'm not prompted for credentials; instead I just get {"message": "Basic auth failed"}.
Locally
I'm redirected straight to the 404 Not Found error in both cases.

So, what's wrong with my authentication? Why do I get two different errors; shouldn't I at least get the same one in both cases? Secondly, is it even possible to get this kind of setup (openHAB and Grafana) working with nginx, or do I need apache2 for it, as someone posted about?

My file looks like this:

server {
        listen                          80;
        server_name                     mydomain.net;
        return 301                      https://$server_name$request_uri;
}
server {
        listen                                  443 ssl;
        server_name                             mydomain.net;

        location /openhab {
                rewrite                                 ^/openhab(/.*)$ $1 break;
                proxy_pass                              http://localhost:8080/;
                proxy_set_header Host                   $http_host;
                proxy_set_header X-Real-IP              $remote_addr;
                proxy_set_header X-Forwarded-For        $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto      $scheme;
                auth_basic                              "Username and Password Required";
                auth_basic_user_file                    /etc/nginx/.htpasswd;
                satisfy                                 any;
                allow                                   <local range>;
                allow                                   127.0.0.1;
                deny                                    all;
        }
        location /grafana {
                rewrite                                 ^/grafana(/.*)$ $1 break;
                proxy_pass                              http://localhost:3000/;
                proxy_set_header Host                   $http_host;
                proxy_set_header X-Real-IP              $remote_addr;
                proxy_set_header X-Forwarded-For        $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto      $scheme;
                auth_basic                              "Username and Password Required";
                auth_basic_user_file                    /etc/nginx/.htpasswd;
                satisfy                                 any;
                allow                                   <local range>;
                allow                                   127.0.0.1;
                deny                                    all;
        }
        location /.well-known/acme-challenge/ {
                root                                    /var/www/mydomain.net;
        }

    ssl_certificate /etc/letsencrypt/live/mydomain.net/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mydomain.net/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

}

I guess this is the same problem I have. Everything is working fine: I can access my.domain.se over https:// and, after entering the credentials, I get the OH page.

I can select Basic UI and see my sitemap, but the webview elements, which are graphs from Grafana running at localhost:3000, do not show. I have not added the location /grafana section to my nginx config as described by @kosken, but I have the feeling that I would end up with the same error.

So, is it worth plodding on, or should I look at apache2?

Hi,
I configured a reverse proxy with nginx on port 443 with authentication, which works.

I run openHAB 2.4 and Grafana v6.2.2, everything on CentOS on one host.
I'm including the Grafana graphs with:

Image refresh=60000 url="http://openhab:3000/render/d-solo/TEoheCWiz/openhab?panelId=8&orgId=1&from=now-7d&to=now&width=1000&height=500"

These images are never shown, neither in the app nor in Basic UI. Everything else works fine. If I connect directly to openHAB on port 8443, without the reverse proxy, the graphs are shown.

If I click on the broken image I see:

https://openhab.domain.de/proxy?sitemap=default.sitemap&widgetId=0204&t=1560674611752

and if I open that in a new tab I get a JSON response:

message “Basic auth failed”

Any idea what is wrong? It looks like many users are having this issue, but I have found no working solution.

By the way, if I use Apache as a reverse proxy the same thing happens.

Can anyone point me to an updated guide for setting up an NGINX reverse proxy in Docker for OH 2.4?
Also, would I need to change any settings on my pfSense 2.4 firewall/router?

Anyone unable to get nginx to work properly: you can install openHABian.
It has an option to also install a working nginx; then just copy and paste the config.
But general network, Docker, firewall and proxy setup are all pretty open, complex questions and certainly beyond the scope of this forum, which is openHAB, so please g**gle those rather than asking here.

Thanks Markus. But I would really prefer to install nginx as a Docker container in my Ubuntu VM, as that is how my OH is set up (along with mqtt, frontail and other Docker containers).

I found the solution:
When using the above nginx conf examples, an image/graph included via a sitemap is not shown.
To get images from Grafana shown, you need to add:

proxy_set_header   Authorization     "";

in the nginx config file
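
For context, here is a sketch of where that line could sit, assuming a location / block like the earlier examples; clearing the header stops the browser's Basic Auth credentials (which nginx has already checked) from being forwarded upstream, where they can end up being presented to Grafana, which doesn't know them:

location / {
    proxy_pass                         http://localhost:8080;
    proxy_set_header Host              $http_host;
    proxy_set_header X-Real-IP         $remote_addr;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    # don't pass the browser's Basic Auth credentials to the backend;
    # nginx has already verified them via auth_basic below
    proxy_set_header Authorization     "";

    auth_basic                         "Username and Password Required";
    auth_basic_user_file               /etc/nginx/.htpasswd;
}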

Thank you Heiko for that information! It works like a charm for me too!

I had a problem with openHAB where, about every minute, it would show that the connection was lost.

When looking in the browser console, it was showing me this:

net::ERR_SPDY_PROTOCOL_ERROR

After a bit more looking, I saw the same request timing out every minute:

https://url/rest/sitemaps/events/90c5c899-c7fb-48cc-804d-613315145da7?sitemap=site&pageid=page

So every minute this request failed to load, as nginx seemed to close the connection to it.

Adding this to the nginx location config solved the problem:

proxy_read_timeout 600s;

I am not sure if 600s is needed, or if a lower number would be enough, but I am no longer getting the message about the connection being lost.
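
In context that could look like this (a sketch assuming a location / block like the earlier examples); the /rest/sitemaps/events/… subscription is a long-lived server-sent-events stream, so it needs a read timeout longer than nginx's 60-second default:

location / {
    proxy_pass            http://localhost:8080;
    proxy_set_header Host $http_host;

    # the /rest/sitemaps/events/... subscription is a long-lived SSE
    # stream; the default 60s read timeout closes it every minute,
    # so raise it
    proxy_read_timeout    600s;
}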

In my case everything works pretty much fine, apart from openhab:3000 (Grafana, SSL error) and the log viewer on openhab:9001 (SSL error). Can someone tell me how to fix that?