Using NGINX Reverse Proxy (Authentication and HTTPS)

Hi all,

I have been trying to rewrite the openHAB 2 documentation with a tutorial on how to set up NGINX for use with openHAB 2. I see a lot of questions about authentication and HTTPS, and I feel these are the steps that would make it easier for people. I’m looking for any type of feedback and questions. There’s a lot of information here, but I hope this helps; you can see the intended formatting in the docs here.




Running openHAB Behind a Reverse Proxy

A reverse proxy simply directs client requests to the appropriate server. This means you can proxy connections to http://mydomain-or-myip to your openHAB runtime. You just have to replace mydomain-or-myip with either an internal or external IP (e.g. xx.xx.xx.xx) or a domain if you own one that links to the external IP of openHAB (e.g. openhab.mydomain.tld).

Running openHAB behind a reverse proxy allows you to access your openHAB runtime via ports 80 (HTTP) and 443 (HTTPS). It also provides you a simple way of protecting your server with authentication and secure certificates.

Setting up NGINX to Proxy openHAB

These are the steps required to use NGINX, a lightweight HTTP server, although you can use Apache HTTP server or any other HTTP server which supports reverse proxying.

Installation

NGINX runs as a service in most Linux distributions; installation should be as simple as:

sudo apt-get update && sudo apt-get install nginx

Once installed, you can test whether the service is running correctly by going to http://mydomain-or-myip; you should see the default “Welcome to nginx” page. If you don’t, check your firewall and make sure that port 80 (and 443 for HTTPS later) is not blocked and that services can use it.
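If the page doesn’t appear, a quick way to confirm whether anything is actually listening on those ports is the ss utility from the iproute2 package (a sketch; netstat -tln works similarly on older systems):

```shell
# List listening TCP sockets; with nginx running you should see entries
# for port 80 (and later 443).
ss -tln | grep -E ':(80|443)[[:space:]]' || echo "nothing listening on 80/443 yet"
```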

Basic Configuration

NGINX configures the server when it starts up based on configuration files. The location of the default setup is /etc/nginx/sites-enabled/default. To allow NGINX to proxy openHAB, you need to change this file (make a backup of it in a different folder first).

The recommended configuration below assumes that you run the reverse proxy on the same machine as your openHAB runtime. If this doesn’t fit your setup, just replace proxy_pass http://localhost:8080/ with your openHAB runtime’s address (such as http://youropenhabhostname:8080/).

server {
	listen                          80;
	server_name                     mydomain-or-myip;

	location / {
		proxy_pass                            http://localhost:8080/;
		proxy_buffering                       off;
		proxy_set_header Host                 $http_host;
		proxy_set_header X-Real-IP            $remote_addr;
		proxy_set_header X-Forwarded-For      $proxy_add_x_forwarded_for;
		proxy_set_header X-Forwarded-Proto    $scheme;
	}
}

After saving the file, but before committing the changes to your server, you should test the configuration for errors; this is done with the command:

sudo nginx -t

If you see that the test is successful, you can restart the NGINX service with…

sudo service nginx restart

…and then go to http://mydomain-or-myip to see your openHAB server.




Authentication with NGINX

For further security, you may wish to require a username and password before users have access to openHAB. This is fairly simple in NGINX once you have the reverse proxy set up; you just need to provide the server with a basic authentication user file.

Note: There is currently an issue with Proxy Authentication and HABmin when using some browsers. If you require HABmin, consider connecting locally or using Safari for now.

Creating the First User

You will be using htpasswd to generate a username/password file; this utility can be found in the apache2-utils package:

sudo apt-get install apache2-utils

To generate a file that NGINX can use, run the following command (don’t forget to change username to something meaningful!):

sudo htpasswd -c /etc/nginx/.htpasswd username

You will be prompted to create a password for this username; once finished, the file will be created. You’re then free to reference the file in the NGINX configuration.
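As an aside, if apache2-utils isn’t available on your system, openssl can produce a compatible entry; the sketch below (the username and password are placeholders) uses the Apache MD5 (apr1) scheme, which auth_basic accepts:

```shell
# Build an htpasswd-style "user:hash" line with openssl instead of htpasswd.
# Replace 'username' and 'changeme' with your own values, then append the
# line to /etc/nginx/.htpasswd (with sudo).
entry="username:$(openssl passwd -apr1 'changeme')"
echo "$entry"
```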

Referencing the File in the NGINX Configuration

Now the configuration file (/etc/nginx/sites-enabled/default) needs to be edited to use this password. Open the configuration file and add the following lines underneath the proxy_* settings:

		auth_basic                            "Username and Password Required";
		auth_basic_user_file                  /etc/nginx/.htpasswd;

Once done, test and restart your NGINX service and authentication should now be enabled on your server!

Adding or Removing users

To add new users to your site, use the following command. Do not use the -c modifier again, as this recreates the file and removes all previously created users:

sudo htpasswd /etc/nginx/.htpasswd username

and to delete an existing user:

sudo htpasswd -D /etc/nginx/.htpasswd username

Once again, any changes you make must be followed by a restart of the NGINX service, otherwise they will not be applied.




Setting up a domain

To generate a trusted certificate, you need to own a domain. To acquire your own domain, you can use one of the following methods:

  1. Purchasing a domain name (e.g. GoDaddy, Namecheap, Enom, Register)
    You should have an IP address that doesn’t change (i.e. fixed), or one that changes rarely; then update the DNS A record so that your domain/subdomain points towards your IP.

  2. Obtaining a free domain (e.g. FreeNom)
    Setup is the same as above.

  3. Using a “Dynamic DNS” service (e.g. No-IP, Dyn)
    Uses a client to automatically update your IP for a domain of your choice; some Dynamic DNS services offer a free domain too.




Enabling HTTPS

Encrypting the communication between client and server is important because it protects against eavesdropping and possible forgery. Which option to choose depends on whether you have a valid domain:

  • If you need to use an internal or external IP to connect to openHAB, follow the instructions for OpenSSL.

  • If you have a valid domain and can change the DNS to point towards your IP, follow the instructions for Let’s Encrypt.

Using OpenSSL to Generate Self-Signed Certificates

Skip this step if you have a valid domain name and continue to the instructions for Let’s Encrypt.

OpenSSL is also packaged for most Linux distributions, installing it should be as simple as:

sudo apt-get install openssl

Once complete, you need to create a directory where the certificates can be placed:

sudo mkdir -p /etc/ssl/certs

Now OpenSSL can be told to generate a 2048-bit RSA key and a certificate that is valid for a year:

sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/openhab.key -out /etc/ssl/openhab.crt

You will be prompted for some information to fill out for the certificate. When it asks for a Common Name, you may enter your IP address:
Common Name (e.g. server FQDN or YOUR name) []: xx.xx.xx.xx
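If you’d rather avoid the interactive prompts, the same command can be run non-interactively with -subj, and the result inspected afterwards. A sketch (the CN value is a placeholder IP, and the files are written to the current directory for illustration):

```shell
# Non-interactive variant: -subj pre-fills the certificate fields.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout openhab.key -out openhab.crt \
  -subj "/CN=192.168.1.10"

# Check the subject and validity window of what was generated.
openssl x509 -in openhab.crt -noout -subject -dates
```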

Adding the Certificates to Your Proxy Server

The certificate and key should have been placed in /etc/ssl/. NGINX needs to be told where these files are so that the reverse proxy can serve HTTPS traffic. In the NGINX configuration, place the following underneath your server_name line:

	ssl_certificate                 /etc/ssl/openhab.crt;
	ssl_certificate_key             /etc/ssl/openhab.key;

Using Let’s Encrypt to Generate Trusted Certificates

Skip this step if you have no domain name or have already followed the instructions for OpenSSL.

Let’s Encrypt is a service that allows anyone with a valid domain to automatically generate a trusted certificate; these certificates are usually accepted by a browser without any warnings.

Setting up the NGINX Proxy Server to Handle the Certificate Generation Procedure

Let’s Encrypt needs to validate that the server has control of the domain. The simplest way of doing this is using the webroot plugin to place a file on the server and then access it using a specific URL: /.well-known/acme-challenge. Since the proxy only forwards traffic to the openHAB server, NGINX needs to be told to handle requests at this address differently.

First, create a directory that Certbot can be given access to:

sudo mkdir -p /var/www/mydomain

Next, add the new location block to your NGINX config; it should be placed above the last brace in the server block:

	location /.well-known/acme-challenge/ {
		root                            /var/www/mydomain;
	}

Using Certbot

Certbot is a tool which simplifies the process of obtaining secure certificates. The tool may not be packaged for some Linux distributions, so installation instructions may vary; check out their website and follow the instructions using the webroot mode. Don’t forget to change the example domain to your own! An example of a valid certbot command (in this case for Debian Jessie) would be:

sudo certbot certonly --webroot -w /var/www/mydomain -d mydomain

Adding the Certificates to Your Proxy Server

The certificate and key should have been placed in /etc/letsencrypt/live/mydomain-or-myip. NGINX needs to be told where these files are so that the reverse proxy can serve HTTPS traffic, and Strict Transport Security is added to help prevent man-in-the-middle attacks. In the NGINX configuration, place the following underneath your server_name line:

		ssl_certificate                 /etc/letsencrypt/live/mydomain-or-myip/fullchain.pem;
		ssl_certificate_key             /etc/letsencrypt/live/mydomain-or-myip/privkey.pem;
		add_header                      Strict-Transport-Security "max-age=31536000"; 

Setting Your NGINX Server to Listen on the HTTPS Port

Regardless of the option you chose, make sure you change the listen directive so that the server listens for HTTPS traffic:

	listen                          443 ssl;

After restarting the NGINX service, you will be using a valid HTTPS certificate; you can check by going to https://mydomain-or-myip and confirming with your browser that the certificate is valid. These certificates expire after a few months, so it is important to run the updater from a cron job (and also restart NGINX) as explained in the Certbot setup instructions. If you want to keep an HTTP server for some reason, just add listen 80; and remove the Strict-Transport-Security line.
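As a sketch of such a renewal job (the schedule is illustrative and assumes certbot is on root’s PATH; adapt it to your system), an entry in root’s crontab (sudo crontab -e) might look like:

```
# Try twice a day; certbot only renews certificates close to expiry,
# and the post-hook reloads NGINX so the renewed files are picked up.
0 3,15 * * * certbot renew --quiet --post-hook "service nginx reload"
```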

Redirecting HTTP Traffic to HTTPS

You may want to redirect all HTTP traffic to HTTPS. You can do this by adding the following server block to the NGINX configuration; it essentially replaces the HTTP URL with the HTTPS version!

server {
	listen                          80;
	server_name                     mydomain-or-myip;
	return 301                      https://$server_name$request_uri;
}



Putting it All Together

After following all the steps on this page, you should have an NGINX server configuration that looks like this:

server {
	listen                          80;
	server_name                     mydomain-or-myip;
	return 301                      https://$server_name$request_uri;
}
server {
	listen                          443 ssl;
	server_name                     mydomain-or-myip;
		
	ssl_certificate                 /etc/letsencrypt/live/mydomain/fullchain.pem; # or /etc/ssl/openhab.crt
	ssl_certificate_key             /etc/letsencrypt/live/mydomain/privkey.pem;   # or /etc/ssl/openhab.key
	add_header                      Strict-Transport-Security "max-age=31536000"; # Remove if using a self-signed certificate or if keeping an HTTP server.

	location / {
		proxy_pass                              http://localhost:8080/;
		proxy_buffering                         off;
		proxy_set_header Host                   $http_host;
		proxy_set_header X-Real-IP              $remote_addr;
		proxy_set_header X-Forwarded-For        $proxy_add_x_forwarded_for;
		proxy_set_header X-Forwarded-Proto      $scheme;
		auth_basic                              "Username and Password Required";
		auth_basic_user_file                    /etc/nginx/.htpasswd;
	}

	#### When using Let's Encrypt Only ####
	location /.well-known/acme-challenge/ {
		root                                    /var/www/mydomain;
	}
}

Hey Ben @Benjy,
Thanks for your tutorial, I managed to install nginx some time ago.
I use nginx to concentrate a few webpages and to enable HTTPS support.
If I access the site via cellular or any other external site, the pictures won’t be loaded.
Everything else is fine.
When I am on my local WiFi (a local DNS server which uses the local IP) I can access the images.
The only difference is that I use a dedicated file (with a few options regarding HSTS and so on):

# from https://cipherli.st/
# and https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
ssl_ecdh_curve secp384r1;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
# Disable preloading HSTS for now.  You can use the commented out header line that includes
# the "preload" directive if you understand the implications.
#add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
ssl_dhparam /etc/ssl/certs/dhparam_4096.pem;
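For reference, the dhparam_4096.pem file referenced on the last line has to be generated once up front; a sketch of the command (4096-bit generation can take several minutes on slow hardware):

```shell
# Generate Diffie-Hellman parameters for the ssl_dhparam directive.
sudo openssl dhparam -out /etc/ssl/certs/dhparam_4096.pem 4096
```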

I can’t see anything immediately obvious there, although I’d argue that X-Frame-Options might be better set to “SAMEORIGIN”. Those ciphers are very strict; there’s not a lot of compatibility there.

Other than that, I am also using similar security options, so I don’t believe it’s due to that. Images failing to load (in openHAB) is usually a symptom of proxy_set_header X-Forwarded-Proto not being set correctly (i.e. to $scheme or https).

I’ve set this up only running openHAB and nginx in Docker. I’ll post the differences here.

One big thing that should be made clear. Either I did something wrong or else the redirect from HTTP to HTTPS is not optional. When I tried it without the redirect I would get as far as entering my username and password and then I would get connection refused. I’m not sure if this is a side effect of running in Docker or a fundamental step.

Installation
I downloaded the latest base official nginx image using

docker pull nginx

I configured it to run as a service on port 80 and 443 with the following .service file

nginx.service: NOTE, edits get made to this file later; I include each version at each step, though, so you can test as you go.

[Unit]
Description=nginx
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker run --name=%n \
  -p 80:80 -p 443:443 \
  -v /etc/localtime:/etc/localtime:ro \
  -v /etc/timezone:/etc/timezone:ro \
  nginx
ExecStop=/usr/bin/docker stop -t 2 %n ; /usr/bin/docker rm -f %n

[Install]
WantedBy=multi-user.target

NOTE: We map in localtime and timezone so nginx gets its time from the host.

Copy that file to /etc/systemd/system and run the following commands:

sudo systemctl daemon-reload
sudo systemctl enable nginx.service
sudo systemctl start nginx.service
systemctl status nginx.service

You should see that it is up and running now. If you go to http://host (where host is your host machine’s name or address) you should see the NGINX welcome message.

Basic Configuration
Now we need to get the nginx.conf file so we can update the config.

sudo mkdir /opt/nginx
sudo chmod a+rwx /opt/nginx
cd /opt/nginx
docker ps
# look for nginx in the list, highlight the CONTAINER ID
docker exec <container id> cat /etc/nginx/nginx.conf > nginx.conf
mkdir conf.d
cd conf.d
docker exec <container id> cat /etc/nginx/conf.d/default.conf > default.conf

At this point I had to do me some learn’n on nginx because the Docker image doesn’t have a sites-enabled folder. :expressionless: It turns out that file is located in conf.d.

Edit /opt/nginx/conf.d/default.conf per the instructions above. NOTE: Your openHAB will NOT be running in this container, so you must use the http://youropenhabhostname:8080/ address.

TODO: Figure out how to take advantage of Docker networking so the only way to get to openHAB, even locally, is through nginx.

NOTE: there is a section about error pages that you will need to comment out to match the above.

Now that we have these config files we need to map them into the container. We do this by editing the start script to add them as volumes. While we are at it, we will also mount a volume for the logs so we can monitor for access attempts (fail2ban perhaps?) outside the container.

mkdir /opt/nginx/logs

New nginx.service file:

[Unit]
Description=nginx
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker run --name=%n \
  -p 80:80 -p 443:443 \
  -v /etc/localtime:/etc/localtime:ro \
  -v /etc/timezone:/etc/timezone:ro \
  -v /opt/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
  -v /opt/nginx/conf.d:/etc/nginx/conf.d:ro \
  -v /opt/nginx/logs:/var/log/nginx \
  nginx
ExecStop=/usr/bin/docker stop -t 2 %n ; /usr/bin/docker rm -f %n

[Install]
WantedBy=multi-user.target

sudo systemctl daemon-reload
sudo systemctl restart nginx.service
systemctl status nginx.service

If all went well you should see the service is up and running. You should now also see an access.log and error.log in /opt/nginx/logs.

To run the test of the config as described above:

docker ps
# Find the CONTAINER ID of the nginx.service container
docker exec <container id> nginx -t

Authentication with NGINX
We will be generating and managing the user/password external to the Docker Image so make sure to install apache2-utils on your host machine as described above.

From the /opt/nginx folder run

sudo htpasswd -c .htpasswd username

Create the password when asked. Now we have another file to mount into the container; add it as a read-only volume, the same as we did above.

[Unit]
Description=nginx
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker run --name=%n \
  -p 80:80 -p 443:443 \
  -v /etc/localtime:/etc/localtime:ro \
  -v /etc/timezone:/etc/timezone:ro \
  -v /opt/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
  -v /opt/nginx/conf.d:/etc/nginx/conf.d:ro \
  -v /opt/nginx/logs:/var/log/nginx \
  -v /opt/nginx/.htpasswd:/etc/nginx/.htpasswd:ro \
  nginx
ExecStop=/usr/bin/docker stop -t 2 %n ; /usr/bin/docker rm -f %n

[Install]
WantedBy=multi-user.target

Don’t forget to daemon-reload and restart the service.

Now edit /opt/nginx/conf.d/default.conf as described above.

Adding and removing users is the same as above only you do so to /opt/nginx/.htpasswd

Setting up a domain

I’ll add:

  1. Some routers come with access to a free dynamic dns service. I can confirm that the Netgear R7000 is one such router.

Using OpenSSL to Generate Self-Signed Certificates

As with the htpasswd, we will be creating these certs on the host machine and passing them into the container using a volume. Follow the instructions above, only put the ssl certs into /opt/nginx/certs.

TODO: Determine the correct permissions so nginx can read these but other users cannot. Find out what user it is running as in the container…

Add the certs folder to the container as we have done above.

[Unit]
Description=nginx
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker run --name=%n \
  -p 80:80 -p 443:443 \
  -v /opt/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
  -v /opt/nginx/conf.d:/etc/nginx/conf.d:ro \
  -v /opt/nginx/logs:/var/log/nginx \
  -v /opt/nginx/.htpasswd:/etc/nginx/.htpasswd:ro \
  -v /opt/nginx/certs:/etc/ssl/certs:ro \
  nginx
ExecStop=/usr/bin/docker stop -t 2 %n ; /usr/bin/docker rm -f %n

[Install]
WantedBy=multi-user.target

Edit the default.conf file per the instructions. Do not forget to add the port 80 redirect, or else it won’t work; you will get connection refused after authenticating with nginx.

Using Let’s Encrypt to Generate Trusted Certificates

To be continued…

TODO: Configure the reverse proxy to resolve https://external.domain.com/openhab to go to openHAB to more easily differentiate and coexist with multiple web servers.


With regards to your redirect issue, did you try to keep HSTS on? Or load a page previously with it on?

I tried every combination of options I could think of. The only combo that lets me authenticate and successfully bring up OH is to have the port 80 redirect.

Have you actually tried mutual SSL authentication yet? I always meant to try it… but you know how it is.
I believe it would work easily via browser, but I dunno about the app.

After clearing my browser cache, I was able to use the following settings to connect to both HTTP and HTTPS. If this doesn’t work for you, it might be the Docker-specific version of NGINX? I’m not familiar with Docker, unfortunately.

server {
        listen                          80;
        listen                          443 ssl;
        server_name                     my.server.uk;
#        add_header                      Strict-Transport-Security "max-age=31536000; includeSubDomains";

        ssl_certificate                 /etc/letsencrypt/live/my.server.uk/fullchain.pem;
        ssl_certificate_key             /etc/letsencrypt/live/my.server.uk/privkey.pem;

        location / {
                proxy_pass                              http://localhost:8080/;
                proxy_buffering                         off;
                proxy_set_header Host                   $http_host;
                proxy_set_header X-Real-IP              $remote_addr;
                proxy_set_header X-Forwarded-For        $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto      $scheme;
                auth_basic                              "Username and Password Required";
                auth_basic_user_file                    /etc/nginx/.htpasswd;
        }

        location /.well-known/acme-challenge/ {
                root                                    /var/www/my.server.uk/html;
        }
}

Thanks @Benjy
It works now; maybe the image cache needed to be cleared within HABdroid.

A few changes made by me:
I don’t need to enter the password when I am accessing OH from my local network.
This will only apply when you have a DNS server configured for your local network, which points to the local IP of your nginx server.
I use snippets for the SSL configuration: openhab.smarthome.com.conf contains the certificate files.
ssl-params.conf contains the SSL options and a strong 4096-bit cipher, which will lead to a score of A+ on Qualys.

I have a question concerning nginx listening on other subdomains:
How can I avoid listening on invalidurl.smarthome.com with the certificate of openhab.smarthome.com?
(I do not have a wildcard certificate, so it would be great to redirect from invalidurl.smarthome.com to www.smarthome.com, for example.)

Thanks :slight_smile:

listen 443 ssl;
server_name  openhab.smarthome.com;
include snippets/ssl-openhab.smarthome.com.conf;
include snippets/ssl-params.conf;
proxy_set_header Host                   $http_host;
proxy_set_header X-Real-IP              $remote_addr;
proxy_set_header X-Forwarded-For        $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto      $scheme;
satisfy any;
allow 10.0.0.0/8;
allow 172.16.0.0/12;
allow 192.168.0.0/16;
deny  all;
auth_basic                              "Username and Password Required";
auth_basic_user_file                    /etc/nginx/openhab.smarthome.com/.htpasswd;

ssl-params.conf

# from https://cipherli.st/
# and https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
ssl_ecdh_curve secp384r1;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
#Disable preloading HSTS for now.  You can use the commented out header line that includes
#the "preload" directive if you understand the implications.
#add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
ssl_dhparam /etc/ssl/certs/dhparam_4096.pem;

No problem @3DJupp, you’ve reminded me that I’ve recently added another section about additional HTTPS security for those that want it.

I’m assuming you have a DNS wildcard that allows any sub-domain to connect to the same IP? You’re probably looking for the rewrite command, for example:

server_name *.smarthome.com;
if ($host != 'openhab.smarthome.com') {  
    rewrite ^/(.*)$ $scheme://openhab.smarthome.com$request_uri permanent;
}

I’ve not tried any of that, and it may mess with the proxy settings. Since I can’t test anything related to multiple subdomains myself, I’d try searching for “subdomain NGINX rewrite” if that doesn’t solve your problem.
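Another sketch you could try (untested on my side, and it assumes your wildcard DNS is in place): a default_server catch-all block that redirects any Host no other server block matches. Note that browsers will still warn first, since the certificate won’t cover the unknown subdomain:

```
# Catch-all for HTTPS requests whose Host matches no other server block.
server {
	listen      443 ssl default_server;
	server_name _;
	ssl_certificate     /etc/letsencrypt/live/openhab.smarthome.com/fullchain.pem;
	ssl_certificate_key /etc/letsencrypt/live/openhab.smarthome.com/privkey.pem;
	return 301 https://www.smarthome.com$request_uri;
}
```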

The big difference, I think, was that I did not listen on 80 in my proxy server config. I do not want any unencrypted network traffic, so once I convinced myself the proxy was working I removed the listen 80 on the server.

If you remove the listen 80;, can you still get through to OH over 443? For me when I tried I would connect, get the warning since I’m using a self signed cert, enter my username and password, and then get a connection refused error.

If I put the redirect in it works like a champ. I suspect if I put in the listen 80; it will work as well, but I don’t want to expose any port 80 traffic so that isn’t a viable option for me.

Thanks for posting and for the advice.

I’m interested in this too. I was able to get multiple subdomains to work with ZoneMinder, which was easy because I didn’t need the rewrite, but when I tried to do it with openHAB and Gogs (a lightweight Git server) I’ve so far failed. openHAB simply doesn’t work at all, and Gogs comes over only in bits and pieces. I started looking into whether I needed to set up WebSocket proxying or something when I ran out of time.

For the curious, here is my config so far:

server {
    listen        80;
    server_name         my-external-domain;
    return 301          https://$server_name$request_uri;
}
server {
    listen              443 ssl;
    server_name         my-external-domain;
    ssl_certificate     /etc/ssl/certs/koshak.crt;
    ssl_certificate_key /etc/ssl/certs/koshak.key;
    ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers         HIGH:!aNULL:!MD5;
    add_header          Strict-Transport-Security "max-age=31536000; includeSubdomains";

    access_log  /var/log/nginx/log/host.access.log  main;

#    location /openhab{
    location / {
#        rewrite ^/openhab(.*) /$1 break;
        proxy_pass            http://openhab-server:8080/;
        proxy_buffering       off;
        proxy_set_header      Host              $http_host;
        proxy_set_header      X-Real-IP         $remote_addr;
        proxy_set_header      X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header      X-Forwarded-Proto $scheme;
        auth_basic            "Username and Password Required";
        auth_basic_user_file  /etc/nginx/.htpasswd;
    }

    location /zm {
        proxy_pass            http://zoneminder-server:8082;
        proxy_buffering       off;
        proxy_set_header      Host              $http_host;
        proxy_set_header      X-Real-IP         $remote_addr;
        proxy_set_header      X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header      X-Forwarded-Proto $scheme;
        auth_basic            "Username and Password Required";
        auth_basic_user_file  /etc/nginx/.htpasswd;
    }

    location /gogs {
        rewrite               ^/gogs(.*) /$1 break;
        proxy_pass            http://gogs-server:3000;
        proxy_buffering       off;
        proxy_set_header      Host              $http_host;
        proxy_set_header      X-Real-IP         $remote_addr;
        proxy_set_header      X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header      X-Forwarded-Proto $scheme;
        auth_basic            "Username and Password Required";
        auth_basic_user_file  /etc/nginx/.htpasswd;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

The openHAB location works because there is no rewrite and the root location is used (I left my commented out version), ZoneMinder works because the /zm location is how the local address is anyway so there is no need for a rewrite. The gogs location lets me connect but it is as if the images and CSS are not coming through.

The problem is that rewrite has removed the /gogs from the request, so any subsequent requests will also have it removed, and they then get redirected using the first location rule.

Try

location ^~ /gogs {
    proxy_pass http://gogs-server:3000/;
}

First try it without the additional parameters and then add the headers afterwards, to see which works for you. The proxy_pass should automatically rewrite the URL for you and pass it on, with the inclusion of the slash at the end.
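To illustrate the URI replacement being described (a sketch; the upstream names come from the config above):

```
# With a URI part ("/") on proxy_pass, NGINX swaps out the matched prefix:
location ^~ /gogs/ {
	proxy_pass http://gogs-server:3000/;   # request /gogs/repo -> upstream /repo
}
# Without a URI part, the request path is passed through unchanged:
location /zm {
	proxy_pass http://zoneminder-server:8082;   # request /zm -> upstream /zm
}
```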

Thanks. I’ve played around with it some more and confirmed the behavior is as you describe. When I try to click on one of the few links that come over, it tries to go to that address without the /gogs in front.

However, your suggested changes exhibit the same behavior. I added Calibre into the mix to make experimenting faster. Both of these location configs exhibit the same behavior (i.e. each subsequent request removes the /gogs part).

    location /calibre {
        rewrite ^/calibre(.*) /$1 break;
        proxy_pass            http://chimera:8888/;
    }

    location ^~ /gogs {
        proxy_pass            http://chimera:3000/;
    }

Will reordering the locations help? I don’t know if nginx applies them in order of the file or based on best match.

I also tried putting openhab into the /openhab location so there was no '/' location to match to, but I get the same result.

Yes, although trying HTTP again on a browser that has already experienced a redirect will try to move to HTTPS if nothing’s found on port 80. If you clear the browser AND SSL cache, do you get the same problem? If you do, does it help if you set X-Forwarded-Proto to https instead of $scheme? I suspect it’s being cached and you only need one.

Quite honestly, I’m not sure what to suggest next; this tutorial suggests that it’s possible to do, but where to go if that doesn’t work? I may spend a bit of time over the weekend playing around with multiple services on my setup and will let you know if I come to any solution.

It’s also possible that these issues are related to the docker version of NGINX?

@Benjy Great write-up!

I was successful all the way to near the end, but I can’t get “certbot renew --dry-run” to work; it fails.
xenon nginx # certbot renew --dry-run

-------------------------------------------------------------------------------
Processing /etc/letsencrypt/renewal/www.mysite.com.conf
-------------------------------------------------------------------------------
2016-10-01 20:09:39,416:WARNING:certbot.renewal:Attempting to renew cert from /etc/letsencrypt/renewal/www.mysite.com.conf produced an unexpected error: Failed authorization procedure. www.mysite.com (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://www.mysite.com/.well-known/acme-challenge/gKf9dhwfjFpprbL_iciDyL9cbyfadhPSWP3lLa-faTA: "<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>". Skipping.
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates below have not been saved.)

All renewal attempts failed. The following certs could not be renewed:
  /etc/letsencrypt/live/www.mysite.com/fullchain.pem (failure)
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates above have not been saved.)
1 renew failure(s), 0 parse failure(s)

IMPORTANT NOTES:
 - The following errors were reported by the server:

   Domain: www.mysite.com
   Type:   unauthorized
   Detail: Invalid response from
   http://www.mysite.com/.well-known/acme-challenge/gKf9dhwfjFpprbL_iciDyL9cbyfadhPSWP3lLa-faTA:
   "<html>
   <head><title>404 Not Found</title></head>
   <body bgcolor="white">
   <center><h1>404 Not Found</h1></center>
   <hr><center>"

   To fix these errors, please make sure that your domain name was
   entered correctly and the DNS A record(s) for that domain
   contain(s) the right IP address.
xenon nginx #

However, if I browse to my site, it is a valid (green) secure connection, so it seems to work, kind of.

Looking in /var/www/mysite.com/ it only contains an empty folder “.well-known” - is this normal?

Hi @vespaman, that empty folder is where a file is placed during the creation or renewal process; the file is removed when the process finishes. Certbot renewals may have to run as a superuser to place this file. Did you try sudo certbot renew --dry-run? Since the error you’re getting is “unauthorized”, certbot may be running into the password protection if the above doesn’t work. Can I see your NGINX config?
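If the password protection does turn out to be the blocker, the challenge path can be exempted explicitly. A sketch, assuming an example webroot of /var/www/mysite.com (substitute your own):

```nginx
# serve ACME challenge files without Basic Auth, even if the
# surrounding server block (or a parent location) requires it
location /.well-known/acme-challenge/ {
    root       /var/www/mysite.com;   # directory certbot writes the challenge file into
    auth_basic off;                   # make sure Basic Auth never applies here
}
```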

You’re secure, which means the setup should be fine, but the renewal process hasn’t worked yet, so we should fix it so that you don’t have to renew manually every 3 months.
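Once the dry run passes, renewals are usually automated rather than done by hand. A typical cron entry, assuming certbot is installed system-wide and nginx runs under systemd, looks something like:

```shell
# e.g. in /etc/cron.d/certbot-renew — attempt renewal twice a day;
# certbot only actually renews certificates that are close to expiry
0 3,15 * * * root certbot renew --quiet --post-hook "systemctl reload nginx"
```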

Hi @Benjy,
I found the (last) issue after a good night’s sleep; I think it was all related to me having the server_name set to *.mysite.com. After changing it to www.mysite.com, all seems to work now.

After having the issues last night I pushed on, and during this the error above was replaced by another; anyway, I’m good now. :slight_smile:

I also tested using the SSL Labs site, but there I got an A- (which is not too bad), since it says Forward Secrecy is not supported.
But shouldn’t it be?

My nginx.conf currently looks like this:

user nginx nginx;
worker_processes 1;

error_log /var/log/nginx/error_log info;

events {
        worker_connections 1024;
        use epoll;
}

http {
        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        log_format main
                '$remote_addr - $remote_user [$time_local] '
                '"$request" $status $bytes_sent '
                '"$http_referer" "$http_user_agent" '
                '"$gzip_ratio"';

        client_header_timeout 10m;
        client_body_timeout 10m;
        send_timeout 10m;

        connection_pool_size 256;
        client_header_buffer_size 1k;
        large_client_header_buffers 4 2k;
        request_pool_size 4k;

        gzip off;

        output_buffers 1 32k;
        postpone_output 1460;

        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;

        keepalive_timeout 75 20;

        ignore_invalid_headers on;

        index index.html;


        server {
                listen                          80;
                server_name                     www.mysite.com;
                return 301                      https://$server_name$request_uri;
        }


        server {
                listen 443 ssl;
                server_name www.mysite.com;

                access_log /var/log/nginx/mysite.com_ssl_access_log main;
                error_log  /var/log/nginx/mysite.com_ssl_error_log info;

                ssl_certificate         /etc/letsencrypt/live/www.mysite.com/fullchain.pem;
                ssl_certificate_key     /etc/letsencrypt/live/www.mysite.com/privkey.pem;
                add_header              Strict-Transport-Security "max-age=31536000";

                ssl_protocols                   TLSv1 TLSv1.1 TLSv1.2;
                ssl_prefer_server_ciphers       on;
                ssl_dhparam                     /etc/nginx/ssl/dhparam.pem;
                ssl_ciphers                     ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:HIGH:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!CBC:!EDH:!kEDH:!PSK:!SRP:!kECDH;
                ssl_session_timeout             1d;
                ssl_session_cache               shared:SSL:10m;
                keepalive_timeout               70;

                location / {
                        proxy_pass                            http://localhost:8080/;
                        proxy_buffering                       off;
                        proxy_set_header Host                 $http_host;
                        proxy_set_header X-Real-IP            $remote_addr;
                        proxy_set_header X-Forwarded-For      $proxy_add_x_forwarded_for;
                        proxy_set_header X-Forwarded-Proto    $scheme;
                        auth_basic                            "Username and Password Required";
                        auth_basic_user_file                  /etc/nginx/.htpasswd;
                }

                location /.well-known/acme-challenge/ {
                        root    /var/www/mysite.com;
                }
        }
        server {
                server_name www.myothersite.se;
                access_log /var/log/nginx/myothersite.access_log main;
                error_log /var/log/nginx/myothersite.error_log info;
                root /var/www/myothersite.se/htdocs;
        }

}

Glad you have it sorted. SSL Labs should tell you in its report what it sees as a possible vulnerability (usually in orange), but since you have password protection, the tool won’t have full access to your site. This means it won’t receive secure headers such as HSTS if authorisation is denied.

If you want to see a full report, disable the two password lines in your NGINX config, restart NGINX, run the test, then re-enable auth.
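For the record, “disabling” just means commenting out the two auth_basic lines, e.g. in your existing location block:

```nginx
location / {
    proxy_pass http://localhost:8080/;
    # temporarily commented out for the SSL Labs test:
    #auth_basic           "Username and Password Required";
    #auth_basic_user_file /etc/nginx/.htpasswd;
}
```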

To answer my own question:
I’ve set up NGINX in a Docker container on a spare Raspi, created all the certs and did the config. It works nicely as long as you stay browser-based, with no need to enter a password, and it is more secure as well. Yet Habdroid doesn’t respond well to the certificate challenge.

I only get “Bad request”; I guess the app does not use Android WebViews?