OH2, NGINX, EMONCMS and SSL, can they all play nicely together?

I’d like to set up NGINX and OH2 so they can coexist with EMONCMS, which already uses the SSL port. EMONCMS works via Apache with a Let’s Encrypt certificate, and is accessed via my subdomain.
Is it possible to get two SSL servers to coexist on the same machine? Moving Apache to a non-standard SSL port seems to stop connections from web browsers, so I guess everything has to end up being accessed via port 443.
Can NGINX proxy the incoming web request and route it accordingly? For example:
https://openhab.mydomain.com gets routed to OH2 on port 8080 and HTTP, and
https://emoncms.mydomain.com gets routed to Apache2 on port 9443 and HTTPS
If this works, presumably I need a set of certificates for each subdomain set up in NGINX.

Assuming that you’ve got an nginx redirect from http:// to https://, and that you don’t map the openHAB root into a subdirectory of whatever server you use, then openHAB works quite well even in a multi-domain, single-certificate setting. SNI (even when limited to TLS v1.2) works just fine in all major browsers, desktop, iOS and Android, as well as Java 8.

With this kind of setup, you have all your HTTP servers on 80 and all your HTTPS servers on 443 with a single, multi-subject cert (which Let’s Encrypt supports), and the client evaluates that cert to see if the requested domain is in it. If so, it opens the TLS connection and sends the request, which in HTTP/1.1 and HTTP/2.0 includes the host requested. The reverse proxy then dispatches to the upstream server on whatever host and port it needs to.
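As a rough sketch of that dispatch (using the hostnames and ports from the question; the 127.0.0.1 upstream addresses are an assumption that everything runs on one machine, and the certificate directives are omitted here for brevity):

    server {
        listen       443 ssl;
        server_name  openhab.mydomain.com;
        location / {
            # openHAB itself is served over plain HTTP on 8080
            proxy_pass http://127.0.0.1:8080/;
        }
    }

    server {
        listen       443 ssl;
        server_name  emoncms.mydomain.com;
        location / {
            # Apache/EMONCMS moved to a non-standard HTTPS port
            proxy_pass https://127.0.0.1:9443/;
        }
    }

The client’s SNI and Host header select the matching server block; each block is free to use a different scheme and port for its upstream.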

Note: There is a known issue with openHAB not handling its redirects and URI generation properly when the proxied scheme is https but openHAB is being accessed through insecure http. The previously mentioned redirect from http to https hides this problem from the client.

Thanks for the reply. Don’t suppose you can point me in the direction of any further reading on this? I’m a relative newbie to doing this stuff encrypted, although I’ve hosted a few standard websites this way under Windows in the past.
I’m guessing that simply changing the port on Apache and then installing nginx to redirect the port back to 443 will break the encryption.

Thankfully the world has advanced a bit since the days of Apache’s VirtualHost directives…

The “trick” is that clients now will generally accept a certificate as validating that they’ve connected to the proper host, as long as one of the hostnames in the certificate matches the one requested.
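One way to see which hostnames a served certificate actually covers (a hedged example; substitute your own domain for www.example.com) is to fetch the certificate with openssl and inspect its Subject Alternative Name list:

    openssl s_client -connect www.example.com:443 -servername www.example.com </dev/null 2>/dev/null \
        | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'

Every hostname you want to serve from the proxy should appear as a DNS entry in that list.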

I’ll not go into a lot of security configuration decisions, but at the very least make sure you’ve disabled all SSL versions and TLSv1. https://wiki.mozilla.org/Security/Server_Side_TLS is one good starting point.

Assuming that you’ve gotten and installed a multi-host certificate for all the hosts that you want to serve from the same node (for example, using certbot with multiple -d options), the rest of it is “straightforward” (OK, it took me a while to figure out).
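For example, a single certbot run covering several hosts might look like this (a sketch; the domain names and webroot path are placeholders, not values from this thread):

    certbot certonly --webroot -w /var/www/html \
        -d www.example.com -d example.com -d openhab.example.com

Each -d option adds another subject to the one certificate, so all the names end up in a single fullchain.pem/privkey.pem pair.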

From my own config, as a guideline. I do not warrant that it meets anyone’s needs for security now, and certainly not in the future!

In the http section, configure your general TLS configuration. This is nothing special to multi-domain hosting from a single node at this point.

    # Implement recommendations from EFF -> Mozilla
    # https://mozilla.github.io/server-side-tls/ssl-config-generator/
    # nginx 1.10.2 | modern profile | OpenSSL 1.0.2k

    ssl_certificate      path/to/fullchain.pem;
    ssl_certificate_key  path/to/privkey.pem;

    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;

    # modern configuration. tweak to your needs.
    ssl_protocols TLSv1.2;
    ssl_prefer_server_ciphers on;

    # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
    #    add_header Strict-Transport-Security max-age=15768000;

    # OCSP Stapling ---
    # fetch OCSP records from URL in ssl_certificate and cache them
    ssl_stapling on;
    ssl_stapling_verify on;

    ## verify chain of trust of OCSP response using Root CA and Intermediate certs
    # https://community.letsencrypt.org/t/howto-ocsp-stapling-for-nginx/13611/5
    #   "You need to set the ssl_trusted_certificate to chain.pem
    #    for OCSP stapling to work.
    ssl_trusted_certificate path/to/chain.pem;

    # https://wiki.mozilla.org/Security/Server_Side_TLS
    # "Modern" profile removes DH entirely
    # RFC 7919 referenced there says to use
    # pre-defined DH groups ffdhe2048, ffdhe3072 or ffdhe4096
    # so this is cruft for current config
    # openssl dhparam -out dhparams.pem 2048
    ssl_dhparam         path/to/dhparams.pem;

    ssl on;

Set up a name resolver (probably one that knows your internal names, if you’re using split-horizon or local DNS) and specify how to connect to your upstream servers:

    # resolver 192.168.1.1;    # example placeholder: your own (internal) DNS
    proxy_http_version 1.1;


Skipping over how to do a redirect from example.com to www.example.com, or the reverse, if you desire, set up two server sections for each host. The first does a redirect from 80 to 443, with a change to https if needed. Since ssl was turned on in the http section, explicitly turn it off for this server block.

Be careful with the Strict-Transport-Security header until you’re confident everything is working. A browser will end up “forcing” https until the max-age expires, and if it’s not all working right, you may not be able to test further without “wiping” the browser.

    server {
        listen          80;
        server_name     www.example.com;
        ssl             off;
        include         local/acme-challenge;
        location / {
            return 307 https://$server_name$request_uri;
            add_header Strict-Transport-Security
                       "max-age=86400; includeSubDomains";
            # add_header Strict-Transport-Security
            #          "max-age=31536000; includeSubDomains"
            #          always;
        }
    }
$server_name can be hard coded to avoid a lookup, if you have high traffic volume and are worried about load. I’ll get to include local/acme-challenge; later.

The second actually serves the content retrieved from the upstream node.

    server {
        listen          443 ssl http2;
        server_name     www.example.com;
        include         local/acme-challenge;
        location / {
            include     local/proxy_set_header;
            add_header  Strict-Transport-Security
                        "max-age=86400; includeSubDomains";
            # add_header Strict-Transport-Security
            #          "max-age=31536000; includeSubDomains"
            #          always;
            proxy_pass  http://internal.server.example.com/;
        }
    }

This section basically says, “Get the content from http://internal.server.example.com/.” You can include a port number as well.
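For instance, applied to the original question (a guess at that layout; the loopback address assumes openHAB runs on the same machine as nginx):

    location / {
        include     local/proxy_set_header;
        proxy_pass  http://127.0.0.1:8080/;    # openHAB on its default HTTP port
    }

The scheme and port in proxy_pass describe how nginx talks to the upstream, independently of the https the client sees.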

nginx does not, by default, provide the “usual” headers in a proxy situation. My local/proxy_set_header includes

    # Standard proxy_set_header declarations

    proxy_set_header        Host $host;
    proxy_set_header        X-Real-IP $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header        X-Forwarded-Proto $scheme;

There is some magic on the back end needed if you want to log the real IP of the originating node. It differs between various servers/apps.
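With Apache as the upstream, for example, mod_remoteip can restore the original client address in the logs (a sketch; the header name matches the proxy_set_header above, and the proxy address is a placeholder for wherever nginx runs):

    # In the Apache config, after enabling mod_remoteip
    RemoteIPHeader X-Real-IP
    RemoteIPInternalProxy 127.0.0.1

Other servers and apps have their own equivalents, which is the "magic" that differs between them.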

local/acme-challenge is how I run a single instance of certbot to obtain certificates for all my domains at once.

    # Don't proxy /.well-known/acme-challenge/

    location /.well-known/acme-challenge/ {
        index off;
        alias /path/to/your/local/webroot/or/appropriate/dir/.well-known/acme-challenge/;
    }

Thanks very much indeed for your comprehensive and detailed reply. I’ll work through it and let you know if there is anything I can’t make out.
Thanks again!

I’ve got it all working, thanks very much! For anyone following on, NGINX won’t install if Apache2 is running, even if the ports don’t collide.
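On a Debian/Ubuntu-style system (an assumption; the thread doesn’t name the distro), that typically means stopping Apache before the install and starting it again afterwards:

    sudo systemctl stop apache2
    sudo apt install nginx
    sudo systemctl start apache2    # once the ports no longer collide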