
My OpenShift Journey #2: Nginx Load Balancing and SSL Termination

In a previous blog post I described how we installed our first OpenShift cluster and how we used HAProxy as a load balancer.

Our cluster is meant to host both internal services (like CI and the Docker registry) and public services, so we have to expose them on multiple domains with valid SSL certificates.

HAProxy can perform either SSL passthrough or SSL termination, but unfortunately not both at the same time. Furthermore, it does not allow us to handle different certificates for different domains. We therefore decided to switch to Nginx: it can perform both load balancing and SSL termination, and we already use it elsewhere in our infrastructure.

Load balancing with Nginx is pretty straightforward. For example, to balance the machine config server, which OpenShift exposes on port 22623, you can use a configuration similar to the following:

stream {
    upstream machine-config-server_backend {
        server backend1.example.com:22623;
        server backend2.example.com:22623;
        server backend3.example.com:22623;
    }

    server {
        listen 22623;
        proxy_pass machine-config-server_backend;
    }
}
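
The API endpoint of the cluster (port 6443) could be balanced the same way. A minimal sketch, to be placed inside the same stream section and assuming the same three backends (api_backend is just a name of our choosing):

    upstream api_backend {
        server backend1.example.com:6443;
        server backend2.example.com:6443;
        server backend3.example.com:6443;
    }

    server {
        listen 6443;
        proxy_pass api_backend;
    }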

For the Ingress load balancer, however, things get more complicated because of the following requirements, which shape the overall configuration sketched below:

  • Load balancing on ports 80 and 443
  • Performing SSL termination for one or more domains
  • Forcing HTTPS for some requests on port 80
  • Working as a plain load balancer in all other cases
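
Putting these requirements together, the final nginx.conf takes roughly the following shape. This is only a skeleton; each piece is developed in the sections below:

stream {
    # map: pick an upstream based on the SNI host name read by ssl_preread
    # server on 443: SSL passthrough by default, hand-off to the local
    #                SSL termination server for mapped domains
    # server on 127.0.0.1:8443 (ssl): terminates SSL, balances to port 80
    # server on 127.0.0.1:6080: plain load balancing used by the http section
    # upstreams: http_openshift, https_openshift, one per terminated domain
}

http {
    # server on 80 for redirected domains: returns 301 to HTTPS
    # default server on 80: proxies to 127.0.0.1:6080
}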

SSL Termination and SSL Passthrough

Before starting, I want to highlight that the default Nginx version on Red Hat 8.6 does not support ssl_preread. You can either use a more recent Linux distribution or add the official Nginx repository (for example as /etc/yum.repos.d/nginx.repo):

[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true

and then update Nginx, replacing the distribution's module packages with the official ones (if any modules were already installed):

yum update nginx
yum remove nginx-mod*
yum install nginx-module-*
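
To verify the result, check the version and the configure flags of the installed binary; with the official packages, ssl_preread support shows up as a stream_ssl_preread configure flag (note that nginx -V prints to stderr):

# print the installed version, then look for the ssl_preread configure flag
nginx -v
nginx -V 2>&1 | grep -o 'with-stream_ssl_preread_module'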

We need to handle both SSL termination and SSL passthrough, and therefore have to work at the TCP level. All subsequent configuration is contained inside a stream section:

stream {
    ...
}
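
Note that, depending on the packaging, stream support may be shipped as a dynamic module rather than compiled in (the official nginx.org packages compile it in, while Red Hat's own packages ship it as a separate module). In the latter case the module has to be loaded at the very top of nginx.conf; the path below is an assumption and varies by distribution:

# only needed if stream support is packaged as a dynamic module
load_module /usr/lib64/nginx/modules/ngx_stream_module.so;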

Luckily we can exploit the ssl_preread feature introduced in Nginx 1.11.5. This module allows us to extract the requested host name (the SNI field) from the SSL handshake, information that is not otherwise available at the TCP level.

We have to create a server listening on port 443 that proxies to a variable derived from the requested domain:

server {
    listen 443;
    proxy_pass $https_name;   # upstream chosen by the map below
    ssl_preread on;           # read the SNI host name without terminating SSL
}

Then the Nginx map feature matches each domain with a corresponding upstream. In our example we perform SSL passthrough by default, as HAProxy does, and SSL termination only for mapped domains:

map $ssl_preread_server_name $https_name {
    myservice.example.com myservice_example_com;
    default https_openshift;
}

Finally, we have to create the upstreams: one for HTTP and one for HTTPS load-balanced traffic, plus one upstream for each mapped domain:

upstream http_openshift {
    server node01.myopenshift.wp.lan:80;
    server node02.myopenshift.wp.lan:80;
    server node03.myopenshift.wp.lan:80;
}

upstream https_openshift {
    server node01.myopenshift.wp.lan:443;
    server node02.myopenshift.wp.lan:443;
    server node03.myopenshift.wp.lan:443;
}

upstream myservice_example_com {
    server 127.0.0.1:8443;
}

To perform SSL termination we have to create a server which exposes the correct certificate and listens on a dedicated port, since port 443 is already bound. All the traffic is then re-routed to a plain HTTP upstream and is thus load balanced among all schedulable nodes.

server {
    listen 127.0.0.1:8443 ssl;

    proxy_pass http_openshift;

    ssl_certificate "/etc/nginx/certs/myservice_example_com.crt";
    ssl_certificate_key "/etc/nginx/certs/private/myservice_example_com.key";
}
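
Every additional domain that needs termination follows the same pattern: one map entry, one upstream pointing to a dedicated local port, and one terminating server on that port. A sketch for a hypothetical second domain other.example.com (the port and certificate paths are arbitrary choices):

# added inside the map block:
#     other.example.com other_example_com;

upstream other_example_com {
    server 127.0.0.1:8444;
}

server {
    listen 127.0.0.1:8444 ssl;

    proxy_pass http_openshift;

    ssl_certificate "/etc/nginx/certs/other_example_com.crt";
    ssl_certificate_key "/etc/nginx/certs/private/other_example_com.key";
}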

Redirecting HTTP to HTTPS

Now we’re able to handle SSL connections, but we’re still missing plain HTTP. We handle port 80 inside an http section because we want to send the client a redirect response that forwards it to HTTPS. If we need to redirect more domains, we can simply add them to the server_name directive of the first server. The second server acts as a default, re-routing all other requests to the load balancer.

http {
    server {
        listen 80;
        server_name  myservice.example.com;
        location / {
            return 301 https://$host$request_uri;
        }
    }

    server {
        listen 80;
        server_name  _;
        location / {
            proxy_pass http://127.0.0.1:6080;
        }
    }
}

We also have to create a server inside the stream section: this way we don’t need to replicate upstreams inside the http section.

Since this server is used only internally, we bind it only to localhost on a non-standard port.

stream {
    ...
    server {
        listen 127.0.0.1:6080;
        proxy_pass http_openshift;
    }
}
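
Once everything is in place, the behavior can be checked from any client. A couple of example probes, using the mapped domain from above:

# expect a 301 redirect to HTTPS
curl -I http://myservice.example.com/

# expect the certificate for the mapped domain, selected via SNI
openssl s_client -connect myservice.example.com:443 \
    -servername myservice.example.com </dev/null 2>/dev/null \
    | openssl x509 -noout -subject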

Conclusions

In this post I detailed our solution for using Nginx as a load balancer in front of OpenShift. As far as I know this is a pretty specific requirement: the official documentation and most tutorials suggest using HAProxy.

It’s certainly also possible to keep HAProxy as the load balancer and put Nginx (or even Apache) in front of it to perform SSL termination, or else to expose everything under a single domain and use HAProxy alone. As part of our standardization effort, however, we decided to invest some time in reducing the software stack and machine count in order to simplify maintenance. At the moment we still have HAProxy in place for API load balancing, but we plan to replace it as well in the near future.

Author

Alessandro Valentini
DevOps Engineer at Würth Phoenix