Use nginx as secure reverse proxy

Main Config

# list of backend instances
upstream lb {
  server IP_ADDRESS_1:443;
  server IP_ADDRESS_2:443;
  server IP_ADDRESS_3:443;
}

server {
  listen 80 default_server;
  listen [::]:80 default_server;

  # Redirect all HTTP requests to HTTPS with a 301 Moved Permanently response.
  return 301 https://$host$request_uri;
}

server {
  listen 443 ssl http2;
  listen [::]:443 ssl http2;
  
  server_name lbfe;

  # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
  ssl_certificate /path/to/signed_cert_plus_intermediates;
  ssl_certificate_key /path/to/private_key;
  ssl_session_timeout 1d;
  ssl_session_cache shared:SSL:50m;
  ssl_session_tickets off;

  # modern configuration. tweak to your needs.
  ssl_protocols TLSv1.2;
  ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
  ssl_prefer_server_ciphers on;

  # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
  add_header Strict-Transport-Security max-age=15768000;

  # OCSP Stapling ---
  # fetch OCSP records from URL in ssl_certificate and cache them
  ssl_stapling on;
  ssl_stapling_verify on;

  ## verify chain of trust of OCSP response using Root CA and Intermediate certs
  ssl_trusted_certificate /path/to/root_CA_cert_plus_intermediates;

  # Optional: specify an internal DNS resolver to resolve your backend instance names
  # resolver <IP DNS resolver>;

  location / {
    proxy_pass https://lb;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
}

Important: If you have a certificate chain file from a certificate authority, add it to your server block using the ssl_trusted_certificate directive. Place this directive after the ssl_certificate_key directive.
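
For example, one valid ordering inside the server block, using the same placeholder paths from the config above:

ssl_certificate /path/to/signed_cert_plus_intermediates;
ssl_certificate_key /path/to/private_key;
ssl_trusted_certificate /path/to/root_CA_cert_plus_intermediates;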

Choose a load balancing method

NGINX load balancing defaults to the round-robin method, which distributes requests across your backends in turn, sending each new request to the next server in the list. In addition, NGINX provides the following load balancing methods:

  • Least-connected

    Incoming traffic is sent to the target server with the lowest number of active connections.

  • IP Hash

    Incoming traffic is routed according to a hash function that uses the client’s IP address as input. This method is particularly useful for use cases that require session persistence.

You can substitute the default round-robin method with an alternative method by adding the appropriate directive to your virtual host’s upstream block. To use the least-connected method, add the least_conn directive:

upstream lb {
  least_conn;
  server 1.2.3.4;
  ...
}

To use the IP hash method, add the ip_hash directive:

upstream lb {
  ip_hash;
  server 1.2.3.4;
  ...
}

Weight your servers

You can adjust your load balancer to send more traffic to certain servers by setting server weights. To set a server weight, add the weight parameter to that server's entry in your upstream block:

upstream lb {
  server 1.2.3.4 weight=3;
  server 2.3.4.5;
  server 3.4.5.6 weight=2;
}

If you use this configuration with the default round-robin load balancing method, for every six incoming requests, three are sent to 1.2.3.4, one is sent to 2.3.4.5, and two are sent to 3.4.5.6. The IP hash method and least-connected method also support weighting.
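
For example, a sketch that combines the weights above with the least-connected method (same hypothetical addresses as before):

upstream lb {
  least_conn;
  server 1.2.3.4 weight=3;
  server 2.3.4.5;
  server 3.4.5.6 weight=2;
}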

Configure health checks

NGINX automatically performs passive health checks on your backends. By default, NGINX marks a server as unavailable for 10 seconds after a single failed or timed-out request (max_fails=1, fail_timeout=10s). You can customize this behavior for individual servers by using the max_fails and fail_timeout parameters.

server 1.2.3.4 max_fails=2 fail_timeout=15s;
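
In the context of the upstream block from the main config, a sketch might look like this (only the first backend has its thresholds changed; the others keep the defaults):

upstream lb {
  server IP_ADDRESS_1:443 max_fails=2 fail_timeout=15s;
  server IP_ADDRESS_2:443;
  server IP_ADDRESS_3:443;
}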

Set server state

If you want to set a specific backend server to be used only when the other servers are unavailable, you can do so by adding the backup parameter to the server definition in your upstream directive:

server 1.2.3.4 backup;

Similarly, if you know that a server will remain unavailable for an indefinite period of time, you can set the down parameter to mark it as permanently unavailable:

server 1.2.3.4 down;
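
Putting both parameters in the context of the upstream block from the main config, a sketch might look like this (assuming, purely for illustration, that the second backend is out for maintenance and the third is held in reserve):

upstream lb {
  server IP_ADDRESS_1:443;
  server IP_ADDRESS_2:443 down;
  server IP_ADDRESS_3:443 backup;
}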
