Nginx Reverse Proxy Configuration
This guide explains how to configure Nginx as a reverse proxy to serve multiple applications from a single server, enabling load balancing and support for WebSockets.
Introduction
A reverse proxy acts as an intermediary between clients and backend servers. It receives client requests, forwards them to the appropriate backend server, and then returns the server's response to the client. This offers several advantages, including:
- A single public entry point for multiple applications running on one server.
- Load balancing across several backend servers.
- Centralized SSL/TLS termination at the proxy.
- WebSocket passthrough for real-time applications.
This tutorial assumes you have a basic understanding of Linux command line and Nginx installation. For installation instructions, please refer to Nginx Installation.
Prerequisites
Before you begin, ensure you have the following:
- A Linux server with Nginx installed. Ubuntu/Debian based systems are assumed for commands.
- Root or sudo privileges.
- At least two backend applications running on different ports (e.g., an API on port 3000 and a web app on port 8080).
- Domain names or subdomains pointing to your server's IP address.
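If you don't yet have two backends to test against, a throwaway stand-in is enough. The sketch below is a plain Python 3 standard-library server (not part of this guide's required stack): it answers every request with its own port and path, so you can tell which backend handled a proxied request.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

PORT = 8080  # run a second copy with PORT = 3000 for the API examples


class EchoHandler(BaseHTTPRequestHandler):
    """Answer every GET with the port constant and the path that was hit."""

    def do_GET(self):
        body = f"backend on port {PORT}, path {self.path}\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


# To run it in the foreground:
# HTTPServer(("127.0.0.1", PORT), EchoHandler).serve_forever()
```

Start one copy per backend port, then point the `proxy_pass` examples below at them.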
Step 1: Basic Reverse Proxy Configuration
We'll start by configuring Nginx to proxy requests for a single application.
First, create a new Nginx configuration file for your application. Replace `your_domain.com` with your actual domain name.
sudo nano /etc/nginx/sites-available/your_domain.com
Paste the following configuration into the file, replacing `your_backend_app_ip` and `your_backend_app_port` with the actual IP address and port of your application. If your application is running on the same server, you can use `127.0.0.1`.
server {
    listen 80;
    server_name your_domain.com www.your_domain.com;

    location / {
        proxy_pass http://127.0.0.1:8080; # Replace with your backend app's IP and port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
- `listen 80;`: Nginx will listen on port 80 for incoming HTTP requests.
- `server_name your_domain.com www.your_domain.com;`: Specifies the domain names this server block will respond to.
- `location / { ... }`: This block defines how to handle requests for the root path (`/`).
- `proxy_pass http://127.0.0.1:8080;`: This is the core directive. It tells Nginx to forward requests to `http://127.0.0.1:8080`.
- `proxy_set_header ...;`: These directives pass important information about the original client request to the backend application.
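On the backend side, these values arrive as ordinary request headers (`X-Real-IP`, `X-Forwarded-Proto`, and the rewritten `Host`). A minimal sketch of reading them, using a bare WSGI callable as a hypothetical stand-in for whatever framework your app runs on:

```python
def app(environ, start_response):
    """Hypothetical WSGI app: echo what the proxy told us about the client."""
    client_ip = environ.get("HTTP_X_REAL_IP", environ.get("REMOTE_ADDR", "?"))
    scheme = environ.get("HTTP_X_FORWARDED_PROTO", "http")
    host = environ.get("HTTP_HOST", "?")
    body = f"client={client_ip} scheme={scheme} host={host}\n".encode()
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

Without these `proxy_set_header` lines, the backend would only ever see `127.0.0.1` as the client address.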
Save and close the file (Ctrl+X, Y, Enter).
Next, enable the site by creating a symbolic link to the `sites-enabled` directory:
sudo ln -s /etc/nginx/sites-available/your_domain.com /etc/nginx/sites-enabled/
Test your Nginx configuration for syntax errors:
sudo nginx -t
If the test is successful, reload Nginx to apply the changes:
sudo systemctl reload nginx
Now, when you access `your_domain.com` in your browser, Nginx will forward the request to your backend application running on port 8080.
Step 2: Serving Multiple Applications
To serve multiple applications, you'll create separate server blocks for each application, often using different subdomains or paths.
Example: Serving a second application on `api.your_domain.com`
Create a new configuration file for your API:
sudo nano /etc/nginx/sites-available/api.your_domain.com
Paste the following configuration, adjusting `api.your_domain.com` and the backend port (e.g., 3000):
server {
    listen 80;
    server_name api.your_domain.com;

    location / {
        proxy_pass http://127.0.0.1:3000; # Replace with your API's IP and port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Enable this new site and test/reload Nginx as done in Step 1:
sudo ln -s /etc/nginx/sites-available/api.your_domain.com /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
Now, `api.your_domain.com` will serve your API, while `your_domain.com` continues to serve your main web application.
Serving applications on different paths of the same domain
You can also route different paths to different applications on the same domain. For instance, serving the API at `your_domain.com/api/`.
Edit your main `your_domain.com` server block:
sudo nano /etc/nginx/sites-available/your_domain.com
Add a new `location` block for the API path:
server {
    listen 80;
    server_name your_domain.com www.your_domain.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /api/ {
        # The trailing slash on the proxy_pass URI strips the /api/ prefix
        proxy_pass http://127.0.0.1:3000/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Note the trailing slash in `proxy_pass http://127.0.0.1:3000/;`. When `proxy_pass` includes a URI part (here just `/`), Nginx replaces the portion of the request URI matched by the `location` prefix with that URI, so a request to `your_domain.com/api/users` is forwarded to `http://127.0.0.1:3000/users`. If your backend expects to receive the `/api/` prefix unchanged, use `proxy_pass http://127.0.0.1:3000;` (no URI part) instead. (If `/api/` should serve static files rather than a backend, use an `alias` directive in this location in place of `proxy_pass`, not alongside it.)
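The prefix handling is easy to get wrong, so here is the substitution rule in miniature: when `proxy_pass` carries a URI part, Nginx swaps it in for the matched `location` prefix. A toy illustration of that rule (not Nginx source code):

```python
def map_uri(request_uri: str, location: str, proxy_uri: str) -> str:
    """Replace the matched location prefix with the proxy_pass URI part."""
    assert request_uri.startswith(location)
    return proxy_uri + request_uri[len(location):]


# location /api/ { proxy_pass http://127.0.0.1:3000/; }
print(map_uri("/api/users", "/api/", "/"))     # → /users
# location /api/ { proxy_pass http://127.0.0.1:3000/v2/; }
print(map_uri("/api/users", "/api/", "/v2/"))  # → /v2/users
```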
Test and reload Nginx.
Step 3: WebSocket Support
WebSockets are crucial for real-time communication. Nginx needs specific headers to handle WebSocket connections correctly.
Edit your server block configuration (e.g., `/etc/nginx/sites-available/your_domain.com`):
sudo nano /etc/nginx/sites-available/your_domain.com
Add the following headers within the `location` block that handles your WebSocket-enabled application:
server {
    listen 80;
    server_name your_domain.com www.your_domain.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
- `proxy_http_version 1.1;`: WebSocket connections require HTTP/1.1.
- `proxy_set_header Upgrade $http_upgrade;`: Passes the client's `Upgrade` header to the backend.
- `proxy_set_header Connection "upgrade";`: Sets the `Connection` header to `upgrade` for the backend. By default Nginx sends `Connection: close` to proxied servers, which would prevent the protocol upgrade.
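For context on why these headers must reach the backend: during the opening handshake, the backend derives a `Sec-WebSocket-Accept` value from the client's `Sec-WebSocket-Key`, which travels alongside the `Upgrade` header. If the proxy drops those headers, the handshake never completes. The derivation, shown here as an illustration using the sample key from RFC 6455:

```python
import base64
import hashlib


def accept_key(client_key: str) -> str:
    """Sec-WebSocket-Accept = base64(SHA-1(key + fixed GUID)), per RFC 6455."""
    GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"
    digest = hashlib.sha1((client_key + GUID).encode()).digest()
    return base64.b64encode(digest).decode()


# RFC 6455's sample client key and the accept value the backend must return:
print(accept_key("dGhlIHNhbXBsZSBub25jZQ=="))  # → s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```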
Test and reload Nginx.
Step 4: Load Balancing
Load balancing distributes incoming traffic across multiple backend servers, improving performance and availability.
Define an `upstream` block to list your backend servers. This block typically goes inside the `http` context in your main Nginx configuration file (`/etc/nginx/nginx.conf`) or in a separate file included in the `http` block.
Edit your main Nginx configuration:
sudo nano /etc/nginx/nginx.conf
Add an `upstream` block like this:
http {
    # ... other http configurations ...

    upstream my_backend_app {
        server 192.168.1.10:8080;
        server 192.168.1.11:8080;
        server 192.168.1.12:8080 weight=5; # Higher weight means more traffic
        # server 192.168.1.13:8080 backup; # Backup server, used if others fail
    }

    # ... other http configurations ...
}
- `upstream my_backend_app { ... }`: Defines a group of backend servers named `my_backend_app`.
- `server a.b.c.d:port;`: Lists your backend servers.
- `weight`: Assigns a relative weight to servers.
- `backup`: Designates a server as a backup.
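To make the effect of `weight` concrete, here is a simplified model of weighted round-robin in Python. Nginx actually uses a "smooth" variant that interleaves picks rather than grouping them, but the per-cycle proportions are the same; the addresses are the example ones from the upstream block above.

```python
from collections import Counter


def weighted_rotation(servers: dict[str, int]) -> list[str]:
    """One full scheduling cycle: each server repeated `weight` times."""
    rotation = []
    for server, weight in servers.items():
        rotation.extend([server] * weight)
    return rotation


rotation = weighted_rotation({
    "192.168.1.10:8080": 1,
    "192.168.1.11:8080": 1,
    "192.168.1.12:8080": 5,
})
# Out of every 7 requests in a cycle, the weight=5 server receives 5.
print(Counter(rotation))
```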
Now, modify your `server` block to use this upstream group:
server {
    listen 80;
    server_name your_domain.com www.your_domain.com;

    location / {
        proxy_pass http://my_backend_app; # Use the upstream group name
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket headers if needed
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
Test and reload Nginx. Nginx will now distribute traffic across the servers defined in the `my_backend_app` upstream block.
Step 5: SSL/TLS Configuration
Securing your applications with HTTPS is crucial. You can terminate SSL at the Nginx reverse proxy level.
You'll need SSL certificates. You can obtain free certificates from Let's Encrypt using Certbot.
Install Certbot:
sudo apt update
sudo apt install certbot python3-certbot-nginx
Obtain and install certificates for your domain(s):
sudo certbot --nginx -d your_domain.com -d www.your_domain.com
Certbot will automatically modify your Nginx configuration files to enable HTTPS and set up automatic renewal. It will create new server blocks for port 443 and redirect HTTP traffic to HTTPS.
Your Nginx configuration for `your_domain.com` will look something like this after Certbot:
server {
    server_name your_domain.com www.your_domain.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket headers if needed
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/your_domain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/your_domain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
And a separate server block for HTTP to HTTPS redirection:
server {
    if ($host = www.your_domain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = your_domain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name your_domain.com www.your_domain.com;
    return 404; # managed by Certbot
}
Troubleshooting
- 502 Bad Gateway: This usually means Nginx cannot reach your backend application.
  * Check if your backend application is running.
  * Verify the `proxy_pass` address and port are correct.
  * Check firewall rules on the server and between servers if they are separate.
  * Examine Nginx error logs: `sudo tail -f /var/log/nginx/error.log`.
- WebSocket connection issues:
  * Ensure `proxy_http_version 1.1;`, `proxy_set_header Upgrade $http_upgrade;`, and `proxy_set_header Connection "upgrade";` are correctly configured.
  * Some backend frameworks might require specific configurations for WebSocket handling.
- Site not loading or Incorrect site serving:
  * Check `server_name` directives in your Nginx configuration files. They must match the domain you are accessing.
  * Ensure you have enabled the correct site configuration by creating a symlink in `sites-enabled`.
  * Check for syntax errors with `sudo nginx -t`.
- SSL certificate errors:
  * Ensure your domain's DNS records are correctly pointing to your server's IP.
  * Verify that Certbot completed successfully.
  * Check Nginx logs for SSL-related errors.
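The first 502 checks above can be scripted. A small sketch that tests whether a backend port accepts TCP connections, using the example host and port from this guide:

```python
import socket


def backend_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


print(backend_reachable("127.0.0.1", 8080))
```

If this prints `False` for a port Nginx proxies to, the 502 is coming from the backend being down or firewalled, not from the Nginx configuration.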