NGiNX Traffic Split

 
 

NGiNX traffic split, or traffic distribution, can be very handy in many use cases; for example, it can improve the security of our websites or web applications. In this tutorial we will show you how to make use of this great NGiNX feature by implementing a simple traffic-distribution setup that protects a backend web application or website using two separate backend upstreams. The same traffic-management principle, explained below, can be applied to many other backend services and stacks.

Table of contents

Context – NGiNX Traffic Split
NGiNX Read-Only Upstream Block
NGiNX Read-Write Upstream Block
NGiNX HTTP Block Configuration

Context – NGiNX Traffic Split

As we said previously, we will keep this tutorial as simple as possible with a basic example of splitting traffic using NGiNX. Assume we have a WordPress website hosted on four web servers, and we want to secure it by giving Read-Write permissions to a single IP address and Read-Only permissions to all other IP addresses. In short, only our office IP or VPN will have full permissions, for example to upload media files to our WordPress marketing website; all other IPs will have Read-Only permissions. This way we make sure that no one except authorized IPs gains full control over our website.

In our scenario we have four servers with the following IP addresses and roles:

10.0.0.1 - NGiNX Load Balancer
10.1.1.1 - Read-Only Web Server
10.1.1.2 - Read-Only Web Server
10.1.1.3 - Read-Only Web Server
10.1.1.4 - Read-Write Web Server

NGiNX Read-Only Upstream Block

Now that we know the infrastructure architecture, we need to define our upstream blocks: one upstream for the Read-Only backend servers and a second upstream for the Read-Write backend server.

On our NGiNX Load Balancer (10.0.0.1) we add the following lines to create a Read-Only upstream block called readonly-backend:



	## NGiNX ReadOnly Backend Server(s) ##

	upstream readonly-backend {

		ip_hash;
		server 10.1.1.1:80;
		server 10.1.1.2:80;
		server 10.1.1.3:80;

	}


As you can see, we have defined the three backend web servers that will serve all regular visitors; the ip_hash directive keeps each visitor pinned to the same backend server. Now let's move to the next step, where we define the upstream block for Read-Write.
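If session stickiness is not required, ip_hash can be swapped for one of NGiNX's other standard balancing directives. A sketch (the weight and failure-detection values here are illustrative, not part of our setup):

	## Alternative: least-connections balancing with per-server tuning ##
	upstream readonly-backend {

		least_conn;                                              # route to the server with fewest active connections
		server 10.1.1.1:80 weight=2 max_fails=3 fail_timeout=30s;
		server 10.1.1.2:80;
		server 10.1.1.3:80;

	}

With no balancing directive at all, NGiNX defaults to round-robin across the listed servers.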

NGiNX Read-Write Upstream Block

The second upstream block, named readwrite-backend, is much the same as the readonly-backend block, but it contains only the 10.1.1.4 server, which will later be used for all our internal users, such as the office IP or VPN. Let's add the following lines to the same file:



	## NGiNX ReadWrite Backend Server(s) ##

	upstream readwrite-backend {

		server 10.1.1.4:80;

	}
 

We are now done configuring the upstream blocks for the traffic split, but we still have to add a few more lines to filter the traffic.

NGiNX HTTP Block Configuration

This particular block handles the traffic distribution between our two upstreams. We will match our office or VPN IP, for example 180.1.2.3, in an if statement. Note that we compare the client address exactly with =, rather than with an unanchored regular expression, so an address like 180.1.2.30 will not match by accident.


    server {

        ## Any other config that we may have ##

        ## Traffic Split Configuration ##

        location / {

                proxy_set_header        Accept-Encoding   "";
                proxy_set_header        Host              $http_host;
                proxy_set_header        X-Forwarded-By    $server_addr:$server_port;
                proxy_set_header        X-Forwarded-For   $remote_addr;
                proxy_set_header        X-Forwarded-Proto $scheme;
                proxy_set_header        X-Real-IP         $remote_addr;

                ## Default backend (ReadOnly) ##
                proxy_pass  http://readonly-backend;

                ## Send traffic to the ReadWrite backend if the client IP is exactly 180.1.2.3 ##
                if ( $remote_addr = 180.1.2.3 ) {

                        proxy_pass http://readwrite-backend;

                }

                proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;

        }

        ## All other config below this line ##

    }
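If more than one trusted IP or range needs Read-Write access, stacking if statements becomes awkward. A cleaner alternative is NGiNX's geo and map directives, which select the upstream through a variable. A sketch (the $readwrite_allowed and $backend_pool variable names and the 10.2.0.0/16 VPN range are illustrative, not part of our setup):

    ## Flag trusted client addresses ##
    geo $readwrite_allowed {
        default       0;
        180.1.2.3/32  1;   # office IP
        10.2.0.0/16   1;   # illustrative VPN range
    }

    ## Pick the upstream based on the flag ##
    map $readwrite_allowed $backend_pool {
        0 readonly-backend;
        1 readwrite-backend;
    }

    server {
        location / {
            proxy_pass http://$backend_pool;
        }
    }

Note that both geo and map must live at the http level, outside any server block; proxy_pass with a variable resolves named upstreams defined in the same configuration.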

Our short tutorial about NGiNX traffic split, or traffic distribution, ends here. All that is left is to make sure the configuration passes the config test by running nginx -t in a terminal window; if no errors show up, an NGiNX reload (nginx -s reload, or systemctl reload nginx on systemd hosts) will put our solution into effect.


About this page

Article: NGiNX Traffic Split
Published: 03/07/2019
Updated: 12/08/2019
