NGiNX Reverse Proxy Caching Setup
NGiNX reverse proxy caching can significantly boost the loading speed of your website or web application when enabled and properly configured. NGiNX is, at least for now, one of the most popular pieces of web server software, but it has other impressive capabilities: it can easily act as a reverse proxy or even as a load balancer within any stack, among many other use cases. In this very short tutorial we will focus on its caching capability, more precisely caching with NGiNX when used as a reverse proxy for backend web servers.
Table of contents
Scenario
NGiNX Reverse Proxy Caching Configuration
Tips and Tricks
Scenario
Assuming that we have a few web servers behind a load balancer, we would like to cache pretty much everything using NGiNX. With most of the content cached, we can serve it to our visitors faster; not only is it cached, it will also be served as static content. Having a caching layer in front makes sense because most requests won't have to travel through the load balancer all the way back to our back-end web servers, and less travel time means a lower response time.
NGiNX Reverse Proxy Caching Configuration
Below you can see the code used for the http
configuration block that will cache our content. It can be placed either in the nginx.conf
file or in one of our custom .conf
files.
http {
    proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=STATIC:10m inactive=24h max_size=1g;

    server {
        location / {
            proxy_pass http://1.2.3.4;
            proxy_set_header Host $host;
            proxy_buffering on;
            proxy_cache STATIC;
            proxy_cache_valid 200 1d;
            proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;
        }
    }
}
The lines above are pretty much self-explanatory, but if you find any of them difficult to understand, please consult the NGiNX documentation website, which is excellent and always there for you.
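To verify that caching actually works, one option is to expose the cache status to clients. The add_header directive below is an addition not present in the configuration above; the $upstream_cache_status variable is provided by NGiNX itself:

```nginx
location / {
    proxy_pass http://1.2.3.4;
    proxy_cache STATIC;
    # Expose the cache result (MISS, HIT, EXPIRED, STALE, ...) in a response
    # header; handy while testing, can be removed in production.
    add_header X-Cache-Status $upstream_cache_status;
}
```

A quick check with curl -I against your site should then show X-Cache-Status: MISS on the first request and HIT on subsequent ones while the cached entry is still valid.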
/data/nginx/cache
is the folder where NGiNX will automatically store all cached content, using two levels of subfolders beneath the root one; this is controlled by the levels=1:2
argument and can be tweaked accordingly.
The maximum amount of cached content is specified by max_size=1g
. Some will find this value too high and the whole caching process pointless, but that is not true.
proxy_pass http://1.2.3.4;
points to the back-end; this can be a load balancer or a pair of web servers defined within an upstream
block.
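If the back-end is a pair of web servers rather than a single address, a minimal upstream block might look like the sketch below; the pool name and server addresses are placeholders, not part of the setup above:

```nginx
http {
    # Hypothetical back-end pool; replace the addresses with your own servers.
    upstream backend {
        server 10.0.0.11;
        server 10.0.0.12;
    }

    server {
        location / {
            # Reference the pool by name instead of a fixed IP.
            proxy_pass http://backend;
        }
    }
}
```

By default NGiNX distributes requests across the pool in round-robin fashion, so this doubles as a basic load balancer.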
Tips and Tricks
The best way to cache and serve cached content is to mount /data/nginx/cache
as a tmpfs
(in-RAM filesystem) drive; depending on how much RAM you can allocate for caching, you can adjust the max_size
value accordingly.
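As a sketch, assuming you can spare 1 GB of RAM for the cache, the folder could be mounted as tmpfs with an entry like this in /etc/fstab (the size should match or exceed max_size):

```
# 1 GB in-RAM filesystem for the NGiNX cache folder
tmpfs /data/nginx/cache tmpfs defaults,size=1g 0 0
```

After adding the entry, running mount -a applies it without a reboot. Keep in mind that tmpfs contents are lost on restart; NGiNX handles this gracefully by simply repopulating the cache.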
Avoid creating many caching levels (keys), like levels=1:200
, as this can slow things down; keeping this value low will make NGiNX respond faster, since it has fewer computational operations to perform.
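To see why, it helps to know how levels maps a cached response to a file path: NGiNX takes the MD5 hash of the cache key and builds subdirectory names from its trailing characters. With levels=1:2, a hypothetical hash would be stored roughly like this:

```
# Cache key hash: b7f54b2df7773722d382f4809d65029c
# levels=1:2 -> a 1-character dir, then a 2-character dir,
# both taken from the end of the hash
/data/nginx/cache/c/29/b7f54b2df7773722d382f4809d65029c
```

Shallow, short directory levels keep lookups cheap while still avoiding the slowdown of putting huge numbers of files in a single folder.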