Introduction
As you know, nginx can cache a server response and serve it on subsequent requests instead of hitting the backend, thereby saving server resources. The speed at which such cached pages are served is sometimes amazing; for the sake of that speed it is sometimes no shame to move quite a few site functions to JavaScript just to be able to cache one more page entirely (for example, rendering the user-authorization widget in JS, so that the page is identical for all users except for that widget).
I have used nginx page caching many times and ran into a couple of inconveniences:
- You can easily cache all pages at once, but is that acceptable for dynamic sites or sites with authorization?
- You can cache several URL patterns separately, such as /album/*, but do you really want to rewrite the nginx config every time a new section of the site appears?
If, on top of everything, the project is deployed by the admins across several servers, each running its own nginx, rewriting configs becomes a non-trivial task.
I solved this for myself with the X-Accel-Expires header and a few lines in the nginx configuration.
Task
For my page-caching solution, I set the following requirements:
1. Enable or disable caching of each page from PHP.
2. Manage caching by full URL.
3. Control the cache lifetime of each individual page or site section.
4. Be able to change cache lifetimes and all cache settings at any time without rewriting the nginx configuration.
Solution
My setup uses FastCGI: PHP generates the content, nginx serves the static files.
We will solve the problem in several steps:
1. Define a new cache zone in the nginx configuration. This line goes above the "server" section of the site's configuration, for example:
fastcgi_cache_path /tmp/mycache levels=2 keys_zone=mycachename:5m inactive=2m max_size=1500m;
Here keys_zone=mycachename:5m names the zone and sizes its in-memory key store, inactive=2m evicts entries that have not been requested for two minutes, and max_size caps the on-disk cache at 1500 MB.
2. In the server section, set page caching as follows:
location / {
    root $PROJECT_ROOT/data/;
    fastcgi_cache mycachename;
    fastcgi_cache_valid 200 301 302 304 30m;
    fastcgi_cache_key "$request_method|$http_if_modified_since|$http_if_none_match|$host|$request_uri";
    fastcgi_cache_use_stale error timeout invalid_header;
    fastcgi_pass_header "X-Accel-Expires";
    fastcgi_pass 127.0.0.1:9900;
    fastcgi_index index.php;
}
Here we pay attention to the following parameters:
fastcgi_cache - the name of the zone defined in step 1.
fastcgi_cache_valid - which response codes to cache. The last parameter, the cache time, will in effect be the maximum cache time for a page, i.e. we can manage the cache time from 1 second up to 30 minutes.
fastcgi_cache_key - the cache key, so that genuinely different requests are not cached in the same file.
fastcgi_pass_header "X-Accel-Expires" - optional, but useful for debugging: it lets you see for how long a page actually ended up cached.
3. In php we add the following things:
Set the default header:
<?php header("X-Accel-Expires: 0");
so that by default no response is cached.
And in the function responsible for displaying data to the user:
function flushResponse() {
    if (self::$cache_time) {
        // Overwrite the default "X-Accel-Expires: 0" set earlier.
        header("X-Accel-Expires: " . self::$cache_time);
    }
    // ... send the response body ...
}
Here we check whether the page can be cached (in my case, some logic or each page's configuration yields the page's cache time in seconds) and set the X-Accel-Expires header once more, overwriting the 0 set earlier.
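For example, the per-page cache time could come from a simple table of URL patterns. A minimal sketch (the patterns and the getCacheTime function name are my illustration, not the article's actual code):

```php
<?php
// Map URL-prefix patterns to cache lifetimes in seconds (0 = do not cache).
// The patterns below are illustrative; a real project would load them
// from its own page configuration.
function getCacheTime(string $uri): int
{
    $rules = [
        '#^/album/#'   => 600, // album pages: cache for 10 minutes
        '#^/news/#'    => 60,  // news: cache briefly
        '#^/profile/#' => 0,   // personalized pages: never cache
    ];
    foreach ($rules as $pattern => $seconds) {
        if (preg_match($pattern, $uri)) {
            return $seconds;
        }
    }
    return 0; // default: not cached
}

// The value is then sent to nginx, overwriting the default of 0:
// header("X-Accel-Expires: " . getCacheTime($_SERVER['REQUEST_URI']));
```

This keeps all caching rules in one place in PHP, satisfying requirements 1-4 without touching the nginx config.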
What's happening?
Thus, nginx does the following when processing the request:
1. Checks whether the requested page is in the cache (the cache file name is determined by the parameter fastcgi_cache_key)
2. If it is there, nginx serves the content without contacting PHP or opening a connection to it.
3. Otherwise it runs the PHP script and inspects the X-Accel-Expires header in the response. If it is 0, nothing is cached; otherwise the page is cached for the number of seconds specified in the header. In our case the maximum time is 30 minutes, set by the last argument of fastcgi_cache_valid.
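To watch this decision-making in practice, nginx can report the cache status of every response via its standard $upstream_cache_status variable. This directive is a debugging addition of mine, not part of the setup above:

```nginx
# Inside the location block from step 2: expose whether the response
# came from the cache (HIT), went to PHP (MISS), or was refreshed (EXPIRED).
add_header X-Cache-Status $upstream_cache_status;
```

The first request to a cacheable page then shows X-Cache-Status: MISS, and repeated requests within its cache lifetime show HIT.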
Conclusion
Thus, you can conveniently manage nginx's powerful caching without rewriting the server configuration. It was precisely that rewriting of the configuration and spelling out caching conditions in the configs that kept me away from this technology for a long time.
I have not described all the delights of the nginx caching mechanism, which has been discussed many times already. This article is meant to help people start using this technology: people who find it easier to write logic in their programming language than in the nginx configuration.