It's no secret that users love it when a site's content is updated more often than once a year. Search engines share this love of dynamic pages: Google, for example, can detect regularly updated blocks on a page and rewards it with a bit of extra karma (read: PageRank).
However, dynamic content combines poorly with heavy load. For a web server, returning a static page is a far simpler task than running the code that generates that page on the fly. In some cases pre-generating all possible page variants can help, but it won't save you if there are too many of them, or the page is updated too often.
A simple example: suppose you had a wonderful static HTML page that nginx served 2,000 times per second (easily achievable). Then a designer draws a new version of the page where logged-in users get a small block with their login, first name and, say, an avatar.
That's it: you can't get away with static anymore, which means running PHP (Perl/Python) for every request, checking the session, crawling through the file system (or, even worse, the database) to find the login and other user information by the SID. Performance sags tenfold. Or does it? :)
This problem can be solved quite easily if you use two useful features provided by the nginx web server.
Feature number one, which has been in nginx since time immemorial, is SSI (Server Side Includes). To use it, we write a small script that reads the session ID from a cookie and returns only that small block with the user's information. That is, a request of the form
GET /get_user_info.php
gives an HTML fragment like <div class="login">Hello, Vasily Pupkin</div>.
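As a rough illustration (a sketch, not the article's actual PHP script), here is what such a backend could look like in Python; the session store and the cookie name "sid" are assumptions:

```python
from http import cookies
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical session store; a real script would look the session up
# in the file system or a database by SID.
SESSIONS = {"abc123": "Vasily Pupkin"}

def render_fragment(cookie_header):
    """Return only the small user-info HTML block for the given Cookie header."""
    jar = cookies.SimpleCookie(cookie_header or "")
    sid = jar["sid"].value if "sid" in jar else None
    name = SESSIONS.get(sid)
    if name is None:
        return '<div class="login"><a href="/login">Log in</a></div>'
    return f'<div class="login">Hello, {name}</div>'

class UserInfoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = render_fragment(self.headers.get("Cookie")).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve it on the port nginx proxies to:
# HTTPServer(("127.0.0.1", 12345), UserInfoHandler).serve_forever()
```

The important property is that the script produces nothing but the fragment itself, so nginx can splice it into the otherwise static page.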
Accordingly, in the page itself we write SSI-include:
<!--# include virtual="/get_user_info/" -->
In order for this construction to work, the corresponding location must be described in the nginx config file:
location /get_user_info/ {
    proxy_pass http://127.0.0.1:12345/get_user_info.php;
    proxy_set_header Cookie $http_cookie; # forward the client's cookies (including sid) to the backend
}
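One detail the snippet above leaves implicit: SSI processing must also be enabled for the location that serves the HTML page containing the include. A minimal sketch (the document root is hypothetical):

location / {
    root /var/www/html; # hypothetical document root with the static page
    ssi on;             # process SSI commands in responses from this location
}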
However, as you have probably already noticed, this construction does not solve the code's performance problem by itself; it only moves the dynamic part into a separate script. And that is exactly what feature number two is for: caching of proxied requests. It appeared in the production branch relatively recently, but it lets you do a lot. :)
So, to start, let's tell nginx where we want to store our cache.
proxy_cache_path /var/nginx/cache
                 levels=1:2 keys_zone=my_cache:64m max_size=1024m
                 inactive=1d;
This line creates a new cache zone called my_cache: the cache files will live in /var/nginx/cache, occupy no more than 1 gigabyte in total, and entries that have not been accessed for 1 day will be removed. The keys_zone size (64m) is the shared memory area where nginx keeps the cache keys.
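To make the levels=1:2 parameter concrete: nginx names each cache file after the MD5 hex digest of the cache key and builds the subdirectories from the tail of that digest (one character for the first level, two for the second). A small Python sketch of the resulting path; the key string is made up:

```python
import hashlib

def cache_file_path(cache_dir, key):
    # nginx names the cache file after the MD5 hex digest of the cache key;
    # with levels=1:2 the last hex character becomes the first-level
    # directory and the two characters before it the second level.
    digest = hashlib.md5(key.encode()).hexdigest()
    return f"{cache_dir}/{digest[-1]}/{digest[-3:-1]}/{digest}"

print(cache_file_path("/var/nginx/cache",
                      "http127.0.0.1:12345/get_user_info/abc123"))
```

Splitting files into small subdirectories this way keeps any single directory from accumulating millions of entries.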
Now let's tell nginx that we want to cache requests for our location.
location /get_user_info/ {
    proxy_pass http://127.0.0.1:12345/get_user_info.php;
    proxy_set_header Cookie $http_cookie; # forward the client's cookies (including sid)
    proxy_cache my_cache;
    proxy_cache_valid 200 3h; # cache successful responses for 3 hours :)
    proxy_cache_valid any 0; # don't cache anything else (500, 404 and so on)
    proxy_cache_use_stale updating error timeout invalid_header http_500 http_502 http_503 http_504 http_404; # serve a stale copy while updating, or when the backend errors out
    proxy_cache_key "$scheme$proxy_host$uri$is_args$args$cookie_sid";
}
Pay attention to the last line. It is what lets us serve each individual user the correct version of the cached block. Its key part is the $cookie_sid variable, whose value is always equal to the value of the cookie named "sid" that the user sends you.
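To see why this gives every user their own cache entry, here is a Python sketch of how the key from the config expands; the host, URI and sid values are made up:

```python
def cache_key(scheme, proxy_host, uri, args, cookie_sid):
    # Mirrors proxy_cache_key "$scheme$proxy_host$uri$is_args$args$cookie_sid":
    # $is_args is "?" when the request has query arguments, empty otherwise.
    is_args = "?" if args else ""
    return f"{scheme}{proxy_host}{uri}{is_args}{args}{cookie_sid}"

k1 = cache_key("http", "127.0.0.1:12345", "/get_user_info/", "", "sid_of_vasily")
k2 = cache_key("http", "127.0.0.1:12345", "/get_user_info/", "", "sid_of_petr")
assert k1 != k2  # different sid cookies produce different cache keys
print(k1)  # → http127.0.0.1:12345/get_user_info/sid_of_vasily
```

Because the sid is part of the key, two users requesting the same URI hit two different cache entries, while repeat requests from the same user are served straight from the cache.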
That's it. Now nginx itself will call your script only when necessary and cache the result of its work. Performance will hardly sag, and the load on the server will barely grow.
Anyone interested in the details is referred to the documentation on the nginx author's site:
sysoev.ru/nginx/docs.
UPD: Sorted out the settings and moved the post to the nginx blog.