
Nginx configuration

Configuring nginx properly is a huge topic, and I am afraid it will not fit into a single article on Habr. In this text I have tried to describe the general structure of the config; the more interesting details and particulars may come later. :)

A good starting point for configuring nginx is the config that ships with the distribution, but many of this server's features are not even mentioned in it. A considerably more detailed example is available on Igor Sysoev's website: sysoev.ru/nginx/docs/example.html . Still, let's try instead to build our own config from scratch, with bridge and poetesses. :)

Let's start with the general settings. First, specify the user nginx will run as (running it as root, as everyone knows, is not a good idea :)).

user nobody;

Now let's tell nginx how many worker processes to spawn. Usually a good choice is a number of processes equal to the number of CPU cores in your server, but it is worth experimenting with this setting. If a heavy load on the hard disks is expected, you can start one process per physical disk, since all the work will be limited by its performance anyway.
worker_processes 2;

Specify where to write the error log. Later, this parameter can be overridden for individual virtual servers, so that only global errors, for example those related to server startup, will end up in this log.

error_log /spool/logs/nginx/nginx.error_log notice; # log at the "notice" level and above

Now comes a very interesting section, "events". In it you can set the maximum number of connections a single worker process will handle simultaneously, and the method that will be used to receive asynchronous event notifications from the OS. Of course, you can only choose methods that are available on your OS and were enabled at compile time.

These settings can significantly affect your server's performance. They must be tuned individually, depending on the OS and the hardware. I can give only a few general rules.

Event Modules:
- select and poll are usually slower and load the processor rather heavily, but are available almost everywhere and almost always work;
- kqueue and epoll are more efficient, but are only available on FreeBSD and Linux 2.6, respectively;
- rtsig is a fairly efficient method, and is supported even by very old Linux kernels, but can cause problems with a large number of connections;
- /dev/poll, as far as I know, works on somewhat more exotic systems such as Solaris, and is quite efficient there;

Worker_connections parameter:
- The overall maximum number of clients served will be worker_processes * worker_connections;
- Sometimes even the most extreme values can pay off, such as 128 processes with 128 connections per process, or a single process with worker_connections = 16384 (see the sketch after the events block below). In the latter case, however, you will most likely need to tune the OS.

events {
    worker_connections 2048;
    use kqueue; # we are on BSD :)
}
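
For illustration, here is a sketch of the "one process, many connections" extreme from the list above. The numbers are purely illustrative; worker_rlimit_nofile raises the per-worker file descriptor limit, and the rest of the OS tuning is left out:

worker_processes 1;
worker_rlimit_nofile 32768; # each connection needs a descriptor, so raise the per-worker limit

events {
    worker_connections 16384;
    use epoll; # assuming Linux 2.6 in this sketch
}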

The next section is the largest and contains the most interesting parts. It describes the virtual servers and some parameters common to all of them. I will omit the standard settings present in every config, such as the paths to logs.

http {
    # all the interesting parts live here %)
    # ...
}

Inside this section there can be some pretty interesting parameters.

The sendfile system call appeared in Linux relatively recently. It allows data to be sent to the network while bypassing the step of copying it into the application's address space. In many cases this significantly improves server performance, so the sendfile parameter is almost always worth enabling.

sendfile on;

The keepalive_timeout parameter sets the maximum time a keepalive connection is held open when the user requests nothing over it. Think about exactly how requests are sent to your site and adjust this parameter accordingly. For sites that make heavy use of AJAX it is better to keep the connection longer; for static pages that users will read for a long time it is better to drop the connection sooner. Note that by keeping an inactive keepalive connection open, you are occupying a connection slot that could have been used differently. :)

keepalive_timeout 15;
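
To illustrate the trade-off, here is a sketch with purely illustrative values. The optional second argument of keepalive_timeout is advertised to clients in the Keep-Alive response header:

# an AJAX-heavy application: keep idle connections around longer
keepalive_timeout 60 60; # second value is sent in the Keep-Alive header

# long-read static pages: free the connection slot sooner
keepalive_timeout 5;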

The nginx proxy settings deserve separate mention. Most often nginx is used precisely as a proxy server, so they are quite important. In particular, it makes sense to set the buffer size for proxied requests to at least the expected response size from the backend server. With slow (or, conversely, very fast) backends it makes sense to change the timeouts for waiting on the backend. Remember: the longer these timeouts, the longer your users will wait for a response when the backend stalls.

proxy_buffers 8 64k; # 8 buffers of 64k each for reading a response from the backend
proxy_intercept_errors on; # handle backend responses with status >= 300 with our own error pages
proxy_connect_timeout 1s; # timeout for establishing a connection to the backend
proxy_read_timeout 3s; # timeout between two successive reads from the backend
proxy_send_timeout 3s; # timeout between two successive writes to the backend
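
If your backend also sends bulky response headers, the related proxy_buffer_size directive controls the buffer for the first part of the response, where the headers live. It is not part of the config above, and the value here is illustrative:

proxy_buffer_size 16k; # buffer for the first part of the backend response (the headers)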

A little trick. If nginx serves more than one virtual host, it makes sense to create a "default virtual host" that will handle requests whenever the server cannot match the Host header in the client's request to any other virtual host.

# default virtual host
server {
    listen 80 default;
    server_name localhost;
    deny all;
}

One or more "server" sections can follow. Each of them describes a virtual host (most often a name-based one). For owners of many sites on a single machine, or for hosting providers, this may be the place for a directive like

include /spool/users/nginx/*.conf;
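
Here is a sketch of what one such included file might contain; the file name, domain, and paths are hypothetical:

# /spool/users/nginx/vasya.conf (hypothetical example)
server {
    listen 80;
    server_name vasya.example.com;
    access_log /spool/logs/nginx/vasya.access_log timed;
    root /spool/users/vasya/www;
}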

Everyone else will most likely describe their virtual hosts directly in the main config.

server {
    listen 80;

    # the host names this virtual server responds to
    server_name myserver.ru myserver.com;
    access_log /spool/logs/nginx/myserver.access_log timed;
    error_log /spool/logs/nginx/myserver.error_log warn;
    # ...

Set the default charset for responses.

charset utf-8;

And declare that we do not want to accept request bodies from clients larger than 1 megabyte.

client_max_body_size 1m;

Enable SSI for the server and limit the length of SSI variable values to 1 kilobyte.

ssi on;
ssi_value_length 1024;
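
For reference, pages served by this host can now use SSI commands; a minimal example (the included path is hypothetical):

<!--# include virtual="/footer.html" -->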

Finally, we describe two locations: one leading to the backend, an Apache running on port 9999, and another serving static images from the local file system. With just two locations this is not very meaningful, but with a larger number of them it also makes sense to define a variable holding the server's root directory up front, and then use it in the location descriptions.

set $www_root "/data/myserver/root";

location / {
    proxy_pass http://127.0.0.1:9999; # proxy_pass requires a scheme (http://)
    proxy_set_header X-Real-IP $remote_addr;
    proxy_intercept_errors off;
    proxy_read_timeout 5s; # the backend may be slow here, so allow a longer read timeout
    proxy_send_timeout 3s;
    # ...

A separate block in the root location is dedicated to gzip-compressing the result. This lets you and your users save on traffic. You can tell nginx which file types (or, in our case, which backend responses) should be compressed, and what the minimum file size must be for compression to kick in.

    # ...
    gzip on;
    gzip_min_length 1024;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain application/xml;
}

location /i/ {
    # with "root", the location prefix stays in the path: /i/logo.png
    # is looked up as /data/myserver/root/static/i/logo.png
    root $www_root/static/;
}
}

Thank you all for your attention. And sorry the post turned out rather long.

Source: https://habr.com/ru/post/66764/

