
Accelerate Nginx in 5 minutes

Try to repeat it yourself.

As a rule, a properly configured Nginx server on Linux can handle 500,000-600,000 requests per second. But this figure can be increased significantly. Keep in mind that the settings described below were used in a test environment, and they may not be suitable for your production servers.

A moment of banality.

yum -y install nginx 

Just in case, let's make a backup of the original config.

cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.orig
vim /etc/nginx/nginx.conf

And now we can start tinkering!

Let's start with the worker_processes directive. If Nginx does CPU-intensive work (for example, SSL or gzipping), it is optimal to set this directive to the number of processor cores. A higher value pays off only when a very large amount of static content is served.

# This number should be, at maximum, the number of CPU cores on your system.
worker_processes 24;
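The core count is easy to check from the shell; a quick sketch for Linux (note that nproc honors CPU affinity, so the two numbers can differ inside containers):

```shell
# Count CPU cores to size worker_processes.
grep -c ^processor /proc/cpuinfo
nproc
```

On modern Nginx versions, worker_processes auto; picks this value for you.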

Also, worker_processes multiplied by worker_connections from the events section gives the maximum possible number of clients.

# Determines how many clients will be served by each worker process.
worker_connections 4000;
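As a quick sanity check of that formula, with the values used in this article:

```shell
# Max clients = worker_processes * worker_connections
echo $((24 * 4000))   # prints 96000
```

So these settings cap the server at 96,000 concurrent clients.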

The last directive I want to cover here is worker_rlimit_nofile . It specifies how many file descriptors Nginx will use. Two descriptors must be allocated per connection, even for static files (images / JS / CSS): one for the connection to the client, and one to open the static file. Thus, the value of worker_rlimit_nofile should be twice the value of max clients. On the system, this limit can be set from the command line with ulimit -n 200000 or via /etc/security/limits.conf .

# Number of file descriptors used for Nginx.
worker_rlimit_nofile 200000;
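To make the raised descriptor limit survive reboots, it can also be set in /etc/security/limits.conf. A sketch, assuming the Nginx workers run as the user nginx (adjust the user name to your setup):

```
# /etc/security/limits.conf -- raise the open-file limit for the nginx user
nginx soft nofile 200000
nginx hard nofile 200000
```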

Now let's deal with logging. First, let's leave only critical errors.

# Only log critical errors.
error_log /var/log/nginx/error.log crit;

If you are completely fearless and want to disable error logging entirely, remember that error_log off will not help you: you will simply get the whole log in a file named off. To actually disable error logging, do this:

# Fully disable error logging.
error_log /dev/null crit;

But access logs are not so scary to turn off completely.

# Disable access log altogether.
access_log off;

Or, at the very least, enable buffered writes for it.

# Buffer log writes to speed up IO.
access_log /var/log/nginx/access.log main buffer=16k;

Nginx supports a number of methods for handling connections. The most effective for Linux is the epoll method.

# The effective method, used on Linux 2.6+, optimized to serve many clients with each thread.
use epoll;

To make Nginx accept as many connections as possible, enable the multi_accept directive. Keep in mind, however, that if worker_connections is set too low, its limit can be exhausted quickly.

# Accept as many connections as possible, after nginx gets notification about a new connection.
multi_accept on;

Of course, we cannot do without caching information about open file descriptors and frequently accessed files.
I advise you not to copy the values of the caching directives verbatim, but to play with them, choosing the best ones for your environment.

# Caches information about open FDs, frequently accessed files.
open_file_cache max=200000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;

The sendfile directive activates copying of data between file descriptors inside the kernel, which is much more efficient than the read() + write() combination, which requires transferring data to and from user space.

# Sendfile copies data from one FD to another from within the kernel.
sendfile on;

After enabling sendfile , you can force Nginx to send HTTP response headers in a single packet, rather than in separate parts.

# Causes nginx to attempt to send its HTTP response head in one packet, instead of using partial frames.
tcp_nopush on;

For keep-alive connections, you can disable buffering ( Nagle's algorithm ). This is useful when small amounts of data are sent frequently in real time and timely delivery matters more than throughput. A classic example is mouse hover events.

# Don't buffer data-sends (disable Nagle algorithm).
tcp_nodelay on;

It is worth paying attention to two more directives for keep-alive connections. Their purpose looks obvious.

# Timeout for keep-alive connections. Server will close connections after this time.
keepalive_timeout 30;

# Number of requests a client can make over the keep-alive connection.
keepalive_requests 1000;

To free the memory allocated for sockets, enable the reset_timedout_connection directive. It lets the server close the connections of clients that have stopped responding.

# Allow the server to close the connection after a client stops responding.
reset_timedout_connection on;

You can also significantly reduce the timeouts of the client_body_timeout and send_timeout directives (both default to 60 seconds). The first limits the time for reading the request body from the client. The second limits the time for transmitting the response to the client: if the client does not start reading the data within this period, Nginx closes the connection.

# Send the client a "request timed out" if the body is not loaded by this time.
client_body_timeout 10;

# If the client stops reading data, free up the stale client connection after this much time.
send_timeout 2;

And, of course, data compression. The only (and obvious) plus: reduced traffic. The only (and obvious) minus: it does not work in MSIE 6 and below. You can disable compression for these browsers with the gzip_disable directive, specifying the special mask "msie6" as the value. It corresponds to the regular expression "MSIE [4-6]\." but works faster (thanks to hell0w0rd for the comment ).

# Compression.
gzip on;
gzip_min_length 10240;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
gzip_disable "msie6";
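To get a feel for how much gzip can save on text-like content, here is a standalone sketch using the command-line gzip (not Nginx itself) on a highly repetitive 20 KB sample; real-world savings on HTML/CSS/JS will be smaller but still substantial:

```shell
# Compare raw vs gzipped size of a highly compressible 20 KB text sample.
raw=$(head -c 20000 /dev/zero | tr '\0' 'a' | wc -c)
gz=$(head -c 20000 /dev/zero | tr '\0' 'a' | gzip -c | wc -c)
echo "raw=${raw} gzipped=${gz}"
```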

Perhaps that is all I wanted to talk about. Let me repeat once more: do not copy the above settings one-to-one. I advise applying them one at a time, each time running a load-testing utility (for example, Tsung ). It is very important to understand which settings really speed up your web server. Methodical testing will save you a lot of time.

P.S. All settings in one piece, for the fearlessly lazy:

# This number should be, at maximum, the number of CPU cores on your system.
worker_processes 24;

# Number of file descriptors used for Nginx.
worker_rlimit_nofile 200000;

# Only log critical errors.
error_log /var/log/nginx/error.log crit;

events {
    # Determines how many clients will be served by each worker process.
    worker_connections 4000;

    # The effective method, used on Linux 2.6+, optimized to serve many clients with each thread.
    use epoll;

    # Accept as many connections as possible, after nginx gets notification about a new connection.
    multi_accept on;
}

http {
    # Caches information about open FDs, frequently accessed files.
    open_file_cache max=200000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    # Disable access log altogether.
    access_log off;

    # Sendfile copies data from one FD to another from within the kernel.
    sendfile on;

    # Causes nginx to attempt to send its HTTP response head in one packet, instead of using partial frames.
    tcp_nopush on;

    # Don't buffer data-sends (disable Nagle algorithm).
    tcp_nodelay on;

    # Timeout for keep-alive connections. Server will close connections after this time.
    keepalive_timeout 30;

    # Number of requests a client can make over the keep-alive connection.
    keepalive_requests 1000;

    # Allow the server to close the connection after a client stops responding.
    reset_timedout_connection on;

    # Send the client a "request timed out" if the body is not loaded by this time.
    client_body_timeout 10;

    # If the client stops reading data, free up the stale client connection after this much time.
    send_timeout 2;

    # Compression.
    gzip on;
    gzip_min_length 10240;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
    gzip_disable "msie6";
}

Source: https://habr.com/ru/post/198982/

