NGINX can take advantage of the SO_REUSEPORT socket option, which is available in modern versions of operating systems such as DragonFly BSD and Linux (kernel 3.9 and newer). This option allows several listening sockets to be opened on the same address and port at once; the kernel then distributes incoming connections between them.
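As a quick sanity check (a minimal sketch assuming a Linux build environment, not part of NGINX or the original article), one can simply try to set the option on a throwaway socket: on kernels older than 3.9 the setsockopt() call fails with ENOPROTOOPT.

```c
/* Sketch: probe whether the running kernel accepts SO_REUSEPORT.
 * On Linux kernels older than 3.9 setsockopt() fails with ENOPROTOOPT. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>

int main(void)
{
    int one = 1;
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }

    if (setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)) == 0) {
        puts("SO_REUSEPORT is supported");
    } else {
        printf("SO_REUSEPORT is not supported: %s\n", strerror(errno));
    }

    close(fd);
    return 0;
}
```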
SO_REUSEPORT has many potential applications. For example, some programs use it to update their executable code on the fly (NGINX has long offered this capability through a different mechanism). In NGINX, enabling the option can improve performance in some cases by reducing lock contention.
Without SO_REUSEPORT, a single listening socket is shared among several worker processes, and each of them tries to accept new connections from it. With SO_REUSEPORT, there are multiple listening sockets, one per worker process; the operating system kernel decides which of them a new connection goes to, and therefore which worker process will ultimately handle it.
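The per-worker-socket picture can be sketched at the system-call level. The following is a minimal illustration, not NGINX source code; the port 8080, the worker count of 4, and the OK reply are arbitrary assumptions. Each worker sets SO_REUSEPORT before bind(), so all of them can bind the same address and port, and the kernel alone decides which worker's socket receives each new connection.

```c
/* Minimal sketch: several worker processes, each with its own
 * SO_REUSEPORT listening socket on the same port. Assumes Linux 3.9+. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define WORKERS 4
#define PORT    8080

static void worker(void)
{
    int one = 1;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;

    /* Set SO_REUSEPORT *before* bind(); every worker can then bind
     * the same address:port. */
    if (setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)) < 0) {
        perror("setsockopt");
        exit(1);
    }

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(PORT);

    if (bind(fd, (struct sockaddr *) &addr, sizeof(addr)) < 0) {
        perror("bind");
        exit(1);
    }
    listen(fd, 128);

    for (;;) {
        /* No accept mutex: this worker only sees the connections the
         * kernel has already assigned to its own socket. */
        int conn = accept(fd, NULL, NULL);
        if (conn < 0)
            continue;
        write(conn, "OK\n", 3);
        close(conn);
    }
}

int main(void)
{
    for (int i = 0; i < WORKERS; i++) {
        if (fork() == 0)
            worker();
    }
    /* The parent just waits; a real server would supervise its workers. */
    pause();
    return 0;
}
```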
To enable SO_REUSEPORT in the http or stream modules, it is enough to add the reuseport parameter to the listen directive, as shown in the examples:

```nginx
http {
    server {
        listen 80 reuseport;
        server_name example.org;
        ...
    }
}

stream {
    server {
        listen 12345 reuseport;
        ...
    }
}
```
The reuseport parameter also automatically disables accept_mutex for such a socket, since the mutex is not needed in this mode.
In a benchmark where NGINX returned a short OK string instead of a file, three configurations were compared: accept_mutex on (the default), accept_mutex off, and reuseport. Enabling reuseport increased the number of requests per second by a factor of two to three and reduced both latency and its fluctuations.
In a second benchmark, reuseport showed a similar reduction in latency, and its spread decreased even further (by almost an order of magnitude). Other tests also show good results from using the option. With reuseport, the load was distributed evenly across the worker processes. With accept_mutex on there was an imbalance at the start of the test, and with accept_mutex off all worker processes consumed more CPU time.

| Configuration | Latency (ms) | Latency stdev (ms) | CPU Load |
|---|---|---|---|
| Default | 15.65 | 26.59 | 0.3 |
| accept_mutex off | 15.59 | 26.48 | 10 |
| reuseport | 12.35 | 3.15 | 0.3 |
In these benchmarks the rate of new connections is high, but each request requires little processing, and the greatest gain from the reuseport option is achieved when the load matches this pattern. For this reason the reuseport option is not available for the mail module, since mail traffic definitely does not fit these conditions. We recommend that everyone make their own measurements to confirm that reuseport actually helps, rather than blindly enabling the option wherever possible. Some tips on testing NGINX performance can be found in Konstantin Pavlov's talk at the nginx.conf 2014 conference.

Support for SO_REUSEPORT in NGINX is based on solutions contributed to the project by outside developers; the NGINX team combined their ideas into an implementation it considers ideal.

Source: https://habr.com/ru/post/259403/