Last week, Maxim Dounin
posted a message to the English-language Nginx developer mailing list asking readers to test a patch that adds full support for keep-alive connections (including chunked responses) to http, fastcgi and memcached backend servers, via the upstream keepalive module.
To avoid misunderstandings, let me remind you that Nginx has excellent support for HTTP/1.1 and keep-alive client connections. However, persistent connections to http backends have not been supported until now, and there were reasons for this.
Among the benefits expected from this change:
- Fewer system calls per request, which can improve performance, especially where the cost of establishing a connection is comparable to the response time (a fast backend, or high network latency); [1]
- Less risk of exhausting the supply of short-lived (ephemeral) ports on heavily loaded balancers, and no need to lower the TIME_WAIT interval to free them faster; [2]
- FreeBSD users who rely on Unix sockets under heavy file-system load are also likely to benefit, given that these subsystems share a common resource in the kernel. [3]
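For reference, a minimal sketch of what a configuration using the upstream keepalive module might look like. The backend address and the connection count here are illustrative, not taken from the patch itself:

```nginx
upstream backend {
    server 127.0.0.1:8080;

    # Cache up to 10 idle keep-alive connections to this
    # backend in each worker process.
    keepalive 10;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
    }
}
```

The exact directives required by the experimental patch are described in the mailing-list instructions linked in this post, so treat the fragment above only as a general shape of an upstream block with the keepalive directive.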
The proposed patch targets Nginx versions 1.0.5 and 1.1.0; it is experimental but passes basic tests. The author
encourages everyone to take part in more intensive testing. As Igor Sysoev has already
said, the code is expected to be merged into one of the releases of the 1.1.x branch, which is under active development at the moment.
For detailed instructions, follow the link:
mailman.nginx.org/pipermail/nginx-ru/2011-August/042069.html
The latest version of the patch can be found here:
nginx.org/patches/patch-nginx-keepalive-full-2.txt

P.S. Do not get carried away with the keepalive value in the upstream directive: it is multiplied by the number of worker processes. If worker_processes is set to 4 and keepalive to 10, you can end up with up to 4 * 10 == 40 keep-alive connections to the backend. Do not overestimate what your backend can handle.
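The arithmetic from the P.S. in configuration form (the numbers are just an example):

```nginx
# 4 worker processes, each caching up to 10 idle upstream connections:
# up to 4 * 10 == 40 keep-alive connections may be held open to the backend.
worker_processes 4;

http {
    upstream backend {
        server 127.0.0.1:8080;
        keepalive 10;   # per worker process, not per server instance
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
        }
    }
}
```

So when choosing the keepalive value, budget for worker_processes times that value on the backend side.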