In this article I will describe the fundamental differences between Apache and Nginx, the frontend-backend architecture, and how to set up Apache as the backend with Nginx as the frontend. I will also describe a technique that speeds up the web server: gzip_static + yuicompressor.
Nginx
Nginx is a lightweight server: it starts the specified number of worker processes (usually one per CPU core), and each process, in a loop, accepts new connections and processes the current ones. This model makes it possible to serve a large number of clients at a low resource cost. However, with this model you cannot perform lengthy operations while processing a request (for example, mod_php), since that would essentially hang the server. In each iteration of the loop a process essentially performs two operations: read a block of data from somewhere, write it somewhere. "Somewhere" can be a connection to a client, a connection to another web server or a FastCGI process, the file system, or a buffer in memory. The work model is configured with two main parameters:
worker_processes - the number of worker processes to run. Usually set equal to the number of processor cores.
worker_connections - the maximum number of connections handled by a single process. It is limited by the maximum number of open file descriptors on the system (1024 by default on Linux).
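A minimal sketch of how these two directives might look in nginx.conf (the values are illustrative assumptions, not tuning recommendations):
# nginx.conf fragment
worker_processes 4;            # e.g. a machine with 4 cores

events {
    worker_connections 1024;   # per worker; limited by the OS file descriptor limit
}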
Apache
Apache is a heavyweight server (it should be noted that, if desired, it can be made fairly light, but that will not change its architecture). It has two main work models: prefork and worker.
With the prefork model, Apache creates a new process to handle each request, and that process does all the work: it accepts the request, generates the content and delivers it to the client. This model is configured with the following parameters:
StartServers - the number of processes to start when the web server starts.
MinSpareServers - the minimum number of idle processes. This is needed so that processing of an incoming request can start faster; the web server will spawn additional processes to keep the specified number available.
MaxSpareServers - the maximum number of idle processes. This is needed so as not to take up extra memory; the web server will kill surplus processes.
MaxClients - the maximum number of clients served in parallel. The web server will not start more than the specified number of processes.
MaxRequestsPerChild - the maximum number of requests a process will handle, after which the web server kills it. Again, this is done to save memory, since memory in the process will gradually "leak".
This model was the only one supported by Apache 1.3. It is stable and does not require multithreading support from the system, but it consumes a lot of resources and is slightly slower than the worker model.
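For illustration, the prefork parameters might be set roughly like this in the Apache configuration (the numbers are example assumptions, not tuning advice):
<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients          150
    MaxRequestsPerChild 1000
</IfModule>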
With the worker model, Apache creates several processes with several threads in each, and each request is handled entirely in a separate thread. It is slightly less stable than prefork, because a thread crash can bring down the whole process, but it works a little faster while consuming fewer resources. This model is configured with the following parameters:
StartServers - the number of processes to start when the web server starts.
MinSpareThreads - the minimum number of idle threads in each process.
MaxSpareThreads - the maximum number of idle threads in each process.
ThreadsPerChild - the number of threads each process starts when it is created.
MaxClients - the maximum number of clients served in parallel. In this case it sets the total number of threads across all processes.
MaxRequestsPerChild - the maximum number of requests a process will handle, after which the web server kills it.
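Similarly, a sketch of the worker parameters (again, the numbers are only examples):
<IfModule mpm_worker_module>
    StartServers          2
    MinSpareThreads      25
    MaxSpareThreads      75
    ThreadsPerChild      25
    MaxClients          150
    MaxRequestsPerChild 1000
</IfModule>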
Frontend-backend
The main problem with Apache is that a separate process (or at least a thread) is allocated for each request; that process is also loaded with various modules and consumes a lot of resources. In addition, the process stays in memory until it has delivered all the content to the client. If the client has a slow connection and the content is fairly large, this can take a long time. For example, the server may generate the content in 0.1 seconds yet spend 10 seconds delivering it to the client, occupying system resources all the while.
The frontend-backend architecture is used to solve this problem. Its essence is that the client request arrives at a light server with an architecture like Nginx (the frontend), which redirects (proxies) the request to a heavy server (the backend). The backend generates the content, hands it to the frontend very quickly and frees its system resources. The frontend puts the backend's response into its buffer and can then deliver it to the client slowly and persistently, while consuming far fewer resources than the backend would. Additionally, the frontend can handle requests for static files (css, js, images, etc.) on its own, control access, check authorization and so on.
Setting up the Nginx (frontend) + Apache (backend) bundle
It is assumed that Nginx and Apache are already installed. You need to configure the servers so that they listen on different ports. If both servers are installed on the same machine, it is better to bind the backend only to the loopback interface (127.0.0.1). In Nginx this is configured with the listen directive:
listen 80;
In Apache, this is configured by the Listen directive:
Listen 127.0.0.1:81
Next, you need to tell Nginx to proxy requests to the backend. This is done with the directive
proxy_pass http://127.0.0.1:81;
That is the entire minimal configuration. However, as noted above, serving static files is also better left to Nginx. Suppose we have a typical PHP site: then we only need to proxy requests for .php files to Apache and handle everything else in Nginx (if your site uses mod_rewrite, the rewrites can also be done in Nginx, and the .htaccess files simply thrown away). We also need to take into account that the client's request arrives at Nginx, while the request to Apache is made by Nginx itself, so the original Host HTTP header will not reach Apache, and Apache will see the client's address (REMOTE_ADDR) as 127.0.0.1. The Host header is easy to pass along, but Apache determines REMOTE_ADDR itself. This problem is solved with mod_rpaf for Apache. It works as follows: Nginx knows the client's IP and adds a special HTTP header (for example, X-Real-IP) in which it writes this IP; mod_rpaf reads this header and writes its contents into Apache's REMOTE_ADDR variable. Thus, the PHP scripts executed by Apache will see the client's real IP.
Now the configuration becomes a little more involved. First, make sure that the same virtual host with the same root exists in both Nginx and Apache. Example for Nginx:
server {
    listen 80;
    server_name example.com;
    root /var/www/example.com/;
}
Apache example:
<VirtualHost 127.0.0.1:81>
    DocumentRoot "/var/www/example.com/"
    ServerName example.com
</VirtualHost>
Now let's add the settings for the scheme described above:
Nginx:
server {
    listen 80;
    server_name example.com;
    location / {
        root /var/www/example.com/;
        index index.php;
    }
    location ~ \.php($|\/) {
        proxy_pass http://127.0.0.1:81;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
    }
}
Apache:
# mod_rpaf settings
RPAFenable On
RPAFproxy_ips 127.0.0.1
RPAFheader X-Real-IP
<VirtualHost 127.0.0.1:81>
    DocumentRoot "/var/www/example.com/"
    ServerName example.com
</VirtualHost>
The regular expression \.php($|\/) covers two cases: a request for *.php and a request for *.php/foo/bar. The second case is needed for many CMSs to work. When example.com/ is requested, the request will be rewritten to example.com/index.php (since we defined the index file) and will likewise be proxied to Apache.
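Strictly speaking, everything that does not match the .php location is already served by Nginx through location /; if you also want to add cache headers for static files, a separate location can be used, for example (the extensions and the expires value are assumptions to adapt to your site):
location ~* \.(css|js|gif|jpe?g|png|ico)$ {
    root    /var/www/example.com/;
    expires 7d;    # example cache lifetime for static content
}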
Speeding things up: gzip_static + yuicompressor
Gzip on the web is a good thing: text files compress very well, traffic is saved, and content reaches the user faster. Nginx can compress on the fly, so that is not a problem. However, compressing a file takes time, including processor time. This is where the Nginx gzip_static directive comes in. It works as follows: if, when a file is requested, Nginx finds a file with the same name and an additional ".gz" extension, for example style.css and style.css.gz, then instead of compressing style.css it reads the already compressed style.css.gz from disk and serves it as the compressed version of style.css.
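Such a pair can be created by hand, for example with a one-off command like this (gzip -9 gives maximum compression; an automated version follows below):
gzip -c -9 style.css > style.css.gz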
Nginx settings will look like this:
http {
    ...
    gzip_static on;
    gzip on;
    gzip_comp_level 9;
    gzip_types application/x-javascript text/css;
    ...
}
Great: we generate a .gz file once, and Nginx serves it many times. In addition, we will minify the css and js with YUI Compressor. This utility minimizes css and js files as much as possible, removing whitespace, shortening names and so on.
To have everything compressed, and even updated, automatically, we can use cron and a small script. Add the following command to cron so that it runs once a day:
/usr/bin/find /var/www \( -iname "*.js" -or -iname "*.css" \) -mmin -1500 | xargs -n 1 -P 2 packfile.sh
In the -P 2 parameter specify the number of cores of your processor; do not forget to set the full path to packfile.sh and to change /var/www to your web directory.
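As an example, the crontab entry might look something like this (the time and the path to packfile.sh are assumptions, adjust them to your setup):
# recompress recently changed css/js once a day at 04:00
0 4 * * * /usr/bin/find /var/www \( -iname "*.js" -or -iname "*.css" \) -mmin -1500 | xargs -n 1 -P 2 /var/www/gzip/packfile.sh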
Put the following into packfile.sh:
#!/bin/sh
java -jar /var/www/gzip/yuicompressor-2.4.2.jar "$1" | gzip -c -9 > "$1.gz"
Do not forget to specify the correct path to yuicompressor-2.4.2.jar and to make the script executable.
That's all, ready to listen to criticism :)