I provide services for setting up web and database servers. The other day I was approached by Ivan Usachev, the owner of the portal
ochevidets.ru, with a request to save his site from grinding to a halt:
at peak times pages took up to 5 minutes to load.
UPDATE: This article was written in 2010, and some things have changed since: new versions of the programs have been released, and some nginx directives have been renamed or added. Keep this in mind.
The first thing to check in a case like this is whether the problem lies with the ISP through which the client reaches the site. The Internet is a large conglomerate of networks, and a packet's path from the data center to the client can be slowed down for reasons beyond the site's control. So we check the site's load speed through other providers.
There are convenient services for this:
http://site-perf.com/ (it is better to choose a European access point)
http://tools.pingdom.com/
The problem was not with the providers. We reason further.
The site serves fairly heavy video content, while its PHP scripts are quite simple and do not consume much CPU time. I learned this from the output of the Linux top command.
The server has a "mirror" RAID whose disks each read at about 60 MB/s (the server is several years old). It is connected to the Internet via a 100 Mbit/s channel, which is roughly equivalent to 10 megabytes per second, given that about 20% of the traffic is protocol overhead.
Obviously, the linear read speed is well above the throughput of the Ethernet interface. At first glance, there should be no problems.
What if the software RAID had dropped into a degraded state? Its performance can fall many times over in that case. I looked at the output of cat /proc/mdstat — all disks show [UU], everything is fine.
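The [UU]/[U_] check can also be scripted. A minimal Python sketch; the sample mdstat text below is illustrative, not output from the real server:

```python
# Detecting a degraded md array from /proc/mdstat output.
# A healthy two-disk mirror shows [UU]; a failed member appears as "_".
sample_mdstat = """\
Personalities : [raid1]
md0 : active raid1 sda1[0] sdb1[1]
      104320 blocks [2/2] [UU]
"""

def degraded_arrays(mdstat_text):
    """Return names of arrays whose status line shows a failed member."""
    bad, current = [], None
    for line in mdstat_text.splitlines():
        if " : active" in line:
            current = line.split()[0]          # array name, e.g. "md0"
        elif current and "[" in line and "_" in line.split("[")[-1]:
            bad.append(current)                # status like [U_] -> degraded
    return bad

print(degraded_arrays(sample_mdstat))  # [] -> all arrays healthy
```

On a real machine you would feed it `open("/proc/mdstat").read()` instead of the sample string.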
Reasoning further: the problem appeared for the first time, and attendance had grown, which means many parallel requests to the disk subsystem. With many users working with the site at once, the hard drive constantly repositions its heads, and the read speed can drop dramatically because of that unpleasant thing called seek time.
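The seek-time effect is easy to estimate on the back of an envelope. The sketch below uses the ~60 MB/s sequential figure from the article plus assumed values for seek latency and read size:

```python
# Rough estimate of how seeks destroy disk throughput.
SEQ_MB_S = 60.0        # sequential read speed, MB/s (from the article)
SEEK_S = 0.010         # average seek + rotational latency, s (assumption)
CHUNK_MB = 64 / 1024   # 64 KB read per seek (assumption)

# Each random read pays the seek cost plus the transfer time.
time_per_read = SEEK_S + CHUNK_MB / SEQ_MB_S
random_mb_s = CHUNK_MB / time_per_read

print(f"{random_mb_s:.1f} MB/s")  # ~5.7 MB/s instead of 60 MB/s
```

Under these assumptions, a heavily seeking disk delivers roughly a tenth of its sequential speed, which is why parallel video downloads hurt so much.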
In addition, the free command showed that 300 MB of the 8 GB swap was in use, which is a very bad sign in itself: paging to and from swap can absolutely kill performance.
The same top output showed that processes spent 70-80% of their time in the iowait state, i.e. waiting for input/output. Strictly speaking, the network card also counts as I/O, but according to the munin graphs peak traffic was 70 Mbit/s, so the channel had not reached its limit.
Of course, one should keep in mind that the provider's channel may not deliver the declared 100 Mbit/s, and it could be the link that limits traffic.
But given that even ordinary directory browsing in mc was sluggish, the problem clearly lay in the disk subsystem.
There is a wonderful iotop utility for such cases, but it requires a kernel built with CONFIG_TASK_IO_ACCOUNTING, CONFIG_TASK_DELAY_ACCT and CONFIG_TASKSTATS.
The kernel was old and lacked these options, and since stopping the server during the day was undesirable, I could not use this diagnostic tool.
I assumed that if the most frequently used files ended up in the cache, the number of head repositionings would drop several-fold, and that would save the server.
The most elegant way I know to cache a huge amount of data is a RAID card supporting MaxIQ technology: an SSD of 64 GB or more is attached to the hardware RAID controller, and voila — a 64 GB cache! Beautiful, but expensive: to get started you need a controller with MaxIQ support (about $500) and a special SSD (from $1000).
There was room in the server for the controller, but not for an additional drive: the case is a very compact 1U. So I decided to rely on the operating system's file cache instead; to enlarge it, you simply add RAM.
The server had 4 free memory slots, and I picked the right modules by the markings on the ones already installed.
2 GB was installed; I suggested adding another 8 GB. I ordered it from an online store, and while waiting for delivery decided to tune up the remaining details.
The server uses nginx as a frontend, and I decided to see what could be done with it.
First I set
worker_processes 6;
The server has 8 cores in total, and besides nginx they also have to serve apache and mysql.
A fragment of the nginx config:
http {
....
# Use sendfile(). sendfile() copies data between the file descriptor
# and the socket inside the kernel, avoiding extra copies to user space.
sendfile on;
# Send the headers and the beginning of the file in a single packet
tcp_nopush on;
tcp_nodelay on;
# Disable the access log to cut down disk writes.
access_log off;
# Enable gzip compression
gzip on;
# Do not compress responses smaller than 1000 bytes
gzip_min_length 1000;
gzip_buffers 16 8k;
# Compress these MIME types in addition to text/html
gzip_types text/plain text/css text/xml application/x-javascript application/xml application/xhtml+xml;
}
The site serves video in flv and mp4 formats. Pseudo-streaming support matters here: with it, the player can jump to an arbitrary point in a clip without downloading everything before it; without it, seeking forward means waiting for the download to reach that point.
For flv pseudo-streaming, nginx must be built with --with-http_flv_module.
For x264 (mp4) video, stock nginx at the time needed a separate third-party module.
I enabled flv streaming:
http {
...
server {
...
location ~ \.flv$ {
flv;
}
...
}
...
}
One more thing: until then, nginx had been proxying all requests, including video, to apache. The config looked like this:
server {
listen 80;
server_name www.ochevidets.ru ochevidets.ru;
...
location / {
limit_conn one 30;
proxy_pass http://127.0.0.1:8128/;
...
}
...
}
and a separate location for flv files:
location ~ \.flv$ {
limit_conn one 2;
proxy_pass http://127.0.0.1:8128;
proxy_buffer_size 2m;
...
}
I made nginx serve the video files itself, without involving apache. The new config:
location /video/ {
root /home/ochev/html/ochevidets.ru/;
}
location ~ /video/.*\.flv$ {
root /home/ochev/html/ochevidets.ru/;
flv;
# no more than 2 simultaneous connections per IP
limit_conn one 2;
}
and in the server section:
# after the first megabyte has been sent,
# limit the download speed to 150 KB/s
limit_rate_after 1m;
set $limit_rate 150k;
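A quick sanity check on these numbers. The ~10 MB/s useful-payload figure comes from earlier in the article; the rest is simple arithmetic:

```python
# How many rate-limited viewers fit in the channel?
USEFUL_MB_S = 10.0        # usable payload on the 100 Mbit/s link, MB/s
PER_STREAM_KB_S = 150.0   # limit_rate per connection, KB/s

# Convert MB/s to KB/s and divide by the per-stream cap.
streams = int(USEFUL_MB_S * 1024 / PER_STREAM_KB_S)
print(streams)  # 68
```

So the channel accommodates roughly 68 parallel full-speed streams, a reasonable margin for this site's peak load.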
150 KB/s is quite enough for smooth playback, while download managers will just have to wait a little.
In php.ini I enabled eAccelerator, an accelerator and opcode cacher for PHP scripts.
The eaccelerator settings:
zend_extension="//eaccelerator.so"
eaccelerator.shm_size="8" ; amount of shared memory, MB
eaccelerator.cache_dir="/home/ochev/tmp/eaccelerator" ; disk cache directory, used in addition to shared memory
eaccelerator.enable="1" ; enable eAccelerator
eaccelerator.optimizer="1" ; enable the code optimizer
eaccelerator.check_mtime="1" ; check script modification times, recache changed files
eaccelerator.debug="0"
eaccelerator.filter=""
eaccelerator.shm_max="0"
eaccelerator.shm_ttl="0"
eaccelerator.shm_prune_period="0"
eaccelerator.shm_only="0" ; allow caching to disk, not only to shared memory
eaccelerator.compress="0" ; do not compress data in shared memory
I checked the result with ab, the benchmarking utility shipped with apache:
ab -c 10 -n 20 -t 20 http://www.ochevidets.ru/
Here the concurrency is 10, the number of requests is 20, and the time limit is 20 seconds.
Time per request came out at 0.35 s. For comparison, the runs with eaccelerator versus plain mod_php gave 4.11 s against 22.53 s and 92.67 s.
Next, mysql. I enabled the slow query log with --log-slow-queries=file_name. It turned out that some queries ran for up to 30 seconds, all of the form:
... ORDER BY RAND() ... LIMIT ...
ORDER BY RAND() makes the server generate a random value for every row and sort the entire table just to return a few random rows; on a table with a couple of million rows this is disastrous. The developers rewrote these queries.
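The usual fix is to pick a random primary key instead of sorting the whole table. I don't know which rewrite the developers actually used; below is a sketch of the common approach, with SQLite standing in for MySQL and made-up table and column names:

```python
# ORDER BY RAND() vs. a random-primary-key lookup.
import random
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE videos (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO videos VALUES (?, ?)",
                 [(i, f"clip {i}") for i in range(1, 1001)])

# Slow way: generates a random value per row and sorts the whole table.
slow = conn.execute(
    "SELECT title FROM videos ORDER BY RANDOM() LIMIT 1").fetchone()

# Fast way: jump straight to a random id via the primary-key index.
# ">= ?" also tolerates gaps left by deleted rows.
rand_id = random.randint(1, 1000)
fast = conn.execute(
    "SELECT title FROM videos WHERE id >= ? LIMIT 1", (rand_id,)).fetchone()

print(slow[0], fast[0])
```

On a thousand rows both are instant; on millions of rows the first variant is a full sort while the second is a single index lookup.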
nginx was now serving static files quickly, but were browsers caching them? In other words, was the server sending the right caching headers?
On a Linux machine this is easy to check:
curl --head url_of_the_page_or_file
If there is no Linux box at hand, online services such as be1.ru/stat will show the response headers. The headers responsible for caching are:
Cache-control — whether and for how long the response may be cached
Expires — the date and time after which the cached copy is considered stale
Pragma — a legacy HTTP/1.0 header, superseded by Cache-control
Etag — an identifier of the resource version; if it has not changed, the server can answer 304 Not Modified instead of resending the body
Let's check one of the site's images:
ochevidets.ru/userfiles/1209/images/2010/12/ekaterinburg/0.jpg
Status: HTTP/1.1 301 Moved Permanently
Server: nginx/0.8.53
Date: Wed, 01 Dec 2010 19:47:26 GMT
Content-type: text/html; charset=iso-8859-1
Connection: keep-alive
Location: http://www.ochevidets.ru/userfiles/1209/images/2010/12/ekaterinburg/0.jpg
Cache-control: max-age=3600
Expires: Wed, 01 Dec 2010 20:47:26 GMT
Content-length: 404
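As a quick sanity check, the Expires value in this response is exactly the Date header plus the max-age of 3600 seconds:

```python
# Verify that Expires = Date + max-age for the headers shown above.
from datetime import datetime, timedelta

fmt = "%a, %d %b %Y %H:%M:%S GMT"
date = datetime.strptime("Wed, 01 Dec 2010 19:47:26 GMT", fmt)
expires = datetime.strptime("Wed, 01 Dec 2010 20:47:26 GMT", fmt)

assert expires - date == timedelta(seconds=3600)
print("max-age and Expires agree")
```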
Instead of the image, the server returns a redirect. It turned out that pages referenced images via ochevidets.ru, without the www, while the server 301-redirects every such request to www.ochevidets.ru. Each image therefore cost the browser an extra request.
The redirect is configured in .htaccess:
# the old rule, which redirected every request (commented out):
#RewriteCond %{HTTP_HOST} ^ochevidets\.ru$ [NC]
#RewriteCond %{REQUEST_URI} !^/robots\.txt$
#RewriteRule ^(.*)$ http://www.ochevidets.ru/$1 [R=301,L]
#
# redirect only ochevidets.ru page (directory) URLs to www,
# so that static files are served without a redirect
RewriteCond %{HTTP_HOST} ^ochevidets\.ru$ [NC]
RewriteRule ^((.*/)|)$ http://www.ochevidets.ru/$1 [R=301,L]
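The active pattern can be sanity-checked with any regex engine (in a .htaccess context Apache strips the leading slash before matching):

```python
# The RewriteRule pattern matches only the site root and directory-style
# URLs, so requests for static files no longer get redirected.
import re

pattern = re.compile(r"^((.*/)|)$")

assert pattern.match("")        # site root -> redirected
assert pattern.match("news/")   # section page -> redirected
# a static file -> served directly, no redirect:
assert not pattern.match("userfiles/1209/images/2010/12/ekaterinburg/0.jpg")
print("only directory URLs are redirected")
```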
It is better to put such rules in httpd.conf rather than .htaccess: httpd.conf is read once at startup, while .htaccess is processed on every request.
After editing .htaccess, the same request returned:
Status: HTTP/1.1 200 OK
Server: nginx/0.8.53
Date: Wed, 01 Dec 2010 19:53:24 GMT
Content-type: image/jpeg
Connection: keep-alive
Last-modified: Wed, 01 Dec 2010 09:25:20 GMT
Etag: "62957-1511b-49655e24e2000"
Accept-ranges: bytes
Content-length: 86299
Cache-control: max-age=864000
Expires: Sat, 11 Dec 2010 19:53:24 GMT
Now the image is served directly and may be cached for ten days (max-age=864000).
Meanwhile the memory arrived and I installed it. The result: the frequently requested files settled into the file cache, the disks stopped thrashing, and the swap was freed. Pages now load in 2-3 seconds even at peak times.
There is still room for improvement — add more RAM, drop apache in favour of serving everything with nginx, and so on.
UPDATE. Useful additions from the comments:
- mysql tuning from dulepov (thread_cache_size = 36, wait_timeout = 300)
- nginx advice from alexxxst
- WoZ suggested siege instead of ab
- alfa pointed out curl -I as a short form of curl --head