
LAMP + Nginx on a VPS: stable and without any extra headache

The task: host several low-traffic sites on minimal VPS resources. Do it quickly and conveniently, with minimal problems down the road, and without falling over at peak load.

Basic principles:

1. OS - CentOS 6 x86_64, because it is stable, convenient and easy to upgrade.
2. No hand-built software. As the saying goes, "with the command make && make install, any distribution will turn into Slackware."

A small clarification: at the moment I use the v256 plan (256 MB of RAM) from the hosting provider flynet.pro and do not expect heavy load, so most of what follows assumes that amount of RAM; in general, though, these solutions port easily to virtually any plan from any hosting provider.
One more clarification: this hosting is set up "for oneself". Several points that you should take into account if you give outsiders administrative access to the sites are not sufficiently covered here.

Go.
1. Check for updates.
The installation image of the hosting provider may not be particularly fresh.
[root@test ~]# yum update

If there is something to update - update. If not - rejoice.
2. Connect the EPEL repository (http://fedoraproject.org/wiki/EPEL), from which we will install the missing software.
[root@test ~]# rpm -ihv http://download.fedora.redhat.com/pub/epel/6/x86_64/epel-release-6-5.noarch.rpm


3. Install the software we need
[root@test ~]# yum install httpd mysql-server php vsftpd mc phpMyAdmin php-eaccelerator sysstat crontabs tmpwatch


Briefly about the software:
httpd - Apache; the standard version for CentOS 6 is 2.2.15
mysql-server - MySQL 5.1.52
php - PHP 5.3.2
vsftpd - vsftpd 2.2.2, a fairly convenient FTP server
mc - some things are easier to do in mc than at the bare command line.
phpMyAdmin - same idea as mc: managing MySQL databases through phpMyAdmin is simply more convenient.
php-eaccelerator - an accelerator (opcode cache) for PHP. It significantly speeds up scripts and reduces the load on the CPU, and on memory as well.
sysstat - in case we want to see how the system is doing.
crontabs - to run tasks on a schedule.
tmpwatch - a utility for removing stale files.

In fact, several more packages will be installed: everything the requested packages need for their operation will be pulled in as dependencies.
The result is:
Install 44 Package(s)
Upgrade 0 Package(s)

Total download size: 37 M
Installed size: 118 M


4. Use the free command to check whether we have swap; if not, create and attach it. If we do, rejoice and skip this step.
An important point here: active swap usage is very bad. If the swap is actively used, it means something needs to be optimized or cut back. If you cannot optimize or cut back, you will have to move to a more expensive plan. Also keep in mind that the hosting provider may take offense at overly active swap usage.
But having no swap at all is not great either: the OOM killer is a terrible thing. It can casually kill mysqld, and instead of merely slowing down, your sites will go down completely.
Note: there is no point in making the swap larger than the available RAM. It brings no benefit and only eats disk space.

Create a swap as follows:
[root@test /]# dd if=/dev/zero of=/swap bs=1M count=256
[root@test /]# mkswap /swap


Attach it:
[root@test /]# swapon /swap

And so that it gets attached automatically at boot, add this command to /etc/rc.local.
You can check the presence and usage of swap with the top or free commands.
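As an alternative to rc.local, the swap file can also be attached at boot through /etc/fstab. A sketch, assuming the file was created at /swap as above (it is also worth running chmod 600 /swap, since swapon complains about world-readable swap files):

```
/swap    swap    swap    defaults    0 0
```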

5. Turn on and run daemons.
[root@test /]# chkconfig httpd on
[root@test /]# chkconfig mysqld on
[root@test /]# chkconfig crond on

[root@test /]# service httpd restart
[root@test /]# service mysqld restart
[root@test /]# service crond restart


6. Create users for the sites. I prefer the username to match the site's domain.
[root@test /]# adduser testsite.ru
[root@test /]# adduser mysite.ru
[root@test /]# adduser cfg.testsite.ru

Next, inside each user's home directory create two subdirectories: html (which will hold the site's content) and log (where that site's logs will be written), and set the permissions. The permission scheme: the user gets full access, the apache group gets reading and directory listing, everyone else gets nothing.
You can set the permissions by hand, or use a small script:
cd /home
for dir in `ls -1`; do
mkdir /home/$dir/log
mkdir /home/$dir/html
chown -R $dir:apache /home/$dir
chmod -R ug+rX /home/$dir
done
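To see what this permission scheme gives in practice, here is a safe sandbox sketch: it uses a throwaway directory from mktemp instead of /home, and skips the chown, since the apache group may not exist on the machine where you try it.

```shell
# Recreate one "site" layout in a throwaway directory.
site=$(mktemp -d)
mkdir "$site/html" "$site/log"
# Owner: full access; group: read + traverse directories; others: nothing.
chmod -R u+rwX,g+rX,o-rwx "$site"
stat -c '%a' "$site"    # directory mode: 750
```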


7. Configure the web server. Edit /etc/httpd/conf/httpd.conf.
Of the things that really need changing, we tune the prefork module so that it initially eats less memory and keeps its appetite within limits.
The thing is, out of the box Apache is configured to spawn up to 256 worker processes, while a single worker easily takes 20-40 MB (256 * 20 MB = 5 GB). This can easily lead to problems, especially on a modest VPS with only 256 MB of RAM.
So we limit their number to something reasonable based on the available RAM. For example, 5 Apache processes at an average of 30 MB each will take about 150 MB, which is already tolerable.
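The arithmetic above can be wrapped in a tiny sizing helper. A sketch with assumed numbers: 256 MB of total RAM, a rough 100 MB allowance for MySQL and the rest of the system, and ~30 MB per Apache prefork process - substitute your own measurements.

```shell
total_mb=256        # RAM on the VPS
reserved_mb=100     # rough allowance for mysqld, sshd, kernel, etc. (assumption)
per_proc_mb=30      # average size of one Apache prefork process (assumption)
# Integer division: how many Apache workers fit in what is left.
max_clients=$(( (total_mb - reserved_mb) / per_proc_mb ))
echo "MaxClients ~ $max_clients"    # prints: MaxClients ~ 5
```

The result feeds directly into ServerLimit and MaxClients below.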
It was:
<IfModule prefork.c>
StartServers 8
MinSpareServers 5
MaxSpareServers 20
ServerLimit 256
MaxClients 256
MaxRequestsPerChild 4000
</IfModule>


It became:
<IfModule prefork.c>
StartServers 2
MinSpareServers 2
MaxSpareServers 3
ServerLimit 5
MaxClients 5
MaxRequestsPerChild 1000
</IfModule>


Such a setting will not let Apache multiply beyond measure and eat all the RAM. Depending on the actual load, the parameters may need revisiting.
Also, uncomment the line
NameVirtualHost *:80

so that we can host many sites on one IP address.

Next, go to the /etc/httpd/conf.d/ directory and configure our sites.
There you can delete welcome.conf, which disables directory indexes and serves the "Apache 2 Test Page" instead.
Note that the virtual host configs in this directory are applied in alphabetical order.
So that a user who hits the server by IP address does not land on an entirely different one of our sites (whichever comes first in the list), put a file in conf.d named, for example, 000-default.conf with the following content:
<VirtualHost *:80>
ServerName localhost.local
DocumentRoot "/var/www/html"
</VirtualHost>


and put an index.html file with your message of choice into /var/www/html/.

Next, for each of our virtual hosts we create a config file using a pattern like this:
<VirtualHost *:80>

ServerName testsite.ru
ServerAlias www.testsite.ru
ServerAdmin webmaster@testsite.ru
ErrorLog /home/testsite.ru/log/error.log
CustomLog /home/testsite.ru/log/access.log combined
DocumentRoot /home/testsite.ru/html/

<Directory "/home/testsite.ru/html">
Order allow,deny
Allow from all
</Directory>
</VirtualHost>



In these same files you can, to taste, add per-site settings for any modules.

Restart apache and see if everything works.
[root@test /]# service httpd restart


Apache should start normally. Two log files should appear in each site's log directory.
When you access the server by IP address, the file you put in /var/www/html/ should be shown; when you access the site names, you should see the contents of the html directory (most likely empty) and entries appearing in the access.log of the corresponding site.
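Before DNS points at the server, you can check name-based virtual hosting with curl by forcing the Host header. The throwaway local python3 server below only stands in for the VPS so the example is self-contained; on the real machine, point curl at your server's IP and port 80 instead.

```shell
# Start a stand-in web server on a spare port (on the VPS, skip this
# and talk to Apache directly).
docroot=$(mktemp -d)
python3 -m http.server 8099 --directory "$docroot" >/dev/null 2>&1 &
srv=$!
sleep 1
# Ask for "testsite.ru" while actually connecting to 127.0.0.1.
code=$(curl -s -o /dev/null -w '%{http_code}' -H 'Host: testsite.ru' http://127.0.0.1:8099/)
kill $srv
echo "$code"    # 200 means the request was served
```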

8. Configure MySQL. First we delete the test database and set a password for the MySQL root user.
[root@test /]# mysql


mysql> DROP DATABASE test;
mysql> USE mysql;
mysql> UPDATE user SET Password=PASSWORD('MyMysqlPassword') WHERE user='root';
mysql> FLUSH PRIVILEGES;
mysql> quit


With MySQL the problem is roughly the same as with Apache: its appetite for RAM, which is very expensive on a VPS.
To reduce the amount of memory used by the SQL server, we edit /etc/my.cnf as follows:
Add the following to the [mysqld] section:
key_buffer = 16M
max_allowed_packet = 10M
table_cache = 400
sort_buffer_size = 1M
read_buffer_size = 4M
read_rnd_buffer_size = 2M
net_buffer_length = 20K
thread_stack = 640K
tmp_table_size = 10M
query_cache_limit = 1M
query_cache_size = 32M
skip-locking
skip-innodb
skip-networking


and add these lines to the end of the file:
[mysqldump]
quick
max_allowed_packet = 16M

[mysql]
no-auto-rehash

[isamchk]
key_buffer = 8M
sort_buffer_size = 8M

[myisamchk]
key_buffer = 8M
sort_buffer_size = 8M

[mysqlhotcopy]
interactive-timeout


Restart mysqld and make sure everything is fine:
[root@test ]# service mysqld restart


Note also that the "skip-networking" option allows access to the server only from the local machine via a socket. If network access is required, this option must not be enabled.
These settings minimize the memory used by the mysql process and work fine on an unloaded site. But of course you should watch MySQL's runtime statistics and raise the limits as your needs grow.

Further MySQL administration is more convenient through phpMyAdmin.
One caveat: by default, phpMyAdmin is reachable at the /phpMyAdmin path on every one of our sites.
To prevent this, we create a dedicated management site (for example, cfg.testsite.ru) and set it up just like the rest.
Then we move the entire contents of /etc/httpd/conf.d/phpMyAdmin.conf into that site's configuration, and delete phpMyAdmin.conf or move it out of the conf.d directory.
After that, phpMyAdmin will be reachable at /phpMyAdmin/ only on the dedicated site.
And to be able to log in to it from outside, in the site's configuration file we change
<Directory /usr/share/phpMyAdmin/>
Order Deny,Allow
Deny from All
Allow from 127.0.0.1
Allow from ::1
</Directory>

<Directory /usr/share/phpMyAdmin/setup/>
Order Deny,Allow
Deny from All
Allow from 127.0.0.1
Allow from ::1
</Directory>


to
<Directory /usr/share/phpMyAdmin/>
Order Deny,Allow
Deny from All
Allow from 127.0.0.1
Allow from your.ip.address
Allow from ::1
</Directory>

<Directory /usr/share/phpMyAdmin/setup/>
Order Deny,Allow
Deny from All
Allow from 127.0.0.1
Allow from your.ip.address
Allow from ::1
</Directory>


After that, phpMyAdmin will also be reachable from your IP address.

Log in as the root user with the password that was set.
To create a user, go to "Privileges" - "Add a new user".
The username is arbitrary; I prefer to use the site name to reduce confusion.
Host: local (we are creating it for a site that runs right here, after all).
Password: generate one (do not forget to copy it).
Tick "Create database with same name and grant all privileges".
Apply.
As a result, we get a user and a same-named database, with the password you chose.
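For reference, the same thing those clicks do can be expressed in plain SQL (a sketch in MySQL 5.1 syntax; 'testsite' and 'GeneratedPassword' are example values). Note that, unlike a Unix username, a database name cannot contain dots, so testsite.ru would need a name like testsite here.

```sql
CREATE USER 'testsite'@'localhost' IDENTIFIED BY 'GeneratedPassword';
CREATE DATABASE `testsite`;
GRANT ALL PRIVILEGES ON `testsite`.* TO 'testsite'@'localhost';
FLUSH PRIVILEGES;
```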

9. It is often more convenient to upload files to the hosting via FTP; for this we installed vsftpd.
Edit its config /etc/vsftpd/vsftpd.conf.
Disable anonymous login by changing
anonymous_enable=YES

to
anonymous_enable=NO


and uncomment
chroot_local_user=YES


Now, to be able to log in to a specific site's FTP, the corresponding user needs a password:
[root@test /]# passwd testsite.ru


And do not forget that by default a user with a password can also log in via SSH. The easiest way to disable that is to change the user's shell.
[root@test etc]# chsh -s /sbin/nologin testsite.ru


Enable and start vsftpd:
[root@test /]# chkconfig vsftpd on
[root@test /]# service vsftpd start


Check if everything works.

And lastly, a very simple "operational backup", on the principle that you can never have too many backups.
It would be better to use something more proper, but even a poor backup is better than none at all.
Such a backup is a good complement to a full virtual machine backup from the hosting provider, but by no means a replacement for it.
We back up the contents of the sites and the databases, as well as the settings in the /etc/ directory.
Create a /backup/ directory and set its permissions to 700:

[root@test /]# mkdir /backup/
[root@test /]# chmod 700 /backup/


In the /etc/cron.daily/ directory, create the file backup.sh and likewise set its permissions to 700.
[root@test /]# touch /etc/cron.daily/backup.sh
[root@test /]# chmod 700 /etc/cron.daily/backup.sh


The file has the following contents:

#!/bin/sh

# site content
tar -cf - /home/*/html/ | gzip > /backup/sites-`date +%Y-%m-%d`.tar.gz

# databases
mysqldump -u root --password=MyMysqlPassword --all-databases | gzip > /backup/mysql-`date +%Y-%m-%d`.dump.gz

# configs
tar -cf - /etc/ | gzip > /backup/etc-`date +%Y-%m-%d`.tar.gz

# remove backups older than 7 days (without -t, which is tmpwatch's dry-run mode)
tmpwatch -m 7d /backup/


In principle, instead of backing everything up into one heap, it may be better to back things up separately, but then there is a chance you will forget to add something to the backup and regret it when it is needed.
Alternatively, here is a "separate" backup variant, which requires that the site's username and the database name match:

#!/bin/sh
for dir in `ls -1 /home/`; do
tar -cf - /home/$dir/html/ | gzip > /backup/sites-$dir-`date +%Y-%m-%d`.tar.gz
mysqldump -u root --password=MyMysqlPassword $dir | gzip > /backup/mysql-$dir-`date +%Y-%m-%d`.dump.gz
done

# configs
tar -cf - /etc/ | gzip > /backup/etc-`date +%Y-%m-%d`.tar.gz

# remove backups older than 7 days
tmpwatch -m 7d /backup/
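If tmpwatch is not available, or you want to see what the rotation step actually does, the same 7-day cleanup can be sketched with plain find. The throwaway directory and file names below are just for demonstration; on the server you would point it at /backup/.

```shell
backup_dir=$(mktemp -d)    # stands in for /backup/
touch "$backup_dir/fresh.tar.gz"                     # made today, must survive
touch -d '8 days ago' "$backup_dir/stale.tar.gz"     # pretend it is old
# Delete archives whose mtime is more than 7 full days in the past.
find "$backup_dir" -type f -name '*.gz' -mtime +7 -delete
ls "$backup_dir"    # only fresh.tar.gz remains
```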


10. Updates.
Do not forget to update the system from time to time.
[root@test ~]# yum update

Thanks to the RHEL/CentOS policy on software versions, versions stay the same after an update, so the chance of accidentally taking the server down because something quietly changed in a config is very small.
True, this approach has a downside: in three years CentOS 6 will still ship the same software versions as now. But if our goal is stability, that suits us.

11. Testing.
I highly recommend testing the setup once it is done.
The first test item: reboot the server and check that all required daemons start and everything works as expected. In general I recommend not chasing uptime, and rebooting after installing or upgrading any server software that starts automatically.
It is better to learn that Apache is missing from autostart after your own planned reboot than to find out because the hoster had problems, your virtual machine rebooted, and the sites on it were down for half a day.
Next comes load testing with the ab utility (Apache HTTP server benchmarking tool).
In this testing we are interested not so much in the raw numbers as in the server's behavior under load: there should be no dying processes and no active swapping.
For the test we need a working site hosted on this server, and a "typical" page from it. Or, instead of a typical page, you can take the heaviest one.

For example, I am testing on freshly installed Drupal 7.9

Of all of ab's command-line options we need only two: -n, the total number of HTTP requests, and -c, the number of concurrent requests (threads).
During the execution of the test in the second ssh session using top, we observe how the server is doing.

100 requests in 2 threads.

[root@test ~]# ab -n 100 -c 2 http://testsite.ru/


From the ab output I am mainly interested in "Requests per second", "Time per request" and "Failed requests", which give a general idea of server performance.

Failed requests: 0
Requests per second: 6.20 [#/sec] (mean)
Time per request: 322.788 [ms] (mean)


We can see that the server handles about 6 requests per second and spends roughly 322 milliseconds generating one page.
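As a sanity check, ab's mean "Time per request" is just the concurrency divided by "Requests per second": with 2 concurrent requests at 6.20 req/s, each request should take about 2 / 6.20 seconds.

```shell
# concurrency / requests-per-second, converted to milliseconds
awk 'BEGIN { printf "%.1f ms\n", 2 / 6.20 * 1000 }'    # prints: 322.6 ms
```

That matches the measured 322.788 ms within rounding.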

From the top output, the memory allocation and CPU usage are interesting.

Tasks: 62 total, 3 running, 59 sleeping, 0 stopped, 0 zombie
Cpu(s): 19.9%us, 5.3%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.4%si, 74.5%st
Mem: 244856k total, 151624k used, 93232k free, 3752k buffers
Swap: 262136k total, 0k used, 262136k free, 76604k cached


Swap: 0k used - very good.
93232k free + 76604k cached - in effect, about 170 megabytes of free memory.

100 requests in 5 threads.

[root@test ~]# ab -n 100 -c 5 http://testsite.ru/


Failed requests: 0
Requests per second: 6.21 [#/sec] (mean)
Time per request: 804.513 [ms] (mean)

Tasks: 63 total, 5 running, 58 sleeping, 0 stopped, 0 zombie
Cpu(s): 17.5%us, 6.2%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 76.3%st
Mem: 244856k total, 159756k used, 85100k free, 3812k buffers
Swap: 262136k total, 0k used, 262136k free, 76660k cached


The number of requests per second stays the same, but the generation time has more than doubled: we have hit the CPU limit.

And finally, the Habr effect, or something close to it :-)

[root@test ~]# ab -n 500 -c 50 http://testsite.ru/


Failed requests: 0
Requests per second: 6.45 [#/sec] (mean)
Time per request: 7749.972 [ms] (mean)

Tasks: 63 total, 6 running, 57 sleeping, 0 stopped, 0 zombie
Cpu(s): 19.1%us, 5.3%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 75.6%st
Mem: 244856k total, 162740k used, 82116k free, 3884k buffers
Swap: 262136k total, 0k used, 262136k free, 76672k cached


Again, requests per second is relatively stable, but the generation time has become downright sad. At the same time, Failed requests is zero, which means that everything still works, however slowly.
As for memory: at this point Swap: 0k used, 82116k free, 76672k cached - consumption has barely grown, and in principle some of the limits could be raised, but I do not think it is worth tuning further on an empty site. Later it is worth rerunning the tests on the finished site and adjusting the settings based on the results.

12. Installing nginx as a frontend.

Why is this necessary.
The main problem lies in how Apache handles incoming connections. For each incoming connection a new process is created (or one of the idle running ones is taken) and the connection is handed to it for servicing. Until the connection is closed, that process deals only with it.
In principle everything looks fine as long as we have plenty of RAM and/or very fast clients (ab run from localhost is exactly that), but things get much sadder if the client is on a slow channel or simply in no hurry. In that case, for the whole time the response is being delivered, the client effectively removes one of our few processes from service.
So, in theory, with the server on a 100 Mbps channel, one persistent dial-up client can produce something like a DoS: by opening several connections it ties up practically all of our Apache processes, of which, given our memory, there are very few.
This problem is solved by putting some lightweight HTTP server in front as a frontend. The frontend accepts all incoming connections, passes each request to Apache, quickly receives the response (thereby freeing the Apache process for new requests), and then, unhurriedly and without wasting extra resources, delivers the response to the client who asked for it.
As an additional bonus, the frontend can serve static content (images, CSS and so on) by itself, offloading the heavyweight Apache.

[root@test ~]# rpm -ihv http://centos.alt.ru/pub/repository/centos/6/x86_64/centalt-release-6-1.noarch.rpm
[root@test ~]# yum install mod_realip2 nginx-stable


So that Apache and our scripts see the client's real IP address in requests rather than the frontend's address, we installed mod_realip2.
Edit /etc/httpd/conf.d/mod_realip2.conf and uncomment:
RealIP On
RealIPProxy 127.0.0.1
RealIPHeader X-Real-IP


Edit httpd.conf and the files in /etc/httpd/conf.d/, changing every directive from port 80 to port 8080.
Only three kinds of directives need changing:
Listen 127.0.0.1:8080
NameVirtualHost *:8080
<VirtualHost *:8080>


edit /etc/nginx/nginx.conf
user apache;
worker_processes 2;


I run nginx as the apache user, since we originally granted all permissions with Apache in mind.
It is also useful to comment out the access_log directive in nginx.conf to avoid double logging.
It is better not to touch error_log: Apache errors and nginx errors are different things, after all.

In the server section, edit the listen directive and set:
listen 80 default;


change:
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}

to
location / {
proxy_pass http://127.0.0.1:8080/;
}


In the /etc/nginx/conf.d/ directory create the proxy.conf file with the following content
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;


restart apache and nginx
service httpd restart
service nginx restart

and check if everything works.

That is basically it. nginx is now the frontend: it accepts all incoming connections and proxies them to Apache, which processes them and quickly hands the response back to nginx, freeing up the process for new requests.

The next step in raising performance and reducing resource consumption is serving static content directly from nginx.

In addition to the Apache virtual hosts, we now need to set up nginx virtual hosts and specify what nginx should serve itself.
To do this, in the /etc/nginx/conf.d/ directory create a file named after the site, with a .conf extension, and the following contents:
server {
listen 80;
server_name testsite.ru www.testsite.ru;
location / {
proxy_pass http://127.0.0.1:8080/;
}

location ~ /\.ht {
deny all;
}

location /sites/default/files {
root /home/testsite.ru/html;
access_log /home/testsite.ru/log/access_static.log combined;
}
}


In this example, for a site on the Drupal CMS, the static content of the /sites/default/files directory is served by nginx, while for everything else we still go to Apache.
Another option is to replace the location directive with:
location ~ \.(jpg|gif|png|css|js|ico)$ {
root /home/testsite.ru/html;
access_log /home/testsite.ru/log/access_static.log combined;
}

In this case, all files with the listed extensions will be served by nginx. But this variant has a small minus: nginx cannot process .htaccess files, so if any of that content is protected by .htaccess rules, you should refrain from using this option.

It is also worth noting that with this setup we get two logs per site: separately, a log of the requests Apache handled, and separately, a log of the content served by nginx.
Alternatively, move the access_log directive from the location section to the server section and disable access_log in the Apache virtual host. Then only nginx will write the log.
But for seeing "how it works", the double log can be interesting: it shows at once which share of the load falls on which side.
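The single-log variant would look roughly like this (a sketch; paths as in the example above, and remember to disable access_log in the Apache virtual host as well):

```nginx
server {
    listen 80;
    server_name testsite.ru www.testsite.ru;
    # one combined log for both proxied and static requests
    access_log /home/testsite.ru/log/access.log combined;
    location / {
        proxy_pass http://127.0.0.1:8080/;
    }
    location /sites/default/files {
        root /home/testsite.ru/html;
    }
}
```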

For further optimization, it is worthwhile to read the manuals for optimizing specific components and to do it with an eye to the current situation.

UPD: Fixed some typos
UPD: Fixed connection swap, thanks AngryAnonymous
UPD: Added description of the installation and configuration of nginx, thanks to masterbo for the kick in the right direction.
Another version of the backup script from odmin4eg : habrahabr.ru/blogs/s_admin/132302/#comment_4391784

Awaiting your criticism.

Source: https://habr.com/ru/post/132302/

