Actually, if we turn to the vendor's site, we'll see the following:
"1C-Bitrix: Web Environment" - Linux is used to quickly and easily install all the software required to run 1C-Bitrix products and solutions on the CentOS 6 (i386, x86_64) and CentOS 7 (x86_64) Linux platforms. It must be installed on a "clean" CentOS, without a web server already installed.
The "1C-Bitrix: Web Environment" - Linux package includes: mysql-server, httpd, php, nginx, nodejs push-server, memcached, stunnel, catdoc, xpdf, munin, nagios, sphinx.
In fact, this software package is a pre-configured LAMP stack plus a console control panel for the server and additional packages required by some 1C-Bitrix modules. All of the software is tuned to the specifics of 1C-Bitrix.
So, we had 2 application servers (let's call them app01 and app02), 2 database servers (db01, db02), and 1 caching server (cache01, as you might guess); or rather, the plan was to build the cluster structure along those lines. That gave us 5 servers with the latest CentOS 7 installed on each (unfortunately, Debian, Ubuntu, Fedora, RHEL, and others are not supported); apart from the OS, nothing was installed on them.
Since we are assembling a cluster, we need to decide which server will be the main one. Due to how requests to the application are balanced, one of the servers running httpd will also run nginx: it will receive all incoming requests and then forward each one to one of the available web nodes. We chose app01 as the main server.
The rest of the work went step by step, as described below.
The installation does not require any supernatural knowledge of Linux or system administration. We connect to each server via SSH and execute the following commands:
cd ~
wget http://repos.1c-bitrix.ru/yum/bitrix-env.sh
chmod +x bitrix-env.sh
./bitrix-env.sh
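If there are several servers, the same commands can also be pushed to all of them from a single workstation. Below is a minimal sketch assuming our example hostnames and root SSH access; the installer asks questions interactively, which is why a TTY is kept attached. This loop is a convenience, not part of the vendor's instructions.

# Run the environment installer on every server in the pool (example hostnames).
# ssh -t keeps a TTY attached so the installer's interactive prompts still work.
for host in app01 app02 db01 db02 cache01; do
    ssh -t root@"$host" 'cd ~ && wget http://repos.1c-bitrix.ru/yum/bitrix-env.sh && chmod +x bitrix-env.sh && ./bitrix-env.sh'
done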
Since we are going to use this whole zoo as a cluster, we can configure the servers through the environment menu on app01. To do this, connect to the server via SSH and run /root/menu.sh. On the first launch you must set a password for the bitrix user (the same has to be done on every server where the site is planned to run):
This is the user the application will run under. After that we see a screen offering to create a server pool:
Here we need to select the first menu item. During creation, the environment asks for the name of the current server; we specify app01:
After the pool is created, we are returned to the first screen of the environment, but this time there are more items available:
At this point the environment is essentially ready to use. If we did not need a cluster, we could stop here, but we will go further.
Now we need to add all the remaining servers to the newly created pool. To do this, select the first menu item; the following options appear:
Again, select the first menu item and specify the IP of the new server, its name in the cluster (the same app02, db01, db02, cache01), and the root password of the server being added. In this way we add each of the servers in turn. Once all the servers are registered in the cluster, the main screen of the environment should look something like this:
We will leave assigning server roles for the next step.
Initially our application ran on a single server: the cluster scaling and management modules were disabled and the database was not replicated. The move itself is nothing supernatural: we packed the bitrix and upload folders into archives and took a database dump.
Once the archives and the dump are ready, go to app01, pull the project code via git into the default site folder in bitrixenv, /home/bitrix/www, bring over the archives and the database dump, unpack the archives, load the dump into the database on app01, and transfer the cron entries.
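For reference, the whole transfer can be expressed in shell terms roughly as follows. This is only a sketch: the old server name, source paths, database names, and the repository URL are placeholders rather than values from the article.

# On the old single server: pack the code and user files, dump the database, save cron.
cd /var/www/old-site                          # placeholder path of the old document root
tar czf bitrix.tar.gz bitrix
tar czf upload.tar.gz upload
mysqldump -u root -p old_db > old_db.sql      # "old_db" is a placeholder database name
crontab -l -u bitrix > crontab.txt

# On app01: pull the code, bring over the archives and the dump, unpack and restore.
cd /home/bitrix/www
git clone <project-repo-url> .                # assumes the default site files were cleared first
scp oldserver:/var/www/old-site/{bitrix.tar.gz,upload.tar.gz,old_db.sql,crontab.txt} .
tar xzf bitrix.tar.gz
tar xzf upload.tar.gz
mysql -u root -p sitemanager < old_db.sql     # use the database name created by bitrixenv
crontab -u bitrix crontab.txt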
If your application uses additional software, now is the time to install and configure it. In our case, supervisord and RabbitMQ were installed and configured, since the application relied on queues.
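As an illustration, a queue consumer under supervisord could be declared roughly like this; the program name, script path, and log locations are hypothetical and not taken from the project.

# Hypothetical supervisord unit for a queue worker (CentOS keeps drop-ins in /etc/supervisord.d).
cat > /etc/supervisord.d/queue-worker.ini <<'EOF'
[program:queue-worker]
command=/usr/bin/php /home/bitrix/www/local/queue/consumer.php
user=bitrix
autostart=true
autorestart=true
stdout_logfile=/var/log/queue-worker.out.log
stderr_logfile=/var/log/queue-worker.err.log
EOF

# Pick up the new unit without restarting supervisord itself.
supervisorctl reread
supervisorctl update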
There is a small but important nuance. When moving a site to a cluster, the scale and cluster modules must be disabled on the site, and the pool servers in the target cluster environment must not yet be involved. The cluster servers should be brought into operation only after the site has been moved and deployed on the main server; otherwise the site will not be able to correctly detect the cluster servers.
After the application was moved to app01 and we verified that it worked correctly, it was time for the most interesting part: scaling. First, the scale and cluster modules need to be installed in the 1C-Bitrix admin panel. Nothing special is required during installation; everything happens on its own.
Once the modules are installed, connect via SSH to the main server, that is app01, and open the bitrixenv menu (the same /root/menu.sh). Before proceeding with further configuration, one important point needs to be understood: bitrixenv operates with the concept of a "server role". The name of a server in the pool does not matter; since every server contains all the software included in the bitrixenv package, we can always assign one or more roles to it, remove them, or swap them for others. The main roles are mgmt (the balancer, i.e. nginx), web (i.e. httpd/apache), mysql_master and mysql_slave (a database instance; the slave appears once replication is started), and memcached (a server running memcached). With the overall picture now clear, we decided to start with the memcached server. To do this, go to:
4. Configure memcached servers > 1. Configure memcached service
The web role is added to the second application server in a similar way:
8. Manage web nodes in the pool > 1. Create web role on server
Database replication is handled from the MySQL menu: a slave is created via
3. Configure MySQL servers > 4. Create MySQL slave
and the master can then be moved to a dedicated database server via
3. Configure MySQL servers > 5. Change MySQL master
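After the roles have been assigned, it does not hurt to check that the services actually respond. A couple of quick checks, using the hostnames from our example pool, might look like this:

# From a web node: make sure memcached on the cache node answers.
echo stats | nc -w 2 cache01 11211 | head -n 5

# On a slave database server: make sure the replication threads are running.
mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_IO_Running|Slave_SQL_Running'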
After the cluster is ready, there are a few application-level settings that allow for additional optimization, such as how sessions are stored and how local caches are used in the cluster (see the links at the end of the article).
In this article we have reviewed the sequence of actions required to configure a server cluster based on bitrixenv, as well as some possible pitfalls. Based on our experience with bitrixenv and the cluster built on it, this approach has both pros and cons.
www.1c-bitrix.ru/products/env
dev.1c-bitrix.ru/community/blogs/rns/hidden-features-of-work-with-sessions.php
dev.1c-bitrix.ru/community/blogs/rns/the-use-of-local-caches-in-the-cluster.php
dev.1c-bitrix.ru/learning/course/index.php?COURSE_ID=32&INDEX=Y
dev.1c-bitrix.ru/learning/course/index.php?COURSE_ID=37&INDEX=Y
Source: https://habr.com/ru/post/430080/