
How to build cloud (cluster) hosting for a couple of kopecks *

Three years ago I was handed an interesting task: assemble a platform that would combine several racks of servers into a single whole and dynamically distribute resources between sites written for the LAMP stack. Intervention in the sites' code had to be minimal, and ideally absent altogether.
At the same time, expensive solutions like a Cisco Content Switch or a fibre-attached disk shelf were off the table - there was no budget for them.
And, of course, the failure of any single server must not affect the operation of the platform.

Necessity is the mother of invention


First of all, the job needs to be split into subtasks. It is immediately clear that something has to be done about data synchronization, since a shared disk is not available. In addition, traffic has to be balanced, with some statistics kept on it. Finally, automating the provisioning of the necessary resources is a serious task in itself.

Let's start from the beginning (Captain Obvious, bear with me)


I had a choice of what to build the platform on: OpenVZ or XEN. Each has its pros and cons. OpenVZ has less overhead and works with files rather than block devices, but cannot run anything other than Linux distributions. XEN can run Windows, but is harder to work with. I went with OpenVZ, since it suited the problem better, but nothing stops you from choosing differently.

Then I divided each server into slots for VDS, one per core. The servers differed, so each one held between 2 and 16 virtual machines. On average that came to about 150 VMs per rack.
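
For a feel of how such per-core slots can be stamped out, here is a minimal automation sketch (not from the original article) built around vzctl; the OS template, container ID range and addressing scheme are my own assumptions.

```python
#!/usr/bin/env python
"""Carve a server into one OpenVZ container slot per CPU core.

Hypothetical sketch: the template name, CTID base and IP scheme
are assumptions, not values from the article.
"""
import multiprocessing
import subprocess

OS_TEMPLATE = "debian-6.0-x86_64"   # assumed template cached in /vz/template/cache
CTID_BASE = 100                     # assumed container ID range
IP_BASE = "10.0.0."                 # assumed private subnet

def provision_slot(slot):
    ctid = CTID_BASE + slot
    # Create the container from the cached OS template.
    subprocess.check_call(["vzctl", "create", str(ctid),
                           "--ostemplate", OS_TEMPLATE])
    # Limit the slot to a single CPU core, matching "one VDS per core".
    subprocess.check_call(["vzctl", "set", str(ctid),
                           "--cpus", "1", "--save"])
    # Give it an address so nginx on the balancer can reach it.
    subprocess.check_call(["vzctl", "set", str(ctid),
                           "--ipadd", IP_BASE + str(slot + 10), "--save"])

if __name__ == "__main__":
    for slot in range(multiprocessing.cpu_count()):
        provision_slot(slot)
```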

How to synchronize data


The next item is prompt creation of VDS on demand, plus protection against the failure of any single server. The solution turned out simple and elegant.
For each VDS an initial image is created as files on an LVM partition. This image is replicated to every server of the platform. As a result we get a backup of every project on every server (paranoiacs weep with joy), and creating a new VDS "on demand" boils down to taking a snapshot of the image and launching it as a VDS (a matter of a few seconds).
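
The "snapshot and launch" step could look roughly like the sketch below; the volume group, image names, snapshot size and paths are assumptions on my part, and the container's OpenVZ config is assumed to already exist on the target node.

```python
#!/usr/bin/env python
"""Spin up a VDS from the pre-replicated master image via an LVM snapshot.

Hypothetical sketch: volume group, LV and path names are assumptions.
"""
import subprocess

def start_vds_from_image(ctid, site):
    snap = "vds-%d" % ctid
    # Copy-on-write snapshot of the replicated image: takes seconds and
    # costs almost no disk space until the container starts writing.
    subprocess.check_call(["lvcreate", "--snapshot", "--size", "5G",
                           "--name", snap, "/dev/vg0/%s-image" % site])
    # Mount the snapshot where OpenVZ expects the container's private area.
    subprocess.check_call(["mount", "/dev/vg0/" + snap,
                           "/vz/private/%d" % ctid])
    # Start the container; from here on it behaves like any other VDS.
    # Assumes /etc/vz/conf/<ctid>.conf was distributed along with the image.
    subprocess.check_call(["vzctl", "start", str(ctid)])

if __name__ == "__main__":
    start_vds_from_image(101, "example-site")
```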

Base and API


If keeping the files consistent was simple, database synchronization was harder. At first I tried the classic approach - master-slave - and ran into the classic problem: the slave lagged behind the master (and thank you, MySQL, for single-threaded replication, thank you very much).
The next step was MySQL Proxy. As a system administrator I found it very convenient - set it up and forget it, just update the config when adding or removing VDS. But the developers had their own opinion. In particular, that it is easier to write a PHP class that synchronizes INSERT / UPDATE / DELETE queries than to learn Lua, without which MySQL Proxy is useless.
The result of their work was a so-called API that could find its neighbours with a broadcast request, catch itself up to the current state, and notify the neighbours of every change to the database.
Still, it is worth learning Lua and building a proper mode of operation in which all queries are synchronized with the neighbours.
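
The developers' class was written in PHP, but the idea is easy to sketch. Below is a hypothetical Python equivalent, assuming pymysql as the client library, a tiny responder on every VDS answering UDP broadcasts on an arbitrary port, and identical credentials everywhere. It deliberately ignores conflict resolution and ordering, which is exactly why the MySQL Proxy + Lua route would ultimately be cleaner.

```python
"""Hypothetical sketch of the write-synchronising API described above.

Assumptions (not from the article): pymysql as the client library,
UDP broadcast on port 5007 for neighbour discovery (each VDS is assumed
to run a small responder on that port), identical credentials everywhere.
"""
import socket
import pymysql

DISCOVERY_PORT = 5007                                       # assumed
DB = dict(user="app", password="secret", database="site")   # assumed

def discover_neighbours(timeout=1.0):
    """Broadcast a hello and collect the IPs of VDS copies that answer."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.settimeout(timeout)
    sock.sendto(b"who-is-there", ("255.255.255.255", DISCOVERY_PORT))
    neighbours = set()
    try:
        while True:
            _, (ip, _) = sock.recvfrom(64)
            neighbours.add(ip)
    except socket.timeout:
        pass
    return neighbours

def execute_everywhere(query, args=()):
    """Run a write query locally, then replay it on every neighbour."""
    for host in ["127.0.0.1"] + sorted(discover_neighbours()):
        conn = pymysql.connect(host=host, **DB)
        try:
            with conn.cursor() as cur:
                cur.execute(query, args)
            conn.commit()
        finally:
            conn.close()

# execute_everywhere("UPDATE counters SET hits = hits + 1 WHERE page = %s", ("/",))
```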

Glory to FreeBSD


The balancer is, one might say, the key element of the platform: if the balancing server goes down, everything else is pointless.
That is why I built a fault-tolerant balancer with CARP, choosing FreeBSD as the OS and nginx as the balancer itself.
Yes, an expensive NLB was replaced by two modest FreeBSD machines (marketers are furious).
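
To show how the balancer ties into the rest, here is a hypothetical helper for the nginx side: it rewrites a per-site upstream include and reloads nginx. The include directory, file naming and backend port are assumptions; the autoscaling script described below would call something like this.

```python
"""Hypothetical helper for the nginx side of the balancer.

Assumptions (not from the article): one include file per site under
/etc/nginx/upstreams/, backends listening on port 80, and `nginx -s reload`
available on the FreeBSD balancer.
"""
import subprocess

UPSTREAM_DIR = "/etc/nginx/upstreams"   # assumed layout

def write_upstream(site, backends):
    """Regenerate `upstream <site> { ... }` and reload nginx."""
    lines = ["upstream %s {" % site]
    lines += ["    server %s:80;" % ip for ip in backends]
    lines.append("}")
    with open("%s/%s.conf" % (UPSTREAM_DIR, site), "w") as f:
        f.write("\n".join(lines) + "\n")
    # Graceful reload: workers finish current requests, then pick up the new config.
    subprocess.check_call(["nginx", "-s", "reload"])

# write_upstream("example-site", ["10.0.0.11", "10.0.1.13"])
```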

And most importantly - how it worked


At platform start-up, one copy of each site was launched, and monit on the balancer made sure the primary copy was always running.
In addition, the Awstats log analyzer was installed on the balancer, presenting all the logs in a convenient form, and - most importantly - there was a script that polled every VDS over SNMP for its load.
Recall that I allocated one core per VDS, so a load average (LA) of 1 is the normal load for a VDS. If the LA reached 2 or higher, a script was launched that created a copy of the VDS on a random server and added it to the nginx upstream. And when the load on the extra VDS dropped below 1, it was all torn down again.
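
The polling and scaling logic could be sketched like this. It assumes net-snmp's snmpget on the balancer and the standard UCD-SNMP load OID, reuses the hypothetical write_upstream() helper from the balancer section, and uses clone_vds() / destroy_vds() as placeholders for the snapshot logic shown earlier.

```python
"""Hypothetical autoscaling loop in the spirit of the one described above.

Assumptions (not from the article): net-snmp's snmpget is installed on the
balancer, the VDS expose the standard UCD-SNMP load OID with community
"public", write_upstream() is the helper from the balancer sketch, and
clone_vds() / destroy_vds() are placeholders for the snapshot logic.
"""
import random
import subprocess

LOAD_OID = ".1.3.6.1.4.1.2021.10.1.3.1"   # UCD-SNMP-MIB::laLoad.1, 1-minute LA
SCALE_UP_AT = 2.0                          # thresholds from the article
SCALE_DOWN_AT = 1.0

def load_average(ip):
    """Ask a VDS for its 1-minute load average over SNMP."""
    out = subprocess.check_output(
        ["snmpget", "-v2c", "-c", "public", "-Oqv", ip, LOAD_OID])
    return float(out.strip())

def clone_vds(site, server):
    """Placeholder: snapshot the site's image on `server` and start it."""
    raise NotImplementedError

def destroy_vds(ip):
    """Placeholder: stop the extra container and drop its snapshot."""
    raise NotImplementedError

def rebalance(site, copies, spare_servers):
    """`copies` is the list of VDS IPs currently serving `site`, primary first."""
    if load_average(copies[0]) >= SCALE_UP_AT:
        # Primary is overloaded: clone onto a random server and publish it.
        copies.append(clone_vds(site, random.choice(spare_servers)))
        write_upstream(site, copies)          # helper from the balancer sketch
    for extra in list(copies[1:]):
        if load_average(extra) < SCALE_DOWN_AT:
            # Extra copy is idle again: take it out of rotation and remove it.
            copies.remove(extra)
            write_upstream(site, copies)
            destroy_vds(extra)

# Run rebalance() for every site once a minute, e.g. from cron on the balancer.
```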

I summarize


If you take a rack of servers and a switch that plays nicely with CARP, then to create a cloud hosting service you will need:

* To fill the rack, a sum with four zeros is enough. Compared with brand-name solutions, where a single rack costs a sum with six zeros, that really is a couple of kopecks.

Source: https://habr.com/ru/post/102528/

