Arguing that a site should always be available is a truism: 100% availability, while often a stated requirement, is in practice an unattainable ideal. There are many solutions on the market that promise maximum uptime or offer ways to increase it, but applying them not only fails to help in many cases; sometimes it actually increases the risks and reduces project availability. In this article we will walk through the classic mistakes we encounter over and over. Most of these problems are elementary, yet people keep making them.
Prerequisite: before trying to squeeze maximum uptime out of a project, you should weigh the cost of redundancy against the cost of downtime. This usually matters most for companies whose customers' operations depend on theirs: B2B solutions, API services, delivery services. Even a few minutes of unavailability will, at minimum, flood the call center with complaints from dissatisfied customers. For a company of a different type, say a small online store or a business whose clients work from 9 to 18, a few hours of unavailability may well be cheaper than a full-fledged backup site.
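This comparison can be reduced to rough arithmetic. A minimal sketch, with entirely made-up numbers standing in for your own estimates of downtime cost, expected hours of downtime, and the price of a second site:

```python
# Rough break-even estimate: is a standby site worth it?
# All numbers are illustrative assumptions, not recommendations.

downtime_cost_per_hour = 500.0        # lost revenue + support load
expected_downtime_hours_per_year = 8  # without a reserve site
reserve_cost_per_month = 300.0        # second site: servers, traffic, maintenance

annual_downtime_cost = downtime_cost_per_hour * expected_downtime_hours_per_year
annual_reserve_cost = reserve_cost_per_month * 12

print(f"Expected downtime loss per year: {annual_downtime_cost:.0f}")
print(f"Reserve site cost per year:      {annual_reserve_cost:.0f}")
print("Reserve pays off" if annual_downtime_cost > annual_reserve_cost
      else "Downtime is cheaper than the reserve")
```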
Cloud hosting marketing has firmly planted a mistaken idea in people's heads: cloud hosting is not tied to specific hardware, therefore cloud infrastructure cannot go down. The three day-long outages of Amazon Web Services, the recent cloud4y outage, and the loss of cloudmouse data have shown that keeping the data and the project itself in a single data center is a guaranteed way to get many hours of downtime with no easy way to bring the project up at another site. The personal data law creates additional problems here. We believe that any cloud provider has to live through several major outages before it learns how to prevent them (a lightning strike at Amazon, network configuration problems caused by human error, and so on). Western cloud providers have already been through this chain of disasters; many Russian platforms still have it ahead of them, and this must be taken into account.
The situation is similar with physical ("iron") data centers. We often see client configurations where several servers are reserved within the same site in case one of them suffers a hardware failure. In our experience, however, network problems, where several racks or the entire data center become unreachable, happen far more often than failures of individual servers, and this must also be taken into account.
The recommended AWS deployment scheme assumes the use of several availability zones by default in order to achieve maximum project uptime.
So we arrive at the banal conclusion that a backup site is needed to achieve maximum project uptime; however, to be able to switch to it, the data on it must match the production site. What matters here is not the initial creation of the reserve, which is a fairly simple and well-understood procedure, but the synchronization of all subsequent changes and the monitoring of that synchronization. First of all, this applies to the data that changes on production continuously.
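A minimal sketch of such a synchronization check, assuming the backup database is a MySQL replica and the `pymysql` package is available; the hostname, credentials, and threshold are placeholders:

```python
# Minimal replication-lag check for a MySQL replica on the backup site.
# Host, credentials and the threshold are placeholders.
import pymysql

LAG_THRESHOLD_SECONDS = 60  # alert if the reserve is more than a minute behind

conn = pymysql.connect(host="backup-db.example.internal",
                       user="monitor", password="...",
                       cursorclass=pymysql.cursors.DictCursor)
with conn.cursor() as cur:
    cur.execute("SHOW SLAVE STATUS")
    status = cur.fetchone()
conn.close()

if status is None:
    print("ALERT: replication is not configured on the backup site")
elif status["Seconds_Behind_Master"] is None:
    print("ALERT: replication is broken (IO/SQL thread stopped)")
elif status["Seconds_Behind_Master"] > LAG_THRESHOLD_SECONDS:
    print(f"ALERT: replica is {status['Seconds_Behind_Master']} s behind")
else:
    print("Replication lag is within the threshold")
```

The same idea applies to code, static files, and configuration: whatever the sync mechanism, its result should be checked continuously, not assumed.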
No monitoring, however good, can guarantee that the backup site will actually be ready when a switchover is really needed. In our experience, the first switchover to the reserve will itself cause an incident, and so will several more after it. In their reports, Stack Overflow say it took them about five switchovers to the reserve before they were confident it was fully ready to accept traffic after an incident. Therefore, any plan for increasing project uptime must include test switchovers to the reserve, with the expectation that such switchovers will cause incidents. Once the switchover procedure has been worked out and documented, you need to keep switching to the reserve regularly to make sure everything still works.
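A simple smoke test run against the reserve before and after each drill helps catch the most obvious breakage. A sketch, assuming the `requests` package; the reserve IP, the vhost name, and the list of critical paths are illustrative placeholders:

```python
# Smoke test for the reserve site before (and after) a test switchover.
# Reserve IP, Host header and critical paths are placeholders.
import requests

RESERVE_IP = "203.0.113.10"                 # backup site front-end
CRITICAL_PATHS = ["/", "/login", "/api/health", "/checkout"]
HOST_HEADER = {"Host": "www.example.com"}   # hit the real vhost without touching DNS

failures = []
for path in CRITICAL_PATHS:
    try:
        r = requests.get(f"https://{RESERVE_IP}{path}", headers=HOST_HEADER,
                         timeout=5, verify=False)  # reserves often run self-signed certs
        if r.status_code >= 500:
            failures.append((path, r.status_code))
    except requests.RequestException as exc:
        failures.append((path, str(exc)))

print("Reserve looks ready" if not failures else f"Reserve NOT ready: {failures}")
```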
If the production and backup sites are located within the same hosting company, it is quite possible that in an outage both of your sites will go down at once. Several major AWS incidents affected all availability zones of a region at the same time, and Selectel went down simultaneously in its St. Petersburg and Moscow data centers. Providers may claim complete isolation, but the cloud4y outage that made Bitrix24 completely unavailable shows that the risks remain significant even so. In our view, the ideal configuration is one backup site within the same hosting company (so that standard failover tools such as VRRP can be used) plus a secondary backup site at a different hosting company.
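Failover to the secondary site at another provider usually cannot rely on VRRP and comes down to repointing DNS. A minimal watchdog sketch, assuming the `requests` package; the health URL, the thresholds, and the `update_dns_to_secondary()` helper are hypothetical placeholders for your own monitoring vantage point and DNS provider API:

```python
# Watchdog sketch: detect sustained unavailability of the primary site and
# trigger failover to the secondary hosting provider.
import time
import requests

PRIMARY_URL = "https://www.example.com/health"
FAILURES_BEFORE_FAILOVER = 5
CHECK_INTERVAL_SECONDS = 30

def primary_is_up() -> bool:
    try:
        return requests.get(PRIMARY_URL, timeout=5).status_code < 500
    except requests.RequestException:
        return False

def update_dns_to_secondary() -> None:
    # Placeholder: call your DNS provider's API here and page a human.
    print("FAILOVER: repointing DNS to the secondary hosting provider")

consecutive_failures = 0
while True:
    if primary_is_up():
        consecutive_failures = 0
    else:
        consecutive_failures += 1
        if consecutive_failures >= FAILURES_BEFORE_FAILOVER:
            update_dns_to_secondary()
            break
    time.sleep(CHECK_INTERVAL_SECONDS)
```

Whether such a switch is automatic or requires a human decision is a separate trade-off; automatic DNS failover can itself cause an outage if the check flaps.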
Even a tested backup site plus a secondary site in another data center does not guarantee that the reserve can quickly take over the production load. This follows from the very nature of a reserve: a new version of the code that creates a fatal load on the production environment will create exactly the same load on the backup site, and the project will become completely unavailable. The simple answer is a mechanism for rolling back to the previous version; in the business race for releases this is not always possible, and then you start thinking about yet another backup site running the previous version. Backups deserve a separate mention: accidental deletion of data on the main site will propagate to the backup site as well, so you should consider delayed replication (by 15 minutes or an hour) in order to be able to switch to a copy of the database on which the fatal operation has not yet been applied.
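In MySQL 5.6+ this "undo window" can be set up with the built-in `MASTER_DELAY` replica option. A sketch of configuring it from Python, assuming the `pymysql` package; the host, credentials, and delay value are placeholders:

```python
# Sketch: configure a delayed MySQL replica on the backup site, so an
# accidental DELETE on production is not applied to the reserve for an hour.
import pymysql

DELAY_SECONDS = 3600  # one hour of "undo window"

conn = pymysql.connect(host="backup-db.example.internal",
                       user="repl_admin", password="...")
with conn.cursor() as cur:
    cur.execute("STOP SLAVE")
    cur.execute(f"CHANGE MASTER TO MASTER_DELAY = {DELAY_SECONDS}")
    cur.execute("START SLAVE")
conn.close()
print(f"Backup replica now applies changes with a {DELAY_SECONDS} s delay")
```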
But even this is not enough. A huge number of projects now rely on external services to provide their own. Most use SMS for two-factor authentication, online stores calculate delivery times via courier services, payments are accepted through third-party payment gateways, and if these services go down, it does not matter whether you have a reserve or not: the project will still be unavailable. We rarely see redundancy for external services, and yet those services are projects just like yours, which may have problems with their own backup site or no reserve at all. If an external service is unavailable, serving your customers becomes impossible too. We recommend duplicating all critical external systems, monitoring their availability, and having a plan for switching them over in case of an incident.
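Such duplication can often be expressed directly in code as a fallback chain. A sketch for an SMS gateway, assuming the `requests` package; both provider URLs and the payload format are hypothetical placeholders:

```python
# Sketch: duplicate a critical external dependency (SMS gateway) with a fallback.
# Provider URLs and payload format are hypothetical placeholders.
import requests

PROVIDERS = [
    ("primary-sms", "https://api.primary-sms.example/send"),
    ("backup-sms",  "https://api.backup-sms.example/send"),
]

def send_sms(phone: str, text: str) -> bool:
    for name, url in PROVIDERS:
        try:
            r = requests.post(url, json={"to": phone, "text": text}, timeout=5)
            if r.status_code == 200:
                return True
            print(f"{name} rejected the message: HTTP {r.status_code}")
        except requests.RequestException as exc:
            print(f"{name} is unavailable: {exc}")
    return False  # both providers failed; alert and degrade gracefully

if not send_sms("+10000000000", "Your confirmation code is 123456"):
    print("ALERT: no SMS provider available, 2FA is degraded")
```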
This is not everything, but it covers the basics. We discuss all of this in more detail at uptime.community meetups; the next one will be in October, and in the meantime you can chat in the Telegram group.
Source: https://habr.com/ru/post/417323/