
Experience of transferring the Soyuz company's web services to a scalable virtual complex

In early March we reached an agreement in principle with representatives of the Soyuz company on virtualizing its corporate IT infrastructure. Put simply, with our help Soyuz outsources its IT infrastructure, places its own equipment in the Oversun-Mercury data center and carries out a number of other measures, which results in cost savings and plenty of new development opportunities.

This post is dedicated to how we transferred the company's web services to the Scalable Virtual Complex (MVK) and to the technological features of the project.



Many more steps are planned, including laying an optical “last mile” from the data center to the company's office. But those are plans for the future; for now we will describe the quite successful transfer of several of the company's websites, effectively united by a single storefront, and the beginning of the second stage: moving the sizeable database to the new capacity.

Some of the illustrations in this post are slides from the presentation we gave at the round table "IT Outsourcing in Russia", organized by CNews and held in mid-March.


Initially, the Soyuz sites, which carry different loads, from informing visitors to sales, were combined into a kind of ring and hosted on the company's own equipment located right in the office. In principle, a lot had been done to keep everything running stably under any load: a relatively wide Internet channel was set up, and the servers were combined into a distributed cluster. Still, during advertising campaigns, with the corresponding influx of visitors to the sites, failures occurred anyway. In some cases it was hard even to determine exactly what was failing and where the bottleneck was. The company came to recognize the need to move its services to third-party capacity, and we had just launched the Scalable Virtual Complex (MVK) in trial operation, so Soyuz agreed to organize a joint project with us.

For the sites to work under any load, a separate load-balancing scheme had to be designed, which eventually put everything in its place, and the sites themselves, of course, had to be moved to the MVK, which we have already described here.


The MVK is implemented on an HP BladeSystem c7000 blade chassis and an HP LeftHand P4000 storage system.

Web server operation diagram




HTTP requests from users are handled by a network balancer consisting of nodes that redirect the requests to the backend servers. All changes made on the test site become available on the main site.

The domain resolves to two IP addresses, and each IP address is configured on its own balancer. The balancers distribute traffic between the frontend servers. Balancing uses the Weighted Least-Connection method: each new connection is routed to the node with the fewest active connections, taking the node weights into account.
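As a rough illustration (not the production balancer code, which is handled by the software listed further below), the Weighted Least-Connection choice can be sketched in a few lines of Python; the node names, weights and connection counts are invented for the example:

```python
# Conceptual sketch of the Weighted Least-Connection rule used by the balancers.
# This is not the real balancer code; node names, weights and connection counts
# are invented for the example.

from dataclasses import dataclass


@dataclass
class Node:
    name: str
    weight: int        # relative capacity assigned to the frontend
    active_conns: int  # connections currently open to it


def pick_node(nodes: list[Node]) -> Node:
    # Route the new connection to the node with the lowest
    # connections-to-weight ratio.
    return min(nodes, key=lambda n: n.active_conns / n.weight)


frontends = [
    Node("frontend-1", weight=2, active_conns=18),  # 18 / 2 = 9
    Node("frontend-2", weight=1, active_conns=10),  # 10 / 1 = 10
]
print(pick_node(frontends).name)  # frontend-1 wins despite having more connections
```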

The yellow and green arrows show possible paths of an HTTP client request to the main servers through the balancers.

The blue arrow shows the operation of the balancer itself and the distribution of requests between its nodes.

A few details. The software stack includes keepalived, vsftpd, nginx, Apache and PHP 5.2.6. Nginx is used to cache data and serve static objects. Apache is used for the backend servers, mainly because it has a module for working with Sybase. Protection against node failures is provided by keepalived.
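keepalived, nginx and Apache are configured rather than programmed, but the request path they implement can be sketched roughly as follows. This is only a conceptual illustration, not the real configuration; the document root, extension list and in-memory cache are assumptions made for the example:

```python
# Conceptual sketch of the request flow: static objects and cached responses
# are returned by the nginx layer, everything else is proxied to Apache/PHP.
# The paths, extensions and in-memory "cache" are invented for the example.

from pathlib import Path
from typing import Callable

DOCROOT = Path("/var/www/static")                      # hypothetical document root
STATIC_EXTENSIONS = (".css", ".js", ".png", ".jpg", ".gif")
cache: dict[str, bytes] = {}                           # stands in for the proxy cache


def handle_request(path: str, backend: Callable[[str], bytes]) -> bytes:
    if path.endswith(STATIC_EXTENSIONS):
        return (DOCROOT / path.lstrip("/")).read_bytes()  # served directly as a static object
    if path in cache:
        return cache[path]                                # cached dynamic response
    response = backend(path)                              # proxied to Apache + PHP (and Sybase)
    cache[path] = response
    return response
```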

Direct work with the site relies on the "network RAID" technology DRBD (Distributed Replicated Block Device). When pictures and other content are uploaded to the site, they are synchronized to the two servers, since the customer's business process does not allow the content on the nodes to differ.
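DRBD replicates data at the block-device level, so the application does not copy files itself; the sketch below only illustrates the guarantee the setup relies on, namely that an upload counts as stored only once both nodes hold an identical copy. The mount points and node list are invented for the example:

```python
# Illustration of the replication guarantee: an uploaded object is written
# to both nodes before it is acknowledged, so their content never diverges.
# In reality DRBD does this at the block level; the mounts here are invented.

from pathlib import Path

NODES = [Path("/mnt/node1/uploads"), Path("/mnt/node2/uploads")]  # hypothetical mount points


def store_upload(filename: str, data: bytes) -> None:
    for root in NODES:
        root.mkdir(parents=True, exist_ok=True)
        (root / filename).write_bytes(data)  # both copies are kept identical
```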


To increase fault tolerance and spread the nodes as widely as possible across the system, we used VMware DRS balancing technology. If one node of the high-availability (HA) cluster stops working, the balancer and the backend server located on another node of the cluster continue to serve the customer's site.

In the course of the project we gained considerable system-integration experience and tailored the services we provide to the client's requirements. It is worth noting that the transfer of the web services to the new capacity was made completely transparent for the engineers and invisible to users, with zero downtime. The positive reaction was not long in coming.

A comment from Roman Shtembulsky, head of Internet projects at the Soyuz concern.



“Before working with the Oversun-Mercury data center, we already had quite a lot of our own experience in the technical support of information systems. At some point we decided, as they say, just out of curiosity, to find out what solutions others might offer us. For some reason, more often than not, we received unconvincing retellings of the standard offers posted on companies' websites. We expected that our designs would not be cut down by the sharp edge of a standardized template, yet would not dissolve into a cloud of abstract, fashionable flexibility either. At Oversun-Mercury they asked us precise questions, understood us precisely, offered a precise solution and carried out a precise implementation. In the end we received even more than we had understood and planned ourselves, and it was done with quality and professionalism. This is not the approach so beloved of commercial departments, whose customers end up getting a lot without understanding how it all works or why they need it. Oversun-Mercury works differently.

When transferring your own system to the MVK, it is better to immediately let go of old notions such as the idea that your site "lives" somewhere on a particular server, limited by the physical size of a case holding processors, memory, hard drives and so on. There is also no need to multiply and divide everything by 8, 16, 128, 2048 and so forth. For example, in the MVK you can easily get 19 gigahertz of processor power or 17 gigabytes of memory. In essence, all of this can be thought of as an electronic broth in which the system simmers away. You can taste it, cool it down or turn up the heat and, if necessary, add various electronic spices to taste: to a few gigahertz of processor time add a pinch of gigabytes of RAM, slice in some terabytes of hard drives and mix everything thoroughly with PHP, SQL, XML and HTML.”

Source: https://habr.com/ru/post/94062/

