
Microservices as an architecture: squeezing out the maximum

Not long ago I had the opportunity to attend a conference where one of the talks was devoted to test automation using modern microservice architecture practices.


One of the slides of that talk listed the complexity of testing among the downsides of microservice architecture. Since most sources barely mention testing of microservice applications, I wanted to explore the capabilities of microservice architecture (MSA), figure out what should be taken into account at the design stage of such an application, and how to make life as easy as possible for yourself and your colleagues.


Microservices. The beginning.


Having dug around the Internet, I found plenty of information on MSA (MicroService Architecture) and its elder brother SOA (Service Oriented Architecture), including on Habr, so I will not dwell in detail on what it is. Briefly, the basic principles of MSA are:



Hence a number of advantages:



And disadvantages:



Now let's try to figure out how best to use the capabilities of the microservice architecture for effective testing.


“What do we need?”


When designing a microservice application, the question of how microservices interact with each other comes to the fore. Since each service can be written in its own programming language and use different technologies, there is a need for an additional module responsible for communication between services.


If the application is relatively small, you can get by with plain REST requests sent directly to the services involved. This greatly simplifies the architecture of the application as a whole, but leads to significant overhead for transferring information (never use synchronous requests if you want your MSA application to work fast enough!). For a reasonably complex application, you cannot do without a dedicated manager.


To increase the stability of even a simple application, it is better to implement a message manager. Such a manager receives an asynchronous request from one service and passes it on to another. It can be implemented on sockets, web sockets, or any other convenient technology; requests are best stored in queues. With this approach we get a simple tool for monitoring how services interact with each other, even if at first glance we don't need it.


Introducing a message manager implies that its interface must be standardized and supported by all services of the product: using different messaging interfaces for different services would needlessly complicate the code. A single interface also implies that its design must be ready before coding begins.
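
To make this more tangible, here is a minimal in-process sketch of such a manager in Python. The envelope fields, service names and payloads are invented for illustration; a real manager would, of course, live behind sockets, web sockets or a dedicated queue server.

```python
import queue
import uuid
from dataclasses import dataclass, field

@dataclass
class Message:
    """Hypothetical standard envelope shared by all services."""
    sender: str        # service that issued the request
    recipient: str     # service that should handle it
    payload: dict      # request body, agreed per business operation
    correlation_id: str = field(default_factory=lambda: str(uuid.uuid4()))

class MessageManager:
    """Routes asynchronous messages between services via per-service queues."""

    def __init__(self):
        self._queues = {}  # recipient name -> queue of pending messages

    def register(self, service_name):
        self._queues[service_name] = queue.Queue()

    def send(self, message):
        # The sender does not wait for a reply: the message is simply enqueued.
        self._queues[message.recipient].put(message)

    def receive(self, service_name, timeout=1.0):
        # Each service polls its own queue for incoming requests.
        return self._queues[service_name].get(timeout=timeout)

# Example: the "login" service asks the "reports" service for a sales report.
broker = MessageManager()
broker.register("reports")
broker.send(Message(sender="login", recipient="reports",
                    payload={"action": "sales_report", "period": "2016-06"}))
print(broker.receive("reports"))
```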


"We shared an orange ..."


Now let's consider the fundamental idea of MSA: the interaction of relatively independent services with each other.


Among the benefits it is worth noting the ability to replace one service with another without having to reinstall the entire application. Among the constraints: services should be (a) fairly small and (b) fairly autonomous.


The solution here is splitting the code into services correctly. The division should not follow technical layers, as in a monolithic application (UI, network, back-end computation, database), but business logic, for example: processing a login request, compiling a sales report, plotting data from the database. Such functionally complete modules become truly independent, and their purpose becomes obvious. In addition, the overall functionality of the application can be easily and painlessly extended or modified.


How to test it?


If testing a monolithic application is more or less clear, what should we do here? A bunch of services, each of which can call many others, with data flying between services in seemingly random ways... A nightmare! But is it?


If we did everything right, then we have an application that:



From the point of view of manual testing, working with each service individually is a huge headache. But what an opportunity for automation!


First of all, let's attach a logger to our message manager to get a clear and understandable log for each service; the interaction of services with each other also becomes transparent. This way we can quickly identify the problematic service and, if necessary, roll it back. In the case of a web application, you can also implement monitoring that reports any problems in real time.
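
Building on the MessageManager sketch above, attaching a logger could look roughly like this (the log format and class name are, again, just an assumption):

```python
import logging

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

class LoggingMessageManager(MessageManager):
    """The same manager, but every routed message goes into a shared log."""

    def send(self, message):
        # Who talked to whom, with which correlation id and payload.
        logging.info("%s -> %s [%s] %s",
                     message.sender, message.recipient,
                     message.correlation_id, message.payload)
        super().send(message)
```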


Since the message interface is standardized, we do not need to adapt to each service separately: it is enough to use a set of known "request-response" pairs, taken, for example, from the same database. And this is good old DDT (Data Driven Testing, not to be confused with the rock band and/or the pesticide!), which gives us remarkable scalability and performance.
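
Such a data-driven check could be sketched with pytest roughly as follows; the pairs and the call_service stub are invented for illustration and would be replaced by real data and the real transport on the test stand.

```python
import pytest

# Known "request-response" pairs; in a real project they would be pulled
# from the same database the services use.
PAIRS = [
    ({"action": "login", "user": "alice", "password": "secret"}, {"status": "ok"}),
    ({"action": "login", "user": "alice", "password": "wrong"},  {"status": "denied"}),
]

def call_service(name, payload):
    # Stub standing in for "send the request through the message manager
    # and wait for the reply"; replace with the real transport.
    return {"status": "ok" if payload.get("password") == "secret" else "denied"}

@pytest.mark.parametrize("request_payload,expected", PAIRS)
def test_auth_service_known_pairs(request_payload, expected):
    assert call_service("auth", request_payload) == expected
```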


By design, each of our services is a separate, functionally complete unit, just like a function or method in a monolithic application. So it is logical to write a set of "unit" tests for each service. In quotes, because we are testing not methods and functions but services with somewhat more complex functionality. And again, there is no need to emulate user actions: it is enough to form a valid REST request. Once this is done, we can say that each service has its own acceptance tests. Moreover, DDT suggests itself once more: one test is applied to different services, only the input/output data sets change.
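
Such a service-level acceptance test might be sketched like this; the service URLs, endpoints and data sets are assumptions made up for the example, not part of any real project:

```python
import pytest
import requests

# Assumed base URLs of the services on the test stand.
SERVICES = {
    "auth":    "http://test-stand.local:8001",
    "reports": "http://test-stand.local:8002",
}

# (service, endpoint, request body, expected status, expected JSON) -- illustrative data.
CASES = [
    ("auth",    "/login",        {"user": "alice", "password": "secret"}, 200, {"status": "ok"}),
    ("reports", "/sales/report", {"period": "2016-06"},                   200, {"rows": 42}),
]

@pytest.mark.parametrize("service,endpoint,body,status,expected", CASES)
def test_service_acceptance(service, endpoint, body, status, expected):
    # One generic test body; only the target service and the data set change.
    response = requests.post(SERVICES[service] + endpoint, json=body, timeout=5)
    assert response.status_code == status
    assert response.json() == expected
```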


The test environment


So we have very quickly accumulated an impressive number of tests that need to be run somewhere. Naturally, running them all on a single server would take quite a long time, which does not suit us at all.


For web applications the solution is obvious: you can deploy a separate pre-configured environment for each run. This will not reduce the total load, but it will isolate the services under test from one another. If the run takes place in a controlled environment, where the only source of new bugs is the service under test, the set of tests to run can be significantly reduced. This is especially important at the development stage, when a developer gets the chance to test his functionality in interaction with other services without being distracted by running the full test suite on his machine.
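
Assuming the environment is described by, say, a Docker Compose file, a per-run deployment helper could be sketched like this (the compose file and the project naming scheme are assumptions):

```python
import subprocess
import uuid

def deploy_stand(compose_file="docker-compose.yml"):
    """Bring up an isolated copy of the test environment and return its project name."""
    project = f"stand-{uuid.uuid4().hex[:8]}"   # unique name keeps parallel runs isolated
    subprocess.run(
        ["docker", "compose", "-p", project, "-f", compose_file, "up", "-d"],
        check=True,
    )
    return project

def destroy_stand(project, compose_file="docker-compose.yml"):
    """Tear the environment down once the test run has finished."""
    subprocess.run(
        ["docker", "compose", "-p", project, "-f", compose_file, "down", "-v"],
        check=True,
    )
```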


Full integration testing, meanwhile, can be launched, for example, once a day or whenever a sufficiently large number of changes have accumulated in the services.


Local applications are tested in the same way, but on separate virtual machines; cloud services are very convenient for this. To reduce the time needed to deploy the system, you can prepare in advance an operating system image that is already configured and has the required set of tools pre-installed.


Conclusions


MSA is a very interesting and flexible architecture for both development and testing. With the right balance of simplicity and versatility, and a clear understanding of the application's structure, you can get good results with minimal effort.


However, if you skew too far in one direction or the other, you can sink into a thicket of hard-to-maintain code, lose all the benefits that MSA provides, and degrade the overall performance of the application.


It is important to understand that successful and efficient test automation for MSA applications requires clear and close cooperation between the development teams and the test automation engineers.


What to read:


Microservices
Advantages and disadvantages of microservice architecture
Microservices. How to do and when to apply?



Source: https://habr.com/ru/post/303778/

