
Microservice Trade-Offs

From the translator: since the publication of Martin Fowler's popular article "Microservices" (Habr translation), enough time has passed for the author to supplement his observations with fresh experience designing and developing microservices at various companies, and to share that experience in a new post, whose translation is presented here.
Many development teams have found the microservice architectural style to be a better approach than a monolithic architecture; other teams have found it to be an extra burden that undermines their productivity. Like any architectural style, microservices bring both benefits and costs. To make a sensible choice, you have to understand these properties and weigh them against your own specific conditions.
Microservices provide benefits…

Strong Module Boundaries: microservices reinforce modular structure, which is particularly important for larger development teams.

Independent Deployment: simple services are easier to deploy, and since they are autonomous, they are less likely to cause system-wide failures when something goes wrong.

Technology Diversity: with microservices you can mix multiple languages, frameworks, and data-storage technologies.

…but only at these costs:

Distribution: distributed systems are harder to program, since remote calls are slow and always at risk of failure.

Eventual Consistency: maintaining strong consistency is extremely difficult for a distributed system, which means everyone has to manage eventual consistency.

Operational Complexity: you need a mature operations team to manage lots of services, which are being redeployed regularly.

Strong module boundaries


The first big benefit of microservices is strong module boundaries. This is an important advantage, yet a strange one, because in theory there is no reason why microservices should have stronger module boundaries than a monolith.

So, what do I mean by strong module boundaries? I think most of us would agree that software should be broken down into modules: pieces of software that are decoupled from each other. You want modules so that when I need to make a change, I only have to understand a small part of the system, and I can find that small part quickly enough. A good modular structure is useful in any software, but its importance grows exponentially as the software grows in size. Perhaps more importantly, it grows as the team developing the software grows in size.

Advocates of microservices are quick to bring up Conway's law, which states that the structure of a software system mirrors the communication structure of the organization that builds it. With larger teams, particularly geographically distributed ones, it is important to structure the software so that it reflects the fact that communication between teams will be rarer and more formal than communication within a team. Microservices allow each team to look after relatively independent units with that kind of communication pattern.
As I said before, there is no reason why a monolithic system cannot be built with a good modular structure. [1] But numerous observations show that this happens rarely, and the Big Ball of Mud remains the most popular architectural pattern. Frustration with this outcome, along with the equally sad fate of many monoliths, has led a number of teams down the path of microservices. Module decoupling works because the module boundaries act as a barrier to references between modules. The trouble is that in a monolithic system it is easy to sneak under that barrier. Doing so can be a useful tactical shortcut to getting features built quickly, but done widely it undermines the modular structure and damages the team's productivity in the long run. Splitting the modules into separate services makes the boundaries firmer, putting extra obstacles in the way of anyone tempted by such dubious shortcuts.

An important part of this coupling is persistence. One of the key characteristics of microservices is decentralized data management, which says that each service manages its own database, and any other service that needs that data must go through the owning service's API. This eliminates integration databases, which are a major source of nasty coupling in larger systems.
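To make that data-ownership boundary concrete, here is a minimal sketch, assuming a hypothetical customer service exposed over HTTP: an order service fetches customer details through the published API rather than querying the customer database directly. The URL, types, and field names are illustrative and not taken from the article.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// Customer mirrors only what the customer service chooses to expose over its API.
// The order service never sees the customer database schema itself.
type Customer struct {
	ID    string `json:"id"`
	Name  string `json:"name"`
	Email string `json:"email"`
}

// fetchCustomer crosses the service boundary through the published HTTP API,
// keeping persistence private to the customer service.
func fetchCustomer(baseURL, id string) (*Customer, error) {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get(fmt.Sprintf("%s/customers/%s", baseURL, id))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("customer service returned status %d", resp.StatusCode)
	}
	var c Customer
	if err := json.NewDecoder(resp.Body).Decode(&c); err != nil {
		return nil, err
	}
	return &c, nil
}

func main() {
	// Hypothetical internal hostname; the point is the API call, not the address.
	c, err := fetchCustomer("http://customer-service.internal", "42")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Printf("order placed for %s <%s>\n", c.Name, c.Email)
}
```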

It is important to stress that it is entirely possible to have firm module boundaries inside a monolith, but it requires discipline. Similarly, you can build a Big Ball of Microservice Mud, but that takes more deliberate effort than it does with a monolith. The way I see it, using microservices raises the probability that modularity in your project will turn out well. If you are confident in your team's discipline, that advantage may not count for much; but as the team grows, discipline becomes harder to maintain, while the importance of keeping module boundaries intact keeps growing.

This advantage turns into a liability if you get the boundaries wrong. That is one of the two main reasons for a "Monolith First" strategy, and why even those inclined to start with microservices should only do so when they know the domain well.

But I am not done with the caveats. You can only tell whether a system has kept up its modularity after time has passed. So we can only really assess whether microservices lead to better modularity once we see microservice systems that have been running for at least a few years. Moreover, early adopters tend to be more talented developers, so we will have to wait even longer before we can see microservice systems written by average teams. Even then, we have to accept that average teams write average software, so instead of comparing their results with the achievements of top teams, we would have to compare the resulting software with what it would have been under a monolithic architecture, and that is a tricky counterfactual to evaluate.

All I have to go on at the moment is early evidence gathered from friends who are already using this style in their work; their judgment is that maintaining the modules has become significantly easier.

One case I heard of was particularly interesting. A team had made a bad choice, using microservices on a system that was not complex enough to justify the decision. The project got into trouble and needed to be rescued, so many more people were thrown at it. At that point the microservice architecture unexpectedly showed its positive side: the system was able to absorb the rapid influx of developers, and the team could make use of the larger numbers far more easily than is typical with a monolith. As a result, the project accelerated to a productivity greater than would have been expected from a monolith, giving the team a chance to catch up. The net effect was still negative: the software cost more staff-hours than it would have as a monolith, but the microservice architecture did support the ramp-up.

Distribution


So, microservices use a distributed system to improve modularity. But distributed software has a major drawback: the fact that it is distributed. As soon as you play the distribution card, you incur a whole host of complexities. It is unlikely that the microservice community is as naive about these costs as the distributed-object movement was in its day, but the complexities do not go away.

The first is performance. These days you are much less likely to find that a system's bottleneck is in-process function calls, but remote calls are slow: if your service calls five remote services, each of which in turn calls another five remote services, the response times add up to horrifying latency across the system.

Of course, a lot can be done to reduce the damage here. First, you can increase the granularity of your calls so that you make fewer of them. This complicates your programming model: you now have to think about how to batch your inter-service interactions. It also only gets you so far, since you still have to call each collaborating service at least once.
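As a sketch of what coarser-grained calls look like in practice, the example below batches several product lookups into a single request instead of one round trip per product. The catalog service, its /products?ids=... endpoint, and the Product type are hypothetical names used only for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"strings"
	"time"
)

// Product is the shape the hypothetical catalog service returns for each item.
type Product struct {
	ID    string  `json:"id"`
	Price float64 `json:"price"`
}

// fetchProducts asks for all products in a single round trip instead of
// making one remote call per ID, trading a chattier API for lower latency.
func fetchProducts(baseURL string, ids []string) ([]Product, error) {
	client := &http.Client{Timeout: 2 * time.Second}
	q := url.Values{"ids": {strings.Join(ids, ",")}}
	resp, err := client.Get(baseURL + "/products?" + q.Encode())
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var products []Product
	if err := json.NewDecoder(resp.Body).Decode(&products); err != nil {
		return nil, err
	}
	return products, nil
}

func main() {
	// One network hop for three products, rather than three separate hops.
	products, err := fetchProducts("http://catalog-service.internal", []string{"a1", "b2", "c3"})
	if err != nil {
		fmt.Println("catalog lookup failed:", err)
		return
	}
	fmt.Println("fetched", len(products), "products")
}
```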

The next way to ease the pain is to use asynchrony. If you make six asynchronous calls in parallel, you are now only as slow as the slowest call, instead of the sum of all their latencies. This can be a big performance gain, but it comes at a cognitive cost. Asynchronous programming is hard: hard to get right and much harder to debug. Yet most of the microservice stories I have heard needed asynchrony to achieve acceptable performance.
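Here is a minimal sketch of that fan-out, assuming hypothetical internal service URLs: the calls run concurrently, so the overall wait is roughly that of the slowest call rather than the sum of all of them.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"sync"
	"time"
)

// callService fetches one remote endpoint and reports how the call went.
func callService(client *http.Client, url string) string {
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Sprintf("%s: error: %v", url, err)
	}
	defer resp.Body.Close()
	io.Copy(io.Discard, resp.Body) // drain the body so the connection can be reused
	return fmt.Sprintf("%s: %s", url, resp.Status)
}

func main() {
	// Hypothetical downstream services this one collaborates with.
	urls := []string{
		"http://pricing.internal/quote",
		"http://inventory.internal/stock",
		"http://shipping.internal/eta",
	}
	client := &http.Client{Timeout: 2 * time.Second}

	results := make([]string, len(urls))
	var wg sync.WaitGroup
	start := time.Now()

	// Fire all calls concurrently; total latency is close to the slowest call,
	// not the sum of all of them.
	for i, u := range urls {
		wg.Add(1)
		go func(i int, u string) {
			defer wg.Done()
			results[i] = callService(client, u)
		}(i, u)
	}
	wg.Wait()

	for _, r := range results {
		fmt.Println(r)
	}
	fmt.Println("elapsed:", time.Since(start))
}
```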

After speed comes reliability. You expect in-process function calls to just work, whereas a remote call can fail at any time. The more microservices you use, the more potential points of failure appear. Wise developers know this and design for failure. Happily, the tactics needed for asynchronous collaboration also fit well with handling failure, and the result can be improved resilience. That is not much of a compensation, though: you still have the extra complexity of working out the consequences of failure for every remote call.
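One common way to design for the failure of a remote call, sketched below with assumed names, is to bound the call with a timeout and fall back to a degraded but safe default when it fails; production systems typically layer retries or a circuit breaker on top of this.

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// Recommendations is the payload we hope to get from a hypothetical
// recommendation service; an empty list is an acceptable degraded answer.
type Recommendations struct {
	Items []string `json:"items"`
}

// fetchRecommendations bounds the remote call with a timeout and returns a
// safe fallback instead of propagating the failure to the caller.
func fetchRecommendations(ctx context.Context, userID string) Recommendations {
	ctx, cancel := context.WithTimeout(ctx, 300*time.Millisecond)
	defer cancel()

	url := "http://recommendations.internal/users/" + userID
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return Recommendations{} // degrade gracefully
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return Recommendations{} // timeout or network failure: degrade gracefully
	}
	defer resp.Body.Close()

	var recs Recommendations
	if err := json.NewDecoder(resp.Body).Decode(&recs); err != nil {
		return Recommendations{}
	}
	return recs
}

func main() {
	recs := fetchRecommendations(context.Background(), "42")
	fmt.Println("showing", len(recs.Items), "recommendations (possibly a degraded default)")
}
```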

And those are just the top two fallacies of distributed computing (link in Russian).

There are some caveats to this problem. First, many of these issues appear in a monolith as it grows. Few monoliths are truly self-contained; usually there are other systems around, often legacy ones, that you have to work with. Interacting with them means going over the network and running into the same problems. That is why many people are inclined to move to microservices sooner rather than later in order to handle interaction with external remote systems. Experience helps here too: a more skilled team will cope better with the problems of distribution.

But distribution always comes at a cost. I am always reluctant to play the distribution card, and I think too many people reach for distribution too quickly because they underestimate its problems.

Eventual consistency


I am sure you are familiar with websites that require strong nerves to work with. You update something on a page, the page refreshes, and your update is not there. You wait a minute or two, refresh again, and your update finally appears.

This highly annoying usability problem is almost certainly caused by the perils of eventual consistency. Your update was received by the pink node, but your read request was handled by the green node, and until the green node gets the update from the pink one, you are stuck in an inconsistency window. Sooner or later everything becomes consistent, but until that point you are left wondering whether something has gone wrong.

Such inconsistencies are irritating enough, but they can pose much more serious threats. Business logic can end up making decisions based on inconsistent information, and when that happens it can be extremely hard to work out what went wrong, because any investigation begins long after the inconsistency window has closed.

Microservices introduce eventual consistency issues because of their laudable insistence on decentralized data management. With a monolith, you can update several things together in a single transaction. Microservices require multiple resources to be updated, and distributed transactions are frowned upon (for good reason). So now developers need to be aware of consistency issues and figure out how to detect when things are out of sync before doing something in code they will later regret.
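To make that awareness a bit more concrete, here is a heavily simplified sketch, under hypothetical names, of a common pattern: a service commits a change to its own data and then publishes an event that other services apply later. The system is inconsistent until every consumer has processed the event, and a production version would need something like a transactional outbox so the local commit and the publish cannot drift apart.

```go
package main

import (
	"fmt"
	"time"
)

// OrderPlaced is the event other services consume to update their own data later.
type OrderPlaced struct {
	OrderID    string
	CustomerID string
	PlacedAt   time.Time
}

// EventBus stands in for whatever message broker carries events between services.
type EventBus interface {
	Publish(topic string, event any) error
}

// loggingBus is a toy bus that just prints events; a real system uses a broker.
type loggingBus struct{}

func (loggingBus) Publish(topic string, event any) error {
	fmt.Printf("published to %s: %+v\n", topic, event)
	return nil
}

// placeOrder commits the order in the order service's own store (elided here)
// and then announces the fact. Until consumers handle the event, their view of
// the world lags behind: that gap is the inconsistency window.
func placeOrder(bus EventBus, orderID, customerID string) error {
	// 1. A local transaction against the order service's own database would go here.
	// 2. Publish the event; a transactional outbox would make steps 1 and 2 atomic.
	return bus.Publish("orders", OrderPlaced{
		OrderID:    orderID,
		CustomerID: customerID,
		PlacedAt:   time.Now(),
	})
}

func main() {
	if err := placeOrder(loggingBus{}, "o-123", "c-42"); err != nil {
		fmt.Println("failed to publish:", err)
	}
}
```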

The world of monoliths is not free of these problems either. As systems grow, caching is increasingly needed to improve performance, and cache invalidation is the other Hard Problem. Most applications need offline locks to avoid long-lived database transactions. External systems need updates that cannot be coordinated by a transaction manager. And business processes are often more tolerant of inconsistency than you might think, because businesses usually prize availability more (business processes have long had an intuitive understanding of the CAP theorem).

So, like the other issues of distribution, monoliths do not entirely avoid inconsistency problems, but they suffer from them much less, especially while the systems themselves are fairly small.


Independent Deployment


The trade-off between module boundaries and the complexity of distributed systems has been with me throughout my career in this business. But the most significant change of the last decade is the increased role of releasing to production. In the twentieth century, production releases were almost universally a painful and rare event, with everyone staying late into the night or over the weekend to wedge some awkward piece of software into a place where it could do something useful. These days, however, skilled teams release to production frequently, and many organizations practice continuous delivery, which lets them do production releases many times a day.

This shift has had a profound effect on the software industry, and it is deeply intertwined with the microservice movement. Several microservice efforts were triggered by the difficulty of deploying large monoliths, where a small change in one part of the monolith could cause the whole deployment to fail. A key principle of microservices is that services are components that can be deployed independently; so now, when a change is needed, only one small service has to be tested and deployed. If you mess it up, you do not take down the entire system. And thanks to designing for failure, even a complete failure of one component should not stop other parts of the system from working, albeit with some form of graceful degradation.

This relationship is a two-way street. With many microservices needing to be deployed frequently, you have to have your deployment act together. That is why rapid application deployment and rapid infrastructure provisioning are prerequisites for microservices. For anything beyond the basics, you need to be doing continuous delivery.

The great benefit of continuous delivery is the reduction in cycle time between an idea and running software. Organizations that practice continuous delivery can respond quickly to market changes and introduce new features faster than their competitors.

Although many people cite continuous delivery as a reason to choose microservices, it is worth remembering that even large monoliths can be delivered continuously too; the best-known cases are Facebook and Etsy. There are also plenty of cases where attempted microservice architectures stumble over independent deployment, with several services requiring careful coordination of their releases [2]. And while I constantly hear that continuous delivery becomes easier with microservices, I am less convinced of that myself than I am of the practical value of modularity, although modularity does of course correlate closely with delivery speed.

Operational complexity


The ability to quickly deploy small independent units is a great boon for development, but it puts an extra burden on the shoulders of operations, as half a dozen applications now turn into hundreds of little microservices. For many organizations, the difficulty of handling such a swarm of rapidly changing services is prohibitive.

This reinforces the role of continuous delivery. While for monoliths continuous delivery is merely a useful skill, almost always worth the effort spent on it, for a serious microservice architecture it is a vital necessity. There is simply no way to cope with dozens of services without the automation and collaboration that continuous delivery implies. Operational complexity also grows because of the added demands of managing and monitoring these services. Again, a level of maturity that is optional for monolithic applications becomes a necessity once microservices enter the picture.

Proponents of microservices like to point out that since each service is small, it is easier to understand. The danger is that the complexity does not disappear anywhere: it simply shifts into the interconnections between services. This can surface as increased operational complexity, for instance when it is hard to debug behavior that spans several services. Good decisions about where to draw service boundaries will reduce the scale of this problem, but boundaries in the wrong places make life many times worse.
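One widely used tactic for that cross-service debugging problem, offered here as an illustration rather than anything the article prescribes, is to propagate a correlation ID with every request so that log lines from different services can be stitched back into a single trace. The header name and handler below are hypothetical.

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"log"
	"net/http"
)

const correlationHeader = "X-Correlation-ID" // illustrative header name

// newCorrelationID produces a random identifier for requests that arrive without one.
func newCorrelationID() string {
	b := make([]byte, 8)
	if _, err := rand.Read(b); err != nil {
		return "unknown"
	}
	return hex.EncodeToString(b)
}

// withCorrelationID ensures every request carries an ID, logs it, and echoes it
// back, so log lines from different services can be joined into one trace.
func withCorrelationID(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		id := r.Header.Get(correlationHeader)
		if id == "" {
			id = newCorrelationID()
		}
		log.Printf("correlation=%s method=%s path=%s", id, r.Method, r.URL.Path)
		w.Header().Set(correlationHeader, id)
		// Outgoing calls to other services should copy the same header.
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/orders", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	log.Fatal(http.ListenAndServe(":8080", withCorrelationID(mux)))
}
```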

Managing this increased operational complexity demands a host of new skills and tools, with the greater emphasis on the skills. Tooling is still immature, but my instinct tells me that even with better tooling, the minimum skill level needed to operate in a microservice environment is higher.

Yet the need for better skills and tooling is not the hardest part of handling operational complexity. To do all this effectively you also need to introduce a devops culture: much greater collaboration between developers, operations, and everyone else responsible for delivering software. Cultural change is hard, especially in larger and older organizations. If you do not make this up-skilling and cultural shift, your monolithic applications will be hampered, but your microservice applications will be traumatized.


Technology diversity


Since each microservice is an independently deployable unit, you have considerable freedom in choosing the technologies to build it with. Microservices can be written in different languages, use different libraries, and use different data stores. This lets teams pick the right tool for the job, as some languages and libraries are better suited to certain kinds of problem.

Discussion of technology diversity usually centers on picking the best tool for the job, often losing sight of the fact that the biggest advantage of microservices is much more prosaic: versioning. In a monolith you can only use one version of a library, a situation that frequently leads to problematic upgrades; one part of the system may need an upgrade to use new features, while that same upgrade breaks another part of the system. Dealing with library versioning is one of those problems that gets exponentially harder as the code base grows.

There is a danger that such a large technology zoo will overwhelm a development organization. Most organizations I know of encourage a limited set of technologies. This encouragement is backed by common tools for things like monitoring, which makes it easier for services to stick to a small portfolio of common environments.

Do not underestimate the value of supporting experimentation. With a monolithic system, decisions about languages and frameworks made early on become hard to change later. After a decade or two of such decisions, teams can find themselves stuck with technologies they find awkward. Microservices let teams experiment with new tools, and let systems migrate gradually, one service at a time, to better technologies when those start to pay off.

Secondary factors


Microservice proponents often say that services are easier to scale: if one service carries most of the load, you can scale just that service rather than the whole application. However, I am struggling to recall even one serious experience report that convinced me this is actually more efficient than simply replicating the entire application.

Microservices let you separate sensitive data and apply stricter security to it. Moreover, by ensuring that all traffic between microservices is secured, the microservice approach can make a break-in harder to exploit. As security grows in importance, this may over time become a significant reason to use microservices. Even in predominantly monolithic systems, it is not unusual to create separate services for handling sensitive data.

Summing up


Any general post on any architectural style suffers from the limitations of general advice. Reading a post like this cannot give you a definite answer, but such articles can help make sure you are aware of the factors you should consider in your upcoming choice. Each factor carries a different weight for different systems, and sometimes the pros and cons swap places (strong module boundaries, for example, are good for more complex systems but a hindrance for simple ones). Any decision you make depends on applying these criteria to your own context: judging which factors matter most for your system and how they will affect it over time. Furthermore, our experience with microservice architectures is relatively limited. You can usually only judge architectural decisions after a system has matured and you have learned what it is like to work with it years after development began; so far, the community does not have many accounts of long-lived systems built on microservice architectures.

Monolith versus microservices is not a black-and-white choice. Both definitions are fuzzy, which means many systems lie in a blurred boundary area, and there are systems that fit neither category. Most people, including myself, talk about microservices in contrast to monoliths because it makes sense to contrast them with the more familiar style, but we must remember that some systems are neither. I think of monoliths and microservices as two regions in the larger space of architectures. They are worth naming because they have interesting characteristics that are useful to discuss and compare, but no sensible architect treats them as a comprehensive partitioning of the architectural space.

The main takeaway, one that deserves the widest possible recognition, is the Microservice Premium: microservices impose a cost on productivity that can only be paid back in more complex systems. If you can manage your system's complexity with a monolithic architecture, then you should not be using microservices.

However, the recent popularity of discussions around microservices should not distract from the more important issues that drive the success or failure of software projects. General factors, such as the quality of the people on a team, how well they work with each other, and how accessible domain experts are for communication, will matter more than the decision to use or not use microservices. On a purely technical level, it is more important to focus on things like clean code, good testing, and attention to evolutionary architecture.

Further reading


Sam Newman's book, Building Microservices, gives a detailed account of the benefits of a microservice architecture in its first chapter.

Benjamin Wootton's post "Microservices - Not a Free Lunch!" on High Scalability is known as one of the earliest and still one of the best lists of the downsides of microservices.

Notes


1. Some people consider the term "monolith" an insult, implying a poor modular structure. Most people in the microservices world do not use the term that way; they define a "monolith" purely as an application built as a single unit. Certainly some ardent microservice advocates are sure that most monoliths end up as Big Balls of Mud, but I have not met anyone who would deny that it is possible to build a well-structured monolith.

2. The ability to deploy services independently is part of the definition of microservices. So it is reasonable to say that a suite of services that has to be deployed in a particular order is not a microservice architecture. It is also fair to note that many teams attempting a microservice architecture find themselves in trouble precisely because they end up having to coordinate the deployment of several services.

Source: https://habr.com/ru/post/261689/

