
9 years in a monolith on Node.js

monolith from https://reneaigner.deviantart.com


A week ago I spoke at a Node.js meetup and promised many people that I would publish the recording of the talk. Later I realized that I hadn't managed to fit some interesting facts into the allotted half hour. Besides, I myself prefer reading to watching and listening, so I decided to turn the talk into an article. The video will still be at the end of the post, in the links section.


I decided to talk about a topic that has been done to death: life in a monolith. There are already hundreds of articles about it on Habr, thousands of lances have been broken in the comments, and the truth died in those arguments long ago, but... The thing is that at OneTwoTrip we have very specific experience, unlike many people who write about architectural patterns in a vacuum:



So we have plenty of specifics and real experience. Interested? Let's go!


Disclaimer one


This talk reflects only the personal opinion of its author. It may or may not coincide with the official position of OneTwoTrip - that's as luck would have it. I work in a technical role on one of the company's teams and claim neither objectivity nor to speak for anyone but myself.

Disclaimer two


This article describes historical events; at the moment things are completely different, so don't be alarmed.

0. How did it happen


Google Trends for the query "microservice":

Everything is very simple: nine years ago nobody had heard of microservices, so we started writing like everyone else did back then - in a monolith.


1. Pain in the monolith


Here I will describe the painful situations that came up over these 9 years. Some of them have since been solved, some were worked around with hacks, and some simply lost their relevance. But the memory of them, like battle scars, will never leave me.


1.1 Updating Connected Components



This is the very case where synergy is evil. Every component was reused several hundred times, and if it was possible to use one in a crooked way, somebody did. Any change could cause completely unpredictable effects, and not all of them are caught by unit and integration tests. Remember the story about the mops, the fan, and the balloon? If not, google it. It is the best illustration of code in a monolith.


1.2 Migration to new technologies


Want Express? A linter? A different test or mock framework? To update the validator, or at least lodash? To update Node.js? Sorry. To do that, you have to edit thousands of lines of code.


Many people name as an advantage of the monolith that any change is a single atomic commit. What they keep quiet about is one thing: that change will never actually be made.


Do you know the old joke about semantic versioning?


the real semantics of semantic versioning:

major = a breaking change
minor = a minor breaking change
patch = a little-bitty breaking change

Now imagine that in your code almost any little-bitty breaking change will almost certainly surface somewhere. No, it is possible to live with this, and we periodically gathered our strength and migrated, but it was really hard. Really.


1.3 Releases


Here I must mention some specifics of our product. We have a huge number of external integrations and various business-logic branches that are executed rather rarely. I truly envy products that actually exercise all of their code paths in production within 10 minutes, but that is not our case. Through trial and error we found the release cycle that, for us, minimized the number of errors reaching end users:


  1. The release is built and spends half a day passing integration tests.
  2. The next day it sits under careful supervision on stage (serving 10% of users).
  3. Then it spends another day in production under even more careful supervision.
  4. And only after that do we give it the green light into master.

Since we love our colleagues and do not release on Fridays, this means that in the end a release lands in master about 1.5-2 times a week. Which leads to releases containing 60 tasks or more. That number causes merge conflicts, sudden synergistic effects, QA fully loaded with log analysis, and other sorrows. In short, releasing the monolith was very hard for us.


1.4 Just a lot of code


It would seem that the sheer amount of code should not matter in principle. But... actually, it does. In the real world it looks like this:



1.5 There are no code owners


Very often there are tasks with an unclear sphere of responsibility - for example, in shared libraries. And the original developer may have long since moved to another team, or left the company altogether. The only way to find a responsible person in such a case is administrative fiat: simply appoint someone. Which is not always pleasant, either for the developer or for the one doing the appointing.


1.6 Debugging Difficulty


Ran out of memory? CPU consumption went up? Want to build flame graphs? Sorry. So much happens simultaneously in a monolith that localizing a problem becomes extremely difficult. For example, it is almost impossible to understand which of the 60 tasks in a rollout causes increased resource consumption in production (even though locally, and on the test and staging environments, everything looks fine).


1.7 Single stack


On the one hand, it's good when all the developers "speak" the same language. In the case of JS, even backend and frontend developers understand each other. But...



1.8 Many teams with different ideas about happiness



If you have two developers, you already have two different ideas about which framework is best, which standards to follow, which libraries to use, and so on.
If you have ten teams, each with several developers, it is simply a disaster.
And there are only two ways to resolve it: either the "democratic" way (everyone does what they want) or the totalitarian way (standards are imposed from above). In the first case quality and standardization suffer; in the second, the people who are not allowed to realize their own idea of happiness do.


2. Pluses of monolith


Of course, the monolith does have advantages, and they may differ across stacks, products, and teams. There are surely many more than the three below, but I can only vouch for the ones that were relevant to us.


2.1 Ease of Deployment


When you have one service, it is much easier to bring up and test than a dozen services. However, this advantage only matters at the initial stage - later you can, for example, bring up a shared test environment and take every service except the one under development from it. Or from containers. Or however else you like.


2.2 No overhead data transfer


A rather doubtful advantage unless you have high load. But that is exactly our case, so the cost of transport between microservices is noticeable to us. No matter how fast you make that transport, keeping and passing everything within RAM is fastest - that much is obvious.


2.3 One build


If you need to roll back to some point in history, with a monolith it is genuinely simple: you just take it and roll it back. With microservices you have to pick out the mutually compatible versions of services that were running together at that particular moment, which is not always easy. True, this too can be solved with infrastructure.
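One infrastructure-side way to solve this can be sketched as a release manifest that records which service versions were live together. This is only an illustration of the idea, not our actual tooling; all names and versions are invented:

```javascript
// Hypothetical release manifest: records which service versions were
// deployed together, so a rollback restores a known-compatible set.
const manifest = {
  release: '42',
  services: { search: '3.4.1', booking: '2.9.0', payments: '1.12.3' },
};

// rollbackPlan: compare what is running now against the manifest and
// list the services that must be rolled back, with their target versions.
function rollbackPlan(current, manifest) {
  return Object.entries(manifest.services)
    .filter(([name, version]) => current[name] !== version)
    .map(([name, version]) => ({ name, version }));
}

module.exports = { manifest, rollbackPlan };
```

With a manifest stored per release, "roll back to Tuesday" becomes a diff against the manifest rather than archaeology across a dozen repositories.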


3. Imaginary pluses of the monolith


Here I have collected the things that are usually considered advantages but, from my point of view, are not.


3.1 The code is the documentation


I have heard this opinion often. But it is usually held by novice developers who have never seen files tens of thousands of lines long, written years ago by people who have since left. For some reason this point is most often raised by monolith supporters: we don't need documentation, we have no transport layer or API, everything is in the code - simple and clear. I won't argue with that statement; I'll just say that I don't believe in it.


3.2 There are no different versions of libraries, services and APIs. No different repositories.


Yes. But no. Because at second glance you realize that a service does not exist in a vacuum. It lives among a huge amount of other code and other products it integrates with - starting with third-party libraries, continuing with server software versions, and by no means ending with external integrations, IDE versions, CI tools, and so on. And once you grasp how many differently versioned things your service indirectly depends on, it becomes clear that this "plus" is just demagoguery.


3.3 Easier monitoring


It is simpler in that you have, roughly speaking, one dashboard instead of several dozen. But it is also harder, and sometimes downright impossible, because you cannot break your graphs down by parts of the code; all you get is the average temperature across the hospital. In general, I already said everything in the paragraph on debugging difficulty; I'll just note that the same difficulty applies to monitoring.


3.4 Easier to follow uniform standards


Yes. But, as I already wrote in the paragraph about many teams with different ideas about happiness, standards are either imposed in a totalitarian way or watered down almost to nonexistence.


3.5 Less chance of code duplication


The opinion that code does not get duplicated in a monolith is a strange one, but I have come across it quite often. In my experience, code duplication depends solely on the development culture in the company. Where that culture exists, shared code gets extracted into libraries, modules, and microservices. Where it doesn't, the same thing gets copy-pasted twenty times inside the monolith.


4. Pluses of microservices


Now I will write about what we gained after the migration. Again, these are real conclusions drawn from a real situation.


4.1 You can make a heterogeneous infrastructure


Now we can write code on whichever stack is optimal for a specific problem, and make rational use of any good developer who joins us. As an example, here is a rough list of the technologies we use at the moment:


4.2 You can do a lot of frequent releases


Now we can do many small independent releases, and they are easier, faster, and cause no pain. We once had a single team; now there are 18 of them. If they had all stayed in the monolith, it would probably have broken. Or the people responsible for it would have...


4.3 Easier to do independent tests


We have cut down integration test time: the tests now cover only what has actually changed, and we no longer fear sudden synergy effects. Of course, we had to step on some rakes first - for example, learning to make backward-compatible APIs - but over time everything settled down.
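The backward-compatible-API rake can be sketched like this: a response builder that only ever adds fields, keeping the legacy shape alongside the new one. The field names here are invented for illustration, not taken from our real API:

```javascript
// Backward-compatible response sketch: new fields are added, old ones are
// never removed or renamed, so old and new consumers both keep working.
function toOrderResponse(order) {
  return {
    // legacy flat field, kept for consumers that predate the change
    price: order.amount,
    // newer structured shape for current consumers
    total: { amount: order.amount, currency: order.currency },
  };
}

module.exports = { toOrderResponse };
```

The old field is dropped only after monitoring shows no consumer still reads it, which turns a breaking change into two safe ones.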


4.4 Easier to implement and test new features.


Now we are open to experimentation. Any frameworks, stacks, libraries - everything can be tried, and, if it works out, adopted more widely.


4.5 You can update anything


You can update the engine version, libraries, anything! Within a small service, finding and fixing all the breaking changes is a matter of minutes. Not weeks, as it used to be.


4.6 And you can choose not to update


Oddly enough, this is one of the coolest features of microservices. If you have stable working code, you can simply freeze it and forget about it. You will never have to update it just so the rest of the product can run on a new engine: the product moves to the new engine while the microservice keeps living as it always has. The flies and the cutlets can finally be served separately.


5 Cons of microservices


Of course, there was a fly in the ointment, and the perfect solution where you just sit back and collect a salary did not materialize. What we ran into:


5.1 You need a bus for data exchange, and coherent logging


Services talking over HTTP is the classic model, and on the whole it even works, provided there are logging and load-balancing layers between them. But it is better to have a proper message bus. In addition, you have to think about how to collect logs and correlate them with one another - otherwise you will end up with plain mush on your hands.
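The log-correlation part can be sketched as follows, assuming every service stamps each entry with a shared correlation id (a convention you have to enforce; the field names here are illustrative):

```javascript
// Group log entries from several services into per-request traces by a
// shared correlation id, then order each trace by timestamp.
function mergeByRequestId(...streams) {
  const traces = new Map();
  for (const stream of streams) {
    for (const entry of stream) {
      if (!traces.has(entry.requestId)) traces.set(entry.requestId, []);
      traces.get(entry.requestId).push(entry);
    }
  }
  // Within a trace, sort by timestamp so the request reads chronologically.
  for (const entries of traces.values()) entries.sort((a, b) => a.ts - b.ts);
  return traces;
}

module.exports = { mergeByRequestId };
```

In practice this job is usually done by a log pipeline (ELK and the like), but without the shared id no tooling can stitch the mush back into requests.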


5.2 You need to keep an eye on what developers are doing


Strictly speaking, you should always do this, but with microservices developers clearly have more freedom, which can occasionally produce things that would give Stephen King goosebumps. Even if the service looks like it is working from the outside, don't forget there must be a person keeping an eye on what is inside it.


5.3 We need a good DevOps team to manage all of this.


Almost any developer can somehow deploy a monolith and upload its releases (for example, over FTP or SSH - I have seen it done). But with microservices there appear all sorts of centralized services for collecting logs, metrics and dashboards, Chef for managing configs, Vault, Jenkins, and other goodness that you can, on the whole, live without - but without it life is neither good nor transparent. So to manage microservices you need a good DevOps team.


5.4 You can chase the hype and shoot yourself in the foot.


This is probably the main drawback of the architecture, and its main danger. Very often people blindly follow trends and start adopting an architecture or technology without understanding it. After that everything falls over, they get lost in the resulting mess, and then write a Habr article titled "how we moved from microservices back to a monolith," for example. In short: migrate only if you know why you are doing it, which problems it will solve, and what you will get.


6 Hacks in the monolith


Some of the hacks that allowed us to live in the monolith a little better and a little longer.


6.1 Linting


Introducing a linter into a monolith is not as simple as it seems at first glance. Sure, you can write strict rules, add a config, and... nothing will change: everyone will simply turn the linter off, because half the codebase lights up red.


To introduce linting gradually, we wrote a simple add-on over eslint - slowlint - which lets us do one simple thing: maintain a list of temporarily ignored files. As a result:



Within a year we managed to bring about half of the monolith's code under a single style - that is, almost all of the actively written code.
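The trick behind such a wrapper can be sketched roughly like this. This is not slowlint's actual code, just the shape of the idea, using the ESLint report shape (`filePath`, `errorCount`) for the result objects:

```javascript
// Sketch of the slowlint idea: lint everything, but drop violations coming
// from a list of temporarily ignored legacy files.
function filterResults(results, ignoredFiles) {
  const ignored = new Set(ignoredFiles);
  return results.filter((r) => !ignored.has(r.filePath));
}

// The build fails only on errors in files that are not on the ignore list;
// the list shrinks over time as legacy files get cleaned up.
function hasErrors(results, ignoredFiles) {
  return filterResults(results, ignoredFiles).some((r) => r.errorCount > 0);
}

module.exports = { filterResults, hasErrors };
```

The key property is that new files are never on the list, so all freshly written code is linted strictly from day one.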


6.2 Improving unit tests


Unit tests used to take three minutes to run. Developers did not want to wait that long, so everything was checked only in CI on the server. Some time later the developer would learn that the tests had failed, curse, reopen the branch, page the code back into their head... In short, they suffered. What we did about it:


  1. To start with, we began running tests in parallel. Yandex has a multi-process variant of mocha, but it did not take off for us, so we wrote a simple wrapper ourselves. Tests became one and a half times faster.
  2. Then we moved from Node 0.12 to Node 8 (yes, that process deserves a separate write-up). Oddly enough, this gave no significant performance gain in production, but the tests ran 20% faster.
  3. And then we finally sat down to profile the tests and optimize them individually. That gave the biggest speedup.

At the moment, unit tests run in the pre-push hook and finish in 10 seconds, which is quite comfortable and lets us run them without breaking our flow.


6.3 Lightweight Artifact


The monolith's build artifact eventually grew to 400 megabytes. Given that one is created for every commit, the total volume was considerable. We were helped here by our fork of the modclean module: we removed unit tests from the artifact and purged assorted debris such as readme files, tests inside packages, and so on. The gain was about 30% in size!


6.4 Dependency Caching


At one point, installing dependencies with npm took so long that you could not only drink a coffee but, say, bake a pizza. So at first we used the npm-cache module, which we forked and tweaked a little. It let us keep dependencies on a shared network drive, from which all subsequent builds would fetch them.


Then we thought about build reproducibility. When you have a monolith, shifting transitive dependencies are the scourge of God. Considering how far behind we were on the engine version back then, a change in some fifth-level dependency could easily break our entire build. So we started using npm-shrinkwrap. Life got easier with it, although merging its changes is a pleasure reserved for the strong of spirit.


And then, at last, came package-lock and the excellent npm ci command, which ran only slightly slower than installing dependencies from the file cache. So we switched to it exclusively and stopped storing prebuilt dependency bundles. That day I brought a few boxes of donuts to work.


6.5 Rotating the release queue between teams


This one is more of an administrative hack than a technical one. Initially I was against it, but time showed that the other tech lead was right - well done. Once releases were rotated among several teams, it became clearer where exactly errors had been introduced, and, more importantly, each team felt its own responsibility for speed and tried to resolve problems and roll out as quickly as possible.


6.6 Delete Dead Code


In a monolith, deleting code is terrifying - you never know what might be tied to it. So most of the time it just stays lying around. For years. And even dead code has to be maintained, to say nothing of the confusion it introduces. So over time we started using require-analyze for a shallow search for dead code, and integration tests run in coverage-check mode for a deeper one.
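The shallow search boils down to walking the require graph from the entry points and flagging files nobody reaches. Here is a toy sketch of that idea, operating on an in-memory map of module sources for brevity (this is not the real require-analyze code, and the regex-based scan misses dynamic requires):

```javascript
// Extract relative require() targets from a module's source text.
function findRequires(source) {
  const re = /require\(\s*['"](\.[^'"]+)['"]\s*\)/g;
  const out = [];
  let m;
  while ((m = re.exec(source)) !== null) out.push(m[1]);
  return out;
}

// Walk the require graph from the entry point and collect every module
// that is reachable. `modules` maps module name -> source text.
function reachable(modules, entry, resolve = (from, req) => req) {
  const seen = new Set();
  const stack = [entry];
  while (stack.length) {
    const mod = stack.pop();
    if (seen.has(mod) || !(mod in modules)) continue;
    seen.add(mod);
    for (const req of findRequires(modules[mod])) stack.push(resolve(mod, req));
  }
  return seen;
}

// Dead modules are everything not reached from the entry point.
function deadModules(modules, entry) {
  const live = reachable(modules, entry);
  return Object.keys(modules).filter((m) => !live.has(m));
}

module.exports = { findRequires, deadModules };
```

A real tool resolves paths against the filesystem and handles index files; coverage from integration tests then catches the code that is required but never executed.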


7 Cutting up the monolith


For some reason many people believe that to switch to microservices you need to abandon your monolith, write a pile of microservices from scratch alongside it, and switch everything over at once - and happiness will follow. But this model... hmm... is fraught with shipping nothing while spending a great deal of time and money writing code you will have to throw away.


I propose another option, which seems more workable to me, and which is what we actually implemented:


  1. Start writing new functionality as microservices. Get the hang of the technology, step on the rakes, and figure out whether you want to do this at all.
  2. Extract code into modules, libraries, or whatever you use.
  3. Extract services from the monolith.
  4. Extract microservices from the services. Without haste, one at a time.

8 And finally


The picture is taken from https://fvl1-01.livejournal.com/


For the end, I have saved the most important thing.


Remember:



If something works at other companies, that by no means guarantees it will benefit you. If you blindly copy another company's experience with the words "it works for them," it will most likely end badly. Every company, every product, and every team is unique. What works for some will not work for others. I don't like stating the obvious, but too many people build a cargo cult around other companies, blindly copying their approaches, and end up buried under fake Christmas ornaments. Don't do this. Experiment, try things, and work out the solutions that are optimal for you. Only then will everything work out.


Useful links:




Source: https://habr.com/ru/post/459206/

