
Breaking Up the Monolith: Refactoring the Web Application Architecture





Since its introduction, JavaScript has come a long way: from simple web pages to full-fledged applications and even servers. However, the more complex our applications become, the more pressing the question of an architecture that suits them.



Together with Viktor "gritzko" Grischenko, the creator of swarm.js ( https://twitter.com/gritzko ), we look at modern approaches to building the architecture of JS applications, both on the server and on the client.


- When we talk about monolithic web applications, we usually mean an architecture that has long since become classic: the so-called layered monolith, well established in many enterprise solutions. Tell us, what disadvantages of this architecture have you had to fight in real projects?



- I first ran into this classical architecture back in 2000, at the Bank of Russia. It arose by itself, in the course of a rushed implementation. The result was a quite ordinary enterprise Java nightmare: the system could only be run in its entirety, on the server, with the full database. It was already hard to do anything with the resulting monolith; everything depended on everything. Later I saw the same kinds of failures at Yandex. This is an inevitable stage once an application outgrows its architecture.



- How do you tell that a monolithic project should be split into services? Are there characteristic signs?



- "It's time to divide" belongs to the "time to amputate" category. Splitting a large task into small orthogonal subtasks should be done at the design stage.



- Node.js is developing very actively, and articles and tutorials quickly become obsolete. Are there good practices that hold up today? Perhaps there are reference solutions for building a microservice architecture?



- Personally, I think REST-based microservices are the same old thing, just viewed from a different angle. One way or another, you need asynchronous communication between subsystems. In the classics, these are message queues; they have always been used everywhere. Now there are trendy things: Kafka, Akka.



- For a monolithic application it is usually enough to have a load balancer and the required number of copies. But in the case of microservices, you also need to understand which component of the system to scale.



- A balancer as such solves the problem only for very simple, ideally stateless applications. Otherwise, synchronization and concurrent data access problems begin: a portal opens in the basement and demons climb out of it. A component with one clear function is unambiguously easier to scale. Specialization and economies of scale are the second half of the 18th century, Adam Smith.



In general, my talk is not about microservices. I apply the ideas of "immutable infrastructure" directly to the frontend, to what runs in the browser.



- OK, let's discuss code on the client. What problems are relevant now?



- For example, when using fashionable frameworks, the typical size of a client bundle is already measured in megabytes of JavaScript. Unpleasant, especially on a phone. And the frontend build process has become a whole big production of its own.



Pulling data to the client is in full swing: first the code was pulled out and SPAs appeared; now we are gradually pulling out the data too, because we want offline-first, fast response times and other goodies.



At the same time, when you start saying that UI components must be pure functions, nobody considers it strange anymore. That is, people are ready.



- Indeed, the size of the average web application keeps increasing. Do you have any suggestions for how to improve the situation?



- My idea is to strictly divide everything that happens on the frontend into (A) data and (B) everything else: components, code. Moreover, everything that is not data is a function. And functions are also data, if you think about it: when we put code into a version control system, it becomes data. So we have versioned data and versioned functions, which we deliver to the client in the same way, cache in the same way, and update in the same way.



- And if a little more?



- Let me explain with an example. We send the user a page with a hundred buttons on it. We do not pack the code for all hundred buttons into one big bundle, and we do not download the entire database; we send only what is needed for the render: part of the data and part of the components. If the user starts poking around everywhere, we pull in more data and more components. And whatever has already been delivered to the client is cached forever; we update it as needed.
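The "deliver on demand, cache what was delivered forever" behaviour can be sketched as a loader that fetches an item once and never refetches it. The fetch function is injected here so the same loader could serve components (e.g. via dynamic `import()`) and data (e.g. via `fetch()`); all names are illustrative, not an actual API:

```javascript
// Sketch: lazy delivery with a forever-cache. fetchFn is whatever
// actually brings a component or a data chunk to the client.
// (A real loader would also deduplicate in-flight requests.)
function makeLoader(fetchFn) {
  const cache = new Map();
  return async function load(key) {
    if (!cache.has(key)) {
      cache.set(key, await fetchFn(key)); // first request: go fetch
    }
    return cache.get(key); // later requests: served from the cache
  };
}

// Example: count how often the "network" is actually hit.
let hits = 0;
const load = makeLoader(async (key) => {
  hits += 1;
  return `payload of ${key}`;
});
```

However many times the user pokes the same button, the network is hit only once for it.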



This is partly similar to microservices: sorting the mass of code into immutable, versioned components. By the way, I personally prefer to call it versioning rather than immutability. A version, whether of data or of code, is immutable by definition, and that is precisely the point.
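The "everything is versioned data" idea can be sketched as a tiny store where code and data go through the same immutable pipeline. The `name@version` keys and the `put`/`get` API are illustrative here, not swarm.js's actual API:

```javascript
// Sketch: functions and data treated uniformly as versioned,
// immutable entries. A key like "name@version" never changes its
// value, so clients may cache delivered entries forever and pick up
// changes only by requesting a newer version key.
const store = new Map();

function put(name, version, value) {
  const key = `${name}@${version}`;
  if (store.has(key)) throw new Error(`${key} is immutable`);
  store.set(key, value);
  return key;
}

const get = (key) => store.get(key);

// Data and code travel through the same pipeline:
put('greeting', 1, 'hello');              // versioned data
put('shout', 1, (s) => s.toUpperCase());  // versioned function

console.log(get('shout@1')(get('greeting@1'))); // HELLO
```

An update is never an overwrite: `greeting@2` is a new entry, while `greeting@1` stays cacheable forever.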



Ultimately, the fundamental problem here is a single one: both code and data must work on multiple devices at the same time. We build distributed systems simply because processors are everywhere. A phone is a computer, an e-reader is a computer, a TV is a computer. Client-server thinking is becoming a thing of the past, just as mainframes did. What will replace it is an interesting question.



- It sounds interesting, but how compatible is this with existing libraries and frameworks? Even partial operation of an application requires large base dependencies (AngularJS, jQuery, ...).



- Well, Preact manages to fit into 3 KB somehow.

There is really no need to use Angular in such a context.



- Has this concept already taken shape as a separate project? Where can I find out more?



- It has. For now it is mostly experiments; by December I will tell you what came of it.








In addition to Victor's talk at the HolyJS conference (December 11, Moscow, Radisson "Slavyanskaya"), we recommend paying attention to the following talks:



Source: https://habr.com/ru/post/313286/


