
“The profit is real: we gained freedoms we never had before,” - Vladimir Plizga on microservices

Implementing microservices is all the rage now, but not everyone is good at it - especially when it comes to large enterprises and banking systems. Some spend years failing to carve up their monolith; others are unsure about fault tolerance, and so on.


Today we will talk about implementing a microservice architecture at the Center for Financial Technologies (CFT), a group of companies that has been building information technology for the financial sector since 1991. In other words, it is an organization where product quality is critically important: real money depends on it.


Vladimir Plizga, in turn, has spent the last 6 years immersed in developing the backend of Internet banks and related services at CFT, where he actively champions microservices and other fashionable things. To talk with him, I came straight to the CFT office, took a selfie and the obligatory photo of the red elephant :-)


Topics covered:




(on the left - Vladimir, on the right - olegchir)



- My name is Vladimir Plizga, and I have been working at CFT for several years now, mostly on Java backend development. Before that I worked a little in other areas, but that can be chalked up to my student and post-student period. To be more precise, I now develop financial services - not brokerage and stock quotes, as people often assume when they hear the word "finance", but Internet banks and related services. For example, tools like gift card tracking sites: instead of cash, a person is given a branded universal card that they can use to pay anywhere such cards are accepted. Attached to it is an application that lets them track the card's spending, view history, activate it, and get other goodies.


- Is this gift card reloadable?


- It depends on the modification of the product. Most of them are not reloadable. The acquirer - the one who will give the card away - tops it up once with whatever amount suits them, usually from 300 rubles to 15 thousand rubles, hands it over, and that's it. There are separate modifications that can be topped up, but that is no longer a certificate; it is a payment card, a full-fledged financial product. Still, that is a side line for us; the main thing is prepaid cards - cards that serve people as a kind of supplement to their wallet, whether electronic or physical. You can top them up in any convenient way and then pay with them on the Internet, in stores, or through the payment office itself (we often call it a "payment office" rather than an "Internet bank") for all sorts of needs: from basic payment services like paying for kindergarten or utility bills - more than 5 thousand services are supported in total - to purchases on AliExpress and elsewhere. Essentially, any electronic payment.


My role in all this is that, at the moment, architectural decisions about the backend pass through me. It is all about microservices. We are now on the road from the monolith to microservices: we gradually cut off pieces and add no new functionality to the monolith, following the rule (which some are already bored of): to get out of a pit, you must first stop digging it. I also deal with other things - code review, for example; many business tasks pass through me, including fundamentally new functionality, plus related work on performance and architectural improvements... In a sense, the typical work of a backend developer.


- So are you a full-on architect, drawing boxes in an editor?


- No, because I myself grew out of application development, out of the people who write the code for the boxes someone else invented, so I find it closer to push off from practical goals and reality. Sometimes you have to raise the level of abstraction and lay things out as boxes and diagrams, but I try not to soar in those clouds and to stay closer to the realities we have to work with. If it is microservices, then I describe them in the specific concepts and paradigms native to them. I try not to drift off into abstraction.


- But you still draw UML diagrams?


- Yes, we draw them, but we do not go to extremes in upholding their syntax, correctness and so on.


- That is, they are not a source for code generation.


- No, they are not.


- I have had that experience too, and a problem arises when you have a huge number of microservices and want a complete picture of what is happening. How do you keep that whole picture in front of your eyes?


- At the moment we go the opposite way: we do not generate microservices from the picture; we maintain the picture from the microservices. We have a diagram, and in a sense it is a kind of UML, but let me emphasize that we focus primarily on how easy it is to read and maintain, and we do not obsess over syntax and rules. We have not pushed far in that direction; it more than satisfies us as it is. The diagram contains hyperlinks to the documentation and is full of its own conventions; there is a legend explaining what is designated where and how, and thanks to that we navigate it just fine. No problems.


- Is it generated dynamically from source?


- No, we maintain it in Confluence. There is the Gliffy plugin, which lets you maintain a picture quite easily at the level of abstract shapes: dragging a block does not force you to manually redraw any links. Literally these basic things are still enough for us. Because we split the system into separate diagrams, we do not need to maintain one whole picture, which would be unreadable in that form anyway. There are blocks you can drill down into to see another, isolated area, which is also easy to take in directly.


- Is your system really designed so beautifully, competently and carefully that these blocks form a sane structure?


- At the moment, yes. I will not brag too much; I think it is explained more by the youth of the system than by competent architecture :-) Of course, we try to maintain it. We anticipate a priori a lot of the problems that may arise, prepare and arm ourselves, but the task has not yet come up at global scale, since we are not talking about hundreds of microservices yet.


- How many microservices are there now?


- A few dozen.


- Not bad already. Many do not even have that :-)


- But we are still at the beginning.


- What are your tasks at this beginning?


- Tasks in terms of technical, architectural development?


- Tasks in terms of anything. You are the architect here!


- Though I am an architect, first of all I start from the needs of the business; they come first for us. Business wants value right now. Since we try to be agile, we assume that some things may turn out not to be useful at all. If we invest in something, we clearly understand that we are experimenting, testing the ground. From this it follows that we need to build our development and our application so that any piece of work is as cheap as possible - so cheap that it would not even be a shame to throw it away. Of course, in absolute terms that is impossible, but thanks to our current direction we can allow ourselves to carve out some piece incrementally, without injecting it into the monolith, where it would grow roots and change everything around it, and instead build the thing alongside. Play with it, see how it behaves and what value it represents to the end user, and then, if necessary, throw it away. Based on these three priorities - speed, cheapness and independence from other components - we develop the components further.


How does this manifest itself purely technically, if we move from loud words to deeds? We must try not to grow the monolith. In our case it is rather large; we call it "the core", and everything concerning it is "core stuff", "nuclear", and so on. In general, we avoid going there whenever possible. Naturally, we need to build communication with external clients (both the web and the various mobile platforms) without the participation of this core and its satellites. That is, we routed all connections through a single entry point, so that at the routing level, at the load balancer level, we can switch this traffic and control it more flexibly than when it pours into the core in a single stream.


Plus, we try to set up the development itself to be as cheap as possible. We have a whole microservice that is full-fledged and working but does nothing. It serves a single purpose: to be a template for creating the rest. We simply copy-paste it; it already knows how to do everything - everything in it is wired up and configured, it is adapted to CI testing and to rolling out to production - and it is quite easy to push off from it. This was done deliberately so that starting a new microservice takes minimal effort from developers, to the point that they can play with it today and throw it away tomorrow.


- What is this template based on? Something common, like Spring Cloud, or something different?


- On ready-made Cloud solutions... Though perhaps we will not even go down that whole path and will use separate libraries instead. First of all Eureka and Zuul - the standard solutions for the service registry and the entry point.


- Do you already use them, or is that the direction of development?


- Already using them. The entry point and the service registry are already in operation. To break this solution in, we routed through it not the combat traffic from end users but the auxiliary traffic from call centers, which is not as active or loaded. As for the base of a microservice itself, it is Spring Boot, as, in principle, for all the others. I do not know whether it can be considered "naked"; of course it comes with some additional frameworks, most often frameworks from Spring itself. We mostly lean on Spring's solutions.
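Wiring Zuul up as the entry point and Eureka as the registry is, as the interview notes, mostly configuration rather than code. As a rough sketch (the service names, ports and addresses below are invented for illustration, not CFT's real ones), a Spring Cloud Netflix edge service might be configured like this:

```yaml
# application.yml of a hypothetical edge gateway (Spring Cloud Netflix era)
spring:
  application:
    name: edge-gateway

eureka:
  client:
    serviceUrl:
      # where the Eureka registry lives; the address is illustrative
      defaultZone: http://localhost:8761/eureka/

zuul:
  routes:
    payments:
      path: /payments/**           # external path exposed by the gateway
      serviceId: payments-service  # resolved via Eureka to live instances
```

With config like this in place, the Java side really can shrink to a couple of annotations such as `@EnableZuulProxy` on the gateway's main class.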


- But did you wire that up yourself, or did you take what was there? For example, Spring Cloud Netflix - that is, annotations, RestTemplate, etc.


- Yes. Our experience with microservices so far is that both the entry point and the service registry, from the point of view of application code, are very small. Almost everything comes down to adding an annotation, and the rest of the beauty lies outside the Java code: build scripts, start scripts, everything related to the environment and adaptation to our realities, CI/CD and further down the DevOps pipeline. However, at the entry point we did add our own filters for additional access control, security, and backward compatibility of the API - in general, we have our own add-ons. But then again, even the filter concept itself is an abstraction taken from the Zuul library that Spring Cloud pulls in.


- Got it. Are you going to use circuit breakers and the like?


- Yes, we are. In fact, the situation is that we were only about to adopt them and it turns out we are already using them. Zuul, as shipped with Spring Cloud, already includes Hystrix, and it is already working; we have to deal with its settings and take into account that its timeouts are configured, to put it mildly, in non-trivial ways - some squats are required. We use it, but I cannot say that we do so fully consciously and control everything.
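The core behavior Hystrix adds behind Zuul can be sketched in plain Java. The following is a minimal, hand-rolled illustration of the circuit-breaker idea, not Hystrix's actual implementation (it has no timeouts, half-open state or metrics; the class and method names are invented): after a threshold of consecutive failures the breaker opens, and all further calls go straight to the fallback without touching the remote service.

```java
import java.util.function.Supplier;

// Minimal circuit-breaker sketch: trips open after `threshold`
// consecutive failures, then fails fast via the fallback.
class MiniBreaker {
    private final int threshold;
    private int consecutiveFailures = 0;
    private boolean open = false;

    MiniBreaker(int threshold) { this.threshold = threshold; }

    <T> T call(Supplier<T> action, Supplier<T> fallback) {
        if (open) {
            return fallback.get();       // open: do not touch the remote at all
        }
        try {
            T result = action.get();
            consecutiveFailures = 0;     // a success resets the failure streak
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= threshold) {
                open = true;             // trip the breaker
            }
            return fallback.get();
        }
    }

    boolean isOpen() { return open; }
}
```

Real Hystrix adds the subtleties mentioned in the interview (the non-trivially layered timeouts among them), which is exactly why its settings take some squats.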


- Was there a desire to patch anything? In Eureka, for example. Or in Zuul.


- Zuul, fortunately, was flexible enough that such a desire did not arise. Oh no, I am lying: I have a little pull request on GitHub. Although no, I apologize - right now it is just an issue, and the pull request is in the works. In general, it turned out that Zuul out of the box does not support custom HTTP statuses. By "custom" I mean those not standardized by the HTTP RFCs themselves: 431 - nobody needed it, nothing specifies it, not a trace of it anywhere. It turned out that deep in Zuul's guts it is assumed that any incoming status code proxied through the entry point to the outside must correspond to an element of an enumeration hardcoded in Zuul's own source. And when an HTTP status arrives for which there is no mapping, it crashes - and crashes very badly, with no workaround. I wrote to them; they answered: well, yes, probably so, try to fix it. I was surprised to discover that the place where the crash happens plays a purely debugging role: it is meant to bite a piece out of the request in order to log it correctly, and that "correctly" implies logging not the bare HTTP status but an element of that very enumeration.
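The failure mode described here is easy to reproduce in miniature. The toy code below is not Zuul's actual source; it just contrasts a strict enum lookup, which throws on a status with no enum constant (the crash described above), with a tolerant lookup that falls back to the raw integer:

```java
// Toy illustration of the hard-enum-lookup problem with HTTP statuses.
class StatusLookup {
    enum KnownStatus {
        OK(200), NOT_FOUND(404), INTERNAL_SERVER_ERROR(500);
        final int code;
        KnownStatus(int code) { this.code = code; }

        // The Zuul-style failure: an unmapped status is fatal.
        static KnownStatus strict(int code) {
            for (KnownStatus s : values()) {
                if (s.code == code) return s;
            }
            throw new IllegalArgumentException("No enum constant for HTTP " + code);
        }
    }

    // Tolerant variant: log the enum name when we have one, never fail.
    static String describe(int code) {
        for (KnownStatus s : KnownStatus.values()) {
            if (s.code == code) return s.name();
        }
        return "HTTP_" + code;  // keep the raw int instead of crashing
    }
}
```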


- You would have to deploy a wrapper.


- And it turns out that because of debugging - which can be turned off, though this part still runs - the whole application crashes. In practice, the severity of the problem has faded: we decided not to get involved with custom HTTP statuses.


- Why did you need them at all?


- There was one idea; if you are interested, I will tell you in a few words. With their help we wanted to convey the nuances of errors, or of responses in general, from the final microservices. In our case every business scenario implies a lot of possible deviations from the expected result, and the client often needs to know how the business scenario and each of its steps ended. It may be the need to retry, later or right now, where the retry can be either user-initiated or automatic on the client side. It may mean a dialog box should be shown suggesting the user change something. For a concrete example, take topping up from a third-party credit card, from which we transfer money to our prepaid cards: there are several requests in this process, and some of them can branch further actions into sub-scenarios. We wanted to express exactly such things with custom HTTP statuses - our own, negotiated directly at the API level (our entire API is documented in Swagger) and written in the format: status + what the client needs to do. The idea did not fly for several reasons, but one of the main ones was that at the time the existing monolith had an established API where shades of errors were customarily conveyed not as HTTP statuses but as fields in the body of the response.


- Maybe it would be better to put them in HTTP headers?


- Headers would have been possible, but in any case that would require a significant, backward-incompatible change to the API. We could have contorted ourselves to make it backward compatible, but then we would have had to live with that indefinitely, since we have many fairly large clients. Having weighed all this, we realized it was easier to adapt to the old API now than to drag along a huge amount of additional logic to maintain backward compatibility for the customers who had not yet crawled over to the new API. None of it was easy - the client-side developers brought us plenty of additional circumstances - but in the end we settled on this solution: just convey everything in the response body.
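The convention they settled on can be sketched as a small error envelope: the HTTP status stays a plain standardized code, and the outcome of the business scenario travels as fields in the response body. The field names below (code, action, message) are hypothetical, not CFT's real API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of an error envelope carried in the response body.
class ErrorEnvelope {
    static Map<String, String> of(String code, String action, String message) {
        Map<String, String> body = new LinkedHashMap<>();
        body.put("code", code);       // machine-readable outcome of the step
        body.put("action", action);   // what the client should do: RETRY, SHOW_DIALOG...
        body.put("message", message); // human-readable text for the dialog
        return body;
    }
}
```

A client can then branch on `action` regardless of which transport status arrived, which is exactly what keeps the old API backward compatible.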


- You mentioned Swagger. What is your practice with it? How good is it? Any pleasant or unpleasant moments?


- Plenty of both, though I dare say slightly more pleasant ones, since we are still on it. First of all, the most important thing we got (although Swagger's own merit here is not so great) is the simplicity of description. It so happened that our monolith is built on the Play Framework. Play does have plugins for generating Swagger documentation, but, as always happens, for historical reasons different people wrote them at different times, and those plugins are good for nothing; they do not work for us. So the documentation had to be written manually: straight into the specification, sitting there and writing it by hand.


- As most of the world still does.


- But, no matter how you slice it, you agree that you have already written the same thing in code - why sit and repeat it by hand?


- And once again in Javadoc. And a fourth time in the tests.


- Yes! What for? I really wanted to save on this, so when we started down the microservice path we did not invent any bicycles; we analyzed several solutions. At the time, the most mainstream, most developed solution seemed to be SpringFox. Do you know anything about it?


- Nothing. Now I know the name.


- It is a generator of Swagger-based documentation from the controller descriptions in Spring itself - in particular, in Spring Boot. Strictly speaking, it is not tailored to Boot but is focused on Spring MVC, where all the annotations come from; still, it is on friendly terms with Spring Boot. The idea is that the API structure - the set of methods and their parameters - is already described in the controller, in the form of its methods and annotations, and can be extracted from there. That is the lion's share of the work, the skeleton of the documentation. All that is missing is the verbal description, and it is obviously easy enough to add it as annotations right in the code. SpringFox goes exactly this way: it takes the Swagger annotations (more precisely, the swagger-annotations subproject, which is part of the generator), applies them to the Spring controllers (Spring MVC, or Spring Boot in this case), interprets them on the fly and generates the Swagger specification. And here is another beauty: right inside the microservice it exposes one more endpoint (not on a separate port, but in a separate context) serving Swagger UI - those pretty green HTML forms - where all the documentation is presented, ready not only to be read but to be played with: the "try it out" buttons work right out of the box, sending requests to the microservice. At first glance it really looks like magic: no need to write the specification by hand, yet it is available at a separate URL where you can fetch it and feed it to some SOAP UI or Postman. All of that is already there. Plus, the documentation is served on each microservice's own port (no need to look for it somewhere or memorize anything), ready for reading. One more plus: any change in the code entails a change in the documentation - you would have to try hard to change something and not notice that right next to it you describe the same thing in text. That, of course, is a significant plus of this solution.


But, I confess honestly, it is imperfect. Not everything is smooth. SpringFox itself understands annotations and Spring's request structures well, but not perfectly. In particular, there are problems with collections: when collection types are returned from methods, they render crookedly in the UI. Besides, it does not let you tune all the parameters that bare Swagger provides - there is a wishlist it ought to take into account. There are a number of rough edges, but on the whole it is quite a workable approach. True, we had to regroup for it: if earlier we did not hesitate to put some additional business logic into the controllers themselves (validation, for example, when it could not be done at the annotation level), now we had to agree at the team level that controllers are for us the entry point and the source of documentation. No business logic or applied validation, for example, should live there; otherwise the controllers become unreadable, with a pile of annotations on every method. Instead of a method with just @RequestMapping, suddenly @ApiOperation, @ApiResponse and so on appear - a bunch of different annotations.


- Half a sheet of code taken up by annotations?


- In reality it is sometimes even more. But because we agreed that business logic is delegated to individual services, this improved test coverage and let us draw a clear boundary: here are the controllers, and they solve two problems - being the source of documentation and talking to the outside world - and everything else is done in application logic, in services.


- Do the programmers always write this documentation, or do you have technical writers?


- No, we write it ourselves; only developers.


- And does that always go smoothly, without rough edges?


- Here, in microservices, it is very easy for us. All the roughness was back when developers maintained the monolith's documentation by hand. Something was constantly forgotten, constantly drifting out of date; the mobile developers cursed because the documentation was stale - that was real tin. Now, when it comes to that, the developers are only happy.


- Which country are your products for?


- Russia.


- And in what language is the documentation written?


- In Russian. The team is entirely our own, there are no foreigners, including in the remote offices, so everything we do is in Russian.


- So this documentation is generated for work within the company? Or do you hand it out to outside contractors too?


- We have had that experience, though not right now, and our contractors are Russian speakers.


- Cool, very lucky.


- I know!


- Have you looked at the SpringFox code? If you really wanted to, could you patch it, or is it hellish tin in there?


- Yes, you can. There are, of course, non-trivial places, but on the whole it is written decently. I cannot say it is great, but it is not bad. A couple of times my hand was already raised to fix some slippery places, but there has not been time yet. And secondly, quite active guys stand behind it and polish it themselves - not as actively as we would like, but updates roll out regularly. True, so far not one item from our wishlist has been covered.


- Regarding HTTP: do you try to fit your requests to any particular methodology, for example REST?


- Yes, we try. At one time, at the very start of the move to microservices, we leaned on this very strongly: we discussed, debated, there were heated arguments.


- But that is usually very difficult - like stretching an owl over a globe, as the saying goes. How is it going for you?


- With us it is as usual: historically. If you remember, I already mentioned the forced support of backward compatibility with the previous API, on which we have a huge fleet of clients - hundreds of thousands of installs on people's phones. All our attempts to be RESTful in every respect slid off the globe because we were forced to keep compatibility with that API, and that API, to put it mildly, is not quite RESTful. It uses only POST and GET, regardless of whether you are deleting or editing an entity. The very concept of an "entity", around which all of REST is built, is not particularly observed there. The approach is more like in Java itself: there is some method, so let's shove its name right into the address - getBySomething sits there in the URL. This approach is not terrible; it works, it is normal. With REST it admittedly comes out more concise and cute, but I never felt any particular applied value, even though the API is big enough and grown-up. It seemed to me it risked becoming talkative with no single paradigm, but in principle it worked out. And now, although we keep some REST tricks in mind and try to use them - the same orientation around entities, designing method addresses so that the entity is at the head and is then subdivided into finer details, slash by slash, deeper into the request - we still have to restrain ourselves for the sake of those backward compatibility concerns. That is, we have a kind of semi-REST.


- Stateless or stateful?


- The microservices are all stateless now. With one of them, perhaps, there are questions, because the next step in the near future will be deployment into an orchestration system: we plan to roll it all out on Kubernetes, in Docker containers. One of the main requirements we agreed on with our operations team - quite natural, in principle - is stateless microservices. The reasons are clear: so that any of them can be dropped at any moment and another one immediately brought up to pick up the requests. So that they are as easy to start as they are to lose.


- And if there are still services that turn out heavy, what do you do with them? If k8s drops a service with 400 gigs in the heap...


- What to do with that data?


- What to do with the data - how do you design services so that they survive such migrations?


- There are several approaches. One of those we practice (although I cannot say it is good for this particular case) is to use distributed storage: so that the place you called the heap is not just a heap but is also stored somewhere between the microservices. The classic solution - I do not know how classic it is for everyone, but it has become so for us - is Hazelcast, distributed storage based on Hazelcast. This is how the monolith works now: it is clustered and has shared distributed storage. Any piece of data is stored on all nodes, so if one node falls, nobody loses that data (provided, of course, it managed to replicate).
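The "two of three nodes" replication mentioned a bit further on can be modeled in a few lines. This is a toy model, not Hazelcast itself (Hazelcast expresses the same idea through its backup-count setting): every entry is written to its owner node plus one backup, so losing any single node loses no data.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy 2-of-3 replication: each key lives on its owner node and one backup.
class ReplicatedStore {
    private final List<Map<String, String>> nodes =
            List.of(new HashMap<>(), new HashMap<>(), new HashMap<>());

    void put(String key, String value) {
        int owner = Math.floorMod(key.hashCode(), nodes.size());
        int backup = (owner + 1) % nodes.size();  // one backup copy = 2/3 nodes
        nodes.get(owner).put(key, value);
        nodes.get(backup).put(key, value);
    }

    // A read survives the loss of any single node: at least one replica remains.
    String get(String key, int deadNode) {
        for (int i = 0; i < nodes.size(); i++) {
            if (i == deadNode) continue;
            String v = nodes.get(i).get(key);
            if (v != null) return v;
        }
        return null;
    }
}
```

Full replication, by contrast, would write to all three maps, trading memory for the ability to lose two nodes at once.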


- What is the degree of replication?


- Offhand I definitely cannot tell you.


- Let's put it this way: is it full replication or lower?


- It depends on the data. In some cases we use full replication, and for less critical data that is easy to recompute there is 2/3 - that is, the data is stored on two of the three nodes (if there are three nodes in the cluster). But I may be lying somewhere here. That is one option.


The second option: if a microservice's heap has swollen to 400 GB, and all of that is valuable data - is the microservice really okay? Should it behave like that at all?


- It is no longer "micro"; it is a "fat service".


- It seems it needs to be split into several other "micro" ones. Or there should be a persistent layer under it to protect it from such things. Clearly, that makes it less agile, but it depends on the value of the data. Maybe it makes sense to keep the data separately somewhere - not necessarily in a full-fledged DBMS; it may be worth using some more lightweight solution. But we have not yet run into this problem; we keep them small enough. There is not one that fell and dragged a bunch of valuable data to the bottom with it.


- What will happen if, in the middle of a microservice chain, something actively used by its neighbors fails, wholly or partially? Will there be some graceful degradation?


- In most cases, no, but in some places we do insure ourselves - though not with the means built into that same Spring Cloud Netflix, where a fallback in Hystrix is used. We have known about them and tried them, but so far there has been no need. Instead, we manage with our own means: right in the code we know what can be done in such a situation. Somewhere we let a request fail in favor of another request. Or we pull some data from the cache, if it is there and it is acceptable to serve data older than the freshest from the database that we went to and got cut off from. It all depends on the situation. But there are not many such places.
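The in-code, serve-stale-from-cache degradation described here can be sketched in a few lines. The names are illustrative only; the point is the shape: try the live call, remember the last good answer, and fall back to it when the backend breaks, when the business scenario permits staleness.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Sketch of a hand-rolled fallback: serve a possibly stale cached value
// instead of failing the whole request when the live call breaks.
class StaleCacheFallback {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    String fetch(String key, Supplier<String> liveCall) {
        try {
            String fresh = liveCall.get();
            cache.put(key, fresh);   // remember the latest good answer
            return fresh;
        } catch (RuntimeException brokenBackend) {
            // Graceful degradation: an older value beats an error page.
            return cache.get(key);   // null if we never had a good answer
        }
    }
}
```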


- I have talked with various people, and there are two main strategies for those who do not have honest HA/DR: if you have a billion Chinese users and a piece of the microservice graph falls over, either you simply lie there and do not even try to serve that billion, or "everything is lost", everyone starts running around in horror, shouting and kicking the admins. How is it with you? Which will it be?


- I think it will be closer to the first :-)


- In the second case, when you have no workaround, you somehow need to arrange production so that it never falls at all.


- Well, that sounds nice, yes. But how feasible is it in reality?


- Can it be done? Are you doing it?


- No, as you can see. Not yet, at the moment. Instead we try to get by with more targeted solutions, or we protect individual pieces, the most critical ones - for example, we tried to cluster that same monolith. It will not sustain a billion, but it can take triple the load thanks to elementary clustering.


- And how do you transfer large files between services? What if it is, say, a video of 4 gigabytes? What do you do?


- I am afraid to surprise you, but for some reason our financial services do not stream videos of four gigabytes each! But I understand your question; our files are simply small. We recently solved a similar task, and we did not have to transfer files like that to external systems over any of the protocols I just listed. As a rule, we exchange documents, XML; if they grow, it is because of digital signatures and content. As for interaction with end clients through the API - yes, we have to accept files, document scans, statements and so on. We solve such things with plain multipart/form-data: we simply stream the data through, and what we control is that nowhere in the microservice - neither at the entry points nor at any intermediate place - is there a buffer that would hold the file wholly. At every moment we work only with a piece: a piece received, a piece processed. If it has to go into the database, a CLOB or BLOB (depending on whether the data is binary or text) is allocated for it and written as an input or output stream. We only make sure that at no point in time is the file entirely in memory - because one file, another file, and the heap is gone. There is no magic here; in fact, everything already exists, we just tried to apply it correctly, and we try to configure the parsers the same way: whenever there is a risk of a large file, we set up streaming parsers, not DOM ones that pull everything in at once.
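The "a piece received, a piece processed" rule above boils down to copying streams through a fixed-size buffer, so memory use stays bounded no matter how large the file is. A minimal sketch (class and method names are invented for illustration):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.UncheckedIOException;

// Chunked stream copy: no buffer ever holds the whole file,
// memory use is bounded by the 8 KB working buffer.
class ChunkedCopy {
    // Copies everything from in to out; returns bytes transferred.
    static long copy(InputStream in, OutputStream out) {
        byte[] buffer = new byte[8192];
        long total = 0;
        int read;
        try {
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read); // handle one piece, then forget it
                total += read;
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return total;
    }
}
```

The same `OutputStream` could just as well be a JDBC BLOB stream, which is the CLOB/BLOB case described above.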


- Globally, how do you see the direction of development of the idea of ​​microservices? Not only maybe in my company, but in the whole world? What should people look at now? What is good? What do you respect?


- The first thing that comes to mind is respect for the opinionated approaches, on the basis of which Spring Boot is built, among other things. Approaches and tools (no matter for microservices or related tasks), based on the aggregation and synthesis of someone else's experience. As our experience shows, this is almost the most invaluable addition that you can get from third-party tools. When they are not just designed “saw what you want, here we have provided you with some basic abstractions, and now get up what you want.” Here is a prime example - Ant was like that. Very powerful, very flexible tool, but whatever you do, write everything from scratch, and you constantly accumulate it ...


- Or makefiles.


- Yes, makefiles. Gradle, in my opinion, is a very good solution here. They baked into it a lot of what has already proven itself as a working approach, starting with an elementary default directory structure: why declare it explicitly and reinvent the wheel? Otherwise you get a zoo. When people lack experience, it later turns out their layout does not match the generally accepted one, and a complete mess begins. So these quiet opinionated defaults are extremely valuable both for beginner teams and for seasoned ones, because they let you (sometimes without even realizing it) reuse the vast experience of people who have already walked this road and collected all the rakes and minefields. So, without singling out any specific tool or service, I see the superiority of opinionated approaches as the global idea.


- Explain the word "opinionated", so we understand exactly what you mean.


- An approach based on an opinion. Remember how the Spring Boot guys explained why they designed it the way we have it now: they did not invent anything new, they generalized their experience. In their view, this is what a basic project, a basic product, should look like. By default. That does not mean it will look like this no matter what, and that if you disagree you are out of luck. It means that if you do not want to bother, it will look like this. That is their opinion. And then you have the opportunity to tune it, tweak it, adjust it to your needs, to your own opinion. The maximum value of such solutions lies in this combination of rich experience and flexibility.
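The "sensible defaults, overridable on demand" idea can be sketched in a few lines of plain Java. This is not Spring Boot itself, just a toy illustration of the pattern; the class name and settings are made up: everything works out of the box with zero configuration, and each default can be tuned only where your opinion differs.

```java
// A toy illustration of opinionated defaults: the "opinion" is encoded in
// the field initializers, and each setter lets you override one default.
public class ServerConfig {
    // Defaults chosen for you: the framework's opinion.
    private int port = 8080;
    private String contextPath = "/";
    private int maxThreads = 200;

    public ServerConfig port(int port) { this.port = port; return this; }
    public ServerConfig contextPath(String path) { this.contextPath = path; return this; }
    public ServerConfig maxThreads(int n) { this.maxThreads = n; return this; }

    public String describe() {
        return "port=" + port + " contextPath=" + contextPath + " maxThreads=" + maxThreads;
    }

    public static void main(String[] args) {
        // Zero configuration: you simply get the defaults.
        System.out.println(new ServerConfig().describe());
        // Disagree? Override only what you care about.
        System.out.println(new ServerConfig().port(9090).describe());
    }
}
```

Spring Boot does the same at a much larger scale: auto-configuration supplies the defaults, and `application.properties` overrides them point by point.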


It is not always good, though. Take the first Play framework (I worked only a little with the second): it is also opinionated, but those guys had a very peculiar opinion. I do not know what kind of hipsters wrote it :-)


- Is that the one still in Java?


- Yes. You haven't worked with it?


- I did, but it was so long ago, a whole lifetime has passed since.


- There, if you remember, for some reason they do not accept the standard Java package structure. They believe that if you put something somewhere, you put it right in the root. Your controllers are called simply controllers, not some org.company.app.controllers. How much blood that drank from us!


- How do you resolve name clashes when there is no structure? You could encode the structure in the class name...


- Ha, an "excellent" solution! And there is a lot of that kind of thing there. Or, for example, they believe the whole web is stateless, and therefore all methods of all controllers there are static. Well, that is how they see it. This is just one example.


- But in doing so they reduced the complexity of development by focusing on one particular niche. And if you step outside that happy niche, you are in trouble.


- Our experience showed that it is very easy to outgrow that niche. As soon as your application turns from a pet project or a lab assignment into something a bit more serious, that is it, you are outside the zone.


- And so they wrote the second Play, and even rewrote it in Scala. By the way, did I understand correctly that you use the first Play in the monolith? So what, will you keep using it or replace it with something?


- Instead, we will gradually saw pieces off the monolith until its value drops to zero.


- But you still need some web framework to serve the front end?


- No, we don't. In our case the monolith is only the backend core, and the front end is isolated in a separate satellite application. Second, for the web part we have dedicated guys who live entirely in their own world and build the web part as an independent web application on React.js. We provide them only an API: no templates, no HTML, nothing of that flies over there, just a bare API that they use in their application. Mobile apps, of course, work the same way. We are not tied to any framework here, as long as the API contract is honored. And the API is HTTP.
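The "bare API" contract can be sketched using only the JDK's built-in HTTP server. The endpoint path and JSON payload below are made up for illustration (the real backend is of course richer and framework-based), but the principle is the same: no templates, no HTML, just data over HTTP for any client, whether React or mobile, to consume.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// A minimal sketch of a backend that serves only a bare JSON API.
public class BareApiServer {

    static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/api/cards", exchange -> {
            // Hypothetical payload: the client renders it however it likes.
            byte[] body = "{\"cards\":[{\"id\":1,\"balance\":300}]}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws IOException {
        start(8080); // a React app, a mobile app, anything hits the same API
    }
}
```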


- That sounds like a fairy tale; many people will burn with envy.


- Well, I don't know... We arrived at this through suffering. The first version of the Internet bank was old, and we rewrote it long ago; it was on the Wicket framework. Heard of it?


- Yes, it is a terrible framework. I worked with it for many years and hate it with all my soul.


- I can understand you; I got my share of it too. Remember the concept of a component that maps into HTML, the wicketId that is then generated on the fly and substituted, all that? We had all of it, and we left it behind. We understood where we were going, what we were leaving, and what we were tired of. In principle, the framework is not hopeless, it is even good in places; it is just that everything has its place.


- As far as I understand, Wicket is very difficult to scale.


- Honestly, it doesn't scale at all.


- They have an IClusterable interface in their class hierarchy, i.e. in theory you can build on it, drag in Terracotta, but the developers themselves never leaned on it.


- We never even got that far; we just rewrote everything and started clustering normally.
Although some of our applications still run perfectly well on Wicket.


- We have been talking for more than an hour now. Any thoughts of turning all this into a talk?


- Not yet. For me all of this is so familiar that it seems like ordinary stuff. Am I going to come and tell people what everyone already does? Then again, others come and talk, and it's fine.


- There is a big difference: for others it still does not work, but you succeed.


- It's just that, besides what I have been talking about here, other things take up my attention. For example, the topic I will cover at JBreak. And I have a more recent project: a tool for aggregating server logs in a test environment. With the move to microservices it became important to watch the logs all in one place, as they are being written, right now, and not some indexed copies. On the one hand, a bunch of solutions like the ELK stack (Elasticsearch, Logstash, Kibana) have already been invented, but as experience shows, on a test environment that is shooting sparrows with a cannon. You have to configure formats and indexers, and they do not always work in real time. The ELK stack itself is, excuse me, three applications, each of which has to be looked after. You need some more lightweight solution that can be set up quickly and lets you quickly and easily see everything in the logs on the server. And, obviously, on a remote server that does not even have a UI, where all you can do is call tail.


- And for that you need SSH access, right?


- Yes, they can only be reached via SSH, well, or SFTP; there are no other options. I had run into this task long before: I made a simple utility that just tailed those files and rendered them in a web page open in your browser, writing in real time. A browser-based tail: nothing to install, it just fetches and polls. But that was not enough once microservices arrived; some way to aggregate them was needed. So on Spring Boot I knocked together an application that launches tail, listens to it asynchronously, and as soon as messages appear, aggregates them and pushes them to an Angular application on a web page. Since I am not a web developer, I did not overthink it and wrote the whole front end in Angular.
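The core of such a "browser tail" can be sketched in a few lines of Java. This is a simplified illustration, not the author's actual tool: each poll reads only the bytes appended to the log file since the previous poll, which is exactly what gets pushed to the page.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Path;

// A minimal polling tail: remembers the byte offset it has already consumed
// and, on each call, returns only what was appended after that offset.
public class FileTail {
    private final Path file;
    private long position; // byte offset already consumed

    public FileTail(Path file) {
        this.file = file;
    }

    /** Returns everything appended to the file since the previous call. */
    public String readNew() throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "r")) {
            long length = raf.length();
            if (length <= position) {
                return ""; // nothing new (or the file was truncated/rotated)
            }
            raf.seek(position);
            byte[] chunk = new byte[(int) (length - position)];
            raf.readFully(chunk);
            position = length;
            return new String(chunk);
        }
    }
}
```

A real tool would loop this on a timer (or react to filesystem events) and handle log rotation; the sketch keeps only the essential offset-tracking idea.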


- You know, web developers also sometimes write in Angular :-)


- Yes, it's just that lately the consensus is that Angular is already passé and something newer is needed.


- Second or first?


- The first, because StackOverflow had the most questions about it. The tool also solves various incidental problems along the way. Recognizing timestamps, so that everything in the final log comes out in order even though the sources are written unevenly. Making sure messages do not interleave, i.e. so that someone else's message from another log does not crawl into the middle of a multi-line message. There is aggregation, timestamp recognition, and all of it arranged into a single sequence.
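The two problems just described, ordering by timestamp and keeping multi-line messages intact, can be sketched like this. This is an illustration, not the author's actual code; the timestamp format (HH:mm:ss at the start of a line) is an assumption: a line that does not begin with a timestamp is treated as a continuation of the previous record, and records from all logs are then sorted by their leading timestamp.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.regex.Pattern;

// A minimal sketch of merging several logs by timestamp without letting
// another log's message crawl into the middle of a multi-line record.
public class LogMerger {
    // Assumed record prefix: HH:mm:ss at the start of the line.
    private static final Pattern TS = Pattern.compile("^\\d{2}:\\d{2}:\\d{2}.*");

    /** Groups raw lines into records: each record starts with a timestamped line. */
    static List<String> toRecords(List<String> lines) {
        List<String> records = new ArrayList<>();
        for (String line : lines) {
            if (TS.matcher(line).matches() || records.isEmpty()) {
                records.add(line);
            } else {
                // continuation (e.g. a stack trace line) sticks to its record
                records.set(records.size() - 1,
                        records.get(records.size() - 1) + "\n" + line);
            }
        }
        return records;
    }

    /** Merges records from several logs, ordered by the leading timestamp. */
    static List<String> merge(List<List<String>> logs) {
        List<String> all = new ArrayList<>();
        for (List<String> log : logs) {
            all.addAll(toRecords(log));
        }
        // the fixed-width HH:mm:ss prefix sorts correctly as a string
        all.sort(Comparator.comparing(r -> r.substring(0, Math.min(8, r.length()))));
        return all;
    }
}
```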


- Well, it seems we have covered everything. The next step is to come to your talk. Thanks for the answers, Vladimir!


A minute of advertising. Vladimir is a speaker at our conference JBreak 2018 (which will be held this Sunday). In his talk "Side Effect Injection, or Virtuous Crutches," he will discuss the Side Effect Injection approach; we will admire the compilation options for Java code, pick apart a case of bytecode modification in the JVM, prepare a formal Java grammar, and see it all on the example of a real application. The conference has discussion zones, so after the talk you can meet Vladimir and discuss various questions: not only Side Effect Injection, but also, for example, microservices. Tickets can be purchased on the official website.


Source: https://habr.com/ru/post/349954/

