
“To become a good systems programmer, you need 5–10 years of experience” - an interview with Alexey Shipilev from the Java Performance Team

On the eve of the Joker 2015 Java conference, which begins tomorrow, I am publishing a long interview with Alexey Shipilev, an engineer on the Java Performance Team at Oracle and one of the coolest and most well-known performance experts in the world. And, of course, an excellent speaker.

We talked with Alexey in detail about a wide range of topics.


Here is a video of our conversation. It is more than an hour long, so you can listen to it on the road.


Below the cut is a transcript of our conversation for those who are not big fans of video.

About changes in String


- Alexey, you used to talk a lot about performance, and lately you have been doing a lot of work on the String class. Tell me, please, what is the reason?

- I also talk about String from the performance point of view, because I participate in projects aimed at optimizing strings. There are many small optimizations being done in this area all the time, but we are making two major changes.

The first is Compact Strings. A large fraction of the strings in Java applications consist of characters that fit into ASCII (or Latin-1). That means each character in such a String can take 1 byte instead of the 2 bytes a char requires. Today the internal storage of a String is a char array. It is highly optimized, and people expect high performance from it. So, to try to compress these strings, you need two representations of String: one is the usual char array, and the other is a byte array in which each byte corresponds to one character. And that requires a lot of performance work, which must deliver, as the release criteria put it, “non-regression”: so that users who switch from Java 8 to Java 9 not only feel no pain, but get happiness and a performance boost from this feature of ours.
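
To make the idea concrete, here is a minimal sketch of such a dual representation (my illustration, not the actual JDK code): a byte array plus a marker saying how the bytes encode the characters.

```java
// A simplified sketch of the Compact Strings idea (not the real JDK implementation):
// strings whose characters all fit into one byte are stored as one byte per character,
// everything else falls back to the usual two bytes per character.
final class CompactStringSketch {
    private static final byte LATIN1 = 0; // one byte per character
    private static final byte UTF16  = 1; // two bytes per character

    private final byte[] value;
    private final byte coder;

    CompactStringSketch(char[] chars) {
        if (fitsInLatin1(chars)) {
            byte[] bytes = new byte[chars.length];
            for (int i = 0; i < chars.length; i++) {
                bytes[i] = (byte) chars[i];
            }
            this.value = bytes;
            this.coder = LATIN1;
        } else {
            byte[] bytes = new byte[chars.length * 2];
            for (int i = 0; i < chars.length; i++) {
                bytes[2 * i]     = (byte) (chars[i] >> 8);
                bytes[2 * i + 1] = (byte) chars[i];
            }
            this.value = bytes;
            this.coder = UTF16;
        }
    }

    private static boolean fitsInLatin1(char[] chars) {
        for (char c : chars) {
            if (c > 0xFF) {
                return false;
            }
        }
        return true;
    }

    char charAt(int index) {
        return (coder == LATIN1)
                ? (char) (value[index] & 0xFF)
                : (char) (((value[2 * index] & 0xFF) << 8) | (value[2 * index + 1] & 0xFF));
    }

    int length() {
        return (coder == LATIN1) ? value.length : value.length / 2;
    }
}
```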

And there are a lot of different moving pieces: how the library works, what is done with string concatenation, how the runtime handles all of this. In general, there are many small details.

- And isn't that scary? The String class runs through the whole JDK.

- It is scary! In fact, you can even see that under the original proposal from the guys in the Class Library Team there are a few funny comments. One is from Martin Buchholz from Google, who is known as one of the JSR-166 maintainers. Martin said something like: “this is a hard task, good luck!” The second comment is from me, something in the spirit of: “Well, guys, I don't believe this can be done, because who knows… String is quite a class. It is really dangerous to touch it.”

We have now spent half a year on careful prototyping of this change, accurate measurements, developing an understanding of how all this affects performance, choosing the right code generation strategy... And now I am fairly happy, because I understand that this change, which seemed dangerous, became well understood after all this work. We know all the pros and cons, and it looks quite reasonable.

- And what will happen now to those who, to speed up their code, brazenly reached into the char array that lives inside the string?

- A surprise is waiting for them. According to the specification, no one guarantees that there is a char array inside a string. I think that people who reach into the guts in order to win some performance have already subscribed to the obligation to follow all the changes.
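
For illustration, this is roughly what such digging into the guts looks like, and how it breaks; this is my sketch, not code from the interview.

```java
import java.lang.reflect.Field;

public class StringGutsDemo {
    public static void main(String[] args) throws Exception {
        String s = "hello";

        // This relies on an undocumented implementation detail of java.lang.String.
        Field valueField = String.class.getDeclaredField("value");
        valueField.setAccessible(true); // on newer JDKs this may already be blocked by the module system

        // On JDK 8 this works: the field holds a char[].
        // On JDK 9+ with Compact Strings the field holds a byte[],
        // so this cast fails with a ClassCastException.
        char[] chars = (char[]) valueField.get(s);
        System.out.println(chars.length);
    }
}
```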

- What does “no performance regression” mean? Is there a set of tests, are there criteria for assessing it?

- There is a formal side and an informal side to this. The informal one is that we have a large community which is used to the fact that a major release works at least no slower than the previous major release. Otherwise there are problems migrating to the new version. Therefore, when you develop a new feature, you remember that you cannot just lose performance somewhere, you have to win somewhere.

And there are formal criteria that say you may lose no more than so many percent of performance on such and such workloads. But that is the internal business of the development team, not of OpenJDK. The relationship is simply this: we understand that the community wants Java not to regress, and we also want Java not to regress. Our development rests on this unspoken assumption.

- Do you communicate with large vendors, with makers of enterprise software whose servers are full of strings upon strings?

- It is pretty simple. We work in a large corporation, and we have a bunch of internal customers. They are quite enough. It only seems that open source hangs in the air and develops all by itself, driven by enthusiast hackers...

Usually the opposite is true. There are major players who fund the development for internal reasons of their own. Why did Oracle buy Java along with Sun and start investing in its development?

The point is that when you invest in a platform, you give your development organization big advantages. If you have a product with a performance problem, and you know it is localized in the runtime, you can come to your own developers and say: “Guys, here is a bug! Come on, fix it quickly.” And the developers understand that this is what they are paid for, and they go and fix it.

About system developers and their careers


- You are a very well-known person. Surely a bunch of headhunters write to you on LinkedIn every day and try to lure you away. Why are you still at Oracle?

- You know, I have been asking myself this question for a long time and keep answering it to others. I read in a blog or some book that if you look at how product layers are arranged in our industry, you see hardware, operating systems, libraries, applications, and so on. And if you look at the population of developers in all these layers, you get an inverted pyramid.
At the bottom are the people who work on the low-level foundation, and there are very few of them. For objective reasons: the entry threshold is high enough, you need a lot of experience and a good education to do something reasonable there.

- In other words, to improve the Intel processor, you need ...

- You need to study long and hard. Much more than, for example, to write an Android application. To become a good systems programmer, you need 5-10 years of industry experience. And when you have gained that experience, it seems that your place in this ecosystem is exactly at this low level. You can leave this level and go write some enterprise applications, of course, but that means letting down all the people next to you who are digging into all of this down below. Down there, there is a significant shortage of people. There is an enormous amount of work. I try to assess more or less soberly the ratio of the work we can do to the work that is needed, and this ratio is one to 10, or even one to 100.

- So there really is a shortage of people?

- Yes, and this is not a feature of Java. This is a feature of the system level as such. There are few people and a lot of work, so there are good, interesting tasks. This makes it possible to choose which tasks you want to do. Out of the 100 tasks in front of you, you can choose the ones that appeal to you.


- In general, this is a very interesting question, the question of money. Usually, the fewer the people, the more expensive they are. I have a feeling that in our industry all this is somehow arranged differently. Is that so?

- No, systems programmers really are expensive. Good systems programmers are very expensive. And the most important thing is that the supply there is substantially smaller than the demand, and there is much less competition. There is no such thing as sitting in my garage hacking on my startup, praying that someone in the next garage does not come up with exactly the same startup idea and ship it first...

In my field, all the people who do something similar to what I do could be listed in a single email. And there is no particular competition. That is why industry gatherings like JVMLS are a fun sight.

When you read any programmers' forum, there are endless battles about “which language is better?”, “which platform is better?”, and so on. And when you are in the JVMLS crowd, where people who really work at the lower level sit next to each other, there is agreement and mutual understanding. Because everyone is in the same boat, everyone is wildly overloaded, everyone has similar problems...

It is a great group-therapy session. There is no chest-thumping, no statements that “we did it better and you all suck.” There is natural understanding and brotherhood, modulo some personal likes and dislikes.

- And where can a modern student or a recent graduate learn systems programming? I have a feeling that modern universities teach from the textbooks and templates of the 80s.

- And that is not so bad, because fundamental science remains fundamental and does not depend on market conditions. There are classic books and textbooks that have become classics of university programs - maybe not in Soviet and post-Soviet universities. You can simply take the program of some Stanford or MIT and see what they teach and which textbooks they use for it.

- For example, Computer Science 101?

- Computer Science 101 is the usual introductory course. And there are specialized courses with textbooks for them. If there is no way to watch a course online, you can simply find the textbook, which is usually a classic work, and learn straight from it. I know quite a few schools in Russia that do this. In Novosibirsk, for example, there was at one time a compiler school, which Academician Ershov himself had built up.

- Does Excelsior in its current form have anything to do with that?

- Excelsior, I think, was born there precisely because there were people with sufficient expertise. They could get together and make products of that kind...

And if after the textbooks you want practice, most companies engaged in this kind of programming have open intern and junior positions. You can practice there and feel the difference between what is written in the textbook and what happens in reality.

- Did something like this happen to you?

- Yes. As a student, I was an intern on the Intel team that worked on a Java runtime. On the one hand, I studied at the university and read the very books of Muchnik, Hennessy and Patterson, Tanenbaum and others; on the other hand, I had a real product at work into which I could try to transfer an idea from a textbook. Or simply read the project code and realize that this was exactly what I had read about in the textbook two months earlier. That is theory and practice.

This is similar to what is called the “Phystech system”: the first three years give students the fundamentals (mathematics, physics, computer science), and then they are sent to base departments at scientific institutes and industrial facilities, where, under the wing of practicing supervisors, they do real scientific work.

In our industry, the effective recipe is about the same: a student gets the basics at the university and then joins an organization where they can start applying that knowledge in practice. Yandex, ABBYY, Sbertech and other companies, as far as I understand, are moving in exactly this direction. They want to take on students and train them.

- The problem is well known. The shortage in the industry grows faster than academic institutions manage to supply people. Industry growth is about 15% per year, and that is a lot.

- I would still separate the application software industry from the system software industry. The application software industry swells or shrinks depending on what is happening in the market: the willingness of investors to put money into particular areas or startups.

And as far as I can see, there is always demand for system-level programming, because it is the foundation on which everything runs. There are no large fluctuations in that demand, because the people who build the platforms are always needed.



About technology sharing, "scientific" and "product" development


- Does Java actively “steal” technologies, features and ideas from other languages?

- I would not call it “stealing”, because in technologies such as runtimes a significant part of the development is done either in academia or in semi-academic R&D labs that publish scientific articles or technical reports along the lines of “we tried this idea, it worked out / it did not work out.” People who implement runtimes read these articles: “Aha, that suits us, let's try to implement it in our runtime.” It is clear that the same article will be read by people who write runtimes for different languages. The main foundation of this kind of development is that semi-scientific, semi-industrial R&D blend.

- Are there such laboratories in Russia now?

- Such work requires a unique set of skills. I do not know of full-fledged laboratories based in Russia; rather, there are individual people who participate in such developments.
If we are talking about R&D labs funded by the industry, the border between research and product implementation is rather blurry. Often these are the same people.

- It is known that many engineering problems are finally solved by scientists, and many scientific problems are solved by engineers. In applied mathematics this happens all the time.

- At Oracle Labs there are people who are more focused on science and try solutions outside the product, and the result of their work is technical reports or articles. And there are product teams that, in an attempt to improve the product, take ideas from such scientific articles. At the same time, quite often solutions are born inside the product itself, which researchers then notice and try to develop further. It is a symbiotic process. That is why large corporations fund R&D.

- So you have the feeling that companies are investing in this?
- It is not a feeling, I simply see it.

About who drives open source


- Let's go back a little to the question of open source, and Java in particular. On the one hand, the vendor, Oracle, develops and drives it; on the other hand, there is a community that exists separately from organizations. In the Java ecosystem there is Doug Lea, who largely drives concurrency, but he does not work at Oracle. How unique is this situation, when the leader of an area is outside the organization?

- Not unique at all. For example, things associated with ports to alternative architectures are not actively developed by Oracle. The ARM64 port, for example, is mainly driven by Red Hat, because Red Hat is interested in it. There are also the likes of Intel and AMD, which are interested in making improvements to the code generators.

- And are they visible at all? Do people from Intel and AMD come to you and say, “here is an optimization for our latest processor”?

- They cannot just say that. They say: “Look, if we generate code like this, it will be better. Here is our performance data.” And if the compiler folks agree that the change is really a win, it gets accepted.

- And what share of the people in the Java organization work on low-level tasks?

- I have never counted. Roughly speaking, I would say that there are probably about fifty people with whom I more or less regularly interact in my small area of JDK activity. Frankly, I do not know how many people work on other features. You can, of course, look at the org chart and try to estimate, but not everyone there is doing this kind of development.

- Hundreds of people?

- Two or three hundred, I think.

- In your estimation, is the low-level part a large share of the people?

- Yes, a large one. But the task is so complicated that even these people are not enough.



About the complexity of low-level tasks


- Why is the task difficult?

- Mostly because there are so many moving parts. There are a lot of things you need to know and be prepared for. You write a compiler, for example, and you need to know processor errata. To know, first of all, that they exist at all (for many it is news that processors have bugs), and that you will need to look at these errata to understand that your compiler is not always the one responsible for non-standard behavior. You need to know, and be able to see, that the bug you are fixing now may be caused by some wild interference of earlier code transformations made before the part you are responsible for. You need to know this whole stack in depth.

Runtimes themselves are products in which you can separate components only at a coarse granularity; in reality these components are very tightly coupled. If you want to fix bugs, that often means fixing bugs in different components, and if you work on performance, you will be working across a bunch of components at the same time.

I am happy as a child when some performance patch of mine takes up five lines in a single file, because such a performance change is cool: it is obviously correct and it helps. But big, good performance changes usually require changes in lots of small places all over this large product. Therefore you need to know the product, and the product is huge.

- Does this mean that it is badly designed?

- No.

- Why then do such things happen? Locality is considered one of the criteria of good design: to eliminate a problem, I want to dig in one place and not in ten.

- It all works great on paper. In reality two things happen: first, when you start chasing performance, it turns out that abstractions have to leak in certain places, because that is where the gain comes from. And second, bugs pop up.

You know something about the platform, about how it behaves: that the processor, for example, correctly moves data from register to register when you tell it “move”. Based on this assumption you can write a beautiful compiler, but then it suddenly turns out that the processor has an erratum. What do you do to fix it? You hammer a workaround into the code generator, because that is a practical solution to a practical problem.

- And if a customer has a server farm, and on that farm there are 1000 broken processors...

- If you read the HotSpot JVM source code, you can see various horrors there, and most importantly, many of these horrors are annotated. For example, you can find comments in the spirit of "this code is written in an ugly way, but it is ugly for such and such reasons."

- Is this rule generally respected?

- Usually it is. When you fix such bugs, you are expected to write down why you actually did such a devious thing.

And in such places there is usually something written like: “A naive person might assume that this could be written differently. But it cannot be written differently here, because the transformation over there will turn this graph into such-and-such a shape. And in general, go read the bug at this link, where you will find a fifteen-page epic about why this thing does not work the way it should.” Those are the kinds of small details.

Hardware Transactional Memory


- Returning to errata and processors. In the classic book by Herlihy and Shavit there is a separate chapter dedicated to hardware transactional memory. Could you tell us a bit more about transactional memory?

- The rationale for transactional memory is that there is a synchronization problem when you need to make a coordinated change in several places in memory. When you need to make an atomic change in a single memory location, you just use an atomic operation.

It is another thing when you need to do some non-trivial transformation with several reads and several writes, and you need to make that whole block atomic. You can take a lock and say “we acquired the lock here, did everything under the covers, released the lock”, and this works from a functional point of view.

The trouble is that on these locks you will get what? Contention! Therefore I want some hardware mechanism that I can tell: “at the start of the transaction, remember what we had; then, under the covers, I will do something to the machine state; and when I commit the transaction, all of that machine state becomes visible to everyone else atomically.” That is the whole point of a transaction. I do not just do individual reads and writes; I have a whole transaction that either publishes all of this state at once or publishes nothing at all.
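
As an illustration of that lock-based pattern (my sketch, not from the interview): two fields have to change together, so the whole block is made atomic with one lock, and under contention every thread queues on it.

```java
// A classic multi-word update: both balances must change together, so the whole
// block is made atomic with a lock. Functionally fine, but under contention
// every thread lines up on the same lock.
public class Accounts {
    private long checking;
    private long savings;
    private final Object lock = new Object();

    public void transferToSavings(long amount) {
        synchronized (lock) {
            checking -= amount;
            savings  += amount;
        }
    }

    public long total() {
        synchronized (lock) {
            return checking + savings;
        }
    }
}
```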

- This is similar to a CAS that swaps a reference.

- Yes, you can do it with CAS, but problems arise there: you have to create wrapper objects which you will CAS, whereas here you can do it with bare memory.
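
The CAS alternative he mentions usually means packing the related fields into an immutable wrapper object and swapping a reference to it with compareAndSet. A sketch follows (my example); the price is a fresh wrapper allocation on every update, which is the cost mentioned next.

```java
import java.util.concurrent.atomic.AtomicReference;

// Lock-free variant of the same two-field update: the state lives in an immutable
// snapshot object, and a CAS swaps the whole snapshot. Every successful (and every
// retried) update allocates a new wrapper object.
public class CasAccounts {
    private static final class Balances {
        final long checking;
        final long savings;
        Balances(long checking, long savings) {
            this.checking = checking;
            this.savings = savings;
        }
    }

    private final AtomicReference<Balances> state =
            new AtomicReference<>(new Balances(0, 0));

    public void transferToSavings(long amount) {
        while (true) {
            Balances cur = state.get();
            Balances next = new Balances(cur.checking - amount, cur.savings + amount);
            if (state.compareAndSet(cur, next)) {
                return;
            }
        }
    }

    public long total() {
        Balances cur = state.get();
        return cur.checking + cur.savings;
    }
}
```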

- And the wrappers entail allocation, memory traffic...

- Yes. HTM helps avoid all of this and do without creating extra wrapper objects. You can say: “now I started a transaction, I did 10 stores to memory, I committed the transaction, and either all 10 of these stores are visible, or none of them are.”

- Like in databases?

- Yes. That is why such memory is called transactional: it is a transaction. But you need hardware support for this, because software implementations of transactional memory already existed.

Software implementations are significantly slower, so you need hardware support. You somehow need to explain to the hardware that if I execute the “start a transaction” instruction, from that moment the machine says “ok, I will assume that everything done after the start of this transaction is not visible to anyone yet, and I will publish it all at commit.”

- So right in the assembly code there are instructions like “start a transaction”?

- xbegin. Then you say xend, and the hardware tells you whether the transaction managed to commit or failed; there is also xabort. Everyone had been waiting for hardware transactional memory to appear. Azul had long had transactional memory in their Vega, and, as they say, they had some success using that hardware HTM from Java.

- But Vega has since died.

- It is not that Vega died. Gil Tene, CTO of Azul Systems, said that x86_64 came so close to Vega in performance characteristics that it was no longer economically viable to keep supporting Vega.

- Yes, hardware is expensive to support.

- Yes, and why bother, when there is a hardware vendor that has fabs and sells everything to you for pennies? Compared to the cost of manufacturing your own microprocessors, those really are pennies.

So everyone was delighted and began making their own small prototypes. Even we did. The natural place to use HTM in Java is when you have, say, a trivial synchronized block with 2, 3, 4 stores in it. You start a transaction at the entry and commit it at the exit. The semantics are exactly the same.
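
In the Java source nothing changes; it is still an ordinary synchronized block, and the JVM may speculatively run it as a hardware transaction. A hedged sketch: the flags named in the comment are HotSpot options I believe exist for TSX-capable x86, but their availability depends on the JDK build and CPU, so treat them as an assumption rather than a recipe.

```java
// The "trivial synchronized block with a few stores" that lock elision targets.
// With RTM-based lock elision the JVM can start a hardware transaction at the
// monitor entry and commit it at the exit, falling back to the real lock if the
// transaction aborts. Candidate HotSpot flags (build- and CPU-dependent):
//   -XX:+UnlockExperimentalVMOptions -XX:+UseRTMLocking
public class Counters {
    private long hits;
    private long misses;

    public synchronized void record(boolean hit) {
        if (hit) {
            hits++;
        } else {
            misses++;
        }
    }
}
```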

- That is, the JIT suddenly says that now we are not using locks but these cunning transactional instructions instead?

- It was even nearly done in [JDK] 8. But then suddenly, like a bolt from the blue, it turned out that some folks had found a bug in the hardware implementation of this very HTM in Haswell. And since the bug, as I understand it, was already in silicon, it could not be fixed.

- That is, the silicon itself is broken?

- Yes. So Intel apologized to everyone and released a microcode update in which HTM was turned off. HTM is an optional feature, and to use it you must check the processor's feature flag, so you can release a microcode update in which the processor says "I do not support it." This is a mistake by the hardware manufacturer, but such things happen sometimes.

- There was a famous processor blunder with the Pentium III. Apparently, this happens every 10 years.

- Not with the Pentium III, just with the Pentium, I think. Many people, like me, bought Haswell partly in order to try TSX, and then it suddenly turned out that your processor had turned into a pumpkin...

- Did you update the microcode in the end, or did you not bother?

- It updates automatically.

- How does this happen?

- The operating system updates it at boot. It has a special blob in which the microcode lives.

- So you do not have to update the BIOS for this?

- As I understand it, it is done alongside the BIOS/UEFI: the operating system tells the processor, "here is your new microcode."

- Is that generally a normal thing to do?

- Yes, it is a normal strategy from the functionality point of view, because there was a bug that corrupts memory. Better slower but correct.

- And what are Intel's future plans for HTM?

- They have a new revision, called Skylake, I think. Haswell was advertised as "a processor that supports hardware transactional memory", and Skylake is advertised as "a processor that finally supports transactional memory correctly". Let's see how it goes this time.





About JMH and benchmarking


- Then here is a question. You have been active in the development of the Java Microbenchmark Harness (JMH) for the last three years. Most of the commits are yours; it is practically your project, which is now under the auspices of OpenJDK. Right?

- It is a project maintained by the performance team.

- And there is one problem with it. Saying "this is faster than that, and here is the proof" is not actually proof; the analysis does not end there. It seems that most people around simply do not understand this.

- With JMH, the story is as follows. You should always ask yourself the question: who benefits? Why does the performance team build JMH? It does so to make our own work easier, because we have research that inevitably leads to benchmarks. To do this research we need tools, but it is not limited to tools, because the main thing you get from your experiment is not numbers. The main thing is to extract knowledge from those numbers. As a rule, to extract more or less reliable knowledge, you need a system of theories, each of which must be confirmed by experiments. There is nothing new in this: take the philosophy of science, and it works exactly the same way.

Performance engineers stand in the same relation to the product as naturalists do to nature. We build high-level models, and we do it through experiments. JMH helps to run experiments, and the key word here is "helps". It does not run the experiments for you. It helps you not to step on the obvious rakes in benchmarking, so that you save time and can deal with the non-obvious, specific aspects of your own experiment. So that you can quickly write and run a benchmark and spend the remaining time figuring out whether the benchmark is written correctly, instead of spending hours fixing a silly mistake with dead code elimination, for example. That is what it is for.
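
For readers who have not seen JMH, here is a minimal sketch of a benchmark (my example, not from the interview); the class and field names are made up, but the annotations are the real JMH API. Returning the computed value hands it to JMH's Blackhole implicitly, which is the standard way to keep the JIT from eliminating the measured work as dead code.

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Thread)
public class StringSumBenchmark {

    private String s;

    @Setup
    public void setup() {
        // Randomized payload so the JIT cannot constant-fold the input.
        s = "payload-" + ThreadLocalRandom.current().nextInt();
    }

    @Benchmark
    public int sumChars() {
        int sum = 0;
        for (int i = 0; i < s.length(); i++) {
            sum += s.charAt(i);
        }
        // Returning the result prevents dead code elimination of the loop.
        return sum;
    }
}
```

JMH also ships profilers, for example `-prof perfasm` (on Linux with perf installed), which annotate the hottest generated assembly: exactly the kind of listing discussed a bit further below.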

It helps to solve the easy problems that still have to be solved in every experiment. It does not help to solve the specific problems of a particular experiment. Analysis is needed to understand what insight you can draw from the data, whether you can trust that insight, and what place that insight takes in the overall system of your knowledge.

We often sin in this way ourselves: when you write a blog post, you benchmark a feature, and we show experiments that specifically target that feature. In fact, behind such an experiment there is a whole chain of supporting links. For example, we know that this harness behaves normally on this machine, that we have calibrated this machine, that it does not go into thermal shutdown. Or we know that this approach to testing works, because we have validated it separately. In principle, this could also be written up in the blog, but then it would not be a blog post but a whole book about the staged experiments we did earlier to make sure that the experiment we are doing now can be trusted. You cannot just post a benchmark and assume by default that the benchmark is correct. That does not happen. A benchmark is correct only if it gives you insights that fit together with everything else you already know.

- Now look what happens: I need to understand the performance, but I am not much of an expert in this. I appeal to some authorities, to you, for example. I write a benchmark, measure, build a hypothesis about the performance model, about specific instructions. I take the assembly listing, find (or do not find) those instructions from my application there, and voila, I have checked my hypothesis. The trick is this: to understand exactly which instructions to look for in that assembly listing, you need the background, you need to be able to solve without JMH the very problem I am trying to avoid solving by using JMH. Right?

- If you want an answer to a question in a complex area that you do not understand, you must find a person who does understand it and ask them.

My company teaches me one simple thing: if I am, say, replying to an email with a legal question and start writing "I am not a lawyer, but...", then I have to close that email, pick up the phone and call a real lawyer who knows the correct answer to the question. It is exactly the same story with performance. You can say: "Of course, I am not a performance engineer and not much of an expert in this, but the performance data seems to show this." But at that moment you have to do one of two things: either become a performance engineer and interpret the data properly, or go to a person who understands the question and ask their opinion.

This is one of the reasons why large organizations that seriously invest in performance have performance teams: that is where the people who know how to answer such questions are kept. Not because they are especially smart, but because they have experience, a system of knowledge. Their job is to keep that body of knowledge in their heads. And saying "I will close my eyes, look at this data, and it will confirm my assumption" is confirmation bias.

For example, I will not speculate about which application server is better. I do not know the answer to this question; I understand it about as well as a pig understands oranges, as we say. Of course I have worked with application servers, but there are a lot of non-obvious things there. And there are experts who are immersed in this, who know what is going on, and you can ask them for advice. For me it would be the height of arrogance to write in a blog, for example: "I ran something on Glassfish and something on Weblogic; Glassfish fell over with such-and-such an exception and Weblogic with such-and-such, and from this I conclude that Weblogic is better, or Glassfish is better." Well, that is nonsense!

From some fragmentary information I would be trying to extrapolate a huge amount of knowledge that, in theory, takes decades to acquire. So the most you can do is either become a professional in the field where you need answers, or ask the professionals in that field. Everything else is shaky ground.

Source: https://habr.com/ru/post/268847/

