
Performance and runtime at the JPoint 2018 conference

We all have certain expectations for conferences. Usually we go to a fairly specific set of talks on fairly specific topics, and the set of topics differs from platform to platform. Here is what Java developers are interested in right now:



The conference program is put together so that for each of these topics there is at least one good talk. JPoint runs for two days and has about forty talks, so all the main topics will be covered one way or another.


In this short post I will go over the talks that appealed to me as someone who mostly attends talks on performance and runtime.


Scaling, clusters and all that will not be covered here; suffice it to say that they are represented (Christopher Batey from Lightbend will talk about Akka, Viktor Gamov from Confluent about Kafka, and so on).



Disclaimer: this article is based on my impressions of the program published on the official site. Everything below consists of my own thoughts, not quotes from the talks. There may be (and certainly are) incorrect assumptions and inaccuracies in the text.

Performance


Remember the humorous article about Java with inline assembly? In the comments, apangin promised to give a talk about VMStructs. Said and done, here it is: "VMStructs: why an application needs to know about JVM internals". The talk is about VMStructs, a special API of the HotSpot virtual machine through which you can learn about the JVM's internal structures, including TLABs, the code cache, the constant pool, Method, Symbol, and so on. Despite its "hacker" nature, this API can be useful to a normal program. Andrey will show examples of how VMStructs helps in the development of real tools (the ones they use at Odnoklassniki).
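
To give a sense of the kind of information VMStructs exposes, here is a minimal sketch of my own (not from the talk) that peeks into a running HotSpot process through the Serviceability Agent, which is itself built on top of the exported VMStructs tables. It assumes sa-jdi.jar on the classpath (JDK 8; in JDK 9+ the same classes live in the jdk.hotspot.agent module), a matching JDK version in the target process, and permission to attach to the given PID.

```java
import sun.jvm.hotspot.HotSpotAgent;
import sun.jvm.hotspot.runtime.VM;

// Rough sketch: attach to a live HotSpot process and read a few facts that the
// VM describes about itself through the exported VMStructs tables.
public class VmStructsPeek {
    public static void main(String[] args) {
        int pid = Integer.parseInt(args[0]);
        HotSpotAgent agent = new HotSpotAgent();
        agent.attach(pid);                      // the target is paused while we are attached
        try {
            VM vm = VM.getVM();                 // SA view assembled from VMStructs
            System.out.println("VM release : " + vm.getVMRelease());
            System.out.println("Heap type  : " + vm.getUniverse().heap().getClass().getSimpleName());
        } finally {
            agent.detach();                     // let the target run again
        }
    }
}
```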


The second talk, "Hardware transactional memory in Java", is given by Nikita Koval, a research engineer in the dxLab research group at Devexperts. If you were at JBreak earlier this month, you may have noticed that there he spoke about something completely different (writing a fast multi-threaded hash table that exploits modern multi-core architectures and specialized algorithms). This talk is about transactional memory, which is gradually appearing in modern processors but which an ordinary developer still does not quite know how to use. Nikita intends to explain how to use it, what optimizations already exist in OpenJDK, and how to run transactions directly from Java code.
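
I will not guess what exactly Nikita plans to show, but to illustrate the general idea: on suitable hardware HotSpot can already try to elide locks using Intel's RTM instructions. The sketch below is just an ordinary synchronized counter of mine; the interesting part is the launch flags in the comment. Those flags are an assumption on my side: on the JDK 8/9 builds I know of, RTM locking hides behind experimental flags and only has any effect on x86 CPUs with TSX support.

```java
// Plain lock-based code; with -XX:+UnlockExperimentalVMOptions -XX:+UseRTMLocking
// (on an x86 CPU with TSX/RTM) HotSpot may attempt to run the critical section as
// a hardware transaction first and fall back to the real lock on abort.
public class RtmCounter {
    private long value;

    public synchronized void increment() {
        value++;   // short, conflict-free critical section: a good candidate for elision
    }

    public synchronized long get() {
        return value;
    }

    public static void main(String[] args) throws InterruptedException {
        RtmCounter counter = new RtmCounter();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1_000_000; j++) counter.increment();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(counter.get()); // expected: 4000000
    }
}
```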


And finally, "Enterprise without brakes". Where would we be without bloody enterprise! Sergey Tsypanov deals with performance issues at Luxoft, in the Deutsche Bank domain. The talk will look at patterns that kill the performance of your applications: simple enough to be caught in code review, yet subtle enough that the IDE will not underline them in red. All the examples are based on the code of applications running in production.
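
I do not know which patterns Sergey has actually picked, but as a hypothetical example of the genre, here is the kind of code that compiles cleanly, often survives review, and quietly burns CPU: calling the regex-based String.replaceAll on a hot path for what is really a fixed literal, recompiling the pattern on every call.

```java
import java.util.Arrays;
import java.util.List;

public class SlowSanitizer {
    // Anti-pattern: String.replaceAll compiles a java.util.regex.Pattern on every
    // call, even though the "pattern" here is just a fixed literal.
    static String sanitizeSlow(String input) {
        return input.replaceAll("\r\n", " ");
    }

    // Cheaper: String.replace performs a plain literal replacement, no regex machinery.
    static String sanitizeFast(String input) {
        return input.replace("\r\n", " ");
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList("first\r\nline", "second\r\nline");
        for (String line : lines) {
            System.out.println(sanitizeFast(line));
        }
    }
}
```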


Profiling


Three talks on profiling caught my eye. The first is by Sasha Goldstein, "Linux container performance tools for JVM applications". Sasha is a serial producer of performance hardcore. Last year at JPoint he gave a great talk about using the Berkeley Packet Filter with the JVM (I strongly recommend watching the recording on YouTube), and it was only a matter of time before he got around to a detailed look at containerization. The world is moving to clouds and Docker, which in turn brings us many new problems. As it turns out, most low-level debugging and profiling tools, when applied to containers, grow all sorts of quirks and rough edges. Sasha will walk through the main scenarios (CPU load, I/O responsiveness, access to shared databases, etc.) through the prism of modern tools on the GNU/Linux platform, including BCC and perf.


"We profile up to microseconds and processor instructions" - the second profiling report by Sergey Melnikov from Raiffeisenbank. Interestingly, before taking up the low-latency code in Java, he worked at Intel as a performance engineer of compilers for C / C ++ / FORTRAN languages. This report will also perf! :-) There will be more about the hardware features of the processors and Intel Processor Trace technology, which allows you to take the next step in profiling accuracy and to reconstruct the execution of the program section. There are quite a few such reports (for example, you can find the Andi Kleen report at the Tracing Summit 2015), they usually leave a lot of questions and do not shine with practicality in relation to Java. Here we do not just have a person who has visited both worlds (both Intel and Java in the bank), you can still find him in the discussion area and ask uncomfortable questions.


The third talk is "Universal profilers and where they live". It is given by Ivan Uglansky, one of the developers of Excelsior JET (a certified Java SE implementation based on optimizing AOT compilation), who works on the runtime: GC, class loading, multithreading support, profiling, and so on. The gist of the talk is that they recently needed to profile applications running on Excelsior JET. This had to work on all supported systems and architectures, without recompiling the application, and with acceptable overhead. It turned out that the usual profiling methods do not satisfy all these requirements at once, so they had to invent something of their own. Ivan will explain which profiling methods are suitable for AOT, what you can afford if you profile code from inside the JVM, and what you have to pay for a profiler's universality.
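
As a toy illustration of what "profiling from inside the JVM" can mean at its most basic: without any VM support at all you can run a sampling thread that periodically grabs Thread.getAllStackTraces(). It is portable, but it samples only at safepoints and is not cheap, which is exactly the kind of trade-off I expect Ivan to talk about. The sketch below is mine, not from the talk.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Naive in-process sampling profiler: every 10 ms, record the top frame of each
// RUNNABLE thread. Biased (samples land on safepoints) and not free, but portable.
public class NaiveSampler {
    private final Map<String, Integer> hits = new ConcurrentHashMap<>();
    private volatile boolean running = true;

    void samplingLoop() throws InterruptedException {
        while (running) {
            for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
                StackTraceElement[] stack = e.getValue();
                if (e.getKey().getState() == Thread.State.RUNNABLE && stack.length > 0) {
                    hits.merge(stack[0].toString(), 1, Integer::sum);
                }
            }
            Thread.sleep(10);
        }
    }

    public static void main(String[] args) throws Exception {
        NaiveSampler sampler = new NaiveSampler();
        Thread samplerThread = new Thread(() -> {
            try { sampler.samplingLoop(); } catch (InterruptedException ignored) { }
        });
        samplerThread.setDaemon(true);
        samplerThread.start();

        // Some work to observe.
        double acc = 0;
        for (int i = 1; i < 50_000_000; i++) acc += Math.log(i);
        System.out.println("work result: " + acc);

        sampler.running = false;
        sampler.hits.forEach((frame, count) -> System.out.println(count + "\t" + frame));
    }
}
```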


Custom Runtime


A runtime, in short, is the thing that takes your high-level code in a JVM language, turns it into low-level code (machine code, for example) and controls the execution process. Usually it involves some kind of assembler, compiler, interpreter and virtual machine. The features of the runtime determine the performance characteristics of your application workloads.


The first thing that stands out in the program is the Alibaba talk about their JDK. Who hasn't dreamed of making their own JDK, with blackjack and coroutines? But everyone understands that this is hellish work, pain and suffering. At Alibaba, though, they pulled it off. Here is what they have:



Yes, we (the general public using OpenJDK) will get Project Loom eventually. But there is a nuance: in Loom, coroutines are secondary to the main goal, fibers. Fibers require delimited continuations, but it is far from certain that continuations will appear in the public API soon, or ever. At Alibaba, it seems, all of this has already been built in-house.


As far as I understand, this is not a talk in the "use our closed proprietary JDK" category, but rather a guide for people who plan to develop similar features themselves, or to cope with their absence in OpenJDK. For example, profiling tools depend on the areas being profiled and on the workloads, which differ for every product. The speaker from Alibaba will talk not so much about his tools as about the workload classification process that steers the development of such tools in the right direction.


By the way, since we have started talking about coroutines: they appeared in Kotlin in version 1.1 (in experimental status), and there will be a talk about them by Roman Elizarov from JetBrains. Roman will cover the evolution of approaches to asynchronous programming, their differences and similarities. Plus, we will hear the official position on why what Kotlin has now is better than the familiar async/await.


Speaking of which, the Alibaba JDK is not the only representative of unusual ecosystems. Of course, there is a talk about Azul Zing, and as many as two about OpenJ9 (one, two).


All talks about the internals of Azul products carry a certain shade of sadness for me, because I have never had the chance to join the circle of the chosen who use their excellent but very expensive solutions. So for me their current talk has more theoretical value, as a source of information about technologies competing with our native OpenJDK. The AOT theme is now actively developing in OpenJDK: JDK 9 already shipped a built-in AOT compiler (for 64-bit Linux only), there is SubstrateVM, and things will only get better, up to the implementation of Project Metropolis. Unfortunately, not everything is simple with AOT in Java; not all of the modern infrastructure plays well with it (remember Nikita Lipsky's epic talk about the awkwardly designed OSGi?). Azul already has a ready-made AOT-style solution called ReadyNow, built into their Zing, which tries to combine the best qualities of JIT and AOT, and that is what this talk will be about.


As was rightly noted in the comments, the speaker deserves an introduction. In short, Douglas Hawkins is a lead developer at Azul; he has been involved with Java for 15 years and has worked in a variety of fields: bioinformatics, finance and retail. The longer he lived in the Java world, the deeper he went into the guts of the JVM, and at some point he simply went to Azul to work on Zing and became the lead developer of that very ReadyNow. In other words, this is a person who has been on both sides of the barricade, both as an application developer and as a systems developer, and as a result he has truly unique experience.


On the other hand, OpenJ9 can be downloaded right now. Since IBM open-sourced its virtual machine to the Eclipse Foundation, there has been a lot of hype around it. In the mass consciousness there is a certain set of ideas and facts: that it can replace HotSpot, that the class libraries from OpenJDK can simply be reused, that it should reduce memory consumption, that it can even offload some work to the GPU... and that is about it. (By the way, GPUs are still widely perceived as black magic; fortunately, at the last Joker, Dmitry Alexandrov gave an excellent talk "Java and GPU: where are we now?". There is no video yet, but you can look at the slides.)


The first talk, "The Eclipse OpenJ9 JVM: a deep dive!", is given by Tobi Ajila, a J9 developer at IBM working on Valhalla and Panama, with a long track record that includes interpreter enhancements, JVMTI and lambdas. Apparently, there will be a description of technical features of OpenJ9 that can help speed up your cloud solutions and other performance-critical pieces. The second talk, "Deep dive into the Eclipse OpenJ9 GC technologies", is given by the garbage collector architect of OpenJ9, also from IBM; it promises a very pragmatic story about the four garbage collection policies, where each should be used, and how it all works under the hood. I hope that after these talks the aura of magic around OpenJ9 will fade a little.
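
A small practical aside while waiting for the GC talk: when experimenting with different collection policies (OpenJ9 selects them with the -Xgcpolicy option; the exact value in the comment below is only an illustration), it helps to confirm which collectors the running JVM actually reports. The snippet uses the standard GarbageCollectorMXBean API, so it works on HotSpot as well, just with different collector names.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Prints the garbage collectors the current JVM exposes through JMX.
// Illustrative launch on OpenJ9: java -Xgcpolicy:gencon GcInfo
public class GcInfo {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%-25s collections=%d time=%dms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```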


Conclusion


Over these two days you can attend 12 talks. Of these, 3 are keynotes common to everyone, so you have to make a choice 9 times. If you pick talks only from this list, you can fill 7 of those 9 slots; the remaining two are a matter of taste (you do need some breadth on "universal" topics too, don't you?). Some talks clash with each other (the hardest choice is at 13:45 on the first day, between Sasha Goldstein's container profiling, Nikita Koval's hardware transactional memory and Roman Elizarov's Kotlin coroutines). My impression is that, from the point of view of someone interested in performance and runtime, the program is put together well enough to be interesting from beginning to end. See you at the conference!


I remind you that less than a month remains until JPoint 2018. Tickets can still be purchased on the official website.



Source: https://habr.com/ru/post/351078/

