
JVM Memory Allocation

Hello! We have timed the translation of today's material to coincide with the launch of a new stream of the "Java Developer" course, which starts tomorrow. Well, let's begin.

The JVM can be a complex beast. Fortunately, much of that complexity is hidden under the hood, and we, as application developers and those responsible for deployment, often don't need to worry about it too much. However, with the growing popularity of deploying applications in containers, it is worth paying attention to how the JVM allocates memory.


Two types of memory

The JVM divides memory into two main categories: heap and non-heap. The heap is the part of JVM memory that developers are most familiar with: objects created by the application are stored there and remain until they are reclaimed by the garbage collector. Typically, the heap size an application uses varies with the current load.

Memory outside the heap is divided into several areas. In HotSpot, you can use the Native Memory Tracking (NMT) mechanism to explore these areas. Note that although NMT does not track all native memory usage (for example, native memory allocated by third-party code is not monitored), its capabilities are sufficient for most typical Spring applications. To use NMT, start the application with the -XX:NativeMemoryTracking=summary flag and then run jcmd VM.native_memory summary to see memory usage information.
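For concreteness, here is roughly what those two steps might look like on the command line (app.jar stands in for your application, and <pid> for the JVM's process id, which you can find with jps):

    # Start the application with native memory tracking enabled
    java -XX:NativeMemoryTracking=summary -jar app.jar

    # In another terminal, query the running JVM for the NMT summary
    jcmd <pid> VM.native_memory summary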

Let's take a look at NMT using our old friend Petclinic. The diagram below shows JVM memory usage according to NMT data (minus NMT's own overhead) when running Petclinic with a maximum heap size of 48 MB (-Xmx48M):

[Diagram: JVM memory usage broken down by NMT area]

As you can see, off-heap memory accounts for most of the memory used by the JVM, with heap memory making up only one sixth of the total. In this case, the heap is approximately 44 MB (of which 33 MB was in use immediately after garbage collection). Off-heap memory usage totaled 223 MB.

Native memory areas

Compressed class space: used to store information about loaded classes. Limited by MaxMetaspaceSize. A function of the number of classes that have been loaded.

Translator's Note

For some reason, the author writes about the "Compressed class space" rather than the entire "Class" area. The "Compressed class space" is part of the "Class" area, and the MaxMetaspaceSize parameter limits the size of the entire "Class" area, not just the "Compressed class space". To limit the "Compressed class space", use the CompressedClassSpaceSize parameter.

From here:
If UseCompressedOops is turned on and UseCompressedClassPointers is used, then two logically different areas of native memory are used for class metadata...
A region is allocated for the compressed class pointers (the 32-bit offsets). The size of this region can be set with CompressedClassSpaceSize and is 1 gigabyte (GB) by default...
The MaxMetaspaceSize applies to the sum of the committed compressed class space and the space for the other class metadata.
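To make the parameters above concrete, here is a sketch of how these limits might be set on the command line (the sizes are arbitrary examples, not recommendations):

    # Cap the entire Class area (Metaspace), which includes
    # the compressed class space
    java -XX:MaxMetaspaceSize=128m -jar app.jar

    # Cap only the compressed class pointer region
    java -XX:CompressedClassSpaceSize=256m -jar app.jar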



Differences

Compared to the heap, memory outside the heap changes less under load. Once the application has loaded all the classes it will use and the JIT compiler has fully warmed up, things settle into a steady state. For Compressed class space usage to decrease, the class loader that loaded the classes must be collected by the garbage collector. This was common in the past, when applications were deployed in servlet containers or application servers (the application class loader was garbage collected when the application was undeployed), but it rarely happens with modern approaches to application deployment.
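To illustrate the point about class loaders, the sketch below (the jar path and class name are hypothetical) loads a class through a dedicated class loader and then drops every reference to it; only after that is the class metadata eligible for reclamation:

    import java.net.URL;
    import java.net.URLClassLoader;

    public class ClassLoaderDemo {
        public static void main(String[] args) throws Exception {
            // /tmp/plugin.jar and com.example.Plugin are hypothetical
            URLClassLoader loader = new URLClassLoader(
                    new URL[] { new URL("file:///tmp/plugin.jar") });
            Class<?> plugin = loader.loadClass("com.example.Plugin");
            System.out.println("Loaded: " + plugin.getName());

            // Drop all references to the loader and its classes. Only then
            // can the garbage collector reclaim the class metadata,
            // including its share of the Compressed class space.
            loader.close();
            plugin = null;
            loader = null;
            System.gc(); // a hint only; reclamation is not guaranteed
        }
    }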

JVM configuration

Configuring the JVM to make efficient use of the available RAM is not easy. If you start the JVM with -Xmx16M and expect that no more than 16 MB of memory will be used, an unpleasant surprise awaits you.

An interesting area of JVM memory is the JIT code cache. By default, the HotSpot JVM will use up to 240 MB for it. If the code cache is too small, the JIT may not have enough space to store compiled code, and performance will suffer as a result. If the cache is too large, memory may be wasted. When sizing the code cache, it is important to consider its effect on both memory usage and performance.
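As a quick illustration, the upper bound of the code cache can be adjusted with the ReservedCodeCacheSize flag (the value below is an arbitrary example):

    # Lower the JIT code cache ceiling from the 240 MB default
    java -XX:ReservedCodeCacheSize=128m -jar app.jar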

When running in a Docker container, recent Java versions are aware of the container's memory limits and try to size JVM memory accordingly. Unfortunately, a large amount of memory is often allocated off-heap while too little goes to the heap. Suppose you have an application running in a container with 2 processors and 512 MB of available memory. You want to handle more load, so you increase the number of processors to 4 and the memory to 1 GB. As discussed above, heap size usually varies with the load, while memory outside the heap changes significantly less. We would therefore expect most of the additional 512 MB to go to the heap to cope with the increased load. Unfortunately, by default the JVM will not do this and will divide the additional memory more or less evenly between heap and non-heap memory.
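One way to counteract this, sketched below under the assumption of JDK 10+ (or 8u191+, where the flag was backported) and a hypothetical image name, is to tell the JVM explicitly what share of the container's memory the heap may use:

    # 4 CPUs, 1 GB of container memory; let the heap use up to 75%
    # of it instead of the more conservative default split
    docker run --cpus=4 -m 1g my-app-image \
        java -XX:MaxRAMPercentage=75.0 -jar app.jar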

Fortunately, the Cloud Foundry team has extensive knowledge of JVM memory allocation. If you push applications to Cloud Foundry, the buildpack will automatically apply this knowledge for you. If you are not using Cloud Foundry, or would like to understand more about how to configure the JVM, we recommend reading the description of version three of the Java buildpack's memory calculator.

What does this mean for Spring

The Spring team spends a lot of time thinking about performance and memory usage, considering memory use both in and outside the heap. One way to limit off-heap memory usage is to make parts of the framework as generic as possible. An example of this is using reflection to create and inject dependencies into your application's beans. Thanks to reflection, the amount of framework code remains constant regardless of the number of beans in your application. To optimize startup time, we use a heap-based cache that is cleared once startup is complete. Heap memory can easily be reclaimed by the garbage collector, leaving more memory available to your application.
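As a minimal sketch of the reflection idea (not Spring's actual implementation), the generic code path below injects a dependency into a named field of a bean; the same path works for any bean and any field, so the framework code does not grow with the number of beans:

    import java.lang.reflect.Field;

    public class TinyInjector {

        static class Repository { }

        static class Service {
            private Repository repository; // injected reflectively
        }

        public static void main(String[] args) throws Exception {
            Service service = new Service();
            Repository repository = new Repository();

            // One generic code path for every bean and field:
            // look the field up by name and set it reflectively.
            Field field = Service.class.getDeclaredField("repository");
            field.setAccessible(true);
            field.set(service, repository);

            System.out.println("Injected: " + field.get(service));
        }
    }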

As usual, we look forward to your comments on the material.

Source: https://habr.com/ru/post/445312/

