
Node.js: managing the memory available to applications running in containers

When running Node.js applications in Docker containers, traditional memory settings do not always work as expected. This article, a translation of which we are publishing today, explores why that is. It also offers practical guidelines for managing the memory available to Node.js applications running in containers.



Review of recommendations


Suppose a Node.js application runs in a container with a memory limit set. In Docker, the --memory option is used to set this limit; container orchestration systems offer similar settings. In this case it is recommended to launch the Node.js application with the --max-old-space-size option. This tells the platform how much memory is available to it; the value must be less than the limit set at the container level.

When a Node.js application runs inside a container, set the amount of memory available to it according to the application's peak active memory usage. This applies when the container's memory limits can be configured.
Now let's talk about the problem of using memory in containers in more detail.

Docker memory limit


By default, containers have no resource limits and can use as much memory as the operating system allows. The docker run command has command-line options for limiting the memory or CPU resources available to a container.

The container launch command may look like this:

 docker run --memory <x><y> --interactive --tty <imagename> bash 

Note that the value passed to --memory can include a unit suffix: b, k, m or g (bytes, kilobytes, megabytes, gigabytes).


Here is an example of a container launch command:

 docker run --memory 1000000b --interactive --tty <imagename> bash 

Here the memory limit is set to 1000000 bytes.

To check the memory limit set at the container level, run the following command inside the container:

 cat /sys/fs/cgroup/memory/memory.limit_in_bytes 

Now let's look at how the system behaves when the memory limit of the Node.js application is specified with the --max-old-space-size flag, and that limit matches the limit set at the container level.

The "old space" in the flag's name refers to one of the segments of the V8-managed heap, the one where "old" JavaScript objects live. This flag, without going into details that we will touch on below, controls the maximum heap size. Details about the Node.js command-line flags can be found in the Node.js documentation.

In general, when an application tries to use more memory than is available in the container, it is terminated.

In the following example (the application file is called test-fatal-error.js ), MyRecord objects are pushed onto the list array every 10 milliseconds. This leads to uncontrolled heap growth, simulating a memory leak.

 'use strict';
 
 const list = [];
 
 setInterval(() => {
   const record = new MyRecord();
   list.push(record);
 }, 10);
 
 function MyRecord() {
   var x = 'hii';
   this.name = x.repeat(10000000);
   this.id = x.repeat(10000000);
   this.account = x.repeat(10000000);
 }
 
 setInterval(() => {
   console.log(process.memoryUsage());
 }, 100);

Please note that all the example programs considered here are packaged in a Docker image that can be downloaded from Docker Hub:

 docker pull ravali1906/dockermemory 

You can use this image for your own experiments.

In addition, you can pack the application into a Docker container, build the image, and run it with a memory limit:

 docker run --memory 512m --interactive --tty ravali1906/dockermemory bash 

Here ravali1906/dockermemory is the name of the image.

Now you can start the application by specifying a memory limit for it that exceeds the container limit:

 $ node --max_old_space_size=1024 test-fatal-error.js
 { rss: 550498304,
   heapTotal: 1090719744,
   heapUsed: 1030627104,
   external: 8272 }
 Killed

Here the --max_old_space_size flag sets a memory limit, specified in megabytes. The process.memoryUsage() method provides information about memory usage; its values are expressed in bytes.

At some point, the application is forcibly terminated. This happens when the amount of memory it uses crosses a certain boundary. What is that boundary? What limits on the amount of memory can we talk about?

The expected behavior of an application launched with the --max-old-space-size flag


By default, the maximum heap size in Node.js (up to version 11.x) is 700 MB on 32-bit platforms and 1400 MB on 64-bit ones. Details on how these values are set can be found in the Node.js/V8 sources.

In theory, if you set a memory limit with --max-old-space-size that exceeds the container's memory limit, you can expect the application to eventually be terminated by the Linux OOM killer protection mechanism.

In reality, this may not happen.

The real behavior of an application launched with the --max-old-space-size flag


Immediately after launch, the application does not allocate all the memory whose limit is specified with --max-old-space-size . The JavaScript heap size depends on the needs of the application. The amount of memory the application actually uses can be judged from the heapUsed field of the object returned by process.memoryUsage() . This is, in fact, the memory allocated in the heap for objects.
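This on-demand growth is easy to observe: heapUsed rises only as the application actually creates objects. A small sketch:

```javascript
// Demonstrate that the heap grows on demand rather than being
// pre-allocated up to --max-old-space-size.
const before = process.memoryUsage().heapUsed;

// Allocate a million small heap objects and keep them reachable.
const list = [];
for (let i = 0; i < 1e6; i++) list.push({ n: i });

const after = process.memoryUsage().heapUsed;
console.log(`heapUsed grew by ~${Math.round((after - before) / (1024 * 1024))} MB`);
```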

From this we might conclude that the application will be forcibly terminated if the heap size becomes greater than the limit set with the --memory flag when the container was started.

But in reality this may not happen either.

When profiling resource-intensive Node.js applications that run in containers with a given memory limit, the following patterns can be observed:

  1. The OOM killer is triggered much later than the moment when heapTotal and heapUsed significantly exceed the memory limits.
  2. The OOM killer does not react to the limits being exceeded at all.

Explaining the behavior of Node.js applications in containers


The container monitors one important metric of the applications it runs: RSS (resident set size). RSS is the part of the application's virtual memory that is actually resident in physical memory, that is, the active portion of the memory allocated to the application.

Not all memory allocated to an application is active. "Allocated" memory is not necessarily physically backed until the process actually starts using it. Moreover, in response to memory requests from other processes, the operating system can swap out inactive parts of the application's memory to the paging file and hand the freed space to other processes. When the application needs those fragments again, they are read back from the paging file into physical memory.

RSS reflects the amount of memory that is active and available to the application in its address space. It is RSS that drives the decision to forcibly terminate the application.
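process.memoryUsage() exposes both views side by side: rss for the process as a whole, plus the V8 heap figures, with external covering memory (such as Buffers) allocated outside the JS heap. For example:

```javascript
// Compare RSS (physical memory resident for the whole process) with
// the V8 heap figures; rss also covers stacks, code, and off-heap data.
const { rss, heapTotal, heapUsed, external } = process.memoryUsage();
const mb = (n) => Math.round(n / (1024 * 1024));

console.log({
  rss: mb(rss),
  heapTotal: mb(heapTotal),
  heapUsed: mb(heapUsed),
  external: mb(external),
});
```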

Demonstration


Example 1. An application that allocates memory for a buffer


In the following example, buffer_example.js , a program is shown that allocates memory for the buffer:

 const buf = Buffer.alloc(+process.argv[2] * 1024 * 1024)
 console.log(Math.round(buf.length / (1024 * 1024)))
 console.log(Math.round(process.memoryUsage().rss / (1024 * 1024)))

To make the amount of memory allocated by the program exceed the limit set at container startup, first start the container with the following command:

 docker run --memory 1024m --interactive --tty ravali1906/dockermemory bash 

After that, run the program:

 $ node buffer_example.js 2000
 2000
 16

As you can see, the system did not terminate the program, even though the memory allocated by it exceeds the container limit. The reason is that the program does not touch all of the allocated memory. The RSS value is very small and does not exceed the container's memory limit.
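The difference between allocating pages and touching them can be reproduced directly: RSS barely moves after Buffer.alloc , then jumps once every page is written. A sketch (the 128 MB size is arbitrary):

```javascript
// Allocation is cheap until pages are touched: compare RSS before and
// after filling a large buffer with data.
const mb = (n) => Math.round(n / (1024 * 1024));

const buf = Buffer.alloc(128 * 1024 * 1024); // zeroed pages, mostly not resident yet
const rssAfterAlloc = process.memoryUsage().rss;

buf.fill('x'); // touch every page
const rssAfterFill = process.memoryUsage().rss;

console.log(`after alloc: ${mb(rssAfterAlloc)} MB, after fill: ${mb(rssAfterFill)} MB`);
```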

Example 2. An application that fills the buffer with data


In the following example, buffer_example_fill.js , the memory is not just allocated, but also filled with data:

 const buf = Buffer.alloc(+process.argv[2] * 1024 * 1024, 'x')
 console.log(Math.round(buf.length / (1024 * 1024)))
 console.log(Math.round(process.memoryUsage().rss / (1024 * 1024)))

Run the container:

 docker run --memory 1024m --interactive --tty ravali1906/dockermemory bash 

After that, run the application:

 $ node buffer_example_fill.js 2000
 2000
 984

As you can see, even now the application is not terminated! Why? When the amount of active memory reaches the limit set at container startup, and there is room in the paging file, some of the process's old memory pages are moved to the paging file, and the freed memory is made available to the same process. By default, Docker allocates a swap space equal to the limit set with the --memory flag. With this in mind, the process effectively has 2 GB of memory: 1 GB of active memory and 1 GB in the paging file. In other words, because the application's memory can be temporarily moved to the paging file, the RSS value stays within the container limit, and the application keeps running.

Example 3. An application that fills a buffer with data, running in a container where the paging file is disabled


Here is the code we will experiment with here (this is the same buffer_example_fill.js file):

 const buf = Buffer.alloc(+process.argv[2] * 1024 * 1024, 'x')
 console.log(Math.round(buf.length / (1024 * 1024)))
 console.log(Math.round(process.memoryUsage().rss / (1024 * 1024)))

This time we will launch the container, explicitly setting up the features of working with the paging file:

 docker run --memory 1024m --memory-swap=1024m --memory-swappiness=0 --interactive --tty ravali1906/dockermemory bash 

Run the application:

 $ node buffer_example_fill.js 2000
 Killed

Notice the Killed message. When the --memory-swap flag is set equal to --memory , the container is told not to use the paging file. In addition, by default, the kernel of the host operating system can swap out a certain amount of the anonymous memory pages used by the container; setting --memory-swappiness to 0 disables this feature. As a result, swap is not used inside the container, and the process is terminated as soon as its RSS exceeds the container's memory limit.

General recommendations


When a Node.js application is launched with a --max-old-space-size value that exceeds the memory limit set at container startup, it may seem that Node.js is "ignoring" the container limit. But as the previous examples show, the obvious reason for this behavior is that the application simply does not use the entire heap volume allowed by the --max-old-space-size flag.

Remember that an application will not always behave the same way when it uses more memory than is available in the container. Why? Because the process's active memory (RSS) is influenced by many external factors the application itself cannot control: the load on the system, the characteristics of the environment, the behavior of the application itself, the level of concurrency in the system, the operating system scheduler, the garbage collector, and so on. These factors can also vary from one run of the application to the next.

Recommendations for configuring the Node.js heap size for cases in which this parameter can be controlled, but container-level memory limits cannot

If you can control the application's heap size but not the container limit, set --max-old-space-size so that the heap fits within the container's memory limit, leaving headroom for the rest of the process.
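As a sketch of this recommendation (the 400 MB figure is an illustrative rule of thumb, not an official Node.js guideline): with a 512 MB container limit, leave headroom for stacks, code, and off-heap Buffers:

```shell
# Container limit 512 MB; cap the V8 old space below it so that
# non-heap memory (stacks, code, Buffers) still fits under the limit.
docker run --memory 512m --interactive --tty <imagename> \
  node --max-old-space-size=400 app.js
```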



Recommendations for configuring container memory limits for cases where this parameter can be controlled, and the parameters of the Node.js application cannot

If you can control the container limit but not the application's settings, set the limit according to the application's peak active memory (RSS) usage, measured under realistic load.
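To find that peak, one approach is to run the application without strict limits first and watch its actual consumption, for example with docker stats (the container name is a placeholder):

```shell
# Watch the container's live memory usage to find its peak under
# realistic load before choosing a --memory value.
docker stats --format "table {{.Name}}\t{{.MemUsage}}" <containername>
```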



Results


In Node.js 12.x, some of the problems discussed here are addressed by adaptive heap sizing, which adjusts the heap according to the amount of available RAM. This mechanism also works when Node.js applications run in containers. But the configuration can still differ from the defaults, for example when the application is launched with the --max_old_space_size flag; in such cases everything said above remains relevant. This means that whoever runs Node.js applications in containers should pay close attention to memory settings. In addition, knowing the default memory limits, which are rather conservative, lets you improve application performance by deliberately changing them.

Dear readers! Have you encountered memory shortage problems when running Node.js applications in Docker containers?



Source: https://habr.com/ru/post/454522/

