
Vertical Scaling in Java Cloud

I want to share the results of internal tests of automatic vertical memory scaling in Jelastic, a cloud hosting platform for Java applications.
The test simulates the behavior of a web application running on a Tomcat server. Measurements were taken under varying load, which was changed by increasing the number of connected clients. Readings were collected through the application owner's admin panel.


Server implementation


For each new user, a 10 MB byte array is stored in the session.

```jsp
<html>
<body>
<h1>Hello!</h1>
<%
    byte[] data = (byte[]) session.getAttribute("test-data");
    if (data == null) {
        data = new byte[1024 * 1024 * 10]; // 10 MB per session
    }
    request.getSession().setAttribute("test-data", data);
%>
Your session ID <%= session.getId() %><br/>
Your session data size <%= data.length %> bytes<br/>
</body>
</html>
```


The arrival of clients


The graph shows three waves of clients: 100, 300, and 1,000 respectively.
  1. The first wave of 100 clients (1 thread at a time) consumes roughly 1.3 GB. After the session timeout (10 minutes) expires, all user session data is unloaded and the memory is returned to the platform.
  2. The second wave of 300 clients (in 10 threads) consumes about 3.5 GB. Again, after the session timeout the memory is returned to the platform.
  3. The third wave of 1,000 clients (in 100 threads) consumes about 11 GB. After the session timeout, the memory is returned to the platform.
As the screenshot shows, the application takes from the platform only what it actually needs, and memory that is no longer in use is returned. Billing is based on the resources actually consumed.
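The wave pattern above can be emulated with a short sketch. This is not Jelastic's actual test harness, just a minimal plain-Java illustration (class and method names are mine): each simulated client stores a 10 MB array under its session ID, and clearing the map stands in for the session timeout.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Minimal stand-in for the Tomcat session store: one 10 MB array per client.
public class WaveSimulation {
    static final int PER_CLIENT_BYTES = 10 * 1024 * 1024; // 10 MB, as in the JSP

    static final Map<String, byte[]> sessions = new ConcurrentHashMap<>();

    // Run one wave: `clients` requests spread over `threads` worker threads.
    static void runWave(int clients, int threads) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < clients; i++) {
            final String sessionId = "session-" + i;
            pool.submit(() -> sessions.computeIfAbsent(
                    sessionId, id -> new byte[PER_CLIENT_BYTES]));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }

    // Total raw session payload in MB (JVM and Tomcat overhead come on top).
    static long payloadMb() {
        return sessions.values().stream().mapToLong(a -> a.length).sum()
                / (1024 * 1024);
    }

    public static void main(String[] args) throws InterruptedException {
        // The article's waves were 100, 300 and 1,000 clients; a small one here:
        runWave(10, 2);
        System.out.println("payload: " + payloadMb() + " MB");
        sessions.clear(); // emulate the 10-minute session timeout
    }
}
```

At 10 MB per client the raw payload of the first wave is about 1,000 MB, which lines up with the observed ~1.3 GB once JVM and Tomcat overhead are included.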

Scaling settings


In the right column, the first row shows the number 3/64 (current / limit): memory consumption in cloudlets, where a cloudlet is the minimum indivisible unit of 256 MB.
The application can scale from 1 to 64 cloudlets (from 256 MB to 16 GB), i.e. a scaling factor of x64 for this application.
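The cloudlet arithmetic is straightforward. The sketch below (the method name is mine, not a Jelastic API) shows how a memory footprint maps onto 256 MB cloudlets up to the 64-cloudlet / 16 GB ceiling:

```java
public class Cloudlets {
    static final int CLOUDLET_MB = 256; // minimum indivisible unit
    static final int LIMIT = 64;        // scaling limit for this application

    // Number of cloudlets needed to cover `usedMb` of memory, capped at LIMIT.
    static int cloudletsFor(int usedMb) {
        int needed = (usedMb + CLOUDLET_MB - 1) / CLOUDLET_MB; // round up
        return Math.min(Math.max(needed, 1), LIMIT);
    }

    public static void main(String[] args) {
        System.out.println(cloudletsFor(700));       // ~700 MB -> 3 cloudlets, the 3/64 reading
        System.out.println(cloudletsFor(16 * 1024)); // 16 GB -> the full 64-cloudlet ceiling
    }
}
```

For example, the 11 GB peak of the third wave corresponds to 44 cloudlets, still well under the 64-cloudlet limit.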

Conclusions


Although a real web application is unlikely to load 10 MB of data into memory for every user, the test still demonstrates the potential of Jelastic well. In our case, loading that much data into each session was done to speed up memory growth and to reduce the number of clients needed to exercise memory properly.

For more realistic figures, everyone who opens this post becomes a participant in a test under real conditions: the image at the top loads 200 KB into each visitor's session. If the topic is of interest, the results of the real test will be posted later. I hope the 16 GB limit is enough for Habr's readers.

UPDATE

The results of the experiment




  1. The first peak, the load testing for this article, took 11 GB of memory.
  2. The second peak, the release of this article to the front page, took 2 GB of memory.
  3. The tail, residual traffic as activity dropped off, took 0.6 GB of memory.


Given that 200 KB was loaded into memory for each unique visitor, no more than about 5,000 people were on the server within any 10-minute window at the very peak.
The results show that you really have to know how to load a server :) Load emulation drives memory up far harder than real traffic does.
In both cases, dynamic memory scaling worked, and the final price is calculated from the cloudlet units actually consumed.
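The rough arithmetic behind the 5,000-visitor estimate can be sketched as follows; this is my own back-of-the-envelope helper, assuming 200 KB of session data per unique visitor as stated above:

```java
public class PeakEstimate {
    static final long PER_VISITOR_BYTES = 200 * 1024; // 200 KB per session

    // Raw session payload, in MB, for a given number of concurrent visitors.
    static long payloadMb(int visitors) {
        return visitors * PER_VISITOR_BYTES / (1024 * 1024);
    }

    public static void main(String[] args) {
        // ~5,000 concurrent visitors -> roughly 0.95 GB of raw session data,
        // which fits inside the observed 2 GB peak once JVM, Tomcat and
        // application overhead are added on top.
        System.out.println(payloadMb(5000) + " MB");
    }
}
```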

Source: https://habr.com/ru/post/118702/
