Good day.
Almost immediately after installing and configuring CI/CD according to the instructions from the previous post, the team ran into the question of how to do integration testing properly. We already had experience running test dependencies in Docker containers, but this became problematic now that the build itself runs in a container. In this post I would like to describe two approaches to integration testing inside a container that my team came up with.
By definition, integration testing is testing that verifies how an application works together with its dependent components. Examples include databases, queues, and other services.
As part of testing, we wanted:
Based on these requirements, we immediately discarded the idea of permanent shared installations of test databases and queues, because of the problems of sharing resources between builds and the complexity of maintaining and changing such an infrastructure.
In the Java ecosystem there are quite a few libraries that launch dependencies for tests:
This approach is as simple as possible and satisfies most of the requirements described earlier, but it is not universal when it comes to adding new test dependencies (MySQL?) or using specific or multiple versions of a dependency.
Well suited for simple services with 1-2 dependencies.
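To illustrate this approach, here is a minimal sketch of such a test; the concrete choice of JUnit 5 and the H2 in-memory database is my assumption, since the original list of libraries is not reproduced here.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    import org.junit.jupiter.api.Test;

    class EmbeddedDatabaseIT {

        @Test
        void readsBackInsertedRow() throws Exception {
            // H2 starts an in-memory database inside the test JVM,
            // so no external process or container is needed.
            try (Connection connection = DriverManager.getConnection("jdbc:h2:mem:test");
                 Statement statement = connection.createStatement()) {

                statement.execute("CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(64))");
                statement.execute("INSERT INTO users VALUES (1, 'alice')");

                try (ResultSet resultSet = statement.executeQuery("SELECT name FROM users WHERE id = 1")) {
                    resultSet.next();
                    assertEquals("alice", resultSet.getString("name"));
                }
            }
        }
    }

The only requirement is that the H2 and JUnit Jupiter dependencies are on the test classpath; nothing needs to run outside the build.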
Docker is a logical way to address the shortcomings of the previous approach: you can find (or build) a Docker image for any dependency in any version. Quite possibly some of these dependencies are already launched in production from the same images.
If launching an image locally (or several via docker-compose) causes no problems, in CI there are difficulties, since the build itself takes place in a container. Although it is possible to run Docker in Docker, this is not recommended by the author of dind himself. The preferred way around this problem is to reuse the already running Docker daemon, which is often called sibling docker: the child Docker process uses /var/run/docker.sock from the parent. In the previous post this was already used to publish Docker images with the compiled application.
It was decided to use the testcontainers library as it:
Well suited for more complex services or for services with special dependency requirements.
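As a rough illustration (not taken from the example project linked below), an integration test with testcontainers and JUnit 5 might look like this; the PostgreSQL image and its version are assumptions made for the sake of the example.

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import java.sql.Connection;
    import java.sql.DriverManager;

    import org.junit.jupiter.api.Test;
    import org.testcontainers.containers.PostgreSQLContainer;
    import org.testcontainers.junit.jupiter.Container;
    import org.testcontainers.junit.jupiter.Testcontainers;

    @Testcontainers
    class PostgresIntegrationIT {

        // Testcontainers starts this container before the tests and stops it afterwards.
        // On CI it talks to the parent Docker daemon through the mounted /var/run/docker.sock.
        @Container
        static final PostgreSQLContainer<?> POSTGRES =
                new PostgreSQLContainer<>("postgres:11-alpine");

        @Test
        void connectsToDatabaseInContainer() throws Exception {
            try (Connection connection = DriverManager.getConnection(
                    POSTGRES.getJdbcUrl(), POSTGRES.getUsername(), POSTGRES.getPassword())) {
                assertTrue(connection.isValid(2));
            }
        }
    }

Swapping the dependency for another database, a message broker, or a specific version comes down to changing the container class or the image tag.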
Next, it is worth paying attention to the resource consumption of the project build, which can increase significantly after integration tests are added.
At the moment the build does not declare the required amount of memory and CPU shares, which can lead to two potential problems:
The container's resources in the pod can be limited with the following construct:
    resources:
      requests:
        cpu: 1
        memory: 512Mi
      limits:
        cpu: 1
        memory: 512Mi
JDK 9 and above already have container support: -XX:+UseContainerSupport (enabled by default), which works in combination with -XX:InitialRAMPercentage / -XX:MaxRAMPercentage.
A full example can be found here.
JDK 8 requires either update 131 or later with the -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap flags enabled, so that the available memory is read from the cgroup rather than from the host machine, or the available heap size has to be specified manually each time with -Xmx.
An example is available here.
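The linked examples are not reproduced here, but a simple way to verify that the JVM really picks up the container limits is to print its view of memory and CPU from inside the build container; the sketch below is only such a check and is not part of the original examples.

    public class ContainerLimitsCheck {

        public static void main(String[] args) {
            Runtime runtime = Runtime.getRuntime();

            // With container support enabled, maxMemory() is derived from the cgroup
            // memory limit (a percentage of the 512Mi above), not from the host RAM.
            System.out.println("Max heap, MiB:        " + runtime.maxMemory() / (1024 * 1024));

            // With container support enabled, this reflects the pod's CPU limit,
            // not the number of cores on the Kubernetes node.
            System.out.println("Available processors: " + runtime.availableProcessors());
        }
    }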
It is worth noting that Kubernetes knows nothing about the resources consumed by containers launched via testcontainers or sibling docker. To work correctly in this situation, you can reserve resources for the Maven container with all the test dependencies taken into account.
Integration testing when the build itself runs in a container is possible and is not a difficult task.
An example of an application with integration tests using testcontainers can be found here, and the configuration for running Jenkins on Kubernetes here.
Source: https://habr.com/ru/post/451588/