
JUnit in GitLab CI with Kubernetes

Even though everyone knows perfectly well that testing your software is important and necessary, and many have been doing it for a long time, Habr did not have a single recipe for wiring together such popular products in this niche as (our favorite) GitLab and JUnit. Let's fill this gap!



Introduction


First, I’ll outline the context:

What will the overall sequence of actions look like?

  1. Building the application - we will omit the description of this stage.
  2. Deploying the application into a separate namespace of the Kubernetes cluster and launching the tests.
  3. Retrieval of the artifacts and parsing of the JUnit report by GitLab.
  4. Deletion of the previously created namespace.

Now - to the implementation!

Setup


GitLab CI


Let's start with the .gitlab-ci.yaml fragment describing the deployment of the application and the test run. The listing turned out to be rather long, so it is thoroughly supplemented with comments:

```yaml
variables:
  # werf version to use
  WERF_VERSION: "1.0 beta"

.base_deploy: &base_deploy
  script:
    # Create a namespace in K8s if it does not exist yet
    - kubectl --context="${WERF_KUBE_CONTEXT}" get ns ${CI_ENVIRONMENT_SLUG} || kubectl create ns ${CI_ENVIRONMENT_SLUG}
    # Activate werf and deploy — see the documentation for details
    # (https://werf.io/how_to/gitlab_ci_cd_integration.html#deploy-stage)
    - type multiwerf && source <(multiwerf use ${WERF_VERSION})
    - werf version
    - type werf && source <(werf ci-env gitlab --tagging-strategy tag-or-branch --verbose)
    # Pass the `run_tests` flag — it will be available
    # as a variable inside the Helm templates.
    # Also set a timeout (for the tests) and pass it into the chart.
    - werf deploy --stages-storage :local
      --namespace ${CI_ENVIRONMENT_SLUG}
      --set "global.commit_ref_slug=${CI_COMMIT_REF_SLUG:-''}"
      --set "global.run_tests=${RUN_TESTS:-no}"
      --set "global.env=${CI_ENVIRONMENT_SLUG}"
      --set "global.ci_timeout=${CI_TIMEOUT:-900}"
      --timeout ${CI_TIMEOUT:-900}
  dependencies:
    - build

.test-base: &test-base
  extends: .base_deploy
  before_script:
    # Create a directory for the results, named after $CI_COMMIT_REF_SLUG
    - mkdir /mnt/tests/${CI_COMMIT_REF_SLUG} || true
    # A symlink is needed because GitLab looks for artifacts
    # only inside the build dir
    - mkdir ./tests || true
    - ln -s /mnt/tests/${CI_COMMIT_REF_SLUG} ./tests/${CI_COMMIT_REF_SLUG}
  after_script:
    # Remove the release together with its namespace after the Job finishes
    # (no matter whether it succeeded or failed)
    - type multiwerf && source <(multiwerf use ${WERF_VERSION})
    - werf version
    - type werf && source <(werf ci-env gitlab --tagging-strategy tag-or-branch --verbose)
    - werf dismiss --namespace ${CI_ENVIRONMENT_SLUG} --with-namespace
  # Do not fail the whole pipeline if the tests fail
  allow_failure: true
  variables:
    RUN_TESTS: 'yes'
    # Kube context used by werf
    # (https://werf.io/how_to/gitlab_ci_cd_integration.html#infrastructure)
    WERF_KUBE_CONTEXT: 'admin@stage-cluster'
  tags:
    # Run only on runners tagged `werf-runner`
    - werf-runner
  artifacts:
    # Save the artifacts so the results stay available afterwards
    paths:
      - ./tests/${CI_COMMIT_REF_SLUG}/*
    # How long to keep them
    expire_in: 7 day
    # Important: this is what GitLab will parse as a JUnit report
    reports:
      junit: ./tests/${CI_COMMIT_REF_SLUG}/report.xml

# For clarity, only the build and test stages are listed here —
# in a real pipeline there may be more
stages:
  - build
  - tests

build:
  stage: build
  script:
    # Build — see the werf documentation for details
    # (https://werf.io/how_to/gitlab_ci_cd_integration.html#build-stage)
    - type multiwerf && source <(multiwerf use ${WERF_VERSION})
    - werf version
    - type werf && source <(werf ci-env gitlab --tagging-strategy tag-or-branch --verbose)
    - werf build-and-publish --stages-storage :local
  tags:
    - werf-runner
  except:
    - schedules

run tests:
  <<: *test-base
  environment:
    # The environment "name" matches the namespace
    # (https://docs.gitlab.com/ce/ci/variables/predefined_variables.html)
    name: tests-${CI_COMMIT_REF_SLUG}
  stage: tests
  except:
    - schedules
```
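For reference, GitLab's `reports: junit` keyword expects a JUnit-style XML file. The following sketch writes a minimal hypothetical report.xml (the suite and test names are made up for illustration, not taken from the pipeline above) and counts the recorded test cases:

```shell
# Create a directory mirroring the artifacts layout used in the pipeline
mkdir -p ./tests/demo

# A minimal JUnit-style report with one passing and one failing case
cat > ./tests/demo/report.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<testsuite name="demo" tests="2" failures="1" errors="0" time="0.12">
  <testcase classname="app.sum" name="adds numbers" time="0.05"/>
  <testcase classname="app.sum" name="handles NaN" time="0.07">
    <failure message="expected 3, got NaN">AssertionError: expected 3, got NaN</failure>
  </testcase>
</testsuite>
EOF

# Count the test cases in the report
grep -c '<testcase' ./tests/demo/report.xml   # → 2
```

Any runner that can emit this format (jest-junit for NodeJS, for instance) will produce a report GitLab can render in the merge request.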

Kubernetes


Now, in the .helm/templates directory, create a YAML file with the Job - tests-job.yaml - to run the tests, along with the Kubernetes resources it needs. See the explanations after the listing:

```yaml
{{- if eq .Values.global.run_tests "yes" }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: tests-script
data:
  tests.sh: |
    echo "======================"
    echo "${APP_NAME} TESTS"
    echo "======================"

    cd /app
    npm run test:ci
    cp report.xml /app/test_results/${CI_COMMIT_REF_SLUG}/

    echo ""
    echo ""
    echo ""

    chown -R 999:999 /app/test_results/${CI_COMMIT_REF_SLUG}
---
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Chart.Name }}-test
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-weight": "2"
    "werf/watch-logs": "true"
spec:
  activeDeadlineSeconds: {{ .Values.global.ci_timeout }}
  backoffLimit: 1
  template:
    metadata:
      name: {{ .Chart.Name }}-test
    spec:
      containers:
      - name: test
        command: ['bash', '-c', '/app/tests.sh']
{{ tuple "application" . | include "werf_container_image" | indent 8 }}
        env:
        - name: env
          value: {{ .Values.global.env }}
        - name: CI_COMMIT_REF_SLUG
          value: {{ .Values.global.commit_ref_slug }}
        - name: APP_NAME
          value: {{ .Chart.Name }}
{{ tuple "application" . | include "werf_container_env" | indent 8 }}
        volumeMounts:
        - mountPath: /app/test_results/
          name: data
        - mountPath: /app/tests.sh
          name: tests-script
          subPath: tests.sh
      tolerations:
      - key: dedicated
        operator: Exists
      - key: node-role.kubernetes.io/master
        operator: Exists
      restartPolicy: OnFailure
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: {{ .Chart.Name }}-pvc
      - name: tests-script
        configMap:
          name: tests-script
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Chart.Name }}-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
  storageClassName: {{ .Chart.Name }}-{{ .Values.global.commit_ref_slug }}
  volumeName: {{ .Values.global.commit_ref_slug }}
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .Values.global.commit_ref_slug }}
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Mi
  local:
    path: /mnt/tests/
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kube-master
  persistentVolumeReclaimPolicy: Delete
  storageClassName: {{ .Chart.Name }}-{{ .Values.global.commit_ref_slug }}
{{- end }}
```

What resources are described in this configuration? On deploy, a unique namespace is created for the project (as already specified in .gitlab-ci.yaml - tests-${CI_COMMIT_REF_SLUG}), and the following are rolled out into it:

  1. a ConfigMap with the test script;
  2. a Job describing the pod, with the command directive that actually runs the tests;
  3. a PV and PVC for storing the test data.
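For completeness, the chart should define defaults for the global values these templates rely on. A hypothetical values.yaml sketch (the defaults below are assumptions; in the pipeline they are overridden via `werf deploy --set`):

```yaml
# Hypothetical .helm/values.yaml defaults for the globals used above
global:
  run_tests: "no"       # switches the test resources on and off
  env: ""               # CI environment slug (CI_ENVIRONMENT_SLUG)
  commit_ref_slug: ""   # branch/tag slug, used in PV/PVC names
  ci_timeout: 900       # activeDeadlineSeconds for the test Job
```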

Pay attention to the opening condition with if at the beginning of the manifest: accordingly, the other YAML files of the Helm chart with the application must be wrapped in the inverse construction so that they are not deployed during testing. That is:

```yaml
{{- if ne .Values.global.run_tests "yes" }}
---
# ... the application's resources ...
{{- end }}
```
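For example, a regular application manifest wrapped this way might look as follows (the Deployment below is a schematic illustration, not taken from the article's chart):

```yaml
{{- if ne .Values.global.run_tests "yes" }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Chart.Name }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Chart.Name }}
  template:
    metadata:
      labels:
        app: {{ .Chart.Name }}
    spec:
      containers:
      - name: application
{{ tuple "application" . | include "werf_container_image" | indent 8 }}
{{- end }}
```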

However, if the tests require some infrastructure (for example, Redis, RabbitMQ, MongoDB, PostgreSQL...), their YAMLs do not need to be switched off. Deploy them in the test environment as well... tweaking them as you see fit, of course.

Final touch


Since building and deploying with werf so far works only on the build server (with gitlab-runner), while the pod with the tests runs on the master node, you will need to create the /mnt/tests directory on the master and share it with the runner, for example over NFS. A detailed example with explanations can be found in the K8s documentation.

The result will be something like:

```shell
user@kube-master:~$ cat /etc/exports | grep tests
/mnt/tests IP_gitlab-builder/32(rw,nohide,insecure,no_subtree_check,sync,all_squash,anonuid=999,anongid=998)

user@gitlab-runner:~$ cat /etc/fstab | grep tests
IP_kube-master:/mnt/tests /mnt/tests nfs4 _netdev,auto 0 0
```

Nothing prevents you from creating the NFS share directly on gitlab-runner instead, and then mounting it into the pods.

Note


You may ask: why complicate everything by creating a Job, if you could just run the test script directly on the shell runner? The answer is quite trivial...

Some tests require access to infrastructure (MongoDB, RabbitMQ, PostgreSQL, etc.) to verify they work with it correctly. With this approach testing becomes unified: it is easy to include such additional entities. On top of that, we get a standard approach to deployment (even if it involves NFS and extra mounted directories).

Result


What will we see when we apply the prepared configuration?

The merge request will show summary statistics for the tests run in its latest pipeline:



You can click on each error here to get details:



NB: An attentive reader will notice that we are testing a NodeJS application, while the screenshots show .NET... Don't be surprised: while preparing the article, no errors were found when testing the first application, but they turned up in another one.

Conclusion


As you can see, nothing complicated!

In principle, if you already have a working shell builder and do not need Kubernetes, bolting testing onto it is an even simpler task than described here. And in the GitLab CI documentation you will find examples for Ruby, Go, Gradle, Maven, and some others.

PS


Read also in our blog:

Source: https://habr.com/ru/post/460897/

