In a recent article, our colleague actopolus described how we learned to use Postman for functional testing of our project's API. Having learned to write functional tests, and having written about a hundred and fifty of them, we decided the time had come to hook these tests into our CI builds.
Initially, integrating the Postman tests into our builds broke down into three simple steps:
However, we overlooked one very important detail: we had no tool for measuring how well the Postman tests covered our code. Without coverage data it was hard to understand where we stood and what to aim for. So the plan gained one more item:
After several painful, unsuccessful attempts to bolt the c3 library from Codeception onto the newman tests, I decided it would be faster to reinvent the wheel and write my own library for collecting newman test coverage (I agree, that sounds rather crude).
"Why put it off any longer?" I thought, and set about writing a small tool to measure coverage, especially since most of the work (perhaps all 90% of it) is already done by the php-code-coverage library, which under the hood relies on xDebug. All that remained was to adapt it to our needs.
So, our contraption consists of two parts. The first is responsible for collecting and preparing reports on the files and lines executed during a test; the second is a CLI application that gathers all the reports together and renders them in the specified format.
In essence, php-code-coverage is a wrapper over a choice of two drivers (phpdbg, xDebug). The idea is simple: you initialize the collector in your script, it records which lines get executed (and which do not), and at the end you get an array with this data. php-code-coverage then knows how to turn these arrays into tidy reports in XML, HTML, JSON and text formats. It, too, is split into two parts: one collects the information, the other formats it.
$coverage = new Coverage();
For all of this to work, we had to add a marker to our tests: the Phpnewman-On/Off header that the sed command in the build plan below toggles.
If readers are interested, we'll describe how this contraption works in a separate short article.
1. First, the code base is checked out from the Git repository onto the Bamboo agent, and the project is built there.
In our case this means a composer build and processing of the configuration files for the Development environment. It is at this stage that we replace the PHPNEWMAN_OFF header value with PHPNEWMAN_ON in our tests (this build plan exists specifically to measure coverage; you shouldn't do this in a build plan whose goal is simply to build the project, because measuring coverage slows the run down significantly).
sed -i -e "s/Phpnewman-Off/Phpnewman-On/" ./code/newman/collection.json   # switch the marker header to enable coverage collection
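The effect of the substitution can be sketched on a toy collection (the file contents and the `X-Coverage` header key below are illustrative, not our real collection):

```shell
# Toy collection.json fragment with the coverage marker switched off.
cat > collection.json <<'EOF'
{ "header": [ { "key": "X-Coverage", "value": "Phpnewman-Off" } ] }
EOF

# The build-plan step flips the marker on, in place:
sed -i -e "s/Phpnewman-Off/Phpnewman-On/" collection.json

grep -o 'Phpnewman-On' collection.json   # prints: Phpnewman-On
```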
2. Next, the built project is uploaded to the artifact repository. This is done so that it does not have to be rebuilt from scratch for each individual task.
3. Once the built project has safely landed in the artifact repository, the next task just as safely pulls it from there and deploys it to the test backend.
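A minimal local sketch of what this step amounts to (all paths and file names here are hypothetical): the built project is stored as an archive in the artifact repository and unpacked into the branch directory on the test backend.

```shell
# Hypothetical stand-in for the artifact: a tarball of the built project.
mkdir -p build
echo '<?php // built app' > build/index.php
tar -czf project.tar.gz -C build .

# On the test backend the task would unpack it into /srv/www/<branch>;
# here we extract into a local directory to illustrate.
mkdir -p srv/www/master
tar -xzf project.tar.gz -C srv/www/master
ls srv/www/master   # prints: index.php
```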
4. The next task also pulls the project from the artifact repository and runs the newman tests against it. Note right away that these tests hit not localhost on the Bamboo agent but the test backend where we deployed the project a step earlier. The tests run in a Docker container.
docker pull docker-hub-utils.kolesa:5000/build/nodejs/newman:latest

# Run the collection inside the container: the project is mounted at /code,
# and /etc/passwd and /etc/group are mounted read-only so the container can
# run as the host user (--user) and still resolve its name.
docker run \
    --rm \
    --volume $(pwd):/code \
    --volume /etc/passwd:/etc/passwd:ro \
    --volume /etc/group:/etc/group:ro \
    --user $(id -u):$(id -g) \
    --interactive \
    docker-hub-utils.kolesa-team.org:5000/build/nodejs/newman:latest \
    run collection.json --folder Tests -r junit,html \
    --reporter-junit-export _out/newman-report.xml \
    --reporter-html-export _out/newman-report.html \
    -e _envs/qa.json -x
What the newman arguments mean:
run collection.json — run the requests from collection.json
--folder Tests — execute only the Tests folder of the collection
-r junit,html — use two reporters at once
--reporter-junit-export _out/newman-report.xml — where to write the JUnit report
--reporter-html-export _out/newman-report.html — where to write the HTML report
-e _envs/qa.json — the environment file with variables for the test backend
-x — suppress newman's exit code so that failed tests don't abort the task
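A note on -x: newman normally exits non-zero when any test fails, which would abort the Bamboo task before the report-collecting steps get a chance to run. The flag (long form --suppress-exit-code) swallows that, which is the same effect as the shell pattern below (the function is a stand-in, not real newman):

```shell
# Stand-in for a `newman run ...` invocation whose tests fail.
run_newman() { return 1; }

# Without suppression, a `set -e` script (or a Bamboo task) would stop here.
run_newman || true        # same effect as newman's -x / --suppress-exit-code
echo "exit code: $?"      # prints: exit code: 0
```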
The final task merges the per-test coverage reports on the test backend, renders them as Clover XML and HTML, copies the results back to the agent, and cleans up:

BRANCH_NAME=$(echo "${bamboo.currentBranch}" | sed 's|/|-|g' | sed 's@\(.*\)@\L\1@')   # normalize the branch name: slashes to dashes, lowercase
echo "BRANCH NAME IS $BRANCH_NAME"

# merge the collected reports and render Clover + HTML output
ssh www-data@testing.backend.dev "php /srv/www/$BRANCH_NAME/vendor/wallend/newman-php-coverager/phpnewman --collect-reports merge /srv/www/$BRANCH_NAME/phpnewman --clover /srv/www/$BRANCH_NAME/newman/_output/clover.xml --html /srv/www/$BRANCH_NAME/newman/_output/html"

# copy the reports back to the Bamboo agent
scp www-data@testing.backend.dev:/srv/www/$BRANCH_NAME/newman/_output/clover.xml ./clover.xml
scp -r www-data@testing.backend.dev:/srv/www/$BRANCH_NAME/newman/_output/html ./

# clean up on the test backend
ssh www-data@testing.backend.dev "rm -r /srv/www/$BRANCH_NAME/newman/_output/html && rm /srv/www/$BRANCH_NAME/phpnewman/* && rm /srv/www/$BRANCH_NAME/newman/_output/clover.xml"
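The branch-name normalization at the top of that script is worth a second look: Bamboo branch names may contain slashes and capital letters, neither of which belong in a filesystem path. The two sed passes replace slashes with dashes and lowercase the result (GNU sed's \L):

```shell
# e.g. for a Bamboo branch variable like "Feature/New-Tests":
BRANCH_NAME=$(echo "Feature/New-Tests" | sed 's|/|-|g' | sed 's@\(.*\)@\L\1@')
echo "$BRANCH_NAME"   # prints: feature-new-tests
```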
Important!
In our project I split the tests into two different build plans: one without coverage measurement and one with it enabled. Coverage is measured only for the master branch, and the coverage build plan runs once a day on a schedule. All of this is because tests with coverage enabled take considerably longer!
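Once clover.xml lands on the agent, overall line coverage can be eyeballed straight from the report's metrics element (statements and coveredstatements are standard Clover attributes; the numbers below are made up for the example):

```shell
# Sample Clover report fragment (hypothetical numbers) for demonstration:
cat > clover.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<coverage generated="1523000000">
  <project timestamp="1523000000">
    <metrics files="120" loc="9000" statements="6400" coveredstatements="4800"/>
  </project>
</coverage>
EOF

# Line coverage = coveredstatements / statements.
covered=$(grep -o 'coveredstatements="[0-9]*"' clover.xml | grep -o '[0-9]*')
total=$(grep -o ' statements="[0-9]*"' clover.xml | grep -o '[0-9]*')
echo "line coverage: $((100 * covered / total))%"   # prints: line coverage: 75%
```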
Summing up the work done, I'd like to note just a few facts.
Fact one. There is nothing difficult about bolting a new tool you like onto Continuous Integration. All it takes is the desire.
Fact two. If something isn't available in the tool out of the box, that by no means implies it will be hard and tedious to add it yourself. Sometimes, on closer inspection, the whole thing is solved in a couple dozen lines of code. Add to that the huge payoff from using the tool once everything works, and, on top of it all, another chance to level up your skills.
Fact three. No one claims that newman is a panacea for all ills, or that it is the best tool for functional testing. But we tried it and we liked it, especially once it was hooked up to CI!
And, of course, we will be happy if our library benefits you. And if you need to modify it - feel free to contribute!
Source: https://habr.com/ru/post/353902/