
Continuous Testing of Python Projects

A programmer is a lazy beast, so anything that has to be done more than once must be scripted.

I have been practicing TDD for a while now, and the task of continuous quality control keeps getting more important for me, especially when new developers join the team.

At first I ran the tests by hand: save, switch to the terminal, $ nosetests. Then code quality checks were added on top of the tests, and everything had to go into a script:
pyflakes *.py
pep8 *.py
pylint *.py
nosetests
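
The actual do_tests script is not shown in the post; a minimal sketch might run every check, remember whether any of them failed, and report that at the end:

#!/bin/sh
# do_tests (sketch): run all checks, exit non-zero if any of them failed
status=0
pyflakes *.py || status=1
pep8 *.py || status=1
pylint *.py || status=1
nosetests || status=1
exit $status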


Running that script by hand every time is far too much effort, so a small shell loop around inotifywait took over, running the tests and checks after every save:
while true; do
inotifywait -e modify project/*.py -qq; clear
./do_tests
done


At that point I was more or less pleased with how things were going and even relaxed for a while. But a programmer, besides being lazy, is also proud, so I wanted to show the results to someone. For keeping a history of what is going on (which helps a lot when the boss walks in and asks, "so, what have you been doing for the last month?") there is already the version control system. But it only shows what has been done and gives no overview of how successful each revision was. The code is just sitting there, and it is unclear what shape it is in and what still remains to be done.
On top of that, it is rather hard to keep an eye on colleagues, who may also commit something and forget to run the tests; as a result the repository ends up with broken code that never passed review, and the next pull can suddenly turn into a clusterfuck.

And right around then, kmmbvnr@lj published a screencast demonstrating test integration for Django projects with Jenkins (formerly Hudson). I looked at all that beauty, the charts and the reports, and wanted everything to sing and dance for me too. But his django-jenkins, as the name implies, is tied into Django and generates its reports through some clever machinery. My project is not on Django and most likely never will be: it is a fairly trivial, if rapidly growing, WSGI application. So I had to set everything up from scratch.

It cost me a Sunday, but on the whole everything is pretty straightforward, and now I have nice reports:



What's inside?

0) Automatically fetching fresh versions of the repository. The system integrates very tightly with Mercurial, so all the changes and commit messages are visible and accessible right there.

1) Building the project from scratch, as if it were being rolled out to the production server. The environment is created, packages are downloaded and installed, and so on. The system itself lives on the developers' server, in conditions close to production: FreeBSD, isolation of Python projects, the usual installation pain.

2) Running the tests. Test coverage of the code is measured at the same time, and report files for failed tests and for coverage are produced as a result. The green chart at the top shows the number of tests, with failures shown in red on the same chart. The bottom chart shows coverage. Coverage can also be browsed right in the source files, with uncovered lines highlighted.

3) A quality check: pep8 and pylint diligently lean on the developer's brain, demanding order in the code, in variable names and in everything else. The red jagged line is always there, hinting that it might be time to take out the garbage.

4) A developer who commits broken code gets automatically told on by the system via email and Jabber (both personally and in the MUC), and then by the lead programmer, who receives a similar complaint. Because nobody expects the Spanish Inquisition!

As a result, we have a system that takes a large part of the routine off several people at once.

Give me two!

The system itself ships as a Java .war package. Nothing needs to be installed, but you will need a JRE (on FreeBSD I installed Diablo 1.6). On the site you can follow a link and try it out at home right away. Starting it is simple: java -jar jenkins.war. There are also options for specifying the port and so on. I recommend binding it to localhost and putting nginx in front, just in case. And I strongly advise against running it as root.
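
For example, something along these lines (a sketch; --httpPort and --httpListenAddress are options of the embedded servlet container, check java -jar jenkins.war --help for your version):

java -jar jenkins.war --httpListenAddress=127.0.0.1 --httpPort=8080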

On FreeBSD it refused to daemonize for me, so I wrapped it in supervisor, which came in handy again later. Installing it is simple: pip install supervisor.
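
A minimal supervisord stanza for this could look roughly like the following (the paths and the jenkins user are assumptions, adjust to taste):

; sketch, not the author's actual config
[program:jenkins]
command=java -jar /home/jenkins/jenkins.war --httpListenAddress=127.0.0.1 --httpPort=8080
directory=/home/jenkins
user=jenkins
autostart=true
autorestart=true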

All further configuration is done from the browser, via the Manage Jenkins menu, once it is up.

The first step is to enable the plugins. They download and install themselves; you just need to tick the checkboxes.

I disabled the rest of the pre-installed ones, since there is no point in keeping them around.

Some plugins will not show up in the settings until you restart.

Settings

Right away it is worth enabling security and choosing a suitable authorization method. I went with the "Project-based Matrix Authorization Strategy". You do not need to create a user yet, but you do need to restart. When you come back, it will immediately prompt you to register an admin; it is best to actually call it admin. Otherwise you can later get confused about who is who, because Jenkins builds its list of project participants by looking at the commits.

"Prevent Cross Site Request Forgery exploits" is a good thing, but it did not work out for me, so I had to turn it off.

JDK and other “installations” do not need to be configured, everything works as it is.

Shell executable should be set to /usr/local/bin/bash or /usr/bin/bash; in short, the full path to your shell, because who knows what it might pick up otherwise... If you really want to, you can even point it at python, but that is inconvenient.

The Jabber and Email sections can be configured with the help of Captain Obvious and the Advanced button.

Project Setup

Projects here are called Jobs. Specify the job name and pick "free-style". Then comes the tastiest part, the one that cost me half the weekend.

I have a lot of builds and run them often, so it seemed logical to limit retention to 10 builds / 30 days. If anything, you can always click Build Now.

I keep the code in a private repository on Bitbucket, because they are free and quite feature-rich at the same time. Git, as a Pythonista, rubs me the wrong way, and GitHub has no free private repositories.

The Repository URL will look like this (yes, the credentials have to be embedded in it, which is why your Jenkins installation should be password-protected from the start): https://username:password@bitbucket.org/username/uniproxy

Repository Browser - bitbucket.

Build triggers: Poll SCM. Schedule in cron format: */5 * * * *
One could probably also set up a remote trigger, but I was too lazy; besides, every developer would then have to carry the configs for that trigger around... whereas Bitbucket is always in the same place and reachable from everywhere.
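
For completeness, such a remote trigger could be wired up as a Mercurial hook, roughly like this (the host, job name and token are made up; the /job/<name>/build?token=... URL is Jenkins' standard remote trigger, enabled per job with "Trigger builds remotely"):

# .hg/hgrc (sketch): poke Jenkins after every push
[hooks]
changegroup.jenkins = curl -fs "https://ci.example.com/job/uniproxy/build?token=SECRET" >/dev/null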

The Steps of the Inquisition

Project files are organized as follows:
.
buildenv.sh
pip-reqs.txt
pylint.rc
project/*.py
reports/*
venv/*

The last two are created by the script. I have put the main ones up on GitHub.

In the screencast, I saw that everything was in one step, but I decided to break it down into three logical ones: build, check, test.

The first step is quite simple: "./buildenv.sh", the script that lives in the project and prepares the virtualenv. Its contents could be copied straight into the build step, but that would be neither tru nor DRY: you would end up maintaining two copies of the same thing.
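
Roughly, such a buildenv.sh could look like this (a sketch assuming the dependencies are listed in pip-reqs.txt from the layout above; the real script on GitHub may differ):

#!/bin/sh
# buildenv.sh (sketch): prepare an isolated environment and the reports directory
set -e
[ -d venv ] || virtualenv --no-site-packages venv
venv/bin/pip install -r pip-reqs.txt
mkdir -p reports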

The second is more complicated:
#!/usr/local/bin/bash
venv/bin/pep8 --repeat --ignore=E501,W391 project | perl -ple 's/: ([WE]\d+)/: [$1]/' > reports/pylint.report
venv/bin/pylint --rcfile pylint.rc project/*.py >> reports/pylint.report
echo "pylint complete"


The first and last lines are there so that the step is not marked as failed whenever pylint finds something to complain about (it exits with a non-zero status when it does). And pylint, worse than the meanest teacher, will ALWAYS find something to complain about. In its config I listed the messages I am not interested in. pep8 complements pylint but pays more attention to formatting. Some of its warnings are disabled as well (line length is checked by pylint, where the limit is set in the config).
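
For illustration, the relevant bits of such a pylint.rc might look like this (the message IDs and the line length are placeholders, not the author's actual config):

[MESSAGES CONTROL]
# message IDs here are just examples of checks one might choose to ignore
disable=C0111,W0142

[FORMAT]
# line length is checked by pylint rather than pep8
max-line-length=120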

Note that all the tools run from venv, where they were installed in the previous step. If something falls apart there, everything else collapses too, and a big red ball hangs on the board of shame.

The third one runs the tests and collects the coverage:
venv/bin/coverage run --include 'project/*.py' project/tests.py --with-xunit --xunit-file=reports/tests.xml --where=project
venv/bin/coverage xml -o reports/coverage.xml


You have to be very careful with the paths here. If they are given relative to the project directory instead of the repository root, the per-line coverage display will not work: it cannot find the sources. After thoughtfully poring over the options, googling and commenting things out, I eventually arrived at a working version.

Reports

The metrics are collected; now they need to be picked up and displayed. Jenkins can do a lot of other useful things with them, but for now we are simply exploiting it for pretty graphs. Before the first build it will complain that no such report files exist, but that is not a problem.

We enable "Publish JUnit test result report" and, even though our nosetests results are not really JUnit, we specify **/reports/tests.xml, which means "in the reports directory under the repository/job root".

The next item is "Report Violations", which has a pylint row. Even though the field is called "XML filename pattern" and pylint produces no XML at all, we point it at **/reports/pylint.report, where pylint and pep8 criticize the sloppy developer's code in every possible way. This is very helpful for easing into work in the morning: you come in, look at the violations chart, and while fixing things you at least remember what you wrote yesterday.

And finally, "Publish Cobertura Coverage Report" points at **/reports/coverage.xml. I have no idea what Cobertura is, but it understands the output of Python's coverage just fine.

For pylint and coverage you can set the "weather" thresholds, which affect the reported health of the project. The default values seem fine.

The coder is a proud bird: until you kick it, it will not fly

At the very end come the email and Jabber notifications, which you can tweak to taste. The only catch is that they first have to be set up in the main Jenkins configuration, otherwise nothing will work.

"Let's go!" he said, and waved his hand

The settings are saved; now you can send it off to build, either manually with the Build Now button or by committing and pushing something to the repository. At the bottom of the sidebar, in the Build History, a new entry with a progress bar appears; clicking it takes you to the "Console Output" of the current build, which you can watch live. When everything finishes, the Status page shows all sorts of things you can click through to see what is where.

Bonus!

Every time the venv is recreated, it pulls all the modules from PyPI, and that takes a very long time, not to mention the traffic. With a bit of searching I found collective.eggproxy on PyPI, a caching proxy that mimics pypi.python.org/simple. It runs simply as `eggproxy_run`. It has no help option, and by default it dumps everything into /var/www, which is not great. Its docs on the site explain how to write a config file to set the paths and ports. It also refused to daemonize, so it was handed over to supervisord right after Jenkins.

buildenv.sh already knows how to adapt to the presence or absence of this proxy; it is all quite simple.
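
The adaptation could be as simple as the following sketch (the port 8888 and the curl check are assumptions, not the author's actual code):

# inside buildenv.sh (sketch): use the local eggproxy if it answers, otherwise go straight to PyPI
PIP_OPTS=""
if curl -fs http://127.0.0.1:8888/simple/ >/dev/null 2>&1; then
    PIP_OPTS="--index-url=http://127.0.0.1:8888/simple/"
fi
venv/bin/pip install $PIP_OPTS -r pip-reqs.txt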

Credits

kmmbvnr@lj: How to start testing and enjoy it. Besides Jenkins integration, it also describes django-any. Strongly recommended reading for all Django folks.

Setting up a python CI server with Hudson: a still-relevant post with sound advice, on which this write-up of mine is based.

For testing the project itself, a simple wrapper around WSGI applications was also used, which lets you test and debug without WSGI containers or manual clicking around in a browser, and without too much pain.

Source: https://habr.com/ru/post/114745/

