
How to stop being afraid and fall in love with frequent releases

I must say that, in principle, developers tend not to pay attention to such trifles as tests and warnings. And the situation in the picture below is not even a joke; I think many have witnessed it themselves.

image

I'll tell you a little about the development at my current place of work.
We use an agile development methodology ( Agile ): at first we practiced Scrum , and now we work with Kanban . Initially I was quite skeptical about the whole thing, but within a couple of weeks I was convinced that it works.

Describing these development processes is a topic for a big post, maybe more than one, and there is plenty of literature on the net, so I will not explain here what they are. I will limit myself to noting that these methodologies require frequent releases: in Scrum, a release happens after each iteration is completed (the iteration length is chosen by the team itself and is usually 1 to 3 weeks); in Kanban, a feature goes to production as soon as it is done (and tested, of course).

Frequent releases are not just good, they are great: a streamlined, frequent release process frees the developer from the fear of updates, and the customer sees progress.

This development process fits perfectly with Continuous Integration. Books and long articles have been written about it too; in short, it is the practice of frequent automated project builds for the early detection and resolution of integration problems ( wikipedia ).

We develop a distributed system: the backend (API) is written in Perl, the client code in JavaScript, and Jenkins CI serves as the continuous integration server.

For those who are not familiar with it, I will explain in simple terms what Jenkins is: it is a service with a web interface where build jobs are configured. A job is a step-by-step set of instructions that the server executes to build a package, in our case an rpm. It usually goes something like this:
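As a sketch of such a job configuration (the repository URL and file names below are invented for the example):

```shell
# Hypothetical build steps of a Jenkins job that produces an rpm package
git clone https://git.example.com/project.git   # 1. fetch the sources
cd project
prove -lr t/                                    # 2. run the unit tests
rpmbuild -ba project.spec                       # 3. build the rpm from the spec file
```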


Creating an rpm package requires writing a spec file , which is itself a set of instructions for building the package and installing it on the target machine.
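For illustration, a minimal spec file might look like this (the package name, summary, and paths are invented for the example):

```spec
Name:           project
Version:        1.0
Release:        1%{?dist}
Summary:        Example backend service
License:        Proprietary
Source0:        project-%{version}.tar.gz

%description
Example backend service packaged as an rpm.

%prep
%setup -q

%check
prove -lr t/                       # rpmbuild runs the unit tests here

%install
mkdir -p %{buildroot}/opt/project
cp -r lib %{buildroot}/opt/project/

%files
/opt/project
```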

So: there is a CI server on which the rpm package is built, the package then goes into a yum repository, and there is a destination server (or a cluster of servers) on which this package is installed.

In our project we use modules from CPAN: code reuse is good, but it brings some overhead, which is why many people dislike Perl.

Dependencies

Consider an example: we needed to use a certain hashing algorithm. A rather trivial task: take a module from CPAN and install it in our development environment:

# cpanm Some::CPAN::Module 

after which we write the code:

  use Some::CPAN::Module;

  sub encrypt {
      my $string = shift;
      return Some::CPAN::Module::mkHash($string);
  }

and cover it with a test:

  ok(encrypt("test") eq Some::CPAN::Module::mkHash("test"), "encrypt function"); 

The code works, it is covered with tests, and we are very pleased with ourselves. We commit it and try to build the package. Our build is set up sensibly: unit tests run at the package build stage, which is very convenient: if the tests do not pass, the code is bad and there is no point in building further.

And then, trouble: the test fails because the required module is not present on the build server.

Or another situation: we have no tests, the code ships in the package to the repository, and we end up with a service that will not start, but now on the production server, where the error is much harder to catch.

To build and pass tests, add a dependency to the spec file:

  BuildRequires: perl-Some-CPAN-Module 

To deploy to the destination server, add:

  Requires: perl-Some-CPAN-Module 

If you are lucky, the required rpm is already in the yum repository; if not, you need to build the package and put it in the yum repository.

So, the package from the repository gets installed on the build/production server, but there is one problem: it has a different version. While we were writing the code and tests, the module's author fixed a critical vulnerability in his module and released a new version, which for some reason suddenly became incompatible with the previous one. The test still fails, the build fails, the service does not start.

Solution: find out which version of the module our code works with, build an rpm with that version, and pin the exact version of the module in the spec file:

  BuildRequires: perl-Some-CPAN-Module = 1.41
  Requires: perl-Some-CPAN-Module = 1.41

Another problem can occur if a CPAN module has dependencies of its own: for example, Some::CPAN::Module depends on Other::CPAN::Module. As a result, after installing the packages, the development environment and the build server end up with different versions again, the tests fail, and the service still does not start. A real project can easily have more than a dozen such modules. In narrow circles this is called a dependency jellyfish.

Solution: you can track the dependencies of the dependencies as well and list all of them in the spec file, but this takes a lot of time and effort for tracking and rebuilding, and our development methodology leaves little room for that.

An alternative solution is to run your own CPAN mirror, for example with pinto , but that is beyond the scope of this article.

There is another solution.

The main problem with CPAN is that it was designed a long time ago, and anyone who has ever uploaded a module to CPAN understands what I am writing about. Fortunately, modern tools have begun to appear for Perl in recent years.

Carton

I want to tell you about a tool called Carton . Its author, Tatsuhiko Miyagawa, has also written a bunch of other useful utilities, including starman and cpanm.

What is Carton? It is a dependency manager for Perl modules, written under the influence of Bundler for Ruby.

Its essence boils down to this: a file declaring the dependencies, cpanfile , is created in the project directory. In it, the developer lists the required modules and their versions:

  requires 'Some::CPAN::Module', '1.45'; 

then the command is run:

  carton install 

after which the required module is installed into the local/lib directory, and a cpanfile.snapshot is created that records information about the modules installed from CPAN. This file should also be added to the VCS, so that later, when carton install runs on the build server, the dependencies come from the snapshot, which guarantees an identical environment.
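The whole workflow, sketched as shell commands (assuming the project already has a cpanfile), looks roughly like this:

```shell
carton install                  # resolve cpanfile, install into local/, write cpanfile.snapshot
git add cpanfile cpanfile.snapshot
git commit -m "Pin CPAN dependencies"

# later, on the build server:
carton install --deployment     # install exactly the versions recorded in the snapshot
```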

All you need to do now is add the local/lib directory to @INC .
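A minimal sketch of this (Some::CPAN::Module is the hypothetical module from the examples above; by default Carton installs modules under local/lib/perl5):

```perl
use lib 'local/lib/perl5';    # prepend Carton's install directory to @INC
use Some::CPAN::Module;
```

Alternatively, running the program as `carton exec -- perl script.pl` sets up the search path for you.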

You can specify the exact version that is needed or the valid range of versions:

  requires 'Some::CPAN::Module', '>= 1.41, < 1.46'; 

Moreover, you can specify modules that are specific to different environments, for example:

  osname 'MSWin32' => sub {
      requires 'Win32::File';
  };

  on 'test' => sub {
      requires 'Test::More', '>= 0.96, < 2.0';
      recommends 'Test::TCP', '1.12';
  };

  on 'develop' => sub {
      recommends 'Devel::NYTProf';
  };

Carton works in tandem with cpanm, another useful utility by the same author; it is cpanm that actually installs the modules.

Now that all the dependencies are listed in the cpanfile, a snapshot of the development environment is stored in cpanfile.snapshot, and both are committed to the VCS, we can add the carton install command to the spec file before the tests are run.

The same command can also be added to the installation stage that runs when the package is deployed to its destination server.
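Putting it together, the relevant spec file sections might look like this (a sketch; the paths are hypothetical):

```spec
# build stage: install the pinned modules so the tests can run
%check
carton install --deployment
carton exec prove -lr t/

# installation stage on the destination server
%post
cd /opt/project && carton install --deployment
```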

But we went further.
image
The author of Carton envisions his utility being used as follows: Carton is installed on all the necessary servers, and then the capistrano utility runs the command on all the remote machines at once:

  # cap carton install 

This seems a bit odd: you have to download a whole heap of dependencies from the Internet or from a local CPAN mirror, which can take quite a long time, and what if there happen to be network problems at that moment?

There is a more reliable way: we can install the modules at the rpm build stage. So we decided to create an über rpm package into which we put all the dependencies at once. This package is declared as a dependency of the package with our code, and it can be installed both on the build server (so the tests pass during the build) and in production.
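A hedged sketch of such an über-package's spec (the name, summary, and paths are invented):

```spec
Name:           project-perl-deps
Version:        1.0
Release:        1%{?dist}
Summary:        All CPAN dependencies of the project, pre-installed
License:        Proprietary
Source0:        project-%{version}.tar.gz

%prep
%setup -q

%build
carton install --deployment        # install the pinned modules into local/

%install
mkdir -p %{buildroot}/opt/project
cp -r local %{buildroot}/opt/project/

%files
/opt/project/local
```

The application package then simply declares `Requires: project-perl-deps` and adds the bundled local/lib/perl5 directory to @INC.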

A similar approach is popular in Scala projects, for example: an über-jar is created that contains all of the project's dependencies plus the code. Some go even further and create an über-binary with all the dependencies already compiled in.

Now that we have set up and automated the build process, we can stop being afraid of releases and dependency jellyfish and focus instead on the algorithms and the product we are making :)

Source: https://habr.com/ru/post/198376/

