In this article we decided to talk a little about the continuous integration (CI) tooling we use at Dnevnik.ru and to share a few small developments of our own in this area. Much of the material may look like plain advertising for the chosen CI engine, or an attempt to start a holy war (or several), but that was not the goal. Nor is the article a guide or a feature overview; treat it as an informal write-up. The main thing is that it should be interesting and spark a discussion.
Jenkins
One of the first tasks I set for myself when I joined the company about three months ago was introducing a "proper" (from my point of view) CI practice. Not that the company had nothing of the sort; at the time, CI was handled by the well-known
Jenkins . It is an open source project forked from another big CI tool,
Hudson . The last time I worked with Hudson was in 2008, and the experience was sharply negative. The interface did not scare me, and I could live with the complexity of the configuration, but a system that crashes and stumbles out of the blue twenty times a day is hardly what I would call well tested and stable. As it turned out, time does not heal: even four years and a name change later, those problems had not gone anywhere.
The main idea behind both Hudson and Jenkins is to be the most generic CI engine possible. Their main strength was supposed to be extensibility:
- take the engine;
- install a thousand plugins for every occasion;
- try to configure it all;
- profit (supposedly).
The point many people tend to forget is that plugins are not always a good thing, and are often outright harmful. Plugins are, as a rule, written by other people, and you have no idea how well each of them was tested or whether the code contains something unexpected; examples abound. That is why I have always been amused by people who decorate their systems with plugins like Christmas trees and then curse the slowness and crashes of their beloved Firefox, Miranda, or Visual Studio (add to taste).
Sticking to trusted sources and the principle of minimalism has always saved me from such troubles. Unfortunately, with Jenkins it was not that simple. Built around the idea of a maximally generic solution, but nevertheless belonging to the Java world, out of the box it could do nothing at all for the .NET world.
Want to build .NET projects? Install a third-party plugin. Want LDAP authentication? Install a third-party plugin (I will stay quiet about the workarounds for pulling email addresses out of LDAP). Even Subversion is not supported out of the box; yet another plugin is needed, one of dubious quality and with a very poor feature set. All of this added up to a titanic effort to configure and maintain the plugin farm (about 50 plugins in total). And, as I have already noted, the code can be poorly tested, so an attempt to upgrade a plugin could easily bring down the whole Jenkins instance. It is no coincidence that Jenkins offers a downgrade button right on the plugin management page (there is one for Jenkins itself as well); it exists precisely because of these stability and compatibility problems.
In the end, the stability issues, the poverty of the .NET infrastructure, and the very significant cost of supporting and administering the builds negated all the benefits of Jenkins as a free CI system.
TeamCity
I will say right away that I did not agonize over the choice of engine. I knew immediately that I would use
JetBrains TeamCity . The reasons are quite simple:
- I know it very well and have introduced it into the process many times;
- it is easy to administer;
- everything needed for .NET projects comes out of the box;
- excellent integration with the IDE and the developer environment;
- a project template engine (something sorely missed in Jenkins);
- JetBrains' licensing policy is quite convenient: the free Professional edition of TeamCity has only two limitations, 20 build configurations and 3 build agents;
- and other goodies.
The most important reason is one I always keep in mind: open source is for the poor. Before the flame-war enthusiasts take this phrase literally, let me urge them not to; I put a deeper meaning into it. Open source is a good thing, and a salvation for many companies, but the bottom line is always the same: in 90% of cases an open source solution will lose to a comparable commercial product. That is why I did not even look toward
CC.Net and the like.
A legitimate question may arise: why not Atlassian Bamboo or TFS? It is very simple. Choosing TFS would mean burning bridges to other technologies, and I am not a supporter of such measures; besides, I am sure our project will soon grow beyond a 100% .NET solution. Yes, TFS can be used with other technologies, but only with workarounds, and that again costs time.
Atlassian Bamboo is not bad in itself: excellent integration with Jira, good UX. But the lack of pre-commit builds, the poor integration with the developer's environment, the lack of NuGet support, and other little things tipped the scales in favor of TeamCity.
It may come as a surprise that TeamCity internally uses the very approach for which I criticized Jenkins/Hudson: a plugin system, with all of its functionality implemented as plugins. That is true, but with one exception: every plugin included in the standard TeamCity distribution is tested by JetBrains. The commercial nature of the product shows here; they cannot afford to throw a pile of half-baked handicrafts onto the market. People pay money for it, which makes the quality requirements much more severe.
Besides, I have always liked JetBrains products because "user experience" is not an empty phrase for them. From the very beginning you are taken by the hand and led through all the thorns of installation and administration in maximum comfort. I appreciate that kind of care and believe this is exactly how professional products should be made.
Implementation
I will not describe the installation and configuration of TeamCity; that is not the purpose of this article and not very interesting.
Let me just say that I installed the system on Windows Server 2008 R2 and used MS SQL Server 2008 R2 as the database. One quirk surfaced quickly: the database schema for MS SQL Server does not use Unicode everywhere. This was especially noticeable when developers wrote commit comments in Russian. The problem was solved simply enough: in the database, the type of the
description column in the
vcs_history table was changed from
varchar(max) to
nvarchar(max) . Yes, this may cause problems when upgrading to subsequent versions, but it was necessary.
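A minimal sketch of that change (the table and column names come from the TeamCity schema as described above; the NULLability is an assumption, and touching the schema of a running TeamCity instance is, of course, at your own risk):

```sql
-- Switch commit comment storage to Unicode so Russian text survives.
ALTER TABLE vcs_history ALTER COLUMN description nvarchar(max) NULL;
```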
The main difficulty was not configuring TeamCity but the developers themselves. More precisely, getting them to treat a broken CI build as an extraordinary event that must be fixed immediately. There is only one recipe here: careful documentation of the process and personal control over compliance. After a while people understood what was required of them, and now I no longer even have to enforce it.
The first build we set up was an ordinary integration build of the .NET solution. To make it clear exactly who broke the build and with what, we configured a policy of running a build for every commit to svn.
This was not enough, however, since a plain integration build did not catch ASP.NET compilation errors.
After a short discussion on the topic, we decided to add a new
CI Build configuration to all web application projects and put the following
target into the project files:
<Target Name="AfterBuild" Condition="'$(Configuration)' == 'CI Build'">
  <AspNetCompiler VirtualPath="temp" Clean="true" PhysicalPath="$(ProjectDir)" />
</Target>
This way, each developer could run the ASP.NET compilation of the project before committing.
A problem remained with the integration build itself. With the ASP.NET compilation added to it, the full build time grew from 3 minutes to 20 minutes, which nullified all the advantages of a build per commit. We needed to get compilation error messages as quickly and efficiently as possible, so it was decided to split the integration build into two parts:
- MSBuild compilation
- ASP.NET compilation
TeamCity supports so-called snapshot dependencies. In short, it works like this:
- first, the ordinary integration build of the project runs as a separate build configuration;
- if it succeeds, another build is launched that performs the ASP.NET compilation of the project; moreover, it runs on the same snapshot of the sources as the preceding integration build, i.e. on the same source revision.

This way, we did not load the machines with an unnecessary ASP.NET compilation when the integration build had already failed, and we got MSBuild errors much faster.
Besides integration builds and test runs, we use TeamCity to build installation packages. The built-in artifact system lets you specify what constitutes a build artifact and, if necessary, archive it. The artifacts can be downloaded directly from the UI.

Or retrieved via the REST API.
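For illustration, here is a sketch of fetching an artifact of the last successful build through TeamCity's artifact download URL (the server name, the build configuration id bt12, the artifact name, and the local path are made-up placeholders):

```powershell
# Download an artifact of the last successful build of configuration "bt12"
# over the guest-auth download URL. All names here are hypothetical.
$url = 'http://teamcity/guestAuth/repository/download/bt12/.lastSuccessful/package.zip'
(New-Object System.Net.WebClient).DownloadFile($url, 'C:\temp\package.zip')
```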
Before TeamCity, the package was built by a PowerShell script that simply drove 7-Zip, telling the archiver on the command line which file types should not go into the archive (the list of file types to include was far longer). So the artifact system turned out to be a mild disappointment. First, none of the built-in archivers showed a good compression/speed ratio compared to 7-Zip; tar.gz came closest, but only close. Second, the artifact specification has no way to exclude files of certain types from the archive, which was extremely inconvenient and forced us to spell out everything that had to be included (by the way, you can vote for
this feature ). On top of that, the archive was about 500 MB in size, which sent the UI into deep contemplation.
So we buried the idea of using the artifact system and still use PowerShell; fortunately, TeamCity has a built-in runner for it.
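A minimal sketch of such a packaging script, assuming 7-Zip is installed at its usual path; the source and output paths and the excluded extensions are illustrative, not our real list:

```powershell
# Pack the build output with 7-Zip, excluding file types we do not ship.
$sevenZip = 'C:\Program Files\7-Zip\7z.exe'
$source   = 'C:\build\output\*'
$package  = 'C:\build\package.7z'

# -xr! recursively excludes matching files; one switch per pattern.
& $sevenZip a -t7z $package $source '-xr!*.pdb' '-xr!*.obj' '-xr!*.cs'

if ($LASTEXITCODE -ne 0) { throw "7-Zip failed with exit code $LASTEXITCODE" }
```

The point of the exclusion list is exactly what the text describes: it is much shorter than the list of everything that has to be included.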
NuGet
One of the features that interested me most was the advertised NuGet server built into TeamCity. I had long entertained the idea of running a corporate NuGet server, both to prevent uncontrolled addition of dependencies to the project and for internal libraries, which are simply more convenient to consume via NuGet. So the possibility of using the CI engine itself for this purpose, instead of a shared network folder, looked promising.
TeamCity provides two NuGet feeds: a guest one, with no authentication, and a second one that uses whatever authentication is configured in TeamCity.

The feed can be plugged both into Visual Studio and into NuGet Package Restore (the NuGet.targets file), so that nobody can sneak in a new dependency on the sly:
<ItemGroup Condition=" '$(PackageSources)' == '' ">
  <PackageSource Include="http://teamcity/guestAuth/app/nuget/v1/FeedService.svc/" />
</ItemGroup>
Unfortunately, there is still a fly in the ointment. The embedded NuGet server has no UI for managing packages. Moreover, the basic idea JetBrains puts into it is that it should be used only for NuGet packages unique to your company. In their own words, they are
not going to compete with the official NuGet server .
You can add a NuGet package to the feed, but for that it has to appear in the list of artifacts. Which is exactly what I did.
All the required packages were added to a separate repository in SVN, and a new build configuration simply put its entire contents into the artifacts.

And the result:

It worked, and all the packages became available in the feed.

The only remaining problem was removing packages from the feed. We solved it with TeamCity's built-in clean-up tool.

This way, once a package is deleted from svn, after clean-up it no longer lingers in the artifact lists of previous builds, and therefore disappears from the feed for good.
To turn your own library into a NuGet package, it is enough to use the dedicated
NuGet Pack runner, which is very easy to set up.
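As a sketch, the runner can be pointed at a .nuspec specification like the one below; the package id, version, author, and file path are placeholders, not our actual package:

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <!-- All values below are illustrative. -->
    <id>MyCompany.Utils</id>
    <version>1.0.0</version>
    <authors>MyCompany</authors>
    <description>Internal utility library.</description>
  </metadata>
  <files>
    <!-- Pack the release build of the library for .NET 4.0 consumers. -->
    <file src="bin\Release\MyCompany.Utils.dll" target="lib\net40" />
  </files>
</package>
```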
Afterword
This is far from everything that has been done, or is planned, in the area of CI: automatic deployment of the application and the database, auto-generation of test data, static code analysis, and so on remain outside this article, and we will try to describe our approach to them in future ones.
P.S. As stated at the beginning, this article is not an advertisement, and Dnevnik.ru receives no licensing concessions from JetBrains.
The author of the article: Alexander Lukashov, head of the development department at Dnevnik.ru.