We do not like to wait in line; we want to place an order online rather than buy a ticket at the box office. Let everything live in an application, in electronic form. And here there is an important "but": we want it all here and now, and we want it to work without failures, like clockwork. The pizza must be delivered on time, and the seat in the cinema must match the confirmation we received. What plays the key role in all this variety of applications and services?
Of course, it is the test environment, without which a fast release of a quality product is impossible! Modern testing tools burst into our lives like a hurricane and changed our capabilities in just a few years. We mastered virtualization and containerization by leaps and bounds, tried the Selenium product line, and argued about the advantages and disadvantages of Docker.
Why was all this necessary, and where have we arrived?
What future awaits us?
Let's talk testing with gurus of the profession and go through the toolkit from A to Z. Igor Khrol and Anton Semenchenko will help us with this.
Stock up on coffee, tea, or another drink of your choice, and let's begin. The conversation will be long.

So, Igor Khrol is a test automation specialist at Toptal. Igor has extensive experience with most of the popular tools (Selenium, HP QTP, TestComplete, JMeter).
- Igor, good afternoon. I propose that we begin our conversation. Let me put the first question this way: more and more companies are moving away from the "checked on my computer" approach toward full-fledged testing departments with highly qualified specialists. Do you see this trend, or do companies continue to save on testing?

Good afternoon. Allow me to disagree with the premise of the question. Classical testing departments, which accept a specific product build for testing and return a list of defects, no longer match the current speed of business and software development. We are becoming more Agile (the word is already quite worn out): a testing specialist sits inside the project team and is ready to help development quickly. Of course, testing engineers communicate with each other, but there is no department as such. Many advanced companies work in this format (Spotify is a good example; it is described very well here). Testing is becoming ever more integrated into the development process.
Life has become very fast, so you need to change quickly and roll out new releases quickly; customers do not want to wait a week. The formalized procedure, where a build is assembled, delivered, and a test result arrives a week later, no longer works well.
As for saving money: I would not say it is widespread, especially where environments are concerned. The cost of hardware has fallen significantly in recent years, while the amount a company may lose because of bugs is disproportionately higher. So checking only on your own computer may happen in some cases, but it is definitely not a trend. I do not know of companies that, in order to save money, refuse to buy a server or begrudge the money for a good test environment.
- The first testing tools began to appear in the mid-90s. Do you agree that active development began in precisely that period, or did it merely lay the foundation for the "construction" of today's high-tech products?

The first xUnit systems appeared quite a long time ago (Wikipedia says around 1989); there was not yet such a large number of user interfaces, so that was probably enough. Then, in the late nineties and early two-thousands, when more user interfaces appeared, the first UI tools emerged. Among the first in my practice were WinRunner (released in 1995) and Watir (there are versions from 2005).
Then and now
Testing has always existed: if you wrote or made something, you need to check it. It is in honor of this that Tester's Day exists, tracing its origin back to 1945.
If we talk specifically about the test environment, I would not say there are special tools for preparing it. What we do is apply the same approaches used for deploying production environments: Docker, Puppet, Ansible, and related solutions. They were created to give a reproducible result. As an effect, we can clone our production environment and test safely, as close to reality as possible.

Previously these were instructions a hundred pages long, and standing up a new test environment took months; now the approaches are much better. Everything is automated, everything is in code: run a script and the environment is configured. So I would not call these tools testing tools; it is more of a DevOps and admin topic.
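To make "run a script and the environment is configured" concrete, here is a minimal sketch using the Docker SDK for Python; it assumes a running Docker daemon and the `docker` package, and the image, credentials, and port are purely illustrative:

```python
# Sketch: a disposable, reproducible test environment defined in code.
# Assumes the `docker` Python SDK (pip install docker) and a local daemon;
# the Postgres image, password, and port are illustrative.
import docker

def provision_environment():
    """Start a clean database for a test run and return the container."""
    client = docker.from_env()
    return client.containers.run(
        "postgres:16",
        environment={"POSTGRES_PASSWORD": "secret", "POSTGRES_DB": "app_test"},
        ports={"5432/tcp": 5433},  # host port the tests will connect to
        detach=True,
    )

if __name__ == "__main__":
    db = provision_environment()
    try:
        print("environment ready:", db.short_id)
        # ... run the tests against localhost:5433 ...
    finally:
        db.stop()    # the same script tears the environment down
        db.remove()  # so every run starts from a clean state
```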
- Igor, please tell us about your first acquaintance with solutions in this area. When did you, for example, first meet the VMware platform? Do you use these software products now?

If we talk about VMware, it was indeed quite a help back when we needed to test products on different operating systems. Today this has evolved into clouds, for example Amazon or Google Cloud. If I need a test environment, I run a script or write to a Slack bot, and I have a working server.
I use it for a couple of hours, days, or weeks, and then shut it down. At Toptal, continuous integration is also automated to the maximum: I push code to GitHub, and somewhere out there the necessary number of servers is raised on Google Cloud by itself, the tests are run, and I get a note in the pull request if there are any regression problems. Locally, I sometimes have to raise virtual machines to test specific things: for example, to see how an xlsx report will look in Microsoft Office on Windows.
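As a sketch of that "server on demand" workflow (an assumption on my part, not Toptal's actual tooling): a script that a Slack bot could call to raise and later destroy a disposable server, assuming an authenticated gcloud CLI; the instance name, zone, and machine type are hypothetical.

```python
# Sketch: raising a disposable test server in the cloud from a script.
# Assumes the gcloud CLI is installed and authenticated; the zone and
# machine type are hypothetical.
import subprocess

ZONE = "europe-west1-b"

def create_test_server(name: str) -> None:
    subprocess.run(
        ["gcloud", "compute", "instances", "create", name,
         "--zone", ZONE, "--machine-type", "e2-standard-2"],
        check=True,
    )

def delete_test_server(name: str) -> None:
    """Use it 'for a couple of hours/days/weeks' and then shut it down."""
    subprocess.run(
        ["gcloud", "compute", "instances", "delete", name,
         "--zone", ZONE, "--quiet"],
        check=True,
    )

if __name__ == "__main__":
    create_test_server("qa-env-42")
    # ... run the tests against the fresh server ...
    delete_test_server("qa-env-42")
```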
- Many developers like to divide test environments into clean (a pre-installed OS with minimal software) and dirty (as close as possible to the production version). Do you think this distinction is appropriate? Given that there can be a huge number of such variants, might it be better not to view them only in this light?

It depends on the task. If you run unit tests, you need a minimal set of software: something to run your code (a JVM, an interpreter, ...) and that is all. If you are testing a separate microservice or a component of the system, you may not need to start many things; it is enough to have only what interests you working. Practice shows that having a certain staging or preproduction (people call it different things), as close as possible to the production environment, is very useful for final checks and acceptance tests. The proximity should be maximal in everything: the same hardware, down to minor versions and software patches, a full set of data, and so on.
- Is the process of preparing the test environment very different now from before? Which tools are appropriate, and in which cases? That is, should a solution be chosen depending on the size of the company, or are today's tools easily scalable?

When I started working, the test environment was created according to some instructions, or in ways known only to certain admins. As teams matured, testers began to test these instructions, that is, to test the deployment process itself. Gradually everything moved toward greater automation and, as a result, toward reproducibility of the result and a smaller human factor. Now the environment is rarely configured by hand; most likely it will be Ansible, Puppet, or something similar. As a result, the test environment is as close to production as possible, and we do not test something that does not exist on prod.
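A hedged sketch of the idempotent, "describe the desired state in code" style that tools like Ansible and Puppet embody, written in plain Python purely for illustration; the package and path names are hypothetical, and a real project would use the tools themselves:

```python
# Sketch: idempotent environment setup steps. Running the script twice
# leaves the machine in the same state, which is the property that makes
# test environments reproducible. Names and paths are illustrative.
import os
import shutil
import subprocess

def ensure_package(name: str) -> None:
    """Install a package only if its binary is missing."""
    if shutil.which(name) is None:
        subprocess.run(["apt-get", "install", "-y", name], check=True)

def ensure_config(path: str, content: str) -> None:
    """Write a config file only if it differs from the desired state."""
    current = open(path).read() if os.path.exists(path) else None
    if current != content:
        with open(path, "w") as f:
            f.write(content)

ensure_package("nginx")
ensure_config("/etc/myapp/test.conf", "mode=test\n")
```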
- Have you used container technologies in your daily work? Have you looked at RancherOS or Red Hat's Atomic?

The Docker and container theme, with the add-ons above it, is booming now. Does it affect the test environment? Of course it does. Getting a working system for tests is already possible in a couple of clicks, which is pleasing. At Toptal, containers are used mainly for testing:
- a maximally pre-built container shortens the preparation time before a run;
- if several components are needed for integration testing, they are easy to get by running several interconnected containers, as in the sketch below.
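A minimal sketch of that second point, assuming the `docker` Python SDK and a running daemon; the application image, network name, and credentials are illustrative:

```python
# Sketch: several interconnected containers for an integration test.
import docker

client = docker.from_env()

# A dedicated bridge network lets the containers resolve each other by name.
network = client.networks.create("integration-tests", driver="bridge")

db = client.containers.run(
    "postgres:16",
    name="it-db",
    environment={"POSTGRES_PASSWORD": "secret", "POSTGRES_DB": "app"},
    network="integration-tests",
    detach=True,
)

app = client.containers.run(
    "example/app:latest",  # hypothetical application image
    name="it-app",
    environment={"DATABASE_URL": "postgresql://postgres:secret@it-db:5432/app"},
    network="integration-tests",
    ports={"8080/tcp": 8080},
    detach=True,
)

try:
    pass  # ... run the integration tests against http://localhost:8080 ...
finally:
    # Tear everything down so each run starts from a clean slate.
    for container in (app, db):
        container.stop()
        container.remove()
    network.remove()
```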
- After all this work creating identical configs and applications, the question of data transfer arises. In which cases is it appropriate to maintain the test environment as a mirror of production? An important point here is the depersonalization of the data in the database. How do you feel about this practice when data is handed over for testing?

A full set of data is most often needed for acceptance tests, when you need to look at the final result. Some problems cannot be found if you have little data or it does not look real. Performance tests are also conducted on real data in many cases.

Depersonalization is a good and necessary practice. I would not like password hashes, or even lists of your clients, to escape from the test environment into the outside world.
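For illustration, a minimal depersonalization sketch that masks sensitive columns before a data set reaches the test environment; the column names and CSV format are hypothetical, and real projects often anonymize at the database-dump level instead:

```python
# Sketch: replace personal data with stable, irreversible tokens so the
# test environment never holds real client information.
import csv
import hashlib

SENSITIVE = {"email", "full_name", "phone"}

def mask(value: str) -> str:
    """Hash a sensitive value; the same input always maps to the same token."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()[:12]

def depersonalize(src_path: str, dst_path: str) -> None:
    with open(src_path, newline="") as src, open(dst_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            for column in SENSITIVE & set(row):
                row[column] = mask(row[column])
            writer.writerow(row)

if __name__ == "__main__":
    depersonalize("clients_prod.csv", "clients_test.csv")
```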
Selenium
- Selenium (the product line) is now being actively adopted in test environments. Have you watched how this project has been modified, acquiring new functionality? What do you think about Selenium WebDriver in its current state?

I actively follow the development of Selenium and have been using it since version 0.8. There is no point in retelling the whole story here, since it is written up very well on selenium2.ru. Speaking about the current state of the project, the most significant fact is that WebDriver has become a W3C standard for browsers, and the main browser manufacturers now implement the drivers themselves.
The ecosystem around WebDriver does not stand still either; perhaps it develops even faster than the WebDriver API itself. If earlier any test automation project began with writing your own framework, now the use of self-written solutions is considered bad form. In practically any language there are ready-made libraries that free you from writing, yet again, the code that correctly checks for the presence of elements on the page or works with AJAX. In Java: HtmlElements, Selenide. In the Ruby world: Capybara and page-object. In Python: Webium.
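For a sense of what such libraries encapsulate, here is a minimal page-object sketch in plain Selenium WebDriver for Python; the URL and selectors are illustrative:

```python
# Sketch: the page-object pattern that libraries like Webium, Selenide,
# Capybara, and page-object build on. Assumes the `selenium` package and
# a local chromedriver; the page and selectors are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    URL = "https://example.com/login"  # hypothetical application URL

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def log_in(self, user: str, password: str) -> None:
        # The test talks to the page through named actions, not raw locators.
        self.driver.find_element(By.NAME, "username").send_keys(user)
        self.driver.find_element(By.NAME, "password").send_keys(password)
        self.driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()

driver = webdriver.Chrome()
try:
    LoginPage(driver).open().log_in("demo", "demo")
finally:
    driver.quit()
```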
Of the good resources devoted to Selenium, I can recommend the talk recordings from the SeleniumCamp conference. I participate there regularly and I like how the subject develops every year.
- In your opinion, which tools for creating test environments have come to be done perfectly, a solid five out of five, over the past few years? Which services are definitely worth trying now? Maybe there are developing projects worth paying attention to right now?

The theme of creating test environments closely echoes DevOps, and as such no testing-specific tools come to mind. I consider it a remarkable achievement that the phrase "it works on my machine" is heard less often now, since environments are becoming more and more identical. The key words here are Ansible, Puppet, Docker, Vagrant. Deployment scripts have become an integral part of projects and deliveries.

Not to mention the cloud solutions (AWS, Google Cloud, DigitalOcean). Previously everyone bought their own servers, and a lot of indignation arose at any attempt to hand something over to third parties. Now few companies can afford their own data centers, and there is no need to.

Among the promising areas I would point to cloud solutions where you have no servers at all that need rebooting, updating, and every other kind of attention instead of useful work. You push code, and that code is in the test environment or on prod. That is Heroku and Google App Engine.

- Thanks for the answers. We will look forward to your next talks.

Anton Semenchenko is an activist of the test automation community www.COMAQA.BY and of the "severe" C++ development community www.CoreHard.by. Main specialization: automated testing and low-level development in C++ and below.
- Anton, good evening. The topic of our conversation is the evolution of the test environment. I suggest we touch briefly on your history as a specialist, and from there move on to testing.

Good evening! All right.
- How did you find yourself in this whole story? Did your transition to testing go in parallel with the development of this segment in the company where you worked, or was it solely your own choice?

Everything happened by chance, but it is definitely my choice. Like any other broad-profile IT specialist, I was indirectly associated with testing. Ensuring the quality of the final product is a complicated, multi-faceted process: coding standards, code review, unit tests, pair programming, formal discussions of key code sections, and work with ERD and PRD documents are all elements of Quality Assurance (hereinafter QA) in which developers are involved. So the overall QA cycle was clear to me, and I certainly participated in quality assurance.
There were also projects related entirely to QA, for example unit testing: Symantec once asked us to cover with unit tests the core of its flagship products, developed in pure C in the early 80s. On the one hand this was a complex development task; on the other, one hundred percent QA. We dealt exclusively with unit tests of code that was never designed to be testable. Sometimes there were functions with a cyclomatic complexity of 250 or more; you could spend a month on a single function just to understand how to cover it with tests. So I was, of course, associated with QA, but indirectly.
Automated Testing
At my previous place of work, ISsoft, the idea was to open an independent automated testing department. The company had automation engineers, but, first, automation was offered not as a service but as a resource: here are the specialists, you formulate the goals, tasks, and processes for them yourselves, and they work in the style of "I can dig, I can not dig." There was a desire, or rather a business need, to reach a new level in both the quality of service and the technical substance of the solutions. Second, owing to the tasks colleagues had faced over the years, the company had no people ready to take on these "challenges", with all due respect to their professionalism.
I received a proposal to organize such a department from scratch. The role required varied skills, both in software development and in working with people, because it was necessary to assemble a wonderful team; otherwise the mountains could not be moved. The choice fell on me not only, and not so much, because of the technical component, but because of a combination of factors. The "assignment" seemed interesting to me, and apparently I was not mistaken: test automation is an absolute trend.
I started working in this direction about four years ago. I was very lucky: I immediately found people who fit perfectly into the team and complemented each other. In effect, a backbone was assembled: Andrei Stakhievich, Vadim Zubovich, and many other professionals known for numerous conferences, publications, and trainings. Without the team I could not have coped with these tasks physically.
Naturally, to understand what automation will be tomorrow and how to properly develop and sell the expertise, you need to understand what it is today. The simplest and surest way is to ask the specialists. We began to actively attend conferences as listeners, and also to develop prototypes. For example: take the twenty tools currently at the top, write a prototype with each of them for a specific type of application, conduct a comparative analysis, and draw our own conclusions. The result was a good overview that most companies simply did not have.
Representatives of other companies knew one or two directions very deeply, but not twenty; there was no such wide coverage. We also saw the problem of an information vacuum: at least four years ago, few specialists knew what automation is today and were ready to share this "sacred" knowledge. We started giving talks ourselves and spreading the information, and so the idea arose to organize the community www.COMAQA.BY, whose goal is to build an effective platform for communication among specialists directly or indirectly related to test automation.

It was clear that the area is developing so dynamically, and is so wide and multifaceted, that one company's efforts are guaranteed to fall short; it requires the work of very different specialists from different companies, and better still from different countries. Now we are actively moving across the CIS and trying to cooperate; this fall alone I will take part in 25 events across Russia and Belarus. That is roughly how I came to this interesting field. I cannot say it was exclusively my own choice: if I had not received such an offer, and if I had not managed to put together a great team, none of this would have happened. It is largely thanks to those people that I am in automation now.
- Is it possible to say that the preparation of a correct test environment, and the testing process itself, is gradually becoming a standard part of preparing and releasing a product?

It seems to me that this is a very complex, ambiguous question; it should be divided into a whole group of questions. I will voice my subjective opinion, and perhaps many experts will disagree with it; all the more interesting to watch a hot debate. In my opinion, it is almost impossible to introduce anything conceptually new in approaches to organizing a process, no matter whether we are talking about managing a feudal castle or the software development process. In the broadest sense of the word, Socrates, as presented by Plato, was the first object-oriented programmer: he had archetypes, categories (hierarchies), imperfect implementations of archetypes in our world, and so on. If you develop this idea and apply it to IT, you get OOP with classes, meta-classes, objects, and the other technical attributes.
In fact, as early as the fifties there were specialists responsible for installing and organizing test stands, for a formal test environment, and for serious test plans and other documentation, in a much more rigid, standardized form than today. This was dictated by the fact that hardware was very expensive, so machine time was spent very economically; only real gurus were allowed at the computer, and they configured the environment very carefully and correctly.
It is hard to believe, but the hardware and software development described by Frederick Brooks in his canonical book The Mythical Man-Month was the second most expensive research project in the history of mankind; the first was the American space program. Today, if we organize an environment incorrectly, we skip a defect. Back then, specialists, on top of the standard "minuses", got situations where tens or hundreds of thousands of dollars were wasted, because machine time was "cosmically" expensive.
On the other hand, today very much has changed fundamentally. The number of software products grows exponentially, and the complexity of the average software product falls just as fast. If in the sixties extremely large, extremely complex software was developed for basic science, banking systems, and military systems, today it may be a web page for someone's pet; the task is incomparably simpler. But there is a third side of the coin: because the quantity grows, a different environment develops, and, following Hegel, quantity turns into a new quality.
The very "leap" to a new round of the Hegelian spiral is dictated by necessity, by virtue of Sedov's law of hierarchical compensation. To develop the "applied" thought: a multitude of different operating systems and their versions, browsers and other user-facing "bindings", technical components such as the JVM or .NET Framework version, "tools" in the broad sense of the word, physical and virtual environments, very different hardware. These are the realities of today.
It seems to me that the virtualization of the 60s and today's innovative test environments are just different turns of the same dialectical spiral. IT specialists and end users, searching for the optimum, swing from one extreme to the other while reaching a new, fundamentally different technological level. Sometimes the transition is so "acute" that we face the problem of an extremely sharp exit from the comfort zone; we begin looking for new ways to resolve the IT challenge, which turn out to be long-forgotten old ones, though they certainly need to be rethought in the new context, with the involvement of different professionals.
On the one hand, I cannot speak of "fundamental" changes; on the other, I cannot deny "qualitative" growth, since the number of different environments grows exponentially and there has always been the problem of their intersection and integration. It is one thing when we have n variants of the environment, and quite another when there are e to the power of n and we begin to "connect" them. This problem of combinatorics, and of the dynamic environment as a whole, is extremely pressing now.
I cannot give an unequivocal answer to such a complex question as originally posed; perhaps only a theoretical one that sounds like an excuse. The test environment today is the next round of the dialectical Hegelian spiral, where the diameter of the spiral gradually shrinks while the "step" grows; realizing the trend, you can foresee the "break" into the next round and use past developments in the new context to prepare for it in time. The main thing is not to reach the "dialectical vertical", or, to use the term from synergetics, the "Panov-Snooks vertical", in the field of environments. Either we build Skynet, or we climb to the top, cope once and for all with the combinatorial complexity by introducing a huge number of layers of abstraction, and forget how bare metal works and what system administration "by hand" is; but all roads from the top lead down, and both scenarios are utopian and pessimistic.
Virtualization
Take the same virtualization on which most cloud services are built, or containerization as a stage in the development of virtualization. Prototypes of virtualization appeared in the fifties; the first industrial, mass-produced virtualization, if my memory serves me, came out in 1964.

So virtualization cannot be called something new (the "novelty" is some fifty years old); on the other hand, in its current form, when there are many different virtualization engines and all of them need to be "connected" with each other, the challenge is fundamentally different. A separate class of applications has even appeared whose only task is to unite the different virtualization engines in a uniformly effective way and to build a single management API over them, no matter whether it is a low-level CLI or a common UI.
I will give a few indirect personal examples. For many years I worked on data protection solutions: endless variations of backing up and restoring data. One of the most popular tasks in the recent past, today and, I am sure, tomorrow is the "cunning" bare-metal restore.

An abstract situation off the top of my head: a physical machine on 32-bit Intel hardware running Windows 2000 is transferred by backup and bare-metal restore, in tens of minutes and ideally with one click, to 64-bit AMD hardware with Windows Server 2008... Now add a pinch of different virtualization engines to this rich mess... and then try to solve the problem "hot", without switching the machine off from the point of view of the consumer of the "service". Such non-trivial transformations are very much in demand.
Three years ago a real IT Klondike existed here: many large companies tried to squeeze into this gold-bearing market of bare-metal-restore solutions, including (the "youthful maximalism" is visible to the naked eye) my startup DPI.Solutions. As soon as Windows XP officially ceased to be supported, banks frantically began looking for a safe, fast, and feasible way to switch en masse to the new OS, since their internal policies did not allow them to remain on an OS without current security packs.

Due to the inertia of any large organization and the enormous cost of an OS upgrade, banks hoped until the last day that Microsoft would extend Windows XP support and did not initiate the transition. The result was a "fatal" situation for a bank: within two to six months, hundreds of thousands of machines, for every employee in every tiny branch, had to be moved to the next version of the OS. That's it, time to shoot yourself.

Specialized bare-metal restore solved exactly this problem within the stated "terrible" constraints: you take a backup of a machine with Windows XP and deploy it, completely ready for work, with all its "historical" information and settings, onto new or old hardware with a fresh Windows for which security updates still come out. Many companies specialized in this area. It is an indirect but vivid situation illustrating the current complexity of "environments".
Growing pains
It seems to me that formal preparation of the test environment has reached us only today largely because IT in the CIS is a "young" professional field; within the living memory of our teachers there was a generation gap.

At first there was Soviet IT: very powerful, bright, with its pluses and minuses. It is hard for me to judge that time; to put it mildly, these are secondhand stories. Then, for a while, we had no IT at all, or almost none. […]
[The next part of the answer is garbled in the source; the surviving fragments mention the revival of IT in the CIS after 2005, the demand for DevOps specialists, IT meet-ups, and the growth of the QA conference scene from one or two Quality Assurance talks at general events to 20-40 talks at dedicated conferences such as SQA Days.]
[Further fragments describe the COMAQA.by community, which has run QA Automation conferences and meet-ups for IT specialists for about four years, with events gathering up to 500 participants and including Docker-themed sessions, and the CoreHard community of "severe" C++ development, whose conferences gather around 350 participants and whose speakers have included the author of the "Boost C++ Application Development Cookbook", an engineer with 8 years at Microsoft on Bing, developers of the PVS-Studio static analyzer for C and C++, and a 20-year veteran of open-source C++.]
[A question and answer are lost here; the fragments mention professional certification (ISTQB), the COMAQA.by community, the Software-Testing.ru portal, and the SECR conference with its QA track.]
[Another question and answer are lost; the fragments concern the economics of test automation: ROI, ROI calculators, and communication with stakeholders.]
[A further exchange is lost; the fragments mention IT and DevOps trends, Docker, a Java stack with Web and Angular, 25 events, and the ExTENT 2015 conference, where DPI.Solutions and EPAM were represented.]
[Two more exchanges are lost. The second concerned Docker as against older container technologies such as OpenVZ, along with UI-level testing, APIs, and best practices.]
[The closing questions, about keeping production and the test environment aligned and about the future of test environments, are also garbled; the fragments refer back to the 60s, to Selenium-based tools, and to desktop applications.]
- Thank you for the answers.
You can buy tickets for the conference right now; registration is open.
In addition to the talks by Igor ("Autotests: the same, but better") and Anton ("'Good' and 'bad' variants of launching Selenium WebDriver tests in parallel"), we advise you to pay attention to these: