How do we at Tutu.ru keep each of our 9,000+ UI tests effective?
Any project, as it develops and grows, gains new functionality, and QA processes must respond promptly and adequately, for example by increasing the number of tests of every kind. In this report we will talk about UI tests, which play an important role in creating a quality product. A UI test automation system not only shortens regression testing, but also keeps tools and processes such as Continuous Integration and release engineering running efficiently.
The number of tests grows steadily: from 1,000 to 3,000, then to 6,000 and beyond 9,000. To keep this "avalanche" from burying the QA process, we have to think about the effectiveness of the whole system, and of every individual test, from the earliest stages of the automation project.
In this report I will show how to make the system flexible to requests coming from the business, and how to use each test effectively. We will also talk about evaluation and metrics, not only for the automation processes but for QA as a whole.
Report outline:
Let's start with the principles of "test-building" that make our system as user-friendly as possible;
Let's analyze ways of integrating the UI-testing system into the QA team's processes;
Let's look at specific techniques for improving the effectiveness of each test;
Let's talk about the metrics of the UI-testing system and their relationship to Continuous Integration and release engineering;
Requirements for the UI-testing system and the principles of "test-building"
We place the following requirements on the UI test automation system: it should be easy to use and intuitive; maintaining test coverage should not be time-consuming; the system should be resilient to errors in test code; and, finally, it should be highly performant.
Based on this, the first and most important principle is maximum readability of tests: each test should be understandable to any employee who can read English.
Use a high level of abstraction and correct naming of functions, variables, and so on, and enforce this during code review.
The next principle: each project should be as independent of the others as possible, so that each project can set its own goals and objectives and, while working on them, not interfere with the development of other projects.
All changes to the code must pass code review in your version control system. I advise using the same system your developers already use.
Use pre-push and post-commit hooks to protect the health of your system's "core". At a minimum, run unit tests in them.
Unit tests check the correctness of individual modules of the program's source code. It is important not to confuse unit tests with UI tests: they are not interchangeable; they complement each other.
The entire core of the project is covered by unit tests, more than 500 of them at the moment. We run them on every push and commit to the repository.
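As an illustration, a pre-push hook can be as simple as a script that refuses the push when the unit-test suite fails. Our hooks run the PHP unit tests; the sketch below uses Python, and the test command shown is a placeholder, not our actual command.

```python
"""Minimal pre-push hook sketch: refuse the push if unit tests fail."""
import subprocess
import sys

# Hypothetical test command; a PHP project would invoke phpunit here.
TEST_COMMAND = [sys.executable, "-m", "unittest", "discover", "-s", "tests"]

def run_unit_tests(command=TEST_COMMAND) -> int:
    """Run the unit-test suite and return its exit code."""
    return subprocess.run(command).returncode

# In the real .git/hooks/pre-push file the script would end with
#   sys.exit(run_unit_tests())
# because a non-zero exit code makes git abort the push.
```

The same script can be symlinked as a post-commit hook, since git passes no arguments the script depends on.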
UI testing in QA team processes
How did we integrate the UI testing system into team and product processes? The main goal: every test should bring benefit from the earliest stage of development. As soon as a test is written, it goes straight into the Continuous Integration system. Maintaining test coverage should be a standard part of testing a task. That is why there are no testers at Tutu.ru who do only manual testing; every specialist is engaged in the full range of testing.
A task must not be merged into master if it breaks any tests. Keep this in mind constantly, even if the customer is in a hurry.
The labor cost of each stage of the QA process must be monitored. Here are some of our graphs: a breakdown of labor costs for one team's release cycle and its key stages, in man-hours. The teams show different results, but their goal is the same: to reduce labor costs while maintaining the declared level of quality.
Breakdown of one team's release cycle in man-hours
Man-hours for the main stages of one team's release cycle
UI tests in the release cycle
The most important thing is that testing should run as often as possible, at each stage of a task's development. Each stage must pass with tests as "green" as possible, and we watch for this. For some stages we use specially assembled test kits. Note that each kit is a sample from the common pool of tests, so tests from different kits may overlap.
Any development begins with the story branch stage. Here we run a test suite assembled by testers. Anyone can run it: a developer, a tester, or an analyst. Test coverage is updated at this stage, with the QA department employee responsible for testing the task taking part.
The next stage is pre-rc, the "nightly build". A special branch is deployed to a test stand every day, and the entire pool of tests, more than 9,000 of them, runs against it. Every team uses the results of this run. At this stage the test coverage receives its final refinement.
The next stage is RC. This is our release process; we release twice a week. A special release test suite is used here, and at this step almost all tests should be "green"; if something is wrong, it is fixed.
The final stage is the release (stable). The release also uses the release test suite.
Project support
A separate role is project support: promptly solving the QA team's problems. Ongoing support of the tool is very important for the team. We use a Service Desk so that every employee can get support when using the system.
We improve the effectiveness of each test
In this section I will talk about specific tools that, step by step, made UI tests more convenient to use and thereby increased the efficiency of the system.
Test container control and management project
There are a lot of tests. The multi-threaded launch system consists of more than 150 test containers that need monitoring. We built a tool that lets you manage test threads, provides workload information, and integrates tightly with the Runner module.
UI Test Container Management Interface
Our own test runner
We wrote our own Runner for UI tests. Low resource consumption was paramount for us, and we need flexibility in development to respond to business requirements. The Runner balances the load on the system, taking launch priorities into account: launches in the release cycle and in the devel environment have different priorities, and the Runner balances them accordingly. It also integrates tightly with the other modules of the system.
Internal organization
A special PHP script generates a queue of tests from the repository. The queue goes to our multi-threaded launch module, which forks a separate PHP process for each test. Each process has access to a database from which it receives the user credentials under which the test will be executed.
Schematic of the Runner module
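The real runner is a PHP script that forks a process per test; the Python sketch below uses a thread pool and an in-memory "database" (both stand-ins of mine) just to show the overall shape: a queue fanned out over parallel workers, each fetching its own user credentials.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins: the real runner builds the queue from the
# repository and reads per-test credentials from an actual database.
TEST_QUEUE = ["test_search", "test_booking", "test_payment"]
USER_DB = {
    "test_search": "user_a",
    "test_booking": "user_b",
    "test_payment": "user_c",
}

def execute_test(test_name: str):
    """Worker body: look up credentials for this test and 'run' it."""
    user = USER_DB[test_name]  # in reality: a database query
    # ... the UI test itself would execute here, logged in as `user` ...
    return (test_name, user, "passed")

def run_queue(queue, workers=2):
    """Fan the test queue out over a pool of parallel workers."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(execute_test, queue))
```

In the PHP original, each worker is a forked process rather than a thread, which isolates a crashing test from its siblings.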
Test cases and test suite management
We keep test documentation next to the test code, linking UI tests and test cases together. This is especially convenient when generating reports: each test has a description from which you can quickly understand what risks it covers. We implemented this functionality using PHPDoc.
For test cases already covered by code, the case tag is used.
For test cases not yet implemented in code, the todocase tag is used.
For test cases that must be performed only manually, the manualcase tag is used.
Test coverage calculation
We also use the tag mechanism to automatically compute the UI coverage of each project, using the formula: coverage_percent = (1 - (todocase_count + manualcase_count) / total_tests_in_project) * 100%.
Console output with the UI-test coverage percentage for the "Buses" project
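A minimal sketch of that calculation (the counts in the example are illustrative, not real project numbers):

```python
def ui_coverage_percent(todocase_count: int, manualcase_count: int,
                        total_tests: int) -> float:
    """coverage% = (1 - (todocase + manualcase) / total) * 100"""
    return (1 - (todocase_count + manualcase_count) / total_tests) * 100

# e.g. 200 test cases, 30 still to automate, 10 manual-only:
# ui_coverage_percent(30, 10, 200) -> 80.0
```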
Test Suite Management
We use the same mechanism to run test suites. For example, we have suites for specific functionality and suites for the release and RC cycles; in general, the creation of suites is limited only by the imagination of QA specialists. Each test can belong to several suites; we mark them with the @labels tag.
A test belonging to the release test suite
An example of running a test suite for the success-page functionality
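The same docblock mechanism drives suite selection. Sketched below in Python with a hypothetical in-memory registry: a test belongs to every suite named in its @labels tag, and running a suite means picking every test that carries the requested label.

```python
# Hypothetical registry: test name -> set of labels from its @labels tag.
TESTS = {
    "test_success_page": {"release", "success-page"},
    "test_search_form": {"release", "search"},
    "test_seat_map": {"rc", "search"},
}

def select_suite(label: str, tests=TESTS):
    """Pick every test whose @labels tag contains the requested suite."""
    return sorted(name for name, labels in tests.items() if label in labels)
```

Because membership is a label on the test rather than a separate list, adding a test to three suites is one line in its docblock, not three edits in three suite files.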
HTML report
The report is generated individually for each launch. Any tester can trigger a launch manually and receive the result as an HTML page, so a QA specialist can quickly assess the quality of the product. The HTML report reduces the time needed to update tests. Reports also live in the CI tool, but sometimes it is useful for a tester to see the report in their own working copy.
"Soft asserts"
PHPUnit, like any other unit testing framework with assertions, works like this: if an assertion encounters a data mismatch, the test stops. We changed this paradigm. Soft assertions, as we call them, do not interrupt the test when they hit a problem; the test continues through all remaining assertions and ultimately finishes with an error at the teardown stage. Thus soft asserts let a single test report on the quality of a whole block even when that block has problems. Such asserts are also safer in complex test logic. For example, we have a test that places an order with a real bank card for real money, and we don't want that test to "fall over" somewhere after placing the order without having time to cancel it.
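Our implementation lives in PHPUnit; below is a minimal Python sketch of the same idea, with a hypothetical order-test scenario: failures are recorded instead of raised, the test runs to the end (so cleanup such as cancelling the order still happens), and everything is reported together at teardown.

```python
class SoftAssert:
    """Collect assertion failures instead of stopping at the first one."""

    def __init__(self):
        self.failures = []

    def check(self, condition: bool, message: str):
        """Record a failure, but let the test keep running."""
        if not condition:
            self.failures.append(message)

    def verify_all(self):
        """Called at teardown: fail once, reporting every problem."""
        if self.failures:
            raise AssertionError("; ".join(self.failures))

# Hypothetical bank-card scenario: even though a check after the
# order fails, the test still reaches the step that cancels the order.
def run_order_test(soft: SoftAssert) -> bool:
    soft.check(True, "order was placed")
    soft.check(False, "confirmation email received")  # a soft failure
    order_cancelled = True  # the cleanup step still executes
    return order_cancelled
```

In the real system, `verify_all` is what runs in the teardown phase, so a launch report shows every broken check in the block, not just the first.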
Flexible startup settings
Our launch tool was created to meet the needs of QA specialists, in particular to integrate well with CI. It is written with the Symfony Console and currently has more than 30 parameters.
A few of them:
On-demand. Some tests are high-risk and are not run automatically; they start only if the tester specifies them in the "on-demand" launch parameters.
Bug-skipped tests. Tests blocked by a problem in the product can be marked with a special label, and such tests will not run in the CI system. To catch the moment when the product fix lands and the test can be reactivated, we have a special plan that runs once a week and executes only the currently deactivated tests.
Js-error-seeker. This feature is especially useful for front-end developers. Front-end developers and testers use it to check for JS errors during a test; with this mechanism we can catch JS errors along the test's entire path.
Notify-maintainers. Each test can have a maintainer: the employee responsible for the test who wants to be notified when it fails. This flag enables that notification.
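The real tool is a Symfony Console command with 30+ options; the Python argparse sketch below shows a few of the flags described above. The flag names and spellings are illustrative, not the tool's actual option names.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """A few illustrative launch options of a UI-test runner CLI."""
    parser = argparse.ArgumentParser(prog="ui-test-runner")
    parser.add_argument("--labels",
                        help="run only the test suite with this label")
    parser.add_argument("--on-demand", action="store_true",
                        help="include high-risk tests that never run automatically")
    parser.add_argument("--bug-skipped", action="store_true",
                        help="run only tests currently deactivated by a bug label")
    parser.add_argument("--notify-maintainers", action="store_true",
                        help="notify each failing test's maintainer")
    return parser
```

A typical invocation under these assumed names would look like `ui-test-runner --labels release --notify-maintainers`.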
Metrics
The test container control and management project can monitor system load, revealing the system's growth points. We also chart test-run duration on different stands over time. We set a limit of 60 minutes; if any project exceeds it, that is a reason to go to that project and understand why its test suite takes so long.
Production test time
Release engineering
Here we will talk about how the UI testing tool should interact with the release engineering project.
Here is what the combination of the UI testing system and Bamboo can do: run builds, generate reports, maintain the release process, and automatically launch builds on a schedule. There is also automatic "rollback" of a release if the smoke test suite shows that something went wrong.
In the CI system there can be a large number of different plans; this is absolutely normal, don't be afraid of it.
Conclusions
Each test should run as often as possible;
The mutual integration of the UI-testing system's modules is very important; it is worth writing your own implementation to achieve it;
The system must be flexible and maintainable; ready-made tools do not always meet these demands;
Watch the performance of QA-processes and improve them as soon as you see that something is going wrong;
QA should be part of the development process, our tools should support this.