
Tips and tricks for deploying a test automation process from scratch

Foreword


The tips and recommendations below grew out of the experience of building testing and automation from scratch at two companies and starting similar work at a third. So I lay no claim to the laurels of an all-knowing specialist; this is rather an attempt to share experience, shaped into a step-by-step guide on a recently topical subject: deploying test automation in a company.

If you decide to read on, keep in mind that little space is given here to writing the autotests themselves in a programming language or to choosing tools for your specific project, because these cannot be unified into a strict list of which tools suit which projects. Here, of course, you will have to dig on your own.

But how to approach testing in general, where to start, how to think through a test plan and begin forming test cases, how to select tests for further automation, how to estimate the time the work will take, and whether you need automation at all: all of this is described below.
P.S. Finally, this text would never have taken shape were it not for the useful lectures of Alexei Barantsev and Natalia Rukol, as well as the sea of information written by good people on this topic in recent years.

That is all; you have been warned, so the story can begin.

Part 1 - Deploy Test Automation


1. Choosing a test automation strategy (hereinafter, AT)


There are several generally accepted AT strategies. The order and intensity of the AT work depend on the chosen strategy. Choosing one is not the most important task, but it is the best place to start the automation rollout. I will give three strategies typical for the very beginning of an automation deployment. There are, of course, more options; the complete list can be found in Natalia Rukol's seminars.

1.1 “Let's try” strategy

It is used when there has never been AT in the project or in the company, and a cautious start with a moderate allocation of resources is planned.

It makes sense to apply the strategy when:


Description of the strategy:


1.2 The "Here is the target" strategy

The distinguishing feature of this strategy is its orientation toward a specific result: a goal for the new stage of AT is chosen, and the tasks are oriented toward achieving that result.

It makes sense to apply the strategy when:


Description of the strategy:


1.3 The "Operation Uranus" strategy

In essence, this strategy is constant, methodical work on AT according to priorities that are set once every 2-3 weeks. Optimally, there is a person working on automation permanently who is not much distracted by third-party tasks.

It makes sense to apply the strategy when:


Description of the strategy:


Summarizing:

It is worth thinking through the general logic and strategy of automation; I would suggest the following option. For the first month (3-4 weeks), use the "Let's try" strategy and prepare the foundation for further work, without diving deeply into writing the test code itself or into specific modules. At the end of this stage you will have a ready basis for further work. Then choose how it will be convenient to work further, roughly speaking waterfall or Agile, and continue acting according to the chosen strategy.

2. Parallel tasks


This item matters if several people are working, or will be working, on testing the project; then parallelizing tasks within the team becomes a crucial point. If one person on your team will work on AT, you can safely skip this item.

From the point of view of competences and related knowledge, the test automation process can be divided into roles, each encapsulating a set of similar tasks.

Roles

Architecture


Development


Test design


Management


Testing

If several people work on testing the project, it is logical to distribute the roles described above among specific people. In this case it makes sense to assign the "Management" role to one person, to share the "Test Design" and "Testing" roles among everyone, and to give the "Architecture" and "Development" roles to one or two heroes.

The logic is as follows.

  1. There is a designated test manager for the project who plans the work, sets deadlines, and is responsible when they are not met.
  2. There are two common types of testers: manual testers and automation engineers. The tasks of the "Test Design" and "Testing" roles are equally relevant for both types. Accordingly, all testers write and design tests that can later be used both in manual testing and in automation.
  3. Then the manual testers carry out manual testing according to the created test plans and test cases, while the automation specialists bring the necessary tests into a form suitable for development and do the automation.

However, if you have a one-man orchestra, he will do everything at once, but he will not be a professional at everything.

3. Creating a test plan


After choosing the AT strategy, the next important point is the starting point of the work: creating a test plan. The test plan must be agreed with the developers and product managers, since errors made at the test plan stage can come back to haunt you much later.

Ideally, a test plan should be made for any reasonably large project that testers work on. I describe a less formalized test plan than the variant usually used in large companies; for internal use, the sea of formalities is not needed.

The test plan consists of the following points:

3.1 Object of testing.

A brief description of the project and its main characteristics (web/desktop, UI on iOS or Android, which browsers/OSes it works in, and so on).

3.2 Composition of the project.

A logically structured list of the project's components and modules, separate and isolated from one another (with possible decomposition, but without going into details), as well as functions outside the large modules.

For each module, list the set of available functions (without delving into the little things). The manager and test designer will start from this list when defining testing and automation tasks for a new sprint (for example: "changes were made to the data editing module, the file upload module was affected, and the function for sending notifications to the client was completely redone").

3.3 Testing strategy and planned types of testing on the project.

The strategies are described in section 1. In automation, usually only one type of testing is used: regression (deep testing of the entire application by running the previously created tests). Broadly speaking, autotests can be used in other types of testing too, but until they reach at least 40% coverage there will be no real benefit from that.

However, if the test plan is meant to be used not only by automation engineers but also by manual testers, then you need to think through the entire testing strategy (not only automation), select or mark the desired types of testing, and write down this paragraph as well.

3.4 Sequence of testing.

How the preparation for testing, the estimation of task deadlines, and the collection and analysis of testing statistics will be carried out.
If you have no idea what to write in this paragraph, you can safely skip it.

3.5 Criteria for completing testing

Briefly describe when testing is considered complete for a given release. If there are specific criteria, describe them.

Summarizing:

A test plan is necessary; without it, all further automation will be chaotic and unsystematic. If in manual testing (very bad manual testing) you can get by without a test plan and test cases and use monkey testers with relative success, in automation this will not work.

4. Definition of primary tasks


After choosing a strategy and drawing up a test plan, you should choose the set of tasks with which to begin test automation.

The most common types of tasks that are set before automation:


At the very beginning of the automation rollout, I recommend setting the task of automating acceptance testing, as the least time-consuming. Solving it will allow acceptance testing to start on the very next accepted build.

The main criteria for smoke tests should be their relative simplicity combined with mandatory coverage of the project's critical functionality.

It is also implied that smoke tests will be positive (verifying the correct behavior of the system, whereas negative tests check how the system behaves on incorrect input), so as not to waste time on unnecessary checks.

Summarizing:

When making the list of primary automation tasks, it is logical to describe and automate the smoke tests first. Later they can be included in the project and run with each build. Thanks to their limited number, these tests should not noticeably slow down the build, and each time you will know for certain whether the critical functions still work.
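To make the selection mechanical, smoke tests can be tagged and run as a separate subset on every build. Here is a minimal sketch in plain Python; all names are made up, and real frameworks provide tags, suites, or annotations for exactly this purpose:

```python
# Illustrative sketch: tests register themselves with tags, and the build
# step runs only the "smoke" subset.
SUITE = []

def test(*tags):
    """Register a test function under a set of tags."""
    def register(func):
        SUITE.append((func, set(tags)))
        return func
    return register

@test("smoke")
def test_login_page_opens():
    assert True  # e.g. open /login and check the form is present

@test("regression")
def test_export_to_csv():
    assert True  # slower, deeper check: not part of the smoke run

def run(tag):
    """Run only the tests carrying the given tag; return their names."""
    executed = []
    for func, tags in SUITE:
        if tag in tags:
            func()
            executed.append(func.__name__)
    return executed
```

A CI step then simply calls `run("smoke")` on every build and the full set only on a schedule.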

5. Writing test cases for selected tasks


With regard to test cases, it is customary to divide the testing process into two parts: testing by ready-made scenarios (test cases) and exploratory testing.

With exploratory testing everything is fairly clear: it exists in two variations, either the study of new functionality without much prior preparation, or banal poking around. Scenario testing implies that time has been spent creating test scenarios that cover as much of the project's functionality as possible.

The most sensible approach, from my point of view, is a reasonable combination of the two: new functions and modules are tested in an exploratory style, trying both likely and unlikely scenarios, and when that testing is complete, test cases are created and then used for regression testing.

Three options for further use of test cases, except for the obvious:


I will not describe the principles of writing test cases in detail; there are plenty of materials on this topic on the net. Briefly:

A good test case consists of the following items:

  1. Name (description): a very brief statement of what the test checks.
  2. Preliminary state of the system: a description of the state the system must be in when the test case starts.
  3. Sequence of steps: sequentially described actions that verify the goal stated in the name.
  4. Expected result: the state of the system we expect after the sequence of test case steps has been executed.
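The four items above can be sketched as a small data structure; the fields and the sample case are purely illustrative, not a format prescribed by any test management tool:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str                                  # 1. what the test checks, very briefly
    precondition: str                          # 2. required state before the test starts
    steps: list = field(default_factory=list)  # 3. sequentially described actions
    expected: str = ""                         # 4. state of the system we expect afterwards

# A hypothetical example case:
login_case = TestCase(
    name="Login with a valid account",
    precondition="User 'demo' exists; the user is logged out",
    steps=[
        "Open the login page",
        "Enter login 'demo' and a valid password",
        "Press 'Sign in'",
    ],
    expected="The dashboard page opens and shows the user's name",
)
```

Keeping cases in a uniform shape like this pays off later, when the same records feed both manual runs and automation.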

There are many solutions for conveniently storing test cases. Of those I have used, the Testlink application proved quite good, and the best was the sitechco.ru system, a convenient free system for creating, storing, and tracking test cases.

Summarizing:

For further AT, you need to write test cases for the tasks set out in section 4. They will simultaneously serve as the beginning of proper regression testing and as the basis for the future autotests.

As a recommendation, a tester planning to write test cases should read about pairwise testing, equivalence classes, and test design techniques in general. Having studied these topics at least superficially, it becomes much easier to write good and useful test cases.

6. The choice of tools for automation


Obviously, AT tools are selected depending on the platform the application runs on.

I will give an example of choosing a toolkit for a project consisting of two parts: a backend on AngularJS and a frontend client for tablets and phones based on iOS.

1. Backend

Karma + Protractor (Jasmine).

Pros: I recommend the Protractor tool as the shell; it is ideal for applications written in AngularJS. Protractor simulates user interaction and runs autotests created with the BDD framework Jasmine, while Karma allows these tests to be run in different browsers.

Cons: the tester must be able to write at least simple JS scripts, or a programmer must write these scripts for him, which as AT grows can become an overhead.

Selenium Webdriver.

Pros: a convenient, simple, and reliable tool for automating GUI testing of web applications. A lot of documentation, an abyss of examples; on the whole, it is convenient. In its most primitive form it requires no programming knowledge from the tester.
Cons: Protractor is written by the AngularJS team specifically to test AngularJS, while Selenium is universal, so from my point of view it will be more convenient to write tests with Protractor + Jasmine on an AngularJS project. If serious standalone automation is planned, rather than just assistance to manual testers, the tester will still need to know a programming language (Java, Python, Ruby, C#), since flexible test configuration requires programming knowledge.

2. Frontend

Calabash + Cucumber.

By and large, the most convenient tools for automating iOS applications on tablets and phones are Calabash + Cucumber. Calabash is a framework for automating functional testing: in essence, the driver that controls the application on a device or simulator. Cucumber provides the test infrastructure (running tests, parsing scenarios, generating reports).

It should be borne in mind that Calabash is a paid solution (https://xamarin.com/test-cloud).

Summarizing:

The automation tools described above are far from the only ones available, and I would recommend that anyone setting up this infrastructure and deploying AT in a company dig around the net: you may well find something newer and more convenient than the instruments I chose.

7. Selection of tests for automation


So, at the current stage we have formed a test plan and described part of the modules' functionality as test cases. The next task is selecting the necessary tests from the existing variety of test cases. Right now you only have the test cases prepared for smoke testing, but after a few development iterations there will be significantly more test cases in the project, and not all of them make sense to automate.

1. The following things are very difficult to automate:

  1. Checking that a file opens in a third-party program (for example, checking the correctness of a document sent for printing)
  2. Checking image content (there are programs that partially solve this problem, but in the simple case such tests are best left for manual testing rather than automated)
  3. Checks related to AJAX scripts (this problem is easier to solve, and different applications have their own solutions, but on the whole AJAX is much harder to automate).

2. Disposal of monotonous work.

As practice shows, testing even a single function may require several test cases. For example, we have an input field that accepts any two-digit number: it can be checked with 1-2 tests ("2 characters", "1 character"), and for a more thorough check we add tests for an empty value, zero, the boundary values, and a negative test with letter input. The advantage of autotests over manual testing here is that once we have one test that checks data entry into the field, we can easily increase the number of checks by changing the input parameters.
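That two-digit field can be sketched as one parameterized check; the validator below is a hypothetical stand-in for the application under test:

```python
def accepts_two_digit_number(text):
    """Stand-in for the field under test: accepts the values 10..99 as text."""
    return text.isdigit() and len(text) == 2 and not text.startswith("0")

# One table of (input, expected) pairs replaces a pile of near-identical
# manual test cases.
CASES = [
    ("42", True),    # plain two-digit value
    ("7", False),    # 1 character
    ("", False),     # empty value
    ("00", False),   # zero padded to two characters
    ("99", True),    # upper boundary
    ("ab", False),   # negative test: letters instead of digits
]

def run_cases(check, cases):
    """Return the inputs whose actual result differs from the expected one."""
    return [value for value, expected in cases if check(value) != expected]
```

Adding another check is now one more line in `CASES`, which is exactly the advantage over repeating the manual steps by hand.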

By and large, autotests should close off exactly the most tedious and monotonous part of testing, leaving the testers room for exploratory testing.

Accordingly, when choosing test cases for automation, this should also be taken into account.

3. Ease of tests.

The last important criterion for selecting test cases for automation is the relative simplicity of the tests. The more diverse steps a test contains, the worse the test case itself is, the harder it will be to automate, and the harder it will be to find the bug if the autotest fails at launch.

Try to pick small test cases for automation, gradually gaining experience and automating more and more complex test cases, until you work out what test length is optimal for you.

8. Design tests for automation
Test cases selected for automation will most likely need additions and corrections, since test cases are usually written in plain human language, while test cases intended for automation should be supplemented with the necessary technical details for ease of translation into code (with time, an understanding will come of which tests can be described in a living language and which should be described in detail and precisely already at the test case creation stage).

Accordingly, it is possible to form the following recommendations on the content of test cases intended for automation:

1. The expected result in automated test cases should be described very clearly and specifically.


2. Take into account how the browser and the application running the tests are synchronized.

For example, the test clicks a link, and the next step acts on the new page. The page may take a long time to load, and the application, without waiting for the needed element to appear, will fail with an error. This is often easily solved by setting an element-load wait parameter.
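The usual cure is an explicit wait: poll for the needed state instead of acting immediately. A generic sketch in plain Python (Selenium's WebDriverWait and similar tool-specific helpers implement the same idea out of the box):

```python
import time

def wait_until(condition, timeout=10.0, interval=0.5):
    """Poll condition() until it is truthy or the timeout expires.

    Returns True as soon as the condition holds, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return bool(condition())  # one last check at the deadline
```

A test step then reads, for example, `wait_until(lambda: page_has_element("login-button"))` before the click, where `page_has_element` is whatever lookup your tool provides.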


3. Do not hard-code values in the test case.

Unless it is really necessary. In most cases, suitable data is determined when the test environment is created, so it is more practical to pick the values when creating the autotest.


4. Automated test cases should be independent.
There are exceptions to any rule, but in most cases you should assume that you do not know which test cases will be executed before and after yours.
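A sketch of what independence looks like in practice, using Python's standard unittest: every test builds its own state in setUp and cleans up in tearDown, so the order the runner picks does not matter. The in-memory dictionary here stands in for whatever state your tests actually touch:

```python
import unittest

class UserTests(unittest.TestCase):
    def setUp(self):
        # Fresh state for every test, never inherited from a previous one.
        self.db = {"users": ["demo"]}

    def tearDown(self):
        # Leave nothing behind for the next test.
        self.db.clear()

    def test_user_exists(self):
        self.assertIn("demo", self.db["users"])

    def test_user_can_be_removed(self):
        # Mutates state, but thanks to setUp this cannot break the other test.
        self.db["users"].remove("demo")
        self.assertEqual(self.db["users"], [])
```

Because neither test assumes anything the other one did, the suite passes in any execution order.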


5. Carefully study the documentation for the tools you use.

This way you can avoid a situation where, because of an incorrectly chosen command, a test case becomes falsely positive, i.e. passes successfully even when the application is not working correctly.

Summarizing:

A correctly written test case intended for automation looks much more like a miniature technical specification for a small program than like a human-readable description of the correct behavior of the application under test. Below I show a few test cases reworked for automation; the rest, I think, the project's tester will be able to rework himself by the rules described above.

9. Configure the application stack for automation


The next step (or a parallel task, if there are several specialists) is deploying the stack of applications that we will use in further work on creating and running autotests.
I will not describe the installation options in detail; all the information is on the net, and for each option I attach 1-2 links from which to start searching for a solution.

Backend

1. Karma + Protractor (Jasmine)

- Karma + Protractor - Great tool deployment guide - mherman.org/blog/2015/04/09/testing-angularjs-with-protractor-and-karma-part-1/#.VpY21vmLSUk
- Protractor + Jasmine - Install and configure Jasmine engineering.wingify.com/posts/e2e-testing-with-webdriverjs-jasmine

If you choose this scheme, you will need to make Karma friends with one of the Continuous Integration systems so that the tests run automatically. I offer the two options that seemed most interesting to me: Jenkins and Teamcity.

- Teamcity - The solution is quite simple, which consists in installing the karma-runner-reporter plugin;
- Jenkins - Similarly - a simple solution, installing the karma-jenkins-reporter plugin.

2. Selenium Webdriver

The solution itself is not too elegant, as noted above. If you nevertheless decide to go the simple way, it is enough to install:

- Selenium IDE;
- if the tests obtained from the IDE are clearly not enough, read up on the principles of working with Selenium Webdriver.

After installing the tools, it remains to hook them up to a Continuous Integration system; as before, the options considered are Teamcity and Jenkins.

- Teamcity: export the tests from Selenium IDE into one of the supported languages (C#, Java, Python, Ruby) and run them under Teamcity.
- Jenkins: similarly.

Frontend

1. Calabash+Cucumber

- install Calabash;
- install Cucumber.

After that, Calabash needs to be hooked up to the Continuous Integration system:

- Teamcity;
- Jenkins.

:

— — — , . , , — .

10.


. , «», , , . , , , , , .

, — .
, .

, , , — . . , , .



11.


— , , — :


.

Good luck!

11


, , — .

-, , , , .

, , maintenance sustain . , , .
, !

!

, !

Part 2


12.


- , , . , , . , , - .

:

1. The calculation.

For the estimate we need the following values:

  1. The time spent preparing the automation infrastructure (environment, framework, and so on): TAuto.
  2. The time spent designing and writing one manual test case: TMan.
  3. The time of one manual run of a test case (including analysis of the result): TManRun.
  4. The time spent turning one test case into an autotest: TAutoRun.
  5. The time spent supporting one autotest and analyzing its result per run: TAutoMull.
  6. The number of test cases: N.
  7. The number of test runs: R.

We get:

TManTotal = N*TMan + N*R*TManRun
TAutoTotal = TAuto + N*TAutoRun + N*R*TAutoMull


Accordingly, automation makes sense when TManTotal >= TAutoTotal.

, , , .
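The formulas above are easy to wrap in a small calculator. The variable meanings in the comments are assumptions consistent with the formulas, not an official definition:

```python
# Break-even sketch for the two totals above. Assumed meanings: TMan and
# TManRun are per-test-case manual costs, TAuto is a one-time automation
# setup cost, TAutoRun is the per-test automation cost, TAutoMull is
# per-test maintenance per run, N is the number of test cases, R the
# number of test runs. All times in the same unit (e.g. hours).
def manual_total(n, r, t_man, t_man_run):
    return n * t_man + n * r * t_man_run

def auto_total(n, r, t_auto, t_auto_run, t_auto_mull):
    return t_auto + n * t_auto_run + n * r * t_auto_mull

def automation_pays_off(n, r, t_man, t_man_run, t_auto, t_auto_run, t_auto_mull):
    """Automation is worth it when the manual total reaches the automated one."""
    return manual_total(n, r, t_man, t_man_run) >= auto_total(
        n, r, t_auto, t_auto_run, t_auto_mull
    )
```

For example, with N=50 cases, R=20 runs, TMan=0.5, TManRun=0.25, TAuto=40, TAutoRun=1.0, TAutoMull=0.05 (hours), the manual total is 275 against 140 for automation, so automation pays off; with R=1 it does not.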

2.

( — ) .

, , :

1. .

, , . , . . . And so on.

( ), . , ?

, .

- — .

2. .

, , - . , , , , . , , .

3. .

, , , . , .

, .

— , , . . — , , .

4. .

.

:

:


, .

:


, — .

:


, ( , , ), , , . . , CI . .

:



:

, . . .

13.


( ), , . , — , .

, :

1. .

, . : , , , .

, , :


, , . :


2. Replicable tasks.

If we can collect statistics on the execution of similar tasks assigned to us, then the tasks are replicable. These are usually tasks for creating autotests without using new types of tests, expanding autotest coverage, and regular tasks for supporting tests and infrastructure.

Such tasks are quite simple to estimate, because tasks like them have already been completed and we know their approximate execution time. What helps us:
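A sketch of such statistics-based estimation in plain Python; the task names and durations are invented for illustration:

```python
import statistics

# Durations (in hours) of previously completed tasks of each kind.
HISTORY = {
    "write smoke autotest": [2.0, 3.0, 2.5, 3.5],
    "fix broken locator": [0.5, 1.0, 0.75],
}

def estimate(task_kind):
    """Estimate a replicable task from history.

    Returns (mean, spread) of past durations in hours; the spread hints
    at how much to pad the estimate.
    """
    durations = HISTORY[task_kind]
    spread = statistics.stdev(durations) if len(durations) > 1 else 0.0
    return statistics.mean(durations), spread
```

Even a few data points per task kind turn "gut feeling" into a defensible number, and the spread shows which estimates are still shaky.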


:

, , .

:

, , . , , !

Source: https://habr.com/ru/post/275171/

