
Testing: Manual or Automated?

I want to share my experience of organizing the testing process, covering three years of my work and the building of several large systems. I will describe only the automation of "manual" testing, without touching other aspects of software development.

It's worth mentioning right away that at all stages we used:


Wherever I talk about test automation, I mean testing through the user interface, with connections to external resources (database, file system, services, etc.).


Step 1. No testers



On the first major project I took part in, there were no testers. Before deploying to the production server, the developers themselves checked the main parts of the system. With this approach, bugs frequently surfaced.

It was obvious to everyone that programmers love their code and are not very enthusiastic about testing a newly written feature. Testing previously written features, let alone features written by other developers, was out of the question. The programmers themselves would not re-run every scenario or click through every tab of every page.

In fact, on this project we could afford such an approach. Our users sat within reach: it was an internal automation project for the company. The operators and managers who worked with the system could come over, discuss the functionality, and point out errors. We could then make a quick fix and deploy the corrected version.

Error reports boiled down to shouts from behind the wall. The losses from bugs were insignificant, and the situation suited everyone.

Judging by the survey I conducted, "How is the testing process organized at your company?", most teams can afford exactly this approach to testing.

Step 2. Start test automation



We started building a new system that would be used not only by the people behind the wall but also by outside users. We needed to hire a tester who would protect us from bad releases.

I hired a tester, and right away we decided that testing should be automated and run much like unit tests. Automation was supposed to spare the tester from endlessly repeating the same scenarios.

The idea was simple and seemed to solve our problems. We wanted CI to launch the integration test suite at night. In the morning we would come in; if all the tests were green, we could make a release. If some tests were red, we would fix them and run the entire suite again, repeating until everything turned green.
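As a rough sketch of that nightly gate (our actual CI setup isn't shown here; the pytest command and log path below are assumptions), the whole scheme boils down to a scheduled script that runs the suite and records a verdict for the morning:

```python
# Hypothetical nightly gate: run the integration suite, record the verdict.
# The "pytest tests/integration" command and log path are assumptions.
import datetime
import subprocess

def nightly_run() -> int:
    started = datetime.datetime.now()
    result = subprocess.run(["pytest", "tests/integration"])
    verdict = "GREEN" if result.returncode == 0 else "RED"
    with open("nightly.log", "a") as log:
        log.write(f"{started:%Y-%m-%d %H:%M} {verdict}\n")
    # In the morning, a GREEN line means the release can go ahead.
    return result.returncode

if __name__ == "__main__":
    raise SystemExit(nightly_run())
```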

For recording tests we tried various options and settled on Selenium. At first, the tests were written directly against the Selenium API. Over time, a wrapper grew up above the low-level API, and this wrapper turned into a DSL for testing.
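Roughly, such a wrapper can look like the sketch below. All the names here (Browser, fill, should_see, the login scenario) are hypothetical illustrations, not our actual DSL:

```python
# A minimal sketch of a DSL-style wrapper over the raw Selenium API.
# Class and method names are hypothetical, not the project's real DSL.
from selenium import webdriver
from selenium.webdriver.common.by import By

class Browser:
    """Hides low-level WebDriver calls behind readable steps."""

    def __init__(self, base_url):
        self.driver = webdriver.Firefox()
        self.base_url = base_url

    def open(self, path):
        self.driver.get(self.base_url + path)
        return self

    def fill(self, field_name, value):
        self.driver.find_element(By.NAME, field_name).send_keys(value)
        return self

    def click(self, button_text):
        self.driver.find_element(
            By.XPATH, f"//button[text()='{button_text}']"
        ).click()
        return self

    def should_see(self, text):
        assert text in self.driver.page_source, f"Expected to see: {text}"
        return self

# A test now reads as a user scenario:
def test_login():
    (Browser("https://example.local")
        .open("/login")
        .fill("username", "operator")
        .fill("password", "secret")
        .click("Sign in")
        .should_see("Dashboard"))
```

The payoff of such a wrapper is that a test reads as a scenario rather than a pile of find_element calls, and when the markup changes there is one place to fix.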

While the tester was evaluating testing tools and building up the automation process, development did not stop. We had to release a new version every one or two iterations, so the tester was constantly playing catch-up with the programmers. His goal was 100% scenario coverage, so that we could release new versions without fear of bugs.

Step 3. Fight for green tests



The problems with test automation caught us by surprise; frankly, we were not ready for them.

Long feedback loop


The larger the system grew, the more tests we wrote, and the tests ran really slowly. If they finished within 14 hours, it was incredible luck. Over time, the tests no longer fit into the gap between 6:00 pm of one day and 9:00 am of the next. We got no feedback from the tests, and work stalled. Sometimes the power went out or the server rebooted, and then the lost time was even greater.

Green tests? Nope, never seen them


We never had green tests. Well, once we did: on December 30 they ran and came back 100% green. Perhaps that was their New Year's gift to us. It never happened again.

Why weren't they green? There were several reasons:



Because the tests were unstable and we had no time to keep them up to date, we had to:



Was the goal achieved?


Our goal was to see green integration tests and deploy to the production server without fear of bugs. In fact, we never achieved this goal.

The very first launch of the system under real conditions showed that the automated tests let a lot of bugs through. We found these bugs by manually poking at the system: ten minutes of manual testing uncovered the most critical ones.

I decided to stop all work on the integration tests and introduced manual testing with test scripts written in Google Docs. After that, we fixed the main bugs, and testing fit neatly into our Kanban flow.

Current state



At the moment, my company's team of testers and I follow this approach: manual testing plus test scripts written in Google Docs. Before each release, the testers manually walk through all the scripts.

Test scripts are considered part of the project artifacts and are delivered to the customer along with the source code.
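For illustration, one entry in such a script might look like this (a made-up example rather than one of our real documents):

```
Scenario: create a new order
  1. Log in as an operator.
  2. Open "Orders" -> "New order".
  3. Fill in the customer and at least one line item.
  4. Click "Save".
Expected result: the order appears in the list with status "New".
```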

In practice, this gives excellent results. Our releases contain only minor bugs, or bugs that are in fact features.

Against test automation?



You need to understand the limits within which test automation pays off. For example, if you write a bot that catches 404 and 500 errors on a live server, the effort is justified.
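A minimal sketch of such a bot, assuming a plain HTTP check over a hand-picked list of pages (the base URL and page list below are placeholders):

```python
# Hypothetical watchdog: request known pages on the live server and report
# any that come back 404/500. The base URL and page list are placeholders.
import urllib.error
import urllib.request

PAGES = ["/", "/login", "/orders", "/reports"]

def check(base_url):
    broken = []
    for path in PAGES:
        try:
            with urllib.request.urlopen(base_url + path, timeout=10):
                pass  # 2xx/3xx: the page is alive
        except urllib.error.HTTPError as err:  # urlopen raises on 4xx/5xx
            broken.append((path, err.code))
    return broken

if __name__ == "__main__":
    for path, status in check("https://example.com"):
        print(f"{status} on {path}")
```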

If you have a simple application that changes only occasionally, it makes sense to build a set of integration tests for it, but combine them with manual testing.

If you want to replace manual testing with 100% automation, think about how you will avoid the problems described above. Perhaps we should have started with manual testing and then automated only the parts where automation would give the maximum payoff without a heavy maintenance burden.

I will be glad to hear how testing has evolved in your company.




Links

Top Five (Wrong) Reasons You Don't Have Testers, Joel Spolsky
The Joel Test: 12 Steps to Better Code, Joel Spolsky
Monkey against robots. Part I (TestLabs09), Maxim Dorofeev
Monkey against robots. Part II (TestLabs09), Maxim Dorofeev
Monkey against robots. Part III (TestLabs09), Maxim Dorofeev
Organizing team work in Agile: development + testing

Source: https://habr.com/ru/post/145974/

