In the countries of the former USSR, a fairly definite attitude has formed towards the tester as a supporting role:
- Anyone who can confidently press buttons is considered fit for the role of a tester
- Testers rarely take part in the fate of the project or in decisions about requirements and deadlines
- Testers are brought in as late as possible, when it's time to “click around” and “find bugs”
- With the exception of a small number of product companies, most employers offer testers a salary 1.5-2 times lower than developers'.
Why this happens is easy to understand: very few people have ever met a qualified tester in person, testers don't produce a tangible result (they don't write the product), and in general it is customary here to save on everything possible. A more interesting question is
what this approach leads to. Let's look at some examples.
Let's save on salary
Everything is clear here: to make a project more profitable, you need to sell it better and spend less on producing it.
OK, let's hire an inexpensive testing specialist (most often a student or a fresh graduate) who will poke at buttons and log bugs into the tracker. When the amount of work grows and he stops coping, we hire a second one, and gradually build up a whole testing department. At first glance everything looks fine, if not for a few trifles:
- Unqualified testers file bugs without any detailed analysis. Such bugs are hard to localize and fix. Suppose that for each of the 50 bugs filed in a week, developers spend an extra 15 minutes on average figuring out what was actually meant. Seems like an imperceptible trifle, but 50 bugs × 15 minutes is about 12.5 hours a week, roughly 600 developer hours a year, or about 3.5 person-months!
- Over time, as the amount of supported code grows, you need regular regression testing to check that nothing is broken. Your student testers diligently and methodically run the same tests from release to release, occasionally finding bugs (hello, pesticide paradox!). When someone (most often the PM) gets the idea of automation, these guys will write you several hundred copy-pasted scripts, and then, instead of doing manual testing, they will spend every release updating the autotests (a sketch of what this copy-paste automation looks like follows this list). At some point, just to keep all this happiness running, you need even more testers: somewhere between “a lot” and “a whole lot”. And a lot is always expensive.
- In the end, despite the growing number of testers, you will keep running into bugs in production: arguing with users, losing their trust, wasting time. In other words, you will keep burning excess resources.
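To make the copy-paste problem concrete, here is a minimal sketch in Python/pytest. The `login` function and the credentials are hypothetical stand-ins for whatever the product actually exposes; the point is the structure of the test code, not the API.

```python
import pytest


# Hypothetical stand-in for the system under test; a real suite would drive
# the actual product (API or UI) instead.
def login(user: str, password: str) -> bool:
    return (user, password) in {("admin", "admin123"), ("user", "user123")}


# --- The copy-paste approach: one near-identical script per case,
# --- each edited by hand whenever the product changes.
def test_login_admin():
    assert login("admin", "admin123") is True


def test_login_user():
    assert login("user", "user123") is True


def test_login_wrong_password():
    assert login("user", "wrong") is False


# --- The same checks as a single parameterized test: a new case is one
# --- extra row, and a product change is fixed in one place.
@pytest.mark.parametrize(
    "user, password, expected",
    [
        ("admin", "admin123", True),
        ("user", "user123", True),
        ("user", "wrong", False),
    ],
)
def test_login(user, password, expected):
    assert login(user, password) is expected
```

With hundreds of scenarios, the difference between these two shapes is the difference between a department that spends every release editing scripts and one that adds a row to a table.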

Sooner or later every PM comes to understand that testers must be sufficiently qualified, and finds stronger people for the team (who, however, cannot always dig through and clean up all the baggage that has accumulated by then). Still, now there are proper testers who design tests sensibly and build a suitable automation framework. And everything would be fine, except that testers are still just testers. Therefore,
Testers don't need to be involved too early
“We'll write a version fit for testing, and all they'll have to do is check that everything is fine and file the little things they find!”
Here, for some reason, the problems start again:
- Joining the process late, testers don't have time to dig into the domain. They lack understanding of the business scenarios, the user environment, and the reasons behind many architectural and design decisions. Around this point friction starts over differing views of how and what should be implemented, since the testers either lack the knowledge or simply see things differently, having taken no part in working out the design decisions. Congratulations, so much has changed: you once again have low-quality testing, mutual discontent, and no time left for serious changes to the product.
- After spending a long time on your project (which many competent testers simply won't do under such conditions), testers start to understand the domain much better. The chances of success grow if they are involved in working out requirements and in communicating with customers. But you keep bringing testing in only at the final stages, because that's how it's done? Then most likely one of two things awaits you: if there isn't enough time for testing, bugs will slip through; if the deadlines can be moved, you will simply have to make too many major changes at the last moment. Of course, it's just a bugfix, it's normal... as long as nobody asks the simple question: how could this have been avoided, especially at the final stages before release?
Okay, smart people learn from mistakes, at least from their own. Looking at all this happiness, you realize something is wrong. You involve testers in the project at the early stages, they prepare tests while the integrated solution is not yet ready, they discuss the decisions being made with the team, and you no longer waste the hot pre-release days on empty arguments and late changes to the product.
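As an illustration of what “preparing tests before the integrated solution is ready” can look like, here is a minimal sketch: a pytest check drafted straight from a requirement, targeting a module that has not been written yet. The `pricing` module and the “5% discount on orders of 10 000 or more” rule are hypothetical examples, not taken from any real project.

```python
import pytest

# A test drafted from the requirements while the feature is still in
# development. Skips cleanly until the developers deliver the module.
pricing = pytest.importorskip("pricing")


@pytest.mark.parametrize(
    "order_total, expected_price",
    [
        (9_999, 9_999),     # below the threshold: no discount
        (10_000, 9_500),    # at the threshold: 5% off
        (20_000, 19_000),   # above the threshold: 5% off
    ],
)
def test_discount_rule(order_total, expected_price):
    assert pricing.final_price(order_total) == expected_price
```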
And everything would be fine, if not for the next “but”:
Testers cannot affect product quality
With enough effort and desire, testers can evaluate it. With enough communication skills, they can argue for the importance of particular changes. But actually affect the result?
Medicine has yet to record a case of an ultrasound machine curing anything, and testing is only a diagnosis. And at this stage, if your testing team still consists of fairly competent people who haven't burned out yet, new friction begins:
- For decent automation, the product needs easily maintainable locators (see the sketch after this list) - pffff! “Spend extra time on testing again!”, the PM will say, and postpone the task until that never-coming day when the team has nothing else to do.
- To assess test coverage and its weak spots more accurately, we need detailed, decomposed requirements - pffff! “Stop breeding bureaucracy, we have Agile!”
- We need to understand users better and involve the target audience at the stage of working out business scenarios, and to assess usability - pffff! “That would push the release back by two weeks; we'll just redo it later based on user feedback!”
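To show what “easily maintainable locators” means in practice, here is a small sketch assuming Selenium WebDriver; the `data-testid="submit-order"` attribute is a hypothetical hook the developers would need to add to the markup, which is exactly the kind of investment that keeps getting postponed.

```python
# Brittle layout-dependent locator vs. a dedicated test attribute.
from selenium.webdriver.common.by import By


def click_submit_brittle(driver):
    # Breaks whenever the layout shifts: one extra wrapper div and the path
    # no longer matches anything.
    driver.find_element(
        By.XPATH, "/html/body/div[2]/div[1]/form/div[3]/button[1]"
    ).click()


def click_submit_stable(driver):
    # Survives redesigns; only breaks if the button itself is removed.
    driver.find_element(
        By.CSS_SELECTOR, '[data-testid="submit-order"]'
    ).click()
```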
Gradually all the good intentions go down the drain, and instead of ensuring quality, the most testers can do is promptly report the bugs they find. PMs declare the need to improve quality, but in practice are not ready to invest in it, and a quiet collapse sets in. Supporting the old functionality costs more and more. I have tried to show in the diagram how team resources are distributed over time under the two approaches:

And the sooner you start investing resources in quality assurance and workable processes, the more you ultimately have left for developing new functionality, however unexpected that may sound.
Conclusions
I will summarize:
- A test automation developer is a developer. A test analyst is an analyst. If you hire people for testing with lower qualification requirements than for developers and analysts, you are gradually digging a hole for your entire team.
- Testing is part of development, not the step that comes after it.
- Testing and quality assurance cannot be the job of testers alone. The product and the process must support testability and improvement, and if you don't invest in that from the very beginning, you end up paying far more later: in difficulties with support, with making changes, with testing, and with communicating with customers.
Sensible quality assurance processes are still rare in the ex-USSR, but they are beginning to appear. Are you ready to change something for the better?