
Why testing is not limited to finding bugs

(from the "Tester's Story" series)

Hello! As you may have noticed, OTUS is launching more courses every month, and March has an especially large number of them. We have timed today's material to coincide with the launch of the Automated Web Testing course, which starts in mid-March. Enjoy the read.


I still see many testers who cite the number of bugs and vulnerabilities found as a measure of testing success. Recently I came across another point of view, which holds that what actually matters is the quality of the bugs, not their quantity. However, that measure also deserves caution, and that is what we will talk about now.

The basic idea is that the type of errors you need to find determines the testing method.

I have already touched on some aspects of today's topic earlier, in the conversation about bug hunting. I do not want to repeat myself, so I will try to be brief, formalize my thoughts as theses, and relate them to the team I work in.

What is important to me in testing is to influence users in such a way that they make the right decisions faster. To do this, you need a tight feedback loop that shortens the period between the moment developers make a mistake and the moment they correct it. These mistakes are areas where various qualities (behavior, performance, security, usability, etc.) are either absent or degraded.

This is definitely not measured by the number of errors found, though the nature of an error does play a role. My task is to find the errors that most threaten the integrity and quality of the product under development. This can probably be attributed to the "quality" of errors: the more an error threatens that integrity, the more important it is.

The key to effective error correction, in my opinion, is finding these errors as quickly as possible, ideally as soon as they appear. Yet from my point of view, even the "quality of an error" is far from the best measure.

We attach such great importance to the quality of errors, but is their quantity really so insignificant?

In fact, the number of errors matters if you are focused on reducing the time spent searching for them. Say there are 10 critical bugs in the system, and I very quickly found two of them. That is genuinely great: two critical errors were found before the product presentation. But I did not find the others before deployment, which means 8 critical errors remained undiscovered. In this case, the number of errors is a key measure, even if we did not realize it at the time.
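The arithmetic above can be made explicit. A minimal sketch, using the hypothetical numbers from the example (not real project data), of counting the defects that "escape" to users when only some bugs are found before release:

```python
def escaped_defects(total_bugs: int, found_before_release: int) -> int:
    """Bugs that remain undiscovered at release time."""
    return total_bugs - found_before_release


def detection_rate(total_bugs: int, found_before_release: int) -> float:
    """Fraction of known bugs caught before release."""
    return found_before_release / total_bugs


# The example from the text: 10 critical bugs, 2 found before the presentation.
print(escaped_defects(10, 2))   # 8 critical bugs reach users
print(detection_rate(10, 2))    # 0.2
```

Seen this way, the count of errors is exactly what tells you how much risk shipped, regardless of how impressive the two found bugs were.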

It is important to think in a slightly different way. The number of errors or their quality is not as important as the mechanisms by which they occur and, accordingly, the mechanisms for finding them. There are many such mechanisms.


Focusing on these aspects at least as much as on the better-known ones is important because it helps you avoid some common problems. For example, the case where you have already run a hundred tests but have not found a single bug. That may be good, but only if there really are no errors; if errors are still there, it is bad that the testing methods applied cannot reveal them. Or the situation where I run a bunch of tests and find minor errors while missing the ones that are harder to find.

My team and I have to make certain decisions based on the tests performed. This means we have to believe what the test results tell us, and therefore we must trust the detection methods we have built into those tests from the start.

Some detection methods come from the tests themselves, roughly speaking, from what they look for and how they look for it. Other detection methods must be inherent in the environment and in testability itself, and these determine how likely it is, in principle, that the tests will trigger an error if one exists.
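The point that a detection method lives in what a test looks for, not in merely executing the code, can be sketched in a few lines. This is an illustrative example with a hypothetical `apply_discount` function, not code from the article:

```python
def apply_discount(price: float, percent: float) -> float:
    # Deliberate bug: the discount is added instead of subtracted.
    return price + price * percent / 100


def weak_test() -> bool:
    # No real detection method: the test only checks that the code runs,
    # so the bug above can never cause a failure.
    apply_discount(100.0, 10.0)
    return True


def strong_test() -> bool:
    # Explicit oracle: the test states the expected result,
    # so the same bug is now detectable.
    return apply_discount(100.0, 10.0) == 90.0


print(weak_test())    # True: a hundred tests like this find nothing
print(strong_test())  # False: the bug is caught once the test can detect it
```

A suite of a hundred `weak_test`-style checks would pass while the defect ships, which is exactly the "ran a hundred tests, found no bugs" trap described above.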

To conclude these brief thoughts: I do not define the success of testing by any single specific factor. But if you still want to define it for yourself, then define it not by the number of errors and vulnerabilities found, and not by the quality of those errors, but by the ability of your testing mechanisms to detect them.

I have found that inexperienced testers, after reading this note, may not see a significant difference between the idea of detection capability and the results obtained by exercising that capability. Specialists, however, should distinguish between them very well.

By being able to understand and articulate this distinction, testers can move beyond the (in the author's opinion, useless) debate about the difference between "testing" and "checking" and instead build a constructive understanding of detection methods, both human and automated, that allow testing to help people make better decisions faster.

This seemingly simple material is quite useful. Following established tradition, we look forward to your comments and invite you to an open webinar, which will be held on March 11th by Mikhail Samoylov, lead test automation engineer at Group-IB.

Source: https://habr.com/ru/post/442832/

