
Risks and metrics in test automation



Good day!
Business loves to measure, management loves transparency, and employees don't like all this paperwork, especially when they'd rather not know what it's all for... Test automation processes are no exception. I will describe the five most frequently encountered risks: the ones that really do go off, that must not be underestimated, and that can lead to the failure of testing and of the project as a whole. I will also give examples of metrics whose honest use will help calm you down, along with your superiors and the business.
Beyond these there are global risks: choosing the wrong testing strategy, ignoring OOP when building frameworks and tests... But those, unlike the first group, only increase the cost of testing; they do not kill testing as such, as a process, as an ideology, as a tool for ensuring the quality of the product and, ultimately, the loyalty of the customers who bring you income. If you, as a specialist, can explain this to management, earn respect from the developers (it really has to be earned :)), and convince everyone that the chosen approaches and strategies are right, you are on the right path.

Risks:


1. If we set up test automation where it is not needed, we will throw away a lot of money
This is the first and main risk of any process. Executives, especially in the post-Soviet countries, are rarely flexible. Once the idea that test automation is a blessing settles in someone's head, it gets shoved in everywhere, needed or not. It is completely forgotten that a real return on investment in test automation comes, at best, from the second release onward. We need to learn to explain to the business that not every piece of automation yields high-quality coverage, and that the rest is simply thrown-away motivation, time, and money.
2. If we write tens of thousands of tests that run on CI in the clouds, we will be deceiving ourselves about quality
This is the most common anti-pattern, so I'll dwell on it in more detail; the rest of the patterns and anti-patterns can be read here. The blue section will be most interesting to everyone who writes unit tests: these are long-established anti-patterns that can already be treated as axioms.
No kidding: if we tolerate heavy tests, liar tests and the like, we condemn the project to failure. More than once, while auditing testing processes in various companies, I came across this phenomenon and discouraged both automation engineers and manual testers from writing tests for the sake of tests. Some listened, and caught a lot of bad things before their deployments; some did not listen, and three of their projects collapsed on the same day, even though each had around 8,000 green tests.

CI in the clouds: yes, I love this topic. Why run functional tests on a continuous integration server if the whole run takes 10 minutes? Why have CI at all if releases roll out once a month rather than once a day?.. Like any test automation specialist, I have mastered the scripts needed to run this whole miracle on TeamCity, but the fact is that no matter which team I worked in, I never had to use CI for anything beyond building and running unit tests. All functional tests should be run before the commit, not after it. I am convinced of this... This approach does create problems with parallel work, but they can be solved by organizing the process competently.
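A minimal sketch of the run-before-commit idea, assuming JUnit 4 and hypothetical test class names: the functional tests are grouped into a single suite that every developer runs locally before pushing, leaving the CI server to build and run unit tests.

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// Hypothetical functional test classes grouped into one suite;
// the convention is that every developer runs this suite locally
// before committing, while CI only builds and runs unit tests.
@RunWith(Suite.class)
@Suite.SuiteClasses({
        PaymentFunctionalTest.class,
        AccountFunctionalTest.class
})
public class PreCommitFunctionalSuite {
}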

3. If we use isolated input data for tests, we will miss critical bugs in production
In my previous article I proposed separating unit tests and functional autotests. Even so, I would try not to isolate the data, and I believe that in most cases it should not be isolated. If we can add randomness to the input for a test, it should definitely be added. Users (or calling methods, if we are talking about unit testing) on average perform actions with data in a certain range. You can build an input provider that relies on this distribution and thus bring everything closer to reality.
For example, I recently ran into a "feature" that shows up distinctly in our country. The system fell to its knees if, in response to a payment request, it received a card number from the bank longer than 16 characters. Yes, of course, this is unrealistic in our world of 16-digit cards, but forgive me: when it does become real, when bank customers get reissued cards, they will gradually drift from being regular customers of the service to the competitors, and the business will lose money without even understanding why.
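To make this concrete, here is a minimal sketch of such an input provider; the class and the distribution are my own illustration, not from the original system. Most generated card numbers have the usual 16 digits, but a small share are 19 digits long, so the rare-but-real case gets exercised regularly.

import java.util.Random;

// Illustrative input provider: roughly 95% of the generated card
// numbers are 16 digits, roughly 5% are 19 digits (such cards exist),
// so tests regularly hit the case that brought the system down.
public class CardNumberProvider {

    private final Random random = new Random();

    public String nextCardNumber() {
        int length = random.nextInt(100) < 95 ? 16 : 19;
        StringBuilder digits = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            digits.append(random.nextInt(10));
        }
        return digits.toString();
    }
}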

Rem: Java lovers will see that, using reflection, you can write an initializer for any object of any class, since in the end it is just a tree of primitives. Few people know it, but all of this has long been implemented in the wonderful podam library. Here is an example of use:

PodamFactory factory = new PodamFactoryImpl();
MyClass myPojo = factory.manufacturePojo(MyClass.class);

You can also use annotations to set value ranges and create your own generation strategies; in general, everything your heart desires. It is much more convenient than calling setters all over the tree and using Random and RandomUtils to fill an object with data. Using Podam together with Mockito gives amazing results in terms of how briefly you can initialize the objects returned by a stub.
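A minimal sketch of both ideas together; the User POJO and UserService are hypothetical, while PodamIntValue is the library's own annotation for constraining generated integers.

import static org.mockito.Mockito.anyLong;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import uk.co.jemos.podam.api.PodamFactory;
import uk.co.jemos.podam.api.PodamFactoryImpl;
import uk.co.jemos.podam.common.PodamIntValue;

public class PodamMockitoExample {

    // Hypothetical POJO: Podam fills it via reflection, and the
    // annotation keeps the generated age in a realistic range.
    public static class User {
        private String name;
        @PodamIntValue(minValue = 18, maxValue = 99)
        private int age;

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public int getAge() { return age; }
        public void setAge(int age) { this.age = age; }
    }

    // Hypothetical collaborator to be stubbed.
    public interface UserService {
        User findUser(long id);
    }

    public static void main(String[] args) {
        PodamFactory factory = new PodamFactoryImpl();
        UserService service = mock(UserService.class);
        // One line instead of a tree of setters: the stub returns
        // a fully populated random object.
        when(service.findUser(anyLong()))
                .thenReturn(factory.manufacturePojo(User.class));
        System.out.println(service.findUser(1L).getAge());
    }
}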

4. If you choose the wrong testing tools, you become dependent on technologies and specialists
Some people love to shoot sparrows with a cannon. It is easy to pick technologies that are not merely hard to learn, but for which finding a specialist able to support all that goodness is nearly impossible. If your testers start piling up frameworks, tools, test scripting systems, incomprehensible clickers, cloud services... and add on top that these technologies are paid and you absolutely must buy them... ask yourself: won't you end up depending on all this goodness and on these specialists? Can you find people on the market who can carry this ideology forward, and how much will their training cost?

5. If developers do not understand why test automation is needed, do not cooperate on it, and perceive it all as an obligation, test automation will be ineffective
When auditing, I always start with a conversation with the developers. It turns out that three out of five developers are absolutely sure that these testers (manual or automated, it doesn't matter) just eat up salary. To the question "why do you think so?" the answers always differ, but the essence is the same: "we don't need it, because we're already great as we are." One developer out of five believes that test automation is needed and has raised the question in the company a hundred times, but found no support among colleagues. One more gave up on everyone long ago and simply writes tests because he considers it necessary; he doesn't want to impose anything on anyone. That is how he ensures the quality of his own work. In such an environment you have to start by changing the attitude of the self-assured developers toward testing. Otherwise there is no point even trying to set up test automation processes: the tests will not be run, and even if they are, nobody will pay attention to the results.

Metrics


Any metric in test automation must meet certain criteria.


Let's go. The multiplication by 100% in the formulas below is omitted; don't be angry.

1. Percentage of automatable tests
Yes... Alas, not everything needs to be automated, and not everything can be automated. If you have a list of tests you would like to automate, it is logical to measure

PA (%) = number of tests that can be automated / total number of tests.

2. Automation progress percentage

AP (%) = number of automated tests / number of tests that can be automated.

This metric is very useful for looking at the automation process over time. If the percentage drops with each new sprint, think about why that happens: revisit the approaches and the architecture, add people to the team if necessary, and so on. Naturally, here we strive for 100%.

3. Test-writing progress

TP = number of tests written / time span

A useful metric both for detecting changes in productivity and for estimating when planned coverage will be reached. If productivity fluctuates within a range, that is normal. If it suddenly drops sharply or soars, ask questions. In the first case the specialist may have lost motivation, or systematic errors may be creeping into the estimates of work complexity. In the second, the appearance of work may be created by splitting checks into excessively tiny tests, which is not good either.

4. Coverage percentage

TC (%) = number of tests written / number of requirements

A murky but useful metric when it comes to estimating the depth of coverage. It may even be better to take the inverse ratio as a percentage... Used correctly, for example with feature tests in Agile, it lets you not only estimate how many tests there will be in a few months, but also understand when it is time to optimize something to reduce the runtime of those tests.

5. Defect density

DD = number of open defects / product size.

An extremely important metric that is neglected because there is no reasonable way to estimate what "product size" is. There is a classic notion that on average one defect falls on every 3 lines of code. To me this is nonsense, and if it were true then, forgive me, testing would no longer help :) For a Scrum process you can use story points as that very product size; if few defects are found, normalize the formula. In any case, this is a very useful metric both inside the team and outside it, especially when the product is being prepared for release. Agile test automation in general is a separate song; anyone can read about it here.

6. Defect removal efficiency

DRE (%) = defects found during testing / (defects found during testing + defects found by users in production)

An extremely important metric you cannot get anywhere without. If an autotest run gives us, say, 15 defects, we fix them, and after rolling out to users we notice another 15 new and treacherous ones, that is sad: it means we did not follow the metrics above. Having obtained this percentage, we should push it toward 100% as soon as possible, so the metric should be tracked over time after each deployment. Tests for newly discovered defects should be written immediately, and at first they should fail, not pass :)
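To tie the formulas above together, here is a minimal sketch of a metrics calculator; every input figure is invented for illustration, and the multiplication by 100 omitted in the text is restored since percentages are printed.

// Illustrative calculator for the six metrics above; all numbers are made up.
public class AutomationMetrics {

    static double pa(int automatable, int total) {
        return 100.0 * automatable / total;                  // metric 1
    }

    static double ap(int automated, int automatable) {
        return 100.0 * automated / automatable;              // metric 2
    }

    static double tp(int testsWritten, double weeks) {
        return testsWritten / weeks;                         // metric 3
    }

    static double tc(int testsWritten, int requirements) {
        return 100.0 * testsWritten / requirements;          // metric 4
    }

    static double dd(int openDefects, int storyPoints) {
        return (double) openDefects / storyPoints;           // metric 5
    }

    static double dre(int foundInTesting, int foundInProduction) {
        return 100.0 * foundInTesting / (foundInTesting + foundInProduction); // metric 6
    }

    public static void main(String[] args) {
        System.out.printf("PA  = %.1f%%%n", pa(120, 200));
        System.out.printf("AP  = %.1f%%%n", ap(90, 120));
        System.out.printf("TP  = %.1f tests/week%n", tp(90, 6.0));
        System.out.printf("TC  = %.1f%%%n", tc(90, 150));
        System.out.printf("DD  = %.2f defects/story point%n", dd(12, 300));
        System.out.printf("DRE = %.1f%%%n", dre(15, 15)); // the sad 50% case from the text
    }
}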

Conclusion:
Try to invent metrics not for managers, but for yourselves. Learn to explain, not only to yourself but to others, how the automation process is improving. Show what one technology gives compared to another, expressing it in numbers understandable to people who know nothing about test automation.

On February 6, 2013, I published an article in the Journal of Mathematical Sciences (New York), 2013, 188:6, 758–760. The abstract and the beginning can be found here.
I will not retell it in detail here, but as a corollary of one of the theorems I will give an example that shows up so often in failed projects: if each new maximum of the number of open defects, given that the previous maximum was x, reaches a value on the order of x^2, the project is operating under a uniform distribution of the number of defects. So if you see such a trend (say, the maximum jumping from 20 to about 400, and then toward 160,000), be prepared for all the functionality to stop working, and very quickly. This pattern has been confirmed in practice many times, and not only with maxima of defect counts...

Frankly speaking, I know companies for which the worse the quality, the better: they live on technical support contracts and make big money on the fact that the customer has no alternative. Such companies need no test automation, no metrics, and no quality. This article is addressed to all the other companies, the ones competing to earn the loyalty and money of their users.

Source: https://habr.com/ru/post/254957/

