
Key points of testing

The applications, goals, and objectives of software testing vary, so testing is evaluated and explained in different ways. Sometimes even testers themselves find it difficult to explain what testing actually is. Confusion arises.

To untangle this confusion, Alexei Barantsev (a practitioner, trainer, and consultant in software testing, who comes from the Institute for System Programming of the Russian Academy of Sciences) prefaces his testing trainings with an introductory video about the key points of testing.

It seems to me that in this talk the lecturer managed to explain "what testing is" from the point of view of a scientist and a programmer in the most adequate and balanced way. It is strange that this text has not yet appeared on the site.
I give here a concise retelling of this talk. At the end of the text there are links to the full version, as well as to the video mentioned above.



Key points of testing


Dear Colleagues,

let us first try to understand what testing is NOT.

Testing is not development,

even though testers may be able to program, including writing tests (test automation is programming), and may develop some auxiliary programs for themselves.

However, testing is not a software development activity.

Testing is not analysis,

nor an activity of collecting and analyzing requirements.

Although in the process of testing it is sometimes necessary to clarify requirements, and sometimes to analyze them. But this is not the main activity; rather, it has to be done simply out of necessity.

Testing is not management,

despite the fact that in many organizations there is such a role as a "test manager". Of course, testers must be managed. But testing itself is not management.

Testing is not technical writing,

even though testers have to document their tests and their work.

Testing cannot be considered any of these activities simply because, when developing (or analyzing requirements, or writing documentation for their tests), testers do all this work for themselves, not for someone else.

An activity is significant only when it is in demand, that is, when testers produce something "for export". What do they produce for export?

Defects, defect descriptions, or test reports? This is partly true.

But this is not the whole truth.

The main activity of testers


is that they provide negative feedback on the quality of the software product to the software development project participants.



"Negative feedback" does not carry any negative connotation here, and it does not mean that testers do something bad, or that they do their job badly. It is just a technical term meaning quite a simple thing.

But this thing is very significant, and it is probably the single most significant component of the testers' activity.

There is a science called "systems theory". It defines the notion of "feedback".

"Feedback" is data (or some part of the data) that returns from the output of a system back to its input. This feedback can be positive or negative.

Both types of feedback are equally important.

In the development of software systems, positive feedback is, of course, information we receive from end users: requests for new functionality, growth in sales (if we produce a quality product).

Negative feedback can also come from end users, in the form of complaints. Or it can come from testers.

The earlier negative feedback is provided, the less energy is needed to act on that signal. That is why testing needs to start as early as possible, at the earliest stages of the project, providing this feedback already at the design stage and, perhaps, even earlier, at the stage of collecting and analyzing requirements.

By the way, from this grows the understanding that testers are not responsible for quality. They help those who are responsible for it.

Synonyms of the term "testing"


Given that testing is the provision of negative feedback, the world-famous abbreviation QA (Quality Assurance) is most definitely NOT a synonym for the term "testing".

Quality assurance cannot be reduced to merely providing negative feedback, because assurance means positive measures: it implies that we ensure quality, taking timely steps to improve the quality of software development.

But "quality control" - Quality Control, can be considered in a broad sense, a synonym for the term "testing", because quality control is the provision of feedback in its most varied varieties, at the most different stages of a software project.



Sometimes testing is understood as just one particular form of quality control.

The confusion comes from the history of testing. At different times the term "testing" referred to various activities, which can be divided into two large classes: external and internal.

External definitions


The definitions given at different times by Myers, Beizer, and Kaner describe testing from the point of view of its EXTERNAL significance. That is, from their point of view, testing is an activity that is intended FOR something, not one that CONSISTS OF something. All three of these definitions can be generalized as "providing negative feedback".

Internal definitions


These are the definitions given in the standard terminology used in software engineering, for example in the de facto standard called SWEBOK.

Such definitions constructively explain WHAT the activity of testing consists of, but they give no idea what testing is needed for, or how the results of checking the correspondence between the actual and the expected behavior of the program will be used.

So,

testing is the checking of the correspondence between the actual behavior of a program and its expected behavior, carried out on a finite set of tests chosen in a certain way.



Henceforth, we will consider this the working definition of "testing."



The general scheme of testing is approximately as follows:
  1. The tester receives the program and/or the requirements as input.
  2. He does something with them: he observes the work of the program in certain, artificially created situations.
  3. As output, he gets information about matches and mismatches.
  4. This information is then used to improve the existing program, or to change the requirements for a program that is still being developed.
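A minimal sketch of this scheme in code might look as follows (the int_sqrt function, the requirement it is supposed to satisfy, and the chosen situations are all hypothetical, invented only for this illustration):

```python
# Hypothetical program under test. Requirement: return the integer square
# root of a non-negative integer, rounded down.
def int_sqrt(x: int) -> int:
    return int(x ** 0.5)

# Input for the tester: the program above and its requirement.
# Artificially created situations: concrete inputs paired with expected outputs.
situations = [
    (0, 0),
    (1, 1),
    (8, 2),                          # floor(sqrt(8)) = 2
    ((10**8 + 1) ** 2 - 1, 10**8),   # a large input where float rounding may go wrong
]

# Output: information about matches and mismatches.
for value, expected in situations:
    actual = int_sqrt(value)
    status = "match" if actual == expected else "MISMATCH"
    print(f"int_sqrt({value}) = {actual}, expected {expected}: {status}")

# This report is the negative feedback handed back to whoever can improve
# the program or revise its requirements.
```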

What is a test



A test is an artificially created situation, plus a description of the observations of the program's behavior that have to be made in that situation.

There is no need to assume that the situation is something instantaneous. A test can last quite a long time: in performance testing, for example, the artificially created situation may be a continuous load on the system over a long period, and the observations to be made are a set of graphs or metrics measured while the test is running.
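A minimal sketch of such a long-running performance test, assuming an arbitrary in-process operation as the system under load and an arbitrary duration of 60 seconds:

```python
import time
import statistics

def operation_under_load() -> None:
    # Hypothetical stand-in for a real request to the system under test.
    sum(i * i for i in range(10_000))

DURATION_SECONDS = 60          # how long the artificial load is applied
latencies_ms = []              # observations collected while the test runs

start = time.monotonic()
while time.monotonic() - start < DURATION_SECONDS:
    t0 = time.monotonic()
    operation_under_load()
    latencies_ms.append((time.monotonic() - t0) * 1000)

# The "observation" here is not a single yes/no answer but a set of metrics.
print(f"requests:   {len(latencies_ms)}")
print(f"median, ms: {statistics.median(latencies_ms):.2f}")
print(f"p95, ms:    {statistics.quantiles(latencies_ms, n=20)[-1]:.2f}")
```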

The test developer's task is to choose a limited set out of a huge, potentially infinite set of possible tests.

So we can conclude that during testing the tester does two things.

1. First, he controls the execution of the program and creates the artificial situations in which we are going to check the program's behavior.

2. Second, he observes the behavior of the program and compares what he sees with what is expected.

If the tester automates tests, he does not observe the program's behavior himself: he delegates this task to a special tool, or to a special program that he wrote himself. It is this tool that observes, compares the observed behavior with the expected one, and gives the tester only the final result: whether the observed behavior matches the expected behavior or not.
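For example, with pytest (the authenticate function and the test names below are hypothetical), the comparison of observed and expected behavior is written into the test once; after that the tool performs the observation and comparison, and the tester sees only the final verdict:

```python
# test_login.py -- a hypothetical automated check.
# The tester does not watch the program himself: pytest runs the check,
# compares actual behavior with expected behavior, and reports only
# "passed" or "failed".

def authenticate(user: str, password: str) -> bool:
    # Stand-in for the real code under test.
    return user == "admin" and password == "secret"

def test_wrong_password_is_rejected():
    # Expected behavior: a wrong password must not be accepted.
    assert authenticate("admin", "wrong") is False

def test_correct_password_is_accepted():
    assert authenticate("admin", "secret") is True

# Run with:  pytest -q test_login.py
# The output is just a summary such as "2 passed" -- the final result
# handed back to the tester.
```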

Any program is a mechanism for processing information. Information in one form comes in at the input; information in some other form comes out at the output. A program can have many inputs and outputs, and they can differ, that is, the program can have several different interfaces, and these interfaces can be of different types:

The most common interfaces are

Using all these interfaces, the tester:


This is testing.
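As a rough illustration of working through different interfaces, here is a sketch that checks the same behavior through a programming interface and through a command-line interface (the mystats module, its mean function, and the mystats executable are hypothetical names used only for this example):

```python
import json
import subprocess

# The same hypothetical program exposes two interfaces:
# 1. a programming interface (a function we can call directly), and
# 2. a command-line interface (a separate executable).

def via_api(numbers):
    # Interface 1: call the library function directly.
    from mystats import mean                      # hypothetical module under test
    return mean(numbers)

def via_cli(numbers):
    # Interface 2: feed the same data through the command-line tool.
    result = subprocess.run(
        ["mystats", "mean"],                      # hypothetical executable
        input=json.dumps(numbers), text=True,
        capture_output=True, check=True,
    )
    return float(result.stdout)

# The tester creates the same situation through both interfaces and
# observes whether the outputs agree with each other and with the requirement.
data = [1, 2, 3, 4]
assert via_api(data) == 2.5
assert via_cli(data) == 2.5
```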

Other classifications of types of testing


The most commonly used division is into three levels:
  1. unit testing,
  2. integration testing,
  3. system testing.

Unit testing usually means testing at a fairly low level, that is, testing individual operations, methods, and functions.

System testing means testing at the level of the user interface.

Sometimes other terms are used as well, such as "component testing", but I prefer to single out these three, because the technological division into unit and system testing does not make much sense: the same tools and the same techniques can be used at different levels. The division is conditional.

Practice shows that tools positioned by their vendors as unit testing tools can be applied equally well at the level of testing the whole application.

And tools that test the whole application at the level of the user interface sometimes want to look, for example, into a database, or to call some individual stored procedure there.

That is, from a technical point of view the division into system and unit testing is, generally speaking, purely conditional.

The same tools are used, and this is normal; the same techniques are used; and at each level we can talk about testing of a different kind.
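As an illustration of this point, a single pytest file can contain both a unit-level check of one function and a system-level check that drives the whole deployed application through its HTTP interface (the myapp module, the apply_discount function, and the local URL are assumptions made only for this sketch):

```python
import urllib.request

from myapp.pricing import apply_discount   # hypothetical module under test

def test_unit_apply_discount():
    # Unit level: one function, no running application needed.
    assert apply_discount(price=100.0, percent=10) == 90.0

def test_system_discount_shown_in_cart():
    # System level: the same tool (pytest) drives the whole deployed
    # application through its external HTTP interface.
    url = "http://localhost:8000/cart?item=1&discount=10"   # hypothetical endpoint
    with urllib.request.urlopen(url) as response:
        page = response.read().decode()
    assert "90.00" in page
```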

We combine:



That is, we can talk about unit testing of functionality.

We can talk about system testing of functionality.

We can talk about unit testing of, for example, efficiency.

We can talk about system testing of efficiency.

Either we consider the efficiency of a single algorithm, or we consider the efficiency of the whole system. That is, the technological separation into unit and system testing does not make much sense, because at different levels the same tools and the same techniques can be used.
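A sketch of this distinction, assuming a hypothetical binary_search algorithm and a hypothetical end-to-end search_request entry point:

```python
import timeit

from myapp.search import binary_search        # hypothetical algorithm under test
from myapp.client import search_request       # hypothetical end-to-end entry point

haystack = list(range(1_000_000))

# Unit-level efficiency: how fast is the single algorithm by itself?
unit_seconds = timeit.timeit(lambda: binary_search(haystack, 999_999), number=1_000)
print(f"binary_search, 1000 calls: {unit_seconds:.3f} s")

# System-level efficiency: how fast is the whole system, end to end,
# including parsing, network, database and so on?
system_seconds = timeit.timeit(lambda: search_request("999999"), number=100)
print(f"search_request, 100 calls: {system_seconds:.3f} s")
```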

Finally, in integration testing we check whether the modules inside a system interact with each other correctly. That is, we actually perform the same tests as in system testing, only paying additional attention to exactly how the modules interact with each other; we perform some additional checks. This is the only difference.
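A minimal sketch of such an additional, integration-specific check (OrderService and InventoryRepository are hypothetical modules invented for the example):

```python
# Integration level: the same kind of check as in system testing, but with
# additional attention to how two modules talk to each other.

class InventoryRepository:
    def __init__(self):
        self.reserved = []

    def reserve(self, item_id: str, quantity: int) -> None:
        self.reserved.append((item_id, quantity))

class OrderService:
    def __init__(self, repository: InventoryRepository):
        self.repository = repository

    def place_order(self, item_id: str, quantity: int) -> None:
        # The interaction we want to verify: the service must reserve stock.
        self.repository.reserve(item_id, quantity)

def test_order_reserves_stock():
    repo = InventoryRepository()
    service = OrderService(repo)

    service.place_order("sku-42", quantity=2)

    # The extra, integration-specific check: did the modules interact
    # with each other as intended?
    assert repo.reserved == [("sku-42", 2)]
```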

Let us try once more to understand the difference between system and unit testing. Since this division is quite common, there must be some difference.

And this difference shows up when we classify testing not by technology, but by purpose.

Classification by purpose is conveniently done using the "magic square", originally invented by Brian Marick and later improved by Eri Tennen.



In this magic square, all kinds of testing are arranged in four quadrants, depending on what the tests pay more attention to.

Vertically: the higher a kind of testing sits in the square, the more attention it pays to the external manifestations of the program's behavior; the lower it sits, the more attention it pays to the program's internal, technological structure.

Horizontally: the further left the tests are, the more attention we pay to programming them; the further right they are, the more attention we pay to manual testing and to human exploration of the program.

In particular, terms such as acceptance testing (Acceptance Testing) and unit testing, in the sense in which it is most often used in the literature, fit easily into this square. Unit testing is low-level testing with an overwhelming share of programming: all the tests are programmed and executed fully automatically, and attention is paid primarily to the internal structure of the program, to its technological features.

In the upper right corner there will be manual tests aimed at the external behavior of the program, in particular usability testing, and in the lower right corner there will most likely be checks of various non-functional properties: performance, security, and so on.

So, according to the classification by purpose, unit testing sits in the lower left quadrant, and all the other quadrants are system testing.

Thank you for your attention.

Additionally


Source: https://habr.com/ru/post/110307/

