
Types of testing and approaches to their application

From my university course on programming technologies I took away the following classification of types of testing (the criterion being the degree of code isolation): testing can be unit, integration, or system testing. The classification is good and clear. In practice, however, it turns out that each type of testing has its own peculiarities, and if they are not taken into account, testing becomes burdensome and gets neglected. Here I have gathered approaches to the practical application of the various types of testing. Since I write for .NET, the links point to the corresponding libraries.

Unit testing


Unit (module) testing is the type most familiar to a programmer. In essence, it is testing the methods of a single class of the program in isolation from the rest of the program.

Not every class is easy to cover with unit tests. When designing, testability should be kept in mind and class dependencies should be made explicit. To guarantee testability you can apply the TDD methodology, which requires writing a test first and only then the code of the method under test; the architecture then comes out testable by construction. Dependencies can be untangled using Dependency Injection: each dependency is then explicitly tied to an interface, and it is explicitly defined whether the dependency is injected through the constructor, a property, or a method.
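
For example, constructor injection might look roughly like this (a minimal sketch; OrderService and IMailSender are hypothetical names used only to illustrate the idea):

// The dependency is expressed as an interface...
public interface IMailSender
{
    void Send(string to, string body);
}

// ...and injected through the constructor, so a test can pass in a stub instead.
public class OrderService
{
    private readonly IMailSender _mailSender;

    public OrderService(IMailSender mailSender)
    {
        _mailSender = mailSender;
    }

    public void Confirm(string customerEmail)
    {
        _mailSender.Send(customerEmail, "Your order is confirmed");
    }
}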

There are dedicated frameworks for unit testing, for example NUnit or the test framework bundled with Visual Studio 2008. To test classes in isolation there are special mock frameworks, for example Rhino Mocks. Given an interface, they automatically create stubs for the dependency classes and let you set up the desired behavior.
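
A typical test might look roughly like this (a sketch using NUnit and the Rhino Mocks 3.x syntax; OrderService and IMailSender are the hypothetical types from the previous example):

using NUnit.Framework;
using Rhino.Mocks;

[TestFixture]
public class OrderServiceTests
{
    [Test]
    public void Confirm_SendsMailToCustomer()
    {
        // Rhino Mocks generates an implementation of the interface on the fly.
        var mailSender = MockRepository.GenerateMock<IMailSender>();
        var service = new OrderService(mailSender);

        service.Confirm("customer@example.com");

        // Verify the interaction with the dependency.
        mailSender.AssertWasCalled(m => m.Send(
            Arg<string>.Is.Equal("customer@example.com"),
            Arg<string>.Is.Anything));
    }
}
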
Many articles have been written about unit testing. I really like the MSDN article “Write Maintainable Unit Tests”, which explains how to create tests that will not become a burden over time.

Integration testing


Integration testing, in my opinion, is the hardest to get a grip on. There is a definition: it is testing the interaction of several classes that do some work together. However, the definition does not make it clear how to actually test that interaction. You can, of course, borrow approaches from the other types of testing, but each comes with problems.

If we treat it as unit testing in which dependencies are not replaced with mock objects, we run into problems. For good coverage you need to write a lot of tests, because the number of possible combinations of interacting components grows polynomially. In addition, such unit tests check how the interaction is performed (see white-box testing). Because of this, after a refactoring in which some interaction gets extracted into a new class, the tests break. A less invasive method is needed.

Approaching integration testing as a more detailed kind of system testing also fails. In that case, on the contrary, there are too few tests to cover all the interactions used in the program; system testing operates at too high a level.

I have come across a good article on integration testing only once: Scenario Driven Tests. After reading it and Ayende's book DSLs in Boo: Domain-Specific Languages in .NET, I got an idea of how to do integration testing.

The idea is simple. We have input data, and we know how the program should behave on it. We write this knowledge down in a text file. This becomes a specification for the test data, recording what results are expected from the program. Testing then checks whether what the program actually produces complies with the specification.

I will illustrate with an example. The program converts documents from one format to another. The conversion is tricky and involves a lot of mathematical calculations. The customer provided a set of typical documents that need to be converted. For each such document we write a specification recording the intermediate results our program should reach during conversion.

1) Suppose the supplied documents contain several sections. Then in the specification we can state that the parsed document must contain sections with the given names:

$SectionNames = , , ,

2) Another example. During conversion, geometric shapes have to be broken down into primitives. The splitting is considered successful if, taken together, the primitives completely cover the original figure. From the documents we received we pick various figures and write specifications for them. The fact that a figure is covered can be recorded as follows:

$IsCoverable = true
Obviously, to check such specifications you need an engine that reads them and verifies that the program's behavior complies with them. I wrote such an engine and was pleased with the approach. I will publish the engine as open source soon. (UPD: published)
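
To give an idea of what such an engine does, here is a minimal sketch (the key = value spec format shown above is assumed; this is not the actual published engine):

using System;
using System.IO;

public static class SpecificationChecker
{
    // Reads "$Key = value" lines from a spec file and compares each expected value
    // with the value the program actually produced for that key.
    public static void Check(string specFile, Func<string, string> actualValueFor)
    {
        foreach (var line in File.ReadAllLines(specFile))
        {
            if (line.Trim().Length == 0 || !line.Contains("=")) continue;

            var parts = line.Split(new[] { '=' }, 2);
            var key = parts[0].Trim().TrimStart('$');
            var expected = parts[1].Trim();
            var actual = actualValueFor(key);

            if (expected != actual)
                throw new Exception(
                    string.Format("{0}: expected '{1}' but got '{2}'", key, expected, actual));
        }
    }
}

A test then runs the converter on a document from the customer and calls Check with a delegate that extracts SectionNames, IsCoverable, and so on from the conversion result.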

This kind of testing is integration testing, since the checks exercise the interaction code of several classes. Only the result of the interaction matters, not the details and order of calls, so refactoring does not affect the tests. There is neither excessive nor insufficient testing: only the interactions that occur when processing real data are tested. The tests themselves are easy to maintain, because the specification is readable and easy to change to match new requirements.

System testing


System testing is testing the program as a whole. For small projects this is usually done manually: launch it, click around, make sure it works (or doesn't). It can be automated, and there are two approaches to automation.

The first approach is to use the Passive View variation of the MVC pattern (here is another good article on MVC pattern variations) and to formalize the user's interaction with the GUI in code. System testing then comes down to testing the Presenter classes and the logic of transitions between views. But there is a nuance: if you test Presenter classes in the context of system testing, you should replace as few dependencies as possible with mock objects. And then there is the problem of initialization, of bringing the program into the state from which testing should start. The Scenario Driven Tests article mentioned above describes this in more detail.
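
The idea in code (a rough sketch with hypothetical ILoginView, IAuthenticationService, and LoginPresenter types; in a system test only the view is stubbed, while the other dependencies stay real):

public interface ILoginView
{
    string UserName { get; }
    string Password { get; }
    void ShowMainForm();
    void ShowError(string message);
}

public interface IAuthenticationService
{
    bool IsValid(string userName, string password);
}

public class LoginPresenter
{
    private readonly ILoginView _view;
    private readonly IAuthenticationService _auth;

    public LoginPresenter(ILoginView view, IAuthenticationService auth)
    {
        _view = view;
        _auth = auth;
    }

    // All decision logic lives in the presenter; the view stays passive,
    // so a test can drive the "GUI" by stubbing ILoginView, without real windows.
    public void Login()
    {
        if (_auth.IsValid(_view.UserName, _view.Password))
            _view.ShowMainForm();
        else
            _view.ShowError("Invalid user name or password");
    }
}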

The second approach is to use special tools that drive user actions. In this case the program itself is launched, but the button clicks are performed automatically. For .NET, an example of such a tool is the White library; WinForms, WPF and several other GUI platforms are supported. The rule is: for each use case, a script is written that describes the user's actions. If all use cases are covered and the tests pass, the system can be handed over to the customer and the acceptance certificate signed.
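
A user-action script with White looks roughly like this (a sketch; exact namespaces and identification criteria differ between White versions, and the application path, window title, and control ids here are hypothetical):

using NUnit.Framework;
using White.Core;
using White.Core.UIItems;
using White.Core.UIItems.WindowItems;

[TestFixture]
public class CreateOrderUseCase
{
    [Test]
    public void UserCanCreateOrder()
    {
        // Launch the real application and drive its GUI through UI Automation.
        var application = Application.Launch(@"C:\MyApp\MyApp.exe");
        Window window = application.GetWindow("My Application");

        window.Get<TextBox>("customerNameBox").Text = "John Doe";
        window.Get<Button>("createOrderButton").Click();

        Assert.AreEqual("Order created", window.Get<Label>("statusLabel").Text);

        application.Kill();
    }
}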

Source: https://habr.com/ru/post/81226/
