
Testing on the 1C: Enterprise 8 platform

Introduction.


This article is purely informational; it does not contain any advertising, let alone attempts to extract material profit. Its objective is to shed light on the possibility of testing configuration code, briefly introduce a tool that has been used successfully, hear feedback, and draw conclusions. Perhaps this is yet another reinvented wheel and someone will share the right approach to testing on this platform.

Development on the 1C:Enterprise platform is not the most difficult process. The hardest part is accepting the concept of writing code in your native language :)
But even though the 8th version of the platform made a qualitative leap forward in the design of the platform and its embedded language (for example, the appearance of an MVC-like concept for configuration metadata objects), many coders continue to churn out megabytes of "garbage" code that miraculously works within the bounds of whatever such a coder could, or had time to, check.
I have no rich "experience" of working in various franchisees; over all these years I have only seen the experience of a single department that produces boxed products. The products sell, customers find bugs and terrorize technical support, technical support runs to the developers, the developers happily fix the bugs that were found, while incidentally introducing a variety of new ones. In short, there is plenty of work for everyone for a long time to come.
How much time goes into fixing bugs versus creating new functionality is known only approximately. Bugs found by customers are perceived as an inevitable evil, and to reduce the time spent fixing them, the department resorts to intensified manual testing of releases and the hiring / training of competent developers. Manual testing by the QA department is quite time-consuming, and it is impossible to pin down the golden mean between testing depth and the time spent on testing. As for an abundance of talented developers, there is no point even talking about it.
In "grown-up" programming languages, such problems are attacked with ubiquitous testing: starting at the developer level with unit tests, then functional and regression tests, and ending with integration tests. In particularly interesting cases, tests are run for every commit to a particular repository branch.
Unfortunately, 1C does not indulge developers on its platform with any worthy tools; at least they shipped a configuration repository.
Several years ago, starting a new project, I personally got tired of stepping on the same rake over and over. Time was agreed with management for developing a homegrown test system the way I envisioned it, and work got under way.

Application.

Personally, on one project this system saved at least a year of time that would otherwise have been spent in the debugger. Another project has been using the system for several years for functional tests of a huge number of discount / payment options and the like. Three more projects are starting to use it. Of course, introducing such a system "from below" is like battering your forehead against the wall of a skyscraper. But if you keep hammering every day, the result will appear sooner or later. The main things are a successful example and support from the top.

Description.


The system works on one principle: "execute the method, compare the result with the reference". The only and very unpleasant, though natural, limitation is that the method must be export (public). In principle, one could bolt on dumping the source code to files, parsing it for the listed method names, marking them as export, and loading the code back. But such an approach, firstly, complicates the testing itself, and most importantly, hides pitfalls in the form of name collisions among methods that "suddenly" become visible from outside. So we take the view that the problem of broken encapsulation is not as critical as the problem of untested code.
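To illustrate the principle, here is a minimal sketch in the 1C embedded language (English syntax); the common module, the CalculateDiscount function and the reference value are invented for the example and are not part of the described system:

    // A hypothetical export function in a common module of the configuration under test
    Function CalculateDiscount(Amount, DiscountPercent) Export
        Return Amount * DiscountPercent / 100;
    EndFunction

    // What the test system essentially does: call the export method
    // and compare the result with a previously saved reference value
    Reference = 100;
    Result    = TestedCommonModule.CalculateDiscount(1000, 10);
    If Result <> Reference Then
        Message("Test failed: expected " + String(Reference) + ", got " + String(Result));
    EndIf;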
Based on this simple principle, the system can run individual tests and groups of tests and compare their results with previously saved references (all of this is shown below). Among the shortcomings:

* regression testing is not fully worked out: there are timestamps for the start and end of a test run, but there is no proper report on how the difference between these marks changes over time.
Design.

Since the system was created back in the days of platform 8.1, there is no support for managed forms: I did not need it then, and the current projects do not need it now. The main functionality of the system is testing code, so the whole drawback boils down to the fact that the test processor itself is not written with managed forms.

The main interface for creating and running tests is an external data processor. The processor runs directly in the infobase whose configuration is being tested.
The tests themselves are stored in a separate infobase, to which the external processor connects and authorizes itself:
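The article does not show the connection code itself; one plausible way for a processor to reach a separate infobase is the platform's COM connector (the ProgID depends on the platform version, and the path, user and password below are placeholders):

    // Illustrative only: connecting to the tests infobase via the COM connector (Windows only)
    Connector = New COMObject("V83.COMConnector");   // "V81.COMConnector" on platform 8.1
    TestsBase = Connector.Connect("File=""C:\Bases\Tests"";Usr=""TestUser"";Pwd=""secret""");
    // TestsBase now gives access to the catalogs of projects, tests and saved references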

The working form looks like this:

1. Current project
2. Username of the testing system
3. Tests can be divided into groups; this is convenient when setting up automatic testing and helps structure the tests themselves.
4. The test with code 400 is selected; its name is shown, the method under test lives in the built-in "Translator" data processor, and the last column shows the name of the tested method.
5. The full signature of the tested method is pasted here; it is automatically parsed (by pressing button 8) into incoming parameters - 6 and results - 7.
6. The incoming parameters of primitive and reference types are set here. You can also specify a path to a file if the File checkbox is checked.
7. The selected test has only a single return value of the function; the predefined name _ReturnPerm is reserved for it.
8. Parses the method signature into the incoming parameters and the returned results (a rough sketch of such parsing follows after this list).
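The actual parsing code is not shown in the article; a simplified idea of what button 8 might do looks roughly like this (the signature is an example, and StrSplit is only available on platform versions newer than 8.1):

    // Simplified sketch: extract parameter names from a method signature
    Signature  = "Function Translate(SourceText, TargetLanguage) Export";
    OpenPos    = Find(Signature, "(");
    ClosePos   = Find(Signature, ")");
    ParamsText = Mid(Signature, OpenPos + 1, ClosePos - OpenPos - 1);
    ParamNames = New Array;
    For Each ParamName In StrSplit(ParamsText, ",", False) Do
        ParamNames.Add(TrimAll(ParamName));
    EndDo;
    // For a Function, one more row is added for the return value, e.g. _ReturnPerm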

To follow the rest of the description, we need to digress to field 5, which has a few more tabs.
Before execution:

This field is intended for executing arbitrary code before the tested method is called. You can modify the incoming parameters in any way and prepare collections or infobase objects.
A similar After execution field lets you process the results of the tested method call before they are compared with the previously saved reference values.
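For example, the Before execution and After execution fields might contain something like the following (purely illustrative; the SourceText parameter and the ExecutionDate field are invented):

    // Before execution: prepare an incoming parameter that cannot simply be typed into the form
    SourceText = New ValueList;
    SourceText.Add("First line to translate");
    SourceText.Add("Second line");

    // After execution: normalize the result before comparing it with the reference,
    // e.g. drop a field that legitimately changes from run to run
    _ReturnPerm.Delete("ExecutionDate");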
Executable code:

This shows all the code that will be executed when the test runs.
The Description tab - here you can describe in detail what the test does.
Well, finally the Versions tab:

The test database lists all configuration versions of the current project. The author of a test can indicate on which versions the test must pass and on which it is not applicable. This information is used for automatic testing.

9. Run the test. The code being executed is visible on the Executable code tab. Field 7 is filled with the results produced by the executed code.
10. Button to save the results obtained after pressing button 9 as the reference.
11. Run the test and compare the results with the previously saved reference (a sketch of such a comparison follows below).
12. View the saved reference values.
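The comparison of a Structure result with its reference can be imagined roughly like this (a sketch only, assuming both values are structures with the same set of keys; StrConcat, like StrSplit above, comes from newer platform versions):

    // Sketch: collect the fields of the result structure that differ from the stored reference
    Differences = New Array;
    For Each KeyAndValue In ReferenceStructure Do
        FieldName = KeyAndValue.Key;
        If Not ResultStructure.Property(FieldName)
            Or ResultStructure[FieldName] <> KeyAndValue.Value Then
            Differences.Add(FieldName);
        EndIf;
    EndDo;
    If Differences.Count() > 0 Then
        Message("Fields differing from the reference: " + StrConcat(Differences, ", "));
    EndIf;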

Example.

An interactive message about a failed test looks something like this:

You can see that the return value is of the Structure type; the window lists the structure fields whose values differ from the saved reference.
Text values can be compared with the diff tool built into the platform:


Summary.

In principle, the functionality described above is enough for most tasks of testing a specific configuration in semi-automatic mode. In addition, on the server side it is possible to set up automatic testing of a set of configurations (within one project) on a schedule (scheduled jobs). The system can take the current configuration version from the repository, apply it to the database under test and run the specified test groups. Once all of this has been executed, an email with detailed test results is sent to the observers.
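That server-side part is not shown in the article; as an illustration, sending such a results email from 1C could look roughly like this (server address, account and recipients are placeholders):

    // Illustrative: send the test report by email using the platform's InternetMail objects
    Profile = New InternetMailProfile;
    Profile.SMTPServerAddress = "smtp.example.org";
    Profile.SMTPUser          = "tests@example.org";
    Profile.SMTPPassword      = "secret";

    Mail = New InternetMail;
    Mail.Logon(Profile);

    Report = New InternetMailMessage;
    Report.Subject = "Automatic test results";
    Report.To.Add("dev-team@example.org");
    Report.Texts.Add("Passed: 120, failed: 3. Details are attached.");
    Mail.Send(Report);
    Mail.Logoff();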

The system was not created for sale, and much in it has not been polished, but even what already exists helps a great deal to avoid at least the silly mistakes of the "fixed it in one place, it fell apart in three others" variety.

Source: https://habr.com/ru/post/214651/

